Show simple item record

dc.contributor.advisor: Fiorello, Catherine A.
dc.creator: Bloomfield, Alison Elizabeth
dc.date.accessioned: 2020-11-03T15:34:18Z
dc.date.available: 2020-11-03T15:34:18Z
dc.date.issued: 2015
dc.identifier.other: 920555124
dc.identifier.uri: http://hdl.handle.net/20.500.12613/2608
dc.description.abstract: The School-wide Evaluation Tool (SET) is a commonly used measure of the implementation fidelity of school-wide positive behavior interventions and supports (SWPBIS) programs. The current study examines the content and concurrent validity of the SET to establish whether an alternative approach to weighting and scoring the SET might provide a more accurate assessment of SWPBIS implementation fidelity. Twenty published experts in the field of SWPBIS completed online surveys to obtain ratings of the relative importance of each item on the SET to sustainable SWPBIS implementation. Using the experts' mean ratings, four novel SET scoring approaches were developed: unweighted, reweighted using mean ratings, unweighted dropping lowest quartile items, and reweighted dropping lowest quartile items. SET 2.1 data from 1,018 schools were used to compare the four novel and two established SET scoring methods and examine their concurrent validity with the Team Implementation Checklist 3.1 (TIC; across a subsample of 492 schools). Correlational data indicated that the two novel SET scoring methods with dropped items were both significantly stronger predictors of TIC scores than the established SET scoring methods. Continuous SET scoring methods have greater concurrent validity with the TIC overall score and greater sensitivity than the dichotomous SET 80/80 Criterion. Based on the equivalent concurrent validity of the unweighted SET with dropped items and the reweighted SET with dropped items compared to the TIC, this study recommends that the unweighted SET with dropped items be used by schools and researchers to obtain a more cohesive and prioritized set of SWPBIS elements than the existing or other SET scoring methods developed in this study.
dc.format.extent: 137 pages
dc.language.iso: eng
dc.publisher: Temple University. Libraries
dc.relation.ispartof: Theses and Dissertations
dc.rights: IN COPYRIGHT - This Rights Statement can be used for an Item that is in copyright. Using this statement implies that the organization making this Item available has determined that the Item is in copyright and either is the rights-holder, has obtained permission from the rights-holder(s) to make their Work(s) available, or makes the Item available under an exception or limitation to copyright (including Fair Use) that entitles it to make the Item available.
dc.rights.uri: http://rightsstatements.org/vocab/InC/1.0/
dc.subject: Educational Psychology
dc.subject: Educational Tests & Measurements
dc.subject: School-wide Evaluation Tool
dc.subject: Schoolwide Positive Behavior Support
dc.subject: SET
dc.subject: SWPBIS
dc.subject: SWPBS
dc.title: An Investigation of the Content and Concurrent Validity of the School-wide Evaluation Tool
dc.type: Text
dc.type.genre: Thesis/Dissertation
dc.contributor.committeemember: Pendergast, Laura
dc.contributor.committeemember: DuCette, Joseph P.
dc.contributor.committeemember: Farley, Frank
dc.contributor.committeemember: Tincani, Matthew J.
dc.description.department: School Psychology
dc.relation.doi: http://dx.doi.org/10.34944/dspace/2590
dc.ada.note: For Americans with Disabilities Act (ADA) accommodation, including help with reading this content, please contact scholarshare@temple.edu
dc.description.degree: Ph.D.
refterms.dateFOA: 2020-11-03T15:34:18Z


Files in this item

Name: TETDEDXBloomfield-temple-0225E ...
Size: 798.5 KB
Format: PDF

This item appears in the following Collection(s): Theses and Dissertations