Primary Investigators: Dr. Sandra M. Chafouleas and Dr. T. Chris Riley-Tillman
Time frame: 2010 – present
Source: Institute of Education Sciences, U.S. Department of Education (R324B060014)
Development of a web-based application will increase the utility of DBR in behavioral assessment by easing data entry, analysis, and presentation. For example, such an application would allow data to be analyzed at the level of the individual child, the entire class, or the school, enhancing both the quality and the scope of what educators can do with DBR data when using data-based decision making within a multi-tiered framework to promote student success. We are currently providing support for users who are testing the system.
Project VIABLE-II: Unified validation of Direct Behavior Rating (DBR) in a problem-solving model
Previous work has established Direct Behavior Rating (DBR) as a viable behavior assessment method that combines strengths of systematic direct observation and behavior rating scales. In brief, DBR is completed by a rater (e.g., a teacher) who quantifies his or her perception of a well-defined behavior of a ratee (e.g., a student) by rating that behavior in close proximity to the time and place it was observed. For example, a teacher might use DBR to estimate the proportion of time Sam was academically engaged during science instruction on Tuesday. DBR outcomes are expected to provide a data stream useful for informing decisions about student behavior. In an initial IES-funded Goal 5 project (R324B060014), the measurement foundations of DBR were addressed, including questions about how the DBR scale should be constructed, which target behaviors should be included, how many observations should occur, and what the effects of the rater are. Results have established recommended instrumentation and procedures, as well as an initial base of psychometric adequacy, for single-item DBR scales used with the behavior classes of academic engagement, respectful behavior, and disruptive behavior. Although the initial project focused on DBR use in progress monitoring, potential exists for dual assessment applications through extension to screening. Evaluating DBR use across both purposes, and within a unified validity framework, is important for understanding its overall utility within a school-based problem-solving model of service delivery. Through Goal 5 (Measurement) of the competition in Social and Behavioral Outcomes to Support Student Learning, we aim to further a systematic line of empirical investigation of single-item DBR scales.
In Project VIABLE-II, we propose to extend the investigation of single-item DBR scales toward unified validation within a problem-solving model, emphasizing a) validation for screening, b) validation for progress monitoring, and c) foundational psychometric standards. With regard to screening assessment, we propose to establish appropriate cut-points for current and predictive student risk in both elementary and middle school student samples located in districts across three states. Concurrently, we will continue to examine traditional psychometric indicators (e.g., construct validity, criterion-related validity, reliability) along with other forms of information relevant to score interpretation and use (e.g., social and educational consequences, relevance and utility). At the end of the project, results will provide clear direction regarding how single-item DBR scales can be incorporated to facilitate defensible, efficient, flexible, and repeatable school-based behavior assessment practices within a problem-solving model.
Project VIABLE: Validation of Instruments for Assessing Behavior Longitudinally and Efficiently
An extensive knowledge base exists with regard to establishing a full continuum of PBS practices that can support all youth (Center on Positive Behavioral Interventions and Supports; Gresham, Sugai, & Horner, 2001; Sugai & Horner, 2005); however, the technology for formative evaluation of individual and specific behavioral response to those practices has not kept pace. The importance of progress monitoring, particularly for students exhibiting serious behavior problems, cannot be overstated. Yet a significant gap exists in available tools for behavioral progress monitoring that are empirically supported and, equally important, feasible for use in applied settings. Empirical attention to the development and validation of viable formative measures of social behavior is essential if we are to effectively evaluate the success of positive behavior interventions put in place to address challenging student behavior.