
Publications


Featured research published by Sandra M. Chafouleas.


Remedial and Special Education | 2013

An Application of the What Works Clearinghouse Standards for Evaluating Single-Subject Research: Synthesis of the Self-Management Literature Base

Daniel M. Maggin; Amy M. Briesch; Sandra M. Chafouleas

The use of single-subject research in the development and evaluation of academic, psychological, and behavioral interventions has led to the experimental validation of an array of treatment options for a diverse set of educational challenges. However, the synthesis of these bodies of research has been the subject of considerable debate. In an effort to utilize the findings from studies using single-subject methodologies to identify effective educational practices, the What Works Clearinghouse developed a set of criteria to evaluate the strength of evidence for various strategies. In this article, an application of these standards is demonstrated using a body of self-management intervention studies drawn from a recently published systematic review. The utility of the standards for identifying evidence-based practices validated with single-subject research methods is discussed. In addition, a comparison of the What Works Clearinghouse procedures to previously developed methods for identifying evidence-based practices with single-subject research is provided, and implications for research and practice are described.


Journal of School Psychology | 2011

A Systematic Evaluation of Token Economies as a Classroom Management Tool for Students with Challenging Behavior.

Daniel M. Maggin; Sandra M. Chafouleas; Katelyn M. Goddard; Austin H. Johnson

A two-part systematic review was undertaken to assess the effectiveness of token economies in increasing rates of appropriate classroom behavior for students demonstrating behavioral difficulties. The first part of the review utilized the recently published What Works Clearinghouse (WWC) standards for evaluating single-subject research to determine the extent to which eligible studies demonstrated sufficient evidence to classify the token economy as an evidence-based practice. The second part of the review employed meta-analytic techniques across four different types of effect sizes to evaluate the quantitative strength of the findings. Methodological strengths and weaknesses across the studies were systematically investigated. Results indicated that the extant research on token economies does not provide sufficient evidence for the intervention to be deemed an evidence-based practice under the WWC criteria.


Education and Treatment of Children | 2011

Direct Behavior Rating: A Review of the Issues and Research in Its Development

Sandra M. Chafouleas

The conceptual foundation for Direct Behavior Rating as a behavior assessment method is reviewed. A contemporary definition of Direct Behavior Rating is framed as combining strengths of systematic direct observation and behavior rating scales, which may result in a usable and defensible assessment tool for educators engaged in formative purposes. The rationale behind development of Direct Behavior Rating Single Item Scales as general outcome measures for school-based behavioral risk is provided. Research related to development of instrumentation and procedures for Direct Behavior Rating Single Item Scales is discussed, along with implications for future research and practice.


Journal of Positive Behavior Interventions | 2008

Examining the Agreement of Direct Behavior Ratings and Systematic Direct Observation Data for On-Task and Disruptive Behavior.

T. Chris Riley-Tillman; Sandra M. Chafouleas; Kari Sassu; Julie A. M. Chanese; Amy D. Glazer

The purpose of this study was to replicate previous findings indicating a moderate association between teacher perceptions of behavior as measured by direct behavior ratings (DBRs) and systematic direct observation (SDO) conducted by an external observer. In this study, data regarding student on-task and disruptive behavior were collected via SDO from trained external observers and via DBRs from classroom teachers. Data were collected across 15 teachers and three observation sessions, and the agreement between the two methods was compared as a way to examine concurrent validity. Results supported previous work suggesting that DBRs are significantly correlated with SDO data, thereby suggesting that the DBR might be used as a compatible tool with SDO. Implications for practice, limitations of the study, and directions for future research are discussed.


Journal of School Psychology | 2012

A systematic evidence review of school-based group contingency interventions for students with challenging behavior

Daniel M. Maggin; Austin H. Johnson; Sandra M. Chafouleas; Laura M. Ruberto; Melissa Berggren

The purpose of this review was to synthesize the research underlying group contingency interventions to determine whether there is sufficient evidence to support their use for managing the classroom behavior of students with behavioral difficulties. An application of the What Works Clearinghouse (WWC) procedures for evaluating single-subject research revealed that the research investigating group contingencies demonstrated sufficient rigor, evidence, and replication to label the intervention as evidence-based. These findings were further supported across five quantitative indices of treatment effect. The results associated with the application of the WWC procedures and quantitative evaluations were supplemented with additional systematic coding of methodological features and study characteristics to evaluate the populations and conditions under which the effects of the group contingency best generalize. Findings associated with this coding revealed that the lack of detailed reporting across studies limited our ability to determine for whom and under what conditions group contingencies are best suited.


Journal of School Psychology | 2013

Assessing influences on intervention implementation: Revision of the Usage Rating Profile-Intervention.

Amy M. Briesch; Sandra M. Chafouleas; Sabina Rak Neugebauer; T. Chris Riley-Tillman

Although treatment acceptability was originally proposed as a critical factor in determining the likelihood that a treatment will be used with integrity, more contemporary findings suggest that whether something is likely to be adopted into routine practice is dependent on the complex interplay among a number of different factors. The Usage Rating Profile-Intervention (URP-I; Chafouleas, Briesch, Riley-Tillman, & McCoach, 2009) was recently developed to assess these additional factors, conceptualized as potentially contributing to the quality of intervention use and maintenance over time. The purpose of the current study was to improve upon the URP-I by expanding and strengthening each of the original four subscales. Participants included 1005 elementary teachers who completed the instrument in response to a vignette depicting a common behavior intervention. Results of exploratory and confirmatory factor analyses, as well as reliability analyses, supported a measure containing 29 items and yielding 6 subscales: Acceptability, Understanding, Feasibility, Family-School Collaboration, System Climate, and System Support. Collectively, these items provide information about potential facilitators and barriers to usage that exist at the level of the individual, intervention, and environment. Information gleaned from the instrument is therefore likely to aid consultants in both the planning and evaluation of intervention efforts.


Journal of Positive Behavior Interventions | 2007

Daily Behavior Report Cards: An Investigation of the Consistency of On-Task Data Across Raters and Methods

Sandra M. Chafouleas; T. Chris Riley-Tillman; Kari Sassu; Mary J. LaFrance; Shamim S. Patwa

In this study, the consistency of on-task data collected across raters using either a Daily Behavior Report Card (DBRC) or systematic direct observation was examined to begin to understand the decision reliability of using DBRCs to monitor student behavior. Results suggested very similar conclusions might be drawn when visually examining data collected by an external observer using either systematic direct observation or a DBRC. In addition, similar conclusions might be drawn upon visual analysis of either systematic direct observation or DBRC data collected by an external observer versus a teacher-completed DBRC. Examination of effect sizes from baseline to intervention phases suggested greater potential for different conclusions to be drawn about student behavior, dependent on the method and rater. In summary, overall consistency of data across method and rater found in this study lends support to the use of DBRCs to estimate global classroom behavior as part of a multimethod assessment. Implications, limitations, and future research directions are discussed.


Journal of School Psychology | 2010

An investigation of the generalizability and dependability of direct behavior rating single item scales (DBR-SIS) to measure academic engagement and disruptive behavior of middle school students.

Sandra M. Chafouleas; Amy M. Briesch; T. Chris Riley-Tillman; Theodore J. Christ; Anne C. Black; Stephen P. Kilgus

A total of 4 raters, including 2 teachers and 2 research assistants, used Direct Behavior Rating Single Item Scales (DBR-SIS) to measure the academic engagement and disruptive behavior of 7 middle school students across multiple occasions. Generalizability study results for the full model revealed modest to large magnitudes of variance associated with persons (students), occasions of measurement (day), and associated interactions. However, an unexpectedly low proportion of the variance in DBR data was attributable to the facet of rater, as well as a negligible variance component for the facet of rating occasion nested within day (10-min interval within a class period). Results of a reduced model and subsequent decision studies specific to individual rater and rater type (research assistant and teacher) suggested that reliability-like estimates differed substantially depending on the rater. Overall, findings supported previous recommendations that, in the absence of estimates of rater reliability and firm recommendations regarding rater training, ratings obtained from DBR-SIS, and subsequent analyses, be conducted within rater. Additionally, results suggested that when selecting a teacher rater, the person most likely to substantially interact with target students during the specified observation period may be the best choice.


Journal of Behavioral Education | 2002

Using Brief Experimental Analysis to Select Oral Reading Interventions: An Investigation of Treatment Utility

Tracy L. VanAuken; Sandra M. Chafouleas; Tracy A. Bradley; Brian K. Martens

This study examined the treatment utility of brief experimental analysis for selecting skill-based oral reading interventions that targeted acquisition and fluency. Two second-grade students and one third-grade student served as participants. The potentially most and least effective instructional packages identified from the brief experimental analysis for each student were alternated during an extended analysis phase. The instructional components that were compared were based on an ease of implementation hierarchy, with the brief experimental analysis used to select the hypothesized most effective instructional package for oral reading. Visual analysis of extended analysis data revealed that the hypothesized most effective combination of instructional components identified from the brief analysis produced greater initial gains in reading for two children (i.e., over 29 and 21 intervention days) and greater gains in reading throughout the extended analysis phase for the third child. Thus, the investigation provided preliminary evidence for the treatment utility of using brief experimental analysis to select effective and efficient oral reading instructional interventions. Implications, limitations, and future research topics are discussed.


Journal of School Psychology | 2014

Generalizability theory: A practical guide to study design, implementation, and interpretation

Amy M. Briesch; Hariharan Swaminathan; Megan E. Welsh; Sandra M. Chafouleas

Generalizability Theory (GT) offers increased utility for assessment research given the ability to concurrently examine multiple sources of variance, inform both relative and absolute decision making, and determine both the consistency and generalizability of results. Despite these strengths, assessment researchers within the fields of education and psychology have been slow to adopt and utilize a GT approach. This underutilization may be due to an incomplete understanding of the conceptual underpinnings of GT, the actual steps involved in designing and implementing generalizability studies, or some combination of both issues. The goal of the current article is therefore two-fold: (a) to provide readers with the conceptual background and terminology related to the use of GT and (b) to facilitate understanding of the range of issues that need to be considered in the design, implementation, and interpretation of generalizability and dependability studies. Given the relevance of this analytic approach to applied assessment contexts, there exists a need to ensure that GT is both accessible to, and understood by, researchers in education and psychology. Important methodological and analytical considerations are presented and implications for applied use are described.
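The G- and D-study workflow this practical guide describes can be illustrated with a minimal single-facet example. The sketch below (the data and function name are hypothetical, not drawn from the article) estimates variance components for a fully crossed persons x raters design via the classic ANOVA expected-mean-square equations, then forms the relative coefficient (for rank-order decisions) and the absolute coefficient (for criterion-referenced decisions) that the article contrasts:

```python
import numpy as np

def g_study(X):
    """Single-facet G study for a fully crossed persons x raters design.

    Estimates variance components from the two-way ANOVA without
    replication, then returns relative (E-rho^2) and absolute (phi)
    D-study coefficients for a design using the observed number of raters.
    """
    n_p, n_r = X.shape
    grand = X.mean()
    person_means = X.mean(axis=1)
    rater_means = X.mean(axis=0)

    # Mean squares for persons, raters, and the residual (p x r, e).
    ms_p = n_r * ((person_means - grand) ** 2).sum() / (n_p - 1)
    ms_r = n_p * ((rater_means - grand) ** 2).sum() / (n_r - 1)
    resid = X - person_means[:, None] - rater_means[None, :] + grand
    ms_pr = (resid ** 2).sum() / ((n_p - 1) * (n_r - 1))

    # Solve the expected-mean-square equations for variance components;
    # negative estimates are truncated to zero, a common convention.
    var_pr = ms_pr
    var_p = max((ms_p - ms_pr) / n_r, 0.0)
    var_r = max((ms_r - ms_pr) / n_p, 0.0)

    # D-study coefficients: rater variance enters only the absolute error.
    g_rel = var_p / (var_p + var_pr / n_r)
    phi = var_p / (var_p + var_r / n_r + var_pr / n_r)
    return {"var_p": var_p, "var_r": var_r, "var_pr": var_pr,
            "g_rel": g_rel, "phi": phi}

# Hypothetical ratings: 3 students each scored by the same 2 raters.
scores = np.array([[4.0, 6.0],
                   [2.0, 4.0],
                   [6.0, 8.0]])
print(g_study(scores))
```

Because one rater consistently scores higher than the other here, the rater variance component lowers the absolute coefficient but leaves the relative coefficient untouched, which is the distinction between relative and absolute decision making that the article emphasizes.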

Collaboration


Explore Sandra M. Chafouleas's collaborations.

Top Co-Authors

T. Chris Riley-Tillman (State University of New York System)
Amy M. Briesch (University of Connecticut)
Megan E. Welsh (University of Connecticut)
Melissa A. Bray (University of Connecticut)
James L. McDougal (State University of New York at Oswego)