
Publications


Featured research published by T. Chris Riley-Tillman.


Journal of Positive Behavior Interventions | 2008

Examining the Agreement of Direct Behavior Ratings and Systematic Direct Observation Data for On-Task and Disruptive Behavior.

T. Chris Riley-Tillman; Sandra M. Chafouleas; Kari Sassu; Julie A. M. Chanese; Amy D. Glazer

The purpose of this study was to replicate previous findings indicating a moderate association between teacher perceptions of behavior as measured by direct behavior ratings (DBRs) and systematic direct observation (SDO) conducted by an external observer. In this study, data regarding student on-task and disruptive behavior were collected via SDO from trained external observers and via DBRs from classroom teachers. Data were collected across 15 teachers and three observation sessions, and the agreement between the two methods was compared as a way to examine concurrent validity. Results supported previous work indicating that DBRs are significantly correlated with SDO data, suggesting that the DBR might be used as a tool compatible with SDO. Implications for practice, limitations of the study, and directions for future research are discussed.
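
A minimal sketch of the concurrent-validity check described above: correlating a teacher's DBR on-task ratings with an external observer's SDO percentages across sessions. All numbers are hypothetical, not the study's data.

```python
# Hypothetical data: one pair per observation session, pairing a
# teacher's DBR on-task rating with the external observer's SDO
# percentage of intervals on-task.
from scipy.stats import pearsonr

dbr_on_task = [7, 8, 5, 9, 6, 4, 8, 7, 5, 9]            # DBR, 0-10 scale
sdo_on_task = [68, 81, 52, 90, 63, 40, 77, 70, 55, 88]  # SDO, % of intervals

r, p = pearsonr(dbr_on_task, sdo_on_task)
print(f"r = {r:.2f}, p = {p:.3f}")  # a sizable r supports agreement
```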


Journal of School Psychology | 2013

Assessing influences on intervention implementation: revision of the usage rating profile-intervention.

Amy M. Briesch; Sandra M. Chafouleas; Sabina Rak Neugebauer; T. Chris Riley-Tillman

Although treatment acceptability was originally proposed as a critical factor in determining the likelihood that a treatment will be used with integrity, more contemporary findings suggest that whether something is likely to be adopted into routine practice is dependent on the complex interplay among a number of different factors. The Usage Rating Profile-Intervention (URP-I; Chafouleas, Briesch, Riley-Tillman, & McCoach, 2009) was recently developed to assess these additional factors, conceptualized as potentially contributing to the quality of intervention use and maintenance over time. The purpose of the current study was to improve upon the URP-I by expanding and strengthening each of the original four subscales. Participants included 1005 elementary teachers who completed the instrument in response to a vignette depicting a common behavior intervention. Results of exploratory and confirmatory factor analyses, as well as reliability analyses, supported a measure containing 29 items and yielding 6 subscales: Acceptability, Understanding, Feasibility, Family-School Collaboration, System Climate, and System Support. Collectively, these items provide information about potential facilitators and barriers to usage that exist at the level of the individual, intervention, and environment. Information gleaned from the instrument is therefore likely to aid consultants in both the planning and evaluation of intervention efforts.
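
As a rough illustration of the reliability analyses mentioned above, here is a short sketch computing Cronbach's alpha for one subscale. The four-item subscale and the teacher responses are invented for illustration; they are not URP-I data.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    k = items.shape[1]
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Hypothetical 6-point Likert responses from 8 teachers to a 4-item
# Acceptability subscale (the real study had 1005 respondents).
scores = np.array([
    [5, 5, 4, 5],
    [4, 4, 4, 3],
    [6, 5, 6, 6],
    [3, 3, 2, 3],
    [5, 4, 5, 5],
    [4, 5, 4, 4],
    [6, 6, 5, 6],
    [2, 3, 2, 2],
])
print(f"alpha = {cronbach_alpha(scores):.2f}")
```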


Journal of Positive Behavior Interventions | 2007

Daily Behavior Report Cards: An Investigation of the Consistency of On-Task Data Across Raters and Methods

Sandra M. Chafouleas; T. Chris Riley-Tillman; Kari Sassu; Mary J. LaFrance; Shamim S. Patwa

In this study, the consistency of on-task data collected across raters using either a Daily Behavior Report Card (DBRC) or systematic direct observation was examined to begin to understand the decision reliability of using DBRCs to monitor student behavior. Results suggested very similar conclusions might be drawn when visually examining data collected by an external observer using either systematic direct observation or a DBRC. In addition, similar conclusions might be drawn upon visual analysis of either systematic direct observation or DBRC data collected by an external observer versus a teacher-completed DBRC. Examination of effect sizes from baseline to intervention phases suggested greater potential for different conclusions to be drawn about student behavior, dependent on the method and rater. In summary, overall consistency of data across method and rater found in this study lends support to the use of DBRCs to estimate global classroom behavior as part of a multimethod assessment. Implications, limitations, and future research directions are discussed.
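
The baseline-to-intervention effect size idea can be illustrated with a standardized mean difference, one common single-case metric (not necessarily the exact statistic the study used). All values below are hypothetical.

```python
import statistics

# Hypothetical % on-task for one student across the two phases.
baseline = [45, 50, 40, 48, 42]
intervention = [70, 75, 68, 80, 72]

# Standardized mean difference: phase mean change in baseline SD units.
d = (statistics.mean(intervention) - statistics.mean(baseline)) / statistics.stdev(baseline)
print(f"effect size d = {d:.2f}")
```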


Journal of School Psychology | 2010

An investigation of the generalizability and dependability of direct behavior rating single item scales (DBR-SIS) to measure academic engagement and disruptive behavior of middle school students.

Sandra M. Chafouleas; Amy M. Briesch; T. Chris Riley-Tillman; Theodore J. Christ; Anne C. Black; Stephen P. Kilgus

A total of 4 raters, including 2 teachers and 2 research assistants, used Direct Behavior Rating Single Item Scales (DBR-SIS) to measure the academic engagement and disruptive behavior of 7 middle school students across multiple occasions. Generalizability study results for the full model revealed modest to large magnitudes of variance associated with persons (students), occasions of measurement (day), and associated interactions. However, an unexpectedly low proportion of the variance in DBR data was attributable to the facet of rater, and the variance component for the facet of rating occasion nested within day (10-min interval within a class period) was negligible. Results of a reduced model and subsequent decision studies specific to individual rater and rater type (research assistant and teacher) suggested that reliability-like estimates differed substantially depending on the rater. Overall, findings supported previous recommendations that, in the absence of estimates of rater reliability and firm recommendations regarding rater training, ratings obtained from DBR-SIS, and subsequent analyses, be conducted within rater. Additionally, results suggested that when selecting a teacher rater, the person most likely to substantially interact with target students during the specified observation period may be the best choice.
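
A small decision-study sketch of the kind described above, projecting how a reliability-like (generalizability) coefficient for a single rater grows with the number of rating days. The variance components are assumed for illustration only, not the study's estimates.

```python
# Assumed variance components (hypothetical magnitudes).
var_person = 1.20   # sigma^2(p): true differences among students
var_pxday = 0.80    # sigma^2(p x day): person-by-day interaction
var_error = 0.60    # residual error

# Relative error shrinks as days are averaged; the coefficient rises.
for n_days in (1, 5, 10, 20):
    rel_error = (var_pxday + var_error) / n_days
    g_coef = var_person / (var_person + rel_error)
    print(f"{n_days:2d} days -> E(rho^2) = {g_coef:.2f}")
```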


Journal of School Psychology | 2013

Direct behavior rating as a school-based behavior screener for elementary and middle grades

Sandra M. Chafouleas; Stephen P. Kilgus; Rose Jaffery; T. Chris Riley-Tillman; Megan E. Welsh; Theodore J. Christ

The purpose of this study was to investigate how Direct Behavior Rating Single Item Scales (DBR-SIS) involving targets of academically engaged, disruptive, and respectful behaviors function in school-based screening assessment. Participants included 831 students in kindergarten through eighth grades who attended schools in the northeastern United States. Teachers provided behavior ratings for a sample of students in their classrooms on the DBR-SIS, the Behavioral and Emotional Screening System (Kamphaus & Reynolds, 2007), and the Student Risk Screening Scale (Drummond, 1994). Given variations in rating procedures to accommodate scheduling differences across grades, analysis was conducted separately for elementary school and middle school grade levels. Results suggested that the recommended cut scores, the combination of behavior targets, and the resulting conditional probability indices varied depending on grade level grouping (lower elementary, upper elementary, middle). For example, for the lower elementary grade level grouping, a combination of disruptive behavior (cut score = 2) and academically engaged behavior (cut score = 8) was considered to offer the best balance among indices of diagnostic accuracy, whereas cut scores of 1 for disruptive behavior and 8 for academically engaged behavior were recommended for the upper elementary school grade level grouping, and cut scores of 1 and 9, respectively, were suggested for the middle school grade level grouping. Generally, DBR-SIS cut scores considered optimal for screening used single or combined targets including academically engaged behavior and disruptive behavior, offering a reasonable balance of indices for sensitivity (.51-.90), specificity (.47-.83), negative predictive power (.94-.98), and positive predictive power (.14-.41). The single target of respectful behavior performed poorly across all grade level groups, and performance of DBR-SIS targets was relatively better in the elementary school than middle school grade level groups. Overall, results supported the conclusion that disruptive behavior is highly important in evaluating risk status in lower grade levels and that academically engaged behavior becomes more pertinent as students reach higher grade levels. Limitations, future directions, and implications are discussed.
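
The conditional-probability indices above come from a 2x2 screening table (screener decision at the cut score versus criterion risk status). A minimal sketch with hypothetical counts shows why negative predictive power runs high and positive predictive power runs low when few students are truly at risk.

```python
# Hypothetical screening counts: true/false positives and negatives.
tp, fp, fn, tn = 40, 90, 10, 360

sensitivity = tp / (tp + fn)   # flagged among the truly at-risk
specificity = tn / (tn + fp)   # cleared among the truly not-at-risk
ppv = tp / (tp + fp)           # truly at-risk among those flagged
npv = tn / (tn + fn)           # truly not-at-risk among those cleared

# With a low base rate (50 of 500 at risk), NPV is high (~.97)
# while PPV stays low (~.31), mirroring the pattern reported above.
print(f"sens={sensitivity:.2f} spec={specificity:.2f} ppv={ppv:.2f} npv={npv:.2f}")
```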


Journal of Educational and Psychological Consultation | 2008

An Initial Comparison of Collaborative and Expert-Driven Consultation on Treatment Integrity

Constance Kelleher; T. Chris Riley-Tillman; Thomas J. Power

Although over 15 years have passed since Witt (1990) noted that no empirical evidence exists to support the contention that a collaborative approach to consultation leads to more positive outcomes than a hierarchical or expert-driven approach, this issue generally remains unaddressed (Schulte & Osborne, 2003). While the literature documenting the benefits of consultation has continued to grow, a true head-to-head comparison has not been conducted. The purpose of the present study was to directly address Witt's call by empirically examining the impact of two consultation styles on a critical variable, practitioner treatment integrity. It was hypothesized that the involvement of practitioners in all aspects of intervention design would increase their level of treatment integrity. Two single-subject experiments using multiple baseline across subjects designs were used to examine the difference in level of treatment integrity for an imported, expert-driven intervention and a partnership-designed intervention. The first experiment was divided into three phases: (a) Phase I, Expert-driven Model; (b) Phase II, Treatment Integrity Intervention; and (c) Phase III, Partnership Model. The second experiment presented the three phases in reverse order to address the possibility of presentation effects: (a) Phase I, Partnership Model; (b) Phase II, Expert-driven Model; and (c) Phase III, Treatment Integrity Intervention. In general, the five participants who completed the three phases of the experiments demonstrated higher levels of treatment integrity during the partnership phase. Overall, the results suggest that engaging with consultees in a collaborative approach may increase the level of integrity with which the intervention is applied.


School Psychology Quarterly | 2015

A comparison of measures to screen for social, emotional, and behavioral risk.

Faith G. Miller; Daniel Cohen; Sandra M. Chafouleas; T. Chris Riley-Tillman; Megan E. Welsh; Gregory A. Fabiano

The purpose of this study was to examine the relation between teacher-implemented screening measures used to identify social, emotional, and behavioral risk. To this end, 5 screening options were evaluated: (a) Direct Behavior Rating - Single Item Scales (DBR-SIS), (b) Social Skills Improvement System - Performance Screening Guide (SSiS), (c) Behavioral and Emotional Screening System - Teacher Form (BESS), (d) office discipline referrals (ODRs), and (e) school nomination methods. The sample included 1974 students (52% female, 93% non-Hispanic, 81% white) who were assessed tri-annually by their teachers. Findings indicated that teacher ratings using standardized rating measures (DBR-SIS, BESS, and SSiS) resulted in a larger proportion of students identified as at-risk than ODRs or school nomination methods. Further, risk identification varied by screening option, such that a large percentage of students were inconsistently identified depending on the measure used. Results further indicated weak to strong correlations between screening options. The relation between broad behavioral indicators and mental health screening was also explored by examining classification accuracy indices. Teacher ratings using DBR-SIS and SSiS correctly identified between 81% and 91% of the sample as at-risk using the BESS as a criterion. As less conservative measures of risk, DBR-SIS and SSiS identified more students as at-risk relative to other options. Results highlight the importance of considering the aims of the assessment when selecting broad screening measures to identify students in need of additional support.
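
A small sketch of how such cross-screener comparisons can be computed: the proportion flagged by each measure plus Cohen's kappa for pairwise agreement. The 0/1 risk flags below are hypothetical, and kappa is one common agreement index, not necessarily the study's exact analysis.

```python
# Hypothetical at-risk flags (1 = flagged) for the same 10 students
# under two screeners.
from sklearn.metrics import cohen_kappa_score

dbr_flag  = [1, 0, 1, 0, 0, 1, 0, 0, 1, 0]
bess_flag = [1, 0, 0, 0, 0, 1, 0, 1, 1, 0]

print("DBR proportion at risk: ", sum(dbr_flag) / len(dbr_flag))
print("BESS proportion at risk:", sum(bess_flag) / len(bess_flag))
print("kappa:", cohen_kappa_score(dbr_flag, bess_flag))  # chance-corrected agreement
```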


Journal of School Psychology | 2014

Direct behavior rating as a school-based behavior universal screener: replication across sites.

Stephen P. Kilgus; T. Chris Riley-Tillman; Sandra M. Chafouleas; Theodore J. Christ; Megan E. Welsh

The purpose of this study was to evaluate the utility of Direct Behavior Rating Single Item Scale (DBR-SIS) targets of disruptive, engaged, and respectful behavior within school-based universal screening. Participants included 31 first-, 25 fourth-, and 23 seventh-grade teachers and their 1108 students, sampled from 13 schools across three geographic locations (northeast, southeast, and midwest). Each teacher rated approximately 15 of their students across three measures, including DBR-SIS, the Behavioral and Emotional Screening System (Kamphaus & Reynolds, 2007), and the Student Risk Screening Scale (Drummond, 1994). Moderate to high bivariate correlations and area under the curve statistics supported concurrent validity and diagnostic accuracy of DBR-SIS. Receiver operating characteristic curve analyses indicated that although respectful behavior cut scores recommended for screening remained constant across grade levels, cut scores varied for disruptive behavior and academic engaged behavior. Specific cut scores for first grade included 2 or less for disruptive behavior, 7 or greater for academically engaged behavior, and 9 or greater for respectful behavior. In fourth and seventh grades, cut scores changed to 1 or less for disruptive behavior and 8 or greater for academically engaged behavior, and remained the same for respectful behavior. Findings indicated that disruptive behavior was particularly appropriate for use in screening at first grade, whereas academically engaged behavior was most appropriate at both fourth and seventh grades. Each set of cut scores was associated with acceptable sensitivity (.79-.87), specificity (.71-.82), and negative predictive power (.94-.96), but low positive predictive power (.43-.44). DBR-SIS multiple gating procedures, through which students were only considered at risk overall if they exceeded cut scores on 2 or more DBR-SIS targets, were also determined acceptable in first and seventh grades, as the use of both disruptive behavior and academically engaged behavior in defining risk yielded acceptable conditional probability indices. Overall, the current findings are consistent with previous research, yielding further support for the DBR-SIS as a universal screener. Limitations, implications for practice, and directions for future research are discussed.
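
A sketch of the multiple-gating rule described above, under one plausible reading of the cut scores (a target indicates risk when disruptive behavior exceeds its cut, or when engagement or respect falls below theirs); a student is flagged overall only when two or more targets indicate risk. The function and ratings are illustrative; the cut values follow the fourth/seventh-grade numbers in the abstract.

```python
# Fourth/seventh-grade cut scores from the abstract (DBR-SIS, 0-10 scale).
CUTS = {"disruptive": 1, "engaged": 8, "respectful": 9}

def at_risk(ratings: dict) -> bool:
    """Multiple gating: at risk overall only if 2+ targets flag risk."""
    flags = [
        ratings["disruptive"] > CUTS["disruptive"],   # too much disruption
        ratings["engaged"] < CUTS["engaged"],         # too little engagement
        ratings["respectful"] < CUTS["respectful"],   # too little respect
    ]
    return sum(flags) >= 2

print(at_risk({"disruptive": 3, "engaged": 5, "respectful": 9}))   # True (2 flags)
print(at_risk({"disruptive": 0, "engaged": 9, "respectful": 10}))  # False (0 flags)
```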


Journal of Positive Behavior Interventions | 2011

The Impact of Observation Duration on the Accuracy of Data Obtained from Direct Behavior Rating (DBR).

T. Chris Riley-Tillman; Theodore J. Christ; Sandra M. Chafouleas; Christina H. Boice-Mallach; Amy M. Briesch

In this study, direct behavior rating (DBR) was evaluated in two primary areas: (a) accuracy of ratings with varied instrumentation (anchoring: proportional or absolute) and procedures (observation length: 5 min, 10 min, or 20 min) and (b) one-week test-retest reliability. Participants viewed video clips of a typical third-grade student and then used single-item DBR scales to rate disruptive and academically engaged behavior. Overall, ratings tended to overestimate the actual occurrence of behavior. Although ratings of academic engagement were not affected by duration of the observation, ratings of disruptive behavior were: the longer the observation, the greater the overestimation of disruptive behavior. In addition, the longer the student was disruptive, the greater the overestimation effect. Results further revealed that anchoring the DBR scale as a proportion versus an absolute number of minutes did not affect rating accuracy. Finally, test-retest analyses revealed low to moderate consistency across time points for 10-min and 20-min observations, with increased consistency as the number of raters or number of ratings increased (e.g., four 5-min vs. one 20-min). Overall, results contribute to the technical evaluation of DBR as a behavior assessment method and provide preliminary information regarding the influence of duration of an observation period on DBR data.
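
One way to frame the accuracy metric implied above is signed bias: the rating minus the behavior actually present in the clip, averaged within each observation duration. The data below are hypothetical.

```python
import pandas as pd

# Hypothetical ratings of disruptive behavior (0-10) against the
# scaled true occurrence in the clip, for three observation lengths.
df = pd.DataFrame({
    "duration_min": [5, 5, 10, 10, 20, 20],
    "rated":        [4, 5, 6, 6, 8, 7],
    "actual":       [4, 4, 5, 4, 5, 5],
})
df["bias"] = df["rated"] - df["actual"]   # positive = overestimation
print(df.groupby("duration_min")["bias"].mean())  # bias grows with duration
```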


Educational and Psychological Measurement | 2009

Generalizability of Scaling Gradients on Direct Behavior Ratings

Sandra M. Chafouleas; Theodore J. Christ; T. Chris Riley-Tillman

Generalizability theory is used to examine the impact of scaling gradients on a single-item Direct Behavior Rating (DBR). A DBR refers to a type of rating scale used to efficiently record target behavior(s) following an observation occasion. Variance components associated with scale gradients are estimated using a random effects design for persons (p) by raters (r) by occasions (o). Data from 106 undergraduate student participants are used in the analysis. Each participant viewed and rated video clips of six elementary-aged students who were engaged in a difficult task. Participant ratings are collected three times for each of two behaviors within three scale gradient conditions (6-, 10-, 14-point scale). Scale gradient does not substantially contribute to the magnitude of observed score variances. In contrast, the largest proportions of variance are attributed to rater and error across all scale gradient conditions. Implications, limitations, and future research considerations are discussed.
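
For readers unfamiliar with the p x r x o analysis, here is a self-contained sketch estimating the variance components of a fully crossed persons-by-raters-by-occasions design from the ANOVA mean squares via the expected-mean-square equations. The simulated data and effect magnitudes are assumptions, not the study's values; the design sizes loosely mirror the abstract (6 rated students, 106 raters, 3 occasions).

```python
import numpy as np

rng = np.random.default_rng(0)
n_p, n_r, n_o = 6, 106, 3   # persons x raters x occasions, one score per cell

# Simulated scores: main effects plus residual (no interaction effects,
# so estimated interaction components should come out near zero).
Y = (rng.normal(0, 0.7, (n_p, 1, 1))        # person effect
     + rng.normal(0, 1.0, (1, n_r, 1))      # rater effect (set largest)
     + rng.normal(0, 0.3, (1, 1, n_o))      # occasion effect
     + rng.normal(0, 1.0, (n_p, n_r, n_o))) # residual error

m = Y.mean()
mp, mr, mo = Y.mean(axis=(1, 2)), Y.mean(axis=(0, 2)), Y.mean(axis=(0, 1))
mpr, mpo, mro = Y.mean(axis=2), Y.mean(axis=1), Y.mean(axis=0)

# Mean squares for the crossed three-way random-effects ANOVA.
MS_p = n_r * n_o * ((mp - m) ** 2).sum() / (n_p - 1)
MS_r = n_p * n_o * ((mr - m) ** 2).sum() / (n_r - 1)
MS_o = n_p * n_r * ((mo - m) ** 2).sum() / (n_o - 1)
MS_pr = n_o * ((mpr - mp[:, None] - mr[None, :] + m) ** 2).sum() / ((n_p - 1) * (n_r - 1))
MS_po = n_r * ((mpo - mp[:, None] - mo[None, :] + m) ** 2).sum() / ((n_p - 1) * (n_o - 1))
MS_ro = n_p * ((mro - mr[:, None] - mo[None, :] + m) ** 2).sum() / ((n_r - 1) * (n_o - 1))
resid = (Y - mpr[:, :, None] - mpo[:, None, :] - mro[None, :, :]
         + mp[:, None, None] + mr[None, :, None] + mo[None, None, :] - m)
MS_e = (resid ** 2).sum() / ((n_p - 1) * (n_r - 1) * (n_o - 1))

# Solve the expected-mean-square equations (negative estimates set to 0).
var_pr = max((MS_pr - MS_e) / n_o, 0)
var_po = max((MS_po - MS_e) / n_r, 0)
var_ro = max((MS_ro - MS_e) / n_p, 0)
var_p = max((MS_p - MS_pr - MS_po + MS_e) / (n_r * n_o), 0)
var_r = max((MS_r - MS_pr - MS_ro + MS_e) / (n_p * n_o), 0)
var_o = max((MS_o - MS_po - MS_ro + MS_e) / (n_p * n_r), 0)
print(dict(p=var_p, r=var_r, o=var_o, pr=var_pr, po=var_po, ro=var_ro, e=MS_e))
```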

Collaboration


Top co-authors of T. Chris Riley-Tillman:

Megan E. Welsh, University of Connecticut

James L. McDougal, State University of New York at Oswego

Rose Jaffery, University of Connecticut