Publications


Featured research published by Theodore J. Christ.


Journal of School Psychology | 2013

A systematic review and summarization of the recommendations and research surrounding Curriculum-Based Measurement of oral reading fluency (CBM-R) decision rules

Scott P. Ardoin; Theodore J. Christ; Laura S. Morena; Damien C. Cormier; David A. Klingbeil

Research and policy have established that data are necessary to guide decisions within education. Many of these decisions are made within problem solving and response to intervention frameworks for service delivery. Curriculum-Based Measurement in Reading (CBM-R) is a widely used data collection procedure within those models of service delivery. Although the evidence for CBM-R as a screening and benchmarking procedure has been summarized multiple times in the literature, there is no comprehensive review of the evidence for its application to monitor and evaluate individual student progress. The purpose of this study was to identify and summarize the psychometric and empirical evidence for CBM-R as it is used to monitor and evaluate student progress. There was an emphasis on the recommended number of data points collected during progress monitoring and interpretive guidelines. The review identified 171 journal articles, chapters, and instructional manuals using online search engines and research databases. Recommendations and evidence from 102 documents that met the study criteria were evaluated and summarized. Results indicate that most decision-making practices are based on expert opinion and that there is very limited psychometric or empirical support for such practices. There is a lack of published evidence to support program evaluation and progress monitoring with CBM-R. More research is required to inform data collection procedures and interpretive guidelines.


Journal of School Psychology | 2013

Curriculum-Based Measurement of Oral Reading: Multi-study evaluation of schedule, duration, and dataset quality on progress monitoring outcomes

Theodore J. Christ; Cengiz Zopluoglu; Barbara D. Monaghen; Ethan R. Van Norman

Curriculum-Based Measurement of Oral Reading (CBM-R) is used to collect time series data, estimate the rate of student achievement, and evaluate program effectiveness. A series of five studies was carried out to evaluate the validity, reliability, precision, and diagnostic accuracy of progress monitoring across a variety of progress monitoring durations, schedules, and dataset quality conditions. A sixth study evaluated the relation between the various conditions of progress monitoring (duration, schedule, and dataset quality) and the precision of weekly growth estimates. Model parameters were derived from a large extant progress monitoring dataset of second-grade (n=1517) and third-grade students (n=1561) receiving supplemental reading intervention as part of a Tier II response-to-intervention program. A linear mixed effects regression model was used to simulate true and observed CBM-R progress monitoring data. The validity and reliability of growth estimates were evaluated with squared correlations between true and observed scores along with split-half reliabilities of observed scores. The precision of growth estimates was evaluated with the root mean square error between true and observed estimates of growth. Finally, receiver operating characteristic curves were used to evaluate diagnostic accuracy and optimize decision thresholds. Results are interpreted to guide progress monitoring practices and inform future research.
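
To make the simulation logic concrete, the following is a minimal sketch of the general approach described above: simulate linear growth with student-level random effects plus residual error, fit an ordinary least-squares trend to each observed series, and compare true with estimated slopes. All population parameters are hypothetical placeholders, not the values the authors derived from their extant dataset.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical population parameters; the study derived its values from a
# large extant grade 2-3 dataset, which is not reproduced here.
N_STUDENTS = 500
WEEKS = 10                        # progress monitoring duration
MEAN_SLOPE, SD_SLOPE = 1.5, 0.5   # true growth in words correct per week
MEAN_INT, SD_INT = 40.0, 10.0     # true baseline level
SIGMA_E = 10.0                    # residual error (dataset quality condition)

weeks = np.arange(WEEKS)
true_slope = rng.normal(MEAN_SLOPE, SD_SLOPE, N_STUDENTS)
true_int = rng.normal(MEAN_INT, SD_INT, N_STUDENTS)

# Observed scores = true linear growth + week-level residual error.
obs = (true_int[:, None] + true_slope[:, None] * weeks
       + rng.normal(0.0, SIGMA_E, (N_STUDENTS, WEEKS)))

# Ordinary least-squares slope for each student's observed series.
obs_slope = np.polyfit(weeks, obs.T, deg=1)[0]

# Validity: squared correlation between true and estimated slopes.
r2 = np.corrcoef(true_slope, obs_slope)[0, 1] ** 2
# Precision: root mean square error of the weekly growth estimate.
rmse = np.sqrt(np.mean((obs_slope - true_slope) ** 2))
print(f"r^2(true, observed slopes) = {r2:.2f}, RMSE = {rmse:.2f}")
```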


Exceptional Children | 2012

Curriculum-Based Measurement of Oral Reading: Quality of Progress Monitoring Outcomes

Theodore J. Christ; Cengiz Zopluoglu; Jeffery D. Long; Barbara D. Monaghen

Curriculum-based measurement of oral reading (CBM-R) is frequently used to set student goals and monitor student progress. This study examined the quality of growth estimates derived from CBM-R progress monitoring data. The authors used a linear mixed effects regression (LMER) model to simulate progress monitoring data for multiple levels of progress monitoring duration (i.e., 6, 8, 10 … 20 weeks) and data set quality, which was operationalized as residual/error in the model (σ_ε = 5, 10, 15, and 20). The number of data points, quality of data, and method used to estimate growth all influenced the reliability and precision of estimated growth rates. Results indicated that progress monitoring outcomes are sufficient to guide educational decisions if (a) ordinary least-squares regression is used to derive trend line estimates, (b) a very good progress monitoring data set is used, and (c) the data set comprises a minimum of 14 CBM-R data points. The article discusses implications and future directions.
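
The interaction of duration and dataset quality reported above follows directly from the standard error of an ordinary least-squares slope. The short sketch below, a rough illustration assuming one probe per week and the study's residual conditions, shows how the precision of the weekly growth estimate improves with additional data points and degrades with residual error.

```python
import numpy as np

# Standard error of an OLS slope: SE(b) = sigma_e / sqrt(sum((x - mean(x))^2)).
# Illustrative only: both residual error (dataset quality) and the number of
# weekly data points drive the precision of CBM-R growth estimates.
for sigma_e in (5, 10, 15, 20):       # residual/error conditions from the study
    for n_weeks in (6, 10, 14, 20):   # progress monitoring durations
        x = np.arange(n_weeks)        # one data point per week (assumed)
        se = sigma_e / np.sqrt(np.sum((x - x.mean()) ** 2))
        print(f"sigma_e={sigma_e:2d}, weeks={n_weeks:2d} -> SE(slope)={se:.2f}")
```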


Journal of School Psychology | 2010

An investigation of the generalizability and dependability of direct behavior rating single item scales (DBR-SIS) to measure academic engagement and disruptive behavior of middle school students.

Sandra M. Chafouleas; Amy M. Briesch; T. Chris Riley-Tillman; Theodore J. Christ; Anne C. Black; Stephen P. Kilgus

A total of 4 raters, including 2 teachers and 2 research assistants, used Direct Behavior Rating Single Item Scales (DBR-SIS) to measure the academic engagement and disruptive behavior of 7 middle school students across multiple occasions. Generalizability study results for the full model revealed modest to large magnitudes of variance associated with persons (students), occasions of measurement (day), and associated interactions. However, an unexpectedly low proportion of the variance in DBR data was attributable to the facet of rater, as well as a negligible variance component for the facet of rating occasion nested within day (10-min interval within a class period). Results of a reduced model and subsequent decision studies specific to individual rater and rater type (research assistant and teacher) suggested that reliability-like estimates differed substantially depending on the rater. Overall, findings supported previous recommendations that, in the absence of estimates of rater reliability and firm recommendations regarding rater training, ratings obtained from DBR-SIS, and any subsequent analyses, be conducted within rater. Additionally, results suggested that when selecting a teacher rater, the person most likely to substantially interact with target students during the specified observation period may be the best choice.
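
As a rough illustration of the decision-study logic referenced above, the sketch below projects reliability-like coefficients for a persons-by-occasions design within a single rater, consistent with the within-rater recommendation. The variance components are invented for illustration, not the published estimates.

```python
# Hypothetical D-study sketch for DBR-SIS ratings within one rater.
var_p = 4.0    # persons (students): the object of measurement
var_po = 1.5   # person x occasion interaction
var_e = 3.0    # residual error
var_o = 2.0    # occasion main effect (enters the absolute coefficient only)

for n_o in (1, 5, 10, 20):
    rel_error = (var_po + var_e) / n_o          # relative error variance
    abs_error = (var_o + var_po + var_e) / n_o  # absolute error variance
    g = var_p / (var_p + rel_error)             # generalizability coefficient
    phi = var_p / (var_p + abs_error)           # dependability (phi) coefficient
    print(f"occasions={n_o:2d}: G={g:.2f}, Phi={phi:.2f}")
```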


Assessment for Effective Intervention | 2008

Implications of Recent Research: Curriculum-Based Measurement of Math Computation

Theodore J. Christ; Sarah Scullin; Anicke Tolbize; Cynthia L. Jiban

Curriculum-based measurement of mathematics (CBM-M) comprises a set of procedures and instrumentation to assess the level and trend of student achievement in early mathematics. The purpose of this article is to review the recent research and psychometric evidence for CBM-M. Although recent developments in CBM-M include procedures to assess early numeracy and application problems, this review focuses exclusively on computation assessment. The results of this review provide evidence that CBM-M is sufficiently reliable and valid for some applications; however, interpretation must be informed by the context and the scope of the assessment domain. Mathematics computation is a subdomain of the mathematics curriculum and assessment, and therefore the validity of CBM-M is limited by its construct representation (i.e., stimulus set and task demands). Nevertheless, the review provides support for the ongoing development and use of CBM-M as both a general outcome measure and a subskill mastery measure for computation. Implications for research and practice are discussed.


Journal of School Psychology | 2013

Direct behavior rating as a school-based behavior screener for elementary and middle grades

Sandra M. Chafouleas; Stephen P. Kilgus; Rose Jaffery; T. Chris Riley-Tillman; Megan E. Welsh; Theodore J. Christ

The purpose of this study was to investigate how Direct Behavior Rating Single Item Scales (DBR-SIS) involving targets of academically engaged, disruptive, and respectful behaviors function in school-based screening assessment. Participants included 831 students in kindergarten through eighth grades who attended schools in the northeastern United States. Teachers provided behavior ratings for a sample of students in their classrooms on the DBR-SIS, the Behavioral and Emotional Screening System (Kamphaus & Reynolds, 2007), and the Student Risk Screening Scale (Drummond, 1994). Given variations in rating procedures to accommodate scheduling differences across grades, analysis was conducted separately for elementary school and middle school grade levels. Results suggested that the recommended cut scores, the combination of behavior targets, and the resulting conditional probability indices varied depending on grade level grouping (lower elementary, upper elementary, middle). For example, for the lower elementary grade level grouping, a combination of disruptive behavior (cut score=2) and academically engaged behavior (cut score=8) was considered to offer the best balance among indices of diagnostic accuracy, whereas a cut score of 1 for disruptive behavior and 8 for academically engaged behavior were recommended for the upper elementary school grade level grouping, and cut scores of 1 and 9, respectively, were suggested for the middle school grade level grouping. Generally, the DBR-SIS cut scores considered optimal for screening used single or combined targets including academically engaged behavior and disruptive behavior, offering a reasonable balance of indices for sensitivity (.51-.90), specificity (.47-.83), negative predictive power (.94-.98), and positive predictive power (.14-.41). The single target of respectful behavior performed poorly across all grade level groups, and performance of DBR-SIS targets was relatively better in the elementary school than middle school grade level groups. Overall, results supported that disruptive behavior is highly important in evaluating risk status in lower grade levels and that academically engaged behavior becomes more pertinent as students reach higher grade levels. Limitations, future directions, and implications are discussed.
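
For readers less familiar with the reported indices, the sketch below computes sensitivity, specificity, and predictive power from simulated ratings under the lower-elementary rule. The base rate, the rating distributions, and the direction of each cut (screening positive when disruptive exceeds 2 or engagement falls below 8) are assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated data: true risk status plus DBR-SIS-style 0-10 ratings.
n = 800
at_risk = rng.random(n) < 0.15  # ~15% base rate (assumed)
disruptive = np.clip(rng.normal(np.where(at_risk, 5.0, 1.0), 1.5), 0, 10)
engaged = np.clip(rng.normal(np.where(at_risk, 5.0, 9.0), 1.5), 0, 10)

# Assumed screening rule for the lower elementary grouping.
screen_pos = (disruptive > 2) | (engaged < 8)

tp = np.sum(screen_pos & at_risk)    # flagged and truly at risk
fp = np.sum(screen_pos & ~at_risk)   # flagged but not at risk
fn = np.sum(~screen_pos & at_risk)   # missed at-risk students
tn = np.sum(~screen_pos & ~at_risk)  # correctly passed students

print(f"sensitivity = {tp / (tp + fn):.2f}")
print(f"specificity = {tn / (tn + fp):.2f}")
print(f"PPV = {tp / (tp + fp):.2f}")
print(f"NPV = {tn / (tn + fn):.2f}")
```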


Journal of School Psychology | 2014

Direct behavior rating as a school-based behavior universal screener: replication across sites.

Stephen P. Kilgus; T. Chris Riley-Tillman; Sandra M. Chafouleas; Theodore J. Christ; Megan E. Welsh

The purpose of this study was to evaluate the utility of Direct Behavior Rating Single Item Scale (DBR-SIS) targets of disruptive, engaged, and respectful behavior within school-based universal screening. Participants included 31 first-, 25 fourth-, and 23 seventh-grade teachers and their 1108 students, sampled from 13 schools across three geographic locations (northeast, southeast, and midwest). Each teacher rated approximately 15 of their students across three measures, including DBR-SIS, the Behavioral and Emotional Screening System (Kamphaus & Reynolds, 2007), and the Student Risk Screening Scale (Drummond, 1994). Moderate to high bivariate correlations and area under the curve statistics supported concurrent validity and diagnostic accuracy of DBR-SIS. Receiver operating characteristic curve analyses indicated that although respectful behavior cut scores recommended for screening remained constant across grade levels, cut scores varied for disruptive behavior and academic engaged behavior. Specific cut scores for first grade included 2 or less for disruptive behavior, 7 or greater for academically engaged behavior, and 9 or greater for respectful behavior. In fourth and seventh grades, cut scores changed to 1 or less for disruptive behavior and 8 or greater for academically engaged behavior, and remained the same for respectful behavior. Findings indicated that disruptive behavior was particularly appropriate for use in screening at first grade, whereas academically engaged behavior was most appropriate at both fourth and seventh grades. Each set of cut scores was associated with acceptable sensitivity (.79-.87), specificity (.71-.82), and negative predictive power (.94-.96), but low positive predictive power (.43-.44). DBR-SIS multiple gating procedures, through which students were only considered at risk overall if they exceeded cut scores on 2 or more DBR-SIS targets, were also determined acceptable in first and seventh grades, as the use of both disruptive behavior and academically engaged behavior in defining risk yielded acceptable conditional probability indices. Overall, the current findings are consistent with previous research, yielding further support for the DBR-SIS as a universal screener. Limitations, implications for practice, and directions for future research are discussed.
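
A minimal sketch of the multiple-gating rule described above: a student is flagged at risk overall only when ratings fall on the risk side of the cut for two or more targets. The helper name and the cut directions are hypothetical; the cut values follow the fourth- and seventh-grade results.

```python
# Each gate is positive when the rating falls on the assumed risk side of
# its cut: high disruptive, low engagement, or low respectful behavior.
def at_risk_multiple_gating(disruptive, engaged, respectful,
                            cuts=(1, 8, 9), min_gates=2):
    gates = [disruptive > cuts[0],   # above the disruptive cut
             engaged < cuts[1],      # below the engagement cut
             respectful < cuts[2]]   # below the respectful cut
    return sum(gates) >= min_gates

print(at_risk_multiple_gating(3, 6, 9))  # two gates positive -> True
print(at_risk_multiple_gating(0, 9, 9))  # no gates positive -> False
```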


Journal of Positive Behavior Interventions | 2011

The Impact of Observation Duration on the Accuracy of Data Obtained from Direct Behavior Rating (DBR).

T. Chris Riley-Tillman; Theodore J. Christ; Sandra M. Chafouleas; Christina H. Boice-Mallach; Amy M. Briesch

In this study, evaluation of direct behavior rating (DBR) occurred with regard to two primary areas: (a) accuracy of ratings with varied instrumentation (anchoring: proportional or absolute) and procedures (observation length: 5 min, 10 min, or 20 min) and (b) one-week test-retest reliability. Participants viewed video clips of a typical third grade student and then used single-item DBR scales to rate disruptive and academically engaged behavior. Overall, ratings tended to overestimate the actual occurrence of behavior. Although ratings of academic engagement were not affected by the duration of the observation, ratings of disruptive behavior were: the longer the duration, the more the ratings of disruptive behavior were overestimated. In addition, the longer the student was disruptive, the greater the overestimation effect. Results further revealed that anchoring the DBR scale as proportional versus absolute number of minutes did not affect rating accuracy. Finally, test-retest analyses revealed low to moderate consistency across time points for 10-min and 20-min observations, with increased consistency as the number of raters or number of ratings increased (e.g., four 5-min vs. one 20-min). Overall, results contribute to the technical evaluation of DBR as a behavior assessment method and provide preliminary information regarding the influence of duration of an observation period on DBR data.


Educational and Psychological Measurement | 2009

Generalizability of Scaling Gradients on Direct Behavior Ratings

Sandra M. Chafouleas; Theodore J. Christ; T. Chris Riley-Tillman

Generalizability theory is used to examine the impact of scaling gradients on a single-item Direct Behavior Rating (DBR). A DBR refers to a type of rating scale used to efficiently record target behavior(s) following an observation occasion. Variance components associated with scale gradients are estimated using a random effects design for persons (p) by raters (r) by occasions (o). Data from 106 undergraduate student participants are used in the analysis. Each participant viewed and rated video clips of six elementary-aged students who were engaged in a difficult task. Participant ratings are collected three times for each of two behaviors within three scale gradient conditions (6-, 10-, 14-point scale). Scale gradient does not substantially contribute to the magnitude of observed score variances. In contrast, the largest proportions of variance are attributed to rater and error across all scale gradient conditions. Implications, limitations, and future research considerations are discussed.
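
The variance decomposition behind these results can be illustrated with the standard ANOVA (expected mean squares) estimator for a fully crossed random-effects persons (p) x raters (r) x occasions (o) design. The sketch below simulates data from invented components, chosen so that rater and residual error dominate as in the study, and then recovers them; none of the numbers are the published estimates.

```python
import numpy as np

rng = np.random.default_rng(2)

# Invented variance components (not the published estimates).
n_p, n_r, n_o = 106, 4, 3
true = dict(p=1.0, r=2.0, o=0.2, pr=0.3, po=0.2, ro=0.1, e=3.0)
shapes = dict(p=(n_p, 1, 1), r=(1, n_r, 1), o=(1, 1, n_o), pr=(n_p, n_r, 1),
              po=(n_p, 1, n_o), ro=(1, n_r, n_o), e=(n_p, n_r, n_o))
X = sum(rng.normal(0.0, true[k] ** 0.5, shapes[k]) for k in shapes)

# Marginal means for each effect.
g = X.mean()
mp, mr, mo = X.mean((1, 2)), X.mean((0, 2)), X.mean((0, 1))
mpr, mpo, mro = X.mean(2), X.mean(1), X.mean(0)

# Mean squares for main effects, two-way interactions, and the residual.
MSp = n_r * n_o * np.sum((mp - g) ** 2) / (n_p - 1)
MSr = n_p * n_o * np.sum((mr - g) ** 2) / (n_r - 1)
MSo = n_p * n_r * np.sum((mo - g) ** 2) / (n_o - 1)
MSpr = n_o * np.sum((mpr - mp[:, None] - mr + g) ** 2) / ((n_p - 1) * (n_r - 1))
MSpo = n_r * np.sum((mpo - mp[:, None] - mo + g) ** 2) / ((n_p - 1) * (n_o - 1))
MSro = n_p * np.sum((mro - mr[:, None] - mo + g) ** 2) / ((n_r - 1) * (n_o - 1))
resid = (X - mpr[:, :, None] - mpo[:, None, :] - mro[None, :, :]
         + mp[:, None, None] + mr[None, :, None] + mo[None, None, :] - g)
MSe = np.sum(resid ** 2) / ((n_p - 1) * (n_r - 1) * (n_o - 1))

# Variance component estimates from the expected mean squares.
est = dict(e=MSe,
           pr=(MSpr - MSe) / n_o, po=(MSpo - MSe) / n_r, ro=(MSro - MSe) / n_p,
           p=(MSp - MSpr - MSpo + MSe) / (n_r * n_o),
           r=(MSr - MSpr - MSro + MSe) / (n_p * n_o),
           o=(MSo - MSpo - MSro + MSe) / (n_p * n_r))
for k in true:
    print(f"sigma^2_{k}: true={true[k]:.2f} estimated={est[k]:.2f}")
```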


Assessment for Effective Intervention | 2009

Rating Scale Items: A Brief Review of Nomenclature, Components, and Formatting to Inform the Development of Direct Behavior Rating (DBR)

Theodore J. Christ; Christina H. Boice

Rating scales are a common component of many multisource, multimethod frameworks for socioemotional and behavior assessment of children. There is a modest literature base to support the use of attitudinal, behavioral, and personality rating scales. Much of that historic literature focuses on the characteristics and interpretations of specific scales, which are mostly Likert-type scales. There are many more scale types and item types that receive less attention within the literature and less application in practice. This article provides a brief summary of the literature relevant to formats, components, and nomenclature associated with rating scale item types. This article is intended to provide basic information that might inspire and contribute to the development and evaluation of novel rating scales with a variety of item types, especially those relevant to Direct Behavior Rating methods of assessment.

Collaboration


Theodore J. Christ's top co-authors include:

Peter M. Nelson

Pennsylvania State University

John M. Hintze

University of Massachusetts Amherst
