
Publications

Featured research published by Ethan R. Van Norman.


Journal of School Psychology | 2013

Curriculum-Based Measurement of Oral Reading: Multi-study evaluation of schedule, duration, and dataset quality on progress monitoring outcomes

Theodore J. Christ; Cengiz Zopluoglu; Barbara D. Monaghen; Ethan R. Van Norman

Curriculum-Based Measurement of Oral Reading (CBM-R) is used to collect time series data, estimate the rate of student achievement, and evaluate program effectiveness. A series of five studies was carried out to evaluate the validity, reliability, precision, and diagnostic accuracy of progress monitoring across a variety of progress monitoring durations, schedules, and dataset quality conditions. A sixth study evaluated the relation between the various conditions of progress monitoring (duration, schedule, and dataset quality) and the precision of weekly growth estimates. Model parameters were derived from a large extant progress monitoring dataset of second-grade (n = 1,517) and third-grade (n = 1,561) students receiving supplemental reading intervention as part of a Tier II response-to-intervention program. A linear mixed effects regression model was used to simulate true and observed CBM-R progress monitoring data. The validity and reliability of growth estimates were evaluated with squared correlations between true and observed scores, along with split-half reliabilities of observed scores. The precision of growth estimates was evaluated with the root mean square error between true and observed estimates of growth. Finally, receiver operating characteristic curves were used to evaluate diagnostic accuracy and optimize decision thresholds. Results are interpreted to guide progress monitoring practices and inform future research.
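The simulation logic the abstract describes can be sketched as follows: generate each student's true growth from a distribution of slopes, add residual error to produce observed scores, recover a slope per student by ordinary least squares, and compare true and observed slopes. This is a minimal illustration only; all parameter values (mean growth, slope variance, residual SD) are placeholders, not the values the study derived from its extant dataset.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_cbmr(n_students=500, weeks=10, mean_growth=1.5,
                  sd_growth=0.6, sd_residual=10.0):
    """Simulate true and observed CBM-R scores under linear growth.

    Illustrative parameters only; the study estimated its parameters
    from a large extant Tier II progress monitoring dataset.
    """
    t = np.arange(weeks)                                   # week of each observation
    true_slope = rng.normal(mean_growth, sd_growth, n_students)
    intercept = rng.normal(50, 15, n_students)
    true = intercept[:, None] + true_slope[:, None] * t    # latent scores
    observed = true + rng.normal(0, sd_residual, true.shape)
    # OLS slope per student from the observed time series
    obs_slope = np.polyfit(t, observed.T, 1)[0]
    return true_slope, obs_slope

true_slope, obs_slope = simulate_cbmr()
reliability = np.corrcoef(true_slope, obs_slope)[0, 1] ** 2   # squared correlation
rmse = np.sqrt(np.mean((obs_slope - true_slope) ** 2))        # precision of growth
print(f"r^2 = {reliability:.2f}, RMSE = {rmse:.2f} WRCM/week")
```

With short durations and large residual error, the squared correlation between true and observed slopes comes out low, which is consistent with the abstract's motivation for studying duration, schedule, and dataset quality jointly.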


Remedial and Special Education | 2014

Exclusionary Discipline of Students With Disabilities: Student and School Characteristics Predicting Suspension

Amanda L. Sullivan; Ethan R. Van Norman; David A. Klingbeil

Given the negative outcomes associated with suspension, scholars and practitioners are concerned with discipline disparities. This study explored patterns and predictors of suspension in a sample of 2,750 students with disabilities in 39 schools in a Midwestern district. Hierarchical generalized linear modeling demonstrated that disability type, gender, race/ethnicity, and free/reduced lunch status were significant predictors of suspension among students with disabilities. Adjusting for gender and race/ethnicity attenuated suspension risk associated with disability type, and adjusting for student-level socioeconomic variables attenuated risk associated with race/ethnicity, but significant disparities remained. School characteristics were not predictive of suspension risk, but their inclusion in the models was associated with increased risk of suspension among students with emotional disturbance. Results underscore the value of multilevel modeling when identifying predictors of suspension and the need to explore a wider variety of classroom and school factors that may account for inequitable discipline.


Assessment for Effective Intervention | 2013

Curriculum-Based Measurement of Oral Reading: Evaluation of Growth Estimates Derived With Pre–Post Assessment Methods

Theodore J. Christ; Barbara D. Monaghen; Cengiz Zopluoglu; Ethan R. Van Norman

Curriculum-based measurement of oral reading (CBM-R) is used to index the level and rate of student growth across the academic year. The method is frequently used to set student goals and monitor student progress. This study examined the diagnostic accuracy and quality of growth estimates derived from pre–post measurement using CBM-R data. A linear mixed effects regression model was used to simulate progress-monitoring data for multiple levels of progress-monitoring duration (6, 8, 10, …, 20 weeks) and dataset quality, which was operationalized as residual/error in the model (σε = 5, 10, 15, and 20). Results indicate that the duration of instruction, quality of data, and method used to estimate growth influenced the reliability and precision of estimated growth rates, as well as the diagnostic accuracy. Pre–post methods to derive CBM-R growth estimates are likely to require 14 or more weeks of instruction between pre–post occasions. Implications and future directions are discussed.


School Psychology Review | 2016

Curriculum-based measurement of reading: Accuracy of recommendations from three-point decision rules

Ethan R. Van Norman; Theodore J. Christ

Despite their widespread use, there is little research to support the accuracy of curriculum-based measurement of reading progress monitoring decision rules. The purpose of this study was to investigate the accuracy of a common data point decision rule. This study used a three-point rule with a goal line of 1.50 words read correctly per minute (WRCM) across six levels of true growth (range = 0–3 WRCM), two levels of dataset quality or residual (5 and 10 WRCM), and 13 levels of data collection (range = 3–15 weeks). We estimated the probability of a correct decision, as well as the probability of each outcome (change instruction, increase the goal, maintain instruction), across each condition with probability theory and a spreadsheet program. In general, results indicate that recommendations are often inaccurate. Further, the probability of a correct recommendation is below chance in most situations. Results of multiple regression analyses indicate that residual, duration, and true growth interacted to influence decision accuracy. Results are discussed along with implications for future research and practice.
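The study derives these probabilities analytically, but the same quantity can be approximated by Monte Carlo simulation. The sketch below uses one common operationalization of a three-point rule, which is an assumption on my part, not necessarily the study's exact rule: if the last three points fall below the goal line, change instruction; if all three fall above it, raise the goal; otherwise maintain. "Correct" is likewise simplified to: change when true growth is below the goal, raise when it is above.

```python
import numpy as np

rng = np.random.default_rng(1)

def three_point_rule_accuracy(true_growth, goal=1.5, residual_sd=10.0,
                              weeks=10, baseline=50.0, n_sim=20000):
    """Monte Carlo estimate of the chance a three-point rule is 'correct'.

    Hypothetical operationalization: last three points below the goal
    line -> change instruction; above -> raise the goal; mixed -> maintain.
    """
    t = np.arange(weeks)
    goal_line = baseline + goal * t
    correct = 0
    for _ in range(n_sim):
        scores = baseline + true_growth * t + rng.normal(0, residual_sd, weeks)
        last3 = scores[-3:] - goal_line[-3:]
        if np.all(last3 < 0):
            decision = "change"
        elif np.all(last3 > 0):
            decision = "raise"
        else:
            decision = "maintain"
        target = "change" if true_growth < goal else "raise"
        correct += decision == target
    return correct / n_sim

acc = three_point_rule_accuracy(true_growth=0.5)
print(f"P(correct) = {acc:.2f}")
```

Even with true growth a full 1 WRCM below the goal, a residual of 10 WRCM leaves this simplified rule near chance after 10 weeks, illustrating the abstract's finding that recommendations are often inaccurate.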


Journal of School Psychology | 2016

How accurate are interpretations of curriculum-based measurement progress monitoring data? Visual analysis versus decision rules

Ethan R. Van Norman; Theodore J. Christ

Curriculum-based measurement of oral reading (CBM-R) is used to monitor the effects of academic interventions for individual students. Decisions to continue, modify, or terminate these interventions are made by interpreting time series CBM-R data. Such interpretation is founded upon visual analysis or the application of decision rules. The purpose of this study was to compare the accuracy of visual analysis and decision rules. Visual analysts interpreted 108 CBM-R progress monitoring graphs one of three ways: (a) without graphic aids, (b) with a goal line, or (c) with a goal line and a trend line. Graphs differed along three dimensions: trend magnitude, variability of observations, and duration of data collection. Automated trend line and data point decision rules were also applied to each graph. Inferential analyses permitted the estimation of the probability of a correct decision (i.e., the student is improving, so continue the intervention, or the student is not improving, so discontinue the intervention) for each evaluation method as a function of trend magnitude, variability of observations, and duration of data collection. All evaluation methods performed better when students made adequate progress. Visual analysis and decision rules performed similarly when observations were less variable. Results suggest that educators should collect data for more than six weeks, take steps to control measurement error, and visually analyze graphs when data are variable. Implications for practice and research are discussed.


School Psychology Review | 2017

Curriculum-Based Measurement of Reading Progress Monitoring: The Importance of Growth Magnitude and Goal Setting in Decision Making

Ethan R. Van Norman; Theodore J. Christ; Kirsten W. Newell

Research regarding the technical adequacy of growth estimates from curriculum-based measurement of reading progress monitoring data suggests that current decision-making frameworks are likely to yield inaccurate recommendations unless data are collected for extensive periods of time. Instances where data may not need to be collected for long periods to make defensible decisions are presented. Recommendations to collect data for upwards of 3 months may be appropriate for students whose rate of improvement (ROI) approximates the criterion to which their performance is being compared. A framework is presented to help evaluate whether a student's ROI is substantially discrepant from an expected rate of growth (i.e., goal line). A spreadsheet program was created that used user-specified parameters for goal line magnitude, dataset variability, and data collection duration to identify critical ROIs for determining whether students were making adequate progress with different levels of certainty. Analyses suggest that decisions may be feasible sooner than previously thought, particularly when growth is highly discrepant from the goal line and variability in the data is limited. Implications, limitations, and directions for future research are discussed.


School Psychology Review | 2016

A Comparison of Methods to Screen Middle School Students for Reading and Math Difficulties

Peter M. Nelson; Ethan R. Van Norman; Stacey K. Lackner

The current study explored multiple ways in which middle schools can use and integrate data sources to predict proficiency on future high-stakes state achievement tests. The diagnostic accuracy of (a) prior achievement data, (b) teacher rating scale scores, (c) a composite score combining state test scores and rating scale responses, and (d) two gated screening approaches was compared in a sample of 614 middle school students. Prior state test performance emerged as the strongest single predictor of future state test scores; however, results provide evidence that educators may consider locally derived cut scores or alternative screening procedures that incorporate multiple data sources. Specifically, the combination of prior achievement data and teacher ratings of student competence often resulted in increases in either sensitivity or specificity as a function of how data sources were combined.


Archive | 2016

Foundations of Fluency-Based Assessments in Behavioral and Psychometric Paradigms

Theodore J. Christ; Ethan R. Van Norman; Peter M. Nelson

The chapter presents previous and near-term applications and innovations for the assessment of rate-based measures such as fluency. Historical and future developments are discussed within the context of ideographic behavioral and nomothetic psychometric paradigms of assessment. These paradigms are described and contrasted with descriptions of classical test theory (CTT), generalizability theory (GT), and item response theory (IRT). The interpretation and use argument (IUA) is used to frame the contemporary view of unified validity. These theoretical models are combined with an applied perspective to contextualize and encourage future developments in the measurement of fluency.


School Psychology Quarterly | 2013

The Effects of Baseline Estimation on the Reliability, Validity, and Precision of CBM-R Growth Estimates.

Ethan R. Van Norman; Theodore J. Christ; Cengiz Zopluoglu

This study examined the effect of baseline estimation on the quality of trend estimates derived from Curriculum-Based Measurement of Oral Reading (CBM-R) progress monitoring data. The authors used a linear mixed effects regression (LMER) model to simulate progress monitoring data for schedules ranging from 6 to 20 weeks, for datasets with high and low levels of residual variance (poor- and good-quality datasets, respectively). Three observations per day for the first three days of data collection were generated for baseline estimation. As few as one and as many as nine observations were used to calculate baseline. The number of weeks of progress monitoring and the quality of the dataset were highly influential on the reliability, validity, and precision of simulated growth estimates. Results supported using the median of three observations collected on the first day to estimate baseline, particularly when the first observation of that day systematically underestimated student performance. Collecting a large number of observations to estimate baseline does not appear to improve the quality of CBM-R growth estimates.


School Psychology Quarterly | 2017

Single Measure and Gated Screening Approaches for Identifying Students At-Risk for Academic Problems: Implications for Sensitivity and Specificity.

Ethan R. Van Norman; Peter M. Nelson; David A. Klingbeil

Educators need recommendations to improve screening practices without limiting students’ instructional opportunities. Repurposing previous years’ state test scores has shown promise in identifying at-risk students within multitiered systems of support. However, researchers have not directly compared the diagnostic accuracy of previous years’ state test scores with data collected during fall screening periods to identify at-risk students. In addition, the benefit of using previous state test scores in conjunction with data from a separate measure to identify at-risk students has not been explored. The diagnostic accuracy of three types of screening approaches was tested to predict proficiency on end-of-year high-stakes assessments: state test data obtained during the previous year, data from a different measure administered in the fall, and both measures combined (i.e., a gated model). Extant reading and math data (N = 2,996) from 10 schools in the Midwest were analyzed. When used alone, both measures yielded similar sensitivity and specificity values. The gated model yielded superior specificity values compared with using either measure alone, at the expense of sensitivity. Implications, limitations, and ideas for future research are discussed.
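The sensitivity/specificity trade-off the abstract reports follows directly from how a gated screen works: a student is flagged only if both measures agree, so the flagged set shrinks, false positives fall (specificity rises), and true positives fall too (sensitivity drops). The sketch below demonstrates this on synthetic data; the cut score, sample size, and effect sizes are all invented for illustration and do not come from the study.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic illustration (not the study's data): an end-of-year outcome
# predicted by last year's state test and a fall screening measure.
n = 3000
outcome_risk = rng.random(n) < 0.3                    # truly at-risk students
state = rng.normal(0, 1, n) - 0.9 * outcome_risk      # lower scores if at risk
fall = rng.normal(0, 1, n) - 0.9 * outcome_risk

def sens_spec(flagged, at_risk):
    sens = np.mean(flagged[at_risk])                  # true-positive rate
    spec = np.mean(~flagged[~at_risk])                # true-negative rate
    return sens, spec

cut = -0.3                                            # illustrative cut score
single = state < cut                                  # one measure alone
gated = (state < cut) & (fall < cut)                  # flag only if both agree

for name, flag in [("state alone", single), ("gated", gated)]:
    s, p = sens_spec(flag, outcome_risk)
    print(f"{name}: sensitivity={s:.2f}, specificity={p:.2f}")
```

Because the gated flags are a strict subset of the single-measure flags, the gated model's specificity can never be lower, and its sensitivity can never be higher, matching the pattern reported in the abstract.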

Collaboration

Top co-authors of Ethan R. Van Norman:

Peter M. Nelson, Pennsylvania State University
David A. Klingbeil, University of Wisconsin–Milwaukee