
Publication


Featured research published by Kelli D. Cummings.


Assessment for Effective Intervention | 2013

Form Effects on DIBELS Next Oral Reading Fluency Progress-Monitoring Passages

Kelli D. Cummings; Yonghan Park; Holle A. Bauer Schaper

The purpose of this article is to describe passage effects on Dynamic Indicators of Basic Early Literacy Skills–Next Edition Oral Reading Fluency (DIBELS Next ORF) progress-monitoring measures for Grades 1 through 6. Approximately 572 students per grade (total N with at least one data point = 3,092) read all three DIBELS Next winter benchmark passages in the prescribed order, and within 2 weeks read four additional progress-monitoring passages in a randomly assigned and counterbalanced order. All 20 progress-monitoring passages were read by students in Grades 1 through 4; 16 passages were read in Grade 5 and 12 passages were read in Grade 6. Results focus on the persistence of form effects in spite of a priori criteria used in passage development. The authors describe the utility of three types of equating methods (i.e., mean, linear, and equipercentile equating) in ameliorating these effects. Their conclusions focus on preferred equating methods with small samples, the impact of form effects on progress-monitoring decision making, and recommendations for future use of ORF passages for progress monitoring.
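
As a rough illustration of the equating methods named in this abstract, the sketch below applies mean and linear equating to place a word-correct-per-minute (WCPM) score from an alternate progress-monitoring passage onto a reference passage's scale. The passage scores are invented, not data from the study; equipercentile equating follows the same idea but matches the full score distributions percentile by percentile and generally requires larger samples.

```python
# Illustrative sketch only: mean and linear equating of word-correct-per-minute
# (WCPM) scores from an alternate ORF passage onto a reference passage scale.
# Passage scores below are hypothetical, not data from the article.
import numpy as np

reference = np.array([52, 61, 48, 75, 90, 66, 58, 81], dtype=float)  # anchor passage
alternate = np.array([47, 55, 45, 70, 83, 60, 52, 77], dtype=float)  # harder passage

# Mean equating: shift scores so the two forms share the same mean.
def mean_equate(x, ref, alt):
    return x + (ref.mean() - alt.mean())

# Linear equating: match both the mean and standard deviation of the reference form.
def linear_equate(x, ref, alt):
    return ref.mean() + (ref.std(ddof=1) / alt.std(ddof=1)) * (x - alt.mean())

raw = 65.0  # a student's raw WCPM on the alternate passage
print(mean_equate(raw, reference, alternate))
print(linear_equate(raw, reference, alternate))
```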


Assessment for Effective Intervention | 2011

Assessing Phonemic Awareness in Preschool and Kindergarten: Development and Initial Validation of First Sound Fluency

Kelli D. Cummings; Ruth A. Kaminski; Roland H. Good; Maya Elin O'Neil

This article presents initial findings from a study examining First Sound Fluency (FSF), which is a brief measure of early phonemic awareness (PA) skills. Students in prekindergarten and kindergarten (preK and K) were assessed three times (fall, winter, and spring) over one school year, which resulted in multiple reliability and validity coefficients. In addition, a subset of students in both preK and K was assessed monthly between benchmark periods using alternate forms of the FSF measure to estimate delayed alternate-form reliability. The FSF measure displayed adequate reliability and validity for decision making in early literacy for students in both grades. Implications of these findings are discussed.
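
For readers unfamiliar with the reliability coefficients reported in studies like this one, the sketch below estimates delayed alternate-form reliability as the correlation between scores on two forms administered about a month apart. The scores are invented and the computation is a simplified illustration, not the study's analysis.

```python
# Minimal sketch, assuming delayed alternate-form reliability is estimated as the
# Pearson correlation between two FSF forms given roughly a month apart.
# Scores below are invented.
import numpy as np

form_a = np.array([12, 18, 7, 25, 30, 15, 22, 9], dtype=float)   # first administration
form_b = np.array([14, 17, 9, 27, 28, 16, 24, 11], dtype=float)  # alternate form, ~1 month later

reliability = np.corrcoef(form_a, form_b)[0, 1]
print(f"Delayed alternate-form reliability estimate: {reliability:.2f}")
```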


Journal of School Psychology | 2013

In search of average growth: Describing within-year oral reading fluency growth across Grades 1–8

Joseph F. T. Nese; Gina Biancarosa; Kelli D. Cummings; Patrick C. Kennedy; Julie Alonzo; Gerald Tindal

Measures of oral reading fluency (ORF) are perhaps the most often used assessment to monitor student progress as part of a response to intervention (RTI) model. Rates of growth in research and aim lines in practice are used to characterize student growth; in either case, growth is generally defined as linear, increasing at a constant rate. Recent research suggests ORF growth follows a nonlinear trajectory, but limitations related to the datasets used in such studies, composed of only three testing occasions, curtail their ability to examine the true functional form of ORF growth. The purpose of this study was to model within-year ORF growth using up to eight testing occasions for 1,448 students in Grades 1 to 8 to assess (a) the average growth trajectory for within-year ORF growth, (b) whether students vary significantly in within-year ORF growth, and (c) the extent to which findings are consistent across grades. Results demonstrated that for Grades 1 to 7, a quadratic growth model fit better than either linear or cubic growth models, and for Grade 8, there was no substantial, stable growth. Findings suggest that the expectation for linear growth currently used in practice may be unrealistic.
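
A minimal sketch of the linear-versus-quadratic comparison, greatly simplified: the study fit multilevel growth models across many students, whereas the code below fits ordinary polynomial curves to one hypothetical student's eight within-year scores and compares them with AIC.

```python
# Simplified sketch: linear, quadratic, and cubic curves fit to one hypothetical
# student's eight within-year ORF scores, compared with a Gaussian AIC.
# The scores are invented to flatten late in the year.
import numpy as np

months = np.arange(8, dtype=float)                          # eight testing occasions
wcpm = np.array([40, 48, 55, 61, 66, 69, 71, 72], float)    # invented WCPM scores

def fit_poly_aic(degree):
    coefs = np.polyfit(months, wcpm, degree)
    resid = wcpm - np.polyval(coefs, months)
    n, k = len(wcpm), degree + 1
    rss = float(np.sum(resid ** 2))
    aic = n * np.log(rss / n) + 2 * k                       # Gaussian AIC up to a constant
    return coefs, aic

for deg, label in [(1, "linear"), (2, "quadratic"), (3, "cubic")]:
    _, aic = fit_poly_aic(deg)
    print(f"{label:>9}: AIC = {aic:.1f}")
```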


Assessment for Effective Intervention | 2013

Advanced (Measurement) Applications of Curriculum-Based Measurement in Reading

Yaacov Petscher; Kelli D. Cummings; Gina Biancarosa; Hank Fien

The purpose of this article is to provide a commentary on the current state of several measurement issues pertaining to curriculum-based measures of reading (R-CBM). We begin by providing an overview of the utility of R-CBM, followed by a presentation of five specific measurement considerations: (a) the reliability of R-CBM oral reading fluency (ORF), (b) issues pertaining to form effects, (c) the generalizability of scores from R-CBM, (d) measurement error, and (e) linearity of growth in R-CBM. We then conclude with a presentation of the purpose for this issue and broadly introduce the articles in the special issue. Because ORF is one of the most common measures of R-CBM, much of the review is focused on this particular type of assessment; however, the issues presented extend to other assessments of R-CBM.


Journal of Psychoeducational Assessment | 2016

Evaluation of the DIBELS (Sixth Edition) Diagnostic System for the Selection of Native and Proficient English Speakers at Risk of Reading Difficulties

Keith Smolkowski; Kelli D. Cummings

This comprehensive evaluation of the Dynamic Indicators of Basic Early Literacy Skills Sixth Edition (DIBELS6) set of measures provides a practical illustration of signal detection methods, which are used to determine the value of screening and diagnostic systems, and offers an updated set of cut scores (decision thresholds). Data were drawn from a sample of 13,507 English-proficient students in kindergarten through Grade 3, with more than 4,500 students per grade level. Results indicate that most DIBELS6 measures accurately predict comprehensive test performance and that previously published decision thresholds for DIBELS6 are generally appropriate with some key exceptions. For example, the performance of phoneme segmentation fluency did not always meet expectations. The revised DIBELS6 decision thresholds can satisfactorily identify students who may require additional supports.
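
The core signal detection quantities behind a cut-score evaluation can be sketched in a few lines. Everything below is hypothetical: the screening scores, outcomes, and the cut score of 30 are invented for illustration and are not published DIBELS6 thresholds.

```python
# Hedged sketch: sensitivity and specificity for a candidate cut score against a
# later criterion outcome. All values are invented, including the cut score.
import numpy as np

screener = np.array([12, 25, 31, 40, 18, 55, 29, 44, 22, 60], float)  # fall screening scores
at_risk  = np.array([1,  1,  1,  0,  1,  0,  0,  0,  1,  0])          # 1 = later reading difficulty

cut = 30                      # flag students scoring below the cut as at risk
flagged = screener < cut

true_pos  = np.sum(flagged & (at_risk == 1))
false_neg = np.sum(~flagged & (at_risk == 1))
true_neg  = np.sum(~flagged & (at_risk == 0))
false_pos = np.sum(flagged & (at_risk == 0))

sensitivity = true_pos / (true_pos + false_neg)   # proportion of at-risk students caught
specificity = true_neg / (true_neg + false_pos)   # proportion of not-at-risk students passed
print(sensitivity, specificity)
```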


Review of Educational Research | 2014

Assessment Fidelity in Reading Intervention Research: A Synthesis of the Literature

Deborah K. Reed; Kelli D. Cummings; Andrew Schaper; Gina Biancarosa

Recent studies indicate that examiners make a number of intentional and unintentional errors when administering reading assessments to students. Because these errors introduce construct-irrelevant variance in scores, the fidelity of test administrations could influence the results of evaluation studies. To determine how assessment fidelity is being addressed in reading intervention research, we systematically reviewed 46 studies conducted with students in Grades K–8 identified as having a reading disability or at-risk for reading failure. Articles were coded for features such as the number and type of tests administered, experience and role of examiners, tester to student ratio, initial and follow-up training provided, monitoring procedures, testing environment, and scoring procedures. Findings suggest assessment integrity data are rarely reported. We discuss the results in a framework of potential threats to assessment fidelity and the implications of these threats for interpreting intervention study results.


The Rural Special Education Quarterly | 2011

Utility of Oral Reading and Retell Fluency in Predicting Proficiency on the Montana Comprehensive Assessment System

Trent L. Atkins; Kelli D. Cummings

The purpose of this study was to document the relationship between two commonly-used indicators of reading proficiency (i.e., Oral Reading Fluency [ORF] and Retell Fluency [RTF]) and two reading outcome tests in the state of Montana. Third and fourth grade students were assessed over 1 school year (2005-2006). Each student was assessed at three time points (fall, winter, and spring) with both the ORF and RTF measures. In addition, students participated in the standard, end-of-year, state comprehensive reading assessment. Both indicators displayed strong correlations with the criterion tests at the end of the school year. RTF added a small portion of unique variance explained to end-of-year outcomes when the raw scores (rather than ratio scores) were used. We highlight the way in which the ORF and RTF measures might be used together as efficient, economical predictors of important reading outcomes. We also provide information regarding the way in which the findings from this study impact special education services in rural settings.
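
The "unique variance" finding refers to the increase in explained variance when Retell Fluency is added to a model that already contains Oral Reading Fluency. The sketch below illustrates that comparison on simulated data; the simulated relationships are arbitrary and the code is not the study's analysis.

```python
# Illustrative sketch of the "unique variance" idea: compare R^2 from a model
# using ORF alone with R^2 after adding Retell Fluency (raw score). Data simulated.
import numpy as np

rng = np.random.default_rng(0)
n = 200
orf = rng.normal(100, 25, n)                           # oral reading fluency (WCPM)
rtf = 0.3 * orf + rng.normal(30, 10, n)                # retell fluency, correlated with ORF
state = 0.8 * orf + 0.2 * rtf + rng.normal(0, 15, n)   # end-of-year state test score

def r_squared(predictors, y):
    X = np.column_stack([np.ones(len(y))] + list(predictors))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

r2_orf  = r_squared([orf], state)
r2_both = r_squared([orf, rtf], state)
print(f"R^2 (ORF only): {r2_orf:.3f}; added by RTF: {r2_both - r2_orf:.3f}")
```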


Archive | 2016

An Introduction to the Statistical Evaluation of Fluency Measures with Signal Detection Theory

Keith Smolkowski; Kelli D. Cummings; Lisa A. Strycker

Fluency represents the learned ability to respond quickly, effortlessly, and accurately to a given stimulus. The fluent application of a skill, however, requires frequent and deliberate practice on all relevant subskills, not simply the repetition of subskills that are already fluent. Dancers, for example, learn best through marking, where they practice only partial movements of a performance. Diagnosing the source of the disfluency is critical for educators. Judgments grounded on data, statistical models, and even informal prediction models, however, outperform those based on intuition alone. Teachers can easily and accurately select the students most in need of supplemental instruction or support through the use of diagnostic or classification systems.


Assessment for Effective Intervention | 2015

Evaluation of Diagnostic Systems: The Selection of Students at Risk of Academic Difficulties

Keith Smolkowski; Kelli D. Cummings

Diagnostic tools can help schools more consistently and fairly match instructional resources to the needs of their students. To ensure the best educational outcome for each child, diagnostic decision-making systems seek to balance time, clarity, and accuracy. However, recent research notes that many educational decisions tend to be made using professional judgment alone. Judgments grounded on data, statistical models, and even informal prediction models, however, outperform those based on intuition alone. The purpose of this manuscript is to describe the theoretical basis for signal detection and methods for statistically evaluating diagnostic decisions in education. We make recommendations to help test developers and consumers apply this methodology to other diagnostic systems in education and interpret the use of signal detection methods for educational screeners and diagnostic tests.
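
One example of the kind of statistical evaluation the article describes is working out what a positive screen actually implies once the base rate of risk is taken into account, a calculation that intuition often gets wrong. The sensitivity, specificity, and base rate below are hypothetical, not values from the article.

```python
# Hypothetical illustration: how base rate shapes the positive predictive value
# of a screener. All three inputs are invented for this sketch.
sensitivity = 0.85   # P(positive screen | truly at risk)
specificity = 0.80   # P(negative screen | not at risk)
base_rate   = 0.20   # proportion of students truly at risk

p_positive = sensitivity * base_rate + (1 - specificity) * (1 - base_rate)
ppv = sensitivity * base_rate / p_positive   # P(at risk | positive screen)
print(f"Positive predictive value: {ppv:.2f}")
```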


Assessment for Effective Intervention | 2015

Selecting Students at Risk of Academic Difficulties

Kelli D. Cummings; Keith Smolkowski

This paper aims to translate for practitioners the principles and methods for evaluating screening measures in education, including benchmark goals and cut points, from our technical manuscript “Evaluation of Diagnostic Systems: The Selection of Students at Risk of Academic Difficulties” (this issue). We offer a brief description of procedures developed over the past 50 years including the receiver operating characteristic (ROC) curves, the area under the ROC curve as a general measure of screener accuracy, and approaches to selecting a specific cut score to indicate risk. We also provide reporting standards to help practitioners evaluate research on screeners supported by best practices and to encourage researchers to attend to key reporting principles, such as using confidence bounds as estimates of precision. We then discuss examples from the literature and emphasize the imprecision of statistical estimates from small samples. Screeners and diagnostic tests, developed and evaluated with care and implemented consistently in schools, can improve educators’ decisions about resource allocation and ultimately improve the delivery of supports to students.
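
A brief sketch of the ROC-based evaluation described here, using scikit-learn on simulated data. The Youden-J rule shown is one common way to choose a cut score, not necessarily the approach the authors recommend, and all values are invented.

```python
# Hedged sketch: ROC curve, AUC, and a Youden-J cut score for a simulated screener
# where lower scores indicate greater risk. Data and cut score are illustrative only.
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(1)
n = 500
at_risk = rng.binomial(1, 0.2, n)                    # 1 = later academic difficulty
screener = np.where(at_risk == 1,
                    rng.normal(25, 8, n),            # at-risk students score lower
                    rng.normal(45, 10, n))

risk_score = -screener                               # ROC expects higher = more risk
auc = roc_auc_score(at_risk, risk_score)
fpr, tpr, thresholds = roc_curve(at_risk, risk_score)

j = tpr - fpr                                        # Youden's J at each threshold
best = np.argmax(j)
cut = -thresholds[best]                              # convert back to the screener scale
print(f"AUC = {auc:.2f}; flag students scoring below about {cut:.0f}")
```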

Collaboration


Dive into Kelli D. Cummings's collaborations.

Top Co-Authors

Beverly L. Weiser

Southern Methodist University
