
Publication


Featured research published by Jeremy W. Ford.


Remedial and Special Education | 2014

Using Curriculum-Based Measures With Postsecondary Students With Intellectual and Developmental Disabilities

John L. Hosp; Kiersten Hensley; Sally M. Huddle; Jeremy W. Ford

The purpose of this study was to provide preliminary evidence of the criterion-related validity of curriculum-based measurement (CBM) for reading, mathematics, and written expression with postsecondary students with intellectual and developmental disabilities (ID). The participants included 41 postsecondary students with ID enrolled in a 2-year certificate program at a large Midwestern university. CBMs were administered to participants using standardized procedures, and results were compared with performance on the Woodcock–Johnson Tests of Achievement. Descriptive statistics were calculated as were bivariate correlations between CBM measures and the content-appropriate criterion measure. Results are discussed in terms of the potential use of CBMs as indicators of academic performance for postsecondary students with ID.


Journal of Applied School Psychology | 2016

Comparing Two CBM Maze Selection Tools: Considering Scoring and Interpretive Metrics for Universal Screening

Jeremy W. Ford; Kristen N. Missall; John L. Hosp; Jennifer L. Kuhle

Advances in maze selection curriculum-based measurement have led to several published tools with technical information for interpretation (e.g., norms, benchmarks, cut-scores, classification accuracy) that have increased their usefulness for universal screening. A range of scoring practices has emerged for evaluating student performance on maze selection (e.g., correct restoration, incorrect restoration, correct restoration minus incorrect restoration, and correct restoration minus one-half incorrect restoration). However, lack of clear understanding about the intersection between scoring and interpretation has resulted in limited evidence about using maze selection for making universal screening decisions. In this study, 925 students in Grades 3–6 completed two curriculum-based measurements for maze selection. Student performance on the two was compared across different scoring metrics. Limitations and practical implications are discussed.


Assessment for Effective Intervention | 2018

The Importance of Replication in Measurement Research: Using Curriculum-Based Measures With Postsecondary Students With Developmental Disabilities

John L. Hosp; Jeremy W. Ford; Sally M. Huddle; Kiersten Hensley

Replication is a foundation of the development of a knowledge base in an evidence-based field such as education. This study includes two direct replications of Hosp, Hensley, Huddle, and Ford which found evidence of criterion-related validity of curriculum-based measurement (CBM) for reading and mathematics with postsecondary students with developmental disabilities (DD). Participants included two cohorts of postsecondary students with DD enrolled in a 2-year certificate program at a large Midwestern university (n = 24 and 21). Using the same standardized procedures as Hosp et al., participants were administered CBMs for Oral Passage Reading (OPR), Maze, Math Computation, and Math Concepts and Applications. Descriptive statistics and bivariate correlations between CBMs and the content-appropriate Woodcock–Johnson Tests of Achievement–Third Edition were calculated. No significant differences in criterion-related validity coefficients between cohorts were found but differences between the correlations for Math Computation and Math Concepts and Applications identified in Hosp et al. were not found in either replication cohort.


Assessment for Effective Intervention | 2018

A Comparison of Two Content Area Curriculum-Based Measurement Tools

Jeremy W. Ford; Sarah J. Conoyer; Erica S. Lembke; R. Alex Smith; John L. Hosp

In the present study, two types of curriculum-based measurement (CBM) tools in science, Vocabulary Matching (VM) and Statement Verification for Science (SV-S), a modified Sentence Verification Technique, were compared. Specifically, this study aimed to determine whether the format of information presented (i.e., SV-S vs. VM) produces differences in alternate form reliability and validity of scores or any differences in accuracy of prediction of scores on the state standardized science assessment. Overall, 25 eighth-grade science students were administered two SV-S and two VM forms with identical items along with spring eighth-grade maze passages from Aimsweb. Students had recently taken the eighth-grade state science test. Results regarding technical adequacy for each CBM tool were consistent with past findings. However, this study extends the literature base on CBM tools in science by providing evidence for using standards to develop VM forms. In addition, despite probable ceiling effects, additional evidence was found for the potential of SV-S as a CBM tool in science.


School Psychology Review | 2017

Examining Oral Passage Reading Rate Across Three Curriculum-Based Measurement Tools for Predicting Grade-Level Proficiency

Jeremy W. Ford; Kristen N. Missall; John L. Hosp; Jennifer L. Kuhle

Abstract Curriculum-based measurement (CBM) for oral passage reading (OPR) is among the most commonly used tools for making screening decisions regarding academic proficiency status for students in first through sixth grades. Multiple publishers make available OPR tools, and while they are designed to measure the same broad construct of reading, research suggests that student performance varies within grades and across publishers. Despite the existence of multiple publishers of CBM tools for OPR, many of which include publisher-specific recommendations comparing student performance to a proficiency standard, the use of normative-based cut scores to interpret student performance remains prevalent. In the current study, three commercially available CBM tools for OPR were administered to 1,482 students in first through sixth grade. Results suggest differences between normative- and criterion-based approaches to determining cut scores for screening decisions. Implications regarding resource allocation for students in need of additional intervention are discussed.


Archive | 2016

Learning Disabilities/Special Education

John L. Hosp; Sally M. Huddle; Jeremy W. Ford; Kiersten Hensley

Special education is one of the many foundations of response to intervention (RTI) in addition to public health, medicine, and others. Evaluation and determination of eligibility for special education services, particularly under the category of specific learning disabilities (LD), was one of the main catalysts to trigger widespread adoption of RTI. This chapter presents the research base for aspects of LD and special education in relation to the important components of RTI. Universal screening, progress monitoring, and the use of evidence-based practices (EBPs) that are implemented with fidelity are applicable throughout the tiers of service delivery characteristic of RTI. Although students with LD are included within these in an RTI system, there are some additional considerations. The research base is better developed in some areas than others, and is in a constant state of improvement. Areas for future research within each of the components of RTI are discussed as well as implications for practice from the current research base.


Learning Disabilities Research and Practice | 2018

Improving Efficiency for Making Screening Decisions: A Statewide Comparison of Early Literacy Curriculum-Based Measurement Tools

Jeremy W. Ford; Amanda M. Kern; Michelle K. Hosp; Kristen N. Missall; John L. Hosp

Universal screening practices play a critical role in preventing reading difficulties. Screening decisions typically rely on results from several curriculum-based measurement (CBM) tools. In this study, data from 236 first graders were pulled as a subsample from a statewide study. Participants completed multiple early literacy CBM tools and an outcome measure. Performance differences were compared across tools and publishers to examine classification accuracy. Results show no differences in performance between nonsense word fluency tools across publishers, yet differences were found examining classification accuracy. We also report results of an exploratory analysis examining whether improvements in testing efficiency in early literacy screening are possible via multiple-gating procedures. Improving the accuracy and efficiency of screening procedures are discussed as implications for practice.


Journal of Psychoeducational Assessment | 2018

Examining Curriculum-Based Measurement Screening Tools in Middle School Science: A Scaled Replication Study

Sarah J. Conoyer; Jeremy W. Ford; R. Alex Smith; Erica N. Mason; Erica S. Lembke; John L. Hosp

This replication study examined the alternate form reliability, criterion validity, and predictive utility of two curriculum-based measurement (CBM) tools in science, Vocabulary-Matching (VM) and Statement Verification for Science (SV-S), for the purpose of screening. In all, 205 seventh-grade students from four middle schools were given alternate forms of each science CBM tool. Scores from the Idaho Standards Achievement Test (ISAT) science assessment were obtained. Stronger evidence of reliability and validity with the ISAT was found for VM compared with SV-S. With regard to predictive utility, VM more accurately classified students’ at-risk status compared with SV-S for identifying proficiency on the ISAT. Practical implications and directions for future research are also discussed.


Exceptionality | 2017

Statement Verification for Science: Theory and Examining Technical Adequacy of Alternate Forms

Jeremy W. Ford; John L. Hosp

While curriculum-based measurement (CBM) tools for screening decisions in reading, mathematics, and written language have been well examined, tools for use in content areas (e.g., science and social studies) remain in the beginning stages of research. In this study, two alternate forms of a new CBM tool (Statement Verification for Science; SV-S), for screening decisions regarding students' science content knowledge, are examined for technical adequacy. A total of 1,545 students across Grades 7 (N = 799) and 8 (N = 746) completed two alternate forms of SV-S concurrently with a statewide high-stakes test of accountability. Promising results were found for reliability, in particular internal consistency, while results related to evidence of criterion- and construct-related validity were less than desired. Such results, along with additional exploratory analyses, provide support for future research of SV-S as a CBM tool to assist teachers and other educators with making screening decisions.


Psychology in the Schools | 2017

Empirical Synthesis of the Effect of Standard Error of Measurement on Decisions Made within Brief Experimental Analyses of Reading Fluency

Matthew K. Burns; Crystal N. Taylor; Kristy L. Warmbold-Brann; June L. Preast; John L. Hosp; Jeremy W. Ford

Collaboration


Dive into Jeremy W. Ford's collaboration.

Top Co-Authors

Kiersten Hensley

Minnesota State University


R. Alex Smith

University of Southern Mississippi


Sarah J. Conoyer

Southern Illinois University Edwardsville


Amanda M. Kern

University of Nebraska Omaha