
Publication


Featured research published by Nathan H. Clemens.


Assessment for Effective Intervention | 2009

A Conceptual Model for Evaluating System Effects of Response to Intervention

Edward S. Shapiro; Nathan H. Clemens

Implementing a Response to Intervention (RTI) system could improve overall student achievement and the way in which students with disabilities are identified. In order to evaluate the effectiveness of an RTI system (i.e., “Is our RTI system accomplishing its stated goals?”), a set of data-based indicators is needed. This paper describes a set of five measurable indicators from three domains of evaluation that schools can use to obtain frequent feedback on the impact of their RTI system on reading instruction and achievement. The evaluation methodology provides multiple, sensitive metrics that can be used soon after RTI implementation begins, and does not require that schools wait for longer-term, singular outcome measures, such as performance on high-stakes tests, to determine whether the RTI system is functioning as intended. The data used for each indicator and the ways in which the data can inform decisions are described. Issues related to RTI evaluation and areas for further research are discussed.


Journal of School Psychology | 2014

Assessing spelling in kindergarten: Further comparison of scoring metrics and their relation to reading skills

Nathan H. Clemens; Eric L. Oslund; Leslie E. Simmons; Deborah C. Simmons

Early reading and spelling development share foundational skills, yet spelling assessment is underutilized in evaluating early reading. This study extended research comparing the degree to which methods for scoring spelling skills at the end of kindergarten were associated with reading skills measured at the same time as well as at the end of first grade. Five strategies for scoring spelling responses were compared: totaling the number of words spelled correctly, totaling the number of correct letter sounds, totaling the number of correct letter sequences, using a rubric for scoring invented spellings, and calculating the Spelling Sensitivity Score (Masterson & Apel, 2010b). Students (N=287) who were identified at kindergarten entry as at risk for reading difficulty and who had received supplemental reading intervention were administered a standardized spelling assessment in the spring of kindergarten, and measures of phonological awareness, decoding, word recognition, and reading fluency were administered concurrently and at the end of first grade. The five spelling scoring metrics were similar in their strong relations with factors summarizing reading subskills (phonological awareness, decoding, and word reading) on a concurrent basis. Furthermore, when predicting first-grade reading skills based on spring-of-kindergarten performance, spelling scores from all five metrics explained unique variance over the autoregressive effects of kindergarten word identification. The practical advantages of using a brief spelling assessment for early reading evaluation and the relative tradeoffs of each scoring metric are discussed.
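
As a rough illustration of how two of the simpler scoring metrics compared above differ, the sketch below scores a hypothetical response under a words-correct rule and a correct-letter-sequences rule. The boundary-marker convention and function names are assumptions made for illustration; they are not the study's scoring procedures, which also included correct letter sounds, an invented-spelling rubric, and the Spelling Sensitivity Score.

```python
def words_correct(responses, targets):
    """Total number of words spelled exactly correctly."""
    return sum(r.lower() == t.lower() for r, t in zip(responses, targets))


def correct_letter_sequences(response, target):
    """Count correct letter sequences: adjacent letter pairs, plus word-initial
    and word-final boundaries, that match the target spelling. A simplified
    scoring convention, not the study's exact rubric."""
    r = "^" + response.lower() + "$"
    t = "^" + target.lower() + "$"
    target_pairs = [t[i:i + 2] for i in range(len(t) - 1)]
    response_pairs = [r[i:i + 2] for i in range(len(r) - 1)]
    score = 0
    for pair in target_pairs:
        if pair in response_pairs:
            response_pairs.remove(pair)  # each response pair earns credit once
            score += 1
    return score


# "kat" for target "cat" earns partial credit under letter-sequence scoring
# but no credit under words-correct scoring.
print(correct_letter_sequences("kat", "cat"))  # 2 of 4 possible sequences
print(words_correct(["kat"], ["cat"]))         # 0
```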


Journal of Research on Educational Effectiveness | 2014

Integrating Content Knowledge-Building and Student-Regulated Comprehension Practices in Secondary English Language Arts Classes

Deborah C. Simmons; Melissa Fogarty; Eric L. Oslund; Leslie E. Simmons; Angela Hairrell; John L. Davis; Leah Anderson; Nathan H. Clemens; Sharon Vaughn; Greg Roberts; Stephanie Stillman; Anna-Mária Fall

In this experimental study we examined the effects of integrating teacher-directed knowledge-building and student-regulated comprehension practices in 7th- to 10th-grade English language arts classes. We also investigated the effect of instructional quality and whether integrating practices differentially benefited students with lower entry-level reading comprehension. The study was conducted in 6 schools, involving 17 teachers and 921 students. Teachers’ English language arts classes were randomly assigned to intervention (n = 36) or typical practice comparison (n = 29) conditions, and all teachers taught in both conditions. Students in both conditions grew significantly from pretest to posttest on proximal measures of narrative (ES = .09) and expository comprehension (ES = .22), as well as a standardized distal comprehension measure (ES = .46); however, no statistically significant between-group differences were found. Although intervention fidelity did not significantly influence outcomes, observational data indicated that teachers increasingly incorporated comprehension practices in their typical instruction. Effect sizes indicated a differential influence of entry-level reading comprehension on proximal and distal comprehension, with higher performing readers in the intervention condition benefiting more than their lower performing peers on expository comprehension.
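
The pretest-to-posttest effect sizes reported above summarize standardized growth, but the abstract does not state the exact formula used. The sketch below shows one common convention (mean gain divided by the pooled standard deviation of the two time points) applied to made-up scores, purely for illustration.

```python
import math


def standardized_gain(pretest, posttest):
    """Mean pre-to-post gain divided by the pooled SD of the two time points.
    One common effect size convention; the study's exact formula is not
    specified in the abstract."""
    n = len(pretest)
    mean_gain = sum(post - pre for pre, post in zip(pretest, posttest)) / n

    def sd(xs):
        m = sum(xs) / len(xs)
        return math.sqrt(sum((x - m) ** 2 for x in xs) / (len(xs) - 1))

    pooled_sd = math.sqrt((sd(pretest) ** 2 + sd(posttest) ** 2) / 2)
    return mean_gain / pooled_sd


# Illustration with invented scores (not study data):
pre = [10, 12, 9, 14, 11, 13]
post = [12, 13, 10, 16, 12, 15]
print(round(standardized_gain(pre, post), 2))
```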


Journal of Special Education | 2012

Defensible Progress Monitoring Data for Medium- and High-Stakes Decisions

Richard I. Parker; Kimberly J. Vannest; John L. Davis; Nathan H. Clemens

Within a response to intervention model, educators increasingly use progress monitoring (PM) to support medium- to high-stakes decisions for individual students. Using PM data to support these more demanding decisions requires more careful consideration of measurement error. That error should be calculated within a fixed linear regression model rather than a classical test theory model, which has been more common. Seven practical skills are described for improving the use of PM data for medium- to high-stakes decisions: (a) estimating a static performance level in PM, (b) fitting a level of confidence to an educational decision, (c) expressing an estimated score (Yhat) with its measurement error, (d) judging reliable improvement from one time to a later time, (e) properly using slope versus trendedness, (f) expressing “rate of improvement” (slope) with error, and (g) controlling autocorrelation. An example data set and PM graphs are used to illustrate each skill.
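
A minimal sketch of the fixed linear regression approach described above: fitting PM scores over time and expressing both the estimated level (Yhat) and the rate of improvement (slope) with their standard errors. The weekly scores are hypothetical, and the article's treatment of confidence levels, reliable improvement, and autocorrelation is not reproduced here.

```python
import math


def pm_regression(weeks, scores):
    """Fit a fixed linear regression to progress monitoring (PM) data and
    return the slope (rate of improvement) and a predicted score, each with
    its standard error. A minimal ordinary least-squares sketch."""
    n = len(weeks)
    xbar = sum(weeks) / n
    ybar = sum(scores) / n
    sxx = sum((x - xbar) ** 2 for x in weeks)
    sxy = sum((x - xbar) * (y - ybar) for x, y in zip(weeks, scores))
    slope = sxy / sxx
    intercept = ybar - slope * xbar
    residuals = [y - (intercept + slope * x) for x, y in zip(weeks, scores)]
    s = math.sqrt(sum(e ** 2 for e in residuals) / (n - 2))  # residual SD
    se_slope = s / math.sqrt(sxx)
    x0 = weeks[-1]                      # estimate level at the last time point
    yhat = intercept + slope * x0
    se_yhat = s * math.sqrt(1 / n + (x0 - xbar) ** 2 / sxx)
    return slope, se_slope, yhat, se_yhat


# Hypothetical weekly words-correct-per-minute scores (not from the article):
weeks = list(range(1, 11))
scores = [18, 20, 19, 23, 24, 22, 26, 27, 29, 30]
slope, se_b, yhat, se_y = pm_regression(weeks, scores)
print(f"rate of improvement: {slope:.2f} +/- {se_b:.2f} per week")
print(f"estimated level at week {weeks[-1]}: {yhat:.1f} +/- {se_y:.1f}")
```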


School Psychology Review | 2015

The Predictive Validity of a Computer-Adaptive Assessment of Kindergarten and First-Grade Reading Skills

Nathan H. Clemens; Shanna Hagan-Burke; Wen Luo; Carissa Cerda; Alane Blakely; Jennifer Frosch; Brenda Gamez-Patience; Meredith Jones

This study examined the predictive validity of a computer-adaptive assessment for measuring kindergarten reading skills using the STAR Early Literacy (SEL) test. The findings showed that the results of SEL assessments administered during the fall, winter, and spring of kindergarten were moderate and statistically significant predictors of year-end reading and reading-related skills, and they explained 35% to 38% of the variance in a latent variable of word-reading skills. Similar results were observed with a subsample of 71 participants who received follow-up assessments in first grade. End-of-kindergarten analyses indicated that, when added as predictors with SEL, paper-based measures of letter naming, letter-sound fluency, and word-reading fluency improved the amount of explained variance in kindergarten and first-grade year-end word-reading skills. Classification-accuracy analyses found that the SEL literacy classifications aligned with word-reading skills measured by paper-based assessments for students with higher SEL scores, but less alignment was found for students with lower SEL scores. In addition, SEL cut scores showed problematic accuracy, especially in predicting outcomes at the end of first grade. The addition of paper-based assessments tended to improve accuracy over using SEL in isolation. Overall, SEL shows promise as a universal screening tool for kindergarten reading skills, although it may not yet be able to completely replace paper-based assessments of early reading.
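
Classification accuracy of a screening cut score, as examined in the study, is typically summarized with sensitivity, specificity, and an overall hit rate. The sketch below shows those calculations; the cut score and data are hypothetical and are not the SEL cut scores or the study's sample.

```python
def classification_accuracy(screen_scores, outcome_at_risk, cut_score):
    """Sensitivity, specificity, and overall hit rate of a screening cut score
    judged against a later reading outcome. Values below are illustrative."""
    tp = fp = tn = fn = 0
    for score, at_risk in zip(screen_scores, outcome_at_risk):
        flagged = score < cut_score          # below the cut = flagged as at risk
        if flagged and at_risk:
            tp += 1
        elif flagged and not at_risk:
            fp += 1
        elif not flagged and at_risk:
            fn += 1
        else:
            tn += 1
    sensitivity = tp / (tp + fn) if (tp + fn) else float("nan")
    specificity = tn / (tn + fp) if (tn + fp) else float("nan")
    hit_rate = (tp + tn) / len(screen_scores)
    return sensitivity, specificity, hit_rate


# Hypothetical screener scores and year-end risk status:
scores = [455, 520, 610, 480, 700, 530, 495, 640]
at_risk = [True, True, False, True, False, True, False, False]
print(classification_accuracy(scores, at_risk, cut_score=525))
```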


Journal of Learning Disabilities | 2014

Monitoring Early First-Grade Reading Progress: A Comparison of Two Measures

Nathan H. Clemens; Edward S. Shapiro; Jiun-Yu Wu; Aaron B. Taylor; Grace I. L. Caskie

This study compared the validity of progress monitoring slope of nonsense word fluency (NWF) and word identification fluency (WIF) with early first-grade readers. Students (N = 80) considered to be at risk for reading difficulty were monitored with NWF and WIF on a 1-2 week basis across 11 weeks. Reading skills at the end of first grade were assessed using measures of passage reading fluency, real and pseudoword reading efficiency, and basic comprehension. Latent growth models indicated that although slope on both measures significantly predicted year-end reading skills, models including WIF accounted for more variance in spring reading skills than NWF, and WIF slope was more strongly associated with reading outcomes than NWF slope. Analyses of student growth plots suggested that WIF slope was more positively associated with later reading skills and discriminated more clearly between students according to successful or unsuccessful year-end reading outcomes. Although both measures may be used to monitor reading growth of at-risk students in early first grade, WIF may provide a clearer index of reading growth. Implications for data-based decision-making are discussed.
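
The study itself used latent growth models; as a much simpler stand-in, the sketch below computes each student's ordinary least-squares slope on a progress monitoring measure and correlates those slopes with a year-end outcome. The weekly WIF scores, student identifiers, and outcome values are hypothetical.

```python
import math


def slope(xs, ys):
    """Least-squares slope of progress monitoring scores over weeks."""
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    return (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) /
            sum((x - xbar) ** 2 for x in xs))


def pearson_r(xs, ys):
    """Pearson correlation between students' slopes and a year-end outcome."""
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    sx = math.sqrt(sum((x - xbar) ** 2 for x in xs))
    sy = math.sqrt(sum((y - ybar) ** 2 for y in ys))
    return sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / (sx * sy)


# Hypothetical biweekly WIF scores for three students and their year-end
# passage reading fluency scores (illustrative values only):
weeks = [1, 3, 5, 7, 9, 11]
wif = {"s1": [5, 8, 12, 15, 19, 24],
       "s2": [4, 5, 7, 8, 10, 11],
       "s3": [6, 6, 7, 7, 8, 9]}
outcomes = {"s1": 62, "s2": 38, "s3": 29}
slopes = [slope(weeks, wif[s]) for s in wif]
print(pearson_r(slopes, [outcomes[s] for s in wif]))
```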


Journal of Psychoeducational Assessment | 2015

Interpreting Secondary Students’ Performance on a Timed, Multiple-Choice Reading Comprehension Assessment: The Prevalence and Impact of Non-Attempted Items

Nathan H. Clemens; John L. Davis; Leslie E. Simmons; Eric L. Oslund; Deborah C. Simmons

Standardized measures are often used as an index of students’ reading comprehension, and scores have important implications, particularly for students who perform below expectations. This study examined secondary-level students’ patterns of responding and the prevalence and impact of non-attempted items on a timed, group-administered, multiple-choice test of reading comprehension. The Reading Comprehension subtest from the Gates-MacGinitie Reading Test was administered to 694 students in Grades 7 to 9. Students were categorized according to their test performance (low-, middle-, and high-achieving). Scores of the lowest achieving subgroup were affected significantly by high rates of non-attempted items, particularly on the final third of the test. Furthermore, the percentage of students who completed the assessment was far below that reported by the test authors. The results send a cautionary message to researchers and educators: when text comprehension is the primary assessment target, rates of non-attempted items and their impact on the interpretation of students’ text processing skills should be considered. Practical considerations are presented.
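
A small sketch of how rates of non-attempted items might be tallied overall and within the final third of a test, in the spirit of the analysis described above. The response coding (None marking a blank item) and the 9-item example are assumptions for illustration, not the study's materials.

```python
def non_attempt_rates(item_responses, n_items):
    """Proportion of non-attempted (blank) items overall and within the final
    third of the test for each student."""
    last_third_start = (2 * n_items) // 3
    rates = []
    for responses in item_responses:
        blanks = [i for i, r in enumerate(responses) if r is None]
        overall = len(blanks) / n_items
        final_third = (sum(i >= last_third_start for i in blanks)
                       / (n_items - last_third_start))
        rates.append((overall, final_third))
    return rates


# Hypothetical response patterns for a 9-item test (None = not attempted):
students = [
    ["B", "A", "D", "C", "A", None, None, None, None],  # stops attempting early
    ["B", "A", "D", "C", "A", "B", "C", "D", "A"],       # completes the test
]
print(non_attempt_rates(students, n_items=9))
```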


Reading Psychology | 2012

Tracing Student Responsiveness to Intervention With Early Literacy Skills Indicators: Do They Reflect Growth Toward Text Reading Outcomes?

Nathan H. Clemens; Alexandra Hilt-Panahon; Edward S. Shapiro; Myeongsun Yoon

This study investigated how well four widely used early literacy skills indicators reflect growth toward first-grade text reading skills. In an examination of the progress of 101 students across kindergarten and first grade, Letter Naming Fluency (LNF) and Nonsense Word Fluency (NWF) were more accurate than Initial Sounds Fluency and Phoneme Segmentation Fluency in discriminating between students grouped according to successful or unsuccessful first-grade reading outcomes. LNF and NWF slopes also discriminated between groups, but graphed observed scores suggested potential problems in identifying students with persistently low achievement. Results suggest the need for continued refinement of early literacy skills measures for instructional decision-making.
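
One common way to quantify how well an indicator discriminates between students with successful and unsuccessful year-end outcomes is the area under the ROC curve, computed below by pairwise comparison. This is an illustrative stand-in rather than the study's own accuracy analyses, and the LNF scores are hypothetical.

```python
def auc(scores_unsuccessful, scores_successful):
    """Area under the ROC curve via pairwise comparison: the probability that
    a randomly chosen successful reader scores higher on the indicator than a
    randomly chosen unsuccessful reader (ties count half)."""
    wins = ties = 0
    for u in scores_unsuccessful:
        for s in scores_successful:
            if s > u:
                wins += 1
            elif s == u:
                ties += 1
    total = len(scores_unsuccessful) * len(scores_successful)
    return (wins + 0.5 * ties) / total


# Hypothetical Letter Naming Fluency scores by year-end outcome group:
lnf_unsuccessful = [18, 22, 25, 27, 30]
lnf_successful = [29, 34, 37, 41, 45]
print(auc(lnf_unsuccessful, lnf_successful))
```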


Archive | 2016

Screening Assessment Within a Multi-Tiered System of Support: Current Practices, Advances, and Next Steps

Nathan H. Clemens; Milena A. Keller-Margulis; Timothy Scholten; Myeongsun Yoon

Within multi-tiered systems of support (MTSS), screening assessments play an important role in identifying students who are in need of supplemental support strategies. In this chapter, the authors review the tools and methods commonly used in MTSS for academic skills screening, identify limitations with these practices, and highlight potential areas of improvement regarding assessment methods and content of screening tools, decision-making processes used to identify students in need of support, and methods used for evaluating screening tools. A set of recommendations and directions for future work are offered for advancing screening assessment and improving decision-making processes in schools with MTSS.


Journal of Psychoeducational Assessment | 2017

The Prevalence of Reading Fluency and Vocabulary Difficulties among Adolescents Struggling with Reading Comprehension

Nathan H. Clemens; Deborah C. Simmons; Leslie E. Simmons; Huan Wang; Oi-man Kwok

This study sought to better understand the prevalence of concurrent and specific difficulties in reading fluency and vocabulary among adolescents with low reading comprehension. Latent class analysis (LCA) was used to identify a sample of 180 students in sixth through eighth grades with reading comprehension difficulties. A subsequent LCA identified subgroups of students with common patterns of strengths and weaknesses in reading fluency and vocabulary. Results indicated that more than 96% of the students demonstrated deficits in at least one area, with the largest subgroup exhibiting co-occurring difficulties in fluency and vocabulary. Difficulties in fluency were more common than difficulties in vocabulary. Students with low reading comprehension but adequate scores in reading fluency or vocabulary represented only a very small portion of the sample. Coupled with findings from prior studies, results indicate that large numbers of adolescents with reading comprehension difficulties are likely in need of intervention in foundational skill and knowledge areas, which may not be viewed as instructional priorities among secondary educators.
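
The study identified its subgroups with latent class analysis, which models class membership probabilistically and requires specialized software. As a simplified illustration only, the sketch below tallies fluency/vocabulary deficit patterns using fixed cut scores; the cut scores and student data are hypothetical.

```python
from collections import Counter


def deficit_profiles(students, fluency_cut, vocab_cut):
    """Tally patterns of fluency and vocabulary difficulty among students
    already identified with low reading comprehension. A cutoff-based
    grouping for illustration, not latent class analysis."""
    profiles = Counter()
    for fluency, vocab in students:
        low_fluency = fluency < fluency_cut
        low_vocab = vocab < vocab_cut
        if low_fluency and low_vocab:
            profiles["both fluency and vocabulary"] += 1
        elif low_fluency:
            profiles["fluency only"] += 1
        elif low_vocab:
            profiles["vocabulary only"] += 1
        else:
            profiles["neither"] += 1
    return profiles


# Hypothetical standard scores (fluency, vocabulary); cuts are illustrative:
sample = [(82, 85), (78, 92), (88, 79), (80, 81), (95, 96), (84, 83)]
print(deficit_profiles(sample, fluency_cut=90, vocab_cut=90))
```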

Collaboration


Dive into Nathan H. Clemens's collaborations.

Top Co-Authors

Eric L. Oslund
Middle Tennessee State University

Greg Roberts
University of Texas at Austin