Publication


Featured research published by Eric L. Oslund.


Journal of School Psychology | 2014

Assessing spelling in kindergarten: Further comparison of scoring metrics and their relation to reading skills

Nathan H. Clemens; Eric L. Oslund; Leslie E. Simmons; Deborah C. Simmons

Early reading and spelling development share foundational skills, yet spelling assessment is underutilized in evaluating early reading. This study extended research comparing the degree to which methods for scoring spelling skills at the end of kindergarten were associated with reading skills measured at the same time as well as at the end of first grade. Five strategies for scoring spelling responses were compared: totaling the number of words spelled correctly, totaling the number of correct letter sounds, totaling the number of correct letter sequences, using a rubric for scoring invented spellings, and calculating the Spelling Sensitivity Score (Masterson & Apel, 2010b). Students (N=287) who were identified at kindergarten entry as at risk for reading difficulty and who had received supplemental reading intervention were administered a standardized spelling assessment in the spring of kindergarten, and measures of phonological awareness, decoding, word recognition, and reading fluency were administered concurrently and at the end of first grade. The five spelling scoring metrics were similar in their strong relations with factors summarizing reading subskills (phonological awareness, decoding, and word reading) on a concurrent basis. Furthermore, when predicting first-grade reading skills based on spring-of-kindergarten performance, spelling scores from all five metrics explained unique variance over the autoregressive effects of kindergarten word identification. The practical advantages of using a brief spelling assessment for early reading evaluation and the relative tradeoffs of each scoring metric are discussed.
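As a point of reference, the sketch below (Python; not taken from the study, and using deliberately simplified scoring rules noted in the comments) illustrates how two of the simpler metrics, words spelled correctly and correct letter sequences, might be computed; the invented-spelling rubric and the Spelling Sensitivity Score are not reproduced here.

# Hypothetical, simplified illustration of two spelling scoring metrics.
# These rules are assumptions for demonstration, not the study's procedures.

def words_correct(responses, targets):
    """Total number of words spelled exactly right (case-insensitive exact match)."""
    return sum(r.lower() == t.lower() for r, t in zip(responses, targets))

def correct_letter_sequences(response, target):
    """Simplified correct-letter-sequence score: pad both spellings with boundary
    markers and credit each adjacent pair in the response that also appears as an
    adjacent pair in the target (a rough proxy for hand scoring)."""
    padded_r = "^" + response.lower() + "$"
    padded_t = "^" + target.lower() + "$"
    target_pairs = {padded_t[i:i + 2] for i in range(len(padded_t) - 1)}
    return sum(padded_r[i:i + 2] in target_pairs for i in range(len(padded_r) - 1))

# Example: a correct spelling of "cat" earns 4 sequences (^c, ca, at, t$),
# while the invented spelling "kat" earns 2 (at, t$).
print(words_correct(["kat", "dog"], ["cat", "dog"]))   # 1
print(correct_letter_sequences("kat", "cat"))          # 2
print(correct_letter_sequences("cat", "cat"))          # 4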


Exceptional Children | 2013

Adjusting Beginning Reading Intervention Based on Student Performance: An Experimental Evaluation

Michael D. Coyne; Deborah C. Simmons; Shanna Hagan-Burke; Leslie E. Simmons; Oi-man Kwok; Minjung Kim; Melissa Fogarty; Eric L. Oslund; Aaron B. Taylor; Ashley Capozzoli-Oldham; Sharon Ware; Mary E. Little; D'Ann M. Rawlinson

This experimental study evaluated a model in which the delivery of a supplemental beginning reading intervention was adjusted based on student performance. Kindergarten students identified as at risk for reading difficulties were assigned to one of two versions of the Early Reading Intervention (ERI; Pearson/Scott Foresman, 2004). Students assigned to the experimental condition received the intervention with systematic adjustments based on student performance. Students in the comparison condition received the same intervention without instructional modifications. The experimental group outperformed the comparison group on all posttest measures at the end of kindergarten. Follow-up analyses at the end of first grade revealed a continued advantage for the experimental group. Findings suggest that systematically adjusting intervention support in response to student performance may be feasible and efficacious.


Journal of Research on Educational Effectiveness | 2014

Integrating Content Knowledge-Building and Student-Regulated Comprehension Practices in Secondary English Language Arts Classes

Deborah C. Simmons; Melissa Fogarty; Eric L. Oslund; Leslie E. Simmons; Angela Hairrell; John L. Davis; Leah Anderson; Nathan H. Clemens; Sharon Vaughn; Greg Roberts; Stephanie Stillman; Anna-Mária Fall

In this experimental study we examined the effects of integrating teacher-directed knowledge-building and student-regulated comprehension practices in 7th- to 10th-grade English language arts classes. We also investigated the effect of instructional quality and whether integrating practices differentially benefitted students with lower entry-level reading comprehension. The study was conducted in 6 schools, involving 17 teachers and 921 students. Teachers’ English language arts classes were randomly assigned to intervention (n = 36) or typical practice comparison (n = 29) conditions, and all teachers taught in both conditions. Students in both conditions grew significantly from pretest to posttest on proximal measures of narrative (ES = .09) and expository comprehension (ES = .22), as well as a standardized distal comprehension measure (ES = .46); however, no statistically significant between-group differences were found. Although intervention fidelity did not significantly influence outcomes, observational data indicated that teachers increasingly incorporated comprehension practices in their typical instruction. Effect sizes indicated a differential influence of entry-level reading comprehension on proximal and distal comprehension, with higher performing readers in the intervention condition benefiting more than their lower performing peers on expository comprehension.


Reading Psychology | 2012

Predicting Kindergarteners’ Response to Early Reading Intervention: An Examination of Progress-Monitoring Measures

Eric L. Oslund; Shanna Hagan-Burke; Aaron B. Taylor; Deborah C. Simmons; Leslie E. Simmons; Oi-man Kwok; Caitlin Johnson; Michael D. Coyne

This study examined the predictive validity of combinations of progress-monitoring measures: (a) curriculum-embedded phonemic awareness and alphabetic/decoding measures, and (b) Dynamic Indicators of Basic Early Literacy Skills (DIBELS; Good & Kaminski, 2002) nonsense word fluency and phoneme segmentation fluency on reading outcomes of kindergarten students in a Tier 2 intervention. Results of multiple-regression analyses indicated that curriculum-embedded mastery checks and DIBELS measures each explained a significant amount of variance on the outcome measure. However, curriculum-embedded measures explained statistically significantly more variance at each time point, supporting their utility in documenting the progress of kindergarten students receiving intervention.


Journal of Psychoeducational Assessment | 2015

Interpreting Secondary Students’ Performance on a Timed, Multiple-Choice Reading Comprehension Assessment: The Prevalence and Impact of Non-Attempted Items

Nathan H. Clemens; John L. Davis; Leslie E. Simmons; Eric L. Oslund; Deborah C. Simmons

Standardized measures are often used as an index of students’ reading comprehension, and scores have important implications, particularly for students who perform below expectations. This study examined secondary-level students’ patterns of responding and the prevalence and impact of non-attempted items on a timed, group-administered, multiple-choice test of reading comprehension. The Reading Comprehension subtest from the Gates-MacGinitie Reading Test was administered to 694 students in Grades 7 to 9. Students were categorized according to their test performance (low-, middle-, and high-achieving). Scores of the lowest-achieving subgroup were affected significantly by high rates of non-attempted items, particularly on the final third of the test. Furthermore, the percentage of students who completed the assessment was far below that reported by the test authors. The results send a cautionary message to researchers and educators: when text comprehension is the primary assessment target, consider rates of non-attempted items and their impact on interpreting students’ text processing skills. Practical considerations are presented.


Learning Disability Quarterly | 2015

Can Curriculum-Embedded Measures Predict the Later Reading Achievement of Kindergarteners at Risk of Reading Disability?

Eric L. Oslund; Deborah C. Simmons; Shanna Hagan-Burke; Oi-man Kwok; Leslie E. Simmons; Aaron B. Taylor; Michael D. Coyne

This study examined the changing role and longitudinal predictive validity of curriculum-embedded progress-monitoring measures (CEMs) for kindergarten students receiving Tier 2 intervention and identified as at risk of developing reading difficulties. Multiple measures were examined to determine whether they could predict comprehensive latent first- and second-grade reading outcomes and whether their predictive validity changed concurrent with reading development. CEMs of phonemic, alphabetic, and integrated tasks were given 3 times during the kindergarten year to 299 students. Structural equation modeling indicated that CEMs explained a significant amount of variance on first- (54%–63%) and second-grade (34%–41%) outcomes. The predictive validity of specific measures varied over the kindergarten year, with sound and letter identification measures being predictive early and segmenting and word reading becoming important as reading abilities progressed. Findings suggest that CEMs may be viable and helpful tools for making data-driven instructional decisions in a response-to-intervention framework.


Journal of Learning Disabilities | 2015

Examining the Effects of Linking Student Performance and Progression in a Tier 2 Kindergarten Reading Intervention

Deborah C. Simmons; Minjung Kim; Oi-man Kwok; Michael D. Coyne; Leslie E. Simmons; Eric L. Oslund; Melissa Fogarty; Shanna Hagan-Burke; Mary E. Little; D’Ann Rawlinson

Despite the emerging evidence base on response to intervention, there is limited research regarding how to effectively use progress-monitoring data to adjust instruction for students in Tier 2 intervention. In this study, we analyzed extant data from a series of randomized experimental studies of a kindergarten supplemental reading intervention to determine whether linking performance on formative assessments to curriculum progression improved kindergarten reading outcomes over standard implementation. We were interested in whether specific progression adjustments would enhance the effects of supplemental reading intervention. Growth-mixture modeling using data from kindergarteners (n = 136) whose intervention progression (e.g., repeating lessons, skipping lessons) was adjusted every 4 weeks based on mastery data identified four latent classes characterized by unique profiles of curriculum progression adjustments. Multilevel analyses comparing the performance of students in the four classes with that of propensity-matched groups whose intervention was not adjusted (n = 101) indicated positive effects of curriculum progression adjustments for (a) students whose formative assessment performance exceeded 90% and who received early and sustained lesson acceleration and (b) students who initially performed below 70% on assessments and who repeated early lessons and progressed to conventional implementation. Effects of curriculum adjustments for the two smallest groups were less clear.


Exceptional Children | 2018

Skill Moderators of the Effects of a Reading Comprehension Intervention

Nathan H. Clemens; Eric L. Oslund; Oi-man Kwok; Melissa Fogarty; Deborah C. Simmons; John L. Davis

This study utilized secondary analyses of a randomized controlled trial and investigated the extent to which pretest word identification efficiency, reading fluency, and vocabulary knowledge moderated the effects of an intervention on reading comprehension outcomes for struggling readers in sixth through eighth grades. Given that the experimental intervention included components that targeted word reading, reading fluency, and vocabulary, we hypothesized that students with lower pretest performance in those skill domains would benefit more from the intervention compared to students with relatively stronger pretest performance or students who received school-implemented (business-as-usual) intervention. Results indicated that pretest word identification efficiency and vocabulary did not moderate the effects of the intervention; however, moderation effects were observed for pretest oral reading fluency, such that students with lower pretest fluency made greater reading comprehension gains in the experimental intervention than students with higher pretest fluency or students in the comparison condition. Reasons for the moderation effect are discussed. Findings underscore the value of moderation analyses when evaluating multicomponent interventions.


Journal of Learning Disabilities | 2017

Predictive Validity of Curriculum-Embedded Measures on Outcomes of Kindergarteners Identified as At Risk for Reading Difficulty

Eric L. Oslund; Shanna Hagan-Burke; Deborah C. Simmons; Nathan H. Clemens; Leslie E. Simmons; Aaron B. Taylor; Oi-man Kwok; Michael D. Coyne

This study examined the predictive validity of formative assessments embedded in a Tier 2 intervention curriculum for kindergarten students identified as at risk for reading difficulty. We examined when (i.e., months during the school year) measures could predict reading outcomes gathered at the end of kindergarten and whether the predictive validity of measures changed across the kindergarten year. Participants consisted of 137 kindergarten students whose reading development was assessed four times from October to February. Measures aligned with content taught in the curriculum and assessed a range of phonologic, alphabetic, and word-reading skills. Results from structural equation modeling indicate that 36.3% to 65.2% of the variance was explained on the latent decoding outcome and 62.0% to 86.8% on the latent phonological outcome across the four time points. Furthermore, the predictive validity of specific skills increased over the kindergarten year, with more complicated tasks (e.g., word segmentation) becoming more predictive at subsequent measurement occasions. Results suggest that curriculum-embedded measures may be viable tools for assessing and predicting reading performance.


Educational Psychology Review | 2014

Examining the Effectiveness of a Multicomponent Reading Comprehension Intervention in Middle Schools: A Focus on Treatment Fidelity

Melissa Fogarty; Eric L. Oslund; Deborah C. Simmons; John L. Davis; Leslie E. Simmons; Leah Anderson; Nathan H. Clemens; Greg Roberts

Collaboration


Top co-authors of Eric L. Oslund.


Mary E. Little

University of Central Florida
