
Publications


Featured research published by Jeremy Miciak.


School Psychology Quarterly | 2014

Patterns of cognitive strengths and weaknesses: Identification rates, agreement, and validity for learning disabilities identification.

Jeremy Miciak; Jack M. Fletcher; Karla K. Stuebing; Sharon Vaughn; Tammy D. Tolar

Few empirical investigations have evaluated learning disabilities (LD) identification methods based on a pattern of cognitive strengths and weaknesses (PSW). This study investigated the reliability and validity of two proposed PSW methods: the concordance/discordance method (C/DM) and the cross-battery assessment (XBA) method. Cognitive assessment data for 139 adolescents demonstrating inadequate response to intervention were used to empirically classify participants as meeting or not meeting PSW LD identification criteria under the two approaches, permitting an analysis of (a) LD identification rates, (b) agreement between methods, and (c) external validity. LD identification rates varied between the two methods depending on the cut point for low achievement, with low agreement for LD identification decisions. Comparisons of groups that met and did not meet LD identification criteria on external academic variables were largely null, raising questions of external validity. This study found low agreement and little evidence of validity for LD identification decisions based on PSW methods. An alternative may be to use multiple measures of academic achievement to guide intervention.


Learning Disability Quarterly | 2014

Why Intensive Interventions Matter: Longitudinal Studies of Adolescents with Reading Disabilities and Poor Reading Comprehension.

Michael Solis; Jeremy Miciak; Sharon Vaughn; Jack M. Fletcher

We describe findings from a series of longitudinal studies using a response to intervention framework implemented over 3 years with students in Grades 6 through 8 with reading disabilities and poor reading comprehension. Students were identified based on reading comprehension scores in Grade 5 (n = 1,083) and then randomized to treatment or comparison conditions. Beginning in sixth grade, students assigned to intervention were provided treatment for 1, 2, or 3 years based on their response to instruction in each preceding year. Screening procedures, progress monitoring tools, tiers of instruction, and findings from each year of the study are reported. Additional studies investigating reading and behavioral outcomes through multilevel growth modeling, and studies of the cognitive and neural correlates of inadequate response, are also reported.


School Psychology Quarterly | 2015

The Effect of Achievement Test Selection on Identification of Learning Disabilities within a Patterns of Strengths and Weaknesses Framework.

Jeremy Miciak; W. Pat Taylor; Carolyn A. Denton; Jack M. Fletcher

Few empirical investigations have evaluated learning disabilities (LD) identification methods based on a pattern of cognitive strengths and weaknesses (PSW). This study investigated the reliability of LD classification decisions of the concordance/discordance method (C/DM) across different psychoeducational assessment batteries. C/DM criteria were applied to assessment data from 177 second-grade students based on 2 psychoeducational assessment batteries. The achievement tests were different, but were highly correlated and measured the same latent construct. Resulting LD identifications were then evaluated for agreement across batteries on LD status and the academic domain of eligibility. The 2 batteries identified a similar number of participants as having LD (80 and 74). However, indices of agreement for classification decisions were low (κ = .29), especially for percent positive agreement (62%). The 2 batteries demonstrated agreement on the academic domain of eligibility for only 25 participants. Cognitive discrepancy frameworks for LD identification are inherently unstable because of imperfect reliability and validity at the observed level. Methods premised on identifying a PSW profile may never achieve high reliability because of these underlying psychometric factors. An alternative is to directly assess academic skills to identify students in need of intervention.
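The agreement indices reported above (κ = .29, percent positive agreement = 62%) derive from a 2 × 2 classification table. A minimal sketch of how those indices are computed; the cell counts below are invented for illustration, chosen only so the margins match the reported identification totals (80 and 74 of 177):

```python
def cohens_kappa(a, b, c, d):
    """Cohen's kappa for a 2x2 agreement table.

    a = both batteries identify LD, b/c = only one battery does,
    d = neither battery identifies LD.
    """
    n = a + b + c + d
    observed = (a + d) / n
    # Chance agreement from the marginal proportions of each battery.
    p_yes = ((a + b) / n) * ((a + c) / n)
    p_no = ((c + d) / n) * ((b + d) / n)
    expected = p_yes + p_no
    return (observed - expected) / (1 - expected)

def positive_agreement(a, b, c, d):
    """Proportion of positive (LD) decisions on which the batteries agree."""
    return 2 * a / (2 * a + b + c)

# Hypothetical counts for 177 students (margins of 80 and 74 identified):
kappa = cohens_kappa(48, 32, 26, 71)
ppa = positive_agreement(48, 32, 26, 71)
```

Note that overall agreement can look respectable even when positive agreement is low, because most students are negatives on both batteries; this is why the abstract reports the indices separately.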


Review of Educational Research | 2015

Are Child Cognitive Characteristics Strong Predictors of Responses to Intervention? A Meta-Analysis

Karla K. Stuebing; Amy E. Barth; Lisa H. Trahan; Radhika R. Reddy; Jeremy Miciak; Jack M. Fletcher

We conducted a meta-analysis of 28 studies comprising 39 samples to ask the question, “What is the magnitude of the association between various baseline child cognitive characteristics and response to reading intervention?” Studies were located via literature searches, contact with researchers in the field, and review of references from the National Reading Panel Report. Eligible participant populations included at-risk elementary school children enrolled in the third grade or below. Effects were analyzed using a shifting unit of analysis approach within three statistical models: cognitive characteristics predicting growth curve slope (Model 1, mean r = .31), gain (Model 2, mean r = .21), or postintervention reading controlling for preintervention reading (Model 3, mean r = .15). Effects were homogeneous within each model when effects were aggregated within study. The small size of the effects calls into question the practical significance and utility of using cognitive characteristics for prediction of response when baseline reading is available.


Journal of Research on Educational Effectiveness | 2016

Effects From a Randomized Control Trial Comparing Researcher and School-Implemented Treatments With Fourth Graders With Significant Reading Difficulties

Sharon Vaughn; Michael Solis; Jeremy Miciak; W. Pat Taylor; Jack M. Fletcher

This study examined the effectiveness of a researcher-provided intervention with fourth graders with significant reading difficulties. The intervention emphasized multisyllable word reading, fluent reading of high-frequency words and phrases, vocabulary, and comprehension. To identify the participants, 1,695 fourth-grade students were screened using the Gates-MacGinitie Reading Test, and those whose standard scores were 85 or lower were included in the study (N = 483). Participants were randomly assigned (2:1) to receive either researcher-provided intervention (n = 323) or intervention provided by school personnel (business as usual [BAU]; n = 161). Findings revealed no statistically significant differences between students in the researcher-provided intervention and BAU groups. Using effect sizes as an indicator of impact, students in the researcher-implemented treatment generally outperformed students in the school-implemented treatment (BAU). Examining growth in standard scores, both groups made significant gains in reading outcomes with standard score growth from pretest to posttest of 3 standard score points on decoding, 5 on fluency, and 2 to 7 standard score points on reading comprehension measures.


Topics in Language Disorders | 2014

Agreement and coverage of indicators of response to intervention: A multimethod comparison and simulation

Jack M. Fletcher; Karla K. Stuebing; Amy E. Barth; Jeremy Miciak; David J. Francis; Carolyn A. Denton

Purpose: Agreement across methods for identifying students as inadequate responders or as learning disabled is often poor. We report (1) an empirical examination of final status (postintervention benchmarks) and dual-discrepancy growth methods based on growth during the intervention and final status for assessing response to intervention and (2) a statistical simulation of psychometric issues that may explain low agreement. Methods: After a Tier 2 intervention, final status benchmark criteria were used to identify 104 inadequate and 85 adequate responders to intervention, with comparisons of agreement and coverage for these methods and a dual-discrepancy method. Factors affecting agreement were investigated using computer simulation to manipulate reliability, the intercorrelation between measures, cutoff points, normative samples, and sample size. Results: Identification of inadequate responders based on individual measures showed that single measures tended not to identify many members of the pool of 104 inadequate responders. Poor to fair levels of agreement for identifying inadequate responders were apparent between pairs of measures. In the simulation, comparisons across 2 simulated measures generated indices of agreement (κ) that were generally low because of multiple psychometric issues inherent in any test. Conclusions: Expecting excellent agreement between 2 correlated tests with even small amounts of unreliability may not be realistic. Assessing outcomes based on multiple measures, such as level of curriculum-based measure performance and short norm-referenced assessments of fluency, may improve the reliability of diagnostic decisions.


School Psychology Review | 2014

Cognitive Attributes of Adequate and Inadequate Responders to Reading Intervention in Middle School

Jeremy Miciak; Karla K. Stuebing; Sharon Vaughn; Greg Roberts; Amy E. Barth; Jack M. Fletcher

No studies have investigated the cognitive attributes of middle school students who are adequate and inadequate responders to Tier 2 reading intervention. We compared students in Grades 6 and 7 representing groups of adequate responders (n = 77) and inadequate responders who fell below criteria in (a) comprehension (n = 54); (b) fluency (n = 45); and (c) decoding, fluency, and comprehension (DFC; n = 45). These students received measures of phonological awareness, listening comprehension, rapid naming, processing speed, verbal knowledge, and nonverbal reasoning. Multivariate comparisons showed a significant Group-by-Task interaction: the comprehension-impaired group demonstrated primary difficulties with verbal knowledge and listening comprehension, the DFC group with phonological awareness, and the fluency-impaired group with phonological awareness and rapid naming. A series of regression models investigating whether responder status explained unique variation in cognitive skills yielded largely null results consistent with a continuum of severity associated with level of reading impairment, with no evidence for qualitative differences in the cognitive attributes of adequate and inadequate responders.


Exceptional Children | 2016

Using Content Acquisition Podcasts to Improve Teacher Candidate Knowledge of Curriculum-Based Measurement

Michael J. Kennedy; Dana Wagner; Joanna Stegall; Erica S. Lembke; Jeremy Miciak; Kat D. Alves; Tiara S. Brown; Melissa K. Driver; Shanna Eisner Hirsch

Given the significant literature supporting the use of curriculum-based measurement (CBM) for data-based decision making, it is critical that teacher candidates learn about it prior to student teaching and entry into the field as full-time teachers. The authors of this study used a content acquisition podcast (CAP), a multimedia-based instructional tool, to deliver information regarding CBM to teacher candidates. A second set of students received a practitioner-friendly text containing the same content as the CAP. Participants from three universities (N = 270) were randomly assigned to condition and completed pretest, posttest, and maintenance probes of CBM knowledge and ability to apply skill. In addition, participants completed a measure of motivation during their instruction. Results showed that participants who learned using the CAP scored significantly higher on the knowledge and application measures and reported being more motivated during instruction than peers in the text-only condition. The authors discuss implications for teacher education instruction and future research.


Archives of Clinical Neuropsychology | 2017

Comprehensive Cognitive Assessments are not Necessary for the Identification and Treatment of Learning Disabilities.

Jack M. Fletcher; Jeremy Miciak

There is considerable controversy about the necessity of cognitive assessment as part of an evaluation for learning and attention problems. The controversy should be adjudicated through an evaluation of empirical research. We review five sources of evidence commonly provided as support for cognitive assessment as part of the learning disability (LD) identification process, highlighting significant gaps in empirical research and where existing evidence is insufficient to establish the reliability and validity of cognitive assessments used in this way. We conclude that current evidence does not justify routine cognitive assessment for LD identification. As an alternative, we offer an instructional conceptualization of LD: a hybrid model that directly informs intervention and is based on documenting low academic achievement, inadequate response to intensive interventions, and a consideration of exclusionary factors.


Psychological Assessment | 2017

Cognitive discrepancy models for specific learning disabilities identification: Simulations of psychometric limitations.

W. Pat Taylor; Jeremy Miciak; Jack M. Fletcher; David J. Francis

Few studies have investigated specific learning disabilities (SLD) identification methods based on the identification of patterns of processing strengths and weaknesses (PSW). We investigated the reliability of SLD identification decisions emanating from different achievement test batteries for 1 method to operationalize the PSW approach: the concordance/discordance model (C/DM; Hale & Fiorello, 2004). Two studies examined the level of agreement for SLD identification decisions between 2 different simulated, highly correlated achievement test batteries. Study 1 simulated achievement and cognitive data across a wide range of potential latent correlations between an achievement deficit, a cognitive strength and a cognitive weakness. Latent correlations permitted simulation of case-level data at specified reliabilities for cognitive abilities and 2 achievement observations. C/DM criteria were applied and resulting SLD classifications from the 2 achievement test batteries were compared for agreement. Overall agreement and negative agreement were high, but positive agreement was low (0.33–0.59) across all conditions. Study 2 isolated the effects of reduced test reliability on agreement for SLD identification decisions resulting from different test batteries. Reductions in reliability of the 2 achievement tests resulted in average decreases in positive agreement of 0.13. Conversely, reductions in reliability of cognitive measures resulted in small average increases in positive agreement (0.0–0.06). Findings from both studies are consistent with prior research demonstrating the inherent instability of classifications based on C/DM criteria. Within complex ipsative SLD identification models like the C/DM, small variations in test selection can have deleterious effects on classification reliability.
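The mechanism described above, in which modest losses of test reliability erode positive agreement between two highly correlated batteries, can be illustrated with a small Monte Carlo sketch. This is not the paper's simulation code; the parameter values and the single fixed cut point are assumptions for illustration:

```python
import random

def simulate_agreement(reliability, n=100_000, cut=-1.0, seed=1):
    """Positive agreement between two parallel tests of one latent trait.

    Each observed score = sqrt(reliability) * true + sqrt(1 - reliability) * error,
    so the two tests correlate at `reliability` while measuring the same construct.
    A student is "identified" on a test when its score falls below `cut`.
    """
    rng = random.Random(seed)
    loading = reliability ** 0.5
    noise = (1 - reliability) ** 0.5
    both = either = 0
    for _ in range(n):
        true = rng.gauss(0, 1)
        x1 = loading * true + noise * rng.gauss(0, 1)
        x2 = loading * true + noise * rng.gauss(0, 1)
        p1, p2 = x1 < cut, x2 < cut
        both += int(p1 and p2)
        either += int(p1) + int(p2)
    return 2 * both / either  # positive agreement

# Agreement degrades as reliability drops, even though both tests
# target the same latent construct:
agreement = {rel: simulate_agreement(rel) for rel in (0.95, 0.80, 0.65)}
```

Even under these idealized conditions (identical constructs, identical cut points, no exclusionary criteria), positive agreement falls well short of 1.0, consistent with the instability of cognitive discrepancy classifications reported in both studies.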

Collaboration


Jeremy Miciak's top co-authors:

Sharon Vaughn (University of Texas at Austin)
Greg Roberts (University of Texas at Austin)
Carolyn A. Denton (University of Texas Health Science Center at Houston)