
Publications


Featured research published by Amanda M. Marcotte.


Assessment for Effective Intervention | 2011

Innovations and Future Directions for Early Numeracy Curriculum-Based Measurement: Commentary on the Special Series.

Scott A. Methe; Robin L. Hojnoski; Ben Clarke; Brittany B. Owens; Patricia K. Lilley; Bethany C. Politylo; Kara M. White; Amanda M. Marcotte

The purpose of this extended commentary article is to frame the set of studies in the first of two issues and recommend areas of inquiry for future research. This special series issue features studies examining the technical qualities of formative assessment procedures that were developed to inform intervention. This article intends to emphasize issues in the current set of studies that do not appear central to early numeracy curriculum-based measurement (EN-CBM) research. To the extent possible, we expect that this two-volume series will result in scientific and practical advances. Despite this lofty intention, this series can neither represent all issues important to stakeholders nor characterize the full body of applied and basic research in early numeracy assessment. As such, we focus on a set of theoretical frameworks to guide the current and future development of EN-CBM.


School Psychology Quarterly | 2016

Concurrent validity and classification accuracy of curriculum-based measurement for written expression.

William M. Furey; Amanda M. Marcotte; John M. Hintze; Caroline M. Shackett

The study presents a critical analysis of written expression curriculum-based measurement (WE-CBM) metrics derived from 3- and 10-min test lengths. Criterion validity and classification accuracy were examined for Total Words Written (TWW), Correct Writing Sequences (CWS), Percent Correct Writing Sequences (%CWS), and Correct Minus Incorrect Writing Sequences (CMIWS). Fourth-grade students (n = 109) from 6 schools participated in the study. To assess the criterion validity of each metric, total scores from the writing tasks were correlated with the state achievement test's composition subtest. Each index investigated was moderately correlated with the subtest. Correlations increased with the longer sampling period; however, the increases were not statistically significant. The accuracy of each index at distinguishing between proficient and nonproficient writers on the state assessment was analyzed using discriminant function analysis and receiver operating characteristic (ROC) curves. CWS and CMIWS, indices encompassing both production and accuracy, were the most accurate predictors of proficiency. Classification accuracy improved with increased sampling time. Using cut scores that held sensitivity above .90, specificity for each metric increased with longer probes. Sensitivity and specificity increased for all metrics with longer probes when a 25th-percentile cut was used. Visual analyses of the ROC curves reveal where the classification improvements were made. The 10-min sample for CWS more accurately identified at-risk students in the center of the distribution. Without measurement guiding decisions, writers in the middle of the distribution are more difficult to classify than those who clearly write well or clearly struggle. The findings have implications for screening using WE-CBM.
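
For readers unfamiliar with the ROC approach the study uses, the sketch below shows how a cut score can be chosen to hold sensitivity at or above .90 and how the resulting specificity is read off the curve. It is a minimal illustration with simulated scores, not the study's data; the score distributions, base rate, and seed are assumptions.

    import numpy as np
    from sklearn.metrics import roc_auc_score, roc_curve

    rng = np.random.default_rng(0)

    # Hypothetical data: CWS scores for 109 students and proficiency on the
    # state assessment. Both distributions are invented for illustration.
    n = 109
    proficient = rng.random(n) < 0.6
    cws = np.where(proficient, rng.normal(45, 10, n), rng.normal(30, 10, n))

    # Screening flags students at risk (not proficient), so treat "at risk"
    # as the positive class; negate CWS so lower scores mean higher risk.
    at_risk = ~proficient
    auc = roc_auc_score(at_risk, -cws)
    fpr, tpr, thresholds = roc_curve(at_risk, -cws)

    # Lowest CWS cut that still reaches sensitivity .90 (flags the fewest
    # students), and the specificity that choice yields.
    i = np.argmax(tpr >= 0.90)
    cut_score = -thresholds[i]  # back on the CWS scale
    print(f"AUC = {auc:.2f}, cut = {cut_score:.1f}, "
          f"sensitivity = {tpr[i]:.2f}, specificity = {1 - fpr[i]:.2f}")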


Journal of Psychoeducational Assessment | 2018

Decision-Making Accuracy of CBM Progress-Monitoring Data.

John M. Hintze; Craig S. Wells; Amanda M. Marcotte; Benjamin G. Solomon

This study examined the diagnostic accuracy of decision making as typically conducted with curriculum-based measurement (CBM) approaches to progress monitoring. Using previously published estimates of the standard error of estimate associated with CBM, 20,000 progress-monitoring data sets were simulated to model student reading growth of two words per week across 15 consecutive weeks. Results indicated that an unacceptably high proportion of cases were falsely identified as nonresponsive to intervention when a common 4-point decision rule was applied under typical levels of probe reliability. As reliability and the stringency of the decision rule increased, such errors decreased. The findings are particularly relevant to those who use a multi-tiered response-to-intervention model for evaluating formative changes associated with instructional intervention and for evaluating responsiveness across multiple tiers of intervention.
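
The core of such a simulation can be sketched in a few lines. The version below is a simplification under assumed values: every simulated student truly grows two words per week, observed scores add normally distributed error (the error SD of 8 is an assumption, not the study's published estimates), and the 4-point rule flags a student when four consecutive observations fall below the aim line. Since every simulated student truly meets the goal, each flag is a false identification.

    import numpy as np

    rng = np.random.default_rng(1)
    n_sims, n_weeks, slope = 20_000, 15, 2.0
    sem = 8.0  # assumed measurement error SD; the study used published CBM values

    weeks = np.arange(n_weeks)
    aim = 40 + slope * weeks          # aim line: 2 words/week from an assumed baseline of 40
    true_scores = aim                 # every simulated student truly grows on target
    observed = true_scores + rng.normal(0, sem, size=(n_sims, n_weeks))

    # 4-point rule: nonresponsive if any 4 consecutive points fall below the aim line.
    below = observed < aim
    flagged = (below[:, :-3] & below[:, 1:-2] & below[:, 2:-1] & below[:, 3:]).any(axis=1)

    # All students truly responded, so every flag is a false identification.
    print(f"Falsely identified as nonresponsive: {flagged.mean():.1%}")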


Reading & Writing Quarterly | 2017

The Effects of Supplemental Sentence-Level Instruction for Fourth-Grade Students Identified as Struggling Writers.

William M. Furey; Amanda M. Marcotte; Craig S. Wells; John M. Hintze

The Language and Writing strands of the Common Core State Standards place a heavy emphasis on sentence-level conventions, including syntax/grammar and mechanics. Interventions targeting these foundational skills are necessary to support struggling writers, as poorly developed sentence construction skills inhibit more complex writing tasks. This study examined the effects of a supplemental intervention on the writing skills of 4th-grade students identified as struggling writers. The intervention used explicit instruction and the self-regulated strategy development framework to teach students a sentence construction strategy along with self-regulation procedures. We used a regression discontinuity design to test whether students in the intervention group outperformed their predicted scores on assessments of writing conventions and story quality. Results indicated that the intervention was successful at improving struggling writers' ability to use accepted orthographic and grammatical conventions during composition. The intervention was not effective at improving the broader domain of story quality.
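
As a sketch of how a regression discontinuity estimate like this is typically computed: treatment is assigned by a cutoff on a screening score, and the outcome is regressed on a treatment indicator, the centered assignment score, and their interaction; the indicator's coefficient estimates the effect at the cutoff. The variable names, cutoff, and effect size below are invented for illustration and are not the study's values.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(2)
    n = 200
    screen = rng.normal(50, 10, n)            # hypothetical screening score
    cutoff = 45.0
    treated = (screen < cutoff).astype(int)   # struggling writers receive the intervention

    # Hypothetical conventions posttest with a built-in 5-point intervention effect.
    outcome = 20 + 0.5 * screen + 5.0 * treated + rng.normal(0, 4, n)

    df = pd.DataFrame({"outcome": outcome, "treated": treated,
                       "centered": screen - cutoff})
    # The interaction lets the slope differ on each side of the cutoff;
    # the coefficient on `treated` is the estimated effect at the cutoff itself.
    fit = smf.ols("outcome ~ treated * centered", data=df).fit()
    print(fit.params["treated"], fit.conf_int().loc["treated"])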


Assessment for Effective Intervention | 2016

Examining the Classification Accuracy of a Vocabulary Screening Measure With Preschool Children

Amanda M. Marcotte; Nathan H. Clemens; Christopher Parker; Sara A. Whitcomb

This study investigated the classification accuracy of the Dynamic Indicators of Vocabulary Skills (DIVS) as a preschool vocabulary screening measure. With a sample of 240 preschoolers, fall and winter DIVS scores were used to predict year-end vocabulary risk using the 25th percentile on the Peabody Picture Vocabulary Test–Third Edition (PPVT-III) to denote risk status. Results indicated that DIVS Picture Naming Fluency (PNF) and Reverse Definition Fluency (RDF) demonstrated very good accuracy in classifying students according to year-end vocabulary risk status. The DIVS measures also demonstrated stronger accuracy than demographic characteristics known to be indicators of vocabulary difficulties (socioeconomic status, English learner [EL] status, and sex). Combining PNF and RDF did not result in sufficient improvement in accuracy to justify administering both measures as opposed to just one. Further examination of predictive probability values revealed the potential for DIVS measures to improve the precision of vocabulary risk identification over considering EL status alone. Overall, results supported the use of the DIVS as a brief and inexpensive tool for preschool vocabulary screening.
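
A minimal sketch of the kind of comparison the study describes: how well EL status alone versus EL status plus a screener score predicts year-end risk. The simulated data, the effect sizes, and the use of logistic regression with AUC as the accuracy summary are all assumptions for illustration; the study itself reports classification accuracy and predictive probability values rather than this exact analysis.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(3)
    n = 240
    el = (rng.random(n) < 0.3).astype(float)   # hypothetical EL status
    pnf = rng.normal(25, 6, n) - 4 * el        # Picture Naming Fluency, shifted for EL students
    # Year-end risk: below the 25th percentile on the outcome (standing in for PPVT-III).
    vocab = pnf + rng.normal(0, 6, n)
    risk = vocab < np.percentile(vocab, 25)

    for label, X in [("EL alone", el.reshape(-1, 1)),
                     ("EL + PNF", np.column_stack([el, pnf]))]:
        p = LogisticRegression().fit(X, risk).predict_proba(X)[:, 1]
        print(f"{label}: AUC = {roc_auc_score(risk, p):.2f}")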


International Journal of Testing | 2018

Investigating the Reliability of the Sentence Verification Technique

Amanda M. Marcotte; Francis Rick; Craig S. Wells

Reading comprehension plays an important role in achievement across all academic domains. The purpose of this study is to describe the sentence verification technique (SVT; Royer, Hastings, & Hook, 1979) as an alternative method of assessing reading comprehension that can be used with a variety of texts and across diverse populations and educational contexts. Additionally, this study makes a unique contribution to the extant literature on the SVT through an investigation of the precision of the instrument across proficiency levels. Data were gathered from a sample of 464 fourth-grade students from the Northeast region of the United States. Reliability was estimated for one-, two-, three-, and four-passage test forms. Two or three passages provided sufficient reliability. The conditional reliability analyses revealed that SVT test scores were reliable for readers of average to below-average proficiency but did not provide reliable information for very poor or very strong readers.
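
A form built from more passages behaves like a lengthened test, so projected reliabilities for one- to four-passage forms can be sketched with the Spearman-Brown prophecy formula. The single-passage reliability below is an assumed value for illustration, not the study's estimate, and this sketch does not reproduce the study's conditional reliability analyses.

    # Spearman-Brown prophecy: reliability of a test lengthened by a factor of k.
    def spearman_brown(r_single: float, k: int) -> float:
        return (k * r_single) / (1 + (k - 1) * r_single)

    r_single = 0.55  # assumed reliability of a one-passage SVT form
    for k in (1, 2, 3, 4):
        print(f"{k} passage(s): projected reliability = {spearman_brown(r_single, k):.2f}")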


Journal of School Psychology | 2009

Incremental and predictive utility of formative assessment methods of reading comprehension

Amanda M. Marcotte; John M. Hintze


Career Development Quarterly | 2015

Financial Planning Strategies of High School Seniors: Removing Barriers to Career Success

Timothy A. Poynton; Richard T. Lapan; Amanda M. Marcotte


Journal of Counseling and Development | 2017

College and Career Readiness Counseling Support Scales

Richard T. Lapan; Timothy A. Poynton; Amanda M. Marcotte; Joshua Marland; Chase M. Milam


Specialusis Ugdymas / Special Education | 2016

Validity of Two General Outcome Measures of Science and Social Studies Achievement

Paul Mooney; Renée E. Lastrapes; Amanda M. Marcotte; Amy Matthews

Collaboration


Dive into Amanda M. Marcotte's collaborations.

Top Co-Authors

John M. Hintze
University of Massachusetts Amherst

Craig S. Wells
University of Massachusetts Amherst

Paul Mooney
Louisiana State University

Renée E. Lastrapes
University of Houston–Clear Lake

William M. Furey
University of Massachusetts Amherst

Benjamin G. Solomon
University of Massachusetts Amherst