Bobette Bouton
Vanderbilt University
Publications
Featured research published by Bobette Bouton.
Teaching Exceptional Children | 2007
Douglas Fuchs; Lynn S. Fuchs; Donald L. Compton; Bobette Bouton; Erin Caffrey; Lisa Hill
“Instruction” Is the Test
Many teachers, administrators, and policy makers are currently discussing Responsiveness to Intervention (RTI) as a method of providing both early intervention to at-risk learners and more valid identification of children with learning disabilities (LD). RTI is viewed by many stakeholders as more valid than traditional methods of identification because it guarantees, in principle, that all children participate in scientifically validated curriculum and instruction. Hence, practitioners working within an RTI framework are expected to reduce the likelihood that untaught or poorly taught nondisabled students are misidentified as disabled. When RTI is implemented well, with classroom teachers using scientifically validated curricula and instruction, all children, or at least most children, should get the education they need without having to “wait to fail.” RTI as a method of disability identification has been legitimized in the recently reauthorized Individuals with Disabilities Education Improvement Act of 2004 and in the accompanying regulations released in August 2006. The regulations prohibit states from requiring use of the IQ-achievement discrepancy, and they encourage implementation of RTI (cf. Yell, Shriner, & Katsiyannis, 2006). The essence of RTI as a method of disability identification is that instruction becomes the “test,” as much a test as the Wide Range Achievement Test or the Stanford-Binet. In other words, instruction is the test stimulus, and the student’s level or rate of performance is her response. Just as commercial publishers, professional groups such as the American Psychological Association, examiners, and others worry about the validity of test instruments, practitioners using RTI need to be concerned about the validity of their instruction. Choosing scientifically validated curricula and academic programs that address at-risk students’ needs, and implementing them with fidelity, is necessary to ensure the validity of the RTI process. If practitioners choose invalid or unvalidated instructional programs, or implement validated programs without fidelity, a child’s nonresponsiveness can become impossible to interpret.
Journal of Learning Disabilities | 2012
Donald L. Compton; Jennifer K. Gilbert; Joseph R. Jenkins; Douglas Fuchs; Lynn S. Fuchs; Eunsoo Cho; Laura A. Barquero; Bobette Bouton
Response-to-intervention (RTI) approaches to disability identification are meant to put an end to the so-called wait-to-fail requirement associated with IQ discrepancy. However, in an unfortunate irony, there is a group of children who wait to fail in RTI frameworks. That is, they must fail both general classroom instruction (Tier 1) and small-group intervention (Tier 2) before becoming eligible for the most intensive intervention (Tier 3). The purpose of this article was to determine how to accurately predict which at-risk children will be unresponsive to Tiers 1 and 2, thereby allowing unresponsive children to move directly from Tier 1 to Tier 3. As part of an efficacy study of a multitier RTI approach to prevention and identification of reading disabilities (RD), 129 first-grade children who were unresponsive to classroom reading instruction were randomly assigned to 14 weeks of small-group, Tier 2 intervention. Nonresponders to this instruction (n = 33) were identified using local norms on first-grade word identification fluency growth, linked to a distal outcome of RD at the end of second grade. Logistic regression models were used to predict membership in responder and nonresponder groups. Predictors were entered as blocks of data, from least to most difficult to obtain: universal screening data, Tier 1 response data, norm-referenced tests, and Tier 2 response data. Tier 2 response data were not necessary to classify students as responders and nonresponders to Tier 2 instruction, suggesting that some children can be accurately identified as eligible for Tier 3 intervention using only Tier 1 data, thereby avoiding prolonged periods of failed instruction.
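To make the blockwise prediction concrete, here is a minimal sketch, in Python with statsmodels, of logistic regression models fit with cumulative predictor blocks entered from least to most difficult to obtain. This is not the authors' code; the data file and every column name are hypothetical placeholders.

```python
# Illustrative sketch only: blockwise logistic regression predicting
# nonresponder status. All column names below are hypothetical.
import pandas as pd
import statsmodels.api as sm
from sklearn.metrics import roc_auc_score

df = pd.read_csv("tier2_sample.csv")  # hypothetical data file

blocks = [
    ["screen_wif", "screen_lnf"],    # block 1: universal screening data
    ["tier1_slope", "tier1_level"],  # block 2: Tier 1 response data
    ["wid_ss", "watt_ss"],           # block 3: norm-referenced tests
    ["tier2_slope", "tier2_level"],  # block 4: Tier 2 response data
]

predictors = []
for i, block in enumerate(blocks, start=1):
    predictors += block
    X = sm.add_constant(df[predictors])
    model = sm.Logit(df["nonresponder"], X).fit(disp=0)
    auc = roc_auc_score(df["nonresponder"], model.predict(X))
    print(f"blocks 1-{i}: pseudo-R2 = {model.prsquared:.3f}, AUC = {auc:.3f}")
```

Comparing classification accuracy as each block enters shows whether the later, costlier blocks (here, the Tier 2 response data) add anything beyond the Tier 1 information.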
Journal of Learning Disabilities | 2011
Douglas Fuchs; Donald L. Compton; Lynn S. Fuchs; Bobette Bouton; Erin Caffrey
The purpose of this study was to examine the construct and predictive validity of a dynamic assessment (DA) of decoding learning. Students (N = 318) were assessed in the fall of first grade on an array of instruments that were given in hopes of forecasting responsiveness to reading instruction. These instruments included DA as well as one-point-in-time (static) measures of early alphabetic knowledge, rapid automatized naming (RAN), phonemic awareness, oral vocabulary, listening comprehension, attentive behavior, and hyperactive or impulsive behavior. An IQ test was administered in spring of second grade. Measures of reading outcomes administered in spring of first grade were accuracy and fluency of word identification skills and reading comprehension. Factor analysis using principal axis factor extraction indicated that DA loaded on a first factor that also included language abilities and IQ, which the authors refer to as the “language, IQ, and DA” factor. It was relatively distinct from two additional factors: (a) “speeded alphabetic knowledge and RAN” and (b) “task-oriented behavior.” A three-level (children nested within classroom; classrooms nested within school) random intercept model with fixed effects predictors suggested that DA differed from word attack in predicting future reading skill and that DA was a significant predictor of responsiveness to instruction, contributing unique variance to end-of-first-grade word identification and reading comprehension beyond that explained by other well-established predictors of reading development.
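As a rough illustration of the multilevel model described above, the following sketch (Python/statsmodels; not the authors' analysis code) fits a three-level random intercept model, children nested within classrooms nested within schools, with DA among the fixed-effects predictors. All file and variable names are assumptions.

```python
# Illustrative sketch only: three-level random intercept model via
# statsmodels variance components. Variable names are hypothetical;
# classroom IDs are assumed unique across schools.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("grade1_sample.csv")  # hypothetical data file

model = smf.mixedlm(
    "word_id_spring ~ da + word_attack + phon_aware + ran + vocab",
    data=df,
    groups="school",                               # level-3 random intercept
    vc_formula={"classroom": "0 + C(classroom)"},  # level-2 random intercept
).fit(reml=True)
print(model.summary())
```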
Journal of Science Teacher Education | 2013
Cory A. Buxton; Martha Allexsaht-Snider; Regina Suriel; Shakhnoza Kayumova; Youn-Jeng Choi; Bobette Bouton; Melissa Baker
Grounded in Hallidayan perspectives on academic language, we report on our development of an educative science assessment as one component of the language-rich inquiry science for English-language learners teacher professional learning project for middle school science teachers. The project emphasizes the role of content-area writing to support teachers in diagnosing their students’ emergent understandings of science inquiry practices, science content knowledge, and the academic language of science, with a particular focus on the needs of English-language learners. In our current school policy context, writing for meaningful purposes has received decreased attention as teachers struggle to cover large numbers of discrete content standards. Additionally, high-stakes assessments presented in multiple-choice format have become the definitive measure of student science learning, further de-emphasizing the value of academic writing for developing and expressing understanding. To counter these trends, we examine the implementation of educative assessment materials—writing-rich assessments designed to support teachers’ instructional decision making. We report on the qualities of our educative assessment that supported teachers in diagnosing their students’ emergent understandings, and how teacher–researcher collaborative scoring sessions and interpretation of assessment results led to changes in teachers’ instructional decision making to better support students in expressing their scientific understandings. We conclude with implications of this work for theory, research, and practice.
Learning Disability Quarterly | 2014
Jessica R. Toste; Donald L. Compton; Douglas Fuchs; Lynn S. Fuchs; Jennifer K. Gilbert; Eunsoo Cho; Laura A. Barquero; Bobette Bouton
The purpose of the current study was to examine academic and cognitive profiles of first graders who responded adequately and inadequately to intensive small-group reading intervention (Tier 2), as well as assess how these profiles differ based on the criteria used for classification of unresponsiveness. Nonresponders were identified using two different methods: (a) reading composite with weighted standardized scores for untimed word identification and word attack, timed sight word reading and decoding, and reading comprehension at the end of first grade (n = 23; 18.4%), and (b) local norms on first grade word identification fluency (WIF; n = 31; 24.8%). Repeated measures ANOVAs were used to assess the difference between responders and nonresponders on four separate profiles (i.e., academic and cognitive profiles, with groups identified using reading composite and WIF criteria for unresponsiveness). Significant level effects were found using the first-grade reading composite and the WIF criteria, indicating that the groups differ from one another across domains. Interestingly, there were only significant shape effects found when using the WIF criteria, suggesting relative strengths and weaknesses distinguish the groups. These findings suggest potentially important considerations related to identification and placement of students in appropriately intensive and targeted interventions.
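A hedged sketch of how such a profile analysis might be run, here with the pingouin library (which the study does not reference) and hypothetical column names: in a mixed repeated measures ANOVA, the between-subjects main effect of group tests the level effect, while the group-by-measure interaction tests the shape effect.

```python
# Illustrative sketch only: mixed repeated measures ANOVA over profile
# variables. Data layout and column names are hypothetical.
import pandas as pd
import pingouin as pg

# Long format: one row per child per profile measure, with scores z-scored
# so a group x measure interaction reflects differences in profile shape.
df = pd.read_csv("profiles_long.csv")  # columns: id, group, measure, zscore

aov = pg.mixed_anova(data=df, dv="zscore", within="measure",
                     subject="id", between="group")
print(aov)  # 'group' row = level effect; 'Interaction' row = shape effect
```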
Journal of Learning Disabilities | 2011
Amy M. Elleman; Donald L. Compton; Douglas Fuchs; Lynn S. Fuchs; Bobette Bouton
In this study, the authors explore a newly constructed dynamic assessment (DA) intended to tap inference-making skills that they hypothesize will be predictive of future comprehension performance. The authors administered the test to 100 second-grade children using a dynamic format to consider the concurrent validity of the measure. The dynamic portion of the assessment comprised teaching children to be “reading detectives” by using textual clues to solve what was happening in the story. During the DA children listened to short passages and answered three inferential questions (i.e., one setting, two causal). If children were unable to answer a question, they were reminded what a reading detective would do and given a set of increasingly concrete prompts and clues to orient them to the relevant portion of text until they could answer the question correctly. Results showed that the DA correlated significantly with a standardized measure of reading comprehension and explained a small but significant amount of unique variance in reading comprehension above and beyond vocabulary and word identification skills. In addition, results suggest that DA may be better than the standardized measure of reading comprehension at identifying intraindividual differences in young children’s reading abilities.
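The unique-variance claim corresponds to a standard hierarchical regression. A minimal sketch (Python/statsmodels; not the study's code, with hypothetical variable names) compares nested models with and without the DA score:

```python
# Illustrative sketch only: does DA explain variance in reading
# comprehension beyond vocabulary and word identification?
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("grade2_sample.csv")  # hypothetical data file

base = smf.ols("reading_comp ~ vocab + word_id", data=df).fit()
full = smf.ols("reading_comp ~ vocab + word_id + da_score", data=df).fit()

delta_r2 = full.rsquared - base.rsquared
print(f"Delta R^2 for DA: {delta_r2:.3f}")
print(full.compare_f_test(base))  # F-test of the R^2 change: (F, p, df_diff)
```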
Journal of Learning Disabilities | 2014
Eunsoo Cho; Donald L. Compton; Douglas Fuchs; Lynn S. Fuchs; Bobette Bouton
The purpose of this study was to examine the role of a dynamic assessment (DA) of decoding in predicting responsiveness to Tier 2 small-group tutoring in a response-to-intervention model. First-grade students (n = 134) who did not show adequate progress in Tier 1, based on 6 weeks of progress monitoring, received Tier 2 small-group tutoring in reading for 14 weeks. Student responsiveness to Tier 2 was assessed weekly with word identification fluency (WIF). A series of conditional individual growth curve analyses was completed that modeled the correlates of WIF growth (final level of performance and growth). The purpose of these analyses was to examine the predictive validity of DA in the presence of three sets of variables: static decoding measures, Tier 1 responsiveness indicators, and prereading variables (phonemic awareness, rapid letter naming, oral vocabulary, and IQ). DA was a significant predictor of final level and growth, uniquely explaining 3% to 13% of the variance in Tier 2 responsiveness, depending on the competing predictors in the model and the WIF outcome (final level of performance or growth). Although the additional variance explained uniquely by DA was relatively small, the results indicate the potential of DA for identifying Tier 2 nonresponders.
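As a hedged illustration of a conditional individual growth curve analysis of weekly WIF scores (a Python/statsmodels mixed model; not the authors' code, with all names hypothetical): centering time at the final week makes the DA main effect a predictor of final level, while the DA-by-week interaction captures growth.

```python
# Illustrative sketch only: conditional growth curve model for weekly WIF
# scores, with DA predicting final level and growth. Names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("wif_weekly_long.csv")  # columns: id, week, wif, da_score

df["week_c"] = df["week"] - df["week"].max()  # center time at the final week

model = smf.mixedlm(
    "wif ~ week_c * da_score",  # da_score = final level; interaction = growth
    data=df,
    groups="id",
    re_formula="~week_c",       # random intercept and slope per child
).fit(reml=True)
print(model.summary())
```

Competing predictors (static decoding measures, Tier 1 indicators, prereading variables) would enter the fixed-effects formula alongside DA to test its unique contribution.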
Journal of Educational Psychology | 2010
Donald L. Compton; Douglas Fuchs; Lynn S. Fuchs; Bobette Bouton; Jennifer K. Gilbert; Laura A. Barquero; Eunsoo Cho; Robert Crouch
Reading Research Quarterly | 2013
Jennifer K. Gilbert; Donald L. Compton; Douglas Fuchs; Lynn S. Fuchs; Bobette Bouton; Laura A. Barquero; Eunsoo Cho
Learning and Individual Differences | 2010
Christopher J. Lemons; Alexandra P. F. Key; Douglas Fuchs; Paul J. Yoder; Lynn S. Fuchs; Donald L. Compton; Susan M. Williams; Bobette Bouton