Publications


Featured research published by Laura S. Hamilton.


Journal of Educational and Behavioral Statistics | 2004

Models for Value-Added Modeling of Teacher Effects

Daniel F. McCaffrey; J. R. Lockwood; Daniel Koretz; Thomas A. Louis; Laura S. Hamilton

The use of complex value-added models that attempt to isolate the contributions of teachers or schools to student development is increasing. Several variations on these models are being applied in the research literature, and policy makers have expressed interest in using these models for evaluating teachers and schools. In this article, we present a general multivariate, longitudinal mixed model that incorporates the complex grouping structures inherent in longitudinal student data linked to teachers. We summarize the principal existing modeling approaches, show how these approaches are special cases of the proposed model, and discuss possible extensions to model more complex data structures. We present simulation and analytical results that clarify the interplay between estimated teacher effects and repeated outcomes on students over time. We also explore the potential impact of model misspecifications, including missing student covariates and assumptions about the accumulation of teacher effects over time, on key inferences made from the models. We conclude that mixed models that account for student correlation over time are reasonably robust to such misspecifications when all the schools in the sample serve similar student populations. However, student characteristics are likely to confound estimated teacher effects when schools serve distinctly different populations.
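The core idea behind value-added estimation can be illustrated with a toy simulation. This is a minimal sketch, not the general multivariate mixed model the article develops: all names, sample sizes, and distributions below are invented for illustration. It shows the confounding risk the abstract describes, since omitting the prior-achievement covariate would bias teacher estimates when teachers serve different student populations.

```python
import numpy as np

# Toy value-added sketch: students nested within teachers, with a
# prior-achievement covariate. All parameters here are hypothetical.
rng = np.random.default_rng(0)

n_teachers, n_students = 20, 50
true_effects = rng.normal(0.0, 0.5, n_teachers)         # latent teacher effects
teacher = np.repeat(np.arange(n_teachers), n_students)  # student -> teacher link
prior = rng.normal(0.0, 1.0, teacher.size)              # prior achievement
score = prior + true_effects[teacher] + rng.normal(0.0, 1.0, teacher.size)

# Adjust scores for the covariate by least squares, then average the
# residuals within each teacher to get a crude "value-added" estimate.
# Dropping `prior` from X would confound the estimates whenever prior
# achievement differs systematically across teachers.
X = np.column_stack([np.ones_like(prior), prior])
beta, *_ = np.linalg.lstsq(X, score, rcond=None)
resid = score - X @ beta
estimated = np.array([resid[teacher == t].mean() for t in range(n_teachers)])

# With 50 students per teacher, the estimates track the true effects closely.
correlation = np.corrcoef(true_effects, estimated)[0, 1]
```

A production analysis would instead fit a proper mixed model (e.g. with random teacher effects and correlated repeated measures), but the adjust-then-aggregate logic above is the intuition the more general models formalize.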


The RAND Corporation | 2007

Standards-Based Accountability under No Child Left Behind: Experiences of Teachers and Administrators in Three States. MG-589-NSF.

Laura S. Hamilton; Brian M. Stecher; Julie A. Marsh; Jennifer Sloan McCombs; Abby Robyn; Jennifer Lin Russell; Scott Naftel; Heather Barney



Educational Evaluation and Policy Analysis | 2003

Studying Large-Scale Reforms of Instructional Practice: An Example from Mathematics and Science

Laura S. Hamilton; Daniel F. McCaffrey; Brian M. Stecher; Stephen P. Klein; Abby Robyn; Delia Bugliari

A number of challenges are encountered when evaluating a large-scale, multisite educational reform aimed at changing classroom practice. The challenges include substantial variability in implementation with little information on actual practice, lack of common, appropriate outcome measures, and the need to synthesize evaluation results across multiple study sites. This article describes an approach to addressing these challenges in the context of a study of the relationships between student achievement and instructional practices in the National Science Foundation’s Systemic Initiatives (SI) program. We gathered data from eleven SI sites and investigated relationships at the site level and pooled across sites using a planned meta-analytic approach. We found small but consistent positive relationships between teachers’ reported use of standards-based instruction and student achievement. The article also describes the ways in which we addressed the challenges discussed, as well as a number of additional obstacles that need to be addressed to improve future evaluations of large-scale reforms.
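The planned meta-analytic pooling step mentioned above can be sketched as a standard inverse-variance (fixed-effect) combination of per-site estimates. The site effect sizes and standard errors below are made up for illustration; they are not the study's data.

```python
import numpy as np

# Hypothetical per-site slope estimates (relationship between reported
# standards-based instruction and achievement) and their standard errors.
site_effects = np.array([0.08, 0.12, 0.05, 0.10, 0.07])
site_se = np.array([0.04, 0.05, 0.03, 0.06, 0.04])

# Inverse-variance weighting: precise sites count more, and the pooled
# standard error is smaller than any single site's.
weights = 1.0 / site_se**2
pooled = np.sum(weights * site_effects) / np.sum(weights)
pooled_se = np.sqrt(1.0 / np.sum(weights))
```

With numbers like these, the pooled estimate is a small positive effect estimated more precisely than any individual site, which is the kind of "small but consistent positive relationship" pooling is designed to detect.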


Educational Evaluation and Policy Analysis | 2000

Assessment of Students With Disabilities in Kentucky: Inclusion, Student Performance, and Validity

Daniel Koretz; Laura S. Hamilton

Students with disabilities are increasingly being included in large-scale, high-stakes testing programs, despite a lack of evidence regarding the validity of scores from many tests for these students. This study examines Kentucky's efforts to include students with disabilities in its statewide assessment. We explore the level of inclusion achieved, the kinds of assessment accommodations offered, the performance of students with disabilities, and the relationships between performance and the use of accommodations on both multiple-choice and open-response formats. Results indicate that most students were included in the assessment, but that the scores obtained by some students may not be trustworthy due to inappropriate use of accommodations. We discuss the implications of these results for research and policy.


Educational Evaluation and Policy Analysis | 1998

Gender Differences on High School Science Achievement Tests: Do Format and Content Matter?

Laura S. Hamilton

Gender differences on the NELS:88 multiple-choice and constructed-response science tests were explored through a combination of statistical analyses and interviews. Performance gaps between males and females varied across formats (multiple-choice versus constructed-response) and across items within a format. Differences were largest for items that involved visual content and called on application of knowledge commonly acquired through extracurricular activities. Large-scale surveys such as NELS:88 are widely used by researchers to study the effects of various student and school characteristics on achievement. The results of this investigation reveal the value of studying the validity of the outcome measure and suggest that conclusions about group differences and about correlates of achievement depend heavily on specific features of the items that make up the test.


Educational Evaluation and Policy Analysis | 2012

Team Pay for Performance: Experimental Evidence From the Round Rock Pilot Project on Team Incentives

Matthew G. Springer; John F. Pane; Vi-Nhuan Le; Daniel F. McCaffrey; Susan Freeman Burns; Laura S. Hamilton; Brian M. Stecher

Education policymakers have shown increased interest in incentive programs for teachers based on the outcomes of their students. This article examines a program in which bonuses were awarded to teams of middle school teachers based on their collective contribution to student test score gains. The study employs a randomized controlled trial to examine effects of the bonus program over the course of an academic year, with the experiment repeated a second year, and finds no significant effects on the achievement of students or the attitudes and practices of teachers. The lack of effects of team-level pay for performance in this study is consistent with other recent experiments studying the short-term effects of bonus awards for individual performance or whole-school performance.


Educational Evaluation and Policy Analysis | 2006

Using Structured Classroom Vignettes to Measure Instructional Practices in Mathematics

Brian M. Stecher; Vi-Nhuan Le; Laura S. Hamilton; Gery W. Ryan; Abby Robyn; J. R. Lockwood

Large-scale educational studies frequently require accurate descriptions of classroom practices to judge implementation and impact. However, it can be difficult to obtain these descriptions in a timely, efficient manner. To address this problem, the authors developed a vignette-based measure of one aspect of mathematics instructional practice, reform-oriented instruction. Teachers read contextualized descriptions of teaching practices that varied in terms of reform-oriented instruction, and rated the degree to which the options corresponded to their own likely behaviors. Responses from 80 fourth-grade teachers yielded fairly consistent responses across two parallel vignettes and moderate correlations with other scales of reform-oriented instruction derived from classroom observations, surveys, and logs. The results suggested that the vignettes measure important aspects of reform-oriented instruction that are not captured by other measurement methods. Based on this work, it appears that vignettes can be a useful tool for research on instructional practice, but cognitive interviews with participating teachers provided insight into possible improvements to the items.


Archive | 2008

Chapter 2 Accountability and teaching practices: School-level actions and teacher responses

Laura S. Hamilton; Brian M. Stecher; Jennifer Lin Russell; Julie A. Marsh; Jeremy N. V. Miles

The design of the ISBA project was guided by an analysis of the SBA theory of action, its likely effect on educators' work across levels of the educational hierarchy, and prior research on the impact of SBA policies on teachers' work. We begin by placing our work in the context of theoretical accounts of school organizations and the occupational norms of teaching.


Education Inquiry | 2012

Standards-Based Accountability in the United States: Lessons Learned and Future Directions

Laura S. Hamilton; Brian M. Stecher; Kun Yuan

Standards-based accountability (SBA) has been a primary driver of education policy in the United States for several decades. Although definitions of SBA vary, it typically includes standards that indicate what students are expected to know and be able to do, measures of student attainment of the standards, targets for performance on those measures, and a set of consequences for schools or educators based on performance. Research on SBA indicates that these policies have led to some of the consequences their advocates had hoped to achieve, such as an emphasis on equity and alignment of curriculum within and across grade levels, but that they have also produced some less desirable outcomes. This article summarizes the research on SBA in three areas: quality of standards, ways in which SBA has shaped educators' practices, and effects on student achievement. The article identifies lessons learned from the implementation of SBA in the United States and provides guidance for developing SBA systems that could promote beneficial outcomes for students.


Journal of Educational Administration | 2013

Improving accountability through expanded measures of performance

Laura S. Hamilton; Heather L. Schwartz; Brian M. Stecher; Jennifer L. Steele

Purpose – The purpose of this paper is to examine how test‐based accountability has influenced school and district practices and explore how states and districts might consider creating expanded systems of measures to address the shortcomings of traditional accountability. It provides research‐based guidance for entities that are developing or adopting new measures of school performance.

Design/methodology/approach – The study relies on literature review, consultation with expert advisers, review of state and district documentation, and semi‐structured interviews with staff at state and local education agencies and research institutions.

Findings – The research shows mixed effects of test‐based accountability on student achievement and demonstrates that teachers and administrators change their practices in ways that respond to the incentives provided by the system. The review of state and district measurement systems shows widespread use of additional measures of constructs, such as school climate and colle...

Collaboration

Top co-author: Julie A. Marsh, University of Southern California.