
Publications


Featured research published by Daniel Koretz.


Journal of Educational and Behavioral Statistics | 2004

Models for Value-Added Modeling of Teacher Effects

Daniel F. McCaffrey; J. R. Lockwood; Daniel Koretz; Thomas A. Louis; Laura S. Hamilton

The use of complex value-added models that attempt to isolate the contributions of teachers or schools to student development is increasing. Several variations on these models are being applied in the research literature, and policy makers have expressed interest in using these models for evaluating teachers and schools. In this article, we present a general multivariate, longitudinal mixed model that incorporates the complex grouping structures inherent in longitudinal student data linked to teachers. We summarize the principal existing modeling approaches, show how these approaches are special cases of the proposed model, and discuss possible extensions to model more complex data structures. We present simulation and analytical results that clarify the interplay between estimated teacher effects and repeated outcomes on students over time. We also explore the potential impact of model misspecifications, including missing student covariates and assumptions about the accumulation of teacher effects over time, on key inferences made from the models. We conclude that mixed models that account for student correlation over time are reasonably robust to such misspecifications when all the schools in the sample serve similar student populations. However, student characteristics are likely to confound estimated teacher effects when schools serve distinctly different populations.
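The abstract does not write out the "general multivariate, longitudinal mixed model," but a standard layered value-added specification from this literature, which a general model of this kind subsumes as a special case, can be sketched as follows. The symbols are illustrative labels, not the authors' notation:

```latex
% Score of student $i$ in year $t$; $j(i,s)$ indexes the teacher of student $i$ in year $s$.
% $\mu_t$: year mean; $x_{it}$: student covariates; $\theta_j$: random teacher effects;
% $\alpha_{ts}$: persistence weights governing how past teacher effects carry forward.
y_{it} = \mu_t + x_{it}^{\top}\beta
       + \sum_{s \le t} \alpha_{ts}\,\theta_{j(i,s)} + \varepsilon_{it},
\qquad \theta_j \sim N(0,\tau^2),
\qquad \operatorname{Cov}(\varepsilon_{it},\varepsilon_{it'}) \neq 0 .
```

Fixing all $\alpha_{ts}=1$ imposes complete persistence of teacher effects, while estimating the $\alpha_{ts}$ relaxes that assumption; this accumulation assumption and the residual correlation over time are among the misspecification issues the article examines.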


Journal of Human Resources | 2002

Limitations in the Use of Achievement Tests as Measures of Educators' Productivity

Daniel Koretz

Test-based accountability rests on the assumption that accountability for scores on tests will provide needed incentives for teachers to improve student performance. Evidence shows, however, that simple test-based accountability can generate perverse incentives and seriously inflated scores. This paper discusses the logic of achievement tests, issues that arise in using them as proxy indicators of educational quality, and the mechanism underlying the inflation of scores. It ends with suggestions, some speculative, for improving the incentives faced by teachers by modifying systems of student assessment and combining them with numerous other measures, many of which are more subjective than are test scores.


Assessment in Education: Principles, Policy & Practice | 1998

Large‐scale Portfolio Assessments in the US: evidence pertaining to the quality of measurement

Daniel Koretz

Portfolio assessment, that is, the evaluation of performance by means of a cumulative collection of student work, has figured prominently in recent US debate about education reform. Proponents hope not only to broaden measurement of performance, but also to use portfolio assessment to encourage improved instruction. Although portfolio assessment has sparked considerable attention and enthusiasm, it has been incorporated into only a few of the nearly ubiquitous large‐scale external assessment programmes in the US. This paper evaluates the quality of the performance data produced by several large‐scale portfolio efforts. Evaluations of reliability, which have focused primarily on the consistency of scoring, have yielded highly variable results. While high levels of consistency have been reached in some cases, scoring has been quite inconsistent in others, to the point of severely limiting the utility of scores. Information about other aspects of validity is more limited and generally discouraging. ...


Educational Assessment | 2004

Assessing Students With Disabilities: Issues and Evidence

Daniel Koretz; Karen Barton

Historically, many students with disabilities were excluded from large-scale assessments. Recent federal and state policy initiatives, including the most recent reauthorization of the Individuals With Disabilities Education Act, require that the large majority of students with disabilities be included in the statewide assessments used in accountability systems. Although most observers agree that educational outcomes for students with disabilities were inadequate before the new policies were implemented, the research undergirding the new policies is limited. The reforms have spurred a rapid increase in relevant research, but more and improved research is needed. This article reviews the status of research on issues that are central to the new reforms and recommends directions for future research.


Educational Evaluation and Policy Analysis | 2000

Assessment of Students With Disabilities in Kentucky: Inclusion, Student Performance, and Validity

Daniel Koretz; Laura S. Hamilton

Students with disabilities are increasingly being included in large-scale, high-stakes testing programs, despite a lack of evidence regarding the validity of scores from many tests for these students. This study examines Kentucky's efforts to include students with disabilities in its statewide assessment. We explore the level of inclusion achieved, the kinds of assessment accommodations offered, the performance of students with disabilities, and the relationships between performance and the use of accommodations on both multiple-choice and open-response formats. Results indicate that most students were included in the assessment, but that the scores obtained by some students may not be trustworthy due to inappropriate use of accommodations. We discuss the implications of these results for research and policy.


The Future of Children | 2009

How Do American Students Measure Up?: Making Sense of International Comparisons

Daniel Koretz

In response to frequent news media reports about how poorly American students fare compared with their peers abroad, Daniel Koretz takes a close look at what these comparisons say, and do not say, about the achievement of U.S. high school students. He stresses that the comparisons do not provide what many observers of education would like: unambiguous information about the effectiveness of American high schools compared with those in other nations.

Koretz begins by describing the two principal international student comparisons—the Trends in International Mathematics and Science Study (TIMSS) and the Program for International Student Assessment (PISA). Both assessments, he stresses, reflect the performance of students several years before they complete high school. PISA, which targets fifteen-year-old students, measures students' abilities to apply what they have learned in school to real-world problems. By contrast, TIMSS tests fourth and eighth graders. Unlike PISA, TIMSS follows the school curriculum closely. Because the findings of the two tests are sometimes inconsistent, Koretz stresses the importance of considering data from both sources.

He cautions against comparing U.S. students with an "international average," which varies widely from survey to survey depending on which countries participate, and recommends instead comparing them with students in other nations that are similar to the United States or that are particularly high-achieving. Many observers, says Koretz, speculate that the lackluster average performance of American students in international comparisons arises because many, especially minority and low-income U.S. students, attend low-performing schools. But both TIMSS and PISA, he says, show that the performance of American students on the exams is not much more variable than that of students in countries that are socially more homogeneous or that have more equitable educational systems.
Koretz emphasizes that the international comparisons provide valuable information and are a useful source of hypotheses about American secondary schooling to be tested by researchers. Studies designed to explain differences between U.S. students and those in very similar countries, he says, might provide especially useful suggestions for changes in policy and practice.


Measurement: Interdisciplinary Research & Perspective | 2010

Self-Monitoring Assessments for Educational Accountability Systems

Daniel Koretz; Anton Beguin

Test-based accountability is now the cornerstone of U.S. education policy, and it is becoming more important in many other nations as well. Educators sometimes respond to test-based accountability in ways that produce score inflation. In the past, score inflation has usually been evaluated by comparing trends in scores on a high-stakes test to trends on a lower-stakes audit test. However, separate audit tests are often unavailable, and their use has several important drawbacks, such as potential bias from motivational differences. As an alternative, we propose self-monitoring assessments (SMAs) that incorporate audit components into operational high-stakes assessments. This paper provides a framework for designing SMAs. It describes 5 specific SMA designs that could be incorporated into the non-equivalent groups anchor test linking approaches used by most large-scale assessments and discusses analytical issues that would arise in their use.


Measurement: Interdisciplinary Research & Perspective | 2015

Adapting Educational Measurement to the Demands of Test-Based Accountability.

Daniel Koretz

Accountability has become a primary function of large-scale testing in the United States. The pressure on educators to raise scores is vastly greater than it was several decades ago. Research has shown that high-stakes testing can generate behavioral responses that inflate scores, often severely. I argue that because of these responses, using tests for accountability necessitates major changes in the practices of educational measurement. The needed changes span the entire testing endeavor. This article addresses implications for design, linking, and validation. It offers suggestions about possible new approaches and calls for research evaluating them.


Science | 2009

Moving Past No Child Left Behind

Daniel Koretz

Weaknesses in the U.S. educational system are clear. U.S. students do not compare well with peers in many other nations in their mastery of mathematics and science (1). Inequities in educational resources and outcomes are glaring. Although policy responses to these problems should include holding educators accountable for student performance, No Child Left Behind (NCLB) is a poorly designed test-based accountability (TBA) system, with serious side effects and little evidence of benefit, that requires fundamental changes.


Educational Policy | 2013

Estimating the Impact of the Massachusetts English Immersion Law on Limited English Proficient Students’ Reading Achievement

Qian Guo; Daniel Koretz

The large number of limited English proficient (LEP) children in U.S. schools and the uncertainty about the impact of bilingual education versus English immersion on their achievement warrant rigorous investigation of the effects of “English immersion laws.” We estimated the impact of Question 2, the Massachusetts English immersion law, and explored whether programs provided to LEP students before and after Question 2 imparted different language and reading skills. The results suggested that Question 2 had no substantial effect on third-grade LEP students’ reading achievement; there was suggestive evidence that pre- and post-Question 2 programs might attach emphasis to different subskills.
