Publication


Featured research published by Howard S. Bloom.


Journal of Research on Educational Effectiveness | 2008

Performance Trajectories and Performance Gaps as Achievement Effect-Size Benchmarks for Educational Interventions.

Howard S. Bloom; Carolyn J. Hill; Alison Rebeck Black; Mark W. Lipsey

Two complementary approaches to developing empirical benchmarks for achievement effect sizes in educational interventions are explored. The first approach characterizes the natural developmental progress in achievement made by students from one year to the next as effect sizes. Data for seven nationally standardized achievement tests show large annual gains in the early elementary grades followed by gradually declining gains in later grades. A given intervention effect will therefore look quite different when compared to the annual progress for different grade levels. The second approach explores achievement gaps for policy-relevant subgroups of students or schools. Data from national- and district-level achievement tests show that, when represented as effect sizes, student gaps are relatively small for gender and much larger for economic disadvantage and race/ethnicity. For schools, the differences between weak schools and average schools are surprisingly modest when expressed as student-level effect sizes. A given intervention effect viewed in terms of its potential for closing one of these performance gaps will therefore look very different depending on which gap is considered.
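
As a rough illustration of the benchmark arithmetic described above, an annual gain can be expressed as an effect size by dividing the grade-to-grade mean gain by the pooled standard deviation. A minimal sketch in Python; the function name and all score values are hypothetical, not figures from the paper:

import math

def annual_gain_effect_size(mean_next, mean_current, sd_next, sd_current):
    """Annual achievement gain as a standardized effect size: the
    grade-to-grade mean gain divided by the pooled standard deviation."""
    pooled_sd = math.sqrt((sd_current ** 2 + sd_next ** 2) / 2)
    return (mean_next - mean_current) / pooled_sd

# Hypothetical scale scores for two adjacent grades:
print(annual_gain_effect_size(mean_next=520.0, mean_current=480.0,
                              sd_next=42.0, sd_current=40.0))  # about 0.98 SD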


Educational Evaluation and Policy Analysis | 2007

Using Covariates to Improve Precision for Studies That Randomize Schools to Evaluate Educational Interventions

Howard S. Bloom; Lashawn Richburg-Hayes; Alison Rebeck Black

This article examines how controlling statistically for baseline covariates, especially pretests, improves the precision of studies that randomize schools to measure the impacts of educational interventions on student achievement. Empirical findings from five urban school districts indicate that (1) pretests can reduce the number of randomized schools needed for a given level of precision to about half of what would be needed otherwise for elementary schools, one fifth for middle schools, and one tenth for high schools, and (2) school-level pretests are as effective in this regard as student-level pretests. Furthermore, the precision-enhancing power of pretests (3) declines only slightly as the number of years between the pretest and posttests increases; (4) improves only slightly with pretests for more than 1 baseline year; and (5) is substantial, even when the pretest differs from the posttest. The article compares these findings with past research and presents an approach for quantifying their uncertainty.
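
As context for the precision argument, the textbook minimum detectable effect size (MDES) formula for a two-level school-randomized design shows why a school-level pretest that explains between-school variance shrinks the required sample. A minimal sketch with hypothetical parameter values; the paper's districts would supply their own ICCs and R-squared values:

import math

def mdes(J, n, icc, r2_between=0.0, r2_within=0.0, P=0.5, M=2.8):
    """MDES for J schools of n students each, a proportion P of schools
    treated; M of about 2.8 corresponds to a two-tailed 5% test with
    80% power when degrees of freedom are large."""
    between = icc * (1 - r2_between) / (P * (1 - P) * J)
    within = (1 - icc) * (1 - r2_within) / (P * (1 - P) * J * n)
    return M * math.sqrt(between + within)

print(mdes(J=40, n=60, icc=0.20))                   # no covariates: ~0.41
print(mdes(J=40, n=60, icc=0.20, r2_between=0.75))  # school pretest: ~0.22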


The Review of Economics and Statistics | 2004

Can Propensity-Score Methods Match the Findings from a Random Assignment Evaluation of Mandatory Welfare-to-Work Programs?

Charles Michalopoulos; Howard S. Bloom; Carolyn J. Hill

This paper assesses nonexperimental estimators using results from a six-state random assignment study of mandatory welfare-to-work programs. The assessment addresses two questions: Which nonexperimental methods provide the most accurate estimates? And do the best methods work well enough to replace random assignment? Three tentative conclusions emerge. Nonexperimental bias was larger in the medium run than in the short run. In-state comparison groups produced less average bias than out-of-state comparison groups. Statistical adjustments did not consistently reduce bias, although some methods reduced the estimated bias in some circumstances, and propensity-score methods provided a specification check that eliminated some large biases.
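
For readers unfamiliar with the mechanics being evaluated, here is a minimal sketch of a propensity-score comparison: fit a logistic model of group membership on baseline covariates, match comparison units to treated units on the estimated score, and measure bias against the experimental benchmark. The data and variable names are fabricated for illustration; this is not the paper's code or its estimators.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 4))            # baseline covariates
treated = rng.random(1000) < 0.5          # program-group flag
earnings = X @ np.array([0.5, 0.3, 0.2, 0.1]) + rng.normal(size=1000)

# Estimated propensity of being in the program group given covariates
pscore = LogisticRegression().fit(X, treated).predict_proba(X)[:, 1]

# Nearest-neighbor match each treated unit to a comparison unit on pscore
t_idx = np.where(treated)[0]
c_idx = np.where(~treated)[0]
matches = c_idx[np.abs(pscore[c_idx][None, :]
                       - pscore[t_idx][:, None]).argmin(axis=1)]

nonexp_estimate = earnings[t_idx].mean() - earnings[matches].mean()
exp_benchmark = 0.0   # the true effect is zero in this synthetic example
print("bias of matched estimate:", nonexp_estimate - exp_benchmark)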


Journal of Human Resources | 1997

The Benefits and Costs of JTPA Title II-A Programs: Key Findings from the National Job Training Partnership Act Study

Howard S. Bloom; Larry L. Orr; Stephen H. Bell; George Cave; Fred Doolittle; Winston Lin; Johannes M. Bos

This paper examines the benefits and costs of Job Training Partnership Act (JTPA) Title II-A programs for economically disadvantaged adults and out-of-school youths. It is based on a 21,000-person randomized experiment conducted within ongoing Title II-A programs at 16 local JTPA Service Delivery Areas (SDAs) from around the country. In the paper, we present the background and design of our study, describe the methodology used to estimate program impacts, present estimates of program impacts on earnings and educational attainment, and assess the overall success of the programs studied through a benefit-cost analysis.
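
The benefit-cost bottom line in such a study reduces to simple arithmetic once impacts and costs have been estimated. A minimal sketch with entirely hypothetical numbers, not findings from the JTPA study:

# Net benefit per participant = impacts counted as benefits minus costs.
earnings_gain = 900.0    # estimated earnings impact per participant ($)
other_benefits = 150.0   # e.g., value of reduced transfer payments ($)
program_cost = 800.0     # incremental program cost per participant ($)

print(earnings_gain + other_benefits - program_cost)  # > 0: benefits exceed costs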


Evaluation Review | 1999

Using cluster random assignment to measure program impacts. Statistical implications for the evaluation of education programs.

Howard S. Bloom; Johannes M. Bos; Suk-Won Lee

This article explores the possibility of randomly assigning groups (or clusters) of individuals to a program or a control group to estimate the impacts of programs designed to affect whole groups. This cluster assignment approach maintains the primary strength of random assignment—the provision of unbiased impact estimates—but has less statistical power than random assignment of individuals, which usually is not possible for programs focused on whole groups. To explore the statistical implications of cluster assignment, the authors (a) outline the issues involved, (b) present an analytic framework for studying these issues, and (c) apply this framework to assess the potential for using the approach to evaluate education programs targeted on whole schools. The findings suggest that cluster assignment of schools holds some promise for estimating the impacts of education programs when it is possible to control for the average performance of past student cohorts or the past performance of individual students.
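
The statistical-power tradeoff the authors analyze can be summarized by the standard design effect for cluster assignment: the variance of the impact estimate is inflated by a factor of 1 + (n - 1) * ICC relative to randomizing the same number of individuals. A minimal sketch with hypothetical values:

def design_effect(n, icc):
    """Variance inflation from randomly assigning intact clusters of size n,
    relative to randomly assigning the same number of individuals."""
    return 1 + (n - 1) * icc

deff = design_effect(n=500, icc=0.15)  # e.g., a school of 500 students
print(deff)                            # ~75.9 times the variance
print(deff ** 0.5)                     # ~8.7 times the standard error

Controlling for the performance of past student cohorts, as the article suggests, effectively shrinks the residual between-cluster variance and hence this penalty.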


Journal of Research on Educational Effectiveness | 2012

Modern Regression Discontinuity Analysis

Howard S. Bloom

This article provides a detailed discussion of the theory and practice of modern regression discontinuity (RD) analysis for estimating the effects of interventions or treatments. Part 1 briefly chronicles the history of RD analysis and summarizes its past applications. Part 2 explains how in theory an RD analysis can identify an average effect of treatment for a population and how different types of RD analyses—“sharp” versus “fuzzy”—can identify average treatment effects for different conceptual subpopulations. Part 3 introduces graphical methods, parametric statistical methods, and nonparametric statistical methods for estimating treatment effects in practice from regression discontinuity data, plus validation tests and robustness tests for assessing these estimates. Part 4 considers generalizing RD findings and presents several different views on and approaches to the issue. Part 5 notes some important issues to pursue in future research about or applications of RD analysis.
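
To make the "sharp" case concrete, here is a minimal sketch of an RD estimate: fit separate linear trends in the running variable on each side of the cutoff and take the difference in intercepts at the cutoff. The data, bandwidth, and true effect are hypothetical; a real analysis would add the bandwidth selection, validation, and robustness checks the article covers.

import numpy as np

rng = np.random.default_rng(1)
running = rng.uniform(-1, 1, 2000)        # assignment (running) variable
treated = running >= 0.0                  # sharp assignment rule at cutoff 0
outcome = 0.5 * running + 0.3 * treated + rng.normal(0, 0.2, 2000)

h = 0.25                                  # bandwidth around the cutoff
left = (running < 0) & (running > -h)
right = (running >= 0) & (running < h)

# Linear fit on each side; the intercept is the prediction at the cutoff
slope_l, intercept_l = np.polyfit(running[left], outcome[left], 1)
slope_r, intercept_r = np.polyfit(running[right], outcome[right], 1)
print("RD impact estimate:", intercept_r - intercept_l)   # ~0.3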


Journal of Policy Analysis and Management | 2014

A Conceptual Framework for Studying the Sources of Variation in Program Effects

Michael J. Weiss; Howard S. Bloom; Thomas Brock

Evaluations of public programs in many fields reveal that different types of programs—or different versions of the same program—vary in their effectiveness. Moreover, a program that is effective for one group of people might not be effective for other groups, and a program that is effective in one set of circumstances may not be effective in other circumstances. This paper presents a conceptual framework for research on such variation in program effects and the sources of this variation. The framework is intended to help researchers—both those who focus mainly on studying program implementation and those who focus mainly on estimating program effects—see how their respective pieces fit together in a way that helps to identify factors that explain variation in program effects, and thereby support more systematic data collection. The ultimate goal of the framework is to enable researchers to offer better guidance to policymakers and program operators on the conditions and practices that are associated with larger and more positive effects.


Evaluation Review | 2003

Using "short" interrupted time-series analysis to measure the impacts of whole-school reforms. With applications to a study of accelerated schools.

Howard S. Bloom

The present article introduces a new approach for measuring the impacts of whole-school reforms. The approach is based on “short” interrupted time-series analysis, which has been used to evaluate programs in many fields. The approach is used to measure impacts on three facets of student performance: (a) average (mean) test scores, which summarize impacts on total performance; (b) the distribution of scores across specific ranges, which helps to identify where in the distribution of student performance impacts were experienced; and (c) the variation (standard deviation) of scores, which indicates how the disparity in student performance was affected. To help researchers use the approach, the article lays out its conceptual rationale, describes its statistical procedures, explains how to interpret its findings, indicates its strengths and limitations, and illustrates how it was used to evaluate a major whole-school reform—Accelerated Schools.
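
The core computation can be sketched in a few lines: fit a baseline trend to the pre-reform years of school mean scores, project it forward, and read the impact as the deviation of the observed post-reform score from that projection. All numbers below are hypothetical:

import numpy as np

years_pre = np.array([0, 1, 2, 3])              # baseline years
means_pre = np.array([48.0, 49.2, 50.1, 51.0])  # school mean test scores

slope, intercept = np.polyfit(years_pre, means_pre, 1)
year_post = 5                                   # second follow-up year
projected = intercept + slope * year_post
observed = 55.4                                 # observed post-reform mean
print("impact estimate:", observed - projected) # deviation from trend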


Prevention Science | 2013

When Is the Story in the Subgroups? Strategies for Interpreting and Reporting Intervention Effects for Subgroups

Howard S. Bloom; Charles Michalopoulos

This paper examines strategies for interpreting and reporting estimates of intervention effects for subgroups of a study sample. The paper considers: why and how subgroup findings are important for applied research, alternative ways to define subgroups, different research questions that motivate subgroup analyses, the importance of pre-specifying subgroups before analyses are conducted, the importance of using existing theory and prior research to distinguish between subgroups for whom study findings are confirmatory (hypothesis testing) as opposed to exploratory (hypothesis generating), and the conditions under which study findings should be considered confirmatory. Each issue is illustrated by selected empirical examples.
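
One common subgroup analysis that the paper's advice applies to is a treatment-by-subgroup interaction test in the outcome model. A minimal sketch with synthetic data; whether such a test counts as confirmatory or exploratory depends, as the paper argues, on pre-specification and prior evidence:

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 800
treat = rng.integers(0, 2, n)
subgroup = rng.integers(0, 2, n)   # pre-specified subgroup indicator
y = 0.2 * treat + 0.1 * subgroup + 0.15 * treat * subgroup + rng.normal(size=n)

X = sm.add_constant(np.column_stack([treat, subgroup, treat * subgroup]))
fit = sm.OLS(y, X).fit()
print(fit.params[-1])    # estimated subgroup difference in the effect
print(fit.pvalues[-1])   # its p-value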


Journal of Research on Educational Effectiveness | 2010

New Empirical Evidence for the Design of Group Randomized Trials in Education

Robin Jacob; Pei Zhu; Howard S. Bloom

This article provides practical guidance for researchers who are designing studies that randomize groups to measure the impacts of educational interventions. The article (a) provides new empirical information about the values of parameters that influence the precision of impact estimates (intraclass correlations and R² values), including outcomes other than standardized test scores and data with a three-level rather than a two-level structure, and (b) discusses the error (both generalizability and estimation error) that exists in estimates of key design parameters and the implications this error has for design decisions. Data for the paper come primarily from two studies: the Chicago Literacy Initiative: Making Better Early Readers Study (CLIMBERS) and the School Breakfast Pilot Project (SBPP). The analysis sample from CLIMBERS comprised 430 four-year-old children from 47 preschool classrooms in 23 Chicago public schools. The analysis sample from the SBPP study comprised 1,151 third graders from 233 classrooms in 111 schools from 6 school districts. Student achievement data from the Reading First Impact Study are also used to supplement the discussion.
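
As a pointer to how such design parameters are computed, here is a minimal sketch of estimating an intraclass correlation from balanced two-level data via a one-way ANOVA variance decomposition. The data are synthetic, not from CLIMBERS or the SBPP:

import numpy as np

rng = np.random.default_rng(3)
J, n = 30, 20                                  # schools, students per school
school_effects = rng.normal(0, 0.5, J)         # between-school variation
scores = school_effects[:, None] + rng.normal(0, 1.0, (J, n))

msb = n * scores.mean(axis=1).var(ddof=1)      # between-school mean square
msw = scores.var(axis=1, ddof=1).mean()        # within-school mean square
sigma2_between = (msb - msw) / n
print(sigma2_between / (sigma2_between + msw)) # ICC; true value here is 0.2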

Collaboration


Dive into Howard S. Bloom's collaborations.

Top Co-Authors

Robin Jacob (University of Michigan)
Fatih Unlu (University of Michigan)
Beth Gamse (University of Michigan)