Laura R. Peck
Economic Policy Institute
Publications
Featured research published by Laura R. Peck.
American Journal of Evaluation | 2003
Laura R. Peck
A fundamental question within the field of program evaluation is “Do social programs work?” Although experiments allow us to answer this question with certainty, they have some limitations. Experiments generate mean program impacts and even mean impacts by subgroup, but they often leave unexplored the impacts on subgroups determined by treatment use. This work proposes a methodology for analyzing the impacts of social programs on previously unexamined subgroups. Rather than using a single trait to define subgroups—which is currently the dominant method of subgroup analysis—the proposed approach estimates the impact of programs on subgroups identified by a post-treatment choice while still maintaining the integrity of the experimental research design. Analysis of data from the experimental evaluation of New York State’s Child Assistance Program (CAP) provides an application of the proposed technique.
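As a minimal sketch of the experimental benchmark this abstract takes as its starting point, the mean program impact is just the difference in mean outcomes between the randomly assigned groups. Everything below is simulated for illustration; none of it comes from the CAP evaluation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated experiment (illustrative data only, not CAP):
# random assignment makes the two groups comparable on average.
n = 10_000
treat = rng.integers(0, 2, n)                      # 1 = treatment, 0 = control
y = 1.0 + 0.5 * treat + rng.normal(0.0, 1.0, n)    # outcome; true impact is 0.5

# The experimental impact estimate is a simple difference in means.
ate = y[treat == 1].mean() - y[treat == 0].mean()
print(f"estimated mean impact: {ate:.2f}")
```

Subsetting this comparison on a post-treatment choice (say, comparing only treatment members who used a program service against the full control group) breaks the randomization; recovering such subgroup impacts without that bias is exactly the problem the proposed methodology addresses.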
Evaluation Review | 2005
Laura R. Peck
The conventional way to measure program impacts is to compute the average treatment effect; that is, the difference between a treatment group that received some intervention and a control group that did not. Recently, scholars have recognized that looking only at the average treatment effect may obscure impacts that accrue to subgroups. In an effort to inform subgroup analysis research, this article explains the challenge of treatment group heterogeneity. It then proposes using cluster analysis to identify otherwise difficult-to-identify subgroups within evaluation data. The approach maintains the integrity of the experimental evaluation design, thereby producing unbiased estimates of program impacts by subgroup. This method is applied to data from the evaluation of New York State’s Child Assistance Program, a reform that intended to increase work and earnings among welfare recipients. The article interprets the substantive findings and then addresses the advantages and disadvantages of the proposed method.
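A toy version of the symmetric-clustering idea can be sketched as follows. A hand-rolled one-dimensional 2-means (Lloyd's algorithm) stands in for the article's cluster analysis, and all data and effect sizes are invented. The key point is that clustering uses only a baseline trait, so cluster membership is defined identically in both experimental arms and randomization still holds within each cluster.

```python
import numpy as np

rng = np.random.default_rng(1)

# Invented data: one baseline covariate, a randomized treatment, and an
# outcome whose treatment effect is larger when the covariate is positive.
n = 2_000
x = rng.normal(size=n)                 # baseline trait (pre-randomization)
treat = rng.integers(0, 2, n)
y = x + treat * (0.2 + 0.6 * (x > 0)) + rng.normal(0.0, 1.0, n)

# Hand-rolled 2-means on the BASELINE trait only, so clusters are defined
# symmetrically for treatment and control members.
centers = np.array([x.min(), x.max()])
for _ in range(25):                    # Lloyd's algorithm iterations
    labels = np.abs(x[:, None] - centers[None, :]).argmin(axis=1)
    centers = np.array([x[labels == 0].mean(), x[labels == 1].mean()])

# Within each cluster a difference in means remains an unbiased impact
# estimate, because assignment is still random inside the cluster.
impacts = []
for k in range(2):
    m = labels == k
    impacts.append(y[m & (treat == 1)].mean() - y[m & (treat == 0)].mean())
    print(f"cluster {k}: impact = {impacts[k]:.2f}")
```

Here the high-covariate cluster shows the larger impact, the kind of heterogeneity an overall average treatment effect would obscure.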
American Journal of Evaluation | 2013
Laura R. Peck
Researchers and policy makers are increasingly dissatisfied with the “average treatment effect.” Not only are they interested in learning about the overall causal effects of policy interventions, but they also want to know what specifically it is about an intervention that is responsible for any observed effects. In the U.S., using experimentally designed evaluation to capture the average treatment effect is both commonplace and preferred practice; but, as this paper argues, there are many important questions yet to be asked and answered via our body of experimental research. As a reconsideration of Peck (2003), on the tenth anniversary of its publication, this article recasts earlier work on analyzing “what works” as a call to action for evaluators and policy analysts: we can and should do better.
American Journal of Evaluation | 2007
Laura R. Peck
This article uses propensity scores to identify the subgroups of individuals most likely to experience a reduction in cash benefits because of sanctions in some of the programs that make up the National Evaluation of Welfare-to-Work Strategies. It extends program evaluation methodology by using the propensity score to identify the sample subset most likely to experience program sanction, among both sanctioned and nonsanctioned welfare recipients. In this application, the propensity score addresses an omitted-variable problem: sanction status is unknown in the control group, because control members were not subject to the policies being tested. Findings reveal that being at high risk of sanction induces greater work levels and therefore higher earnings, but it also results in receiving less cash assistance, so that high-sanction-risk recipients have roughly the same net incomes as low-risk ones.
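The propensity-score step can be sketched like this: fit a model of sanctioning where sanction status is observed (the treatment group), then score both arms with the fitted model so the high-risk subgroup is defined symmetrically. All data, coefficients, and the median cutoff below are invented for illustration; this is not the NEWWS specification.

```python
import numpy as np

rng = np.random.default_rng(2)

# Invented data: intercept plus two baseline traits.
n = 4_000
x = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])
treat = rng.integers(0, 2, n)

# Sanction status exists only in the treatment group -- the
# omitted-variable problem the propensity score works around.
b_true = np.array([-1.0, 1.5, 0.0])
sanction = (rng.random(n) < 1 / (1 + np.exp(-x @ b_true))) & (treat == 1)

# Logistic model of sanctioning on BASELINE traits, treatment group only
# (hand-rolled Newton-Raphson to keep the sketch dependency-free).
xt, st = x[treat == 1], sanction[treat == 1].astype(float)
b = np.zeros(3)
for _ in range(25):
    p = 1 / (1 + np.exp(-xt @ b))
    b += np.linalg.solve((xt * (p * (1 - p))[:, None]).T @ xt, xt.T @ (st - p))

# Score EVERYONE with the fitted model; the high-risk flag is then defined
# identically in both arms, preserving the experimental contrast.
score = 1 / (1 + np.exp(-x @ b))
high_risk = score > np.median(score)
for arm, name in [(1, "treatment"), (0, "control")]:
    print(f"{name}: share high-risk = {high_risk[treat == arm].mean():.2f}")
```

Because the score depends only on baseline traits, the high-risk share comes out essentially equal in the two arms, which is what makes the subgroup contrast experimentally valid.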
Administration & Society | 2009
Chao Guo; Laura R. Peck
This study assesses the extent to which welfare recipients engage in giving money and time to charitable causes. Using the 2003 Center on Philanthropy Panel Study data, this study examines the effects of public assistance—holding constant earned income and demographic traits—on two major types of charitable activities: charitable giving and volunteering. Using a Tobit specification, as appropriate for this type of data, the authors use a creative differencing strategy in an attempt to overcome sticky issues of selection bias. Evidence is found that public assistance receipt tends to suppress monetary donations but may increase volunteer time.
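Because many households give nothing, observed giving is censored at zero, which is why a Tobit specification fits this type of data. A minimal maximum-likelihood Tobit on invented data might look like the sketch below (the authors' differencing strategy is not reproduced here).

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(3)

# Invented giving data: a latent "desired donation" observed only when
# positive -- the censoring a Tobit model is built for.
n = 3_000
x = np.column_stack([np.ones(n), rng.normal(size=n)])
beta_true = np.array([0.5, 1.0])
y = np.maximum(x @ beta_true + rng.normal(0.0, 1.0, n), 0.0)

def neg_loglik(params):
    b, log_s = params[:2], params[2]
    s = np.exp(log_s)                 # parameterize sigma > 0 via its log
    xb = x @ b
    pos = y > 0
    # Uncensored observations contribute a normal density term...
    ll = norm.logpdf((y[pos] - xb[pos]) / s).sum() - pos.sum() * log_s
    # ...censored ones contribute the probability of falling at/below zero.
    ll += norm.logcdf(-xb[~pos] / s).sum()
    return -ll

res = minimize(neg_loglik, x0=np.zeros(3), method="BFGS")
b_hat = res.x[:2]
print("estimated coefficients:", b_hat.round(2))
```

An OLS regression on the same data would attenuate the slope, because it treats the pile-up of zeros as genuine outcomes rather than censoring.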
American Journal of Evaluation | 2013
Eleanor L. Harvill; Laura R. Peck; Stephen H. Bell
Using exogenous characteristics to identify endogenous subgroups, the approach discussed in this method note creates symmetric subsets within treatment and control groups, allowing the analysis to take advantage of an experimental design. In order to maintain treatment–control symmetry, however, prior work has posited that a prediction subsample, separate from the subsample used for impact estimation, is necessary to prevent overfitting from affecting impact estimates. Doing so diminishes sample size—both for prediction and for analysis—and so has costs. This article considers the conditions under which overfitting occurs, characterizes its effects in terms of bias and variance, and suggests a strategy for preserving the full sample size in all phases of the analysis. The research uses Monte Carlo simulation to directly measure overfitting, to identify the circumstances that should concern us, and to explore recommended practices and implications for future research.
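The overfitting problem can be made concrete with a small Monte Carlo of the kind the note describes. In this invented setup the prediction covariates are pure noise and the true treatment effect is zero, so any nonzero "impact" on the predicted subgroup is overfitting bias: fitting and estimating on the same treatment-group observations produces it, while holding out a separate prediction subsample does not.

```python
import numpy as np

rng = np.random.default_rng(4)

def one_rep(split_sample):
    """One simulated experiment; returns the estimated subgroup 'impact'."""
    n, k = 400, 40
    treat = np.repeat([True, False], n // 2)
    x = np.column_stack([np.ones(n), rng.normal(size=(n, k))])  # pure noise
    s = rng.random(n) < 0.5        # actual endogenous subgroup, unrelated to x
    y = 1.0 * s + rng.normal(0.0, 1.0, n)    # TRUE treatment effect is zero

    # Observations used to FIT the subgroup-prediction model.
    fit = treat & (np.arange(n) < n // 4) if split_sample else treat
    coef, *_ = np.linalg.lstsq(x[fit], s[fit].astype(float), rcond=None)
    pred = x @ coef > np.median(x @ coef)    # predicted subgroup, both arms

    # Estimate the impact on the predicted subgroup; when a prediction
    # subsample was held out, exclude it from the treatment side.
    t_est = treat & ~fit if split_sample else treat
    return y[pred & t_est].mean() - y[pred & ~treat].mean()

naive = np.mean([one_rep(False) for _ in range(300)])
split = np.mean([one_rep(True) for _ in range(300)])
print(f"mean spurious impact: same-sample = {naive:.2f}, split-sample = {split:.2f}")
```

With many noise covariates and a modest sample, the same-sample fit "discovers" the actual subgroup among the very observations it was trained on, so the treatment side of the comparison is contaminated while the control side is not; the split-sample estimate stays near zero at the cost of a smaller analysis sample, which is precisely the bias–sample-size trade-off the article examines.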
Journal of Poverty | 2007
Laura R. Peck
Both the public perception of poverty and the measurement of poverty intersect in ways of which neither area of study is fully aware. That is, some research focuses on the public's opinion of the poor and of welfare recipients, and other research examines poverty measurement and how its variants determine whom we consider to be poor in the U.S.; but relatively little work has explored, either conceptually or empirically, the intersection of these two fields. This essay aims to do just that. After presenting a general summary of these two topics, I propose how each offers new perspectives for the other.
American Journal of Evaluation | 2013
Stephen H. Bell; Laura R. Peck
To answer “what works?” questions about policy interventions based on an experimental design, Peck (2003) proposes to use baseline characteristics to symmetrically divide treatment and control group members into subgroups defined by endogenously determined postrandom assignment events. Symmetric prediction of these subgroups in both experimental arms ensures the internal validity of the subgroup impact estimates but leaves the external validity of the findings in doubt. A final step of the procedure solves for impacts on actual subgroups using a system of equations that is underidentified without further assumptions. We address these assumptions by first extending the methodology to encompass three rather than two endogenous subgroups and then proposing plausible assumptions for deriving impacts for actual endogenous subgroups. We also consider how the first-stage prediction process can be structured to better support the accuracy of the assumptions.
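The underidentification can be seen in miniature with a linear system. Suppose (purely for illustration, with invented numbers rather than anything from Bell and Peck's derivation) that an assumption pins down the composition of each predicted subgroup in terms of the actual endogenous subgroups; the actual-subgroup impacts then follow from a 3-by-3 solve.

```python
import numpy as np

# Illustrative mixing matrix: row i gives the assumed share of predicted
# subgroup i that belongs to each ACTUAL endogenous subgroup; rows sum to 1.
M = np.array([
    [0.7, 0.2, 0.1],
    [0.2, 0.6, 0.2],
    [0.1, 0.2, 0.7],
])

# Experimentally valid impact estimates for the three PREDICTED subgroups.
impact_pred = np.array([1.2, 0.5, -0.1])

# Each predicted-subgroup impact is a mixture of actual-subgroup impacts:
#   impact_pred = M @ impact_actual  ->  solve for impact_actual.
impact_actual = np.linalg.solve(M, impact_pred)
print("implied actual-subgroup impacts:", impact_actual.round(2))
```

Without an assumption fixing M, the same three predicted-subgroup impacts are consistent with many different actual-subgroup impact vectors; supplying and defending such assumptions is the contribution the abstract describes.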
American Journal of Evaluation | 2012
Laura R. Peck; Yushim Kim; Joanna Lucio
This study addresses validity issues in evaluation that stem from Ernest R. House’s book, Evaluating With Validity. The authors examine American Journal of Evaluation articles from 1980 to 2010 that report the results of policy and program evaluations. The authors classify these evaluations according to House’s “major approaches” typology (Systems Analysis, Behavioral Objectives, Decision making, Goal-free, Professional Review, Art Criticism, Quasi-legal, and Case Study) and the types of validity (measurement, design, interpretation, use) the evaluations consider. Analyzing the intersection of evaluation type and validity type, the authors explore the status of House’s standards of Truth, Beauty, and Justice in evaluation practice.
Poverty & Public Policy | 2010
Joanna Duke-Lucio; Laura R. Peck; Elizabeth A. Segal
This paper explores unexamined housing costs that families incur by virtue of their low income. We build on a paradigm that identifies some unmeasured costs of being poor as latent (hidden and not counted in other poverty measures) and sequential (consequential with subsequent cost implications). Using data from in-depth interviews with cash assistance recipients and working poor heads of household, we explore these latent and sequential costs of poverty related to housing. We observe a variety of housing-related experiences regarding amenities and structure, stability, money outlays, and neighborhood characteristics. These experiences carry latent and sequential costs that involve lack of safety, poor physical health, poor mental health, the exhaustion of social capital, hopelessness, poor education, and diminished life opportunities, all of which have important financial and non-financial implications for families.