Publication


Featured research published by Larry L. Orr.


Journal of Human Resources | 1997

The Benefits and Costs of JTPA Title II-A Programs: Key Findings from the National Job Training Partnership Act Study

Howard S. Bloom; Larry L. Orr; Stephen H. Bell; George Cave; Fred Doolittle; Winston Lin; Johannes M. Bos

This paper examines the benefits and costs of Job Training Partnership Act (JTPA) Title II-A programs for economically disadvantaged adults and out-of-school youths. It is based on a 21,000-person randomized experiment conducted within ongoing Title II-A programs at 16 local JTPA Service Delivery Areas (SDAs) from around the country. In the paper, we present the background and design of our study, describe the methodology used to estimate program impacts, present estimates of program impacts on earnings and educational attainment, and assess the overall success of the programs studied through a benefit-cost analysis.
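As a rough illustration of the core idea behind experimental impact estimation and the benefit-cost comparison described above (not the study's actual procedure or data), a minimal Python sketch with invented numbers:

import statistics

# Under random assignment, the difference in mean outcomes between the
# treatment and control groups is an unbiased estimate of the program's
# average impact. All numbers below are invented for illustration.
treatment_earnings = [9500, 12200, 8700, 14100, 10300]  # post-program annual earnings ($)
control_earnings = [8800, 11500, 8200, 13000, 9700]

impact_per_person = statistics.mean(treatment_earnings) - statistics.mean(control_earnings)

# A simple benefit-cost comparison nets program cost per participant against
# the estimated earnings gain (ignoring discounting, taxes, and other benefits).
cost_per_participant = 500  # invented figure
net_benefit = impact_per_person - cost_per_participant
print(f"Estimated impact: ${impact_per_person:.0f}; net benefit: ${net_benefit:.0f}")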


Journal of Human Resources | 1994

Is Subsidized Employment Cost Effective for Welfare Recipients? Experimental Evidence from Seven State Demonstrations

Stephen H. Bell; Larry L. Orr

This paper examines the benefits and costs of training and subsidized employment provided to welfare recipients in demonstration programs in seven states. A classical experimental design is used to estimate the effect of these demonstrations on earnings and welfare benefits over 33 months following program entry. Both effects are substantial and, in some cases, long-lived. When combined with data on program costs, these findings indicate that, while not always cost effective for taxpayers, subsidized employment for welfare recipients does convey positive net benefits to participants and to society as a whole.


Labour Economics | 2002

Screening (and creaming?) applicants to job training programs: the AFDC homemaker-home health aide demonstrations

Stephen H. Bell; Larry L. Orr

Government employment and training programs typically do not have sufficient resources to serve all those who apply for assistance. Those to be served are usually selected by program staff based on management guidelines that allow considerable policy discretion at the local level. A longstanding issue in employment and training policy is whether allowing this flexibility leads to selection of applicants (1) most likely to benefit from the program or (2) likely to experience the highest absolute outcomes in the absence of program services, sometimes called “creaming”.

The distinction is crucial to the success of many programs, both as redistributional tools and as economic investments. Selection of those most likely to benefit from the program, i.e., those for whom the program’s impact on subsequent labor market success will be greatest, will maximize the social return on the investment in training. In contrast, “creaming” may lead to little or no social benefit or to a substantial gain, depending on whether those selected for training (the group most likely to succeed without the treatment) in fact benefit most from it. The redistributional effects of a program will also depend on who is served: among the applicant group, a more equal distribution of economic well-being, ex post, will be achieved only if the program favors applicants likely to do worst without the intervention.

This paper explores the role of creaming in the operation of seven welfare-to-work training programs, the type of program that has been the focus of increased expenditures over the last 10 years as more and more welfare recipients have been pushed to become self-sufficient. It considers whether the program intake practices adopted in the studied programs furthered the social goals pursued and, if not, what consequences they had for the twin concerns of distributional equity and economic efficiency.

The analysis begins by reviewing the history of the creaming issue and its importance in the literature. A unique data set is then examined to discover the factors that influenced admission decisions in seven state-run employment and training programs for welfare recipients and how those decisions played out in terms of the in-training performance and later labor market outcomes of program participants. The principal conclusions are that these programs “creamed” the most able applicants on both observable and unobservable characteristics, but that this targeting did not systematically affect the size of program impacts or the return on investment.


Educational Evaluation and Policy Analysis | 2016

Estimates of External Validity Bias When Impact Evaluations Select Sites Nonrandomly

Stephen H. Bell; Robert B. Olsen; Larry L. Orr; Elizabeth A. Stuart

Evaluations of educational programs or interventions are typically conducted in nonrandomly selected samples of schools or districts. Recent research has shown that nonrandom site selection can yield biased impact estimates. To estimate the external validity bias from nonrandom site selection, we combine lists of school districts that were selected nonrandomly for 11 educational impact studies with population data on student outcomes from the Reading First program. Our analysis finds that on average, if an impact study of Reading First were conducted in the districts from these 11 studies, the impact estimate would be biased downward. In particular, it would be 0.10 standard deviations lower than the impact in the broader population from which the samples were selected, a substantial bias based on several benchmarks of comparison.
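As a hypothetical sketch of the external validity bias concept (district names and impact values below are invented, not data from the study), the bias can be thought of as the gap between the average impact in the nonrandomly selected study sites and the average impact across the full population of sites:

# All districts and impacts are hypothetical; impacts are expressed in
# student-level standard deviation units.
population_impacts = {
    "district_a": 0.05, "district_b": 0.12, "district_c": 0.20,
    "district_d": 0.02, "district_e": 0.16,
}
selected = ["district_a", "district_d"]  # sites a study might pick purposively

population_avg = sum(population_impacts.values()) / len(population_impacts)
sample_avg = sum(population_impacts[d] for d in selected) / len(selected)

external_validity_bias = sample_avg - population_avg  # negative => downward bias
print(f"External validity bias: {external_validity_bias:+.3f} SD")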


Evaluation Review | 2015

2014 Rossi Award Lecture

Larry L. Orr

Background: For much of the last 40 years, the evaluation profession has been consumed in a battle over internal validity. Today, that battle has been decided. Random assignment, while still far from universal in practice, is almost universally acknowledged as the preferred method for impact evaluation. It is time for the profession to shift its attention to the remaining major flaws in the “standard model” of evaluation: (i) external validity and (ii) the high cost and low hit rate of experimental evaluations as currently practiced.

Recommendations: To raise the profession’s attention to external validity, the author recommends some simple, easy steps to be taken in every evaluation. The author makes two recommendations to increase the number of interventions found to be effective within existing resources: first, a two-stage evaluation strategy in which a cheap, streamlined Stage 1 evaluation is followed by a more intensive Stage 2 evaluation only for those interventions found to be effective in a Stage 1 trial; and, second, use of random assignment to guide the myriad program management decisions that must be made in the course of routine program operations. This article is not intended as a solution to these issues; it is intended to stimulate the evaluation community to take these issues more seriously and to develop innovative solutions.
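The two-stage strategy described above is essentially a screening design. As a hypothetical illustration (not the author's procedure; the effect sizes, noise levels, and pass threshold are invented), a cheap, noisy Stage 1 estimate can be used to decide which interventions advance to a full Stage 2 trial:

import random

random.seed(0)

true_effects = [0.00, 0.05, 0.20, 0.00, 0.15]  # true impacts of five candidate interventions

def noisy_estimate(true_effect, se):
    # Simulate an impact estimate with standard error se.
    return random.gauss(true_effect, se)

# Stage 1: cheap and imprecise; only interventions clearing the threshold advance.
stage1_threshold = 0.10
advance = [i for i, eff in enumerate(true_effects)
           if noisy_estimate(eff, se=0.10) > stage1_threshold]

# Stage 2: an intensive, more precise evaluation run only for the survivors.
stage2_results = {i: round(noisy_estimate(true_effects[i], se=0.03), 3) for i in advance}
print("Advanced to Stage 2:", advance)
print("Stage 2 impact estimates:", stage2_results)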


IZA Journal of Labor Policy | 2014

Wage subsidies in developing countries as a tool to build human capital: design and implementation issues

Rita Almeida; Larry L. Orr; David A. Robalino

This paper reviews international experiences with the implementation of wage subsidies and develops a policy framework to guide their design in developing countries. The evidence suggests that, if the goal is only to create jobs, wage subsidies are unlikely to be an effective instrument. Wage subsidies, however, could have a role in helping first-time job seekers, or those who have gone through long periods of unemployment or inactivity, to gain some work experience and in the process build skills and improve their employability. If these “learning” effects are large enough, the social benefits of wage subsidies could outweigh their cost. When wage subsidies are designed with these objectives in mind, there are important implications in terms of eligibility and targeting, how the subsidy is set, its duration, and the types of conditionalities placed on employers and beneficiaries. Given uncertainty regarding their impact, in all cases programs should be piloted and evaluated prior to full-scale implementation.

JEL codes: J2, J3, J6


Journal of Policy Analysis and Management | 2013

External Validity in Policy Evaluations That Choose Sites Purposively

Robert B. Olsen; Larry L. Orr; Stephen H. Bell; Elizabeth A. Stuart


Industrial and Labor Relations Review | 1997

Program Applicants as a Comparison Group in Evaluating Training Programs

Ernst W. Stromsdorfer; Stephen H. Bell; Larry L. Orr; John D. Blomquist; Glen G. Cain


Books from Upjohn Press | 1995

Program Applicants as a Comparison Group in Evaluating Training Programs: Theory and a Test

Stephen H. Bell; Larry L. Orr; John D. Blomquist; Glen G. Cain


Journal of Research on Educational Effectiveness | 2017

Characteristics of School Districts That Participate in Rigorous National Educational Evaluations

Elizabeth A. Stuart; Stephen H. Bell; Cyrus Ebnesajjad; Robert B. Olsen; Larry L. Orr

Collaboration


Top co-authors of Larry L. Orr and their affiliations.

Robert B. Olsen

George Washington University

Glen G. Cain

University of Wisconsin-Madison

Ian Schmid

Johns Hopkins University
