Publication


Featured research published by Stephen H. Bell.


Journal of Human Resources | 1997

The Benefits and Costs of JTPA Title II-A Programs: Key Findings from the National Job Training Partnership Act Study

Howard S. Bloom; Larry L. Orr; Stephen H. Bell; George Cave; Fred Doolittle; Winston Lin; Johannes M. Bos

This paper examines the benefits and costs of Job Training Partnership Act (JTPA) Title II-A programs for economically disadvantaged adults and out-of-school youths. It is based on a 21,000-person randomized experiment conducted within ongoing Title II-A programs at 16 local JTPA Service Delivery Areas (SDAs) from around the country. In the paper, we present the background and design of our study, describe the methodology used to estimate program impacts, present estimates of program impacts on earnings and educational attainment, and assess the overall success of the programs studied through a benefit-cost analysis.
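
As a rough illustration of the experimental logic described above (not the study's actual analysis), the sketch below simulates a randomized sample and computes a difference-in-means impact on earnings with its standard error; the sample size echoes the study, but all earnings figures and effect sizes are invented.

```python
# Minimal sketch of an experimental impact estimate: difference in mean
# earnings between randomized treatment and control groups, plus a
# standard error.  Data are simulated; nothing here reproduces JTPA results.
import numpy as np

rng = np.random.default_rng(0)
n = 21_000                                         # roughly the study's sample size
treated = rng.integers(0, 2, size=n).astype(bool)  # random assignment indicator
earnings = 12_000 + 800 * treated + rng.normal(0, 9_000, size=n)  # invented figures

impact = earnings[treated].mean() - earnings[~treated].mean()
se = np.sqrt(earnings[treated].var(ddof=1) / treated.sum()
             + earnings[~treated].var(ddof=1) / (~treated).sum())
print(f"estimated earnings impact: {impact:,.0f} (SE {se:,.0f})")
```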


Journal of Human Resources | 1994

Is Subsidized Employment Cost Effective for Welfare Recipients? Experimental Evidence from Seven State Demonstrations

Stephen H. Bell; Larry L. Orr

This paper examines the benefits and costs of training and subsidized employment provided to welfare recipients in demonstration programs in seven states. A classical experimental design is used to estimate the effect of these demonstrations on earnings and welfare benefits over 33 months following program entry. Both effects are substantial and, in some cases, long-lived. When combined with data on program costs, these findings indicate that, while not always cost effective for taxpayers, subsidized employment for welfare recipients does convey positive net benefits to participants and to society as a whole.
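
The perspective-by-perspective accounting implied here (net benefits can be positive for participants and society even when negative for taxpayers) can be shown with a toy calculation; every dollar figure below is invented for illustration.

```python
# Toy benefit-cost accounting by perspective for a subsidized employment
# program.  All figures are invented; they only illustrate how net benefits
# can differ in sign across participants, taxpayers, and society.
earnings_gain = 2_000      # per-participant earnings impact (assumed)
welfare_reduction = 900    # reduction in welfare benefits received (assumed)
program_cost = 1_800       # cost of training and subsidized employment (assumed)

participant_net = earnings_gain - welfare_reduction
taxpayer_net = welfare_reduction - program_cost
social_net = participant_net + taxpayer_net      # transfers cancel out

print(f"participant net benefit: {participant_net:+,}")
print(f"taxpayer net benefit:    {taxpayer_net:+,}")
print(f"society net benefit:     {social_net:+,}")
```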


Labour Economics | 2002

Screening (and creaming?) applicants to job training programs: the AFDC homemaker-home health aide demonstrations

Stephen H. Bell; Larry L. Orr

Government employment and training programs typically do not have sufficient resources to serve all those who apply for assistance. Those to be served are usually selected by program staff based on management guidelines that allow considerable policy discretion at the local level. A longstanding issue in employment and training policy is whether allowing this flexibility leads to selection of applicants who are (1) most likely to benefit from the program or (2) likely to experience the highest absolute outcomes in the absence of program services, sometimes called “creaming”. The distinction is crucial to the success of many programs, both as redistributional tools and as economic investments. Selection of those most likely to benefit from the program—i.e., those for whom the program's impact on subsequent labor market success will be greatest—will maximize the social return on the investment in training. In contrast, “creaming” may lead to little or no social benefit or to a substantial gain, depending on whether those selected for training—the group most likely to succeed without the treatment—in fact benefit most from it. The redistributional effects of a program will also depend on who is served: among the applicant group, a more equal distribution of economic well-being, ex post, will be achieved only if the program favors applicants likely to do worst without the intervention. This paper explores the role of creaming in the operation of seven welfare-to-work training programs, the type of programs that have been the focus of increased expenditures over the last 10 years as more and more welfare recipients have been pushed to become self-sufficient. It considers whether the program intake practices adopted in the studied programs furthered the social goals pursued and, if not, what consequences they had on the twin concerns of distributional equity and economic efficiency. The analysis begins by reviewing the history of the creaming issue and its importance in the literature. A unique data set is then examined to discover the factors that influenced admission decisions in seven state-run employment and training programs for welfare recipients and how those decisions played out in terms of the in-training performance and later labor market outcomes of program participants. The principal conclusions are that these programs “creamed” the most able applicants on both observable and unobservable characteristics, but that this targeting did not systematically affect the size of program impacts or the return on investment.
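
One simple way to make the “creaming” question operational (a hedged sketch, not the paper's analysis) is to compare admitted and rejected applicants on pre-program outcomes: a large gap suggests intake favored those likely to do well even without services. The data and intake rule below are simulated.

```python
# Illustrative "creaming" diagnostic: compare baseline (pre-program) outcomes
# of admitted vs. rejected applicants.  Simulated data and a hypothetical
# intake rule only -- not the demonstrations' actual admission process.
import numpy as np

rng = np.random.default_rng(1)
n = 5_000
baseline_earnings = rng.lognormal(mean=9.0, sigma=0.6, size=n)
education = rng.integers(8, 17, size=n)

# Hypothetical intake rule that favors the most job-ready applicants.
score = baseline_earnings / baseline_earnings.std() + 0.3 * education
admitted = score > np.quantile(score, 0.60)      # serve the top 40%

print(f"mean baseline earnings, admitted: {baseline_earnings[admitted].mean():,.0f}")
print(f"mean baseline earnings, rejected: {baseline_earnings[~admitted].mean():,.0f}")
# A large gap on pre-program outcomes is consistent with creaming; whether it
# matters depends on whether program impacts differ across the two groups.
```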


Educational Evaluation and Policy Analysis | 2016

Estimates of External Validity Bias When Impact Evaluations Select Sites Nonrandomly

Stephen H. Bell; Robert B. Olsen; Larry L. Orr; Elizabeth A. Stuart

Evaluations of educational programs or interventions are typically conducted in nonrandomly selected samples of schools or districts. Recent research has shown that nonrandom site selection can yield biased impact estimates. To estimate the external validity bias from nonrandom site selection, we combine lists of school districts that were selected nonrandomly for 11 educational impact studies with population data on student outcomes from the Reading First program. Our analysis finds that on average, if an impact study of Reading First were conducted in the districts from these 11 studies, the impact estimate would be biased downward. In particular, it would be 0.10 standard deviations lower than the impact in the broader population from which the samples were selected, a substantial bias based on several benchmarks of comparison.
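
The bias being estimated is essentially the gap between the enrollment-weighted impact in the purposively selected districts and the impact in the full population, expressed in standard-deviation units. The sketch below illustrates that calculation with fabricated district impacts and enrollments; it does not use Reading First data.

```python
# Sketch of an external-validity-bias calculation: enrollment-weighted impact
# in nonrandomly selected districts minus the impact in the full population,
# in student-level SD units.  All numbers are fabricated for illustration.
import numpy as np

# (district impact in SD units, student enrollment) for the full population
population = np.array([
    [0.22, 4_000], [0.15, 2_500], [0.05, 6_000], [0.30, 1_200],
    [0.10, 5_500], [0.18, 3_300], [0.02, 7_000], [0.25, 2_000],
])
impacts, weights = population[:, 0], population[:, 1]
pop_impact = np.average(impacts, weights=weights)

# Suppose the purposive sample happened to pick lower-impact districts.
selected = [2, 4, 6]                              # arbitrary indices
sample_impact = np.average(impacts[selected], weights=weights[selected])

bias = sample_impact - pop_impact
print(f"population impact:      {pop_impact:.3f} SD")
print(f"selected-sample impact: {sample_impact:.3f} SD")
print(f"external validity bias: {bias:+.3f} SD")
```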


Journal of Policy Analysis and Management | 2016

Using Preferred Applicant Random Assignment (PARA) to Reduce Randomization Bias in Randomized Trials of Discretionary Programs

Robert B. Olsen; Stephen H. Bell; Austin Nichols

Randomization bias occurs when the random assignment used to estimate program effects influences the types of individuals that participate in a program. This paper focuses on a form of randomization bias called “applicant inclusion bias,” which can occur in evaluations of discretionary programs that normally choose which of the eligible applicants to serve. If this nonrandom selection process is replaced by a process that randomly assigns eligible applicants to receive the intervention or not, the types of individuals served by the program—and thus its average impact on program participants—could be affected. To estimate the impact of discretionary programs for the individuals that they normally serve, we propose an experimental design called Preferred Applicant Random Assignment (PARA). Prior to random assignment, program staff would identify their “preferred applicants,” those that they would have chosen to serve. All eligible applicants are randomly assigned, but the probability of assignment to the program is set higher for preferred applicants than for the remaining applicants. This paper demonstrates the feasibility of the method, the cost in terms of increased sample size requirements, and the benefit in terms of improved generalizability to the population normally served by the program.
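
The mechanism PARA relies on, unequal assignment probabilities by preference status, can be sketched with simulated data; the probabilities, sample size, and impact sizes below are illustrative assumptions rather than the paper's parameters.

```python
# Sketch of Preferred Applicant Random Assignment (PARA): staff flag their
# "preferred" applicants, everyone is randomized, but preferred applicants
# get a higher assignment probability.  The impact for the normally served
# group is then estimated within the preferred stratum.  Simulated data only.
import numpy as np

rng = np.random.default_rng(2)
n = 10_000
preferred = rng.random(n) < 0.5                # staff's "would have served" flag
p_assign = np.where(preferred, 0.8, 0.3)       # assumed assignment probabilities
treated = rng.random(n) < p_assign

true_effect = np.where(preferred, 1_500, 500)  # assumed heterogeneous impact
outcome = 10_000 + true_effect * treated + rng.normal(0, 4_000, n)

pref_t = outcome[preferred & treated].mean()
pref_c = outcome[preferred & ~treated].mean()
print(f"estimated impact for preferred applicants: {pref_t - pref_c:,.0f}")
```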


Administration for Children & Families | 2010

Head Start Impact Study. Final Report.

Michael Puma; Stephen H. Bell; Ronna Cook; Camilla Heid; Gary Shapiro; Pam Broene; Frank Jenkins; Philip Fletcher; Liz Quinn; Janet Friedman; Janet Ciarico; Monica Rohacek; Gina Adams; Elizabeth Spier


Administration for Children & Families | 2005

Head Start Impact Study: First Year Findings.

Michael Puma; Stephen H. Bell; Ronna Cook; Camilla Heid; Michael Lopez


Journal of Policy Analysis and Management | 2013

External Validity in Policy Evaluations That Choose Sites Purposively

Robert B. Olsen; Larry L. Orr; Stephen H. Bell; Elizabeth A. Stuart


Industrial and Labor Relations Review | 1997

Program Applicants As a Comparison Group in Evaluating Training Programs.

Ernst W. Stromsdorfer; Stephen H. Bell; Larry L. Orr; John D. Blomquist; Glen G. Cain


Books from Upjohn Press | 1995

Program Applicants as a Comparison Group in Evaluating Training Programs: Theory and a Test

Stephen H. Bell; Larry L. Orr; John D. Blomquist; Glen G. Cain

Collaboration


Dive into Stephen H. Bell's collaborations.

Top Co-Authors

Larry L. Orr
Johns Hopkins University

Robert B. Olsen
George Washington University

Cristofer Price
American Academy of Pediatrics

Glen G. Cain
University of Wisconsin-Madison