Rebecca A. Maynard
University of Pennsylvania
Publications
Featured research published by Rebecca A. Maynard.
Journal of Human Resources | 1987
Thomas M. Fraker; Rebecca A. Maynard
This study investigates empirically the strengths and limitations of using experimental versus nonexperimental designs for evaluating employment and training programs. The assessment compares results from an experimental-design study, the National Supported Work Demonstration, with estimated impacts of Supported Work based on analyses using comparison groups constructed from the Current Population Surveys. The results indicate that nonexperimental designs cannot be relied on to estimate the effectiveness of employment programs: impact estimates are sensitive both to the methodology used to construct the comparison group and to the analytic model. There is currently no a priori way to ensure that the results of comparison-group studies will be valid indicators of program impacts.
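To make the core finding concrete, here is a small illustrative simulation (entirely hypothetical data, not the Supported Work or CPS analyses): when selection into a program is correlated with unobserved earnings potential, a naive comparison-group contrast is biased even though the experimental contrast recovers the true impact.

```python
# Illustrative simulation of selection bias in a nonexperimental design.
# Hypothetical data only -- not the Supported Work or CPS analyses.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
true_effect = 1.0                   # true program impact on earnings

ability = rng.normal(size=n)        # unobserved earnings potential
treat_rct = rng.random(n) < 0.5     # experimental: assignment by coin flip
# Nonexperimental: lower-ability people are more likely to enroll.
treat_obs = rng.random(n) < 1 / (1 + np.exp(ability))

def earnings(treated):
    # Outcome depends on unobserved ability plus the treatment effect.
    return 2.0 * ability + true_effect * treated + rng.normal(size=n)

y_rct, y_obs = earnings(treat_rct), earnings(treat_obs)

rct_est = y_rct[treat_rct].mean() - y_rct[~treat_rct].mean()
obs_est = y_obs[treat_obs].mean() - y_obs[~treat_obs].mean()
print(f"experimental estimate:     {rct_est:.2f}")  # close to 1.0
print(f"comparison-group estimate: {obs_est:.2f}")  # biased downward
```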
Children and Youth Services Review | 1995
Rebecca A. Maynard
This paper discusses the causes and consequences of teenage childbearing. It also describes teenage parents on welfare and the life circumstances that facilitate or impede their success as young parents. The third section describes a major federal demonstration of a mandatory JOBS-type program for teenage parents and its effectiveness in promoting self-sufficiency among this population. The fourth section discusses other initiatives aimed at preventing teenage pregnancy and parenting and at promoting improved outcomes for those who do become parents. The final two sections discuss the implications of the research findings for welfare reform.
Journal of Research on Educational Effectiveness | 2013
Rebecca A. Maynard; Nianbo Dong
This paper and the accompanying tool are intended to complement existing power analysis tools by offering a framework based on Minimum Detectable Effect Size (MDES) formulae that can be used to determine sample size requirements and to estimate minimum detectable effect sizes for a range of individual- and group-random assignment designs and for common quasi-experimental designs. The paper and accompanying tool cover computation of minimum detectable effect sizes under the following study designs: individual random assignment designs, hierarchical random assignment designs (2-4 levels), block random assignment designs (2-4 levels), regression discontinuity designs (6 types), and short interrupted time-series designs. In each case, the discussion and accompanying tool consider the key factors associated with statistical power and minimum detectable effect sizes, including the level at which treatment occurs and the statistical models (e.g., fixed effects and random effects) used in the analysis. The tool also includes a module that, for one- and two-level random assignment designs, estimates the minimum sample sizes required for studies to attain user-defined minimum detectable effect sizes.
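As a concrete illustration of the MDES framework (a sketch of the standard formula, not the authors' tool itself), the code below implements Bloom's (1995) MDES formula for a simple individual random assignment design; the function name and default parameters are ours.

```python
# Minimal sketch of the MDES formula for an individual random assignment
# design (Bloom, 1995); illustrative only, not the authors' tool.
from scipy.stats import t


def mdes_individual(n, p=0.5, r2=0.0, alpha=0.05, power=0.80, covariates=0):
    """Minimum detectable effect size for a single-level randomized design.

    n          -- total sample size
    p          -- proportion of the sample assigned to treatment
    r2         -- outcome variance explained by baseline covariates
    covariates -- number of covariates in the impact model
    """
    df = n - covariates - 2                  # residual degrees of freedom
    # Two-tailed test multiplier: critical value plus power quantile.
    multiplier = t.ppf(1 - alpha / 2, df) + t.ppf(power, df)
    return multiplier * ((1 - r2) / (p * (1 - p) * n)) ** 0.5


# e.g., 500 subjects, half treated, covariates explaining 50% of variance:
print(round(mdes_individual(500, r2=0.5), 3))  # about 0.18 SD
```

The formula makes the levers of power visible: larger samples and stronger covariates (higher r2) shrink the MDES, while unbalanced allocation (p far from 0.5) inflates it.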
Peabody Journal of Education | 2007
Irma Perez-Johnson; Rebecca A. Maynard
The persistent achievement gaps among children of different race/ethnicity and socioeconomic status in the United States have commanded public, policy, and research attention on and off for about 100 years, and the issue is once again at the forefront of policy-making agendas. Debates nevertheless abound over the most promising and cost-effective strategies for addressing the problem. We examine critically the available evidence on the benefits and costs of early childhood education and conclude that early, vigorous interventions targeted at disadvantaged children offer the best chance to substantially reduce gaps in school readiness and increase the productivity of our educational systems. The available evidence fails to provide a complete road map for future investments, however. Hence, we propose a program of challenge grants to states and their subunits, coupled with waivers from regulation, to spur innovation and experimentation within this important research area. We provide examples of the types of experiments that could be funded and discuss important considerations in the development and implementation of such a research grants program.
Journal of Policy Analysis and Management | 1994
Rebecca A. Maynard
Contents:
Introduction
What Is a Successful Program?
Serving Children and Families Through the Welfare System: Challenges and Opportunities
Sites and Services: Programs that Meet the Challenges
Strategies for Meeting the Challenges
Recommendations for Action
Appendix A: The Case Studies
Appendix B: Research Approach and Methods
Selected Bibliography
Index
Journal of Children's Services | 2008
Gary W. Ritter; Rebecca A. Maynard
Academically focused tutoring programmes for young children have been promoted widely in the US, in various forms, as promising strategies for improving academic performance, particularly in reading and mathematics. A body of evidence shows the benefits of tutoring provided by certified, paid professionals; however, the evidence is less clear for tutoring programmes staffed by adult volunteers or college students. In this article, we describe a relatively large-scale university-based programme that creates tutoring partnerships between college-aged volunteers and students from surrounding elementary schools. We used a randomised trial to evaluate the effectiveness of this programme for 196 students from 11 elementary schools over one school year, focusing on academic grades and standardised test scores, confidence in academic ability, motivation and school attendance. We discuss the null findings in order to shed light on the conditions under which student support programmes can be successful.
American Journal of Evaluation | 2015
Robert C Granger; Rebecca A. Maynard
Despite bipartisan support in Washington, DC, dating back to the mid-1990s, the “what works” approach has yet to gain broad support among policymakers and practitioners. One way to build such support is to increase the usefulness of program impact evaluations for these groups. We describe three ways to make impact evaluations more useful to policy and practice: emphasize learning from all studies over sorting out winners and losers; collect better information on the conditions that shape an intervention's success or failure; and learn about the features of programs and policies that influence their effectiveness. We argue that measuring the treatment contrast between the intervention and comparison condition(s) is important for each of these changes. Measurement and analysis of the treatment contrast will increase costs, however, and policymakers and practitioners already see evaluations as expensive. Therefore, we offer suggestions for reducing costs in other areas of data collection.
Evaluation | 2000
Rebecca A. Maynard
The following is based on a keynote address presented at the III Congresso Nazionale dell'Associazione Italiana di Valutazione, Villa Gualino, Torino, 24 March 2000.
Evaluation & Research in Education | 2004
Phoebe Cottingham; Rebecca A. Maynard; Matthew Stagner
Review teams tested the systematic review procedures and principles developed under the Campbell Collaboration. Fourteen review teams selected topics for intervention reviews in social policy, education, and criminal justice. Review protocols gave criteria for the extensive research literature search, and randomised controlled trials were selected. Systematic reviewers should give careful attention to defining the review topic, setting study inclusion and exclusion criteria, handling variability in outcome measurement and study reporting, making appropriate use of statistical meta-analysis, and reporting review results. Significant differences in review results were observed depending on the review criteria and procedures used.
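To make the meta-analysis step concrete, here is a minimal sketch of the standard inverse-variance fixed-effect pooling used in such systematic reviews; the effect sizes and standard errors below are made-up placeholders, not results from the Campbell reviews.

```python
# Minimal inverse-variance fixed-effect meta-analysis; the inputs are
# made-up placeholders, not results from the Campbell Collaboration reviews.
import numpy as np

effects = np.array([0.10, 0.25, -0.05])  # per-study effect sizes (e.g., SMD)
ses = np.array([0.08, 0.12, 0.10])       # per-study standard errors

weights = 1.0 / ses**2                   # precision (inverse-variance) weights
pooled = np.sum(weights * effects) / np.sum(weights)
pooled_se = np.sqrt(1.0 / np.sum(weights))

print(f"pooled effect: {pooled:.3f} (SE {pooled_se:.3f})")
```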
Journal of Human Resources | 1985
Edward S. Cavin; Rebecca A. Maynard
The objective of this paper is to assess the usefulness of short-term program performance data in judging the relative effectiveness of Supported Work and in targeting program resources to those most likely to benefit from them. The results reveal significant negative impacts for youth who left the program for negative reasons, and the magnitude of these adverse impacts is related to the length of program participation. These results indicate that the performance of individual programs could be improved if short-run performance standards could effectively identify youth likely to leave for negative reasons.