Anthony Petrosino
WestEd
Publications
Featured research published by Anthony Petrosino.
Evaluation Review | 2007
Allison Gruner Gandhi; Erin Murphy-Graham; Anthony Petrosino; Sara Schwartz Chrismer; Carol H. Weiss
In an effort to promote evidence-based practice, government officials, researchers, and program developers have developed lists of model programs in the prevention field. This article reviews the evidence used by seven best-practice lists to select five model prevention programs. The authors’ examination of this research raises questions about the process used to identify and publicize programs as successful. They found limited evidence showing substantial impact on drug use behavior at posttest, with very few studies showing substantial impact at longer follow-ups. The authors advocate additional long-term follow-up studies and conclude by suggesting changes in the procedures for developing best-practice lists.
Cochrane Database of Systematic Reviews | 2013
Anthony Petrosino; Carolyn Turpin-Petrosino; Meghan E. Hollis-Peel; Julia Lavenberg
BACKGROUND: Scared Straight and other similar programs involve organized visits to prison by juvenile delinquents or children at risk for criminal behavior. Programs are designed to deter participants from future offending through firsthand observation of prison life and interaction with adult inmates. These programs remain in use despite research questioning their effectiveness. This is an update of a 2002 review.
OBJECTIVES: To assess the effects of programs comprising organized visits to prisons by juvenile delinquents (officially adjudicated, that is, convicted by a juvenile court) or pre-delinquents (children in trouble but not officially adjudicated as delinquents), aimed at deterring them from delinquency.
SEARCH METHODS: To update this review, we searched 22 electronic databases, including CENTRAL, MEDLINE, PsycINFO, and Criminal Justice Abstracts, in December 2011. In addition, we searched clinical trials registries, consulted experts, conducted Google Scholar searches, and followed up on all relevant citations.
SELECTION CRITERIA: We included studies that tested programs involving organized visits by delinquents or children at risk for delinquency to penal institutions such as prisons or reformatories. Studies with overlapping samples of juveniles and young adults (for example, ages 14 to 20 years) were included. We only considered studies that assigned participants to conditions randomly or quasi-randomly (that is, by odd/even assignment to conditions). Each study had to have a no-treatment control condition and at least one outcome measure of post-visit criminal behavior.
DATA COLLECTION AND ANALYSIS: The search methods for the original review generated 487 citations, most of which had abstracts. The lead review author screened these citations, determining that 30 were evaluation reports. Two review authors independently examined these citations and agreed that 11 were potential randomized trials. All reports were obtained. Upon inspection of the full-text reports, two review authors independently agreed to exclude two studies, resulting in nine randomized trials. The lead review author extracted data from each of the nine study reports using a specially designed instrument. In cases in which outcome information was missing from the original reports, we attempted via correspondence to retrieve the data from the original investigators. Outcome data were independently checked by a second review author (CTP). In this review, we report the results of each of the nine trials narratively. We conducted two meta-analyses of seven studies that provided post-intervention offending rates using official data. Information from other sources (for example, self-report) was either missing from some studies or lacked critical details (for example, standard deviations). We examined the immediate post-treatment effects (that is, first effects) by computing odds ratios (OR) for data on proportions of each group reoffending, and assumed both fixed-effect and random-effects models in our analyses.
MAIN RESULTS: We included nine studies in this review. All were part of the original systematic review; no new trials meeting eligibility criteria were identified through our updated searches. The studies were conducted in eight different states of the USA during the years 1967 to 1992. Nearly 1,000 (946) juveniles or young adults of different races participated, almost all males. The average age of the participants in each study ranged from 15 to 17 years. Meta-analyses of seven studies show the intervention to be more harmful than doing nothing. The OR (fixed-effect) for the first post-treatment effect on officially measured criminal behavior indicated a negative program effect (OR 1.68, 95% confidence interval (CI) 1.20 to 2.36), and was nearly identical regardless of the meta-analytic strategy (random-effects OR 1.72, 95% CI 1.13 to 2.62). Sensitivity analyses (random-effects) showed the findings were robust even when removing one study with an inadequate randomization strategy (OR 1.47, 95% CI 1.03 to 2.11), when removing one study with high attrition (OR 1.96, 95% CI 1.25 to 3.08), or both (OR 1.68, 95% CI 1.10 to 2.58).
AUTHORS' CONCLUSIONS: We conclude that programs such as Scared Straight increase delinquency relative to doing nothing at all to similar youths. Given these results, we cannot recommend this program as a crime prevention strategy. Agencies that permit such programs, therefore, must rigorously evaluate them to ensure that they do not cause more harm than good to the very citizens they pledge to protect.
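For readers unfamiliar with fixed-effect pooling of odds ratios, the following is a minimal sketch of the standard inverse-variance method on the log-odds-ratio scale. The 2x2 counts below are hypothetical, purely to illustrate the computation; they are not the review's actual study data.

```python
import math

# Hypothetical 2x2 counts for three studies (NOT the review's data):
# (reoffended_treated, not_reoffended_treated, reoffended_control, not_reoffended_control)
studies = [
    (30, 20, 22, 28),
    (25, 35, 18, 42),
    (40, 10, 33, 17),
]

# Inverse-variance fixed-effect pooling on the log odds ratio scale.
num, den = 0.0, 0.0
for a, b, c, d in studies:
    log_or = math.log((a * d) / (b * c))
    var = 1 / a + 1 / b + 1 / c + 1 / d  # Woolf's variance estimate
    weight = 1 / var
    num += weight * log_or
    den += weight

pooled = math.exp(num / den)
se = math.sqrt(1 / den)
ci_low = math.exp(num / den - 1.96 * se)
ci_high = math.exp(num / den + 1.96 * se)
print(f"pooled OR = {pooled:.2f} (95% CI {ci_low:.2f} to {ci_high:.2f})")
```

Because reoffending is a harm, a pooled OR above 1 means the treated group reoffended more often than controls, which is how the review's OR of 1.68 indicates a harmful program effect.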
American Journal of Evaluation | 2008
Carol H. Weiss; Erin Murphy-Graham; Anthony Petrosino; Allison Gruner Gandhi
Evaluators sometimes wish for a Fairy Godmother who would make decision makers pay attention to evaluation findings when choosing programs to implement. The U.S. Department of Education came close to creating such a Fairy Godmother when it required school districts to choose drug abuse prevention programs only if their effectiveness was supported by “scientific” evidence. The experience showed advantages of such a procedure (e.g., reduction in support for D.A.R.E., which evaluation had found wanting) but also shortcomings (limited and in some cases questionable evaluation evidence in support of other programs). Federal procedures for identifying successful programs appeared biased. In addition, the Fairy Godmother discounted the professional judgment of local educators and did little to improve the fit of programs to local conditions. Nevertheless, giving evaluation more clout is a worthwhile way to increase the rationality of decision making. The authors recommend research on procedures used by other agencies to achieve similar aims.
Evaluation Review | 2003
Ted Palmer; Anthony Petrosino
During the 1960s and 1970s, the California Youth Authority embarked on a series of randomized field trials to test interventions for juvenile and young adult offenders. This article examines the institutional and political reasons why rigorous tests were adopted for such interventions as the Community Treatment Program. It also describes the effect these trials had on the agency and on California justice, as well as how the experimental method eventually became less often used in the Youth Authority. The authors explore some general reasons why this happened.
Evaluation Review | 2000
Anthony Petrosino
The author examines the role of mediators and moderators in the evaluation of programs for children. The terms are defined and examples of each are presented. Using bibliometric analysis, the author examines how evaluators use mediators and moderators in treatment studies in education, juvenile justice, health care, child protection, and mental health. The use of mediators and moderators is sporadic and vague at best. An agenda for improvement is outlined that includes greater use of program theory and intensive case studies to find out why researchers in prevention and health promotion incorporate mediators and moderators more effectively in their evaluations.
Reading Research Quarterly | 2009
Cynthia Greenleaf; Anthony Petrosino
Reading Research Quarterly, 43(3), pp. 290–322, dx.doi.org/10.1598/RRQ.43.3.4
Evaluation Review | 1995
Anthony Petrosino
One of the most important stages of a meta-analysis is specifying the inclusion criteria. In other words, what studies will be included in or excluded from the quantitative review? How are these decisions made? The author presents problems and illustrations from this first phase in an ongoing meta-analysis of crime reduction programs. The eight criteria for including studies in the crime reduction meta-analysis are specified, problematic studies confronted using the criteria are listed, and rules for handling those studies to retain consistency throughout the meta-analysis are discussed. The article concludes with three recommendations for future meta-analyses of this type.
Research on Social Work Practice | 2015
Anthony Petrosino; Claire Morgan; Trevor Fronius; Emily E. Tanner-Smith; Robert F. Boruch
Due to evidence linking education and development, funding has been invested in interventions relevant to getting youth into school and keeping them there. This article reports on a systematic review of impact studies of these school enrollment interventions. Reports were identified through electronic searches of bibliographic databases and other methods. To be eligible, studies (1) assessed impact on primary or secondary school enrollment outcomes; (2) used a rigorous design; (3) were conducted in a low- or middle-income nation; (4) included at least one quantifiable measure of enrollment or related outcomes; (5) were available before December 2009; and (6) included data on participants post-1990. A coding instrument extracted data on study characteristics from each report. Standardized mean difference effect sizes were computed for the first effect reported. The sample includes 73 evaluations. The average effect size was positive across all outcomes. However, the results varied. Studies that focused on building new schools and other infrastructure interventions reported the largest average effects.
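The review above pools standardized mean difference effect sizes across studies. A minimal sketch of how one such effect size is computed from group summary statistics, using hypothetical numbers rather than any study's actual data:

```python
import math

# Hypothetical summary statistics for one enrollment study (NOT real data):
# mean enrollment rate, standard deviation, and sample size per group.
m_t, sd_t, n_t = 0.78, 0.15, 120  # treatment group
m_c, sd_c, n_c = 0.70, 0.18, 115  # control group

# Pooled standard deviation, then the standardized mean difference (Cohen's d).
sd_pooled = math.sqrt(((n_t - 1) * sd_t**2 + (n_c - 1) * sd_c**2) / (n_t + n_c - 2))
d = (m_t - m_c) / sd_pooled

# Hedges' g applies a small-sample bias correction to d.
j = 1 - 3 / (4 * (n_t + n_c) - 9)
g = j * d
print(f"d = {d:.3f}, g = {g:.3f}")
```

Expressing every study's effect in standard-deviation units is what lets a review average results over outcomes measured on different scales.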
Educational Researcher | 2012
Anthony Petrosino
This article responds to arguments by Skidmore and Thompson (this issue of Educational Researcher) that a graph published more than 10 years ago was erroneously reproduced and “gratuitously damaged” perceptions of the quality of education research. After describing the purpose of the original graph, the author counters assertions that the graph changed perceptions or that this was anything more than a case of unintentional editorial sloppiness.
Journal of Experimental Criminology | 2005
Anthony Petrosino; Haluk Soydan