Publication


Featured research published by Robert F. Boruch.


Prevention Science | 2005

Standards of Evidence: Criteria for Efficacy, Effectiveness and Dissemination

Brian R. Flay; Anthony Biglan; Robert F. Boruch; Felipe González Castro; Denise C. Gottfredson; Sheppard G. Kellam; Eve K. Mościcki; Steven P. Schinke; Jeffrey C. Valentine; Peter Ji

Ever-increasing demands for accountability, together with the proliferation of lists of evidence-based prevention programs and policies, led the Society for Prevention Research to charge a committee with establishing standards for identifying effective prevention programs and policies. Recognizing that interventions that are effective and ready for dissemination are a subset of effective programs and policies, and that effective programs and policies are a subset of efficacious interventions, SPR’s Standards Committee developed overlapping sets of standards. We designed these Standards to assist practitioners, policy makers, and administrators to determine which interventions are efficacious, which are effective, and which are ready for dissemination. Under these Standards, an efficacious intervention will have been tested in at least two rigorous trials that (1) involved defined samples from defined populations; (2) used psychometrically sound measures and data collection procedures; (3) analyzed their data with rigorous statistical approaches; (4) showed consistent positive effects (without serious iatrogenic effects); and (5) reported at least one significant long-term follow-up. An effective intervention under these Standards will not only meet all standards for efficacious interventions, but also will have (1) manuals, appropriate training, and technical support available to allow third parties to adopt and implement the intervention; (2) been evaluated under real-world conditions in studies that included sound measurement of the level of implementation and engagement of the target audience (in both the intervention and control conditions); (3) indicated the practical importance of intervention outcome effects; and (4) clearly demonstrated to whom intervention findings can be generalized. An intervention recognized as ready for broad dissemination under these Standards will not only meet all standards for efficacious and effective interventions, but will also provide (1) evidence of the ability to “go to scale”; (2) clear cost information; and (3) monitoring and evaluation tools so that adopting agencies can monitor or evaluate how well the intervention works in their settings. Finally, the Standards Committee identified possible standards desirable for current and future areas of prevention science as the field develops. If successful, these Standards will inform efforts in the field to find prevention programs and policies that are of proven efficacy, effectiveness, or readiness for adoption and will guide prevention scientists as they seek to discover, research, and bring to the field new prevention programs and policies.
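
The Standards thus form three nested tiers: every effective intervention must first satisfy all efficacy criteria, and every dissemination-ready intervention must satisfy both earlier tiers. The sketch below is illustrative only, with placeholder criterion names rather than the committee's official wording, showing one way that nesting could be encoded as progressively stricter checklists.

```python
# Minimal sketch (not from the paper): the SPR Standards describe nested tiers,
# where every "effective" intervention must meet all "efficacious" criteria and
# every "dissemination-ready" intervention must meet both earlier tiers.
# Criterion names are illustrative placeholders, not the official wording.

EFFICACY = {"two_rigorous_trials", "defined_samples", "sound_measures",
            "rigorous_statistics", "consistent_positive_effects", "long_term_follow_up"}
EFFECTIVENESS = EFFICACY | {"manuals_and_training", "real_world_evaluation",
                            "practical_importance", "generalizability"}
DISSEMINATION = EFFECTIVENESS | {"ability_to_scale", "cost_information",
                                 "monitoring_tools"}

def classify(evidence: set) -> str:
    """Return the highest tier whose full checklist is satisfied."""
    if DISSEMINATION <= evidence:
        return "ready for dissemination"
    if EFFECTIVENESS <= evidence:
        return "effective"
    if EFFICACY <= evidence:
        return "efficacious"
    return "not yet established"

print(classify(EFFICACY | {"manuals_and_training"}))  # -> "efficacious"
```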


BMJ | 2001

The Campbell Collaboration: Does for public policy what Cochrane does for health

Philip Davies; Robert F. Boruch

Evidence based practice has moved beyond the bounds of health care and is now central to public policymaking. In the United Kingdom attempts to modernise government [1–3] emphasise the importance of using evidence of the effectiveness of interventions as part of a drive towards higher quality and “joined up” policymaking. Similar interest in evidence based public policy is apparent in other countries. This demand for more, and better, evidence on which to develop public policy requires new sources of valid, reliable, and relevant evidence. One of these sources is the Campbell Collaboration (http://campbell.gse.upenn.edu/). This is an international organisation, inspired by the Cochrane Collaboration, which seeks to help policymakers, practitioners, and the public make well informed decisions about policy interventions by preparing, maintaining, and disseminating systematic reviews of the effectiveness of social and behavioural interventions in education, crime and justice, and social welfare. Systematic reviews can be a valid and reliable means of avoiding the bias that comes from the fact that single studies are specific to a …


The Annals of the American Academy of Political and Social Science | 2001

Meeting the Challenges of Evidence-Based Policy: The Campbell Collaboration

Anthony Petrosino; Robert F. Boruch; Haluk Soydan; Lorna Duggan; Julio Sánchez-Meca

Evidence-based policy has much to recommend it, but it also faces significant challenges. These challenges reside not only in the dilemmas faced by policy makers but also in the quality of the evaluation evidence. Some of these problems are most effectively addressed by rigorous syntheses of the literature known as systematic reviews. Other problems remain, including the range of quality in systematic reviews and their general failure to be updated in light of new evidence or disseminated beyond the research community. Based on the precedent established in health care by the international Cochrane Collaboration, the newly formed Campbell Collaboration will prepare, maintain, and make accessible systematic reviews of research on the effects of social and educational interventions. Through mechanisms such as rigorous quality control, electronic publication, and worldwide coverage of the literature, the Campbell Collaboration seeks to meet challenges posed by evidence-based policy.


Prevention Science | 2011

Replication in Prevention Science

Jeffrey C. Valentine; Anthony Biglan; Robert F. Boruch; Felipe González Castro; Linda M. Collins; Brian R. Flay; Sheppard G. Kellam; Eve K. Mościcki; Steven P. Schinke

Replication research is essential for the advancement of any scientific field. In this paper, we argue that prevention science will be better positioned to help improve public health if (a) more replications are conducted; (b) those replications are systematic, thoughtful, and conducted with full knowledge of the trials that have preceded them; and (c) state-of-the-art techniques are used to summarize the body of evidence on the effects of the interventions. Under real-world demands it is often not feasible to wait for multiple replications to accumulate before making decisions about intervention adoption. To help individuals and agencies make better decisions about intervention utility, we outline strategies that can be used to help understand the likely direction, size, and range of intervention effects as suggested by the current knowledge base. We also suggest structural changes that could increase the amount and quality of replication research, such as the provision of incentives and a more vigorous pursuit of prospective research registers. Finally, we discuss methods for integrating replications into the roll-out of a program and suggest that strong partnerships with local decision makers are a key component of success in replication research. Our hope is that this paper can highlight the importance of replication and stimulate more discussion of the important elements of the replication process. We are confident that, armed with more and better replications and state-of-the-art review methods, prevention science will be in a better position to positively impact public health.
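
The paper calls for state-of-the-art synthesis of accumulated replications without prescribing a particular estimator. As one common illustration (an assumption, not the authors' method), the sketch below pools invented effect estimates from several hypothetical replications with a standard inverse-variance fixed-effect summary.

```python
import math

def fixed_effect_summary(effects, std_errors):
    """Return (pooled effect, pooled standard error) under a fixed-effect model,
    weighting each replication by the inverse of its variance."""
    weights = [1.0 / se ** 2 for se in std_errors]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))
    return pooled, pooled_se

# Three hypothetical replications of the same prevention program
# (standardized mean differences with their standard errors).
effects = [0.30, 0.22, 0.41]
std_errors = [0.10, 0.12, 0.15]
est, se = fixed_effect_summary(effects, std_errors)
print(f"pooled effect = {est:.2f}, "
      f"95% CI = ({est - 1.96 * se:.2f}, {est + 1.96 * se:.2f})")
```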


BMJ | 2011

Impact of CONSORT extension for cluster randomised trials on quality of reporting and study methodology: review of random sample of 300 trials, 2000-8

Noah Ivers; Monica Taljaard; Stephanie N. Dixon; Carol Bennett; Andrew D. McRae; Julia Taleban; Zoe Skea; Jamie C. Brehaut; Robert F. Boruch; Martin P. Eccles; Jeremy Grimshaw; Charles Weijer; Merrick Zwarenstein; Allan Donner

Objective: To assess the impact of the 2004 extension of the CONSORT guidelines on the reporting and methodological quality of cluster randomised trials. Design: Methodological review of 300 randomly sampled cluster randomised trials. Two reviewers independently abstracted 14 criteria related to quality of reporting and four methodological criteria specific to cluster randomised trials. We compared manuscripts published before CONSORT (2000-4) with those published after CONSORT (2005-8). We also investigated differences by journal impact factor, type of journal, and trial setting. Data sources: A validated Medline search strategy. Eligibility criteria for selecting studies: Cluster randomised trials published in English language journals, 2000-8. Results: There were significant improvements in five of 14 reporting criteria: identification as cluster randomised; justification for cluster randomisation; reporting whether outcome assessments were blind; reporting the number of clusters randomised; and reporting the number of clusters lost to follow-up. No significant improvements were found in adherence to methodological criteria. Trials conducted in clinical rather than non-clinical settings and studies published in medical journals with higher impact factor or general medical journals were more likely to adhere to recommended reporting and methodological criteria overall, but there was no evidence that improvements after publication of the CONSORT extension for cluster trials were more likely in trials conducted in clinical settings or in trials published in either general medical journals or in higher impact factor journals. Conclusion: The quality of reporting of cluster randomised trials has improved in only a few aspects since the publication of the extension of CONSORT for cluster randomised trials, and no improvements at all were observed in essential methodological features. Overall, the adherence to reporting and methodological guidelines for cluster randomised trials remains suboptimal, and further efforts are needed to improve both reporting and methodology.
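
The review's core comparison is the proportion of trials meeting each reporting criterion before (2000-4) versus after (2005-8) the CONSORT extension. The sketch below uses invented counts, not the study's data, to show how one such before/after comparison of a binary reporting criterion could be tested.

```python
from scipy.stats import chi2_contingency

# Hypothetical counts (not the study's data): trials that did / did not report a
# given criterion (e.g. identification as cluster randomised), before and after
# the 2004 CONSORT extension for cluster randomised trials.
#                 reported  not reported
before_consort = [      60,           90]   # published 2000-4
after_consort  = [     105,           45]   # published 2005-8

# Chi-square test of whether the reporting proportion differs between periods.
chi2, p_value, dof, expected = chi2_contingency([before_consort, after_consort])
print(f"chi-square = {chi2:.2f}, p = {p_value:.4f}")
```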


Educational Researcher | 2014

Moving Through MOOCs: Understanding the Progression of Users in Massive Open Online Courses

Laura W. Perna; Alan Ruby; Robert F. Boruch; Nicole Wang; Janie Scull; Seher Ahmad; Chad Evans

This paper reports on the progress of users through 16 Coursera courses taught by University of Pennsylvania faculty for the first time between June 2012 and July 2013. Using descriptive analyses, this study advances knowledge by considering two definitions of massive open online course (MOOC) users (registrants and starters), comparing two approaches to measuring student progress through a MOOC course (sequential versus user driven), and examining several measures of MOOC outcomes and milestones. The patterns of user progression found in this study may not describe current or future patterns given the continued evolution of MOOCs. Nonetheless, the findings provide a baseline for future studies.
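
The study's two user definitions (registrants versus starters) and two progression measures (sequential versus user-driven) can be made concrete with a small example. The sketch below uses invented access records and hypothetical module names, not the study's Coursera data.

```python
# Invented example records: each registered user maps to the course modules
# they accessed, in the order accessed.
access_logs = {
    "user_a": ["week1", "week2", "week3"],  # started, moved sequentially
    "user_b": ["week3", "week1"],           # started, user-driven order
    "user_c": [],                           # registered but never accessed content
}
module_sequence = ["week1", "week2", "week3", "week4"]  # designed course order

registrants = set(access_logs)                             # everyone who signed up
starters = {u for u, mods in access_logs.items() if mods}  # accessed at least one module

def sequential_progress(mods):
    """Furthest point reached when progress must follow the designed module order."""
    count = 0
    for module in module_sequence:
        if module in mods:
            count += 1
        else:
            break
    return count

def user_driven_progress(mods):
    """Number of distinct modules accessed, regardless of order."""
    return len(set(mods))

for user, mods in access_logs.items():
    print(user, sequential_progress(mods), user_driven_progress(mods))
```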


Evaluation Review | 1978

Randomized Field Experiments for Program Planning, Development, and Evaluation: An Illustrative Bibliography

Robert F. Boruch; A. John McSweeny; E. Jon Soderstrom

This bibliography lists references to over 300 randomized field experiments undertaken in schools, hospitals, prisons, and other social settings. The list is divided into 10 major categories corresponding to the type of program under examination. They include: criminal and civil justice programs, mental health, training and education, mass media, information collection, utilization, commerce and industry, welfare, health, and family planning. The main purpose of the bibliography is to provide evidence on the feasibility and scope of randomized field tests, since despite their advantages, it is not always clear from managerial, political, and other constraints on research that they can be mounted.


Trials | 2011

When is informed consent required in cluster randomized trials in health research?

Andrew D. McRae; Charles Weijer; Ariella Binik; Jeremy Grimshaw; Robert F. Boruch; Jamie C. Brehaut; Allan Donner; Martin Eccles; Raphael Saginur; Angela White; Monica Taljaard

This article is part of a series of papers examining ethical issues in cluster randomized trials (CRTs) in health research. In the introductory paper in this series, we set out six areas of inquiry that must be addressed if the cluster trial is to be set on a firm ethical foundation. This paper addresses the second of the questions posed, namely, from whom, when, and how must informed consent be obtained in CRTs in health research? The ethical principle of respect for persons implies that researchers are generally obligated to obtain the informed consent of research subjects. Aspects of CRT design, including cluster randomization, cluster level interventions, and cluster size, present challenges to obtaining informed consent. Here we address five questions related to consent and CRTs: How can a study proceed if informed consent is not possible? Is consent to randomization always required? What information must be disclosed to potential subjects if their cluster has already been randomized? Is passive consent a valid substitute for informed consent? Do health professionals have a moral obligation to participate as subjects in CRTs designed to improve professional practice?

We set out a framework based on the moral foundations of informed consent and international regulatory provisions to address each of these questions. First, when informed consent is not possible, a study may proceed if a research ethics committee is satisfied that conditions for a waiver of consent are satisfied. Second, informed consent to randomization may not be required if it is not possible to approach subjects at the time of randomization. Third, when potential subjects are approached after cluster randomization, they must be provided with a detailed description of the interventions in the trial arm to which their cluster has been randomized; detailed information on interventions in other trial arms need not be provided. Fourth, while passive consent may serve a variety of practical ends, it is not a substitute for valid informed consent. Fifth, while health professionals may have a moral obligation to participate as subjects in research, this does not diminish the necessity of informed consent to study participation.


Crime & Delinquency | 2000

The Importance of Randomized Field Trials

Robert F. Boruch; Brooke Snyder; Dorothy DeMoya

This article lays out five standards for judging the importance of randomized field trials in estimating the relative effects of new programs and new variations on existing programs. These standards include contemporary evaluation policy, the historical development of trials in diverse sciences, ethics, normative practice, and the credibility of alternative approaches to estimating the effects of programs or variations. Empirical evidence and a line of reasoning bearing on each standard are made plain.


Evaluation Review | 1985

Social Policy Experimentation: A Position Paper

Richard A. Berk; Robert F. Boruch; David L. Chambers; Peter H. Rossi; Ann Dryden Witte

We review the arguments for and against randomized field experiments designed to address important questions of social policy. Based on this review, we make a number of recommendations about how the use of randomized field experiments might be fostered.

Collaboration


Dive into Robert F. Boruch's collaborations.

Top Co-Authors

Jeremy Grimshaw, Ottawa Hospital Research Institute
Allan Donner, University of Western Ontario
Charles Weijer, University of Western Ontario
Jamie C. Brehaut, Ottawa Hospital Research Institute
Monica Taljaard, Ottawa Hospital Research Institute
Andrew D. McRae, University of Western Ontario
Merrick Zwarenstein, University of Western Ontario
Raphael Saginur, Ottawa Hospital Research Institute