
Publication


Featured research published by William R. Shadish.


Psychological Bulletin | 1997

Outcome, attrition, and family-couples treatment for drug abuse: a meta-analysis and review of the controlled, comparative studies.

M. Duncan Stanton; William R. Shadish

This review synthesizes drug abuse outcome studies that included a family-couples therapy treatment condition. The meta-analytic evidence, across 1,571 cases involving an estimated 3,500 patients and family members, favors family therapy over (a) individual counseling or therapy, (b) peer group therapy, and (c) family psychoeducation. Family therapy is as effective for adults as for adolescents and appears to be a cost-effective adjunct to methadone maintenance. Because family therapy frequently had higher treatment retention rates than did nonfamily therapy modalities, it was modestly penalized in studies that excluded treatment dropouts from their analyses, as family therapy apparently had retained a higher proportion of poorer prognosis cases. Re-analysis, with dropouts regarded as failures, generally offset this artifact. Two statistical effect size measures to contend with attrition (dropout d and total attrition d) are offered for future researchers and policy makers.


Remedial and Special Education | 2013

Single-Case Intervention Research Design Standards

Thomas R. Kratochwill; John H. Hitchcock; Robert H. Horner; Joel R. Levin; Samuel L. Odom; David Rindskopf; William R. Shadish

In an effort to responsibly incorporate evidence based on single-case designs (SCDs) into the What Works Clearinghouse (WWC) evidence base, the WWC assembled a panel of individuals with expertise in quantitative methods and SCD methodology to draft SCD standards. In this article, the panel provides an overview of the SCD standards recommended by the panel (henceforth referred to as the Standards) and adopted in Version 1.0 of the WWC’s official pilot standards. The Standards are sequentially applied to research studies that incorporate SCDs. The design standards focus on the methodological soundness of SCDs, whereby reviewers assign the categories of Meets Standards, Meets Standards With Reservations, and Does Not Meet Standards to each study. Evidence criteria focus on the credibility of the reported evidence, whereby the outcome measures that meet the design standards (with or without reservations) are examined by reviewers trained in visual analysis and categorized as demonstrating Strong Evidence, Moderate Evidence, or No Evidence. An illustration of an actual research application of the Standards is provided. Issues that the panel did not address are presented as priorities for future consideration. Implications for research and the evidence-based practice movement in psychology and education are discussed. The WWC’s Version 1.0 SCD standards are currently being piloted in systematic reviews conducted by the WWC. This document reflects the initial standards recommended by the authors as well as the underlying rationale for those standards. It should be noted that the WWC may revise the Version 1.0 standards based on the results of the pilot; future versions of the WWC standards can be found at http://www.whatworks.ed.gov.


Psychological Bulletin | 2000

The effects of psychological therapies under clinically representative conditions: A meta-analysis.

William R. Shadish; Ana M. Navarro; Georg E. Matt; Glenn A. Phillips

Recently, concern has arisen that meta-analyses overestimate the effects of psychological therapies and that those therapies may not work under clinically representative conditions. This meta-analysis of 90 studies found that therapies are effective over a range of clinical representativeness. The projected effects of an ideal study of clinically representative therapy are similar to effect sizes in past meta-analyses. Effects increase with larger dose and when outcome measures are specific to treatment. Some clinically representative studies used self-selected treatment clients who were more distressed than available controls, and these quasi-experiments underestimated therapy effects. This study illustrates the joint use of fixed and random effects models, use of pretest effect sizes to study selection bias in quasi-experiments, and use of regression analysis to project results to an ideal study in the spirit of response surface modeling.
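The abstract notes the joint use of fixed- and random-effects models. A minimal sketch of how the two pooled estimates diverge when between-study heterogeneity is present, using invented effect sizes (these d values are illustrative, not taken from the 90 studies):

```python
import math

# Hypothetical study effect sizes (standardized mean differences)
# and their sampling variances -- illustrative values only.
d = [0.10, 0.45, 0.90, 0.20, 0.75]
v = [0.04, 0.03, 0.06, 0.02, 0.05]

# Fixed-effect pooled estimate: inverse-variance weights.
w = [1 / vi for vi in v]
d_fixed = sum(wi * di for wi, di in zip(w, d)) / sum(w)

# DerSimonian-Laird estimate of the between-study variance (tau^2).
Q = sum(wi * (di - d_fixed) ** 2 for wi, di in zip(w, d))
c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
tau2 = max(0.0, (Q - (len(d) - 1)) / c)

# Random-effects pooled estimate: adding tau^2 to each variance
# pulls the weights toward equality, so small studies count more.
w_re = [1 / (vi + tau2) for vi in v]
d_random = sum(wi * di for wi, di in zip(w_re, d)) / sum(w_re)

print(round(d_fixed, 3), round(tau2, 3), round(d_random, 3))
```

Here the random-effects estimate sits above the fixed-effect one because the small, high-variance studies happen to have the larger effects; with homogeneous studies (tau^2 = 0) the two estimates coincide.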


Journal of the American Statistical Association | 2008

Can Nonrandomized Experiments Yield Accurate Answers? A Randomized Experiment Comparing Random and Nonrandom Assignments

William R. Shadish; M. H. Clark; Peter M. Steiner

A key justification for using nonrandomized experiments is that, with proper adjustment, their results can well approximate results from randomized experiments. This hypothesis has not been consistently supported by empirical studies; however, previous methods used to study this hypothesis have confounded assignment method with other study features. To avoid these confounding factors, this study randomly assigned participants to be in a randomized experiment or a nonrandomized experiment. In the randomized experiment, participants were randomly assigned to mathematics or vocabulary training; in the nonrandomized experiment, participants chose their training. The study held all other features of the experiment constant; it carefully measured pretest variables that might predict the condition that participants chose, and all participants were measured on vocabulary and mathematics outcomes. Ordinary linear regression reduced bias in the nonrandomized experiment by 84–94% using covariate-adjusted randomized results as the benchmark. Propensity score stratification, weighting, and covariance adjustment reduced bias by about 58–96%, depending on the outcome measure and adjustment method. Propensity score adjustment performed poorly when the scores were constructed from predictors of convenience (sex, age, marital status, and ethnicity) rather than from a broader set of predictors that might include these. Please see the online supplements for a Letter to the Editor.
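The bias-reduction percentages above come from comparing adjusted nonrandomized estimates against the covariate-adjusted randomized result as the benchmark. A sketch of that bookkeeping with invented numbers (not the study's actual estimates):

```python
# Hypothetical effect estimates on the same outcome -- illustrative only.
benchmark = 0.50   # covariate-adjusted randomized estimate
unadjusted = 0.95  # naive nonrandomized estimate
adjusted = 0.56    # nonrandomized estimate after, e.g., propensity adjustment

# Bias is the gap from the randomized benchmark; percent bias reduction
# is how much of that gap the adjustment closes.
initial_bias = unadjusted - benchmark
remaining_bias = adjusted - benchmark
pct_reduction = 100 * (1 - abs(remaining_bias) / abs(initial_bias))

print(f"{pct_reduction:.1f}% of the initial bias removed")
```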


Journal of Consulting and Clinical Psychology | 1993

Effects of family and marital psychotherapies: a meta-analysis.

William R. Shadish; Linda M. Montgomery; Paul Wilson; Mary R. Wilson; Ivey Bright; Theresa Okwumabua

This meta-analysis of 163 randomized trials (including 59 dissertations) examines a number of questions not studied in previous syntheses. These include differences in outcome associated with different theoretical orientations, differences between marital and family therapies versus individual therapies, the clinical significance of therapy outcome, differences between marital versus family therapies in both outcomes and problems treated, and the effects of various substantive and methodological moderators of therapy outcome. The review concludes with some observations about the methodological status of this literature.


Evaluation Review | 2005

Propensity scores: An introduction and experimental test.

Jason K. Luellen; William R. Shadish; M. H. Clark

Propensity score analysis is a relatively recent statistical innovation that is useful in the analysis of data from quasi-experiments. The goal of propensity score analysis is to balance two non-equivalent groups on observed covariates to get more accurate estimates of the effects of a treatment on which the two groups differ. This article presents a general introduction to propensity score analysis, provides an example using data from a quasi-experiment compared to a benchmark randomized experiment, offers practical advice about how to do such analyses, and discusses some limitations of the approach. It also presents the first detailed instructions to appear in the literature on how to use classification tree analysis and bagging for classification trees in the construction of propensity scores. The latter two examples serve as an introduction for researchers interested in computing propensity scores using more complex classification algorithms known as ensemble methods.
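As a rough illustration of the workflow the article introduces (here with plain logistic regression and quintile stratification, not the classification-tree or bagging methods it details; all data are simulated):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Simulated quasi-experiment: a covariate x drives both self-selection
# into treatment and the outcome, so the naive estimate is biased.
x = rng.normal(size=n)
t = rng.binomial(1, 1 / (1 + np.exp(-0.8 * x)))  # treatment chosen, not assigned
y = 2.0 * t + 1.5 * x + rng.normal(size=n)       # true treatment effect = 2.0

# Estimate propensity scores with logistic regression (Newton's method).
X = np.column_stack([np.ones(n), x])
beta = np.zeros(2)
for _ in range(25):
    p = 1 / (1 + np.exp(-X @ beta))
    grad = X.T @ (t - p)
    hess = X.T @ (X * (p * (1 - p))[:, None])
    beta += np.linalg.solve(hess, grad)
ps = 1 / (1 + np.exp(-X @ beta))

naive = y[t == 1].mean() - y[t == 0].mean()

# Stratify on propensity-score quintiles and average the within-stratum
# treated-vs-control mean differences, weighted by stratum size.
stratum = np.digitize(ps, np.quantile(ps, [0.2, 0.4, 0.6, 0.8]))
est, total = 0.0, 0
for s in range(5):
    m = stratum == s
    if m[t == 1].sum() and m[t == 0].sum():  # need both groups present
        est += m.sum() * (y[m & (t == 1)].mean() - y[m & (t == 0)].mean())
        total += m.sum()
est /= total

print(round(naive, 2), round(est, 2))  # naive is inflated; est is near 2.0
```

Balancing the two groups within strata removes most, though not all, of the selection bias; the residual comes from within-stratum differences the five subclasses cannot capture.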


New Directions for Program Evaluation | 1995

Guiding Principles for Evaluators.

Dianna L. Newman; Mary Ann Scheirer; William R. Shadish; Christopher G. Wye

Presented here is the version of the American Evaluation Association Guiding Principles for Evaluators that was approved and copyrighted by the AEA board of directors and subsequently adopted by vote of the AEA membership.


Psychological Methods | 1998

Using odds ratios as effect sizes for meta-analysis of dichotomous data: A primer on methods and issues.

C. Keith Haddock; David Rindskopf; William R. Shadish

Many meta-analysts incorrectly use correlations or standardized mean difference statistics to compute effect sizes on dichotomous data. Odds ratios and their logarithms should almost always be preferred for such data. This article reviews the issues and shows how to use odds ratios in meta-analytic data, both alone and in combination with other effect size estimators. Examples illustrate procedures for estimating the weighted average of such effect sizes and methods for computing variance estimates, confidence intervals, and homogeneity tests. Descriptions of fixed- and random-effects models help determine whether effect sizes are functions of study characteristics, and a random-effects regression model, previously unused for odds ratio data, is described. Although all but the latter of these procedures are already widely known in areas such as medicine and epidemiology, the absence of their use in psychology suggests a need for this description.
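The fixed-effect procedures the primer describes can be sketched as follows, with invented 2x2 tables (the counts are illustrative, not from any real study):

```python
import math

# Hypothetical 2x2 tables from three studies -- illustrative counts only.
studies = [
    # (a, b, c, d) = treatment events, treatment non-events,
    #                control events, control non-events
    (15, 85, 30, 70),
    (40, 160, 60, 140),
    (10, 90, 18, 82),
]

log_ors, variances = [], []
for a, b, c, d in studies:
    log_ors.append(math.log((a * d) / (b * c)))
    # Woolf's large-sample variance of the log odds ratio.
    variances.append(1 / a + 1 / b + 1 / c + 1 / d)

# Fixed-effect (inverse-variance) weighted average of the log odds ratios.
w = [1 / v for v in variances]
pooled = sum(wi * lo for wi, lo in zip(w, log_ors)) / sum(w)
se = math.sqrt(1 / sum(w))
ci = (math.exp(pooled - 1.96 * se), math.exp(pooled + 1.96 * se))

# Cochran's Q homogeneity statistic (compare to chi-square with k - 1 df).
Q = sum(wi * (lo - pooled) ** 2 for wi, lo in zip(w, log_ors))

print(f"pooled OR = {math.exp(pooled):.2f}, "
      f"95% CI {ci[0]:.2f}-{ci[1]:.2f}, Q = {Q:.2f}")
```

Note that pooling happens on the log scale, where the sampling distribution is approximately normal, and the result is exponentiated back to an odds ratio at the end.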


Psychological Methods | 2010

The Importance of Covariate Selection in Controlling for Selection Bias in Observational Studies

Peter M. Steiner; Thomas D. Cook; William R. Shadish; M. H. Clark

The assumption of strongly ignorable treatment assignment is required for eliminating selection bias in observational studies. To meet this assumption, researchers often rely on a strategy of selecting covariates that they think will control for selection bias. Theory indicates that the most important covariates are those highly correlated with both the real selection process and the potential outcomes. However, when planning a study, it is rarely possible to identify such covariates with certainty. In this article, we report on an extensive reanalysis of a within-study comparison that contrasts a randomized experiment and a quasi-experiment. Various covariate sets were used to adjust for initial group differences in the quasi-experiment that was characterized by self-selection into treatment. The adjusted effect sizes were then compared with the experimental ones to identify which individual covariates, and which conceptually grouped sets of covariates, were responsible for the high degree of bias reduction achieved in the adjusted quasi-experiment. Such results provide strong clues about preferred strategies for identifying the covariates most likely to reduce bias when planning a study and when the true selection process is not known.


Journal of Consulting and Clinical Psychology | 1997

How much weight gain occurs following smoking cessation? A comparison of weight gain using both continuous and point prevalence abstinence

Robert C. Klesges; Suzan E. Winders; Andrew W. Meyers; Linda H. Eck; Kenneth D. Ward; Cynthia M. Hultquist; JoAnne W. Ray; William R. Shadish

Estimates of postcessation weight gain vary widely. This study determined the magnitude of weight gain in a cohort using both point prevalence and continuous abstinence criteria for cessation. Participants were 196 volunteers who participated in a smoking cessation program and who either continuously smoked (n = 118), were continuously abstinent (n = 51), or were point prevalent abstinent (n = 27) (i.e., quit at the 1-year follow-up visit but not at others). Continuously abstinent participants gained over 13 lb (5.90 kg) at 1 year, significantly more than continuously smoking (M = 2.4 lb, or 1.09 kg) and point prevalent abstinent participants (M = 6.7 lb, or 3.04 kg). Individual growth curve analysis confirmed that weight gain and the rate of weight gain (pounds per month) were greater among continuously abstinent participants and that these effects were independent of gender, baseline weight, smoking and dieting history, age, and education. Results suggest that studies using point prevalence abstinence to estimate postcessation weight gain may be underestimating postcessation weight gain.

Collaboration


Explore William R. Shadish's most frequent co-authors.

Top Co-Authors

David Rindskopf, City University of New York

Peter M. Steiner, University of Wisconsin-Madison

James E. Pustejovsky, University of Texas at Austin

Thomas R. Kratochwill, University of Wisconsin-Madison