Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Dylan S. Small is active.

Publication


Featured research published by Dylan S. Small.


The New England Journal of Medicine | 2015

Randomized Trial of Four Financial-Incentive Programs for Smoking Cessation

Scott D. Halpern; Benjamin French; Dylan S. Small; Kathryn A. Saulsgiver; Michael O. Harhay; Janet Audrain-McGovern; George Loewenstein; Troyen A. Brennan; David A. Asch; Kevin G. Volpp

BACKGROUND Financial incentives promote many health behaviors, but effective ways to deliver health incentives remain uncertain. METHODS We randomly assigned CVS Caremark employees and their relatives and friends to one of four incentive programs or to usual care for smoking cessation. Two of the incentive programs targeted individuals, and two targeted groups of six participants. One of the individual-oriented programs and one of the group-oriented programs entailed rewards of approximately $800 for smoking cessation; the others entailed refundable deposits of $150 plus $650 in reward payments for successful participants. Usual care included informational resources and free smoking-cessation aids. RESULTS Overall, 2538 participants were enrolled. Of those assigned to reward-based programs, 90.0% accepted the assignment, as compared with 13.7% of those assigned to deposit-based programs (P<0.001). In intention-to-treat analyses, rates of sustained abstinence from smoking through 6 months were higher with each of the four incentive programs (range, 9.4 to 16.0%) than with usual care (6.0%) (P<0.05 for all comparisons); the superiority of reward-based programs was sustained through 12 months. Group-oriented and individual-oriented programs were associated with similar 6-month abstinence rates (13.7% and 12.1%, respectively; P=0.29). Reward-based programs were associated with higher abstinence rates than deposit-based programs (15.7% vs. 10.2%, P<0.001). However, in instrumental-variable analyses that accounted for differential acceptance, the rate of abstinence at 6 months was 13.2 percentage points (95% confidence interval, 3.1 to 22.8) higher in the deposit-based programs than in the reward-based programs among the estimated 13.7% of the participants who would accept participation in either type of program. CONCLUSIONS Reward-based programs were much more commonly accepted than deposit-based programs, leading to higher rates of sustained abstinence from smoking. Group-oriented incentive programs were no more effective than individual-oriented programs. (Funded by the National Institutes of Health and CVS Caremark; ClinicalTrials.gov number, NCT01526265.).
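
The instrumental-variable analysis mentioned in the RESULTS is, at its core, a ratio of intention-to-treat effects. Below is a minimal sketch of that Wald-style estimator on simulated encouragement-design data; the numbers are hypothetical, not the trial's, and this illustrates the generic estimator rather than the trial's exact estimand.

```python
import numpy as np

# Hypothetical encouragement-design data (not the trial's): z is the
# randomized offer, a is acceptance of the offered program, y is
# 6-month abstinence. Acceptance rates loosely mimic the 90% vs 14%
# reported for reward- vs deposit-based offers.
rng = np.random.default_rng(0)
n = 2500
z = rng.integers(0, 2, n)                          # 1 = reward-type offer
a = (rng.random(n) < np.where(z == 1, 0.90, 0.14)).astype(int)
y = (rng.random(n) < 0.06 + 0.10 * a).astype(int)  # acceptance adds 10 points

# Wald estimator: ITT effect on the outcome divided by ITT effect on
# acceptance, i.e., the effect of acceptance among those moved by the offer.
itt_y = y[z == 1].mean() - y[z == 0].mean()
itt_a = a[z == 1].mean() - a[z == 0].mean()
print(f"Wald IV estimate of acceptance effect: {itt_y / itt_a:.3f}")
```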


Journal of Asthma | 2006

Asthma Numeracy Skill and Health Literacy

Andrea J. Apter; Jing Cheng; Dylan S. Small; Ian M. Bennett; Claire Albert; Daniel G. Fein; Maureen George; Simone Van Horne

To assess understanding of numerical concepts in asthma self-management instructions, a 4-item Asthma Numeracy Questionnaire (ANQ) was developed and read to 73 adults with persistent asthma. Participants completed the Short Test of Functional Health Literacy in Adults (STOFHLA). 12 (16%) answered all 4 numeracy items correctly; 6 (8%) answered none correctly. Participants were least likely to understand items involving risk and percentages. Low numeracy, but not STOFHLA score, was associated with a history of hospitalization for asthma. At higher STOFHLA levels there was a wide range in the total number of correct numeracy responses. Numeracy is a unique and important component of health literacy.


Statistics in Medicine | 2014

Instrumental variable methods for causal inference

Michael Baiocchi; Jing Cheng; Dylan S. Small

A goal of many health studies is to determine the causal effect of a treatment or intervention on health outcomes. Often, it is not ethically or practically possible to conduct a perfectly randomized experiment, and instead, an observational study must be used. A major challenge to the validity of observational studies is the possibility of unmeasured confounding (i.e., unmeasured ways in which the treatment and control groups differ before treatment administration, which also affect the outcome). Instrumental variables analysis is a method for controlling for unmeasured confounding. This type of analysis requires the measurement of a valid instrumental variable, which is a variable that (i) is independent of the unmeasured confounding; (ii) affects the treatment; and (iii) affects the outcome only indirectly through its effect on the treatment. This tutorial discusses the types of causal effects that can be estimated by instrumental variables analysis; the assumptions needed for instrumental variables analysis to provide valid estimates of causal effects and sensitivity analysis for those assumptions; methods of estimation of causal effects using instrumental variables; and sources of instrumental variables in health studies.
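
To make the estimation step concrete, here is a minimal two-stage least squares (2SLS) sketch on simulated data; the variable names and coefficients are illustrative, not taken from the tutorial.

```python
import numpy as np

# Simulated data with unmeasured confounding: u affects both the
# treatment d and the outcome y; the instrument z affects y only
# through d. The true effect of d on y is 2.0.
rng = np.random.default_rng(1)
n = 5000
u = rng.normal(size=n)                        # unmeasured confounder
z = rng.normal(size=n)                        # instrument
d = 0.8 * z + u + rng.normal(size=n)          # treatment, confounded by u
y = 2.0 * d + u + rng.normal(size=n)          # outcome

# Naive OLS of y on d is biased by u.
X = np.column_stack([np.ones(n), d])
ols = np.linalg.lstsq(X, y, rcond=None)[0]

# Stage 1: project d onto the instrument. Stage 2: regress y on fitted d.
Z = np.column_stack([np.ones(n), z])
d_hat = Z @ np.linalg.lstsq(Z, d, rcond=None)[0]
X2 = np.column_stack([np.ones(n), d_hat])
tsls = np.linalg.lstsq(X2, y, rcond=None)[0]

print(f"OLS slope (biased): {ols[1]:.2f}, 2SLS slope: {tsls[1]:.2f}")
```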


Journal of the American Statistical Association | 2007

Sensitivity Analysis for Instrumental Variables Regression With Overidentifying Restrictions

Dylan S. Small

Instrumental variables regression (IV regression) is a method for making causal inferences about the effect of a treatment based on an observational study in which there are unmeasured confounding variables. The method requires one or more valid instrumental variables (IVs); a valid IV is a variable that is associated with the treatment, is independent of unmeasured confounding variables, and has no direct effect on the outcome. Often there is uncertainty about the validity of the proposed IVs. When a researcher proposes more than one IV, the validity of these IVs can be tested through the “overidentifying restrictions test.” Although the overidentifying restrictions test does provide some information, the test has no power versus certain alternatives and can have low power versus many alternatives due to its omnibus nature. To fully address uncertainty about the validity of the proposed IVs, we argue that a sensitivity analysis is needed. A sensitivity analysis examines the impact of plausible amounts of invalidity of the proposed IVs on inferences for the parameters of interest. We develop a method of sensitivity analysis for IV regression with overidentifying restrictions that makes full use of the information provided by the overidentifying restrictions test but provides more information than the test by exploring sensitivity to violations of the validity of the proposed IVs in directions for which the test has low power. Our sensitivity analysis uses interpretable parameters that can be discussed with subject matter experts. We illustrate our method using a study of food demand among rural households in the Philippines.
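
The overidentifying restrictions test referred to here can be computed as a Sargan statistic. A sketch on simulated data where both proposed IVs happen to be valid (illustrative only; the abstract's point is precisely that passing this omnibus test is not sufficient evidence of validity):

```python
import numpy as np
from scipy import stats

# Two instruments for one treatment; both valid in this simulation.
rng = np.random.default_rng(2)
n = 5000
u = rng.normal(size=n)
z1, z2 = rng.normal(size=n), rng.normal(size=n)
d = 0.6 * z1 + 0.6 * z2 + u + rng.normal(size=n)
y = 2.0 * d + u + rng.normal(size=n)

# 2SLS using both instruments.
Z = np.column_stack([np.ones(n), z1, z2])
d_hat = Z @ np.linalg.lstsq(Z, d, rcond=None)[0]
X2 = np.column_stack([np.ones(n), d_hat])
beta = np.linalg.lstsq(X2, y, rcond=None)[0]
e = y - np.column_stack([np.ones(n), d]) @ beta   # 2SLS residuals

# Sargan statistic: n * R^2 from regressing the residuals on the
# instruments; chi^2 with (instruments - treatments) = 1 df under H0.
e_hat = Z @ np.linalg.lstsq(Z, e, rcond=None)[0]
r2 = 1 - np.sum((e - e_hat) ** 2) / np.sum((e - e.mean()) ** 2)
sargan = n * r2
print(f"Sargan stat: {sargan:.2f}, p = {stats.chi2.sf(sargan, df=1):.3f}")
```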


Journal of the American Statistical Association | 2010

Building a Stronger Instrument in an Observational Study of Perinatal Care for Premature Infants

Mike Baiocchi; Dylan S. Small; Scott A. Lorch; Paul R. Rosenbaum

An instrument is a random nudge toward acceptance of a treatment that affects outcomes only to the extent that it affects acceptance of the treatment. Nonetheless, in settings in which treatment assignment is mostly deliberate and not random, there may exist some essentially random nudges to accept treatment, so that use of an instrument might extract bits of random treatment assignment from a setting that is otherwise quite biased in its treatment assignments. An instrument is weak if the random nudges barely influence treatment assignment or strong if the nudges are often decisive in influencing treatment assignment. Although ideally an ostensibly random instrument is perfectly random and not biased, it is not possible to be certain of this; thus a typical concern is that even the instrument might be biased to some degree. It is known from theoretical arguments that weak instruments are invariably sensitive to extremely small biases; for this reason, strong instruments are preferred. The strength of an instrument is often taken as a given. It is not. In an evaluation of effects of perinatal care on the mortality of premature infants, we show that it is possible to build a stronger instrument, we show how to do it, and we show that success in this task is critically important. We also develop methods of permutation inference for effect ratios, a key component in an instrumental variable analysis.
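
Permutation inference for an effect ratio can be sketched with sign flips of adjusted pair differences under the null hypothesis. This is a generic simplification on simulated matched pairs, not the paper's exact procedure:

```python
import numpy as np

# Simulated matched pairs: within each pair, one unit is "encouraged"
# (receives the instrument). d_diff and y_diff are encouraged-minus-
# control differences in treatment and outcome; the true effect ratio
# (outcome change per unit of treatment change) is 2.0.
rng = np.random.default_rng(3)
pairs = 500
d_diff = rng.normal(1.0, 0.3, pairs)
y_diff = 2.0 * d_diff + rng.normal(size=pairs)

def pval(lam0, n_perm=5000):
    """Two-sided sign-flip permutation p-value for H0: effect ratio = lam0."""
    v = y_diff - lam0 * d_diff          # adjusted pair differences
    t_obs = abs(v.mean())
    signs = rng.choice([-1, 1], size=(n_perm, pairs))
    t_perm = np.abs((signs * v).mean(axis=1))
    return (t_perm >= t_obs).mean()

print(f"p at true ratio 2.0: {pval(2.0):.2f}, p at 0.0: {pval(0.0):.4f}")
```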


Journal of the American Statistical Association | 2008

War and Wages

Dylan S. Small; Paul R. Rosenbaum

An instrument manipulates a treatment that it does not entirely control, but the instrument affects the outcome only indirectly through its manipulation of the treatment. The idealized prototype is the randomized encouragement design, in which subjects are randomly assigned to receive either encouragement to accept the treatment or no such encouragement, but not all subjects comply by doing what they are encouraged to do, and the situation is such that only the treatment itself, not disregarded encouragement alone, can affect the outcome. An instrument is weak if it has only a slight impact on acceptance of the treatment, that is, if most people disregard encouragement to accept the treatment. Typical applications of instrumental variables are not ideal; encouragement is not randomized, although it may be assigned in a far less biased manner than the treatment itself. Using the concept of design sensitivity, we study the sensitivity of instrumental variable analyses to departures from the ideal of random assignment of encouragement, with particular reference to the strength of the instrument. With these issues in mind, we reanalyze a clever study by Angrist and Krueger concerning the effects of military service during World War II on subsequent earnings, in which birth cohorts of very similar but not identical age were differently “encouraged” to serve in the war. A striking feature of this example is that those who served earned more, but the effect of service on earnings appears to be negative; that is, the instrumental variables analysis reverses the sign of the naive comparison. For expository purposes, this example has the convenient feature of enabling, by selecting different birth cohorts, the creation of instruments of varied strength, from extremely weak to fairly strong, although separated by the same time interval and thus perhaps similarly biased. No matter how large the sample size becomes, even if the effect under study is quite large, studies with weak instruments are extremely sensitive to tiny biases, whereas studies with stronger instruments can be insensitive to moderate biases.
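
The abstract's central claim, that weak instruments amplify tiny biases, is easy to see numerically. In the sketch below, the instrument has a small direct effect on the outcome (a small violation of validity), and the same violation is compared under a strong and then a weak first stage; all values are illustrative.

```python
import numpy as np

# The IV (Wald) ratio is cov(z, y) / cov(z, d). Giving z a tiny direct
# effect on y adds (tiny_bias / first_stage) to the estimate, so the
# same small bias explodes as the first stage weakens.
rng = np.random.default_rng(4)
n = 100_000
true_effect, tiny_bias = 2.0, 0.05

def iv_estimate(first_stage):
    z = rng.normal(size=n)
    d = first_stage * z + rng.normal(size=n)
    y = true_effect * d + tiny_bias * z + rng.normal(size=n)
    return np.cov(z, y)[0, 1] / np.cov(z, d)[0, 1]

print(f"strong instrument: {iv_estimate(1.0):.2f}")   # about 2.05
print(f"weak instrument:   {iv_estimate(0.02):.2f}")  # about 4.5
```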


Statistics in Medicine | 2014

The use of bootstrapping when using propensity-score matching without replacement: a simulation study

Peter C. Austin; Dylan S. Small

Propensity-score matching is frequently used to estimate the effect of treatments, exposures, and interventions when using observational data. An important issue when using propensity-score matching is how to estimate the standard error of the estimated treatment effect. Accurate variance estimation permits construction of confidence intervals that have the advertised coverage rates and tests of statistical significance that have the correct type I error rates. There is disagreement in the literature as to how standard errors should be estimated. The bootstrap is a commonly used resampling method that permits estimation of the sampling variability of estimated parameters. Bootstrap methods are rarely used in conjunction with propensity-score matching. We propose two different bootstrap methods for use with propensity-score matching without replacement and examine their performance with a series of Monte Carlo simulations. The first method involves drawing bootstrap samples from the matched pairs in the propensity-score-matched sample. The second method involves drawing bootstrap samples from the original sample, estimating the propensity score separately in each bootstrap sample, and creating a matched sample within each of these bootstrap samples. The former approach was found to result in estimates of the standard error that were closer to the empirical standard deviation of the sampling distribution of estimated effects.
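
The two bootstrap schemes can be sketched around a toy greedy 1:1 match; the data, the logistic propensity model, and the matching routine below are illustrative simplifications, not the paper's implementation.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy data: two covariates, a logistic treatment model, constant effect 1.0.
rng = np.random.default_rng(5)
n = 400
x = rng.normal(size=(n, 2))
t = (rng.random(n) < 1 / (1 + np.exp(-(x @ np.array([0.5, -0.5]))))).astype(int)
y = 1.0 * t + x @ np.array([1.0, 1.0]) + rng.normal(size=n)

def matched_pairs(x, t):
    """Greedy 1:1 nearest-neighbor propensity-score match without replacement."""
    ps = LogisticRegression().fit(x, t).predict_proba(x)[:, 1]
    controls = list(np.flatnonzero(t == 0))
    pairs = []
    for i in np.flatnonzero(t == 1):
        if not controls:
            break
        j = min(controls, key=lambda c: abs(ps[i] - ps[c]))
        controls.remove(j)
        pairs.append((i, j))
    return pairs

diffs = np.array([y[i] - y[j] for i, j in matched_pairs(x, t)])
B = 200

# Method 1: resample the matched pairs themselves.
se1 = np.std([rng.choice(diffs, size=diffs.size).mean() for _ in range(B)],
             ddof=1)

# Method 2: resample the original data, then re-fit the propensity score
# and re-match within each bootstrap sample.
est2 = []
for _ in range(B):
    idx = rng.integers(0, n, n)
    xb, tb, yb = x[idx], t[idx], y[idx]
    est2.append(np.mean([yb[i] - yb[j] for i, j in matched_pairs(xb, tb)]))
se2 = np.std(est2, ddof=1)

print(f"pair-bootstrap SE: {se1:.3f}, complex-bootstrap SE: {se2:.3f}")
```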


Statistical Science | 2007

Defining and Estimating Intervention Effects for Groups that will Develop an Auxiliary Outcome

Marshall M. Joffe; Dylan S. Small; Chi-yuan Hsu



Statistics in Medicine | 2009

Mediation analysis with principal stratification

Robert Gallop; Dylan S. Small; Julia Y Lin; Michael R. Elliott; Marshall M. Joffe; Thomas R. Ten Have



JAMA | 2014

Association of the 2011 ACGME Resident Duty Hour Reforms With Mortality and Readmissions Among Hospitalized Medicare Patients

Mitesh S. Patel; Kevin G. Volpp; Dylan S. Small; Alexander S. Hill; Orit Even-Shoshan; Lisa Rosenbaum; Richard N. Ross; Lisa M. Bellini; Jingsan Zhu; Jeffrey H. Silber


Collaboration


Dive into Dylan S. Small's collaborations.

Top Co-Authors

Paul R. Rosenbaum, University of Pennsylvania
Kevin G. Volpp, University of Pennsylvania
Jing Cheng, University of California
Jingsan Zhu, University of Pennsylvania
Mitesh S. Patel, University of Pennsylvania
Scott A. Lorch, Children's Hospital of Philadelphia
Scott D. Halpern, University of Pennsylvania
David A. Asch, University of Pennsylvania
Hyunseung Kang, University of Wisconsin-Madison
Jeffrey H. Silber, Children's Hospital of Philadelphia