Qingyuan Zhao
University of Pennsylvania
Publications
Featured research published by Qingyuan Zhao.
Journal of the American Statistical Association | 2018
Qingyuan Zhao
ABSTRACT This article proposes a new quantity called the “sensitivity value,” which is defined as the minimum strength of unmeasured confounders needed to change the qualitative conclusions of a naive analysis assuming no unmeasured confounder. We establish the asymptotic normality of the sensitivity value in pair-matched observational studies. The theoretical results are then used to approximate the power of a sensitivity analysis and select the design of a study. We explore the potential to use sensitivity values to screen multiple hypotheses in the presence of unmeasured confounding using a microarray dataset. Supplementary materials for this article are available online.
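The sensitivity value can be illustrated concretely in the pair-matched setting the abstract refers to. Below is a minimal Python sketch, assuming Rosenbaum's Γ sensitivity model, a one-sided Wilcoxon signed-rank test on the matched-pair differences d, and a normal approximation to the worst-case null distribution; the function names and the bisection search are illustrative choices, not the paper's software.

```python
import numpy as np
from scipy.stats import norm, rankdata

def worst_case_pvalue(d, gamma):
    """Upper bound on the one-sided Wilcoxon signed-rank p-value under
    Rosenbaum's sensitivity model with bias parameter gamma (normal approx.)."""
    d = np.asarray(d, dtype=float)
    d = d[d != 0]                                # drop zero differences
    ranks = rankdata(np.abs(d))                  # ranks of |d|, ties averaged
    t_obs = ranks[d > 0].sum()                   # observed signed-rank statistic
    p_plus = gamma / (1.0 + gamma)               # worst-case chance a pair is positive
    mu = p_plus * ranks.sum()
    sigma = np.sqrt(p_plus * (1.0 - p_plus) * (ranks ** 2).sum())
    return norm.sf((t_obs - mu) / sigma)

def sensitivity_value(d, alpha=0.05, gamma_max=100.0, tol=1e-4):
    """Smallest gamma at which the worst-case p-value reaches alpha, i.e. the
    bias strength at which the naive conclusion would change; returns None if
    the naive analysis (gamma = 1) is not significant to begin with."""
    lo, hi = 1.0, gamma_max
    if worst_case_pvalue(d, lo) > alpha:
        return None
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if worst_case_pvalue(d, mid) <= alpha:
            lo = mid                             # still significant, push gamma up
        else:
            hi = mid
    return lo
```

The asymptotic normality established in the paper describes how such a statistic behaves across repeated samples, which is what makes the power approximations and design comparisons in the abstract possible.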
Journal of the American Statistical Association | 2018
Qingyuan Zhao; Dylan S. Small; Paul R. Rosenbaum
ABSTRACT We discuss observational studies that test many causal hypotheses, either hypotheses about many outcomes or many treatments. To be credible, an observational study that tests many causal hypotheses must demonstrate that its conclusions are artifacts neither of multiple testing nor of small biases from nonrandom treatment assignment. In a sense that needs to be defined carefully, hidden within a sensitivity analysis for nonrandom assignment is an enormous correction for multiple testing: In the absence of bias, it is extremely improbable that multiple testing alone would create an association insensitive to moderate biases. We propose a new strategy called “cross-screening,” different from but motivated by recent work of Bogomolov and Heller on replicability. Cross-screening splits the data in half at random, uses the first half to plan a study carried out on the second half, then uses the second half to plan a study carried out on the first half, and reports the more favorable conclusions of the two studies, correcting via the Bonferroni inequality for having done two studies. If the two studies happen to concur, then they achieve Bogomolov–Heller replicability; however, importantly, replicability is not required for strong control of the family-wise error rate, and either study alone suffices for firm conclusions. In randomized studies with just a few null hypotheses, cross-screening is not an attractive method when compared with conventional methods of multiplicity control. However, cross-screening has substantially higher power when hundreds or thousands of hypotheses are subjected to sensitivity analyses in an observational study of moderate size. We illustrate the technique by comparing 46 biomarkers in individuals who consume large quantities of fish versus little or no fish. The R package CrossScreening on CRAN implements the cross-screening method. Supplementary materials for this article, including a standardized description of the materials available for reproducing the work, are available as an online supplement.
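The split-plan-test logic of cross-screening can be sketched in a few lines. The sketch below is not the CrossScreening R package's interface; it is a simplified Python illustration, assuming a user-supplied function pvalue_fn that returns one (worst-case, sensitivity-analysis) p-value per hypothesis for any subset of units, and it fixes the planning step to "keep the k most promising hypotheses," whereas the paper allows more flexible planning.

```python
import numpy as np

def cross_screen(pvalue_fn, data, alpha=0.05, k=10, seed=0):
    """Simplified cross-screening sketch: split units in half at random,
    use each half to select the k most promising hypotheses, test them on
    the other half, and combine the two analyses with a Bonferroni factor of 2.

    pvalue_fn(subset) must return a vector of per-hypothesis p-values
    computed on the rows of `data` indexed by `subset`."""
    rng = np.random.default_rng(seed)
    n = data.shape[0]
    idx = rng.permutation(n)
    half1, half2 = idx[: n // 2], idx[n // 2 :]

    rejected = set()
    for plan, test in [(half1, half2), (half2, half1)]:
        screen_p = pvalue_fn(data[plan])           # screening p-values on the planning half
        selected = np.argsort(screen_p)[:k]        # keep the k most promising hypotheses
        test_p = pvalue_fn(data[test])             # p-values on the held-out half
        # Bonferroni over the k selected hypotheses and over the two analyses
        rejected |= {int(h) for h in selected if test_p[h] <= alpha / (2 * k)}
    return sorted(rejected)
```

Splitting the level equally between the two halves is the Bonferroni correction "for having done two studies" mentioned above; a hypothesis can be rejected by either half alone, so replicability across halves is not required for error control.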
Journal of the American Statistical Association | 2018
Qingyuan Zhao; Dylan S. Small; Weijie Su
ABSTRACT In the evaluation of treatment effects, it is of major policy interest to know if the treatment is beneficial for some and harmful for others, a phenomenon known as qualitative interaction. We formulate this question as a multiple testing problem with many conservative null p-values, in which the classical multiple testing methods may lose power substantially. We propose a simple technique—conditioning—to improve the power. A crucial assumption we need is uniform conservativeness, meaning for any conservative p-value p, the conditional distribution (p/τ) | p ⩽ τ is stochastically larger than the uniform distribution on (0, 1) for any τ. We show this property holds for one-sided tests in a one-dimensional exponential family (e.g., testing for qualitative interaction) as well as testing |μ| ⩽ η using a statistic Y ∼ N(μ, 1) (e.g., testing for practical importance with threshold η). We propose an adaptive method to select the threshold τ. Our theoretical and simulation results suggest that the proposed tests gain significant power when many p-values are uniformly conservative and lose little power when no p-value is uniformly conservative. We apply our method to two educational intervention datasets. Supplementary materials for this article are available online.
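The conditioning device itself is simple to state. Here is a minimal Python sketch, assuming a fixed threshold τ and a Bonferroni correction over the selected hypotheses; the paper's adaptive choice of τ and its other corrections are not reproduced, and the function name is hypothetical.

```python
import numpy as np

def conditional_bonferroni(pvals, tau=0.1, alpha=0.05):
    """Keep only p-values at or below tau, rescale them to p/tau, and apply
    Bonferroni over the selected set. Validity of the rescaled p-values rests
    on uniform conservativeness: (p/tau) | p <= tau is stochastically larger
    than Uniform(0, 1)."""
    pvals = np.asarray(pvals, dtype=float)
    selected = np.flatnonzero(pvals <= tau)       # hypotheses surviving the screen
    if selected.size == 0:
        return selected
    rescaled = pvals[selected] / tau              # conditional p-values
    return selected[rescaled <= alpha / selected.size]   # indices of rejections
```

Uniformly conservative null p-values rarely survive the screen, so the Bonferroni divisor shrinks and power improves when many nulls are conservative, which is the gain the abstract describes.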
Knowledge Discovery and Data Mining | 2015
Qingyuan Zhao; Murat A. Erdogdu; Hera Y. He; Anand Rajaraman; Jure Leskovec
Annals of Statistics | 2017
Jingshu Wang; Qingyuan Zhao; Trevor Hastie; Art B. Owen
arXiv: Methodology | 2016
Qingyuan Zhao
arXiv: Applications | 2018
Qingyuan Zhao; Jingshu Wang; Jack Bowden; Dylan S. Small
arXiv: Methodology | 2017
Qingyuan Zhao; Dylan S. Small; Ashkan Ertefaie
arXiv: Applications | 2018
Qingyuan Zhao; Yang Chen; Jingshu Wang; Dylan S. Small