
Publications

Featured research published by David Haziza.


Journal of Official Statistics | 2014

An Adaptive Data Collection Procedure for Call Prioritization

Jean François Beaumont; Cynthia Bocci; David Haziza

We propose an adaptive data collection procedure for call prioritization in the context of computer-assisted telephone interview surveys. Our procedure is adaptive in the sense that the effort assigned to a sample unit may vary from one unit to another and may also vary during data collection. The goal of an adaptive procedure is usually to increase quality for a given cost or, alternatively, to reduce cost for a given quality. The quality criterion often considered in the literature is the nonresponse bias of an estimator that is not adjusted for nonresponse. Although the reduction of the nonresponse bias is a desirable goal, we argue that it is not a useful criterion to use at the data collection stage of a survey because the bias that can be removed at this stage through an adaptive collection procedure can also be removed at the estimation stage through appropriate nonresponse weight adjustments. Instead, we develop a procedure of call prioritization that, given the selected sample, attempts to minimize the conditional variance of a nonresponse-adjusted estimator subject to an overall budget constraint. We evaluate the performance of our procedure in a simulation study.


Journal of Statistical Theory and Practice | 2010

Variance Estimation in Two-Stage Cluster Sampling under Imputation for Missing Data

David Haziza; J. N. K. Rao

Variance estimation in the presence of imputed data has been widely studied in the literature. It is well known that treating the imputed values as if they were true values could lead to serious underestimation of the true variance, especially if the response rates are low. In this paper, we consider the problem of variance estimation using a model, in the context of two-stage cluster sampling designs, which are widely used in social and household surveys. In cluster sampling designs, units in the same neighborhood tend to have similar characteristics (e.g., income, education level). It is thus important to take account of the intra-cluster correlation in formulating the model and then derive variance estimators under the appropriate model. We consider weighted random hot-deck imputation and derive consistent variance estimators under two distinct frameworks: (i) the two-phase framework and (ii) the reverse framework. In the case of the two-phase framework, we use a variance estimation method proposed by Särndal (1992), whereas we use a method developed by Fay (1991) and Shao and Steel (1999) in the case of the reverse framework. Finally, we perform a simulation study to evaluate the performance of the proposed variance estimators in terms of relative bias. We conclude that the variance estimators obtained by Shao-Steel's method are more robust to model misspecification than those derived using Särndal's method.
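As a rough illustration of the imputation mechanism studied in this paper, weighted random hot-deck imputation replaces each missing value with the observed value of a donor drawn from the respondents, with selection probability proportional to the donor's survey weight. The sketch below is deliberately simplified (a single imputation class, ignoring the cluster structure that the paper's variance estimators account for):

```python
import numpy as np

def weighted_random_hot_deck(y, respond, w, rng):
    """Replace each missing value with a donor's observed value,
    drawing donors among respondents with probability proportional
    to their survey weights (single imputation class)."""
    y = np.asarray(y, dtype=float).copy()
    donors = np.flatnonzero(respond)
    p = w[donors] / w[donors].sum()   # donor selection probabilities
    for i in np.flatnonzero(~respond):
        y[i] = y[rng.choice(donors, p=p)]
    return y
```

Every imputed value is thus an actually observed value, which preserves the distributional shape of the respondents, at the price of the extra imputation variance that motivates the variance estimators discussed above.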


Biometrika | 2017

Multiply robust imputation procedures for the treatment of item nonresponse in surveys

Sixia Chen; David Haziza

Item nonresponse in surveys is often treated through some form of imputation. We introduce multiply robust imputation in finite population sampling. This is closely related to multiple robustness, which extends double robustness. In practice, multiple nonresponse models and multiple imputation models may be fitted, each involving different subsets of covariates and possibly different link functions. An imputation procedure is said to be multiply robust if the resulting estimator is consistent when all models but one are misspecified. A jackknife variance estimator is proposed and shown to be consistent. Random and fractional imputation procedures are discussed. A simulation study suggests that the proposed estimation procedures have low bias and high efficiency.


Statistical Science | 2017

Approaches to Improving Survey-Weighted Estimates

Qixuan Chen; Michael R. Elliott; David Haziza; Ye Yang; Malay Ghosh; Roderick J. A. Little; Joseph Sedransk; Mary E. Thompson

In sample surveys, the sample units are typically chosen using a complex design. This may lead to a selection effect and, if uncorrected in the analysis, may lead to biased inferences. To mitigate the effect on inferences of deviations from a simple random sample, a common technique is to use survey weights in the analysis. This article reviews approaches to address possible inefficiency in estimation resulting from such weighting. To improve inferences we emphasize modifications of the basic design-based weight, that is, the inverse of a unit's inclusion probability. These techniques include weight trimming, weight modelling and incorporating weights via models for survey variables. We start with an introduction to survey weighting, including methods derived from both the design-based and model-based perspectives. Then we present the rationale and a taxonomy of methods for modifying the weights. We next describe an extensive numerical study to compare these methods. Using as the criteria relative bias, relative mean square error, confidence or credible interval width and coverage probability, we compare the alternative methods and summarize our findings. To supplement this numerical study we use Texas school data to compare the distributions of the weights for several methods. We also make general recommendations, describe limitations of our numerical study and make suggestions for further investigation.
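One of the weight modifications reviewed here, weight trimming, can be sketched in a few lines. This is an illustrative simplification rather than any specific method from the article: the cap is taken as a given input, and note that rescaling to preserve the total can push a trimmed weight slightly back above the cap:

```python
import numpy as np

def trim_weights(w, cap):
    """Cap extreme survey weights at `cap`, then rescale all weights
    so that the total weight (the estimated population size) is
    unchanged."""
    trimmed = np.minimum(w, cap)
    return trimmed * w.sum() / trimmed.sum()
```

Trimming trades a small bias (the trimmed units no longer fully represent their share of the population) for a reduction in the variance contributed by extreme weights, which is exactly the bias-variance trade-off the numerical study above evaluates.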


METRON | 2017

Multiply robust imputation procedures for zero-inflated distributions in surveys

Sixia Chen; David Haziza

Item nonresponse in surveys is usually treated by some form of single imputation. In practice, the survey variable subject to missing values may exhibit a large number of zero-valued observations. In this paper, we propose multiply robust imputation procedures for treating this type of variable. Our procedures may be based on multiple imputation models and/or multiple nonresponse models. An imputation procedure is said to be multiply robust if the resulting estimator is consistent when all models but one are misspecified. The variance of the imputed estimators is estimated through a generalized jackknife variance estimation procedure. Results from a simulation study suggest that the proposed procedures perform well in terms of bias, efficiency and coverage rate.


Journal of the American Statistical Association | 2018

A Cautionary Tale on Instrumental Calibration for the Treatment of Nonignorable Unit Nonresponse in Surveys

Éric Lesage; David Haziza; Xavier D’Haultfœuille

Response rates have been steadily declining over the last decades, making survey estimates vulnerable to nonresponse bias. To reduce the potential bias, two weighting approaches are commonly used in National Statistical Offices: the one-step and the two-step approaches. In this article, we focus on the one-step approach, whereby the design weights are modified in a single step with two simultaneous goals in mind: reduce the nonresponse bias and ensure the consistency between survey estimates and known population totals. In particular, we examine the properties of instrumental calibration, a special case of the one-step approach that has received a lot of attention in the literature in recent years. Despite the rich literature on the topic, there remain some important gaps that this article aims to fill. First, we give a set of sufficient conditions required for establishing the consistency of instrumental calibration estimators. Also, we show that the latter may suffer from a large bias when some of these conditions are violated. Results from a simulation study support our findings. Supplementary materials for this article are available online.


Computational Statistics & Data Analysis | 2018

Jackknife empirical likelihood method for multiply robust estimation with missing data

Sixia Chen; David Haziza

A novel jackknife empirical likelihood method for constructing confidence intervals for multiply robust estimators is proposed in the context of missing data. Under mild regularity conditions, the proposed jackknife empirical likelihood ratio has been shown to converge to a standard chi-square distribution. A simulation study supports the findings and shows the benefits of the proposed method. The latter has also been applied to 2016 National Health Interview Survey data.
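For context, the delete-one jackknife underlying this construction can be sketched as follows. This is the generic textbook version for a simple sample, not the survey-adjusted, multiply robust replication scheme of the paper:

```python
import numpy as np

def jackknife_variance(y, estimator):
    """Delete-one jackknife variance estimate: recompute the estimator
    with each observation removed, then measure the spread of the
    leave-one-out replicates around their mean."""
    y = np.asarray(y, dtype=float)
    n = len(y)
    replicates = np.array([estimator(np.delete(y, i)) for i in range(n)])
    return (n - 1) / n * np.sum((replicates - replicates.mean()) ** 2)
```

For the sample mean, this reproduces the familiar s²/n; the empirical-likelihood variant replaces the normal-approximation interval built from such a variance estimate with one calibrated against a chi-square limit.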


Statistical Science | 2017

Construction of Weights in Surveys: A Review

David Haziza; Jean François Beaumont

Weighting is one of the central steps in surveys. The typical weighting process involves three major stages. At the first stage, each unit is assigned a base weight, which is defined as the inverse of its inclusion probability. The base weights are then modified to account for unit nonresponse. At the last stage, the nonresponse-adjusted weights are further modified to ensure consistency between survey estimates and known population totals. When needed, the weights undergo a last modification through weight trimming or weight smoothing methods in order to improve the efficiency of survey estimates. This article provides an overview of the various stages involved in the typical weighting process used by national statistical offices.
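The three stages described in this review can be sketched numerically. The version below is deliberately minimal, with assumptions not taken from the article: a single nonresponse adjustment class and ratio calibration on one auxiliary variable, whereas production systems use response-propensity classes and multivariate calibration:

```python
import numpy as np

def survey_weights(pi, respond, pop_total_x, x):
    """Illustrative three-stage weighting:
    1. base weight = inverse of the inclusion probability;
    2. nonresponse adjustment: inflate respondent weights so they
       carry the total weight of the full sample (one class);
    3. ratio calibration so the weighted total of the auxiliary
       variable x matches its known population total."""
    d = 1.0 / pi                              # stage 1: base weights
    adj = d.sum() / d[respond].sum()          # stage 2: adjustment factor
    w = d[respond] * adj
    w *= pop_total_x / (w * x[respond]).sum() # stage 3: calibration
    return w
```

After stage 3 the weighted estimate of the auxiliary total reproduces the known population figure exactly, which is the consistency property the review refers to.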


Convegno della Società Italiana di Statistica | 2016

Robustness in Survey Sampling Using the Conditional Bias Approach with R Implementation

Cyril Favre-Martinoz; Anne Ruiz-Gazen; Jean François Beaumont; David Haziza

The classical tools of robust statistics have to be adapted to the finite population context. Recently, a unified approach for robust estimation in surveys has been introduced. It is based on an influence measure called the conditional bias, which makes it possible to take into account the particular finite population framework and the sampling design. In the present paper, we focus on the design-based approach and recall the main properties of the conditional bias and how it can be used to define a general class of robust estimators of a total. The link between this class and the well-known winsorized estimators is detailed. We also recall how the approach can be adapted for estimating domain totals in a robust and consistent way. The implementation in R of the proposed methodology is presented, with functions that estimate the conditional bias, compute the proposed robust estimators and compute the weights associated with the winsorized estimator for particular designs. A function for consistent estimation of domain totals is also provided.


Calcutta Statistical Association Bulletin | 2016

Revisiting Basu's Circus Example: Another Look at the Horvitz-Thompson Estimator

Malay Ghosh; David Haziza

The objective of this article is to provide a critical appraisal of the classical Horvitz-Thompson (HT) estimator used in survey sampling, and to examine when and where it is effective. For illustration, we have brought in the hilarious circus example of Basu [1], where the HT estimator led to a disastrous result. We have pointed out what went wrong with this example and, in the process, have also discussed what one needs for a successful application of the HT estimator. We also provide a model-based interpretation of the HT estimator, and again discuss the success or failure of the HT estimator from a model-based perspective.
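The HT estimator itself is one line: each sampled value is inflated by the inverse of its inclusion probability. The sketch below (with made-up numbers, purely for illustration) also hints at the failure mode behind Basu's circus story: a unit sampled with a very small inclusion probability receives an enormous weight, so a single unlucky draw can dominate the estimate even though the estimator is design-unbiased over repeated sampling:

```python
import numpy as np

def horvitz_thompson(y_sample, pi_sample):
    """Horvitz-Thompson estimator of a population total:
    sum of sampled y-values weighted by 1 / inclusion probability."""
    return float(np.sum(np.asarray(y_sample) / np.asarray(pi_sample)))

# A unit with y = 100 drawn with inclusion probability 0.01 contributes
# about 10,000 to the estimated total -- the weight, not the data,
# drives the result when probabilities are badly chosen.
```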

Collaboration

Top co-authors of David Haziza:

Guillaume Chauvet (École Normale Supérieure)


Sixia Chen (University of Oklahoma Health Sciences Center)
