
Publication


Featured research published by Roderick J. A. Little.


Journal of the American Statistical Association | 1988

A test of missing completely at random for multivariate data with missing values

Roderick J. A. Little

A common concern when faced with multivariate data with missing values is whether the missing data are missing completely at random (MCAR); that is, whether missingness does not depend on the variables in the data set. One way of assessing this is to compare the means of recorded values of each variable between groups defined by whether other variables in the data set are missing or not. Although informative, this procedure yields potentially many correlated statistics for testing MCAR, resulting in multiple-comparison problems. This article proposes a single global test statistic for MCAR that uses all of the available data. The asymptotic null distribution is given, and the small-sample null distribution is derived for multivariate normal data with a monotone pattern of missing data. The test reduces to a standard t test when the data are bivariate with missing data confined to a single variable. A limited simulation study of empirical sizes for the test applied to normal and nonnormal data suggests th...
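
The global test can be sketched in a few lines. This is a simplified illustration rather than the published procedure: the paper plugs EM-based maximum likelihood estimates of the mean vector and covariance matrix into the statistic, whereas the sketch below substitutes available-case (pairwise) estimates to stay short, and the function name little_mcar_stat is made up for the example.

```python
import numpy as np
import pandas as pd
from scipy import stats

def little_mcar_stat(df):
    """Simplified version of the d^2 statistic for testing MCAR.

    Rows are grouped by missingness pattern, and each pattern's observed-variable
    means are compared with overall estimates of the mean vector and covariance.
    NOTE: the published test uses EM-based ML estimates of the mean and covariance;
    available-case (pairwise) estimates stand in here only to keep the sketch short.
    """
    data = df.to_numpy(dtype=float)
    p = data.shape[1]
    grand_mean = np.nanmean(data, axis=0)
    grand_cov = pd.DataFrame(data).cov(min_periods=1).to_numpy()  # pairwise covariances

    patterns = np.isnan(data)
    d2, dof = 0.0, 0
    for pat in np.unique(patterns, axis=0):          # loop over missingness patterns
        obs = ~pat
        if obs.sum() == 0:
            continue                                 # pattern with nothing observed
        rows = data[(patterns == pat).all(axis=1)][:, obs]
        diff = rows.mean(axis=0) - grand_mean[obs]
        sub_cov = grand_cov[np.ix_(obs, obs)]
        d2 += rows.shape[0] * diff @ np.linalg.solve(sub_cov, diff)
        dof += obs.sum()
    dof -= p                                         # sum of observed counts minus p
    return d2, dof, stats.chi2.sf(d2, dof)
```

Under MCAR the statistic is referred to a chi-squared distribution whose degrees of freedom equal the number of observed variables summed over patterns, minus the number of variables.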


Journal of the American Statistical Association | 1995

Modeling the drop-out mechanism in repeated-measures studies

Roderick J. A. Little

Subjects often drop out of longitudinal studies prematurely, yielding unbalanced data with unequal numbers of measures for each subject. Modern software programs for handling unbalanced longitudinal data improve on methods that discard the incomplete cases by including all the data, but also yield biased inferences under plausible models for the drop-out process. This article discusses methods that simultaneously model the data and the drop-out process within a unified model-based framework. Models are classified into two broad classes, random-coefficient selection models and random-coefficient pattern-mixture models, depending on how the joint distribution of the data and drop-out mechanism is factored. Inference is likelihood-based, via maximum likelihood or Bayesian methods. A number of examples in the literature are placed in this framework, and possible extensions are outlined. Data collection on the nature of the drop-out process is advocated to guide the choice of model. In cases where the drop-...


Journal of the American Statistical Association | 1992

Regression with Missing X's: A Review

Roderick J. A. Little

Journal of the American Statistical Association, Vol. 87, No. 420 (Dec. 1992), pp. 1227-. Stable URL: http://www.jstor.org/stable/2290664


Journal of the American Statistical Association | 1989

Robust statistical modeling using the t distribution

Kenneth Lange; Roderick J. A. Little; Jeremy M. G. Taylor

The t distribution provides a useful extension of the normal for statistical modeling of data sets involving errors with longer-than-normal tails. An analytical strategy based on maximum likelihood for a general model with multivariate t errors is suggested and applied to a variety of problems, including linear and nonlinear regression, robust estimation of the mean and covariance matrix with missing data, unbalanced multivariate repeated-measures data, multivariate modeling of pedigree data, and multivariate nonlinear regression. The degrees of freedom parameter of the t distribution provides a convenient dimension for achieving robust statistical inference, with moderate increases in computational complexity for many models. Estimation of precision from asymptotic theory and the bootstrap is discussed, and graphical methods for checking the appropriateness of the t distribution are presented.
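
A minimal sketch of the weighting idea for the simplest case, a univariate t model with the degrees of freedom nu held fixed; fixing nu is an assumption made here for brevity, since the paper treats general multivariate models and also discusses estimating nu.

```python
import numpy as np

def t_location_scale_em(x, nu=4.0, n_iter=100, tol=1e-8):
    """EM estimation of the location and scale of a univariate t_nu model.

    Observations far in the tails receive small weights w_i, which is what
    makes the fit robust to heavy-tailed errors. nu is held fixed here; the
    general model also treats nu as a parameter.
    """
    x = np.asarray(x, dtype=float)
    mu, sigma2 = np.median(x), np.var(x)          # robust-ish starting values
    for _ in range(n_iter):
        # E-step: expected latent precision multiplier for each observation
        w = (nu + 1.0) / (nu + (x - mu) ** 2 / sigma2)
        # M-step: weighted mean; scale update uses denominator n, per the EM derivation
        mu_new = np.sum(w * x) / np.sum(w)
        sigma2_new = np.mean(w * (x - mu_new) ** 2)
        if abs(mu_new - mu) < tol and abs(sigma2_new - sigma2) < tol:
            mu, sigma2 = mu_new, sigma2_new
            break
        mu, sigma2 = mu_new, sigma2_new
    return mu, sigma2
```

The scale update divides by n rather than by the sum of the weights, which is what the standard EM derivation for the t model gives; the extra cost over a normal-model fit is just the weight computation in each iteration.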


The New England Journal of Medicine | 2012

The Prevention and Treatment of Missing Data in Clinical Trials

Roderick J. A. Little; Ralph B. D'Agostino; Michael L. Cohen; Kay Dickersin; Scott S. Emerson; John T. Farrar; Constantine Frangakis; Joseph W. Hogan; Geert Molenberghs; Susan A. Murphy; James D. Neaton; Andrea Rotnitzky; Daniel O. Scharfstein; Weichung J. Shih; Jay P. Siegel; Hal S. Stern

Missing data in clinical trials can have a major effect on the validity of the inferences that can be drawn from the trial. This article reviews methods for preventing missing data and, failing that, dealing with data that are missing.


Journal of the American Statistical Association | 1993

Pattern-Mixture Models for Multivariate Incomplete Data

Roderick J. A. Little

Consider a random sample on variables X1, …, XV with some values of XV missing. Selection models specify the distribution of X1, …, XV over respondents and nonrespondents to XV, and the conditional distribution that XV is missing given X1, …, XV. In contrast, pattern-mixture models specify the conditional distribution of X1, …, XV given that XV is observed or missing respectively and the marginal distribution of the binary indicator for whether or not XV is missing. For multivariate data with a general pattern of missing values, the literature has tended to adopt the selection-modeling approach (see for example Little and Rubin); here, pattern-mixture models are proposed for this more general problem. Pattern-mixture models are chronically underidentified; in particular for the case of univariate nonresponse mentioned above, there are no data on the distribution of XV given X1, …, XV-1 in the stratum with XV missing. Thus the models require restrictions or prior information to identify the paramet...
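
As an illustration of the factorization, here is a minimal sketch for the bivariate case with X2 sometimes missing, using a complete-case restriction as one possible identifying assumption; the choice of restriction, the linear-regression form, and the helper name are illustrative and not prescribed by the paper.

```python
import numpy as np

def pattern_mixture_mean(x1, x2):
    """Pattern-mixture estimate of E[X2] for bivariate data with X2 sometimes
    missing (np.nan) and X1 fully observed.

    The marginal mean is a mixture of pattern-specific means weighted by the
    pattern proportions. The mean of X2 in the missing pattern is not
    identified from the data alone, so a complete-case restriction is imposed:
    the regression of X2 on X1 among respondents is borrowed for nonrespondents.
    """
    x1, x2 = np.asarray(x1, float), np.asarray(x2, float)
    miss = np.isnan(x2)
    pi_miss = miss.mean()                       # P(X2 missing)
    mean_obs = x2[~miss].mean()                 # E[X2 | observed pattern]
    beta1, beta0 = np.polyfit(x1[~miss], x2[~miss], 1)
    mean_mis = beta0 + beta1 * x1[miss].mean()  # E[X2 | missing pattern] under the restriction
    return (1 - pi_miss) * mean_obs + pi_miss * mean_mis
```

Different identifying restrictions lead to different estimates, which is exactly why the abstract stresses that these models require restrictions or prior information; varying the restriction gives a natural sensitivity analysis.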


Sociological Methods & Research | 1989

The Analysis of Social Science Data with Missing Values

Roderick J. A. Little; Donald B. Rubin

Methods for handling missing data in social science data sets are reviewed. Limitations of common practical approaches, including complete-case analysis, available-case analysis and imputation, are illustrated on a simple missing-data problem with one complete and one incomplete variable. Two more principled approaches, namely maximum likelihood under a model for the data and missing-data mechanism and multiple imputation, are applied to the bivariate problem. General properties of these methods are outlined, and applications to more complex missing-data problems are discussed. The EM algorithm, a convenient method for computing maximum likelihood estimates in missing-data problems, is described and applied to two common models, the multivariate normal model for continuous data and the multinomial model for discrete data. Multiple imputation under explicit or implicit models is recommended as a method that retains the advantages of imputation and overcomes its limitations.
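
To make the EM description concrete, here is a minimal sketch for the simplest multivariate normal case, one complete and one incomplete variable, mirroring the bivariate problem discussed in the review; the function name, starting values, and fixed iteration count are illustrative choices.

```python
import numpy as np

def em_bivariate_normal(x1, x2, n_iter=200):
    """EM for the bivariate normal with X1 complete and X2 partly missing (np.nan).

    E-step: replace missing X2 values by their expected sufficient statistics
    given X1 under the current parameters. M-step: recompute the mean vector
    and covariance matrix from the completed statistics.
    """
    x1, x2 = np.asarray(x1, float), np.asarray(x2, float)
    miss = np.isnan(x2)
    # starting values from the complete cases
    mu = np.array([x1.mean(), x2[~miss].mean()])
    cov = np.cov(x1[~miss], x2[~miss], bias=True)
    for _ in range(n_iter):
        # E-step: conditional mean and variance of X2 given X1
        beta = cov[0, 1] / cov[0, 0]
        cond_mean = mu[1] + beta * (x1 - mu[0])
        cond_var = cov[1, 1] - beta * cov[0, 1]
        x2_fill = np.where(miss, cond_mean, x2)
        s2_fill = np.where(miss, cond_mean ** 2 + cond_var, x2 ** 2)
        # M-step: ML updates from the completed sufficient statistics
        mu = np.array([x1.mean(), x2_fill.mean()])
        cross = np.mean(x1 * x2_fill) - mu[0] * mu[1]
        cov = np.array([
            [np.mean(x1 ** 2) - mu[0] ** 2, cross],
            [cross, np.mean(s2_fill) - mu[1] ** 2],
        ])
    return mu, cov
```

The same E-step/M-step structure carries over to general missingness patterns and to the multinomial model for discrete data mentioned in the abstract, with the sufficient statistics changing accordingly.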


Journal of Business & Economic Statistics | 1988

Missing-Data Adjustments in Large Surveys

Roderick J. A. Little

Useful properties of a general-purpose imputation method for numerical data are suggested and discussed in the context of several large government surveys. Imputation based on predictive mean matching is proposed as a useful extension of methods in existing practice, and versions of the method are presented for unit nonresponse and item nonresponse with a general pattern of missingness. Extensions of the method to provide multiple imputations are also considered. Pros and cons of weighting adjustments are discussed, and weighting-based analogs to predictive mean matching are outlined.
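
A single-imputation sketch of predictive mean matching under a linear regression of y on fully observed covariates; the donor-pool size, the random donor draw, and the function name are illustrative assumptions rather than details taken from the article.

```python
import numpy as np

def pmm_impute(y, X, k=5, rng=None):
    """Impute missing entries of a numerical variable y (np.nan) by predictive
    mean matching: each nonrespondent receives the observed y of a donor whose
    predicted mean is close to the nonrespondent's predicted mean, so imputed
    values are always values that actually occurred in the data.
    """
    rng = np.random.default_rng(rng)
    y = np.asarray(y, float)
    X = np.column_stack([np.ones(len(y)), np.asarray(X, float)])  # add intercept
    miss = np.isnan(y)
    # regression of y on X fitted to the respondents
    beta, *_ = np.linalg.lstsq(X[~miss], y[~miss], rcond=None)
    pred = X @ beta
    donor_preds, donor_vals = pred[~miss], y[~miss]
    y_imp = y.copy()
    for i in np.where(miss)[0]:
        # donor pool: the k respondents with the closest predicted means
        nearest = np.argsort(np.abs(donor_preds - pred[i]))[:k]
        y_imp[i] = donor_vals[rng.choice(nearest)]
    return y_imp
```

Because every imputed value is an observed y from a donor, the method avoids implausible imputations (for example, negative incomes) that a purely model-based prediction can produce; repeating the draw with different random seeds and proper reflection of parameter uncertainty is one route to the multiple imputations mentioned in the abstract.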


International Statistical Review | 1986

Survey Nonresponse Adjustments for Estimates of Means

Roderick J. A. Little

Theoretical properties of nonresponse adjustments based on adjustment cells are studied, for estimates of means for the whole population and in subclasses that cut across adjustment cells. Three forms of adjustment are considered: weighting by the inverse response rate within cells, post-stratification on known population cell counts, and mean imputation within adjustment cells. Two dimensions of covariate information x are distinguished as particularly useful for reducing nonresponse bias: the response propensity f(x) and the conditional mean ŷ(x) of the outcome variable y given x. Weighting within adjustment cells based on an estimate of f(x) controls bias, but not necessarily variance. Imputation within adjustment cells based on ŷ(x) controls bias and variance. Post-stratification yields some gains in efficiency for overall population means, and smaller gains for means in subclasses of the population. A simulation study similar to that of Holt & Smith (1979) is described which explores the mean squared error properties of the estimators. Finally, some modifications of response propensity weighting to control variance are suggested.
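
A minimal sketch of the first of the three adjustments, weighting respondents by the inverse response rate within adjustment cells; the pandas layout and column-name arguments are assumptions made for the example.

```python
import pandas as pd

def weighting_class_estimate(df, y_col, cell_col):
    """Weighting-class estimate of the mean of y.

    Within each adjustment cell, respondents are weighted by
    (cell size) / (number of respondents in the cell); nonrespondents
    (np.nan in y_col) contribute only through the cell sizes.
    """
    est_num, est_den = 0.0, 0.0
    for _, cell in df.groupby(cell_col):
        resp = cell[y_col].notna()
        if resp.sum() == 0:
            continue  # a cell with no respondents cannot be adjusted this way
        w = len(cell) / resp.sum()            # inverse response rate in the cell
        est_num += w * cell.loc[resp, y_col].sum()
        est_den += w * resp.sum()
    return est_num / est_den
```

Within cells this is numerically the same as mean imputation of the cell respondent mean for each nonrespondent, which is why the abstract's bias comparison turns on what the cells condition on rather than on the mechanics of the adjustment.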


IEEE Transactions on Medical Imaging | 1987

A Theoretical Study of Some Maximum Likelihood Algorithms for Emission and Transmission Tomography

Kenneth Lange; Mark M. Bahn; Roderick J. A. Little

This paper has the dual purpose of introducing some new algorithms for emission and transmission tomography and proving mathematically that these algorithms and related antecedent algorithms converge. Like the EM algorithms for positron, single-photon, and transmission tomography, the algorithms provide maximum likelihood estimates of pixel concentration or linear attenuation parameters. One particular innovation we discuss is a computationally practical scheme for modifying the EM algorithms to include a Bayesian prior. The Bayesian versions of the EM algorithms are shown to have superior convergence properties in a vicinity of the maximum. We anticipate that some of the other algorithms will also converge faster than the EM algorithms.
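
For context, here is a sketch of the classical ML-EM iteration for emission tomography, the kind of antecedent algorithm whose convergence the paper analyses; the paper's new algorithms and its Bayesian (penalized) variants are not reproduced here.

```python
import numpy as np

def mlem(A, y, n_iter=50):
    """Classical ML-EM iteration for emission tomography.

    A is the (detectors x pixels) system matrix of nonnegative detection
    probabilities, y the vector of observed counts. Each iteration rescales
    the current pixel intensities by back-projected ratios of observed to
    predicted counts, so intensities stay nonnegative. Assumes every pixel
    is seen by at least one detector (positive column sums).
    """
    A = np.asarray(A, float)
    y = np.asarray(y, float)
    lam = np.ones(A.shape[1])                  # initial pixel intensities
    sens = A.sum(axis=0)                       # sensitivity image (column sums)
    for _ in range(n_iter):
        pred = A @ lam                         # expected counts under current image
        ratio = np.divide(y, pred, out=np.zeros_like(y), where=pred > 0)
        lam = lam / sens * (A.T @ ratio)       # multiplicative EM update
    return lam
```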

Collaboration


Dive into Roderick J. A. Little's collaborations.

Top Co-Authors

Sid Gilman (University of Michigan)

Larry Junck (University of Michigan)

Bin Nan (University of Michigan)

Hyonggin An (University of Michigan)