Network


Latest external collaborations at the country level.

Hotspot


Research topics in which Sander Greenland is active.

Publication


Featured research published by Sander Greenland.


Epidemiology | 1999

Causal diagrams for epidemiologic research.

Sander Greenland; Judea Pearl; James M. Robins

Causal diagrams have a long history of informal use and, more recently, have undergone formal development for applications in expert systems and robotics. We provide an introduction to these developments and their use in epidemiologic research. Causal diagrams can provide a starting point for identifying variables that must be measured and controlled to obtain unconfounded effect estimates. They also provide a method for critical evaluation of traditional epidemiologic criteria for confounding. In particular, they reveal certain heretofore unnoticed shortcomings of those criteria when used in considering multiple potential confounders. We show how to modify the traditional criteria to correct those shortcomings.
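
How a diagram identifies the variables that must be controlled can be illustrated with a small simulation. The structure below, in which L causes both X and Y, is an assumed toy example and not one taken from the paper; it is a minimal sketch, not a definitive implementation.

```python
# Minimal sketch: under an assumed diagram L -> X, L -> Y, X -> Y (toy example,
# not from the paper), the diagram identifies L as the variable that must be
# controlled to estimate the X -> Y effect without confounding.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
L = rng.normal(size=n)                       # common cause of X and Y
X = 0.8 * L + rng.normal(size=n)             # exposure
Y = 0.5 * X + 1.0 * L + rng.normal(size=n)   # outcome; true X -> Y effect is 0.5

def ols_coefs(y, columns):
    """Ordinary least squares with an intercept; returns all coefficients."""
    Z = np.column_stack([np.ones(len(y))] + columns)
    return np.linalg.lstsq(Z, y, rcond=None)[0]

crude = ols_coefs(Y, [X])[1]        # no control for L: confounded
adjusted = ols_coefs(Y, [X, L])[1]  # control for L, as the diagram requires
print(f"crude X coefficient:    {crude:.2f}  (biased)")
print(f"adjusted X coefficient: {adjusted:.2f}  (near the true 0.5)")
```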


American Journal of Public Health | 1989

Modeling and variable selection in epidemiologic analysis.

Sander Greenland

This paper provides an overview of problems in multivariate modeling of epidemiologic data, and examines some proposed solutions. Special attention is given to the task of model selection, which involves selection of the model form, selection of the variables to enter the model, and selection of the form of these variables in the model. Several conclusions are drawn, among them: a) model and variable forms should be selected based on regression diagnostic procedures, in addition to goodness-of-fit tests; b) variable-selection algorithms in current packaged programs, such as conventional stepwise regression, can easily lead to invalid estimates and tests of effect; and c) variable selection is better approached by direct estimation of the degree of confounding produced by each variable than by significance-testing algorithms. As a general rule, before using a model to estimate effects, one should evaluate the assumptions implied by the model against both the data and prior information.
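
The change-in-estimate approach to confounder assessment favored in conclusion (c) can be sketched in a few lines. The simulated data and the 10% cutoff below are illustrative assumptions, not taken from the paper.

```python
# Sketch of change-in-estimate confounder assessment (simulated data and an
# assumed 10% cutoff; illustrative only, not the paper's analysis).
import numpy as np

rng = np.random.default_rng(1)
n = 50_000
C1 = rng.normal(size=n)                        # true confounder
C2 = rng.normal(size=n)                        # covariate unrelated to exposure
X = 0.7 * C1 + rng.normal(size=n)              # exposure
Y = 0.3 * X + 0.9 * C1 + rng.normal(size=n)    # outcome; true effect 0.3

def exposure_coef(y, columns):
    """Coefficient on the exposure (first column) from OLS with intercept."""
    Z = np.column_stack([np.ones(len(y))] + columns)
    return np.linalg.lstsq(Z, y, rcond=None)[0][1]

crude = exposure_coef(Y, [X])
for name, C in [("C1", C1), ("C2", C2)]:
    adjusted = exposure_coef(Y, [X, C])
    change = abs(adjusted - crude) / abs(crude)
    print(f"{name}: crude={crude:.2f} adjusted={adjusted:.2f} "
          f"change={change:.0%} -> {'retain' if change > 0.10 else 'drop'}")
```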


Epidemiology | 1992

Identifiability and exchangeability for direct and indirect effects.

James M. Robins; Sander Greenland

We consider the problem of separating the direct effects of an exposure from effects relayed through an intermediate variable (indirect effects). We show that adjustment for the intermediate variable, which is the most common method of estimating direct effects, can be biased. We also show that, even in a randomized crossover trial of exposure, direct and indirect effects cannot be separated without special assumptions; in other words, direct and indirect effects are not separately identifiable when only exposure is randomized. If the exposure and intermediate never interact to cause disease and if intermediate effects can be controlled, that is, blocked by a suitable intervention, then a trial randomizing both exposure and the intervention can separate direct from indirect effects. Nonetheless, the estimation must be carried out using the G-computation algorithm. Conventional adjustment methods remain biased. When exposure and the intermediate interact to cause disease, direct and indirect effects will not be separable even in a trial in which both the exposure and the intervention blocking intermediate effects are randomly assigned. Nonetheless, in such a trial, one can still estimate the fraction of exposure-induced disease that could be prevented by control of the intermediate. Even in the absence of an intervention blocking the intermediate effect, the fraction of exposure-induced disease that could be prevented by control of the intermediate can be estimated with the G-computation algorithm if data are obtained on additional confounding variables. (Epidemiology 1992;3:143–155)
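
A generic g-formula (g-computation) standardization for a controlled direct effect can be sketched as follows. The simulated structure (randomized exposure A, a confounder L of the intermediate M and the outcome Y) and the linear outcome model are illustrative assumptions, not the paper's development.

```python
# Sketch of g-computation for a controlled direct effect on simulated data
# (assumed structure: A randomized; L affects M and Y; A affects M and Y;
# M affects Y; illustrative only).
import numpy as np

rng = np.random.default_rng(2)
n = 200_000
A = rng.integers(0, 2, size=n).astype(float)         # randomized exposure
L = rng.normal(size=n)                                # confounder of M and Y
M = 0.6 * A + 0.8 * L + rng.normal(size=n)            # intermediate
Y = 0.4 * A + 0.5 * M + 0.7 * L + rng.normal(size=n)  # true direct effect 0.4

def ols_coefs(y, columns):
    Z = np.column_stack([np.ones(len(y))] + columns)
    return np.linalg.lstsq(Z, y, rcond=None)[0]

# Conventional adjustment: regress Y on A and M only (biased, because
# conditioning on M opens the A -> M <- L -> Y path).
conventional = ols_coefs(Y, [A, M])[1]

# G-computation: model Y given A, M, L; set A, fix M at a reference value,
# then average the predictions over the observed distribution of L.
b0, bA, bM, bL = ols_coefs(Y, [A, M, L])
m_fixed = 0.0
ey_a1 = np.mean(b0 + bA * 1.0 + bM * m_fixed + bL * L)
ey_a0 = np.mean(b0 + bA * 0.0 + bM * m_fixed + bL * L)
print(f"conventional adjustment: {conventional:.2f}  (biased)")
print(f"g-computation estimate:  {ey_a1 - ey_a0:.2f}  (target 0.4)")
```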


Epidemiology | 1995

Dose-response and trend analysis in epidemiology: alternatives to categorical analysis.

Sander Greenland

Standard categorical analysis is based on an unrealistic model for dose-response and trends and does not make efficient use of within-category information. This paper describes two classes of simple alternatives that can be implemented with any regression software: fractional polynomial regression and spline regression. These methods are illustrated in a problem of estimating historical trends in human immunodeficiency virus incidence. Fractional polynomial and spline regression are especially valuable when important nonlinearities are anticipated and software for more general nonparametric regression approaches is not available.
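
A linear spline (truncated-power basis) dose-response fit of the kind described can indeed be run with ordinary regression software. The simulated curve and knot locations below are illustrative assumptions, not the HIV-incidence example from the paper.

```python
# Sketch of spline dose-response regression with ordinary least squares
# (simulated data and arbitrary knots; illustrative only).
import numpy as np

rng = np.random.default_rng(3)
n = 5_000
dose = rng.uniform(0, 10, size=n)
risk = np.log1p(dose) + rng.normal(scale=0.5, size=n)   # nonlinear truth + noise

knots = [2.5, 5.0, 7.5]                                      # assumed knot locations
basis = [dose] + [np.maximum(dose - k, 0.0) for k in knots]  # linear spline terms
Z = np.column_stack([np.ones(n)] + basis)
coef = np.linalg.lstsq(Z, risk, rcond=None)[0]

grid = np.linspace(0, 10, 6)
G = np.column_stack([np.ones(len(grid)), grid] +
                    [np.maximum(grid - k, 0.0) for k in knots])
print(np.round(G @ coef, 2))   # piecewise-linear fit tracking log(1 + dose)
```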


American Journal of Public Health | 2005

Causation and Causal Inference in Epidemiology

Kenneth J. Rothman; Sander Greenland

Concepts of cause and causal inference are largely self-taught from early learning experiences. A model of causation that describes causes in terms of sufficient causes and their component causes illuminates important principles such as multi-causality, the dependence of the strength of component causes on the prevalence of complementary component causes, and interaction between component causes. Philosophers agree that causal propositions cannot be proved, and find flaws or practical limitations in all philosophies of causal inference. Hence, the role of logic, belief, and observation in evaluating causal propositions is not settled. Causal inference in epidemiology is better viewed as an exercise in measurement of an effect rather than as a criterion-guided process for deciding whether an effect is present or not.


The Lancet | 2014

Increasing value and reducing waste in research design, conduct, and analysis

John P. A. Ioannidis; Sander Greenland; Mark A. Hlatky; Muin J. Khoury; Malcolm R. Macleod; David Moher; Kenneth F. Schulz; Robert Tibshirani

Correctable weaknesses in the design, conduct, and analysis of biomedical and public health research studies can produce misleading results and waste valuable resources. Small effects can be difficult to distinguish from bias introduced by study design and analyses. An absence of detailed written protocols and poor documentation of research is common. Information obtained might not be useful or important, and statistical precision or power is often too low or used in a misleading way. Insufficient consideration might be given to both previous and continuing studies. Arbitrary choice of analyses and an overemphasis on random extremes might affect the reported findings. Several problems relate to the research workforce, including failure to involve experienced statisticians and methodologists, failure to train clinical researchers and laboratory scientists in research methods and design, and the involvement of stakeholders with conflicts of interest. Inadequate emphasis is placed on recording of research decisions and on reproducibility of research. Finally, reward systems incentivise quantity more than quality, and novelty more than reliability. We propose potential solutions for these problems, including improvements in protocols and documentation, consideration of evidence from studies in progress, standardisation of research efforts, optimisation and training of an experienced and non-conflicted scientific workforce, and reconsideration of scientific reward systems.


Epidemiology | 2000

A pooled analysis of magnetic fields, wire codes, and childhood leukemia

Sander Greenland; Asher R. Sheppard; William T. Kaune; Charles Poole; Michael A. Kelsh

We obtained original individual data from 15 studies of magnetic fields or wire codes and childhood leukemia, and we estimated magnetic field exposure for subjects with sufficient data to do so. Summary estimates from 12 studies that supplied magnetic field measures exhibited little or no association of magnetic fields with leukemia when comparing 0.1–0.2 and 0.2–0.3 microtesla (μT) categories with the 0–0.1 μT category, but the Mantel-Haenszel summary odds ratio comparing >0.3 μT to 0–0.1 μT was 1.7 (95% confidence limits = 1.2, 2.3). Similar results were obtained using covariate adjustment and spline regression. The study-specific relations appeared consistent despite the numerous methodologic differences among the studies. The association of wire codes with leukemia varied considerably across studies, with odds ratio estimates for very high current vs low current configurations ranging from 0.7 to 3.0 (homogeneity P = 0.005). Based on a survey of household magnetic fields, an estimate of the U.S. population attributable fraction of childhood leukemia associated with residential exposure is 3% (95% confidence limits = –2%, 8%). Our results contradict the idea that the magnetic field association with leukemia is less consistent than the wire code association with leukemia, although analysis of the four studies with both measures indicates that the wire code association is not explained by measured fields. The results also suggest that appreciable magnetic field effects, if any, may be concentrated among relatively high and uncommon exposures, and that studies of highly exposed populations would be needed to clarify the relation of magnetic fields to childhood leukemia.
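
The Mantel-Haenszel summary odds ratio used for the pooled comparison can be written out directly. The stratum counts in the sketch below are toy numbers for illustration, not data from the pooled studies.

```python
# Mantel-Haenszel summary odds ratio across stratum-specific 2x2 tables.
# The counts are toy numbers for illustration, not the pooled study data.
import numpy as np

# Each row: exposed cases, exposed controls, unexposed cases, unexposed controls
tables = np.array([
    [12, 20, 150, 400],
    [ 8, 10, 200, 450],
    [15, 18, 300, 700],
], dtype=float)

a, b, c, d = tables.T            # unpack the four cells per stratum
t = a + b + c + d                # stratum totals
or_mh = np.sum(a * d / t) / np.sum(b * c / t)
print(f"Mantel-Haenszel summary OR = {or_mh:.2f}")
```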


Epidemiology | 2003

Quantifying biases in causal models: Classical confounding vs collider-stratification bias

Sander Greenland

It has long been known that stratifying on variables affected by the study exposure can create selection bias. More recently it has been shown that stratifying on a variable that precedes exposure and disease can induce confounding, even if there is no confounding in the unstratified (crude) estimate. This paper examines the relative magnitudes of these biases under some simple causal models in which the stratification variable is graphically depicted as a collider (a variable directly affected by two or more other variables in the graph). The results suggest that bias from stratifying on variables affected by exposure and disease may often be comparable in size with bias from classical confounding (bias from failing to stratify on a common cause of exposure and disease), whereas other biases from collider stratification may tend to be much smaller.
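
The collider mechanism can be reproduced with a toy simulation: two independent variables that both affect a third become associated once the analysis is restricted to a stratum of that third variable. The structure below is an assumed toy example, not one of the paper's models.

```python
# Toy simulation of collider-stratification bias (assumed structure:
# X and Y independent, both causing the collider C; illustrative only).
import numpy as np

rng = np.random.default_rng(4)
n = 200_000
X = rng.normal(size=n)              # "exposure"
Y = rng.normal(size=n)              # "disease", independent of X
C = X + Y + rng.normal(size=n)      # collider: affected by both X and Y

def corr(u, v):
    return np.corrcoef(u, v)[0, 1]

in_stratum = C > np.median(C)       # stratify on the collider
print(f"crude X-Y correlation:       {corr(X, Y):+.2f}")
print(f"within-stratum correlation:  {corr(X[in_stratum], Y[in_stratum]):+.2f}")
```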


Biometrics | 1993

Maximum likelihood estimation of the attributable fraction from logistic models

Sander Greenland; Karsten Drescher

Bruzzi et al. (1985, American Journal of Epidemiology 122, 904-914) provided a general logistic-model-based estimator of the attributable fraction for case-control data, and Benichou and Gail (1990, Biometrics 46, 991-1003) gave an implicit-delta-method variance formula for this estimator. The Bruzzi et al. estimator is not, however, the maximum likelihood estimator (MLE) based on the model, as it uses the model only to construct the relative risk estimates, and not the covariate-distribution estimate. We here provide maximum likelihood estimators for the attributable fraction in cohort and case-control studies, and their asymptotic variances. The case-control estimator generalizes the estimator of Drescher and Schill (1991, Biometrics 47, 1247-1256). We also present a limited simulation study which confirms earlier work that better small-sample performance is obtained when the confidence interval is centered on the log-transformed point estimator rather than the original point estimator.
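
For orientation, the Bruzzi et al. estimator that the paper takes as its starting point can be written as AF = 1 - (1/n_cases) * sum over cases of 1/OR_i, with OR_i the fitted odds ratio for case i's exposure pattern relative to the unexposed pattern. The code below is a minimal sketch of that estimator on simulated data, using scikit-learn's logistic regression as a tooling assumption (any logistic-fitting routine would do); it is not the maximum likelihood estimator or variance formula derived in the paper.

```python
# Sketch of the Bruzzi et al. model-based attributable fraction,
# AF = 1 - (1/n_cases) * sum over cases of 1 / OR_i,
# on simulated data; scikit-learn is a tooling assumption, and this is not
# the paper's maximum likelihood estimator.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(5)
n = 20_000
exposure = rng.integers(0, 2, size=n)           # binary exposure
covariate = rng.normal(size=n)                  # additional model covariate
logit = -2.0 + 0.7 * exposure + 0.3 * covariate
case = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))

design = np.column_stack([exposure, covariate])
model = LogisticRegression(C=1e6).fit(design, case)   # large C ~ no penalty

# Fitted odds ratio of each case's pattern versus the same pattern unexposed:
beta_exposure = model.coef_[0][0]
or_cases = np.exp(beta_exposure * exposure[case])
af = 1.0 - np.mean(1.0 / or_cases)
print(f"estimated attributable fraction: {af:.2f}")
```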

Collaboration


Sander Greenland's top co-authors and their affiliations:

Charles Poole

University of North Carolina at Chapel Hill

Zuo-Feng Zhang

University of California

Joel D. Kopple

Los Angeles Biomedical Research Institute