
Publications


Featured research published by Nuala A. Sheehan.


Statistical Methods in Medical Research | 2007

Mendelian randomization as an instrumental variable approach to causal inference

Vanessa Didelez; Nuala A. Sheehan

In epidemiological research, the causal effect of a modifiable phenotype or exposure on a disease is often of public health interest. Randomized controlled trials to investigate this effect are not always possible and inferences based on observational data can be confounded. However, if we know of a gene closely linked to the phenotype without direct effect on the disease, it can often be reasonably assumed that the gene is not itself associated with any confounding factors — a phenomenon called Mendelian randomization. These properties define an instrumental variable and allow estimation of the causal effect, despite the confounding, under certain model restrictions. In this paper, we present a formal framework for causal inference based on Mendelian randomization and suggest using directed acyclic graphs to check model assumptions by visual inspection. This framework allows us to address limitations of the Mendelian randomization technique that have often been overlooked in the medical literature.
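Under the linear model restrictions the abstract refers to, the simplest IV estimator is the ratio of the gene-outcome and gene-exposure associations (the Wald ratio). A minimal sketch on simulated data (illustrative only, not from the paper; all parameter values are assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50_000

# Simulated data: U is an unobserved confounder of phenotype X and outcome Y;
# genotype G affects Y only through X and is independent of U -- the core
# instrumental variable conditions discussed in the paper.
U = rng.normal(size=n)
G = rng.binomial(2, 0.3, size=n)            # SNP coded 0/1/2
X = 0.5 * G + U + rng.normal(size=n)        # phenotype / exposure
Y = 0.8 * X + U + rng.normal(size=n)        # outcome; true causal effect = 0.8

# Naive regression of Y on X is biased away from 0.8 by the confounder U.
beta_naive = np.cov(X, Y)[0, 1] / np.var(X, ddof=1)

# Wald ratio IV estimator: (G-Y association) / (G-X association).
beta_iv = np.cov(G, Y)[0, 1] / np.cov(G, X)[0, 1]
```

With this setup `beta_naive` is pulled above the true value by the confounding, while `beta_iv` recovers the causal effect despite U being unobserved.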


PLOS Medicine | 2008

Mendelian randomisation and causal inference in observational epidemiology

Nuala A. Sheehan; Vanessa Didelez; Paul R. Burton; Martin D. Tobin

Nuala Sheehan and colleagues describe how Mendelian randomization provides an alternative way of dealing with the problems of observational studies, especially confounding.


Statistical Methods in Medical Research | 2012

Using multiple genetic variants as instrumental variables for modifiable risk factors.

Tom Palmer; Debbie A. Lawlor; Roger Harbord; Nuala A. Sheehan; Jon H Tobias; Nicholas J. Timpson; George Davey Smith; Jonathan A C Sterne

Mendelian randomisation analyses use genetic variants as instrumental variables (IVs) to estimate causal effects of modifiable risk factors on disease outcomes. Genetic variants typically explain a small proportion of the variability in risk factors; hence Mendelian randomisation analyses can require large sample sizes. However, an increasing number of genetic variants have been found to be robustly associated with disease-related outcomes in genome-wide association studies. Use of multiple instruments can improve the precision of IV estimates, and also permit examination of underlying IV assumptions. We discuss the use of multiple genetic variants in Mendelian randomisation analyses with continuous outcome variables where all relationships are assumed to be linear. We describe possible violations of IV assumptions, and how multiple instrument analyses can be used to identify them. We present an example using four adiposity-associated genetic variants as IVs for the causal effect of fat mass on bone density, using data on 5509 children enrolled in the ALSPAC birth cohort study. We also use simulation studies to examine the effect of different sets of IVs on precision and bias. When each instrument independently explains variability in the risk factor, use of multiple instruments increases the precision of IV estimates. However, inclusion of weak instruments could increase finite sample bias. Missing data on multiple genetic variants can diminish the available sample size, compared with single instrument analyses. In simulations with additive genotype-risk factor effects, IV estimates using a weighted allele score had similar properties to estimates using multiple instruments. Under the correct conditions, multiple instrument analyses are a promising approach for Mendelian randomisation studies. Further research is required into multiple imputation methods to address missing data issues in IV estimation.
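The weighted allele score mentioned above can be sketched as follows (simulated data and illustrative weights, not the ALSPAC analysis): the variants are collapsed into a single score that is used like a single instrument, and compared with a multiple-instrument two-stage least squares fit.

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 20_000, 4

# Simulated data: four independent SNPs each explain a little of risk
# factor X; U is an unobserved confounder. True causal effect is 0.5.
U = rng.normal(size=n)
G = rng.binomial(2, 0.3, size=(n, k))
w = np.array([0.3, 0.2, 0.25, 0.15])        # per-allele weights (assumed known)
X = G @ w + U + rng.normal(size=n)
Y = 0.5 * X + U + rng.normal(size=n)

# Weighted allele score: collapse the k variants into one instrument.
score = G @ w
beta_score = np.cov(score, Y)[0, 1] / np.cov(score, X)[0, 1]

# Multiple-instrument 2SLS: regress X on all SNPs, then Y on the fitted values.
Gc = np.column_stack([np.ones(n), G])
xhat = Gc @ np.linalg.lstsq(Gc, X, rcond=None)[0]
beta_2sls = np.cov(xhat, Y)[0, 1] / np.var(xhat, ddof=1)
```

With additive genotype-risk factor effects, the two estimators give very similar answers, in line with the simulation findings reported in the abstract.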


Statistical Science | 2010

Assumptions of IV methods for observational epidemiology

Vanessa Didelez; Sha Meng; Nuala A. Sheehan

Instrumental variable (IV) methods are becoming increasingly popular as they seem to offer the only viable way to overcome the problem of unobserved confounding in observational studies. However, some attention has to be paid to the details, as not all such methods target the same causal parameters and some rely on more restrictive parametric assumptions than others. We therefore discuss and contrast the most common IV approaches with relevance to typical applications in observational epidemiology. Further, we illustrate and compare the asymptotic bias of these IV estimators when underlying assumptions are violated in a numerical study. One of our conclusions is that all IV methods encounter problems in the presence of effect modification by unobserved confounders. Since this can never be ruled out for sure, we recommend that practical applications of IV estimators be accompanied routinely by a sensitivity analysis.


American Journal of Epidemiology | 2011

Instrumental variable estimation of causal risk ratios and causal odds ratios in Mendelian randomization analyses.

Tom Palmer; Jonathan A C Sterne; Roger Harbord; Debbie A. Lawlor; Nuala A. Sheehan; Sha Meng; Raquel Granell; George Davey Smith; Vanessa Didelez

In this paper, the authors describe different instrumental variable (IV) estimators of causal risk ratios and odds ratios with particular attention to methods that can handle continuously measured exposures. The authors present this discussion in the context of a Mendelian randomization analysis of the effect of body mass index (BMI; weight (kg)/height (m)²) on the risk of asthma at age 7 years (Avon Longitudinal Study of Parents and Children, 1991-1992). The authors show that the multiplicative structural mean model (MSMM) and the multiplicative generalized method of moments (MGMM) estimator produce identical estimates of the causal risk ratio. In the example, MSMM and MGMM estimates suggested an inverse relation between BMI and asthma but other IV estimates suggested a positive relation, although all estimates had wide confidence intervals. An interaction between the associations of BMI and fat mass and obesity-associated (FTO) genotype with asthma explained the different directions of the different estimates, and a simulation study supported the observation that MSMM/MGMM estimators are negatively correlated with the other estimators when such an interaction is present. The authors conclude that point estimates from various IV methods can differ in practical applications. Based on the theoretical properties of the estimators, structural mean models make weaker assumptions than other IV estimators and can therefore be expected to be consistent in a wider range of situations.
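A multiplicative structural mean model of the kind discussed here can be illustrated on simulated data. The following is a minimal sketch (not the authors' code): it assumes a positive outcome with a mean-one multiplicative error given the genotype, profiles out the intercept, and solves the remaining MSMM moment condition by bisection.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 50_000

# Simulated data: U confounds exposure X and outcome Y; genotype G is a
# valid instrument. Multiplicative model Y = exp(0.3*X) * V with E[V|G] = 1.
U = 0.5 * rng.normal(size=n)
G = rng.binomial(2, 0.3, size=n)
X = 0.5 * G + U + 0.5 * rng.normal(size=n)
V = np.exp(U - 0.125)                    # mean-one multiplicative error
Y = np.exp(0.3 * X) * V                  # true causal log risk ratio = 0.3

# MSMM moment condition with the intercept profiled out: at the true beta,
# Y * exp(-beta * X) is mean-independent of the instrument G.
def moment(b):
    w = Y * np.exp(-b * X)
    return np.mean(G * w) / np.mean(w) - G.mean()

# Solve moment(b) = 0 by bisection on an interval bracketing the root.
lo, hi = -2.0, 2.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if moment(lo) * moment(mid) <= 0:
        hi = mid
    else:
        lo = mid
beta_msmm = 0.5 * (lo + hi)
```

The moment function reweights individuals by their outcome under a candidate causal effect; only at the true effect does the genotype carry no information about the reweighted outcome.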


Circulation | 2005

Association of WNK1 Gene Polymorphisms and Haplotypes With Ambulatory Blood Pressure in the General Population

Martin D. Tobin; Stuart M Raleigh; Stephen Newhouse; Peter S. Braund; Clare L. Bodycote; Jenny Ogleby; Deborah Cross; Jay Gracey; Saija Hayes; Terry Smith; Cathy Ridge; Mark J. Caulfield; Nuala A. Sheehan; Patricia B. Munroe; Paul R. Burton; Nilesh J. Samani

Background— Blood pressure (BP) is a heritable trait of major public health concern. The WNK1 and WNK4 genes, which encode proteins in the WNK family of serine-threonine kinases, are involved in renal electrolyte homeostasis. Mutations in the WNK1 and WNK4 genes cause a rare monogenic hypertensive syndrome, pseudohypoaldosteronism type II. We investigated whether polymorphisms in these WNK genes influence BP in the general population. Methods and Results— Associations between 9 single-nucleotide polymorphisms (SNPs) in WNK1 and 1 in WNK4 with ambulatory BP were studied in a population-based sample of 996 subjects from 250 white European families. The heritability estimates of mean 24-hour systolic BP (SBP) and diastolic BP (DBP) were 63.4% and 67.9%, respectively. We found statistically significant (P<0.05) associations of several common SNPs and haplotypes in WNK1 with mean 24-hour SBP and/or DBP. The minor allele (C) of rs880054, with a frequency of 44%, reduced mean 24-hour SBP and DBP by 1.37 (95% confidence interval, −2.45 to −0.23) and 1.14 (95% confidence interval, −1.93 to −0.38) mm Hg, respectively, per copy of the allele. Conclusions— Common variants in WNK1 contribute to BP variation in the general population. This study shows that a gene causing a rare monogenic form of hypertension also plays a significant role in BP regulation in the general population. The findings provide a basis to identify functional variants of WNK1, elucidate any interactions of these variants with dietary intake or with response to antihypertensive drugs, and determine their impact on cardiovascular morbidity and mortality.


International Journal of Epidemiology | 2010

DataSHIELD: resolving a conflict in contemporary bioscience—performing a pooled analysis of individual-level data without sharing the data

Michael Wolfson; Susan Wallace; Nicholas G. D. Masca; Geoff Rowe; Nuala A. Sheehan; Vincent Ferretti; Philippe Laflamme; Martin D. Tobin; John Macleod; Julian Little; Isabel Fortier; Bartha Maria Knoppers; Paul R. Burton

Background Contemporary bioscience sometimes demands vast sample sizes and there is often then no choice but to synthesize data across several studies and to undertake an appropriate pooled analysis. This same need is also faced in health-services and socio-economic research. When a pooled analysis is required, analytic efficiency and flexibility are often best served by combining the individual-level data from all sources and analysing them as a single large data set. But ethico-legal constraints, including the wording of consent forms and privacy legislation, often prohibit or discourage the sharing of individual-level data, particularly across national or other jurisdictional boundaries. This leads to a fundamental conflict in competing public goods: individual-level analysis is desirable from a scientific perspective, but is prevented by ethico-legal considerations that are entirely valid. Methods Data aggregation through anonymous summary-statistics from harmonized individual-level databases (DataSHIELD) provides a simple approach to analysing pooled data that circumvents this conflict. This is achieved via parallelized analysis and modern distributed computing and, in one key setting, takes advantage of the properties of the updating algorithm for generalized linear models (GLMs). Results The conceptual use of DataSHIELD is illustrated in two different settings. Conclusions As the study of the aetiological architecture of chronic diseases advances to encompass more complex causal pathways—e.g. to include the joint effects of genes, lifestyle and environment—sample size requirements will increase further and the analysis of pooled individual-level data will become ever more important. An aim of this conceptual article is to encourage others to address the challenges and opportunities that DataSHIELD presents, and to explore potential extensions, for example to its use when different data sources hold different data on the same individuals.
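The GLM updating property the abstract alludes to is that each iteration of iteratively reweighted least squares needs only the summaries X'WX and X'Wz from each study, not the individual records. A minimal sketch of this idea on simulated data (illustrative only, not the DataSHIELD software):

```python
import numpy as np

rng = np.random.default_rng(2)

# Individual-level data held at three separate "data computers" (DCs);
# none of it ever leaves its site. True model: logit P(y=1) = 1.0 + 2.0*x.
studies = []
for _ in range(3):
    x = rng.normal(size=4000)
    X = np.column_stack([np.ones_like(x), x])
    y = rng.binomial(1, 1.0 / (1.0 + np.exp(-(1.0 + 2.0 * x))))
    studies.append((X, y))

# The analysis computer iterates IRLS for the logistic GLM; each DC returns
# only the non-disclosive summaries X'WX (2x2) and X'Wz (length 2).
beta = np.zeros(2)
for _ in range(25):
    A = np.zeros((2, 2))
    b = np.zeros(2)
    for X, y in studies:
        eta = X @ beta
        mu = 1.0 / (1.0 + np.exp(-eta))
        w = mu * (1.0 - mu)                  # IRLS weights
        z = eta + (y - mu) / w               # working response
        A += X.T @ (w[:, None] * X)          # X'WX summary
        b += X.T @ (w * z)                   # X'Wz summary
    beta = np.linalg.solve(A, b)
```

Because the summed summaries equal those of the stacked data, `beta` matches the estimate that would be obtained by pooling the three data sets and fitting a single logistic regression.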


Genetics | 2012

Inferences from Genomic Models in Stratified Populations

Luc Janss; Gustavo de los Campos; Nuala A. Sheehan; Danny C. Sorensen

Unaccounted population stratification can lead to spurious associations in genome-wide association studies (GWAS) and in this context several methods have been proposed to deal with this problem. An alternative line of research uses whole-genome random regression (WGRR) models that fit all markers simultaneously. Important objectives in WGRR studies are to estimate the proportion of variance accounted for by the markers, the effect of individual markers, prediction of genetic values for complex traits, and prediction of genetic risk of diseases. Proposals to account for stratification in this context are unsatisfactory. Here we address this problem and describe a reparameterization of a WGRR model, based on an eigenvalue decomposition, for simultaneous inference of parameters and unobserved population structure. This allows estimation of genomic parameters with and without inclusion of marker-derived eigenvectors that account for stratification. The method is illustrated with grain yield in wheat typed for 1279 genetic markers, and with height, HDL cholesterol and systolic blood pressure from the British 1958 cohort study typed for 1 million SNP genotypes. Both sets of data show signs of population structure but with different consequences on inferences. The method is compared to an advocated approach consisting of including eigenvectors as fixed-effect covariates in a WGRR model. We show that this approach, used in the context of WGRR models, is ill posed and illustrate the advantages of the proposed model. In summary, our method permits a unified approach to the study of population structure and inference of parameters, is computationally efficient, and is easy to implement.
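The reparameterization described above rests on an eigenvalue decomposition of a marker-derived relationship matrix. A minimal sketch (not the authors' implementation) showing how the leading eigenvector of such a matrix picks up population stratification in simulated data:

```python
import numpy as np

rng = np.random.default_rng(3)

# Two subpopulations (100 individuals each) with shifted allele
# frequencies at 500 markers -- a simple stratified sample.
n_per, m = 100, 500
p1 = rng.uniform(0.1, 0.9, size=m)
p2 = np.clip(p1 + rng.normal(0.0, 0.15, size=m), 0.05, 0.95)
M = np.vstack([rng.binomial(2, p1, size=(n_per, m)),
               rng.binomial(2, p2, size=(n_per, m))])

# Column-centre the genotypes and form the marker-based relationship
# matrix G = WW'/m, then eigendecompose it.
W = M - M.mean(axis=0)
Gmat = W @ W.T / m
vals, vecs = np.linalg.eigh(Gmat)            # eigenvalues in ascending order
pc1 = vecs[:, -1]                            # leading eigenvector

# pc1 separates the two subpopulations: this is the structure that the
# eigenvector-based reparameterization allows the WGRR model to absorb.
```

Fitting genomic parameters with and without such leading eigenvectors, as the paper proposes, then quantifies how much of the marker-explained variance is attributable to stratification.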


International Journal of Epidemiology | 2008

Adjusting for bias and unmeasured confounding in Mendelian randomization studies with binary responses

Tom Palmer; John R. Thompson; Martin D. Tobin; Nuala A. Sheehan; Paul R. Burton

BACKGROUND Mendelian randomization uses a carefully selected gene as an instrumental variable (IV) to test or estimate an association between a phenotype and a disease. Classical IV analysis assumes linear relationships between the variables, but disease status is often binary and modelled by a logistic regression. When the linearity assumption between the variables does not hold, the IV estimates will be biased. The extent of this bias in the phenotype-disease log odds ratio of a Mendelian randomization study is investigated. METHODS Three estimators, termed direct, standard IV and adjusted IV, of the phenotype-disease log odds ratio are compared through a simulation study which incorporates unmeasured confounding. The simulations are verified using formulae relating marginal and conditional estimates given in the Appendix. RESULTS The simulations show that the direct estimator is biased by unmeasured confounding factors and the standard IV estimator is attenuated towards the null. Under most circumstances the adjusted IV estimator has the smallest bias, although it has inflated type I error when the unmeasured confounders have a large effect. CONCLUSIONS In a Mendelian randomization study with a binary disease outcome the bias associated with estimating the phenotype-disease log odds ratio may be of practical importance, and so estimates should be subject to a sensitivity analysis against different amounts of hypothesized confounding.


International Journal of Epidemiology | 2014

DataSHIELD: taking the analysis to the data, not the data to the analysis

Amadou Gaye; Yannick Marcon; Julia Isaeva; Philippe Laflamme; Andrew Turner; Elinor M. Jones; Joel Minion; Andrew W Boyd; Christopher Newby; Marja-Liisa Nuotio; Rebecca Wilson; Oliver Butters; Barnaby Murtagh; Ipek Demir; Dany Doiron; Lisette Giepmans; Susan Wallace; Isabelle Budin-Ljøsne; Carsten Schmidt; Paolo Boffetta; Mathieu Boniol; Maria Bota; Kim W. Carter; Nick deKlerk; Chris Dibben; Richard W. Francis; Tero Hiekkalinna; Kristian Hveem; Kirsti Kvaløy; Seán R. Millar

Background: Research in modern biomedicine and social science requires sample sizes so large that they can often only be achieved through a pooled co-analysis of data from several studies. But the pooling of information from individuals in a central database that may be queried by researchers raises important ethico-legal questions and can be controversial. In the UK this has been highlighted by recent debate and controversy relating to the UK’s proposed ‘care.data’ initiative, and these issues reflect important societal and professional concerns about privacy, confidentiality and intellectual property. DataSHIELD provides a novel technological solution that can circumvent some of the most basic challenges in facilitating the access of researchers and other healthcare professionals to individual-level data. Methods: Commands are sent from a central analysis computer (AC) to several data computers (DCs) storing the data to be co-analysed. The data sets are analysed simultaneously but in parallel. The separate parallelized analyses are linked by non-disclosive summary statistics and commands transmitted back and forth between the DCs and the AC. This paper describes the technical implementation of DataSHIELD using a modified R statistical environment linked to an Opal database deployed behind the computer firewall of each DC. Analysis is controlled through a standard R environment at the AC. Results: Based on this Opal/R implementation, DataSHIELD is currently used by the Healthy Obese Project and the Environmental Core Project (BioSHaRE-EU) for the federated analysis of 10 data sets across eight European countries, and this illustrates the opportunities and challenges presented by the DataSHIELD approach. 
Conclusions: DataSHIELD facilitates important research in settings where: (i) a co-analysis of individual-level data from several studies is scientifically necessary but governance restrictions prohibit the release or sharing of some of the required data, and/or render data access unacceptably slow; (ii) a research group (e.g. in a developing nation) is particularly vulnerable to loss of intellectual property—the researchers want to fully share the information held in their data with national and international collaborators, but do not wish to hand over the physical data themselves; and (iii) a data set is to be included in an individual-level co-analysis but the physical size of the data precludes direct transfer to a new site for analysis.

Collaboration


Dive into Nuala A. Sheehan's collaborations.

Top Co-Authors

Cosetta Minelli
National Institutes of Health

Thore Egeland
Norwegian University of Life Sciences