
Publication


Featured research published by Nan M. Laird.


Controlled Clinical Trials | 1986

Meta-analysis in clinical trials.

Rebecca DerSimonian; Nan M. Laird

This paper examines eight published reviews each reporting results from several related trials. Each review pools the results from the relevant trials in order to evaluate the efficacy of a certain treatment for a specified medical condition. These reviews lack consistent assessment of homogeneity of treatment effect before pooling. We discuss a random effects approach to combining evidence from a series of experiments comparing two treatments. This approach incorporates the heterogeneity of effects in the analysis of the overall treatment efficacy. The model can be extended to include relevant covariates which would reduce the heterogeneity and allow for more specific therapeutic recommendations. We suggest a simple noniterative procedure for characterizing the distribution of treatment effects in a series of studies.
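The simple noniterative procedure described in the abstract is the now-standard DerSimonian-Laird random-effects estimator. A minimal sketch in plain Python follows; the study effects and variances below are invented purely for illustration:

```python
# DerSimonian-Laird random-effects pooling (noniterative).
# y: per-study treatment effect estimates; v: within-study variances.
# All numbers are illustrative, not from any real meta-analysis.
y = [0.8, 0.1, 0.5, -0.2, 0.9]
v = [0.02, 0.05, 0.04, 0.03, 0.06]

k = len(y)
w = [1.0 / vi for vi in v]                       # fixed-effect weights
sw = sum(w)
y_fixed = sum(wi * yi for wi, yi in zip(w, y)) / sw

# Cochran's Q measures heterogeneity beyond within-study sampling error
Q = sum(wi * (yi - y_fixed) ** 2 for wi, yi in zip(w, y))

# Method-of-moments estimate of the between-study variance tau^2,
# truncated at zero
tau2 = max(0.0, (Q - (k - 1)) / (sw - sum(wi ** 2 for wi in w) / sw))

# Random-effects weights add tau^2 to each study's own variance
w_re = [1.0 / (vi + tau2) for vi in v]
y_random = sum(wi * yi for wi, yi in zip(w_re, y)) / sum(w_re)
se_random = (1.0 / sum(w_re)) ** 0.5
```

Because tau² inflates every study's variance by the same amount, the random-effects weights are more nearly equal than the fixed-effect weights, so no single large study dominates the pooled estimate.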


Biometrics | 1982

Random-Effects Models for Longitudinal Data

Nan M. Laird; James H. Ware

Models for the analysis of longitudinal data must recognize the relationship between serial observations on the same unit. Multivariate models with general covariance structure are often difficult to apply to highly unbalanced data, whereas two-stage random-effects models can be used easily. In two-stage models, the probability distributions for the response vectors of different individuals belong to a single family, but some random-effects parameters vary across individuals, with a distribution specified at the second stage. A general family of models is discussed, which includes both growth models and repeated-measures models as special cases. A unified approach to fitting these models, based on a combination of empirical Bayes and maximum likelihood estimation of model parameters and using the EM algorithm, is discussed. Two examples are taken from a current epidemiological study of the health effects of air pollution.
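The two-stage structure can be made concrete with a short simulation of a random-intercept model on unbalanced data. This is a hedged sketch: the model dimensions and all parameter values are invented for demonstration, not taken from the paper.

```python
# Two-stage random-effects (Laird-Ware) data generation:
# stage 1: y_ij = (beta0 + b_i) + beta1 * t_ij + e_ij within subject i;
# stage 2: the subject-specific intercepts b_i are drawn from N(0, sigma_b^2).
import random

random.seed(0)
beta0, beta1 = 10.0, 2.0      # population (fixed) effects
sigma_b, sigma_e = 3.0, 1.0   # between-subject and residual SDs

def simulate_subject(times):
    b_i = random.gauss(0.0, sigma_b)          # stage-2 random intercept
    return [beta0 + b_i + beta1 * t + random.gauss(0.0, sigma_e)
            for t in times]

# Highly unbalanced data: subjects observed at differing numbers of times
panel = {i: simulate_subject(range(random.randint(2, 6)))
         for i in range(200)}

# Marginally Var(y_ij) = sigma_b^2 + sigma_e^2, and observations on the
# same subject are correlated because they share the same b_i.
```

The shared intercept b_i is exactly what makes serial observations on one unit correlated while subjects remain independent, which is the feature the multivariate general-covariance approach struggles to exploit on unbalanced data.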


The New England Journal of Medicine | 1991

Incidence of adverse events and negligence in hospitalized patients. Results of the Harvard Medical Practice Study I.

Troyen A. Brennan; Lucian L. Leape; Nan M. Laird; Liesi E. Hebert; A. Russell Localio; Ann G. Lawthers; Joseph P. Newhouse; Paul C. Weiler; Howard H. Hiatt

BACKGROUND As part of an interdisciplinary study of medical injury and malpractice litigation, we estimated the incidence of adverse events, defined as injuries caused by medical management, and of the subgroup of such injuries that resulted from negligent or substandard care. METHODS We reviewed 30,121 randomly selected records from 51 randomly selected acute care, nonpsychiatric hospitals in New York State in 1984. We then developed population estimates of injuries and computed rates according to the age and sex of the patients as well as the specialties of the physicians. RESULTS Adverse events occurred in 3.7 percent of the hospitalizations (95 percent confidence interval, 3.2 to 4.2), and 27.6 percent of the adverse events were due to negligence (95 percent confidence interval, 22.5 to 32.6). Although 70.5 percent of the adverse events gave rise to disability lasting less than six months, 2.6 percent caused permanently disabling injuries and 13.6 percent led to death. The percentage of adverse events attributable to negligence increased in the categories of more severe injuries (Wald test χ² = 21.04, P < 0.0001). Using weighted totals, we estimated that among the 2,671,863 patients discharged from New York hospitals in 1984 there were 98,609 adverse events and 27,179 adverse events involving negligence. Rates of adverse events rose with age (P < 0.0001). The percentage of adverse events due to negligence was markedly higher among the elderly (P < 0.01). There were significant differences in rates of adverse events among categories of clinical specialties (P < 0.0001), but no differences in the percentage due to negligence. CONCLUSIONS There is a substantial amount of injury to patients from medical management, and many injuries are the result of substandard care.


JAMA | 1995

Incidence of adverse drug events and potential adverse drug events. Implications for prevention. ADE Prevention Study Group.

David W. Bates; David J. Cullen; Nan M. Laird; Laura A. Petersen; Stephen D. Small; Servi D; Glenn Laffel; Bobbie Jean Sweitzer; Shea Bf; Robert K. Hallisey

OBJECTIVES To assess incidence and preventability of adverse drug events (ADEs) and potential ADEs. To analyze preventable events to develop prevention strategies. DESIGN Prospective cohort study. PARTICIPANTS All 4031 adult admissions to a stratified random sample of 11 medical and surgical units in two tertiary care hospitals over a 6-month period. Units included two medical and three surgical intensive care units and four medical and two surgical general care units. MAIN OUTCOME MEASURES Adverse drug events and potential ADEs. METHODS Incidents were detected by stimulated self-report by nurses and pharmacists and by daily review of all charts by nurse investigators. Incidents were subsequently classified by two independent reviewers as to whether they represented ADEs or potential ADEs and as to severity and preventability. RESULTS Over 6 months, 247 ADEs and 194 potential ADEs were identified. Extrapolated event rates were 6.5 ADEs and 5.5 potential ADEs per 100 nonobstetrical admissions, for mean numbers per hospital per year of approximately 1900 ADEs and 1600 potential ADEs. Of all ADEs, 1% were fatal (none preventable), 12% life-threatening, 30% serious, and 57% significant. Twenty-eight percent were judged preventable. Of the life-threatening and serious ADEs, 42% were preventable, compared with 18% of significant ADEs. Errors resulting in preventable ADEs occurred most often at the stages of ordering (56%) and administration (34%); transcription (6%) and dispensing errors (4%) were less common. Errors were much more likely to be intercepted if the error occurred earlier in the process: 48% at the ordering stage vs 0% at the administration stage. CONCLUSION Adverse drug events were common and often preventable; serious ADEs were more likely to be preventable. Most resulted from errors at the ordering stage, but many also occurred at the administration stage. Prevention strategies should target both stages of the drug delivery process.


Genetic Epidemiology | 2000

Implementing a unified approach to family-based tests of association.

Nan M. Laird; Steve Horvath; Xin Xu

We describe a broad class of family‐based association tests that are adjusted for admixture; use either dichotomous or measured phenotypes; accommodate phenotype‐unknown subjects; use nuclear families, sibships or a combination of the two, permit multiple nuclear families from a single pedigree; incorporate di‐ or multi‐allelic marker data; allow additive, dominant or recessive models; and permit adjustment for covariates and gene‐by‐environment interactions. The test statistic is the covariance between a user‐specified function of the genotype and a user‐specified function of the trait. The distribution of the statistic is computed using the appropriate conditional distribution of offspring genotypes that adjusts for admixture. Genet. Epidemiol. 19(Suppl 1):S36–S42, 2000.


Statistics in Medicine | 1997

Using the general linear mixed model to analyse unbalanced repeated measures and longitudinal data

Avital Cnaan; Nan M. Laird; Peter Slasor

The general linear mixed model provides a useful approach for analysing a wide variety of data structures which practising statisticians often encounter. Two such data structures which can be problematic to analyse are unbalanced repeated measures data and longitudinal data. Owing to recent advances in methods and software, the mixed model analysis is now readily available to data analysts. The model is similar in many respects to ordinary multiple regression, but because it allows correlation between the observations, it requires additional work to specify models and to assess goodness-of-fit. The extra complexity involved is compensated for by the additional flexibility it provides in model fitting. The purpose of this tutorial is to provide readers with a sufficient introduction to the theory to understand the method and a more extensive discussion of model fitting and checking in order to provide guidelines for its use. We provide two detailed case studies, one a clinical trial with repeated measures and dropouts, and one an epidemiological survey with longitudinal follow-up.


European Journal of Human Genetics | 2001

The family based association test method: strategies for studying general genotype-phenotype associations.

Steve Horvath; Xin Xu; Nan M. Laird

With possibly incomplete nuclear families, the family based association test (FBAT) method allows one to evaluate any test statistic that can be expressed as the sum of products (covariance) between an arbitrary function of an offspring's genotype and an arbitrary function of the offspring's phenotype. We derive expressions needed to calculate the mean and variance of these test statistics under the null hypothesis of no linkage. To give some guidance on using the FBAT method, we present three simple data analysis strategies for different phenotypes: dichotomous (affection status), quantitative and censored (e.g., the age of onset). We illustrate the approach by applying it to candidate gene data of the NIMH Alzheimer Disease Initiative. We show that the RC-TDT is equivalent to a special case of the FBAT method. This result allows us to generalise the RC-TDT to dominant, recessive and multi-allelic marker codings. Simulations compare the resulting FBAT tests to the RC-TDT and the S-TDT. The FBAT software is freely available.


Biometrics | 1984

Random-effects models for serial observations with binary response.

Robert Stiratelli; Nan M. Laird; James H. Ware

This paper presents a general mixed model for the analysis of serial dichotomous responses provided by a panel of study participants. Each subject's serial responses are assumed to arise from a logistic model, but with regression coefficients that vary between subjects. The logistic regression parameters are assumed to be normally distributed in the population. Inference is based upon maximum likelihood estimation of fixed effects and variance components, and empirical Bayes estimation of random effects. Exact solutions are analytically and computationally infeasible, but an approximation based on the mode of the posterior distribution of the random parameters is proposed, and is implemented by means of the EM algorithm. This approximate method is compared with a simpler two-step method proposed by Korn and Whittemore (1979, Biometrics 35, 795-804), using data from a panel study of asthmatics originally described in that paper. One advantage of the estimation strategy described here is the ability to use all of the data, including that from subjects with insufficient data to permit fitting of a separate logistic regression model, as required by the Korn and Whittemore method. However, the new method is computationally intensive.


Human Heredity | 2000

A Unified Approach to Adjusting Association Tests for Population Admixture with Arbitrary Pedigree Structure and Arbitrary Missing Marker Information

Daniel Rabinowitz; Nan M. Laird

A general approach to family-based examinations of association between marker alleles and traits is proposed. The approach is based on computing p values by comparing test statistics for association to their conditional distributions given the minimal sufficient statistic under the null hypothesis for the genetic model, sampling plan and population admixture. The approach can be applied with any test statistic, so any kind of phenotype and multi-allelic markers may be examined, and covariates may be included in analyses. By virtue of the conditioning, the approach results in correct type I error probabilities regardless of population admixture, the true genetic model and the sampling strategy. An algorithm for computing the conditional distributions is described, and the results of the algorithm for configurations of nuclear families are presented. The algorithm is applicable with all pedigree structures and all patterns of missing marker allele information.


Journal of the American Statistical Association | 1978

Nonparametric Maximum Likelihood Estimation of a Mixing Distribution

Nan M. Laird

The nonparametric maximum likelihood estimate of a mixing distribution is shown to be self-consistent, a property which characterizes the nonparametric maximum likelihood estimate of a distribution function in incomplete data problems. Under various conditions the estimate is a step function, with a finite number of steps. Its computation is illustrated with a small example.
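A discrete version of the estimate can be computed with a simple EM iteration over a fixed grid of candidate support points. This is an illustrative sketch only: the normal kernel, the grid, and the observations below are assumptions for demonstration, not the paper's example.

```python
# EM for the (grid-restricted) nonparametric MLE of a mixing distribution:
# observed x_i ~ integral of N(theta, 1) against an unknown mixing
# distribution; we estimate the mixing weights p_j on fixed grid points.
import math

def norm_pdf(x, mu, sd=1.0):
    return math.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))

data = [-2.1, -1.9, -2.3, 1.8, 2.2, 2.0, 1.7]   # invented observations
grid = [-3.0, -2.0, -1.0, 0.0, 1.0, 2.0, 3.0]   # candidate support points
p = [1.0 / len(grid)] * len(grid)               # start from uniform weights

for _ in range(500):
    new_p = [0.0] * len(grid)
    for x in data:
        # E-step: posterior probability that x came from each grid point
        dens = [pj * norm_pdf(x, mj) for pj, mj in zip(p, grid)]
        total = sum(dens)
        # M-step: new weight is the average posterior mass at each point
        for j, d in enumerate(dens):
            new_p[j] += d / total / len(data)
    p = new_p
```

Consistent with the paper's characterization, the iteration drives most of the mass onto a few support points near the data clusters, so the fitted mixing distribution is effectively a step function.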

Collaboration


Dive into Nan M. Laird's collaborations.

Top Co-Authors

Edwin K. Silverman, Brigham and Women's Hospital
Scott T. Weiss, Brigham and Women's Hospital
Michael H. Cho, Brigham and Women's Hospital
Dawn L. DeMeo, Brigham and Women's Hospital
Matthew B. McQueen, University of Colorado Boulder
Stephen V. Faraone, State University of New York Upstate Medical University