Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where C. B. Dean is active.

Publication


Featured research published by C. B. Dean.


Journal of the American Statistical Association | 1992

Testing for Overdispersion in Poisson and Binomial Regression Models

C. B. Dean

Abstract In this article a method for obtaining tests for overdispersion with respect to a natural exponential family is derived. The tests are designed to be powerful against arbitrary alternative mixture models where only the first two moments of the mixed distribution are specified. Various tests for extra-Poisson and extra-binomial variation are obtained as special cases; the use of a particular test may be motivated by a consideration of the mechanism through which the overdispersion may arise. The common occurrence of extra-Poisson and extra-binomial variation has been noted by several authors. However, the Poisson and binomial models remain valid in many instances and, because of their simplicity and appeal, it is of real interest to ascertain when they apply. This paper develops a unifying theory for testing for overdispersion and generalizes tests previously derived, including those by Fisher (1950), Collings and Margolin (1985), and Prentice (1986). It also shows the Pearson statistic to be a sc...
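For concreteness, the extra-Poisson special case can be written in standard mixed-Poisson notation (a sketch in generic notation; the article itself works with a general natural exponential family):

\[
Y_i \mid \nu_i \sim \mathrm{Poisson}(\nu_i \mu_i), \qquad E(\nu_i) = 1, \qquad \mathrm{var}(\nu_i) = \tau,
\]
\[
\text{so that} \quad E(Y_i) = \mu_i, \qquad \mathrm{var}(Y_i) = \mu_i + \tau \mu_i^2, \qquad H_0\colon \tau = 0 .
\]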


Journal of the American Statistical Association | 1989

Tests for Detecting Overdispersion in Poisson Regression Models

C. B. Dean; J. F. Lawless

Abstract Poisson regression models are widely used in analyzing count data. This article develops tests for detecting extra-Poisson variation in such situations. The tests can be obtained as score tests against arbitrary mixed Poisson alternatives and are generalizations of tests of Fisher (1950) and Collings and Margolin (1985). Accurate approximations for computing significance levels are given, and the power of the tests against negative binomial alternatives is compared with those of the Pearson and deviance statistics. One way to test for extra-Poisson variation is to fit models that parametrically incorporate and then test for the absence of such variation within the models; for example, negative binomial models can be used in this way (Cameron and Trivedi 1986; Lawless 1987a). The tests in this article require only the Poisson model to be fitted. Two test statistics are developed that are motivated partly by a desire to have good distributional approximations for computing significance levels. Simu...
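A minimal sketch of how a score-type statistic of this kind can be computed after fitting only the Poisson model. The simulated data, the single covariate and the use of statsmodels are illustrative assumptions, and the unadjusted statistic below is one commonly associated with this work; treat its exact form here as an assumption rather than a quotation from the paper.

    import numpy as np
    from scipy.stats import norm
    import statsmodels.api as sm

    # illustrative data (assumed): counts y with one covariate x
    rng = np.random.default_rng(0)
    x = rng.uniform(0.0, 1.0, size=200)
    y = rng.poisson(np.exp(0.5 + 1.0 * x))

    # fit the ordinary Poisson regression; only the null model is required
    X = sm.add_constant(x)
    poisson_fit = sm.GLM(y, X, family=sm.families.Poisson()).fit()
    mu = poisson_fit.fittedvalues

    # unadjusted score-type statistic for extra-Poisson variation
    # (asymptotically standard normal under the Poisson null)
    T = np.sum((y - mu) ** 2 - y) / np.sqrt(2.0 * np.sum(mu ** 2))
    p_value = norm.sf(T)  # one-sided: large T indicates overdispersion
    print(f"T = {T:.3f}, one-sided p-value = {p_value:.3f}")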


Clinical Science | 2007

What is the role of non-invasive measurements of atherosclerosis in individual cardiovascular risk prediction?

Jerilynn C. Prior; Jason D. Nielsen; Christine L. Hitchcock; Lucy A. Williams; Yvette M. Vigna; C. B. Dean

Primary prevention of CVD (cardiovascular disease) is mainly based on the assessment of individual cardiovascular risk factors. However, often only the most important (conventional) cardiovascular risk factors are determined, and every level of risk factor exposure is associated with a substantial variation in the amount of atherosclerosis. Measuring the effect of risk factor exposure over time directly in the vessel might (partially) overcome these shortcomings. Several non-invasive imaging techniques have the potential to accomplish this, each of these techniques focusing on a different stage of the atherosclerotic process. In this review, we aim to define the current role of several of these non-invasive measurements of atherosclerosis in individual cardiovascular risk prediction, taking into account the most recent insights into the validity and reproducibility of these techniques and the results of recent prospective outcome trials. We conclude that, although the clinical application of FMD (flow-mediated dilation) and PWA (pulse wave analysis) in individual cardiovascular risk prediction seems far away, there may be a role for PWV (pulse wave velocity) and IMT (intima-media thickness) measurements in the near future.


Medical Care | 2006

Neonatal intensive care unit characteristics affect the incidence of severe intraventricular hemorrhage.

Anne Synnes; Ying C. MacNab; Zhenguo Qiu; Arne Ohlsson; Paul Gustafson; C. B. Dean; Shoo K. Lee

Objectives: The incidence of intraventricular hemorrhage (IVH), adjusted for known risk factors, varies across neonatal intensive care units (NICUs). The effect of NICU characteristics on this variation is unknown. The objective was to assess IVH attributable risks at both patient and NICU levels. Study Design: Subjects were <33 weeks’ gestation, <4 days old on admission in the Canadian Neonatal Network database (all infants admitted in 1996–97 to 17 NICUs). The variation in severe IVH rates was analyzed using Bayesian hierarchical modeling for patient-level and NICU-level factors. Results: Of 3772 eligible subjects, the overall crude incidence rate of grade 3–4 IVH was 8.3% (NICU range 2.0–20.5%). Male gender, extreme preterm birth, low Apgar score, vaginal birth, outborn birth, and high admission severity of illness accounted for 30% of the severe IVH rate variation; admission-day therapy-related variables (treatment of acidosis and hypotension) accounted for an additional 14%. NICU characteristics, independent of patient-level risk factors, accounted for 31% of the variation. NICUs with high patient volume and a high neonatologist/staff ratio had lower rates of severe IVH. Conclusions: The incidence of severe IVH is affected by NICU characteristics, suggesting new strategies to reduce this important adverse outcome.
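A rough sketch of the kind of two-level (patients nested in NICUs) logistic model this describes, written with PyMC. The simulated data, the single covariate and the priors are illustrative assumptions; the study's actual Bayesian hierarchical model is considerably richer.

    import numpy as np
    import pymc as pm

    # simulated stand-in data (assumed): infants nested in 17 NICUs
    rng = np.random.default_rng(1)
    n, n_nicu = 500, 17
    nicu = rng.integers(0, n_nicu, size=n)
    gest_age = rng.normal(29.0, 2.0, size=n)      # gestational age in weeks
    ivh = rng.binomial(1, 0.08, size=n)           # severe IVH indicator

    with pm.Model():
        # patient-level fixed effects
        beta0 = pm.Normal("beta0", 0.0, 5.0)
        beta_ga = pm.Normal("beta_ga", 0.0, 1.0)
        # NICU-level random intercepts capture between-unit variation
        sigma_u = pm.HalfNormal("sigma_u", 1.0)
        u = pm.Normal("u", 0.0, sigma_u, shape=n_nicu)
        logit_p = beta0 + beta_ga * (gest_age - 29.0) + u[nicu]
        pm.Bernoulli("obs", logit_p=logit_p, observed=ivh)
        idata = pm.sample(1000, tune=1000, target_accept=0.9)

    # the posterior for sigma_u summarises how much of the IVH variation
    # remains at the NICU level after adjusting for the patient covariate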


Statistics in Medicine | 2000

Parametric bootstrap and penalized quasi-likelihood inference in conditional autoregressive models

Ying C. MacNab; C. B. Dean

This paper discusses a variety of conditional autoregressive (CAR) models for mapping disease rates, beyond the usual first-order intrinsic CAR model. We illustrate the utility and scope of such models for handling different types of data structures. To encourage their routine use for map production at statistical and health agencies, a simple algorithm for fitting such models is presented. This is derived from penalized quasi-likelihood (PQL) inference which uses an analogue of best-linear unbiased estimation for the regional risk ratios and restricted maximum likelihood for the variance components. We offer the practitioner here the use of the parametric bootstrap for inference. It is more reliable than standard maximum likelihood asymptotics for inference purposes since relevant hypotheses for the mapping of rates lie on the boundary of the parameter space. We illustrate the parametric bootstrap test of the practically relevant and important simplifying hypothesis that there is no spatial autocorrelation. Although the parametric bootstrap requires computational effort, it is straightforward to implement and offers a wealth of information relating to the estimators and their properties. The proposed methodology is illustrated by analysing infant mortality in the province of British Columbia in Canada.
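The parametric bootstrap logic for a variance component on the boundary can be sketched in a far simpler setting than the CAR models above. The toy below tests a zero random-intercept variance in a Gaussian model; all data, model and helper choices are illustrative assumptions, not the paper's CAR/PQL machinery.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(2)

    # toy grouped data (assumed): 20 regions, 10 observations each
    n_groups, n_per = 20, 10
    g = np.repeat(np.arange(n_groups), n_per)
    x = rng.normal(size=n_groups * n_per)
    y = 1.0 + 0.5 * x + rng.normal(size=n_groups * n_per)
    data = pd.DataFrame({"y": y, "x": x, "g": g})

    def lr_stat(df):
        # likelihood ratio for H0: no random-intercept variance (boundary hypothesis)
        null = smf.ols("y ~ x", df).fit()
        alt = smf.mixedlm("y ~ x", df, groups=df["g"]).fit(reml=False)
        return 2.0 * (alt.llf - null.llf), null

    observed_lr, null_fit = lr_stat(data)

    # parametric bootstrap: simulate from the fitted null, refit, recompute the LR
    boot = []
    sigma_hat = np.sqrt(null_fit.scale)
    for _ in range(99):
        sim = data.copy()
        sim["y"] = null_fit.fittedvalues + rng.normal(0.0, sigma_hat, len(data))
        boot.append(lr_stat(sim)[0])

    p_value = (1 + sum(b >= observed_lr for b in boot)) / (1 + len(boot))
    print(f"LR = {observed_lr:.3f}, bootstrap p-value = {p_value:.3f}")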


Computational Statistics & Data Analysis | 2006

Approximate inference for disease mapping

L. M. Ainsworth; C. B. Dean

Disease mapping is an important area of statistical research. Contributions to the area over the last twenty years have been instrumental in helping to pinpoint potential causes of mortality and to provide a strategy for effective allocation of health funding. Because of the complexity of spatial analyses, new developments in methodology have not generally found application at Vital Statistics agencies. Inference for spatio-temporal analyses remains computationally prohibitive for the routine preparation of mortality atlases. This paper considers whether approximate methods of inference are reliable for mapping studies, especially in terms of providing accurate estimates of relative risks, ranks of regions and standard errors of risks. These approximate methods lie in the broader realm of approximate inference for generalized linear mixed models. Penalized quasi-likelihood is specifically considered here. The main focus is on assessing how close the penalized quasi-likelihood estimates are to target values, by comparison with the more rigorous and widespread Bayesian Markov chain Monte Carlo methods. No previous studies have compared these two methods. The quantities of prime interest are small-area relative risks and the estimated ranks of the risks, which are often used for ordering the regions. It will be shown that penalized quasi-likelihood is a reasonably accurate method of inference and can be recommended as a simple, yet quite precise, method for initial exploratory studies.


Statistical Science | 2013

Wildfire Prediction to Inform Fire Management: Statistical Science Challenges

Steve W. Taylor; Douglas G. Woolford; C. B. Dean; David L. Martell

Wildfire is an important system process of the earth that occurs across a wide range of spatial and temporal scales. A variety of methods have been used to predict wildfire phenomena during the past century to better our understanding of fire processes and to inform fire and land management decision-making. Statistical methods have an important role in wildfire prediction due to the inherent stochastic nature of fire phenomena at all scales. Predictive models have exploited several sources of data describing fire phenomena. Experimental data are scarce; observational data are dominated by statistics compiled by government fire management agencies, primarily for administrative purposes and increasingly from remote sensing observations. Fires are rare events at many scales. The data describing fire phenomena can be zero-heavy and nonstationary over both space and time. Users of fire modeling methodologies are mainly fire management agencies often working under great time constraints, thus, complex models have to be efficiently estimated. We focus on providing an understanding of some of the information needed for fire management decision-making and of the challenges involved in predicting fire occurrence, growth and frequency at regional, national and global scales.
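As a generic illustration of how zero-heavy count data are often handled (not a method taken from this article), a zero-inflated Poisson model can be fitted with statsmodels; the simulated covariate and counts below are assumptions.

    import numpy as np
    import statsmodels.api as sm
    from statsmodels.discrete.count_model import ZeroInflatedPoisson

    rng = np.random.default_rng(3)

    # simulated zero-heavy counts (assumed): e.g. weekly fire counts per grid cell
    n = 1000
    drought = rng.uniform(0.0, 1.0, size=n)           # illustrative fire-weather index
    ignition_possible = rng.binomial(1, 0.3, size=n)  # structural zeros
    counts = ignition_possible * rng.poisson(np.exp(-1.0 + 2.0 * drought))

    X = sm.add_constant(drought)
    zip_fit = ZeroInflatedPoisson(counts, X, exog_infl=X, inflation="logit").fit(maxiter=200)
    print(zip_fit.summary())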


Computational Statistics & Data Analysis | 2004

Penalized quasi-likelihood with spatially correlated data

C. B. Dean; M. D. Ugarte; Ana F. Militino

Abstract This article discusses and evaluates penalized quasi-likelihood (PQL) estimation techniques for the situation where random effects are correlated, as is typical in mapping studies. This is an approximate fitting technique which uses a Laplace approximation to the integrated mixed model likelihood. It is much easier to implement than usual maximum likelihood estimation. Our results show that the PQL estimates are reasonably unbiased for analysis of mixed Poisson models when there is correlation in the random effects, except when the means are sufficiently small to yield sparse data. However, although the normal approximation to the distribution of the parameter estimates works fairly well for the parameters in the mean it does not perform as well for the variance components. In addition, when the mean mortality counts are small, the estimated standard errors of the variance components tend to become more biased than those for the mean. We illustrate our approaches by applying PQL for mapping mortality in British Columbia, Canada, over the five-year period 1985–1989.
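In standard notation (a generic sketch, not the article's exact development), the Laplace approximation underlying PQL replaces the integrated mixed-model likelihood by an expansion about the maximizer of the penalized log-likelihood:

\[
L(\boldsymbol\beta,\boldsymbol\theta)
  = \int f(\mathbf{y}\mid\boldsymbol\beta,\mathbf{b})\,
        \phi(\mathbf{b};\mathbf{0},D(\boldsymbol\theta))\,d\mathbf{b}
  = \int e^{h(\mathbf{b})}\,d\mathbf{b}
  \approx (2\pi)^{q/2}\,
    \bigl|-h''(\tilde{\mathbf{b}})\bigr|^{-1/2}\,
    e^{h(\tilde{\mathbf{b}})},
\]

where $h(\mathbf{b}) = \log f(\mathbf{y}\mid\boldsymbol\beta,\mathbf{b}) + \log\phi(\mathbf{b};\mathbf{0},D(\boldsymbol\theta))$ and $\tilde{\mathbf{b}}$ maximizes $h$. Maximizing $h$ over $(\boldsymbol\beta,\mathbf{b})$ is equivalent to maximizing the penalized log-likelihood $\log f(\mathbf{y}\mid\boldsymbol\beta,\mathbf{b}) - \tfrac12\,\mathbf{b}^\top D(\boldsymbol\theta)^{-1}\mathbf{b}$, which is the "penalized" part of PQL.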


American Journal of Human Biology | 2012

Who is stressed? Comparing cortisol levels between individuals

Pablo A. Nepomnaschy; Terry C.K. Lee; Leilei Zeng; C. B. Dean

Cortisol is the most commonly used biomarker to compare physiological stress between individuals. Its use, however, is frequently inappropriate. Basal cortisol production varies markedly between individuals. Yet, in naturalistic studies that variation is often ignored, potentially leading to important biases.


Statistical Methods in Medical Research | 2014

A multi-state model for the analysis of changes in cognitive scores over a fixed time interval

Nader Fallah; C. B. Dean; Kenneth Rockwood

In this article, we present the novel approach of using a multi-state model to describe longitudinal changes in cognitive test scores. Scores are modelled according to a truncated Poisson distribution, conditional on survival to a fixed endpoint, with the Poisson mean dependent upon the baseline score and covariates. The model provides a unified treatment of the distribution of cognitive scores, taking into account baseline scores and survival. It offers a simple framework for the simultaneous estimation of the effect of covariates modulating these distributions, over different baseline scores. A distinguishing feature is that this approach permits estimation of the probabilities of transitions in different directions: improvements, declines and death. The basic model is characterised by four parameters, two of which represent cognitive transitions in survivors, both for individuals with no cognitive errors at baseline and for those with non-zero errors, within the range of test scores. The two other parameters represent corresponding likelihoods of death. The model is applied to an analysis of data from the Canadian Study of Health and Aging (1991–2001) to identify the risk of death, and of changes in cognitive function as assessed by errors in the Modified Mini-Mental State Examination. The model performance is compared with more conventional approaches, such as multivariate linear and polytomous regressions. This model can also be readily applied to a wide variety of other cognitive test scores and phenomena which change with age.
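A sketch of the truncated-Poisson piece in generic notation (the log link and covariate structure here are assumptions for illustration; the article's full multi-state model also includes the transitions to death):

\[
P(Y = y \mid \text{survival}, \mathbf{x})
  = \frac{\mu^{y} e^{-\mu}/y!}{\sum_{k=0}^{K} \mu^{k} e^{-\mu}/k!},
  \qquad y = 0,1,\dots,K,
  \qquad \log\mu = \beta_0 + \beta_1\, y_{0} + \mathbf{x}^\top\boldsymbol\gamma,
\]

where $y_{0}$ is the baseline error count and $K$ is the maximum possible number of errors on the test.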

Collaboration


Dive into C. B. Dean's collaborations.

Top Co-Authors

Ying C. MacNab
University of British Columbia

Cindy Feng
University of Saskatchewan

Ana F. Militino
Universidad Pública de Navarra