Publications


Featured research published by Lawrence C. McCandless.


International Journal of Epidemiology | 2014

Good practices for quantitative bias analysis

Timothy L. Lash; Matthew P Fox; Richard F. MacLehose; George Maldonado; Lawrence C. McCandless; Sander Greenland

Quantitative bias analysis serves several objectives in epidemiological research. First, it provides a quantitative estimate of the direction, magnitude and uncertainty arising from systematic errors. Second, the acts of identifying sources of systematic error, writing down models to quantify them, assigning values to the bias parameters and interpreting the results combat the human tendency towards overconfidence in research results, syntheses and critiques, and in the inferences that rest upon them. Finally, by suggesting aspects that dominate uncertainty in a particular research result or topic area, bias analysis can guide efficient allocation of sparse research resources. The fundamental methods of bias analysis have been known for decades, and there have been calls for more widespread use for nearly as long. There was a time when some believed that bias analyses were rarely undertaken because the methods were not widely known and because automated computing tools were not readily available to implement them. These shortcomings have largely been resolved. We must, therefore, contemplate other barriers to implementation. One possibility is that practitioners avoid the analyses because they lack confidence in the practice of bias analysis. The purpose of this paper is therefore to describe what we view as good practices for applying quantitative bias analysis to epidemiological data, directed towards those familiar with the methods. We focus on answering questions often posed to those of us who advocate incorporation of bias analysis methods into teaching and research: When is bias analysis practical and productive? How does one select the biases that ought to be addressed? How does one select a method to model biases? How does one assign values to the parameters of a bias model? How does one present and interpret a bias analysis? We hope that our guide to good practices for conducting and presenting bias analyses will encourage more widespread use of bias analysis to estimate the potential magnitude and direction of biases, as well as the uncertainty in estimates potentially influenced by them.
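The multiplicative bias formula for a binary unmeasured confounder lends itself to a compact Monte Carlo sketch of probabilistic bias analysis. The prior ranges below (confounder prevalences among exposed and unexposed, and the confounder-outcome risk ratio) are illustrative assumptions, not values taken from the paper:

```python
import math
import random
import statistics

def bias_factor(p1, p0, rr_ud):
    """Multiplicative bias from a binary unmeasured confounder:
    p1, p0 = confounder prevalence among exposed / unexposed,
    rr_ud = confounder-outcome risk ratio."""
    return (p1 * (rr_ud - 1) + 1) / (p0 * (rr_ud - 1) + 1)

def probabilistic_bias_analysis(rr_obs, n_sims=20000, seed=1):
    """Sample bias parameters from (assumed) prior distributions and
    return the 2.5th, 50th and 97.5th percentiles of the adjusted RR."""
    rng = random.Random(seed)
    adjusted = []
    for _ in range(n_sims):
        p1 = rng.uniform(0.4, 0.7)   # prevalence among exposed (assumed prior)
        p0 = rng.uniform(0.1, 0.4)   # prevalence among unexposed (assumed prior)
        rr_ud = math.exp(rng.gauss(math.log(2.0), 0.2))  # confounder-outcome RR
        adjusted.append(rr_obs / bias_factor(p1, p0, rr_ud))
    adjusted.sort()
    cuts = statistics.quantiles(adjusted, n=40)  # cut points every 2.5%
    return cuts[0], statistics.median(adjusted), cuts[-1]
```

The resulting interval reflects both the assumed bias and the uncertainty about its magnitude, which is the basic output a bias analysis is meant to present.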


Statistics in Medicine | 2009

Bayesian propensity score analysis for observational data

Lawrence C. McCandless; Paul Gustafson; Peter C. Austin

In the analysis of observational data, stratifying patients on the estimated propensity scores reduces confounding from measured variables. Confidence intervals for the treatment effect are typically calculated without acknowledging uncertainty in the estimated propensity scores, and intuitively this may yield inferences that are falsely precise. In this paper, we describe a Bayesian method that models the propensity score as a latent variable. We consider observational studies with a dichotomous treatment, dichotomous outcome, and measured confounders where the log odds ratio is the measure of effect. Markov chain Monte Carlo is used for posterior simulation. We study the impact of modelling uncertainty in the propensity scores in a case study investigating the effect of statin therapy on mortality in Ontario patients discharged from hospital following acute myocardial infarction. Our analysis reveals that the Bayesian credible interval for the treatment effect is 10 per cent wider compared with a conventional propensity score analysis. Using simulations, we show that when the association between treatment and confounders is weak, this increases uncertainty in the estimated propensity scores. Bayesian interval estimates for the treatment effect are longer on average, though there is little improvement in coverage probability. A novel feature of the proposed method is that it fits models for the treatment and outcome simultaneously rather than one at a time. The method uses the outcome variable to inform the fit of the propensity model. We explore the performance of the estimated propensity scores using cross-validation.
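A minimal, self-contained sketch of the conventional two-step analysis this paper extends: estimate propensity scores by logistic regression, stratify into quintiles, and average stratum-specific outcome differences. The single-covariate model and simulated data are illustrative assumptions, not the paper's setting:

```python
import math
import random

def estimate_ps(x, z, steps=500, lr=1.0):
    """Fit P(Z=1 | x) = expit(a + b*x) by gradient ascent on the
    logistic log-likelihood (one covariate, for simplicity)."""
    a = b = 0.0
    n = len(x)
    for _ in range(steps):
        ga = gb = 0.0
        for xi, zi in zip(x, z):
            p = 1.0 / (1.0 + math.exp(-(a + b * xi)))
            ga += (zi - p) / n
            gb += (zi - p) * xi / n
        a += lr * ga
        b += lr * gb
    return [1.0 / (1.0 + math.exp(-(a + b * xi))) for xi in x]

def stratified_effect(y, z, ps, n_strata=5):
    """Mean treated-minus-control outcome difference across propensity
    score quintiles; note this ignores uncertainty in the estimated
    scores, which is exactly what the Bayesian approach addresses."""
    order = sorted(range(len(ps)), key=lambda i: ps[i])
    size = len(order) // n_strata
    diffs = []
    for s in range(n_strata):
        idx = order[s * size:] if s == n_strata - 1 else order[s * size:(s + 1) * size]
        treated = [y[i] for i in idx if z[i] == 1]
        control = [y[i] for i in idx if z[i] == 0]
        if treated and control:
            diffs.append(sum(treated) / len(treated) - sum(control) / len(control))
    return sum(diffs) / len(diffs)
```

A fully Bayesian version, as in the paper, would instead sample (a, b) from their posterior and propagate that uncertainty into the interval for the treatment effect.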


Journal of Clinical Epidemiology | 2008

A sensitivity analysis using information about measured confounders yielded improved uncertainty assessments for unmeasured confounding

Lawrence C. McCandless; Paul Gustafson; Adrian R. Levy

OBJECTIVE: In the analysis of observational data, the argument is sometimes made that if adjustment for measured confounders induces little change in the treatment-outcome association, then there is less concern about the extent to which the association is driven by unmeasured confounding. We quantify this reasoning using Bayesian sensitivity analysis (BSA) for unmeasured confounding. Using hierarchical models, the confounding effect of a binary unmeasured variable is modeled as arising from the same distribution as that of the measured confounders. Our objective is to investigate the performance of the method compared with sensitivity analysis that assumes no relationship between measured and unmeasured confounders.

STUDY DESIGN AND SETTING: We apply the method in an observational study of the effectiveness of beta-blocker therapy in heart failure patients.

RESULTS: BSA for unmeasured confounding using hierarchical prior distributions yields an odds ratio (OR) of 0.72 (95% credible interval [CrI]: 0.56, 0.93) for the association between beta-blockers and mortality, whereas using independent priors yields OR = 0.72 (95% CrI: 0.45, 1.15).

CONCLUSION: If the confounding effect of a binary unmeasured variable is similar to that of measured confounders, then conventional sensitivity analysis may give results that overstate the uncertainty about bias.


Journal of the American Statistical Association | 2012

Adjustment for Missing Confounders Using External Validation Data and Propensity Scores

Lawrence C. McCandless; Sylvia Richardson; Nicky Best

Reducing bias from missing confounders is a challenging problem in the analysis of observational data. Information about missing variables is sometimes available from external validation data, such as surveys or secondary samples drawn from the same source population. In principle, the validation data permit us to recover information about the missing data, but the difficulty is in eliciting a valid model for the nuisance distribution of the missing confounders. Motivated by a British study of the effects of trihalomethane exposure on risk of full-term low birthweight, we describe a flexible Bayesian procedure for adjusting for a vector of missing confounders using external validation data. We summarize the missing confounders with a scalar summary score using the propensity score methodology of Rosenbaum and Rubin. The score has the property that it induces conditional independence between the exposure and the missing confounders, given the measured confounders. It balances the unmeasured confounders across exposure groups, within levels of measured covariates. To adjust for bias, we need only model and adjust for the summary score during Markov chain Monte Carlo computation. Simulation results illustrate that the proposed method reduces bias from several missing confounders over a range of different sample sizes for the validation data. Appendices A–C are available as online supplementary material.


The International Journal of Biostatistics | 2010

Cutting Feedback in Bayesian Regression Adjustment for the Propensity Score

Lawrence C. McCandless; Ian J. Douglas; Stephen Evans; Liam Smeeth

McCandless, Gustafson and Austin (2009) describe a Bayesian approach to regression adjustment for the propensity score to reduce confounding. A unique property of the method is that the treatment and outcome models are combined via Bayes' theorem. However, this estimation procedure can be problematic if the outcome model is misspecified. We observe feedback that can bias propensity score estimates. Building on recent innovations in Bayesian computation, we propose a technique for cutting feedback in a Bayesian propensity analysis. We use the posterior distribution of the propensity scores as an input in the regression model for the outcome. The method is approximately Bayesian in the sense that it does not use the full likelihood for estimation. Nonetheless, it severs feedback between the treatment and outcome, giving propensity score estimates that are free from bias but modeled with uncertainty. We illustrate the method in a matched cohort study investigating the effect of statins on primary stroke prevention.
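The "cut" idea can be illustrated on a toy two-module Normal model rather than the full propensity setting; the model and numbers below are hypothetical, chosen only to show the one-way flow of information:

```python
import random
import statistics

def cut_posterior(data1, data2, n_draws=5000, seed=7):
    """Two-module 'cut' sampler on a toy Normal model.
    Module 1: data1 ~ N(theta, 1), flat prior, so theta | data1 ~ N(mean1, 1/n1).
    Module 2: data2 ~ N(theta + delta, 1), flat prior on delta.
    The cut: theta is drawn from module 1's posterior only; data2 never
    feeds back into theta (the analogue of shielding the propensity
    model from the outcome model). Returns posterior mean and sd of delta."""
    rng = random.Random(seed)
    n1, n2 = len(data1), len(data2)
    m1, m2 = statistics.fmean(data1), statistics.fmean(data2)
    draws = []
    for _ in range(n_draws):
        theta = rng.gauss(m1, (1.0 / n1) ** 0.5)           # module 1 only
        delta = rng.gauss(m2 - theta, (1.0 / n2) ** 0.5)   # module 2, given theta
        draws.append(delta)
    return statistics.fmean(draws), statistics.stdev(draws)
```

Because theta is sampled before delta at every iteration, uncertainty from module 1 propagates into delta, but nothing about delta (the outcome-side parameter) ever updates theta, which is the structure the paper exploits.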


International Journal of Environmental Research and Public Health | 2010

Probabilistic Approaches to Better Quantifying the Results of Epidemiologic Studies

Paul Gustafson; Lawrence C. McCandless

Typical statistical analysis of epidemiologic data captures uncertainty due to random sampling variation, but ignores more systematic sources of variation such as selection bias, measurement error, and unobserved confounding. Such sources are often only mentioned via qualitative caveats, perhaps under the heading of ‘study limitations.’ Recently, however, there has been considerable interest and advancement in probabilistic methodologies for more integrated statistical analysis. Such techniques hold the promise of replacing a confidence interval reflecting only random sampling variation with an interval reflecting all, or at least more, sources of uncertainty. We survey and appraise the recent literature in this area, giving some prominence to the use of Bayesian statistical methodology.


European Journal of Housing Policy | 2014

Emergency department utilisation among formerly homeless adults with mental disorders after one year of Housing First interventions: a randomised controlled trial

Angela Russolillo; Michelle Patterson; Lawrence C. McCandless; Akm Moniruzzaman; Julian M. Somers

Homeless individuals represent a disadvantaged and marginalised group who experience increased rates of physical illness as well as mental and substance use disorders. Compared to stably housed individuals, homeless adults with mental disorders use hospital emergency departments and other acute health care services at a higher frequency. Housing First integrates housing and support services in a client-centred model and has been shown to reduce acute health care among homeless populations. The present analysis is based on participants enrolled in the Vancouver At Home Study (n = 297) randomised to one of three intervention arms (Housing First in a ‘congregate setting’, in ‘scattered site’ [SS] apartments in the private rental market, or to ‘treatment as usual’ [TAU] where individuals continue to use existing services available to homeless adults with mental illness), and incorporates linked data from a regional database representing six urban emergency departments. Compared to TAU, significantly lower numbers of emergency visits were observed during the post-randomisation period in the SS group (adjusted rate ratio 0.55 [0.35,0.86]). Our results suggest that Housing First, particularly the SS model, produces significantly lower hospital emergency department visits among homeless adults with a mental disorder. These findings demonstrate the potential effectiveness of Housing First to reduce acute health care use among homeless individuals and have implications for future health and housing policy initiatives.


Biometrics | 2010

Simplified Bayesian Sensitivity Analysis for Mismeasured and Unobserved Confounders

Paul Gustafson; Lawrence C. McCandless; Adrian R. Levy; Sylvia Richardson

We examine situations where interest lies in the conditional association between outcome and exposure variables, given potential confounding variables. Concern arises that some potential confounders may not be measured accurately, whereas others may not be measured at all. Some form of sensitivity analysis might be employed, to assess how this limitation in available data impacts inference. A Bayesian approach to sensitivity analysis is straightforward in concept: a prior distribution is formed to encapsulate plausible relationships between unobserved and observed variables, and posterior inference about the conditional exposure-disease relationship then follows. In practice, though, it can be challenging to form such a prior distribution in both a realistic and simple manner. Moreover, it can be difficult to develop an attendant Markov chain Monte Carlo (MCMC) algorithm that will work effectively on a posterior distribution arising from a highly nonidentified model. In this article, a simple prior distribution for acknowledging both poorly measured and unmeasured confounding variables is developed. It requires that only a small number of hyperparameters be set by the user. Moreover, a particular computational approach for posterior inference is developed, because application of MCMC in a standard manner is seen to be ineffective in this problem.


BMC Health Services Research | 2014

Examining the relationship between health-related need and the receipt of care by participants experiencing homelessness and mental illness

Lauren Currie; Michelle Patterson; Akm Moniruzzaman; Lawrence C. McCandless; Julian M. Somers

Background: People experiencing homelessness and mental illness face multiple barriers to care. The goal of this study was to examine the association between health service use and indicators of need among individuals experiencing homelessness and mental illness in Vancouver, Canada. We hypothesized that those with more severe mental illness would access greater levels of primary and specialist health services than those with less severe mental illness.

Methods: Participants met criteria for homelessness and current mental disorder using standardized criteria (n = 497). Interviews assessed current health status and involvement with a variety of health services, including specialist, general practice, and emergency services. The 80th percentile was used to differentiate ‘low health service use’ and ‘high health service use’. Using multivariate logistic regression analysis, we analyzed associations between predisposing, enabling and need-related factors and levels of primary and specialist health service use.

Results: Twenty-one percent of participants had high primary care use, and 12% had high use of specialist services. Factors significantly (p ≤ 0.05) associated with high primary care use were: multiple physical illnesses [AOR 2.74 (1.12, 6.70)]; poor general health [AOR 1.68 (1.01, 2.81)]; having a regular family physician [AOR 2.27 (1.27, 4.07)]; and negative social relationships [AOR 1.74 (1.01, 2.99)]. Conversely, having a more severe mental disorder (e.g. psychotic disorder) was significantly associated with lower odds of high service use [AOR 0.59 (0.35, 0.97)]. For specialist care, recent history of psychiatric hospitalization [AOR 2.53 (1.35, 4.75)] and major depressive episode [AOR 1.98 (1.11, 3.56)] were associated with high use, while having a blood-borne infectious disease (i.e., HIV, HCV, HBV) was associated with lower odds of high service use.

Conclusions: Contrary to our hypotheses, we found that individuals with greater assessed need, including more severe mental disorders and blood-borne infectious diseases, had significantly lower odds of being high health service users than those with lower assessed needs. Our findings reveal an important gap between levels of need and service involvement for individuals who are both homeless and mentally ill, and have implications for health service reform in relation to the unmet and complex needs of a marginalized sub-population. (Trial registration: ISRCTN57595077 and ISRCTN66721740.)


Statistics in Medicine | 2012

Hierarchical priors for bias parameters in Bayesian sensitivity analysis for unmeasured confounding.

Lawrence C. McCandless; Paul Gustafson; Adrian R. Levy; Sylvia Richardson

Recent years have witnessed new innovation in Bayesian techniques to adjust for unmeasured confounding. A challenge with existing methods is that the user is often required to elicit prior distributions for high-dimensional parameters that model competing bias scenarios. This can render the methods unwieldy. In this paper, we propose a novel methodology to adjust for unmeasured confounding that derives default priors for bias parameters for observational studies with binary covariates. The confounding effects of measured and unmeasured variables are treated as exchangeable within a Bayesian framework. We model the joint distribution of covariates by using a log-linear model with pairwise interaction terms. Hierarchical priors constrain the magnitude and direction of bias parameters. An appealing property of the method is that the conditional distribution of the unmeasured confounder follows a logistic model, giving a simple equivalence with previously proposed methods. We apply the method in a data example from pharmacoepidemiology and explore the impact of different priors for bias parameters on the analysis results.

Collaboration


Dive into Lawrence C. McCandless's collaborations.

Top Co-Authors

Paul Gustafson

University of British Columbia
