Publications


Featured research published by Guobing Lu.


Journal of the American Statistical Association | 2006

Assessing Evidence Inconsistency in Mixed Treatment Comparisons

Guobing Lu; A. E. Ades

Randomized comparisons among several treatments give rise to an incomplete-blocks structure known as mixed treatment comparisons (MTCs). To analyze such data structures, it is crucial to assess whether the disparate evidence sources provide consistent information about the treatment contrasts. In this article we propose a general method for assessing evidence inconsistency in the framework of Bayesian hierarchical models. We begin with the distinction between basic parameters, which have prior distributions, and functional parameters, which are defined in terms of basic parameters. Based on a graphical analysis of MTC structures, evidence inconsistency is defined as a relation between a functional parameter and at least two basic parameters, supported by at least three evidence sources. The inconsistency degrees of freedom (ICDF) is the number of such inconsistencies. We represent evidence consistency as a set of linear relations between effect parameters on the log odds ratio scale, then relax these relations to allow for inconsistency by adding to the model random inconsistency factors (ICFs). The number of ICFs is determined by the ICDF. The overall consistency between evidence sources can be assessed by comparing models with and without ICFs, whereas their posterior distribution reflects the extent of inconsistency in particular evidence cycles. The methods are elucidated using two published datasets, implemented with standard Markov chain Monte Carlo software.
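For a network made up of two-arm trials, the ICDF described above reduces to a count of independent loops in the evidence graph: E − T + C, where E is the number of pairwise contrasts with direct evidence, T the number of treatments, and C the number of connected components. A minimal sketch of that count (the function and graph encoding are our own illustration; multi-arm trials need the fuller treatment in the paper):

```python
def icdf(edges):
    """Inconsistency degrees of freedom for a network of two-arm
    comparisons: independent loops = E - T + C, where E is the number
    of distinct pairwise contrasts with direct evidence, T the number
    of treatments, and C the number of connected components."""
    edges = {frozenset(e) for e in edges}          # ignore direction and duplicates
    nodes = {t for e in edges for t in e}
    parent = {t: t for t in nodes}

    def find(x):                                   # union-find with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    components = len(nodes)
    for a, b in (tuple(e) for e in edges):
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[ra] = rb
            components -= 1
    return len(edges) - len(nodes) + components
```

A triangle of AB, AC, and BC evidence gives ICDF = 1 (one loop, one possible inconsistency), while a star or chain with no loops gives ICDF = 0.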


Research Synthesis Methods | 2012

Consistency and inconsistency in network meta-analysis: concepts and models for multi-arm studies

Julian P. T. Higgins; Dan Jackson; Jessica Kate Barrett; Guobing Lu; A. E. Ades; Ian R. White

Meta-analyses that simultaneously compare multiple treatments (usually referred to as network meta-analyses or mixed treatment comparisons) are becoming increasingly common. An important component of a network meta-analysis is an assessment of the extent to which different sources of evidence are compatible, both substantively and statistically. A simple indirect comparison may be confounded if the studies involving one of the treatments of interest are fundamentally different from the studies involving the other treatment of interest. Here, we discuss methods for addressing inconsistency of evidence from comparative studies of different treatments. We define and review basic concepts of heterogeneity and inconsistency, and attempt to introduce a distinction between ‘loop inconsistency’ and ‘design inconsistency’. We then propose that the notion of design-by-treatment interaction provides a useful general framework for investigating inconsistency. In particular, using design-by-treatment interactions successfully addresses complications that arise from the presence of multi-arm trials in an evidence network. We show how the inconsistency model proposed by Lu and Ades is a restricted version of our full design-by-treatment interaction model and that there may be several distinct Lu–Ades models for any particular data set. We introduce novel graphical methods for depicting networks of evidence, clearly depicting multi-arm trials and illustrating where there is potential for inconsistency to arise. We apply various inconsistency models to data from trials of different comparisons among four smoking cessation interventions and show that models seeking to address loop inconsistency alone can run into problems.
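A convenient by-product of the design-by-treatment interaction framework is a simple parameter count: the full interaction model carries Σ_d (t_d − 1) contrast means over the distinct designs d (with t_d arms each), against T − 1 under consistency, and the difference is the number of inconsistency parameters. A sketch of that count (our own illustration; a connected network is assumed):

```python
def dbt_inconsistency_df(designs):
    """Number of inconsistency parameters in the full design-by-treatment
    interaction model: sum over distinct designs of (arms - 1), minus
    (treatments - 1) for the consistency model (connected network assumed)."""
    designs = {frozenset(d) for d in designs}       # distinct designs only
    treatments = {t for d in designs for t in d}
    return sum(len(d) - 1 for d in designs) - (len(treatments) - 1)
```

For a triangle of two-arm designs (AB, AC, BC) this gives 1, matching the single Lu–Ades inconsistency factor; adding a three-arm ABC design raises the count to 3, illustrating how multi-arm trials enlarge the interaction model.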


Medical Decision Making | 2005

The Interpretation of Random-Effects Meta-Analysis in Decision Models

A. E. Ades; Guobing Lu; Julian P. T. Higgins

This article shows that the interpretation of the random-effects models used in meta-analysis to summarize heterogeneous treatment effects can have a marked effect on the results from decision models. Sources of variation in meta-analysis include the following: random variation in outcome definition (amounting to a form of measurement error), variation between the patient groups in different trials, variation between protocols, and variation in the way a given protocol is implemented. Each of these alternatives leads to a different model for how the heterogeneity in the effect sizes previously observed might relate to the effect size(s) in a future implementation. Furthermore, these alternative models require different computations and, when the net benefits are nonlinear in the efficacy parameters, result in different expected net benefits. The authors’ analysis suggests that the mean treatment effect from a random-effects meta-analysis will only seldom be an appropriate representation of the efficacy expected in a future implementation. Instead, modelers should consider either the predictive distribution of a future treatment effect, or they should assume that the future implementation will result in a distribution of treatment effects. A worked example, in a probabilistic, Bayesian posterior framework, is used to illustrate the alternative computations and to show how parameter uncertainty can be combined with variation between individuals and heterogeneity in meta-analysis.
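The distinction drawn above — plugging in the random-effects mean versus propagating the predictive distribution — matters precisely when net benefit is nonlinear in the treatment effect. A toy simulation (the log-odds-ratio scale, normal heterogeneity, and the illustrative net-benefit function are our own assumptions, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
mu, tau = -0.5, 0.6          # random-effects mean and heterogeneity SD (log OR)

def net_benefit(d, baseline=0.2):
    """Nonlinear net benefit: absolute risk reduction implied by a log OR."""
    odds = baseline / (1 - baseline) * np.exp(d)
    return baseline - odds / (1 + odds)

# Plug-in: evaluate net benefit at the random-effects mean.
nb_mean = net_benefit(mu)

# Predictive: average net benefit over the distribution of future effects.
draws = rng.normal(mu, tau, 200_000)
nb_pred = net_benefit(draws).mean()
```

Because the risk scale is a nonlinear function of the log odds ratio, `nb_pred` falls visibly below `nb_mean` here: averaging net benefit over plausible future effects is not the same as evaluating it at the average effect.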


PharmacoEconomics | 2006

Bayesian methods for evidence synthesis in cost-effectiveness analysis

A. E. Ades; Mark Sculpher; Alex J. Sutton; Keith R. Abrams; Nicola J. Cooper; Nicky J. Welton; Guobing Lu

Recently, health systems internationally have begun to use cost-effectiveness research as formal inputs into decisions about which interventions and programmes should be funded from collective resources. This process has raised some important methodological questions for this area of research. This paper considers one set of issues related to the synthesis of effectiveness evidence for use in decision-analytic cost-effectiveness (CE) models, namely the need for the synthesis of all sources of available evidence, although these may not ‘fit neatly’ into a CE model. Commonly encountered problems include the absence of head-to-head trial evidence comparing all options under comparison, the presence of multiple endpoints from trials and different follow-up periods. Full evidence synthesis for CE analysis also needs to consider treatment effects between patient subpopulations and the use of nonrandomised evidence. Bayesian statistical methods represent a valuable set of analytical tools to utilise indirect evidence and can make a powerful contribution to the decision-analytic approach to CE analysis. This paper provides a worked example and a general overview of these methods with particular emphasis on their use in economic evaluation.


Medical Decision Making | 2013

Evidence Synthesis for Decision Making 4: Inconsistency in Networks of Evidence Based on Randomized Controlled Trials

Sofia Dias; Nicky J. Welton; Alex J. Sutton; Deborah M. Caldwell; Guobing Lu; A. E. Ades

Inconsistency can be thought of as a conflict between “direct” evidence on a comparison between treatments B and C and “indirect” evidence gained from AC and AB trials. Like heterogeneity, inconsistency is caused by effect modifiers and specifically by an imbalance in the distribution of effect modifiers in the direct and indirect evidence. Defining inconsistency as a property of loops of evidence, we describe the relation between inconsistency and heterogeneity and the difficulties created by multiarm trials. We set out an approach to assessing consistency in 3-treatment triangular networks and in larger circuit structures, describe its extension to certain special structures in which independent tests for inconsistency can be created, and outline methods suitable for more complex networks. Sample WinBUGS code is given in an appendix. The steps that can be taken to minimize the risk of drawing incorrect conclusions from indirect comparisons and network meta-analysis are the same steps that will minimize heterogeneity in pairwise meta-analysis. Empirical indicators that can provide reassurance and the question of how to respond to inconsistency are also discussed.
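The direct-versus-indirect conflict described above can be illustrated with the classic Bucher-style calculation for a single ABC loop. The numbers below are invented for illustration; the paper's WinBUGS appendix performs the equivalent check within a full Bayesian model:

```python
import math

def inconsistency_z(d_ab, se_ab, d_ac, se_ac, d_bc, se_bc):
    """Compare direct BC evidence with the indirect estimate AC - AB.
    Returns the inconsistency estimate w, its SE, and a z statistic."""
    d_bc_indirect = d_ac - d_ab
    w = d_bc - d_bc_indirect
    se_w = math.sqrt(se_bc**2 + se_ab**2 + se_ac**2)
    return w, se_w, w / se_w

# Hypothetical log odds ratios from three pairwise meta-analyses
w, se_w, z = inconsistency_z(-0.3, 0.1, -0.8, 0.15, -0.2, 0.12)
```

Here the indirect BC estimate is −0.8 − (−0.3) = −0.5, so the direct estimate of −0.2 implies an inconsistency of 0.3 with a z statistic of about 1.39 — not, on its own, strong evidence of conflict.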


Research Synthesis Methods | 2012

Automating network meta-analysis

Gert van Valkenhoef; Guobing Lu; Bert de Brock; Hans L. Hillege; A. E. Ades; Nicky J. Welton

Mixed treatment comparison (MTC) (also called network meta-analysis) is an extension of traditional meta-analysis to allow the simultaneous pooling of data from clinical trials comparing more than two treatment options. Typically, MTCs are performed using general-purpose Markov chain Monte Carlo software such as WinBUGS, requiring a model and data to be specified using a specific syntax. It would be preferable if, for the most common cases, both could be derived from a well-structured data file that can be easily checked for errors. Automation is particularly valuable for simulation studies in which the large number of MTCs that have to be estimated may preclude manual model specification and analysis. Moreover, automated model generation raises issues that provide additional insight into the nature of MTC. We present a method for the automated generation of Bayesian homogeneous variance random effects consistency models, including the choice of basic parameters and trial baselines, priors, and starting values for the Markov chain(s). We validate our method against the results of five published MTCs. The method is implemented in freely available open source software. This means that performing an MTC no longer requires manually writing a statistical model. This reduces time and effort, and facilitates error checking of the dataset.
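The automated choice of basic parameters mentioned above amounts to picking a spanning tree of the comparison graph; every remaining contrast then becomes a functional parameter derived by consistency. A simplified sketch of one such choice (our own illustration, using breadth-first search from an arbitrary reference treatment):

```python
from collections import deque

def basic_parameters(edges, reference):
    """Pick basic parameters along a BFS spanning tree of the comparison
    graph, rooted at a reference treatment; all contrasts not in the tree
    are functional parameters obtained by consistency relations."""
    adj = {}
    for a, b in edges:
        adj.setdefault(a, set()).add(b)
        adj.setdefault(b, set()).add(a)
    tree, seen, queue = [], {reference}, deque([reference])
    while queue:
        node = queue.popleft()
        for nxt in sorted(adj.get(node, ())):   # sorted for determinism
            if nxt not in seen:
                seen.add(nxt)
                tree.append((node, nxt))        # basic parameter d(node, nxt)
                queue.append(nxt)
    return tree
```

For the triangle AB, AC, BC with reference A, this selects d(A,B) and d(A,C) as basic parameters, leaving d(B,C) = d(A,C) − d(A,B) as the functional parameter.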


Biostatistics | 2009

Modeling between-trial variance structure in mixed treatment comparisons

Guobing Lu; A. E. Ades

In mixed treatment comparison (MTC) meta-analysis, modeling the heterogeneity in between-trial variances across studies is a difficult problem because of the constraints on the variances inherited from the MTC structure. Starting from a consistent Bayesian hierarchical model for the mean treatment effects, we represent the variance configuration by a set of triangle inequalities on the standard deviations. We take the separation strategy (Barnard and others, 2000) to specify prior distributions for standard deviations and correlations separately. The covariance matrix of the latent treatment arm effects can be employed as a vehicle to load the triangular constraints, which in addition allows incorporation of prior beliefs about the correlations between treatment effects. The spherical parameterization based on Cholesky decomposition (Pinheiro and Bates, 1996) is used to generate a positive-definite matrix for the prior correlations in Markov chain Monte Carlo (MCMC). Elicited prior information on correlations between treatment arms is introduced in the form of its equivalent data likelihood. The procedure is implemented in a MCMC framework and illustrated with example data sets from medical research practice.
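The spherical parameterization cited above maps unconstrained angles to a valid Cholesky factor, so every draw yields a positive-definite correlation matrix with unit diagonal. A numerical sketch (NumPy; the dimensions and angle values are arbitrary):

```python
import numpy as np

def spherical_to_corr(angles):
    """Build a correlation matrix from spherical angles in (0, pi).
    angles: list of rows, row i (counting from 1) holding i angles,
    giving the spherical coordinates of row i of the Cholesky factor."""
    k = len(angles) + 1
    L = np.zeros((k, k))
    L[0, 0] = 1.0
    for i, row in enumerate(angles, start=1):
        rem = 1.0                      # remaining radius on the unit sphere
        for j, theta in enumerate(row):
            L[i, j] = rem * np.cos(theta)
            rem *= np.sin(theta)
        L[i, len(row)] = rem           # rows have unit norm by construction
    return L @ L.T

R = spherical_to_corr([[1.0], [0.5, 2.0]])
```

Because each row of the Cholesky factor lies on the unit sphere and the angles stay in (0, π), the resulting matrix is guaranteed symmetric, unit-diagonal, and positive definite — which is what makes the parameterization convenient inside MCMC.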


Statistics in Medicine | 2008

Mixed treatment comparison with multiple outcomes reported inconsistently across trials: Evaluation of antivirals for treatment of influenza A and B

Nicky J. Welton; Nicola J. Cooper; A. E. Ades; Guobing Lu; Alex J. Sutton

We present a mixed treatment meta-analysis of antivirals for treatment of influenza, where some trials report summary measures on at least one of the two outcomes: time to alleviation of fever and time to alleviation of symptoms. The synthesis is further complicated by the variety of summary measures reported: mean time, median time and proportion symptom free at the end of follow-up. We compare several models using the deviance information criteria and the contribution of different evidence sources to the residual deviance to aid model selection. A Weibull model with exchangeable treatment effects that are independent for each outcome but have a common random effect mean for the two outcomes gives the best fit according to these criteria. This model allows us to summarize treatment effect on two outcomes in a single summary measure and draw conclusions as to the most effective treatment. Amantadine and Oseltamivir were the most effective treatments, with the probability of being most effective of 0.56 and 0.37, respectively. Amantadine reduces the duration of symptoms by an estimated 2.8 days, and Oseltamivir 2.6 days, compared with placebo. The models provide flexible methods for synthesis of evidence on multiple treatments in the absence of head-to-head trial data, when different summary measures are used and either different clinical outcomes are reported or where the same outcomes are reported at different or multiple time points.
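Mapping the disparate summary measures above onto a common scale relies on the standard Weibull relationships between the mean, the median, and the proportion event free by a given follow-up time. A quick numerical sketch (the shape and scale values are invented for illustration):

```python
import math

shape, scale = 1.3, 4.0    # hypothetical Weibull for time to symptom relief (days)

# Mean and median of a Weibull(shape, scale) distribution
mean_time = scale * math.gamma(1 + 1 / shape)
median_time = scale * math.log(2) ** (1 / shape)

# Proportion symptom free by day 7: the Weibull CDF at t = 7
p_free_day7 = 1 - math.exp(-(7 / scale) ** shape)
```

Trials reporting any one of these quantities can therefore contribute, via the shared (shape, scale) parameters, to the same synthesis — which is the device the model above exploits.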


Journal of Health Services Research & Policy | 2008

Multiparameter evidence synthesis in epidemiology and medical decision-making

A. E. Ades; Nicky J. Welton; Deborah M. Caldwell; Malcolm J. Price; Aicha Goubar; Guobing Lu

Meta-analysis has been well-established for many years, but has been largely confined to pooling evidence on pair-wise contrasts. Broader forms of synthesis have also been described, apparently re-invented in disparate fields, each time taking different computational approaches. The potential value of Bayesian estimation of a joint posterior parameter distribution and simultaneously sampling from it for decision analysis has also been appreciated. However, applications have been relatively few in number, sometimes stylized, and presented mainly to a statistical methods audience. As a result, the potential for multiparameter evidence synthesis in both epidemiology and health technology assessment has remained largely unrecognized. The advent of flexible software for Bayesian Markov chain Monte Carlo in the shape of WinBUGS has made these earlier strands of work more widely available. Researchers can now carry out synthesis at a realistic level of complexity. The Bristol programme has not only contributed to a growing body of literature on how to synthesize different evidence structures, but also on how to check the consistency of multiple information sources and how to use the resulting models to prioritize future research.


Annals of Statistics | 2004

Missing at random, likelihood ignorability and model completeness

Guobing Lu; John Copas

This paper provides further insight into the key concept of missing at random (MAR) in incomplete data analysis. Following the usual selection modelling approach we envisage two models with separable parameters: a model for the response of interest and a model for the missing data mechanism (MDM). If the response model is given by a complete density family, then frequentist inference from the likelihood function ignoring the MDM is valid if and only if the MDM is MAR. This necessary and sufficient condition also holds more generally for models for coarse data, such as censoring. Examples are given to show the necessity of the completeness of the underlying model for this equivalence to hold.
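The practical force of the result above is that when missingness depends only on observed quantities, the likelihood that ignores the missing-data mechanism still targets the right parameter, whereas when it depends on the unobserved response it does not. A toy simulation contrasting the two cases (our own illustration, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(42)
n, mu = 200_000, 1.0
x = rng.normal(0.0, 1.0, n)                  # always-observed covariate
y = mu + x + rng.normal(0.0, 1.0, n)         # response of interest

def observed_mean_residual(miss_prob):
    """Mean of y - x among observed cases; an unbiased estimate of mu
    only if the missingness probability does not depend on y itself."""
    keep = rng.random(n) > miss_prob
    return (y[keep] - x[keep]).mean()

# MAR: missingness depends only on the observed covariate x
mar_est = observed_mean_residual(1 / (1 + np.exp(-x)))

# MNAR: missingness depends on the unobserved response y
mnar_est = observed_mean_residual(1 / (1 + np.exp(-y)))
```

Under MAR the mechanism-ignoring estimate recovers mu; under the MNAR mechanism the same estimator is badly biased, because high responses are preferentially missing.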

Collaboration


Dive into Guobing Lu's collaboration.

Top Co-Authors


A. E. Ades

University of Bristol
