
Publication


Featured research published by Georgina Imberger.


PLOS ONE | 2011

The Number of Patients and Events Required to Limit the Risk of Overestimation of Intervention Effects in Meta-Analysis—A Simulation Study

Kristian Thorlund; Georgina Imberger; Michael Walsh; Rong Chu; Christian Gluud; Jørn Wetterslev; Gordon H. Guyatt; Philip J. Devereaux; Lehana Thabane

Background: Meta-analyses including a limited number of patients and events are prone to yield overestimated intervention effect estimates. While many assume bias is the cause of overestimation, theoretical considerations suggest that random error may be an equal or more frequent cause. The independent impact of random error on meta-analyzed intervention effects has not previously been explored. It has been suggested that surpassing the optimal information size (i.e., the required meta-analysis sample size) provides sufficient protection against overestimation due to random error, but this claim has not yet been validated.

Methods: We simulated a comprehensive array of meta-analysis scenarios where no intervention effect existed (i.e., relative risk reduction (RRR) = 0%) or where a small but possibly unimportant effect existed (RRR = 10%). We constructed different scenarios by varying the control group risk, the degree of heterogeneity, and the distribution of trial sample sizes. For each scenario, we calculated the probability of observing overestimates of RRR > 20% and RRR > 30% for each cumulative 500 patients and 50 events. We calculated the cumulative number of patients and events required to reduce the probability of overestimation of intervention effect to 10%, 5%, and 1%. We calculated the optimal information size for each of the simulated scenarios and explored whether meta-analyses that surpassed their optimal information size had sufficient protection against overestimation of intervention effects due to random error.

Results: The risk of overestimation of intervention effects was usually high when the number of patients and events was small, and this risk decreased exponentially as the number of patients and events increased. The number of patients and events required to limit the risk of overestimation depended considerably on the underlying simulation settings. Surpassing the optimal information size generally provided sufficient protection against overestimation.

Conclusions: Random errors are a frequent cause of overestimation of intervention effects in meta-analyses. Surpassing the optimal information size will provide sufficient protection against overestimation.
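The optimal information size discussed in this abstract is, in essence, a conventional two-group sample-size calculation applied to the pooled meta-analysis. As a rough illustration only (the function name and defaults are mine, not the authors'), it can be sketched for a dichotomous outcome like this:

```python
from statistics import NormalDist

def required_information_size(pc, rrr, alpha=0.05, beta=0.20):
    """Approximate required information size (total patients) for a
    dichotomous outcome, treating the meta-analysis like a single
    two-group trial. pc: control event proportion; rrr: anticipated
    relative risk reduction. Illustrative sketch, not the paper's code."""
    pe = pc * (1 - rrr)                      # anticipated experimental-group risk
    z_a = NormalDist().inv_cdf(1 - alpha / 2)
    z_b = NormalDist().inv_cdf(1 - beta)
    p_bar = (pc + pe) / 2                    # average event proportion
    n_per_group = (z_a + z_b) ** 2 * 2 * p_bar * (1 - p_bar) / (pc - pe) ** 2
    return 2 * n_per_group                   # total across both groups

# e.g. control risk 10%, RRR 20%, 5% type 1 error, 80% power
print(round(required_information_size(0.10, 0.20)))  # → 6428
```

Tightening either error level inflates the requirement, which is why small early meta-analyses fall so far short of it.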


PLOS ONE | 2012

Evolution of Heterogeneity (I²) Estimates and Their 95% Confidence Intervals in Large Meta-Analyses

Kristian Thorlund; Georgina Imberger; Bradley C. Johnston; Michael Walsh; Tahany Awad; Lehana Thabane; Christian Gluud; P. J. Devereaux; Jørn Wetterslev

Background: Assessment of heterogeneity is essential in systematic reviews and meta-analyses of clinical trials. The most commonly used heterogeneity measure, I², provides an estimate of the proportion of variability in a meta-analysis that is explained by differences between the included trials rather than by sampling error. Recent studies have raised concerns about the reliability of I² estimates, due to their dependence on the precision of included trials and time-dependent biases. Authors have also advocated use of 95% confidence intervals (CIs) to express the uncertainty associated with I² estimates. However, no previous studies have explored how many trials and events are required to ensure stable and reliable I² estimates, or how 95% CIs perform as evidence accumulates.

Methodology/Principal Findings: To assess the stability and reliability of I² estimates and their 95% CIs, in relation to the cumulative number of trials and events in meta-analysis, we examined 16 large Cochrane meta-analyses, each including a sufficient number of trials and events to reliably estimate I², and monitored the I² estimates and their 95% CIs for each year of publication. In 10 of the 16 meta-analyses, the I² estimates fluctuated more than 40% over time. The median numbers of events and trials required before the cumulative I² estimates stayed within ±20% of the final I² estimate were 467 and 11, respectively. No major fluctuations were observed after 500 events and 14 trials. The 95% confidence intervals provided good coverage over time.

Conclusions/Significance: I² estimates need to be interpreted with caution when the meta-analysis only includes a limited number of events or trials. Confidence intervals for I² estimates provide good coverage as evidence accumulates, and are thus valuable for reflecting the uncertainty associated with estimating I².
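For reference, the I² statistic examined in this study is derived from Cochran's Q. A minimal sketch of the standard definition (the example values are illustrative, not taken from the 16 reviews):

```python
def i_squared(q, df):
    """I² (percent) from Cochran's Q statistic and its degrees of freedom
    (number of trials minus 1). Estimates the share of total variability
    attributable to between-trial differences rather than sampling error;
    values where Q does not exceed df are truncated to 0."""
    if q <= df:
        return 0.0
    return 100.0 * (q - df) / q

# e.g. Q = 78.6 over 23 trials (df = 22) gives I² ≈ 72%
print(round(i_squared(78.6, 22), 1))  # → 72.0
```

Because Q is itself noisy when trials are few, I² inherits that instability, which is the fluctuation the study measures.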


Anesthesia & Analgesia | 2015

Systematic Reviews of Anesthesiologic Interventions Reported as Statistically Significant: Problems with Power, Precision, and Type 1 Error Protection.

Georgina Imberger; Christian Gluud; John F. Boylan; Jørn Wetterslev

BACKGROUND: The GRADE Working Group assessment of the quality of evidence is being used increasingly to inform clinical decisions and guidelines. The assessment involves explicit consideration of all sources of uncertainty. One of these sources is imprecision or random error. Many published meta-analyses are underpowered and likely to be updated in the future. When data are sparse and there are repeated updates, the risk of random error is increased. Trial Sequential Analysis (TSA) is one of several methodologies that estimates this increased risk (and decreased precision) in meta-analyses. With nominally statistically significant meta-analyses of anesthesiologic interventions, we used TSA to estimate power and imprecision in the context of sparse data and repeated updates.

METHODS: We conducted a search to identify all systematic reviews with meta-analyses that investigated an intervention that may be implemented by an anesthesiologist during the perioperative period. We randomly selected 50 meta-analyses that reported a statistically significant dichotomous outcome in their abstract. We applied TSA to these meta-analyses by using 2 main TSA approaches: relative risk reduction 20% and relative risk reduction consistent with the conventional 95% confidence limit closest to null. We calculated the power achieved by each included meta-analysis, by using each TSA approach, and we calculated the proportion that maintained statistical significance when allowing for sparse data and repeated updates.

RESULTS: From 11,870 titles, we found 682 systematic reviews that investigated anesthesiologic interventions. In the 50 sampled meta-analyses, the median number of trials included was 8 (interquartile range [IQR], 5–14), the median number of participants was 964 (IQR, 523–1736), and the median number of participants with the outcome was 202 (IQR, 96–443). By using both of our main TSA approaches, only 12% (95% CI, 5%–25%) of the meta-analyses had power ≥80%, and only 32% (95% CI, 20%–47%) of the meta-analyses preserved the risk of type 1 error <5%.

CONCLUSIONS: Most nominally statistically significant meta-analyses of anesthesiologic interventions are underpowered, and many do not maintain their risk of type 1 error <5% if TSA monitoring boundaries are applied. Consideration of the effect of sparse data and repeated updates is needed when assessing the imprecision of meta-analyses of anesthesiologic interventions.


BMJ Open | 2016

False-positive findings in Cochrane meta-analyses with and without application of trial sequential analysis: an empirical review

Georgina Imberger; Kristian Thorlund; Christian Gluud; Jørn Wetterslev

Objective: Many published meta-analyses are underpowered. We explored the role of trial sequential analysis (TSA) in assessing the reliability of conclusions in underpowered meta-analyses.

Methods: We screened The Cochrane Database of Systematic Reviews and selected 100 meta-analyses with a binary outcome, a negative result and sufficient power. We defined a negative result as one where the 95% CI for the effect included 1.00, a positive result as one where the 95% CI did not include 1.00, and sufficient power as the required information size for 80% power, 5% type 1 error, relative risk reduction of 10% or number needed to treat of 100, and control event proportion and heterogeneity taken from the included studies. We re-conducted the meta-analyses, using conventional cumulative techniques, to measure how many false positives would have occurred if these meta-analyses had been updated after each new trial. For each false positive, we performed TSA using three different approaches.

Results: We screened 4736 systematic reviews to find 100 meta-analyses that fulfilled our inclusion criteria. Using conventional cumulative meta-analysis, false positives were present in seven of the meta-analyses (7%, 95% CI 3% to 14%), occurring more than once in three of them. The total number of false positives was 14, and TSA prevented 13 of these (93%, 95% CI 68% to 98%). In a post hoc analysis, we found that Cochrane meta-analyses with negative results are 1.67 times more likely to be updated (95% CI 0.92 to 2.68) than those with positive results.

Conclusions: We found false positives in 7% (95% CI 3% to 14%) of the included meta-analyses. Owing to limitations of external validity and to the decreased likelihood of updating positive meta-analyses, the true proportion of false positives in meta-analysis is probably higher. TSA prevented 93% of the false positives (95% CI 68% to 98%).
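The replay procedure described in the methods, re-running the pooled analysis after each new trial and checking whether the 95% CI excludes 1.00, can be sketched with a simple fixed-effect inverse-variance model on the log relative-risk scale. This is an illustration of the replay idea only, not the authors' code (which worked from the trials' actual event counts):

```python
import math

def cumulative_significance_hits(trials, null_rr=1.0, z=1.96):
    """Replay a cumulative fixed-effect meta-analysis and return the
    update numbers at which the 95% CI excluded the null.
    trials: list of (log relative risk, standard error) per trial,
    in publication order."""
    hits = []
    sum_w = sum_wy = 0.0
    for i, (log_rr, se) in enumerate(trials, start=1):
        w = 1.0 / se ** 2                 # inverse-variance weight
        sum_w += w
        sum_wy += w * log_rr
        pooled = sum_wy / sum_w           # pooled log RR after i trials
        half_width = z / math.sqrt(sum_w)
        lo, hi = pooled - half_width, pooled + half_width
        if not (lo <= math.log(null_rr) <= hi):
            hits.append(i)                # nominally significant here
    return hits
```

For example, `cumulative_significance_hits([(-0.7, 0.3), (-0.1, 0.3), (0.0, 0.3)])` returns `[1]`: nominally significant after the first trial, but no longer so once later trials accrue, which is the kind of transient false positive the review counts.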


PLOS ONE | 2011

Statistical Multiplicity in Systematic Reviews of Anaesthesia Interventions: A Quantification and Comparison between Cochrane and Non-Cochrane Reviews

Georgina Imberger; Alexandra Damgaard Vejlby; Sara Bohnstedt Hansen; Ann Merete Møller; Jørn Wetterslev

Background: Systematic reviews with meta-analyses often contain many statistical tests. This multiplicity may increase the risk of type I error. Few attempts have been made to address the problem of statistical multiplicity in systematic reviews. Before the implications are properly considered, the size of the issue deserves clarification. Because of the emphasis on bias evaluation and because of the editorial processes involved, Cochrane reviews may contain more multiplicity than their non-Cochrane counterparts. This study measured the quantity of statistical multiplicity present in a population of systematic reviews and aimed to assess whether this quantity is different in Cochrane and non-Cochrane reviews.

Methods/Principal Findings: We selected all the systematic reviews published by the Cochrane Anaesthesia Review Group containing a meta-analysis and matched them with comparable non-Cochrane reviews. We counted the number of statistical tests done in each systematic review. The median number of tests overall was 10 (interquartile range (IQR) 6 to 18). The median was 12 in Cochrane and 8 in non-Cochrane reviews (difference in medians 4, 95% confidence interval (CI) 2.0 to 19.0). The proportion that used an assessment of risk of bias as a reason for doing extra analyses was 42% in Cochrane and 28% in non-Cochrane reviews (difference in proportions 14%, 95% CI −8% to 36%). The issue of multiplicity was addressed in 6% of all the reviews.

Conclusion/Significance: Statistical multiplicity in systematic reviews requires attention. We found more multiplicity in Cochrane reviews than in non-Cochrane reviews. Many of the reasons for the increase in multiplicity may well represent improved methodological approaches and greater transparency, but multiplicity may also cause an increased risk of spurious conclusions. Few systematic reviews, whether Cochrane or non-Cochrane, address the issue of multiplicity.
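The multiplicity concern above can be quantified: for m tests each run at significance level α, the chance of at least one false positive grows quickly. A back-of-the-envelope sketch, with the caveat that independence is an idealising assumption (tests within one review are typically correlated, so this is an upper bound):

```python
def familywise_error(alpha, m):
    """Probability of at least one type I error among m independent
    tests, each performed at per-test level alpha: 1 - (1 - alpha)**m."""
    return 1 - (1 - alpha) ** m

# With the median of 10 tests per review found above, at alpha = 0.05:
print(round(familywise_error(0.05, 10), 3))  # → 0.401
```

Even at the non-Cochrane median of 8 tests, the familywise rate is roughly a third, far above the nominal 5%.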


BJA: British Journal of Anaesthesia | 2014

Does anaesthesia with nitrous oxide affect mortality or cardiovascular morbidity? A systematic review with meta-analysis and trial sequential analysis

Georgina Imberger; A. Orr; Kristian Thorlund; Jørn Wetterslev; Paul S. Myles; Ann Merete Møller

BACKGROUND: The role of nitrous oxide in modern anaesthetic practice is contentious. One concern is that exposure to nitrous oxide may increase the risk of cardiovascular complications. ENIGMA II is a large randomized clinical trial currently underway which is investigating nitrous oxide and cardiovascular complications. Before the completion of this trial, we performed a systematic review and meta-analysis, using Cochrane methodology, on the outcomes that make up its composite primary outcome.

METHODS: We used conventional meta-analysis and trial sequential analysis (TSA). We reviewed 8282 abstracts and selected 138 that fulfilled our criteria for study type, population, and intervention. We attempted to contact the authors of all the selected publications to check for unpublished outcome data.

RESULTS: Thirteen trials had data eligible for our outcomes. We assessed three of these trials as having a low risk of bias. Using conventional meta-analysis, the relative risk of short-term mortality in the nitrous oxide group was 1.38 [95% confidence interval (CI) 0.22–8.71] and the relative risk of long-term mortality in the nitrous oxide group was 0.94 (95% CI 0.80–1.10). In both cases, TSA demonstrated that the data were far too sparse to make any conclusions. There were insufficient data to perform meta-analysis for stroke, myocardial infarct, pulmonary embolus, or cardiac arrest.

CONCLUSION: This systematic review demonstrated that we currently do not have robust evidence for how nitrous oxide, used as part of general anaesthesia, affects mortality and cardiovascular complications.


British Journal of Obstetrics and Gynaecology | 2011

The information gap between the required and the actual accrued information size in the meta-analysis of antenatal magnesium sulphate to prevent cerebral palsy in preterm infants.

Jørn Wetterslev; Georgina Imberger

Sir, We would like to compliment Huusom et al. for their conclusion that we have not yet obtained firm evidence for a beneficial effect of antenatal magnesium sulphate on cerebral palsy in preterm infants. Additionally, the authors tried to evaluate the information gap to detect an intervention effect of 25% relative risk reduction (RRR) in different scenarios of the overall type 1 and 2 errors. They stated that the estimated number of additional participants required in a randomised clinical trial to cross the monitoring boundary and obtain firm evidence approximates to 400 (type 1 error 5%) or 4000 (type 1 error 1%). To conduct their trial sequential analysis (TSA), the authors calculated the required information size using an RRR of 25% and a control event proportion of 5%. To estimate the information gap, they then predicted that this anticipated intervention effect would show up in future trials. The scientific philosophy speaking here seems to be: this intervention works, its effect is a 25% RRR and we just need 400 (or 4000) more participants to show it definitively. We suggest that this use of the anticipated intervention effect—to calculate the number of participants needed to break the boundary for benefit—is not a helpful approach. We emphasise rather that the important finding of the TSA is that we do not yet know whether this intervention works or by how much. The TSA challenges the anticipation of a 25% RRR by monitoring cumulative results of the meta-analyses when new trials are added. In this case, the meta-analytic point estimate to date is a 30% (10–46%) RRR and a 25% RRR may be a realistic anticipation to challenge. However, an expectation that every future trial will have a point estimate equal to the anticipated intervention effect disregards the random variation to which all trial results are subject. 
A randomised clinical trial is the most audacious challenge of a strictly formulated hypothesis, and it does include a quantified estimate of the effect of the intervention. The integrity of this trial concept should be respected. In a single trial, 5676 participants would be needed to challenge a 30% RRR. If the true RRR is lower, then the number of participants needed would be even higher. Clearly, Huusom et al. are suggesting that evidence from the new trial will not stand alone; it will be appended to the existing evidence in a cumulative meta-analysis and therefore does not need to be as large. We suggest a solution that both incorporates an assessment of the accumulating evidence and preserves the integrity of the single trial. A new trial should be designed to challenge the hypothesis in its own right, and then interim analyses should be made. These interim analyses should include a cumulative meta-analysis of all previous trials, allowing for early stopping of the trial if the effect is substantially higher than anticipated or if the total evidence, including the interim group of participants, persuasively points to a clinically relevant effect beyond a reasonable risk of random error in a TSA.


Clinical Trials | 2010

Comments on ‘Sequential meta-analysis: an efficient decision-making tool’ by I van der Tweel and C Bollen

Kristian Thorlund; Georgina Imberger; Jørn Wetterslev; Jesper Brok; Christian Gluud

In a recent paper published in Clinical Trials, van der Tweel and Bollen [1] compared trial sequential analysis (TSA, alpha-spending monitoring boundaries applied to meta-analysis) with sequential meta-analysis (SMA, Whitehead's triangular boundaries applied to meta-analysis). Repeated updates in meta-analyses increase the risk of type 1 error and may lead to spurious conclusions, and we welcome any discussion about potential methodological techniques that may alleviate this increased risk. In the spirit of that discussion, we point out that several of the comments made by van der Tweel and Bollen, regarding the comparison between TSA and SMA, are incorrect.

van der Tweel and Bollen [1] claimed that SMA facilitates futility testing and TSA does not. This claim is incorrect. Using an alpha-spending function, TSA produces thresholds for statistical significance (controlling overall type 1 error) [2,3]. Using a beta-spending function, TSA produces futility boundaries (controlling overall type 2 error) [4]. van der Tweel and Bollen compared the results of Whitehead's triangular boundaries with triangular boundaries corresponding to O'Brien–Fleming boundaries, finding identical results in the futile meta-analyses. If they had employed the O'Brien–Fleming beta-spending function in TSA, they would also have found identical results and appreciated that TSA and SMA may be equally efficient in futility testing.

van der Tweel and Bollen [1] wrote that the triangular boundaries '... enables the investigator to preserve the overall type 1 error'. However, in one of the papers they cited to support this claim, it was shown that the triangular test does not preserve the type 1 error well when the meta-analysis incurs a moderate or substantial degree of heterogeneity [5]. Given that heterogeneity is often at least moderate, the ability of the triangular test to regularly preserve overall type 1 error in real-life meta-analyses seems questionable.
van der Tweel and Bollen commented that it is difficult to accurately estimate heterogeneity when a meta-analysis includes few trials. We agree with this assertion. Moreover, we stress that this inaccurate estimation of heterogeneity may affect the reliability of SMA. In meta-analyses including few trials, heterogeneity estimates may be underestimated due to lack of power to detect heterogeneity, bias, or inaccuracies associated with the employed heterogeneity estimator [6–8]. When this happens, the cumulative statistical information (used for SMA) is overestimated, corresponding to an underestimation of the required information size. In cases of cumulative 'extreme' overestimation of statistical information, an SMA may prematurely or spuriously cross a significance or futility boundary. Conversely, TSA is equipped with the option to account a priori for different anticipated degrees of heterogeneity. Thus, the required information size (the total number of patients) is not impacted by early inaccuracies in the heterogeneity estimation.

We certainly value the role of sequential meta-analysis and we appreciate the attention given to this issue. We wish only to clarify some of the points raised by van der Tweel and Bollen and further the discussion. Both sequential meta-analysis and trial sequential analysis have advantages and limitations. A clear appreciation of these differences can only further our progress in this area, and help us achieve our goal to accurately communicate the strength of evidence in systematic reviews with meta-analyses.


American Journal of Case Reports | 2017

Prolonged Unexplained Hypoxemia as Initial Presentation of Cirrhosis: A Case Report

Anand Puttappa; Kumaraswamy Sheshadri; Aurelie Fabre; Georgina Imberger; John F. Boylan; Silke Ryan; Masood Iqbal; N. Conlon

Patient: Male, 43
Final Diagnosis: Hepatopulmonary syndrome
Symptoms: Dyspnea
Medication: —
Clinical Procedure: —
Specialty: Gastroenterology and Hepatology
Objective: Unusual clinical course

Background: Hepatopulmonary syndrome (HPS) is a pulmonary complication of advanced liver disease with dyspnea as the predominant presenting symptom. The diagnosis of HPS can often be missed due to its nonspecific presentation and the presence of other comorbidities.

Case Report: We present an interesting case of an obese 43-year-old man who presented with progressive, unexplained hypoxemia and shortness of breath in the absence of any symptoms or signs of chronic liver disease. After extensive cardiopulmonary investigations, he was diagnosed with severe HPS as a result of non-alcoholic steatohepatitis (NASH) leading to cirrhosis. He subsequently underwent successful hepatic transplantation and continues to improve at 12-month follow-up.

Conclusions: HPS needs to be considered in the differential diagnosis of unexplained hypoxemia. Given its poor prognosis, early diagnosis is warranted and treatment with liver transplantation is the preferred choice.


Statistics in Medicine | 2011

Comments on ‘Sequential methods for random-effects meta-analysis’ by J. P. Higgins, A. Whitehead and M. Simmonds, Statistics in Medicine 2010; DOI: 10.1002/sim.4088

Georgina Imberger; Christian Gluud; Jørn Wetterslev

We wish to commend Higgins and colleagues on their recent article 'Sequential methods for random-effects meta-analysis' [1]. Repeated updates of a meta-analysis are obviously mandatory if the information is to be kept up-to-date. As an adverse effect of these updates, repeated analyses increase the risk of type 1 error and can lead to inaccurate communication of uncertainty in conclusions [2, 3]. This increased risk has been ignored by many until now, and the current version of The Cochrane Handbook does not discuss sequential multiplicity directly [4]. We hope that Higgins and colleagues, with their current publication, will do much to amend this omission.

We agree entirely with Higgins and colleagues that there must be an emphasis on good empirical properties and that the approach must be relatively straightforward. At The Copenhagen Trial Unit, we have been using Trial Sequential Analysis (TSA) to conduct sequential analyses with the aim of adjusting for sparse data and sequential multiplicity [5, 6]. TSA uses the O'Brien–Fleming boundaries to monitor significance (and futility) as trials are added to a cumulative meta-analysis. A prediction has to be made about the proportion in the control group with the outcome in question, the anticipated intervention effect size in the experimental group, the type 1 error, the type 2 error, and the expected ultimate heterogeneity. Based on this information, the required information size and the trial sequential monitoring boundaries are calculated [5, 6].

Like Higgins and colleagues, we consider the prediction of heterogeneity a major challenge. Our approach has been to consider different realistic and relevant possibilities a priori and to explore the impact of different values of heterogeneity on the inferential results. As such, uncertainties in priors can be considered and discussed in terms of the uncertainties they cause in conclusions. Similar explorative analysis can be done by varying other variables, most notably the control group event proportion and the anticipated effect size.

In an explorative spirit, we performed TSA on the bleeding peptic ulcer meta-analysis used by Higgins and colleagues [1]. We wondered what set of 'prior predictions' in TSA would correspond to the inverse gamma (IG) prior distributions. Given the size of the statistical heterogeneity in the full meta-analysis of 23 trials (I² = 72 per cent), we decided to focus our comparison on the 'approximate semi-Bayes IG (1.5, 1) sequential analysis', for which significance was declared after 15 of the 23 trials. Using a type 1 error of 0.05, a type 2 error of 0.20, and using the included trials to estimate the heterogeneity and the control event proportion, we found, using TSA, that the meta-analysis crossed the significance boundary after the 15th trial when we challenged a relative odds ratio reduction of 25 per cent. The odds ratio at stopping was 0.38 with a sequentially adjusted 95 per cent confidence interval of 0.16–0.92. The heterogeneity-adjusted required information size was 2553 (Figure 1).

We would very much like to hear Higgins and colleagues' impression of this comparison. Is there a measure of information size incorporated in the assumptions for the approximate semi-Bayes sequential meta-analysis? For the approximate semi-Bayes technique, can the parameters of the prior be thought of in terms of any clinical parameters, such as the anticipated effect size or the control group event proportion?
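The 'heterogeneity-adjusted required information size' cited in this letter inflates the conventional information size to allow for between-trial variation. TSA's own correction is based on diversity (D²); a simpler, commonly cited inflation based on I² can be sketched as follows (the function and example numbers are illustrative, not reproduced from the letter):

```python
def adjusted_information_size(base_is, i2):
    """Inflate a required information size (computed assuming homogeneity)
    for anticipated heterogeneity, using the 1 / (1 - I²) correction.
    i2 is given as a fraction, e.g. 0.50 for I² = 50%. TSA software also
    offers a diversity (D²) based correction, which inflates further."""
    if not 0 <= i2 < 1:
        raise ValueError("I² must be in [0, 1)")
    return base_is / (1 - i2)

# A base requirement of 1000 patients doubles under I² = 50%:
print(adjusted_information_size(1000, 0.50))  # → 2000.0
```

The inflation grows steeply as I² approaches 1, which is why the anticipated degree of heterogeneity matters so much when the boundaries are planned a priori.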

Collaboration


Dive into Georgina Imberger's collaborations.

Top Co-Authors

Jørn Wetterslev | Copenhagen University Hospital

Christian Gluud | Copenhagen University Hospital

Kristian Thorlund | Copenhagen University Hospital

Jesper Brok | Copenhagen University Hospital

Lehana Thabane | St. Joseph's Healthcare Hamilton