
Publication


Featured research published by Dena M. Bravata.


The Lancet | 2009

Coronary artery bypass surgery compared with percutaneous coronary interventions for multivessel disease: a collaborative analysis of individual patient data from ten randomised trials

Mark A. Hlatky; Derek B. Boothroyd; Dena M. Bravata; Eric Boersma; Jean Booth; Maria Mori Brooks; Didier Carrié; Tim Clayton; Nicolas Danchin; Marcus Flather; Christian W. Hamm; Whady Hueb; Jan Kähler; Sheryl F. Kelsey; Spencer B. King; Andrzej S. Kosinski; Neuza Lopes; Kathryn M McDonald; Alfredo E. Rodriguez; Patrick W. Serruys; Ulrich Sigwart; Rodney H. Stables; Douglas K Owens; Stuart J. Pocock

BACKGROUND: Coronary artery bypass graft (CABG) and percutaneous coronary intervention (PCI) are alternative treatments for multivessel coronary disease. Although the procedures have been compared in several randomised trials, their long-term effects on mortality in key clinical subgroups are uncertain. We undertook a collaborative analysis of data from randomised trials to assess whether the effects of the procedures on mortality are modified by patient characteristics.

METHODS: We pooled individual patient data from ten randomised trials to compare the effectiveness of CABG with PCI according to patients' baseline clinical characteristics. We used stratified, random-effects Cox proportional hazards models to test the effect on all-cause mortality of randomised treatment assignment and its interaction with clinical characteristics. All analyses were by intention to treat.

FINDINGS: Ten participating trials provided data on 7812 patients. PCI was done with balloon angioplasty in six trials and with bare-metal stents in four trials. Over a median follow-up of 5.9 years (IQR 5.0-10.0), 575 (15%) of 3889 patients assigned to CABG died compared with 628 (16%) of 3923 patients assigned to PCI (hazard ratio [HR] 0.91, 95% CI 0.82-1.02; p=0.12). In patients with diabetes (CABG, n=615; PCI, n=618), mortality was substantially lower in the CABG group than in the PCI group (HR 0.70, 0.56-0.87); however, mortality was similar between groups in patients without diabetes (HR 0.98, 0.86-1.12; p=0.014 for interaction). Patient age modified the effect of treatment on mortality, with hazard ratios of 1.25 (0.94-1.66) in patients younger than 55 years, 0.90 (0.75-1.09) in patients aged 55-64 years, and 0.82 (0.70-0.97) in patients 65 years and older (p=0.002 for interaction). Treatment effect was not modified by the number of diseased vessels or other baseline characteristics.

INTERPRETATION: Long-term mortality is similar after CABG and PCI in most patient subgroups with multivessel coronary artery disease, so choice of treatment should depend on patient preferences for other outcomes. CABG might be a better option for patients with diabetes and patients aged 65 years or older because we found mortality to be lower in these subgroups.
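As an illustrative aside (an editorial note, not text from the trial report), the subgroup hazard ratios above can be read off a treatment-by-subgroup interaction term in a stratified Cox model of the kind described, for example

h_s(t \mid x) = h_{0,s}(t)\,\exp\big(\beta_1\,\mathrm{CABG} + \beta_2\,\mathrm{DM} + \beta_3\,(\mathrm{CABG}\times\mathrm{DM})\big),

where s indexes trial strata, CABG = 1 for assignment to surgery, and DM = 1 for diabetes. The CABG-versus-PCI hazard ratio is then \exp(\beta_1) in patients without diabetes and \exp(\beta_1+\beta_3) in patients with diabetes; with the reported estimates, \exp(\beta_1) \approx 0.98 and \exp(\beta_1+\beta_3) \approx 0.70, so \beta_3 \approx \ln(0.70/0.98) \approx -0.34, and the interaction p value (0.014) is a test of \beta_3 = 0.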


JAMA | 2010

Diagnosing and Managing Common Food Allergies: A Systematic Review

Jennifer Schneider Chafen; Sydne Newberry; Marc Riedl; Dena M. Bravata; Margaret Maglione; Marika J Suttorp; Vandana Sundaram; Neil M. Paige; Ali Towfigh; Benjamin J. Hulley; Paul G. Shekelle

CONTEXT: There is heightened interest in food allergies but no clear consensus exists regarding the prevalence or most effective diagnostic and management approaches to food allergies.

OBJECTIVE: To perform a systematic review of the available evidence on the prevalence, diagnosis, management, and prevention of food allergies.

DATA SOURCES: Electronic searches of PubMed, Cochrane Database of Systematic Reviews, Cochrane Database of Abstracts of Reviews of Effects, and Cochrane Central Register of Controlled Trials. Searches were limited to English-language articles indexed between January 1988 and September 2009.

STUDY SELECTION: Diagnostic tests were included if they had a prospective, defined study population, used food challenge as a criterion standard, and reported sufficient data to calculate sensitivity and specificity. Systematic reviews and randomized controlled trials (RCTs) for management and prevention outcomes were also used. For foods where anaphylaxis is common, cohort studies with a sample size of more than 100 participants were included.

DATA EXTRACTION: Two investigators independently reviewed all titles and abstracts to identify potentially relevant articles and resolved discrepancies by repeated review and discussion. Quality of systematic reviews and meta-analyses was assessed using the AMSTAR criteria, the quality of diagnostic studies using the QUADAS criteria most relevant to food allergy, and the quality of RCTs using the Jadad criteria.

DATA SYNTHESIS: A total of 12,378 citations were identified and 72 citations were included. Food allergy affects more than 1% to 2% but less than 10% of the population. It is unclear if the prevalence of food allergies is increasing. Summary receiver operating characteristic curves comparing skin prick tests (area under the curve [AUC], 0.87; 95% confidence interval [CI], 0.81-0.93) and serum food-specific IgE (AUC, 0.84; 95% CI, 0.78-0.91) to food challenge showed no statistical superiority for either test. Elimination diets are the mainstay of therapy but have been rarely studied. Immunotherapy is promising but data are insufficient to recommend use. In high-risk infants, hydrolyzed formulas may prevent cow's milk allergy but standardized definitions of high risk and hydrolyzed formula do not exist.

CONCLUSION: The evidence for the prevalence and management of food allergy is greatly limited by a lack of uniformity for criteria for making a diagnosis.
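As a brief editorial sketch (hypothetical counts, not data from the review), the sensitivity and specificity that qualifying diagnostic studies had to support can be computed directly from a 2x2 table against the food-challenge criterion standard:

import math

# Hypothetical skin-prick-test results cross-tabulated against oral food challenge.
tp, fp, fn, tn = 42, 18, 8, 132

sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)

def wald_ci(p, n, z=1.96):
    """Simple Wald 95% confidence interval for a proportion."""
    se = math.sqrt(p * (1 - p) / n)
    return max(0.0, p - z * se), min(1.0, p + z * se)

print(f"sensitivity = {sensitivity:.2f}, 95% CI {wald_ci(sensitivity, tp + fn)}")
print(f"specificity = {specificity:.2f}, 95% CI {wald_ci(specificity, tn + fp)}")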


Annals of Internal Medicine | 2007

Systematic Review: The Comparative Effectiveness of Percutaneous Coronary Interventions and Coronary Artery Bypass Graft Surgery

Dena M. Bravata; Allison Gienger; Kathryn M McDonald; Vandana Sundaram; Marco V Perez; Robin Varghese; John R Kapoor; Reza Ardehali; Douglas K Owens; Mark A. Hlatky

Context The relative benefits and harms of coronary artery bypass graft surgery (CABG) versus percutaneous coronary intervention (PCI) are sometimes unclear. Contribution This systematic review of 23 randomized trials found that survival at 10 years was similar for CABG and PCI, even among diabetic patients. Procedural strokes and angina relief were more common after CABG (risk difference, 0.6% and about 5% to 8%, respectively), whereas repeated revascularization procedures were more common after PCI (risk difference, 24% at 1 year). Caution Only 1 small trial used drug-eluting stents. Few patients with extensive coronary disease or poor ventricular function were enrolled. The Editors Coronary artery bypass graft (CABG) surgery and catheter-based percutaneous coronary intervention (PCI), with or without coronary stents, are alternative approaches to mechanical coronary revascularization. These 2 coronary revascularization techniques are among the most common major medical procedures performed in North America and Europe: In 2005, 261000 CABG procedures and 645000 PCI procedures were performed in the United States alone (1). However, the comparative effectiveness of CABG and PCI remains poorly understood for patients in whom both procedures are technically feasible and coronary revascularization is clinically indicated. In patients with left main or triple-vessel coronary artery disease with reduced left ventricular function, CABG is generally preferred because randomized, controlled trials (RCTs) have shown that it improves survival compared with medical therapy (2, 3). In patients with most forms of single-vessel disease, PCI is generally the preferred form of coronary revascularization (4), in light of its lower clinical risk and the evidence that PCI reduces angina and myocardial ischemia in this subset of patients (5). Most RCTs comparing CABG and PCI have been conducted in populations with coronary artery disease between these extremes, namely patients with single-vessel, proximal left anterior descending disease; most forms of double-vessel disease; or less extensive forms of triple-vessel disease. We sought to evaluate the evidence from RCTs on the comparative effectiveness of PCI and CABG. We included trials using balloon angioplasty or coronary stents because quantitative reviews have shown no differences in mortality or myocardial infarction between these PCI techniques (6, 7). We also included trials using standard or minimally invasive CABG or both procedures (8, 9). We sought to document differences between PCI and CABG in survival, cardiovascular complications (such as stroke and myocardial infarction), and freedom from angina. Finally, we reviewed selected observational studies to assess the generalizability of the RCTs. Methods Data Sources We searched the MEDLINE, EMBASE, and Cochrane databases for studies published between January 1966 and August 2006 by using such terms as angioplasty, coronary, and coronary artery bypass surgery, as reported in detail elsewhere (10). We also sought additional studies by reviewing the reference lists of included articles, conference abstracts, and the bibliographies of expert advisors. We did not limit the searches to the English language. Study Selection We sought RCTs that compared health outcomes of PCI and CABG. We excluded trials that compared PCI alone or CABG alone with medical therapy, those that compared 2 forms of PCI, and those that compared 2 forms of CABG. 
The outcomes of interest were survival, myocardial infarction, stroke, angina, and use of additional revascularization procedures. Two investigators independently reviewed titles, abstracts, and the full text as needed to determine whether studies met inclusion criteria. Conflicts between reviewers were resolved through re-review and discussion. We did not include results published solely in abstract form. Data Extraction and Quality Assessment Two authors independently abstracted data on study design; setting; population characteristics (sex, age, race/ethnicity, comorbid conditions, and coronary anatomy); eligibility and exclusion criteria; procedures performed; numbers of patients screened, eligible, enrolled, and lost to follow-up; method of outcome assessment; and results for each outcome. We assessed the quality of included trials by using predefined criteria and graded their quality as A, B, or C by using methods described in detail elsewhere (10). In brief, a grade of A indicates a high-quality trial that clearly described the population, setting, interventions, and comparison groups; randomly allocated patients to alternative treatments; had low dropout rates; and reported intention-to-treat analysis of outcomes. A grade of B indicates a randomized trial with incomplete information about methods that might mask important limitations. A grade of C indicates that the trial had evident flaws, such as improper randomization, that could introduce significant bias. Data Synthesis and Analysis We used random-effects models to compute weighted mean rates and SEs for each outcome. We computed summary risk differences and odds ratios between PCI and CABG and the 95% CI for each outcome of interest at annual intervals. Because the results did not differ materially when risk differences and odds ratios (10) were used and the low rate of several outcomes (for example, procedural mortality) made the risk difference a more stable outcome metric (11, 12), we report here only the risk differences. We assessed heterogeneity of effects by using chi-square and I 2 statistics (13). When effects were heterogeneous (I 2 > 50%), we explored the effects of individual studies on summary effects by removing each study individually. We assessed the possibility of publication bias by visual inspection of funnel plots and calculated the number of missing studies required to change a statistically significant summary effect to not statistically significant (11). We performed analyses by using Comprehensive Meta-Analysis software, version 2.0 (Biostat, Englewood, New Jersey). Inclusion of Observational Studies We also searched for observational data to evaluate the generalizability of the RCT results, as reported in detail elsewhere (10). In brief, we included observational studies from clinical or administrative databases that included at least 1000 recipients of each revascularization procedure and provided sufficient information about the patient populations (such as demographic characteristics, preprocedure coronary anatomy, and comorbid conditions) and procedures performed (such as balloon angioplasty vs. bare-metal stents vs. drug-eluting stents). Role of the Funding Source This project was supported by the Agency for Healthcare Research and Quality. Representatives of the funding agency reviewed and commented on the study protocol and drafts of the manuscript, but the authors had final responsibility for the design, conduct, analysis, and reporting of the study. 
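The authors performed their analyses in Comprehensive Meta-Analysis software; purely as an editorial sketch of the random-effects pooling of risk differences and the I2 heterogeneity assessment described above (with hypothetical trial counts, not data from the review), the computation looks roughly like this:

import numpy as np

# Hypothetical per-trial event counts: (events_PCI, n_PCI, events_CABG, n_CABG)
trials = [(30, 500, 25, 490), (12, 180, 15, 175), (40, 610, 33, 600)]

rd, var = [], []
for e1, n1, e2, n2 in trials:
    p1, p2 = e1 / n1, e2 / n2
    rd.append(p1 - p2)                                   # risk difference, PCI minus CABG
    var.append(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)  # its variance
rd, var = np.array(rd), np.array(var)

w = 1 / var                                   # fixed-effect (inverse-variance) weights
rd_fixed = np.sum(w * rd) / np.sum(w)
Q = np.sum(w * (rd - rd_fixed) ** 2)          # Cochran's Q
df = len(trials) - 1
I2 = (max(0.0, (Q - df) / Q) * 100) if Q > 0 else 0.0                # I^2, in percent
tau2 = max(0.0, (Q - df) / (np.sum(w) - np.sum(w ** 2) / np.sum(w)))  # DerSimonian-Laird tau^2

w_re = 1 / (var + tau2)                       # random-effects weights
rd_re = np.sum(w_re * rd) / np.sum(w_re)
se_re = np.sqrt(1 / np.sum(w_re))
print(f"summary RD = {rd_re:.3f} "
      f"(95% CI {rd_re - 1.96 * se_re:.3f} to {rd_re + 1.96 * se_re:.3f}), I^2 = {I2:.0f}%")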
Results We identified 1695 potentially relevant articles, of which 204 merited full-text review (Appendix Figure). A total of 113 articles reporting on 23 unique RCTs met inclusion criteria (Table 1 [14126]). These trials enrolled a total of 9963 patients, of whom 5019 were randomly assigned to PCI and 4944 to CABG. Most trials were conducted in Europe, the United Kingdom, or both locations; only 3 trials were performed in the United States. The early studies (patient entry from 1987 to 1993) used balloon angioplasty as the PCI technique, and the later studies (patient entry from 1994 to 2002) used stents as the PCI technique. Only 1 small trial of PCI versus CABG used drug-eluting stents (116). Nine trials limited entry to patients with single-vessel disease of the proximal left anterior descending artery, whereas the remaining 14 trials enrolled patients with multivessel disease, either predominantly (3 trials) or exclusively (11 trials). Appendix Figure. Study flow diagram. CABG= coronary artery bypass grafting; CAD= coronary artery disease; PCI= percutaneous coronary intervention; RCT= randomized, controlled trial. Table 1. Overview of Randomized, Controlled Trials The quality of 21 trials was graded as A, and 1 trial (117) was graded as B. One trial (116) was graded as C because randomization may not have been properly executed (details are available elsewhere [10]). We performed sensitivity analyses by removing these studies from the analysis, and our summary results did not change statistically significantly. The average age of the trial participants was 61 years, 27% were women, and most were of European ancestry. Roughly 20% had diabetes, half had hypertension, and half had hyperlipidemia. Whereas approximately 40% of patients had a previous myocardial infarction, few had heart failure or poor left ventricular function. Among studies that enrolled patients with multivessel coronary disease, most had double-vessel rather than triple-vessel disease. Revascularization procedures were performed by using standard methods for the time the trial was conducted (Table 1). Among patients with multivessel disease, more grafts were placed during CABG than vessels were dilated during PCI. Among patients assigned to PCI, stents were commonly used in the recent studies, but in the earlier trials, balloon angioplasty was standard. Among patients assigned to CABG, arterial grafting with the left internal mammary artery was frequently done, especially in more recent trials. Some studies used minimally invasive, direct coronary artery bypass and off-pump operations to perform CABG in patients with single-vessel left anterior descending disease (Table 1). Short-Term and Procedural Outcomes Survival (within 30 days of the procedure) was high for both procedures: 98.9% for PCI and 98.2% for CABG. When data from all trials were combined, the survival difference between PCI and CABG was small and not statistically significant (0.2% [95% CI, 0.3% to 0.6%]) (Figure 1


Annals of Internal Medicine | 2012

Are Organic Foods Safer or Healthier Than Conventional Alternatives?: A Systematic Review

Crystal M. Smith-Spangler; Margaret L. Brandeau; Grace E. Hunter; J. Clay Bavinger; Maren Pearson; Paul J. Eschbach; Vandana Sundaram; Hau Liu; Patricia Schirmer; Christopher D Stave; Ingram Olkin; Dena M. Bravata

BACKGROUND: The health benefits of organic foods are unclear.

PURPOSE: To review evidence comparing the health effects of organic and conventional foods.

DATA SOURCES: MEDLINE (January 1966 to May 2011), EMBASE, CAB Direct, Agricola, TOXNET, Cochrane Library (January 1966 to May 2009), and bibliographies of retrieved articles.

STUDY SELECTION: English-language reports of comparisons of organically and conventionally grown food or of populations consuming these foods.

DATA EXTRACTION: 2 independent investigators extracted data on methods, health outcomes, and nutrient and contaminant levels.

DATA SYNTHESIS: 17 studies in humans and 223 studies of nutrient and contaminant levels in foods met inclusion criteria. Only 3 of the human studies examined clinical outcomes, finding no significant differences between populations by food type for allergic outcomes (eczema, wheeze, atopic sensitization) or symptomatic Campylobacter infection. Two studies reported significantly lower urinary pesticide levels among children consuming organic versus conventional diets, but studies of biomarker and nutrient levels in serum, urine, breast milk, and semen in adults did not identify clinically meaningful differences. All estimates of differences in nutrient and contaminant levels in foods were highly heterogeneous except for the estimate for phosphorus; phosphorus levels were significantly higher in organic than in conventional produce, although this difference is not clinically significant. The risk for contamination with detectable pesticide residues was lower among organic than conventional produce (risk difference, -30% [CI, -37% to -23%]), but differences in risk for exceeding maximum allowed limits were small. Escherichia coli contamination risk did not differ between organic and conventional produce. Bacterial contamination of retail chicken and pork was common but unrelated to farming method. However, the risk for isolating bacteria resistant to 3 or more antibiotics was higher in conventional than in organic chicken and pork (risk difference, 33% [CI, 21% to 45%]).

LIMITATION: Studies were heterogeneous and limited in number, and publication bias may be present.

CONCLUSION: The published literature lacks strong evidence that organic foods are significantly more nutritious than conventional foods. Consumption of organic foods may reduce exposure to pesticide residues and antibiotic-resistant bacteria.

PRIMARY FUNDING SOURCE: None.


Annals of Internal Medicine | 2004

Systematic Review: Surveillance Systems for Early Detection of Bioterrorism-Related Diseases

Dena M. Bravata; Kathryn M McDonald; Wendy M. Smith; Chara E. Rydzak; Herbert Szeto; David L. Buckeridge; Corinna A. Haberland; Douglas K Owens

Key Summary Points The practice of surveillance is changing to address the threat of bioterrorism and to take advantage of the increasing availability of electronic data. The authors identified published descriptions of 29 systems designed specifically for bioterrorism surveillance. Bioterrorism surveillance systems either monitor the incidence of bioterrorism-related syndromes (9) or monitor environmental samples for bioterrorism agents (20). Only 2 syndromic surveillance systems and no environmental monitoring system were evaluated in peer-reviewed studies. Both evaluations of syndromic surveillance systems compared the incidence of flu-like illness syndromes with results from national influenza surveillance. Existing evaluations of surveillance systems for detecting bioterrorism are insufficient to characterize the performance of these systems. Evaluation of bioterrorism surveillance is needed to inform decisions about deploying systems and to facilitate decision making on the basis of system results. The anthrax attacks of 2001 and the recent outbreaks of severe acute respiratory syndrome (SARS) and influenza strikingly demonstrate the continuing threat from illnesses resulting from bioterrorism and related infectious diseases. In particular, these outbreaks have highlighted that an essential component of preparations for illnesses and syndromes potentially related to bioterrorism includes the deployment of surveillance systems that can rapidly detect and monitor the course of an outbreak and thus minimize associated morbidity and mortality (1-3). Driven by the threat of additional outbreaks resulting from bioterrorism and the increasing availability of data available for surveillance, surveillance systems have proliferated. The Centers for Disease Control and Prevention (CDC) defines surveillance systems as those that collect and analyze morbidity, mortality, and other relevant data and facilitate the timely dissemination of results to appropriate decision makers (3, 4). However, there is little consensus as to which sources of surveillance data or which collection, analysis, and reporting technologies are probably the most timely, sensitive, and specific for detecting and managing bioterrorism-related illness and related emerging infectious diseases (5). Existing surveillance systems for bioterrorism-related diseases vary widely with respect to the methods used to collect the surveillance data, surveillance characteristics of the data collected, and analytic methods used to determine when a potential outbreak has occurred. Traditionally, the primary method for collecting surveillance data was manual reporting of suspicious and notifiable clinical and laboratory data from clinicians, hospitals, and laboratories to public health officials (6). Recent innovations in disease surveillance that may improve the timeliness, sensitivity, and specificity of bioterrorism-related outbreak detection include surveillance for syndromes rather than specific diseases and the automated extraction and analysis of routinely collected clinical, administrative, pharmacy, and laboratory data. Little is known about the accuracy of surveillance systems for bioterrorism and related emerging infectious diseases, perhaps because of the diversity of potential data sources for bioterrorism surveillance data; methods for their analysis; and the uncertainty about the costs, benefits, and detection characteristics of each. 
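The review does not single out a particular detection algorithm, but as a concrete editorial illustration of the automated temporal analysis of routinely collected counts mentioned above, a minimal control-chart rule over daily syndrome counts might look like this (all data hypothetical):

import statistics

# Hypothetical daily counts of an influenza-like-illness syndrome from one data source.
daily_counts = [12, 9, 14, 11, 10, 13, 12, 11, 15, 10, 12, 9, 31]

BASELINE_DAYS = 7   # sliding historical window
THRESHOLD_SD = 3.0  # alarm if today's count exceeds the baseline mean by this many SDs

def alarms(counts, baseline_days=BASELINE_DAYS, threshold_sd=THRESHOLD_SD):
    """Return indices of days whose count exceeds a simple moving-baseline threshold."""
    flagged = []
    for day in range(baseline_days, len(counts)):
        baseline = counts[day - baseline_days:day]
        mean = statistics.mean(baseline)
        sd = statistics.stdev(baseline) or 1.0   # avoid a zero threshold on flat baselines
        if counts[day] > mean + threshold_sd * sd:
            flagged.append(day)
    return flagged

print("alarm on days:", alarms(daily_counts))   # the final spike of 31 should be flagged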
Under the auspices of the University of California, San FranciscoStanford Evidence-based Practice Center, we prepared a comprehensive systematic review that evaluated the ability of available information technologies to inform clinicians and public health officials who are preparing for and responding to bioterrorism and related emerging infectious diseases (7). In this paper, we present the available data on existing systems for surveillance of illnesses and syndromes potentially related to bioterrorism and the published evaluation data on these systems. Methods We sought to identify published reports of surveillance systems designed to collect, analyze, and report surveillance data for bioterrorism-related diseases or syndromes or reports of surveillance systems for naturally occurring diseases, if potentially useful for bioterrorism surveillance. We used the U.S. Department of Health and Human Services definition of bioterrorism-related diseases (8-10). Because most patients with bioterrorism-related diseases initially present with influenza-like illness, acute respiratory distress, gastrointestinal symptoms, febrile hemorrhagic syndromes, and febrile illnesses with either dermatologic or neurologic findings, we considered these conditions to be the bioterrorism-related syndromes. We briefly summarize our methods, which are described in detail elsewhere (7). Literature Sources and Search Strategies We searched 3 sources for relevant reports: 5 databases of peer-reviewed articles (for example, MEDLINE, GrayLIT, and National Technical Information Service), government reports, and Web sites of relevant government and commercial entities. We consulted public health, bioterrorism preparedness, and national security experts to identify the 16 government agencies most likely to fund, develop, or use bioterrorism systems (for example, CDC and U.S. Department of Defense). We searched the Web sites of these government agencies and other academic and commercial sites. Finally, we identified additional articles from the bibliographies of included articles and from conference proceedings. We developed 2 separate search strategies: 1 for MEDLINE (January 1985 to April 2002) and 1 for other sources. In both searches, we included terms such as bioterrorism, biological warfare, information technology, surveillance, public health, and epidemiology. Complete search strategies are available from the authors (7). Study Selection and Data Abstraction We reviewed titles, abstracts, and full-length articles to identify potentially relevant articles. Two abstractors, who were blinded to the study authors, abstracted data from all included peer-reviewed articles onto pretested abstraction forms. Given the large volume of Web sites screened, only 1 abstractor, whose work was frequently reviewed by a colleague, collected data from each Web-based report. Evaluation of Reports of Surveillance Systems The CDC developed a draft guideline for evaluating public health surveillance systems (3, 11, 12). 
This guideline recommends that reports of surveillance systems include the following: descriptions of the public health importance of the health event under surveillance; the system under evaluation; the direct costs needed to operate the system; the usefulness of the system; and evaluations of the systems simplicity, flexibility (that is, the systems ability to change as surveillance needs change), acceptability (as reflected by the willingness of participants and stakeholders to contribute to the data collection, analysis and use), sensitivity to detect outbreaks, positive predictive value of system alarms for true outbreaks, representativeness of the population covered by the system, and timeliness of detection (11, 12). The guideline describes these key elements to consider in an evaluation of a surveillance system but does not provide specific scoring or an evaluation tool. We abstracted information about each CDC criterion from each included reference. Data Synthesis We reviewed 17510 citations of peer-reviewed articles and 8088 Web sites, of which 192 reports on 115 surveillance systems met our inclusion criteria (Figure 1). Of these, 29 systems were designed specifically for detecting bioterrorism-related diseases (as defined by the U.S. Department of Health and Human Services [8-10]) or bioterrorism-related syndromes (for example, flu-like syndrome and fever with rash). An additional 86 systems were designed for surveillance of naturally occurring illnesses, but elements of their design, deployment, or evaluations may be relevant for implementing or evaluating bioterrorism surveillance systems. For example, we included reports of systems for surveillance of nonbiothreat pathogens if they were designed to rapidly transmit surveillance data from sources that could be useful for detecting bioterrorism-related illness (for example, laboratory data, clinicians reports, hospital-based data, or veterinary data) or if they reported methods of spatial or temporal analyses that facilitated rapid and accurate decision making by public health users. We present the evidence about the systems designed principally for bioterrorism surveillance systems and summarize the evidence about the other surveillance systems. Figure 1. Search results. Surveillance Systems Designed for Bioterrorism-Related Diseases or Syndromes We identified 2 types of systems for surveillance of bioterrorism-related diseases or syndromes: those that monitor the incidence of bioterrorism-related syndromes and those that collect and transmit bioterrorism detection data from environmental or clinical samples to decision makers. Surveillance Systems Collecting Syndromic Reports The 9 surveillance systems designed to monitor the incidence of bioterrorism-related syndromes vary widely with respect to syndromes under surveillance, data collected, flexibility of the data collection tool (for example, some Web-based systems allow remote users to change the prompts given to data collectors), acceptability to data collectors, and methods used to analyze the data (13-23) (Table). Table. Surveillance Systems Collecting Syndromic Reports* Two syndromic surveillance systems were evaluated in peer-reviewed reports: the National Health Service Direct system and the program of systematic surveillance of International Classification of Diseases, Ninth Revision (ICD-9), codes from the electronic medical records of the Harvard Vanguard Medical Associates (20, 23). In these evaluations, the numbers of


Annals of Internal Medicine | 2007

Systematic Review: The Safety and Efficacy of Growth Hormone in the Healthy Elderly

Hau Liu; Dena M. Bravata; Ingram Olkin; Smita Nayak; Brian K. Roberts; Alan M. Garber; Andrew R. Hoffman

Context Human growth hormone (GH) is widely sold and used as an antiaging agent. Contributions The researchers reviewed all clinical trials of GH to determine if it is safe and effective in the healthy elderly. They found that GH had no important effects on body composition but led to frequent adverse effects, most notably soft tissue edema and arthralgias. Cautions Clinical trials of GH have been small, and they may not have been able to detect important differences. Implications Published data about GH use in the elderly are limited, but available evidence suggests that risks far outweigh benefits when it is used as an antiaging treatment in healthy older adults. The Editors Since the 1990 publication of an article by Rudman and colleagues (1) suggesting that a short course of recombinant human growth hormone (GH) therapy could reverse decades of age-related changes in body composition in otherwise healthy elderly men, the use of GH as an antiaging therapy has increased rapidly in the United States and worldwide (2). Interest in Rudman and colleagues' results has remained high (3), spawning several popular books in the lay press (4-7). Use of GH as an antiaging therapy ranks as 1 of the most popular health-related Internet searches (8). Although the exact number of people who use GH as an antiaging therapy is unknown, Perls and colleagues (2) reported that 20,000 to 30,000 people used GH in the United States as an antiaging therapy in 2004 (9), a more than 10-fold increase since the mid-1990s (10, 11). Annual sales of GH worldwide exceed
1.5 billion (2), one third of which may be for off-label use (12). Proponents of GH for its antiaging properties claimed that more than 100000 people received GH without a prescription in 2002 (2, 11). The rationale for using GH as an antiaging therapy, referred to by some as the sweet syringe of youth (10), lies in the age-related decline in activity of the hypothalamic growth hormoneinsulin-like growth factor axis, a phenomenon referred to as the somatopause (1319). Some signs and symptoms of GH deficiency (that is, GH deficiency due to hypothalamic or pituitary defects), such as increased adiposity and decreased lean body mass, are similar to changes that occur with aging, suggesting that GH replacement therapy may ameliorate age-related changes. Although GH therapy improves body composition (20), bone density (20, 21), and cholesterol levels (22) and may decrease death (23) in people who are GH-deficient, its safety, efficacy, and role in the healthy elderly is highly controversial (24). Whereas proponents of GH have recommended its use for treating the somatopause (18, 19, 25, 26), others, including the American Association of Clinical Endocrinologists (27), have warned that such therapy is not warranted. High levels of insulin-like growth factor-1 (IGF-1), which are regulated by GH levels, may be associated with serious adverse events (12), including prostate cancer (28). Furthermore, the distribution of GH for use as an antiaging therapy in the United States is illegal (2). We performed a systematic review and meta-analysis of randomized, controlled trials to determine the safety and efficacy of GH therapy in the healthy elderly. We aimed to evaluate the effects of GH on body composition, exercise capacity, bone density, serum lipid levels, and glucose metabolism. In addition, we sought to synthesize the evidence on adverse events associated with GH use in the healthy elderly. Methods Literature Searches An author and a professional librarian developed search strategies to identify potentially relevant studies. We searched MEDLINE and EMBASE databases for English-language studies published through 21 November 2005 using keywords including growth hormone; aging; and randomized, controlled clinical trials (Appendix Table 1). We searched bibliographies of retrieved articles for additional studies. Appendix Table 1. Search Strategy Study Selection We sought 2 types of randomized, controlled trials: those that compared injectable GH therapy with no GH therapy and those that compared injectable GH therapy plus lifestyle interventions (that is, exercise with or without a dietary intervention) with lifestyle interventions alone. We included studies that: 1) evaluated at least 10 participants; 2) included participants who received GH therapy for 2 weeks or more; 3) enrolled only community-dwelling participants; 4) assessed participants with a mean body mass index of 35 kg/m2 or less and a mean age of 50 years or more; and 5) provided data on at least 1 clinical outcome of interest. We excluded studies that: 1) focused solely on evaluating GH-releasing factor, other GH secretagogues, or IGF-1; 2) explicitly included patients with diabetes mellitus, cardiac disease, thyroid disease, osteoporosis, or cancer; or 3) evaluated GH as a treatment for a specific illness (for example, adult GH deficiency, the HIV wasting syndrome, renal failure, or critical illness). 
Data Abstraction An author reviewed the titles and abstracts of articles identified through our search and retrieved potentially relevant studies. Two physicians with postdoctoral training in health services research, endocrinology, or both reviewed each retrieved study and abstracted data independently onto pretested abstraction forms. We resolved abstraction differences by repeated review. If a study did not present data necessary for analysis, mentioned results but did not present data, or presented data graphically, we requested additional data from study authors. If several studies presented findings from the same cohort, we used these data only once in our analysis. Abstracted Data We abstracted 4 types of data from each study: 1) study quality (for example, quality of randomization, blinding, outcomes, and statistical analyses) (29, 30); 2) study sample characteristics (for example, age, sex, weight, medical conditions, and baseline IGF-1 levels); 3) study interventions (for example, dosage, frequency, and length of GH therapy); and 4) clinical outcomes. We included studies that provided data on at least 1 of the following 6 clinical outcomes of interest: 1) body composition (for example, weight, lean body or fat-free mass, or fat mass); 2) strength or functional capacity (for example, handgrip strength or maximal rate of oxygen consumption); 3) bone dynamics (for example, femoral neck or lumbar spine bone mineral density or bone mineral content); 4) cardiovascular risk factors (for example, heart rate, total, low-density lipoprotein, and high-density lipoprotein cholesterol levels or triglyceride levels); 5) insulin resistance markers (for example, fasting glucose and insulin levels and 2-hour glucose post75-gram oral glucose tolerance test results); 5) quality-of-life or depression scales; or 6) adverse events. Because the terms lean body mass and fat-free mass are typically used interchangeably in scientific literature, we combined data on fat-free mass and lean body mass into the single category of lean body mass. Quantitative Data Synthesis To describe key study characteristics, we computed mean values weighted by the number of participants in each trial. To evaluate the effects of GH on the outcomes of interest, we computed a change score for each clinical outcome for participants in the treatment and control groups as the value of the outcome at the end of the trial minus the value of the outcome at the start of the trial. We then used these change scores to calculate 2 study effect sizes: the Hedges adjusted g, which is an estimate of the standardized mean difference (31), and the weighted mean difference (32). We calculated both study effect sizes because the Hedges adjusted g, although an unbiased estimate, lacks units; whereas the results of the weighted mean difference are in the same units as the clinical outcome of interest, facilitating clinical interpretation. Our results from either method did not substantially differ, and we present effect sizes calculated by using only the weighted mean difference. If studies reported standard errors, we converted them to standard deviations. For studies that did not report the variance of an outcome at the end of the trial minus that at the start of the trial, we calculated the variance as the sum of the variances at the start and end of the trial minus twice the covariance. Calculation of the covariance between the end of the trial and the start of the trial requires the correlations from individual patient data. 
Because these correlations were unavailable, we computed the correlation of the reported means, which ranged from 0.61 to 0.99, and used values over this interval to estimate the covariance for each outcome. We chose a correlation of 0.80 as our baseline value, although the pooled effect sizes did not substantially change when we varied the correlation over the range of 0.61 to 0.99. We combined studies by using the DerSimonian and Laird inverse variance weighted method (random-effects model) and the MantelHaenszel method (fixed-effects model) (31, 32). We present the results from only the random-effects model because of statistical heterogeneity in some clinical outcomes. For body composition measures, we calculated separate summary effect sizes for the following: 1) studies of groups receiving GH versus studies of groups not receiving GH; 2) studies of GH plus lifestyle interventions versus studies of lifestyle interventions alone; 3) studies in which researchers administered GH for less than 26 weeks versus studies in which they administered GH for 26 weeks or more; and 4) study populations in which researchers evaluated only men versus studies evaluating only women. Because few studies have reported outcomes other than body composition measures, we calculated a single effect size for other clinical outcomes. We evaluated the effects of study heterogeneity on our summary results. We sought sources of heterogeneity affecting body composition outcomes t
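Stated as formulas (an editorial restatement of the covariance imputation and pooling just described, with r = 0.80 as the authors' baseline value), the variance of each change score and the pooled weighted mean difference are

\mathrm{Var}(\Delta) = \mathrm{SD}_{\text{end}}^2 + \mathrm{SD}_{\text{start}}^2 - 2\,r\,\mathrm{SD}_{\text{end}}\,\mathrm{SD}_{\text{start}}, \qquad r \approx 0.80,

\mathrm{WMD} = \frac{\sum_i w_i\,\big(\Delta\bar{y}_{\mathrm{GH},i} - \Delta\bar{y}_{\mathrm{control},i}\big)}{\sum_i w_i},

with inverse-variance weights w_i = 1/\mathrm{Var}_i for the fixed-effects model and w_i = 1/(\mathrm{Var}_i + \tau^2) under the random-effects (DerSimonian and Laird) model.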


Annals of Internal Medicine | 2011

Systematic Review: Benefits and Harms of In-Hospital Use of Recombinant Factor VIIa for Off-Label Indications

Veronica Yank; C Vaughan Tuohy; Aaron C Logan; Dena M. Bravata; Kristan Staudenmayer; Robin Eisenhut; Vandana Sundaram; Donal McMahon; Ingram Olkin; Kathryn M McDonald; Douglas K Owens; Randall S. Stafford

BACKGROUND: Recombinant factor VIIa (rFVIIa), a hemostatic agent approved for hemophilia, is increasingly used for off-label indications.

PURPOSE: To evaluate the benefits and harms of rFVIIa use for 5 off-label, in-hospital indications: intracranial hemorrhage, cardiac surgery, trauma, liver transplantation, and prostatectomy.

DATA SOURCES: Ten databases (including PubMed, EMBASE, and the Cochrane Library) queried from inception through December 2010. Articles published in English were analyzed.

STUDY SELECTION: Two reviewers independently screened titles and abstracts to identify clinical use of rFVIIa for the selected indications and identified all randomized, controlled trials (RCTs) and observational studies for full-text review.

DATA EXTRACTION: Two reviewers independently assessed study characteristics and rated study quality and indication-wide strength of evidence.

DATA SYNTHESIS: 16 RCTs, 26 comparative observational studies, and 22 noncomparative observational studies met inclusion criteria. Identified comparators were limited to placebo (RCTs) or usual care (observational studies). For intracranial hemorrhage, mortality was not improved with rFVIIa use across a range of doses. Arterial thromboembolism was increased with medium-dose rFVIIa use (risk difference [RD], 0.03 [95% CI, 0.01 to 0.06]) and high-dose rFVIIa use (RD, 0.06 [CI, 0.01 to 0.11]). For adult cardiac surgery, there was no mortality difference, but there was an increased risk for thromboembolism (RD, 0.05 [CI, 0.01 to 0.10]) with rFVIIa. For body trauma, there were no differences in mortality or thromboembolism, but there was a reduced risk for the acute respiratory distress syndrome (RD, -0.05 [CI, -0.02 to -0.08]). Mortality was higher in observational studies than in RCTs.

LIMITATIONS: The amount and strength of evidence were low for most outcomes and indications. Publication bias could not be excluded.

CONCLUSION: Limited available evidence for 5 off-label indications suggests no mortality reduction with rFVIIa use. For some indications, it increases thromboembolism.


Annals of Internal Medicine | 2006

Systematic Review: A Century of Inhalational Anthrax Cases from 1900 to 2005

Jon-Erik C Holty; Dena M. Bravata; Hau Liu; Richard A. Olshen; Kathryn M McDonald; Douglas K Owens

Key Summary Points Initiation of antibiotic or anthrax antiserum therapy during the prodromal phase of inhalational anthrax is associated with an improved short-term survival. Multidrug antibiotic regimens are associated with decreased mortality, especially when they are administered during the prodromal phase. Most surviving patients will probably require drainage of reaccumulating pleural effusions. Despite modern intensive care, fulminant-phase anthrax is rarely survivable. The 2001 anthrax attack demonstrated the vulnerability of the United States to anthrax bioterrorism. The mortality rate observed during the 2001 U.S. attack (45%) was considerably lower than that historically reported for inhalational anthrax (89% to 96%) (1, 2). This reduction generally is attributed to the rapid provision of antibiotics and supportive care in modern intensive care units (3). However, no comprehensive reviews of reports of inhalational anthrax cases (including those from 2001) that evaluate how patient factors and therapeutic interventions affect disease progression and mortality have been published. Before the introduction of antibiotics, anthrax infection was primarily treated with antiserum (4). Anthrax antiserum reportedly decreased mortality by 75% compared with no treatment (5-8), and its efficacy is supported by recent animal data (9). Later, effective antibiotics, such as penicillin and chloramphenicol, were added to anthrax treatment strategies (10, 11). Currently, combination antibiotic therapy with ciprofloxacin (or doxycycline), rifampin, and clindamycin is recommended on the basis of anecdotal evidence from the U.S. 2001 experience (1, 12, 13). Historically, the clinical course of untreated inhalational anthrax has been described as biphasic, with an initial benign prodromal latent phase, characterized by a nonspecific flu-like syndrome, followed by a severe fulminant acute phase, characterized by respiratory distress and shock that usually culminates in death (2, 14). The duration of the prodromal phase has been reported to range from 1 to 6 days (14, 15), whereas that of the fulminant phase has been described as less than 24 hours (14, 16). A 1957 study confirmed these estimates of disease progression but was based on only 6 patients (17). Because a report synthesizing the data from all reported cases of inhalational anthrax (including those from 2001) has not been published, we do not have accurate estimates of the time course associated with disease progression or a clear understanding of the extent to which patient characteristics and treatment factors affect disease progression and mortality. This information is important for developing appropriate treatment and prophylaxis protocols and for accurately simulating anthrax-related illness to inform planning efforts for bioterrorism preparedness. We systematically reviewed published cases of inhalational anthrax between 1900 and 2005 to evaluate the effects of patient factors (for example, age and sex) and therapeutic factors (for example, time to onset of treatment) on disease progression and mortality. Methods Literature Sources and Search Terms We searched MEDLINE to identify case reports of inhalational anthrax (January 1966 to June 2005) by using the Medical Subject Heading (MeSH) terms anthrax and case reports. 
Because many reports were published before 1966 (the earliest publication date referenced in MEDLINE), we performed additional comprehensive searches of retrieved bibliographies and the indexes of 14 selected journals from 1900 to 1966 (for example, New England Journal of Medicine, The Lancet, La Presse Mdicale, Deutsche Medizinische Wochenschrift, and La Semana Mdica) to obtain additional citations. We considered all case reports of inhalational anthrax to be potentially eligible for inclusion, regardless of language. Study Selection We considered a case report to be eligible for inclusion if its authors established a definitive diagnosis of inhalational anthrax. Appendix Table 1 presents the details of our inclusion criteria. We excluded articles that described cases presenting before 1900 because Bacillus anthracis was not identified as the causative agent of clinical inhalational anthrax until 1877 (18) and because the use of reliable microscopic (19) and culture examination techniques (20) to confirm the diagnosis were not developed until the late 19th century. Appendix Table 1. Inclusion Criteria Data Abstraction One author screened potentially relevant articles to determine whether they met inclusion criteria. Two authors independently abstracted data from each included English-language article and reviewed bibliographies for additional potentially relevant studies. One author abstracted data from nonEnglish-language articles. We resolved abstraction discrepancies by repeated review and discussion. If 2 or more studies presented the same data from 1 patient, we included these data only once in our analyses. We abstracted 4 types of data from each included article: year of disease onset, patient information (that is, age, sex, and nationality), symptom and disease progression information (for example, time of onset of symptoms, fulminant phase, and recovery or death and whether the patient developed meningitis), and treatment information (for example, time and disease stage of the initiation of appropriate treatment and hospitalization). We based our criteria for determining whether a patient had progressed from the prodromal phase to the fulminant phase on distinguishing clinical features of five 2001 (3, 21, 22) and five 1957 (17) cases of fulminant inhalational anthrax. The fulminant phase is described historically as a severe symptomatic disease characterized by abrupt respiratory distress (for example, dyspnea, stridor, and cyanosis) and shock. Meningoencephalitis has been reported to occur in up to 50% of cases of fulminant inhalational anthrax (23). We considered any patient who had marked cyanosis with respiratory failure, who needed mechanical ventilation, who had meningoencephalitis, or who died as having been in the fulminant phase of disease. We used the reported time of an acute change in symptoms or deteriorating clinical picture to estimate when a confirmed fulminant case had progressed from the prodromal phase. We considered therapy for inhalational anthrax to be appropriate if either an antibiotic to which anthrax is susceptible was given (by oral, intramuscular, or intravenous routes) (24-27) or anthrax antiserum therapy was initiated. We classified patients who received antibiotics that are resistant to strains of B. anthracis (<70% susceptibility) as having received no antibiotics. If treatment with antibiotics or antiserum was given, we assumed that the treatment was appropriately dosed and administered. 
Statistical Analyses We used univariate analyses with SAS software, version 9.1 (SAS Institute Inc., Cary, North Carolina), to summarize the key patient and treatment characteristics. We compared categorical variables with the Fisher exact test and continuous variables with a 2-tailed WilcoxonMannWhitney test. For single comparisons, we considered a P value less than 0.05 to be statistically significant. When comparing U.S. 2001 with pre-2001 cases (or comparing patients who lived with those who died), we applied a Bonferroni correction to account for multiple comparisons (we considered P< 0.025 to be statistically significant: 0.05/2 = 0.025). We computed correlations for pairs of predictors available for each case at the beginning of the course of disease. Adjustments for Censored Data Infectious disease data are subject to incomplete observations of event times (that is, to censoring), particularly in the presence of therapeutic interventions. This can lead to invalid estimation of relevant event time distributions. For example, patients with longer prodromal stage durations are more likely to receive antibiotics than patients with shorter prodromal stage durations, and they may be, therefore, less likely to progress to fulminant stage or death. To account for censoring of our time data, we used maximum likelihood estimates by using both Weibull and log-normal distributions (28). The Appendix provides a detailed description of these analyses. Evaluating Predictors of Disease Progression and Mortality We used a multivariate Cox proportional hazards model to evaluate the prognostic effects of the following features on survival: providing antibiotics or antiserum (a time-dependent covariate in 3 categories: none, single-drug regimen, or multidrug regimen); the stage during which treatment with antibiotics or antiserum was initiated (prodromal stage vs. fulminant stage or no therapy); age (continuous variable); sex; if therapy was given, whether patients received a multidrug regimen (for example, 2 appropriate antibiotics or combination antibioticanthrax antiserum therapy); the use of pleural fluid drainage (a time-dependent covariate); development of anthrax meningoencephalitis (a time-dependent covariate); and whether the case was from the 2001 U.S. attack. We assessed each variable by stepwise backward regression using a P value cutoff of 0.100 or less. We excluded 8 adult patients for whom age was not reported. Although we did not perform extensive goodness-of-fit tests of our models, we did at least fit models in which we entered time not only linearly but also quadratically. Improvement in fit, as judged by conventional Wald and other tests, did not result, nor did including quadratic time variables further explain the data. To estimate mortality as a function of duration from symptom onset to antibiotic initiation, we first calculated a disease progression curve describing the time from symptom onset to fulminant phase among untreated patients by using the Weibull maximum likelihood estimates from the 71 cases for which time estimates were known. We then assigned a mortality rate to patients who had treatmen
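The paper describes its censoring adjustment only in outline; as a minimal editorial sketch of a right-censored Weibull maximum-likelihood fit of the kind described above (durations and censoring flags are hypothetical), one could write:

import numpy as np
from scipy.optimize import minimize

# Hypothetical days from symptom onset to fulminant phase; 1 = progression observed, 0 = right-censored.
times = np.array([1.5, 2.0, 3.0, 4.5, 2.5, 6.0, 3.5])
observed = np.array([1, 1, 0, 1, 1, 0, 1])

def neg_log_lik(params):
    # Parameterize on the log scale so shape k and scale lam stay positive.
    log_k, log_lam = params
    k, lam = np.exp(log_k), np.exp(log_lam)
    z = times / lam
    log_f = np.log(k / lam) + (k - 1) * np.log(z) - z ** k   # log density for observed events
    log_S = -(z ** k)                                        # log survival for censored cases
    return -(observed * log_f + (1 - observed) * log_S).sum()

fit = minimize(neg_log_lik, x0=[0.0, 1.0], method="Nelder-Mead")
k_hat, lam_hat = np.exp(fit.x)
print(f"Weibull shape k = {k_hat:.2f}, scale lambda = {lam_hat:.2f} days")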


Annals of Internal Medicine | 2009

Systematic Review: Elective Induction of Labor Versus Expectant Management of Pregnancy

Aaron B. Caughey; Vandana Sundaram; Anjali J Kaimal; Allison Gienger; Yvonne W. Cheng; Kathryn M McDonald; Brian L Shaffer; Douglas K Owens; Dena M. Bravata

Induction of labor is increasing in the United Statesfrom 9.5% of births in 1990 to 22.1% of births in 2004 (1, 2). Labor may be induced because of maternal (for example, diabetes mellitus, unstable cardiac disease, hypertensive disease of pregnancy) or fetal (for example, nonreassuring results on antenatal testing, intrauterine growth restriction) indications. Induction of labor without a medical indication is termed elective induction of labor and appears to be increasing even more rapidly than induction of labor as a whole (35). Elective induction may be motivated by a variety of reasons. For example, pregnant women may wish to end their pregnancy because of physical discomfort; concern that rapidly progressing labor would preclude timely arrival at the hospital or epidural placement; scheduling issues; or ongoing concerns for maternal, fetal, or neonatal complications (1). Clinicians caring for pregnant women (such as obstetricians, family practice physicians, and midwives) may have similar nonmedical reasons for recommending elective induction of labor for their patients (4). They, too, may wish to end the ongoing risk for complications in the pregnancy, limit their patients physical discomfort, or reduce the risks imposed by geographic barriers (6, 7). Clinicians may also have an incentive to use elective induction for their own economic benefit and scheduling preferences. Thus, it is imperative to characterize the potential maternal and neonatal outcomes associated with elective induction of labor. Elective induction of labor necessarily reduces some risks of an ongoing pregnancy, such as development of preeclampsia, oligohydramnios, macrosomia, or intrauterine fetal demise at a later gestational age. Randomized, controlled trials have compared the rates of cesarean delivery between women with induction of labor and those with expectant management of pregnancy, and have generally concluded that the cesarean rate was unchanged or lower among the induced group (8, 9). However, the commonly held dogma regarding induction of labor is that it increases the risk for cesarean delivery (10), which in turn is associated with a host of maternal complications. In addition, a cesarean delivery in the current pregnancy increases both maternal and neonatal risks in future pregnancies (11, 12). One critical aspect of the existing literature is the control group used for comparison with elective induction of labor. For a pregnant woman at a particular gestational age, the choices are to expectantly manage the pregnancy (no intervention) or to intervene with an induction of labor. Expectant management of the pregnancy allows the pregnancy to progress to a future gestational age. Thus, the woman undergoing expectant management may go into spontaneous labor, or she may require a medically indicated induction of labor at a future gestation because of developing preeclampsia, nonreassuring results on antenatal testing, or postterm pregnancy (8). One methodological problem with many studies of induction of labor, particularly observational studies, is that women in spontaneous labor are used as a control group. This is not a realistic comparison because women and their providers actually face the choice between induction of labor and expectant management, not spontaneous labor (13). 
Thus, in studies evaluating the risks and benefits of elective induction of labor, women undergoing elective induction of labor should be compared with women having expectant management of the pregnancy rather than women undergoing spontaneous labor. The effect of elective induction of labor on the frequency of cesarean delivery is a critical uncertainty. An understanding of the effect of induction of labor on cesarean delivery would help clinicians and policymakers determine the benefits and harms, and thus define a reasonable role for elective induction of labor in current obstetric practice. In this review, we evaluated the published evidence on the maternal and neonatal risks of elective induction relative to expectant management of pregnancy. We also evaluated the evidence comparing elective induction of labor with spontaneous labor to demonstrate the reasons for the currently held opinion about the effect of elective induction of labor on cesarean delivery. Methods Data Sources and Searches We searched MEDLINE (from 1966 to February 2009), Web of Science, CINAHL, and the Cochrane Central Register of Controlled Trials (up to March 2009) to identify all English-language studies on elective induction of labor in humans. We also manually reviewed the reference lists of included articles and bibliographies of systematic reviews to identify additional relevant articles. The Appendix, presents the details of our search strategy. Study Selection We sought studies that reported maternal and neonatal outcomes for women who had induction of labor without a specific indication during the term period of pregnancy, at or after 37 weeks and before 42 weeks of gestation; beyond 42 weeks is defined as a postterm pregnancy by the American College of Obstetricians and Gynecologists and is a medical indication for induction of labor. We included studies on elective induction of labor only if the article reported mode of delivery (that is, cesarean, spontaneous vaginal, or operative vaginal deliveries) or maternal or fetal and neonatal outcomes. We excluded articles that only compared different methods of induction of labor. We included duplicate articles of the same study only once in our analyses. We included only articles published from 1966 and beyond to represent modern obstetric practice. Study Design We included randomized, controlled trials (RCTs), cohort studies, and casecontrol studies. Because most RCTs compared elective induction of labor with expectant management and most of the observational studies compared elective induction of labor with spontaneous labor, we included all 3 study designs and both types of controls but analyzed them separately. The fundamental comparison for the study was between elective induction of labor and expectant management of pregnancy. Because the comparison between elective induction of labor and spontaneous labor is commonly reported in the observational literature, we present these findings primarily to demonstrate the current state of the existing literature as well as to explore the differential findings between the RCTs and observational studies. Data Extraction and Quality Assessment Two authors independently reviewed the title and abstract of all studies retrieved from our searches to assess whether the article met inclusion criteria. 
They then reviewed and extracted the following information from each included study: study period, location and setting of the study, method used to achieve labor induction, study design, mode of delivery, maternal and neonatal outcomes, and quality assessment variables. When the reviewers disagreed on the data abstracted, a third reviewer abstracted the data as well to resolve the differences. The resolution was agreed upon by all 3 reviewers. Consistent with the Agency for Healthcare Research and Quality (AHRQ) draft Grading the Strength of a Body of Evidence When Comparing Medical Interventions (14), we developed specific criteria for evaluating the quality of the individual included studies and for assessing the applicability of these studies. Our quality assessments were based on the extent to which the included studies had a prospective design, compared women undergoing elective induction of labor with women being managed expectantly, treated key factors affecting cesarean delivery rates (such as maternal age, parity, body mass index, cervical stage, and gestational age) similarly in intervention and control patients, and were adequately powered to evaluate relatively rare outcomes of interest for both mothers and neonates. We summarized the results of our quality appraisal of the studies by using stacked bars, as has been done in previous systematic reviews (1517). These assessments were also summarized as good, fair, or poor ratings for each individual study. We assessed the applicability of the individual studies by evaluating the population studied, place and time the study was conducted, and methods of induction used. Individual applicability was assessed as good, fair, or poor. To grade the overall strength of evidence, we considered the quality and applicability of the individual studies, the consistency of the results across the included studies, and volume of the literature. Each outcome examined was then assigned a grade of high, moderate, low, or insufficient to represent overall quality. Data Synthesis and Analysis We computed 2 summary effect sizes by using random-effects models for each outcome of interest reported by more than 4 studies: a summary odds ratio (OR) and a summary risk difference. We present the summary OR as the primary outcome metric in our figures and text, and we also provide the summary risk difference when applicable. The summary ORs were created such that a value greater than 1.0 means that expectant management of the pregnancy is associated with a higher risk for a particular outcome. We also conducted stratified (subgroup) analyses to evaluate the effect of the following variables on our outcomes of interest: year (1990 and earlier vs. after 1990), country (United States vs. a country other than the United States), gestational age (before or after 41 completed weeks of gestation), and setting (academic, community hospital, both, or multicenter). We defined study year as the year in which the study was started; if this was not reported, then we used the publication year for the study year. We assessed the statistical heterogeneity for all computed summary effects by calculating the Q statistic (designated Q statistic with a P value<0.05 was considered heterogeneous) and I 2 statistic (designated I 2 >50% was considered heterogeneous) (18
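For reference (an editorial restatement, not text from the review), the heterogeneity statistics used as cutoffs above are, in standard notation,

Q = \sum_i w_i\,(\hat{\theta}_i - \hat{\theta})^2, \qquad w_i = 1/\widehat{\mathrm{Var}}(\hat{\theta}_i), \qquad I^2 = \max\!\left(0,\ \frac{Q - (k-1)}{Q}\right) \times 100\%,

where \hat{\theta}_i is the log odds ratio from study i, \hat{\theta} is the inverse-variance pooled estimate, and k is the number of studies; a computed summary effect was considered heterogeneous when the Q test gave P < 0.05 or I^2 exceeded 50%.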

Annals of Internal Medicine | 2008

Systematic Review: The Effects of Growth Hormone on Athletic Performance

Hau Liu; Dena M. Bravata; Ingram Olkin; Anne L. Friedlander; Vincent Liu; Brian K. Roberts; Eran Bendavid; Olga Saynina; Shelley R. Salpeter; Alan M. Garber; Andrew R. Hoffman

Collaboration


Top co-authors of Dena M. Bravata.

Jason Carter

University of California

Paul G Shekelle

VA Palo Alto Healthcare System
