Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Lisa Bero is active.

Publication


Featured research published by Lisa Bero.


Medical Care | 2001

Changing Provider Behavior: An Overview of Systematic Reviews of Interventions

Jeremy Grimshaw; Liz Shirran; R. Thomas; G. Mowatt; C. Fraser; Lisa Bero; Roberto Grilli; Emma Harvey; Andrew D. Oxman; M. A. O'Brien

Background. Increasing recognition of the failure to translate research findings into practice has led to greater awareness of the importance of using active dissemination and implementation strategies. Although there is a growing body of research evidence about the effectiveness of different strategies, this is not easily accessible to policy makers and professionals. Objectives. To identify, appraise, and synthesize systematic reviews of professional educational or quality assurance interventions to improve quality of care. Research design. An overview was made of systematic reviews of professional behavior change interventions published between 1966 and 1998. Results. Forty-one reviews were identified covering a wide range of interventions and behaviors. Passive approaches are generally ineffective and unlikely to result in behavior change. Most other interventions are effective under some circumstances; none are effective under all circumstances. Promising approaches include educational outreach (for prescribing) and reminders. Multifaceted interventions targeting different barriers to change are more likely to be effective than single interventions. Conclusions. Although the current evidence base is incomplete, it provides valuable insights into the likely effectiveness of different interventions. Future quality improvement or educational activities should be informed by the findings of systematic reviews of professional behavior change interventions.


The New England Journal of Medicine | 2000

Coverage by the News Media of the Benefits and Risks of Medications

Ray Moynihan; Lisa Bero; Dennis Ross-Degnan; David Henry; Kirby Lee; Judy Watkins; Connie Mah; Stephen B. Soumerai

BACKGROUND The news media are an important source of information about new medical treatments, but there is concern that some coverage may be inaccurate and overly enthusiastic. METHODS We studied coverage by U.S. news media of the benefits and risks of three medications that are used to prevent major diseases. The medications were pravastatin, a cholesterol-lowering drug for the prevention of cardiovascular disease; alendronate, a bisphosphonate for the treatment and prevention of osteoporosis; and aspirin, which is used for the prevention of cardiovascular disease. We analyzed a systematic probability sample of 180 newspaper articles (60 for each drug) and 27 television reports that appeared between 1994 and 1998. RESULTS Of the 207 stories, 83 (40 percent) did not report benefits quantitatively. Of the 124 that did, 103 (83 percent) reported relative benefits only, 3 (2 percent) absolute benefits only, and 18 (15 percent) both absolute and relative benefits. Of the 207 stories, 98 (47 percent) mentioned potential harm to patients, and only 63 (30 percent) mentioned costs. Of the 170 stories citing an expert or a scientific study, 85 (50 percent) cited at least one expert or study with a financial tie to a manufacturer of the drug that had been disclosed in the scientific literature. These ties were disclosed in only 33 (39 percent) of the 85 stories. CONCLUSIONS News-media stories about medications may include inadequate or incomplete information about the benefits, risks, and costs of the drugs as well as the financial ties between study groups or experts and pharmaceutical manufacturers.
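
A note on the arithmetic behind the relative-versus-absolute distinction the coders applied: the sketch below uses invented event rates (not figures from the pravastatin, alendronate, or aspirin trials) purely to show why the two framings read so differently.

```python
# Illustrative only: hypothetical event rates, not data from the trials
# covered in the news stories analyzed above.
control_event_rate = 0.041   # 4.1% of untreated patients have the event
treated_event_rate = 0.030   # 3.0% of treated patients have the event

absolute_risk_reduction = control_event_rate - treated_event_rate        # 1.1 percentage points
relative_risk_reduction = absolute_risk_reduction / control_event_rate   # ~27% fewer events
number_needed_to_treat = 1 / absolute_risk_reduction                     # ~91 patients per event prevented

print(f"Absolute benefit: {absolute_risk_reduction:.1%}")
print(f"Relative benefit: {relative_risk_reduction:.0%}")
print(f"Number needed to treat: {number_needed_to_treat:.0f}")
```

A story quoting only the roughly 27% relative reduction sounds far more impressive than the 1.1-percentage-point absolute reduction, which is why the study tracked whether both figures were reported.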


PLOS Medicine | 2008

Reporting bias in drug trials submitted to the Food and Drug Administration: review of publication and presentation.

Kristin Rising; Peter Bacchetti; Lisa Bero

Background Previous studies of drug trials submitted to regulatory authorities have documented selective reporting of both entire trials and favorable results. The objective of this study is to determine the publication rate of efficacy trials submitted to the Food and Drug Administration (FDA) in approved New Drug Applications (NDAs) and to compare the trial characteristics as reported by the FDA with those reported in publications. Methods and Findings This is an observational study of all efficacy trials found in approved NDAs for New Molecular Entities (NMEs) from 2001 to 2002 inclusive and all published clinical trials corresponding to the trials within the NDAs. For each trial included in the NDA, we assessed its publication status, primary outcome(s) reported and their statistical significance, and conclusions. Seventy-eight percent (128/164) of efficacy trials contained in FDA reviews of NDAs were published. In a multivariate model, trials with favorable primary outcomes (OR = 4.7, 95% confidence interval [CI] 1.33–17.1, p = 0.018) and active controls (OR = 3.4, 95% CI 1.02–11.2, p = 0.047) were more likely to be published. Forty-one primary outcomes from the NDAs were omitted from the papers. Papers included 155 outcomes that were in the NDAs, 15 additional outcomes that favored the test drug, and two other neutral or unknown additional outcomes. Excluding outcomes with unknown significance, there were 43 outcomes in the NDAs that did not favor the NDA drug. Of these, 20 (47%) were not included in the papers. The statistical significance of five of the remaining 23 outcomes (22%) changed between the NDA and the paper, with four changing to favor the test drug in the paper (p = 0.38). Excluding unknowns, 99 conclusions were provided in both NDAs and papers, nine conclusions (9%) changed from the FDA review of the NDA to the paper, and all nine did so to favor the test drug (100%, 95% CI 72%–100%, p = 0.0039). Conclusions Many trials were still not published 5 y after FDA approval. Discrepancies between the trial information reviewed by the FDA and information found in published trials tended to lead to more favorable presentations of the NDA drugs in the publications. Thus, the information that is readily available in the scientific literature to health care professionals is incomplete and potentially biased.
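
The headline figure that all nine changed conclusions favored the test drug (p = 0.0039) is consistent with an exact binomial test against a 50/50 null; the abstract does not name the test, so the sketch below is an assumption about the calculation, not a statement of the authors' method.

```python
from scipy.stats import binomtest

# Nine of nine conclusion changes favored the test drug; under the null,
# a change is equally likely to favor or disfavor it (p = 0.5).
result = binomtest(k=9, n=9, p=0.5, alternative="two-sided")
print(f"p-value: {result.pvalue:.4f}")  # 0.0039, matching the abstract

# Two-sided Clopper-Pearson interval for the proportion 9/9. This prints
# roughly 0.66 to 1.00; the paper's 72%-100% likely reflects a different
# interval construction.
ci = result.proportion_ci(confidence_level=0.95)
print(f"95% CI: {ci.low:.2f} to {ci.high:.2f}")
```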


Annals of Internal Medicine | 1996

The Quality of Drug Studies Published in Symposium Proceedings

Mildred K. Cho; Lisa Bero

For physicians, pharmacists, pharmacologists, and others, the medical literature is a key source of information about prescription drugs [1, 2]. The medical literature on drugs includes articles from peer-reviewed journals, non-peer-reviewed (controlled circulation or throwaway) journals, and the published proceedings of symposia [3, 4]. Symposia are a rapidly growing and potentially major means of disseminating information about drugs. In the clinical journals with the highest circulation rates, the number of symposia published increased from 83 during 1972-1977 to 307 during 1984-1989. Approximately half of these symposia were on pharmaceutical topics [4]. Symposia can be valuable sources of information about drugs, but evidence suggests that they can also be used to market drugs and other interventions, especially if they are industry sponsored. Approximately 70% of symposia on pharmaceutical topics are sponsored by drug companies [3, 4]. Among symposia, sponsorship by a single drug company is associated with promotional characteristics that include a focus on a single drug, misleading titles, use of brand names, and lack of peer review [4]. Other studies indicate that clinical trials, including those published in symposia, are more likely to favor a new drug therapy if they are funded by the pharmaceutical industry than if they are not [5, 6]. Although physicians often report that the peer-reviewed literature is one of their main sources of drug information, industry sources of information can sometimes have a stronger influence on prescribing behavior [2]. Thus, if symposia sponsored by drug companies are a growing source of information about drugs for pharmacists and physicians, assessing the quality of the articles in these symposia is important. We compared the methodologic quality and relevance of drug studies published in symposia sponsored by single drug companies with those of studies that were published in symposia that had other sponsors or in the peer-reviewed parent journals. We also assessed whether a methods section was present, because such a section is necessary for evaluating quality. Finally, we tested whether drug industry support of research was associated with study outcome. Methods A symposium is a collection of papers published as a separate issue or as a special section in a regular issue of a medical journal [4]. We defined original clinical drug articles as articles that 1) appeared to present original data from studies done in humans [that is, articles that had at least one table or figure that was not acknowledged to have been reprinted from another source] and 2) did not specifically state that they were reviews [4]. Selection of Articles We identified original clinical drug articles that had a section describing the study methods, because such a section is needed to assess the quality of an article. Using a computer-generated list of random numbers from 1 to 625, we randomly selected symposia from 625 symposia that had been identified for a previous study [4]. We had data on the type of sponsorship of publication for each symposium. From each selected symposium, we randomly selected one original clinical drug article that had a methods section. We continued selecting symposia until we had enough articles (n = 127) according to the sample size estimates described below. We also calculated the proportion of articles in the selected symposia, overall and by type of sponsorship, that had methods sections. 
Quality Assessment We compared the quality of original clinical drug articles published in symposia sponsored by single drug companies with that of similar articles published in symposia that had other sponsors and in the peer-reviewed parent journals. Sample Size Estimates We estimated the sample size needed to test the association between the independent variable (type of sponsorship of publication) and the main outcome measure (methodologic quality score). For a three-group comparison, a minimum sample of 108 symposium articles was needed to detect a minimum effect size of 0.10 (on a scale of 0 to 1), with an α value of 0.05, a power of 0.80, and a standard deviation of quality scores of 0.18 based on previous results [7]. To compare articles from symposia sponsored by single pharmaceutical companies with articles from the peer-reviewed parent journals, we estimated that we would need 45 symposium articles and 45 journal articles; this estimate was the result of sample size calculations done using the variables described above. Because date of publication, journal, and therapeutic class of drug could have confounded the association between source of publication and quality [8-10], we matched each symposium article to an article from the parent journal by using these characteristics, as described previously [7]. Our sample of symposium articles contained 50 articles sponsored by single drug companies, but 5 articles published in Transplantation Proceedings were excluded from this analysis because no parent journal is associated with that publication. Instruments We used previously developed instruments to measure the methodologic quality of articles (defined as the minimization of systematic bias and the consistency of conclusions with results) and nonmethodologic indices of quality, such as clinical relevance and generalizability. Both instruments were valid and reliable and have been published elsewhere [7]. Four reviewers independently assessed each article: Two used the methodologic quality instrument, and two used the clinical relevance instrument. We derived methodologic quality and clinical relevance scores for each article by using a previously described scoring system [7]. Each score was between 0 (lowest quality) and 1 (highest quality) and was the average of the scores of the two reviewers. Two clinical pharmacologists with extensive research experience in the health sciences did the methodologic quality assessment. For the clinical relevance instrument, three pairs of reviewers with clinical experience in general internal medicine and research experience in the health sciences each assessed one third of the articles. Each pair of reviewers reviewed the articles in the same randomized order. For both instruments, reviewers were trained as described previously [7]. For the quality assessments, each reviewer worked independently, was blinded as to whether an article had been published in a symposium, and was given photocopies of articles from which author names, institution names, journal names, dates, and all other reference information had been obliterated. Reviewers were unaware of our hypotheses and the purpose of their reviewing, and they were paid for their work. None of the reviewers were known to us or knew of our previous work before the study.
We assessed the inter-rater reliability of quality scores by using the Kendall coefficient of concordance (W) with adjustment for tied ranks [11] and the intraclass correlation (R; treating both reviewers and articles as random effects [12]). Inter-rater reliability of quality scores was high (for methodologic quality scores: W = 0.85, R = 0.74 [95% CI, 0.67 to 0.80]; for clinical relevance scores: W = 0.77, R = 0.56 [CI, 0.44 to 0.65]). Drug Company Support and Study Outcome For each article, one of us determined whether a drug company had supported the research and whether the article 1) reported an outcome favorable to the drug of interest, 2) did not report an outcome favorable to the drug of interest, or 3) did not test a hypothesis. The drug of interest (as defined from the perspective of the authors, according to Gøtzsche [13]) was the newest drug if two or more drugs were studied. We defined research as having had drug company support if the article that reported the research acknowledged either that a drug company had provided funding or drugs or that any of the authors were employed by a drug company. We determined drug company support solely on the basis of information in the paper. If an article did not test a hypothesis, it was excluded from this analysis. We classified the remaining articles as favorable or unfavorable using Gøtzsche's definitions [13]. An article was favorable if the drug that seemed to be of primary interest to the authors had the same effect as the comparison drug or drugs but with less pronounced side effects, had a better effect without more pronounced side effects, or was preferred more often by patients when the effect and side-effect evaluations were combined. All other articles were considered not favorable. The conclusions of the authors were taken at face value, even if they conflicted with the study results. To test inter-rater reliability, the other author independently assessed a subset of the articles (n = 90). Agreement in classifying articles as favorable or not favorable was 85%. Statistical Analyses Because methodologic quality and relevance scores were distributed normally (Shapiro-Wilk test), we analyzed differences between groups (type of sponsorship of publication) by using parametric one-way analysis of variance followed by the Tukey test for multiple comparisons or two-way analysis of variance (total error rate, 0.05). We compared matched groups (symposium articles and peer-reviewed parent journal articles) by using the paired t-test (two-tailed α = 0.05). To analyze categorical data on the outcome of studies, we tested for differences in proportions between groups by using the chi-square statistic. For tests of significance, we used an α value of 0.05. All hypothesis tests were two-sided. Results Presence of a Method Section To obtain 127 original clinical drug articles for quality assessment, we had to select 213 symposia containing a total of 5041 articles. The proportions of articles that reported original data but contained no methods sections were 4% overall (195 of 5041), 10% (108 of 1064) in the symposia sponsored by single drug companies,
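
The inter-rater reliability statistic quoted above, Kendall's coefficient of concordance with a tie adjustment, can be computed directly from the reviewers' score vectors. A minimal sketch follows; the scores are invented for illustration, and the paper's data are not reproduced here.

```python
import numpy as np
from scipy.stats import rankdata

def kendalls_w(ratings: np.ndarray) -> float:
    """Kendall's W with the standard correction for tied ranks.

    `ratings` has shape (n_raters, n_items); each row holds one reviewer's
    quality scores for all articles.
    """
    m, n = ratings.shape
    # Rank each reviewer's scores, giving tied scores their average rank.
    ranks = np.apply_along_axis(rankdata, 1, ratings)
    # Squared deviation of each article's rank sum from the mean rank sum.
    rank_sums = ranks.sum(axis=0)
    s = ((rank_sums - rank_sums.mean()) ** 2).sum()
    # Tie correction: for each reviewer, sum t^3 - t over groups of ties.
    ties = 0.0
    for row in ranks:
        _, counts = np.unique(row, return_counts=True)
        ties += ((counts ** 3) - counts).sum()
    return 12 * s / (m ** 2 * (n ** 3 - n) - m * ties)

# Toy example: two reviewers scoring five articles on the 0-to-1 scale.
scores = np.array([[0.70, 0.55, 0.90, 0.55, 0.30],
                   [0.65, 0.50, 0.85, 0.60, 0.35]])
print(f"Kendall's W: {kendalls_w(scores):.2f}")
```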


The New England Journal of Medicine | 2009

Outcome Reporting in Industry-Sponsored Trials of Gabapentin for Off-Label Use

S. Swaroop Vedula; Lisa Bero; Roberta W. Scherer; Kay Dickersin

BACKGROUND There is good evidence of selective outcome reporting in published reports of randomized trials. METHODS We examined reporting practices for trials of gabapentin funded by Pfizer and Warner-Lambert's subsidiary, Parke-Davis (hereafter referred to as Pfizer and Parke-Davis) for off-label indications (prophylaxis against migraine and treatment of bipolar disorders, neuropathic pain, and nociceptive pain), comparing internal company documents with published reports. RESULTS We identified 20 clinical trials for which internal documents were available from Pfizer and Parke-Davis; of these trials, 12 were reported in publications. For 8 of the 12 reported trials, the primary outcome defined in the published report differed from that described in the protocol. Sources of disagreement included the introduction of a new primary outcome (in the case of 6 trials), failure to distinguish between primary and secondary outcomes (2 trials), relegation of primary outcomes to secondary outcomes (2 trials), and failure to report one or more protocol-defined primary outcomes (5 trials). Trials that presented findings that were not significant (P ≥ 0.05) for the protocol-defined primary outcome in the internal documents either were not reported in full or were reported with a changed primary outcome. The primary outcome was changed in the case of 5 of 8 published trials for which statistically significant differences favoring gabapentin were reported. Of the 21 primary outcomes described in the protocols of the published trials, 6 were not reported at all and 4 were reported as secondary outcomes. Of 28 primary outcomes described in the published reports, 12 were newly introduced. CONCLUSIONS We identified selective outcome reporting for trials of off-label use of gabapentin. This practice threatens the validity of evidence for the effectiveness of off-label interventions.


Quality & Safety in Health Care | 2003

Systematic reviews of the effectiveness of quality improvement strategies and programmes

Jeremy Grimshaw; L. M. McAuley; Lisa Bero; Roberto Grilli; Andrew D. Oxman; Craig Ramsay; L. Vale; Merrick Zwarenstein

Systematic reviews provide the best evidence on the effectiveness of healthcare interventions including quality improvement strategies. The methods of systematic review of individual patient randomised trials of healthcare interventions are well developed. We discuss methodological and practice issues that need to be considered when undertaking systematic reviews of quality improvement strategies including developing a review protocol, identifying and screening evidence sources, quality assessment and data abstraction, analytical methods, reporting systematic reviews, and appraising systematic reviews. This paper builds on our experiences within the Cochrane Effective Practice and Organisation of Care (EPOC) review group.


PLOS Medicine | 2007

Factors Associated with Findings of Published Trials of Drug-Drug Comparisons: Why Some Statins Appear More Efficacious than Others

Lisa Bero; Peter Bacchetti; Kirby Lee

Background Published pharmaceutical industry–sponsored trials are more likely than non-industry-sponsored trials to report results and conclusions that favor drug over placebo. Little is known about potential biases in drug–drug comparisons. This study examined associations between research funding source, study design characteristics aimed at reducing bias, and other factors that potentially influence results and conclusions in randomized controlled trials (RCTs) of statin–drug comparisons. Methods and Findings This is a cross-sectional study of 192 published RCTs comparing a statin drug to another statin drug or non-statin drug. Data on concealment of allocation, selection bias, blinding, sample size, disclosed funding source, financial ties of authors, results for primary outcomes, and author conclusions were extracted by two coders (weighted kappa 0.80 to 0.97). Univariate and multivariate logistic regression identified associations between independent variables and favorable results and conclusions. Of the RCTs, 50% (95/192) were funded by industry, and 37% (70/192) did not disclose any funding source. Looking at the totality of available evidence, we found that almost all studies (98%, 189/192) used only surrogate outcome measures. Moreover, study design weaknesses common to published statin–drug comparisons included inadequate blinding, lack of concealment of allocation, poor follow-up, and lack of intention-to-treat analyses. In multivariate analysis of the full sample, trials with adequate blinding were less likely to report results favoring the test drug, and sample size was associated with favorable conclusions when controlling for other factors. In multivariate analysis of industry-funded RCTs, funding from the test drug company was associated with results (odds ratio = 20.16 [95% confidence interval 4.37–92.98], p < 0.001) and conclusions (odds ratio = 34.55 [95% confidence interval 7.09–168.4], p < 0.001) that favor the test drug when controlling for other factors. Studies with adequate blinding were less likely to report statistically significant results favoring the test drug. Conclusions RCTs of head-to-head comparisons of statins with other drugs are more likely to report results and conclusions favoring the sponsor's product compared to the comparator drug. This bias in drug–drug comparison trials should be considered when making decisions regarding drug choice.
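
The adjusted odds ratios and confidence intervals quoted for funding source are the kind of output a multivariate logistic regression produces. The sketch below shows the mechanics only, with randomly generated stand-in data and illustrative variable names; it is not the study's actual coding or dataset.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Stand-in data: one row per trial, binary outcome and binary covariates.
rng = np.random.default_rng(0)
trials = pd.DataFrame({
    "favors_test_drug": rng.integers(0, 2, size=200),
    "funded_by_test_drug_company": rng.integers(0, 2, size=200),
    "adequate_blinding": rng.integers(0, 2, size=200),
    "adequate_concealment": rng.integers(0, 2, size=200),
})

X = sm.add_constant(trials[["funded_by_test_drug_company",
                            "adequate_blinding",
                            "adequate_concealment"]])
model = sm.Logit(trials["favors_test_drug"], X).fit(disp=False)

# Exponentiated coefficients are adjusted odds ratios; exponentiated
# confidence limits give the 95% CIs reported alongside them.
summary = pd.concat([np.exp(model.params).rename("OR"),
                     np.exp(model.conf_int())], axis=1)
print(summary)
```

With random data the odds ratios hover near 1; the point is only how adjusted ORs and their CIs are read off a fitted model while controlling for the other covariates.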


International Journal of Technology Assessment in Health Care | 1996

Influences on the Quality of Published Drug Studies

Lisa Bero; Drummond Rennie

To practice evidence-based medicine, physicians need data on the clinical effectiveness, toxicity, convenience, and cost of new drugs compared with available alternatives. We give examples of published drug studies that are defective, sometimes because pharmaceutical industry funding has affected their content and quality. We make recommendations on how to avoid these defects.


Annals of Internal Medicine | 1997

How Consumers and Policymakers Can Use Systematic Reviews for Decision Making

Lisa Bero; Alejandro R. Jadad

A healthy pregnant woman is deciding whether she should have the ultrasonography recommended by her physician. Members of a city council are deciding whether to prohibit tobacco smoking in local restaurants and bars. Decisions such as these are made daily by health care consumers who must determine whether to have a diagnostic procedure or select one of several treatment alternatives and by policymakers who must choose the types of health care to provide. In this article, we discuss how systematic reviews can help during the decision-making process. In our discussion, consumers include both patients and healthy persons, their family members, and their advocates. Policymakers include decision makers at the national, regional, local, and institutional levels. For example, administrators, local health authorities, purchasers of health care, and regulatory bodies are considered policymakers. Our discussion concentrates on the factors that influence decisions common to both consumers and policymakers. However, one fundamental difference between the decision-making process of policymakers and that of patients and healthy persons is the tendency of policymakers to consider the perspective of the general population, whereas patients or healthy persons are obviously more likely to consider their own perspective. When making decisions, policymakers consider the burden of suffering, that is, the morbidity and mortality associated with a condition if a person does not receive treatment and the prevalence of a condition in the general population [1]. If the burden of suffering is high, then policymakers may recommend action. Consumers, in contrast, are understandably more likely to consider personal suffering and benefits when making a decision. What may be best for the group may not necessarily be best for the individual [2]. Current Use of Systematic Reviews by Consumers and Policymakers Our search for evaluations of the use of systematic reviews identified little published evidence to support the opinion that systematic reviews currently influence the medical or health care decisions made by the general public and by policymakers (details of our search strategy are available by contacting Dr. Bero at the address listed at the end of text). We found only one study [3] in which systematic reviews influenced hypothetical decisions about reimbursement for mammography screening and cardiac rehabilitation and one case report [4] in which one of the authors conducted a systematic review that persuaded a physician to change his recommendations. The lack of research on the impact that systematic reviews have on decision making may be the result of a lack of interest by the research community or the complexity of studying decision-making processes. In our search, we attempted to identify studies that assessed the direct impact of systematic reviews on decisions made by policymakers and consumers. Although we only found two evaluations, the literature does offer numerous examples of how systematic reviews have been used to gather information for policymaking. For example, Light and Pillemer [5] describe how systematic reviews have been commissioned by policymakers to answer their questions. Guidelines on clinical practice (such as those from the Agency for Health Care Policy and Research and from the American College of Physicians) are often based on systematic reviews. In addition, technology assessments (such as those conducted by the U.S. 
Office of Technology Assessment) often include a systematic review of the literature on clinical efficacy as part of the assessment. We have learned that systematic reviews are more frequently cited than original research articles in coverage by the news media of research on the effects of environmental tobacco smoke; this fact suggests that systematic reviews might be indirectly influencing policy decisions as a result of such coverage [6]. The use of systematic reviews in policy development reinforces the need to rigorously evaluate their direct impact on policy decisions. Several factors may explain why minimal data are available on the impact of systematic reviews on decisions made by policymakers and consumers. Decision makers consider the source, format, perceived relevance, and other aspects of information when making decisions (Table 1). The tendency of decision makers to use anecdotal aspects of the most recent evidence or personal experience rather than evaluate evidence broadly and systematically undermines the use of systematic reviews [7]. In addition, the role of information depends on its interaction with other components of the decision-making process (including the values, preferences, and beliefs of the decision maker) and the context in which the decision is being made (Table 1) [8]. Furthermore, although the methods for conducting systematic reviews have been available to the medical community for years, these reviews have only recently been applied to clinical care [12-14]. For example, a landmark article summarizing the state of the science of systematic reviewing was published in the medical literature in 1987 [15]. In addition, the Cochrane Collaboration, an international organization whose goal is to design, conduct, and disseminate systematic reviews in medicine, was founded in 1992 [16]. The role of information as only one aspect of the complex decision-making process is illustrated by our hypothetical examples. The healthy pregnant woman who was deciding whether to have ultrasonography should be interested to learn that four systematic reviews that assessed routine ultrasonography in early pregnancy have found the procedure to be safe and effective for detecting fetal malformations [17-20]. However, she will probably weigh this information against her perceived risk for having a baby with a malformation, the amount of time she must take from work, the inconvenience associated with having ultrasonography (for example, out-of-pocket expenses), and the experience of her sister or next-door neighbor [21]. In contrast, the members of a city council, while making their decision on whether to restrict tobacco smoking, can be informed by three meta-analyses of the effects of passive smoke on heart disease [22-24]. However, council members are also likely to consider the opinions of their constituents and the pressures exerted by lobbyists for the tobacco industry and advocacy groups. The Rationale for Using Systematic Reviews Although information in any format plays only a limited (but potentially significant) role in the decision-making process, we have reason to believe that systematic reviews can have a particularly important influence on the decisions made by both consumers and policymakers. A properly conducted systematic review can provide an objective summary of large amounts of data. 
For consumers and policymakers who are interested in the bottom line of evidence, systematic reviews can help reconcile conflicting results of research. Systematic reviews can form the basis for other integrative articles produced by policymakers, such as risk assessments, practice guidelines, economic analyses, and decision analyses [25]. Systematic reviews can aid the process of consensus development by curtailing the criticism that consensus development tends to occur in the absence of an objective framework for collecting and reviewing evidence [1]. Systematic reviews also typically identify gaps in knowledge, thereby helping consumers and policymakers decide not to proceed in the absence of evidence or encouraging them to address the gaps in medical research. Savulescu and colleagues [26] have recommended that medical ethics committees require researchers to conduct systematic reviews of existing relevant research to ensure the need for a new study. As is true for the results of any research, however, inappropriate use of systematic reviews can result in more harm than good. Some of the risks of systematic reviews can be illustrated with our hypothetical example of the healthy pregnant woman. In the United Kingdom, leaflets that are targeted to pregnant women and health care professionals offer informed choice by reviewing the value of routine ultrasonography. The leaflets summarize systematic reviews of the best available evidence on the efficacy and safety of routine ultrasonography in pregnant women. In a case study on the reactions of women and health care professionals to the leaflets [27], women reacted with shock at the contents of the leaflets but were glad to be presented with the advantages and disadvantages of routine scanning and often requested additional information. Midwives believed that the leaflets would help women seek better health care, whereas ultrasonographers were concerned that the leaflets would provoke anxiety among women and lessen the use of routine ultrasonography [27]. The conflicting reactions of the midwives and ultrasonographers could lead to confrontation and lack of trust. Additional harm could result from misrepresentation of the conclusions of systematic reviews to promote the self-interests of organizations or to support political positions. For example, a systematic review [24] of the cardiac effects of environmental tobacco smoke was presented without referring to other literature on the effects of passive smoking and was misconstrued as failing to conclude that environmental tobacco smoke is harmful [28]. Misinterpretation of systematic reviews in the lay literature can affect the decisions made by government officials, including our hypothetical example of a city council regulating tobacco smoking. Although the potential benefits of systematic reviews seem to outweigh the harms, sustained efforts are needed to increase our understanding of each stage involved in the use of systematic reviews as a decision-making tool (Table 2). In the following s


BMJ | 2007

Financial ties and concordance between results and conclusions in meta-analyses: retrospective cohort study

Veronica Yank; Drummond Rennie; Lisa Bero

Objective To determine whether financial ties to one drug company are associated with favourable results or conclusions in meta-analyses on antihypertensive drugs. Design Retrospective cohort study. Setting Meta-analyses published up to December 2004 that were not duplicates and evaluated the effects of antihypertensive drugs compared with any comparator on clinical end points in adults. Financial ties were categorised as one drug company compared with all others. Main outcome measures The main outcomes were the results and conclusions of meta-analyses, with both outcomes separately categorised as being favourable or not favourable towards the study drug. We also collected data on characteristics of meta-analyses that the literature suggested might be associated with favourable results or conclusions. Results 124 meta-analyses were included in the study, 49 (40%) of which had financial ties to one drug company. On univariate logistic regression analyses, meta-analyses of better methodological quality were more likely to have favourable results (odds ratio 1.16, 95% confidence interval 1.07 to 1.27). Although financial ties to one drug company were not associated with favourable results, such ties constituted the only characteristic significantly associated with favourable conclusions (4.09, 1.30 to 12.83). When controlling for other characteristics of meta-analyses in multiple logistic regression analyses, meta-analyses that had financial ties to one drug company remained more likely to report favourable conclusions (5.11, 1.54 to 16.92). Conclusion Meta-analyses on antihypertensive drugs and with financial ties to one drug company are not associated with favourable results but are associated with favourable conclusions.
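
For a binary characteristic such as "financial ties to one drug company", the univariate odds ratio is the cross-product ratio of a 2x2 table, which a univariate logistic regression with a single binary predictor reproduces. The counts below are invented for illustration and are not the study's data.

```python
import math

# Hypothetical 2x2 table (invented counts):
#                         favourable conclusion   not favourable
# ties to one company              a = 30              b = 19
# other or no such ties            c = 40              d = 35
a, b, c, d = 30, 19, 40, 35

odds_ratio = (a * d) / (b * c)

# Woolf (log) 95% confidence interval for the odds ratio.
se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
lower = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
upper = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)

print(f"OR = {odds_ratio:.2f}, 95% CI {lower:.2f} to {upper:.2f}")
```

In the paper itself, multiple logistic regression was then used to check whether the association with favourable conclusions held after controlling for other characteristics of the meta-analyses.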

Collaboration


Dive into Lisa Bero's collaborations.

Top Co-Authors

Ruth E. Malone

University of California

Suzanne Hill

World Health Organization
