
Publication


Featured research published by Kathryn M McDonald.


Journal of the American College of Cardiology | 2001

The prognostic value of troponin in patients with non-ST elevation acute coronary syndromes: a meta-analysis

Paul A. Heidenreich; Thomas Alloggiamento; Kathryn M McDonald; Alan S. Go; Mark A. Hlatky; Kathryn Melsop

OBJECTIVES This study was designed to compare the prognostic value of an abnormal troponin level derived from studies of patients with non-ST elevation acute coronary syndromes (ACS).

BACKGROUND Risk stratification for patients with suspected ACS is important for determining the need for hospitalization and the intensity of treatment.

METHODS We identified clinical trials and cohort studies of consecutive patients with suspected ACS without ST elevation from 1966 through 1999. We excluded studies limited to patients with acute myocardial infarction and studies not reporting mortality or troponin results.

RESULTS Seven clinical trials and 19 cohort studies reported data for 5,360 patients with a troponin T test and 6,603 with a troponin I test. Patients with a positive troponin (I or T) had significantly higher mortality than those with a negative test (5.2% vs. 1.6%, odds ratio [OR] 3.1). Cohort studies demonstrated a greater difference in mortality between patients with a positive versus a negative troponin I (8.4% vs. 0.7%, OR 8.5) than clinical trials did (4.8% if positive, 2.1% if negative, OR 2.6, p = 0.01). The prognostic value of a positive troponin T was also slightly greater for cohort studies (11.6% mortality if positive, 1.7% if negative, OR 5.1) than for clinical trials (3.8% if positive, 1.3% if negative, OR 3.0, p = 0.2).

CONCLUSIONS In patients with non-ST elevation ACS, the short-term odds of death are increased three- to eightfold for patients with an abnormal troponin test. Data from clinical trials suggest a lower prognostic value for troponin than do data from cohort studies.
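As an illustration of the arithmetic behind the odds ratios reported above, here is a minimal Python sketch that computes an odds ratio and a Wald 95% confidence interval from a 2×2 table. The counts are hypothetical, chosen only to mirror the reported 5.2% vs. 1.6% mortality; they are not the study's data.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI from a 2x2 table.
    a = deaths among troponin-positive, b = survivors among troponin-positive,
    c = deaths among troponin-negative, d = survivors among troponin-negative.
    Assumes all four cells are nonzero."""
    or_ = (a * d) / (b * c)
    # Standard error of log(OR) is sqrt of the summed reciprocal cell counts
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, (lo, hi)

# Hypothetical counts: 52 deaths / 1000 troponin-positive patients,
# 16 deaths / 1000 troponin-negative (mirroring 5.2% vs. 1.6% mortality)
or_, (lo, hi) = odds_ratio_ci(52, 948, 16, 984)  # OR ≈ 3.37
```

Note the pooled OR of 3.1 quoted in the abstract comes from combining many studies, so it differs from any single-table calculation like this one.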


Medical Decision Making | 2012

Model Transparency and Validation: A Report of the ISPOR-SMDM Modeling Good Research Practices Task Force-7

David M. Eddy; William Hollingworth; J. Jaime Caro; Joel Tsevat; Kathryn M McDonald; John Wong

Trust and confidence are critical to the success of health care models. There are two main methods for building them: transparency (people can see how the model is built) and validation (showing how well the model reproduces reality). This report describes recommendations for achieving transparency and validation, developed by a task force appointed by the International Society for Pharmacoeconomics and Outcomes Research (ISPOR) and the Society for Medical Decision Making (SMDM); the recommendations were developed iteratively by the authors. A nontechnical description should be made available to anyone and should cover model type and intended applications; funding sources; structure; inputs, outputs, and the other components that determine function, along with their relationships; data sources; validation methods and results; and limitations. Technical documentation, written in sufficient detail to enable a reader with the necessary expertise to evaluate the model and potentially reproduce it, should be made available openly or under agreements that protect intellectual property, at the discretion of the modelers. Validation involves face validity (experts evaluate model structure, data sources, assumptions, and results), verification or internal validity (checking the accuracy of coding), cross validity (comparing results with other models analyzing the same problem), external validity (comparing model results with real-world results), and predictive validity (comparing model results with prospectively observed events). The last two are the strongest forms of validation. Each section of the report contains specific recommendations, iterated among the authors and the wider modeling task force.


Annals of Internal Medicine | 2007

Systematic Review: The Comparative Effectiveness of Percutaneous Coronary Interventions and Coronary Artery Bypass Graft Surgery

Dena M. Bravata; Allison Gienger; Kathryn M McDonald; Vandana Sundaram; Marco V Perez; Robin Varghese; John R Kapoor; Reza Ardehali; Douglas K Owens; Mark A. Hlatky

Context The relative benefits and harms of coronary artery bypass graft surgery (CABG) versus percutaneous coronary intervention (PCI) are sometimes unclear.

Contribution This systematic review of 23 randomized trials found that survival at 10 years was similar for CABG and PCI, even among diabetic patients. Procedural strokes and angina relief were more common after CABG (risk difference, 0.6% and about 5% to 8%, respectively), whereas repeated revascularization procedures were more common after PCI (risk difference, 24% at 1 year).

Caution Only 1 small trial used drug-eluting stents. Few patients with extensive coronary disease or poor ventricular function were enrolled.

—The Editors

Coronary artery bypass graft (CABG) surgery and catheter-based percutaneous coronary intervention (PCI), with or without coronary stents, are alternative approaches to mechanical coronary revascularization. These 2 coronary revascularization techniques are among the most common major medical procedures performed in North America and Europe: In 2005, 261 000 CABG procedures and 645 000 PCI procedures were performed in the United States alone (1). However, the comparative effectiveness of CABG and PCI remains poorly understood for patients in whom both procedures are technically feasible and coronary revascularization is clinically indicated. In patients with left main or triple-vessel coronary artery disease with reduced left ventricular function, CABG is generally preferred because randomized, controlled trials (RCTs) have shown that it improves survival compared with medical therapy (2, 3). In patients with most forms of single-vessel disease, PCI is generally the preferred form of coronary revascularization (4), in light of its lower clinical risk and the evidence that PCI reduces angina and myocardial ischemia in this subset of patients (5).
Most RCTs comparing CABG and PCI have been conducted in populations with coronary artery disease between these extremes, namely patients with single-vessel, proximal left anterior descending disease; most forms of double-vessel disease; or less extensive forms of triple-vessel disease. We sought to evaluate the evidence from RCTs on the comparative effectiveness of PCI and CABG. We included trials using balloon angioplasty or coronary stents because quantitative reviews have shown no differences in mortality or myocardial infarction between these PCI techniques (6, 7). We also included trials using standard or minimally invasive CABG or both procedures (8, 9). We sought to document differences between PCI and CABG in survival, cardiovascular complications (such as stroke and myocardial infarction), and freedom from angina. Finally, we reviewed selected observational studies to assess the generalizability of the RCTs.

Methods

Data Sources
We searched the MEDLINE, EMBASE, and Cochrane databases for studies published between January 1966 and August 2006 by using such terms as angioplasty, coronary, and coronary artery bypass surgery, as reported in detail elsewhere (10). We also sought additional studies by reviewing the reference lists of included articles, conference abstracts, and the bibliographies of expert advisors. We did not limit the searches to the English language.

Study Selection
We sought RCTs that compared health outcomes of PCI and CABG. We excluded trials that compared PCI alone or CABG alone with medical therapy, those that compared 2 forms of PCI, and those that compared 2 forms of CABG. The outcomes of interest were survival, myocardial infarction, stroke, angina, and use of additional revascularization procedures. Two investigators independently reviewed titles, abstracts, and the full text as needed to determine whether studies met inclusion criteria. Conflicts between reviewers were resolved through re-review and discussion.
We did not include results published solely in abstract form.

Data Extraction and Quality Assessment
Two authors independently abstracted data on study design; setting; population characteristics (sex, age, race/ethnicity, comorbid conditions, and coronary anatomy); eligibility and exclusion criteria; procedures performed; numbers of patients screened, eligible, enrolled, and lost to follow-up; method of outcome assessment; and results for each outcome. We assessed the quality of included trials by using predefined criteria and graded their quality as A, B, or C by using methods described in detail elsewhere (10). In brief, a grade of A indicates a high-quality trial that clearly described the population, setting, interventions, and comparison groups; randomly allocated patients to alternative treatments; had low dropout rates; and reported intention-to-treat analysis of outcomes. A grade of B indicates a randomized trial with incomplete information about methods that might mask important limitations. A grade of C indicates that the trial had evident flaws, such as improper randomization, that could introduce significant bias.

Data Synthesis and Analysis
We used random-effects models to compute weighted mean rates and SEs for each outcome. We computed summary risk differences and odds ratios between PCI and CABG and the 95% CI for each outcome of interest at annual intervals. Because the results did not differ materially when risk differences and odds ratios (10) were used and the low rate of several outcomes (for example, procedural mortality) made the risk difference a more stable outcome metric (11, 12), we report here only the risk differences. We assessed heterogeneity of effects by using chi-square and I² statistics (13). When effects were heterogeneous (I² > 50%), we explored the effects of individual studies on summary effects by removing each study individually.
We assessed the possibility of publication bias by visual inspection of funnel plots and calculated the number of missing studies required to change a statistically significant summary effect to not statistically significant (11). We performed analyses by using Comprehensive Meta-Analysis software, version 2.0 (Biostat, Englewood, New Jersey).

Inclusion of Observational Studies
We also searched for observational data to evaluate the generalizability of the RCT results, as reported in detail elsewhere (10). In brief, we included observational studies from clinical or administrative databases that included at least 1000 recipients of each revascularization procedure and provided sufficient information about the patient populations (such as demographic characteristics, preprocedure coronary anatomy, and comorbid conditions) and procedures performed (such as balloon angioplasty vs. bare-metal stents vs. drug-eluting stents).

Role of the Funding Source
This project was supported by the Agency for Healthcare Research and Quality. Representatives of the funding agency reviewed and commented on the study protocol and drafts of the manuscript, but the authors had final responsibility for the design, conduct, analysis, and reporting of the study.

Results
We identified 1695 potentially relevant articles, of which 204 merited full-text review (Appendix Figure). A total of 113 articles reporting on 23 unique RCTs met inclusion criteria (Table 1 [14–126]). These trials enrolled a total of 9963 patients, of whom 5019 were randomly assigned to PCI and 4944 to CABG. Most trials were conducted in Europe, the United Kingdom, or both locations; only 3 trials were performed in the United States. The early studies (patient entry from 1987 to 1993) used balloon angioplasty as the PCI technique, and the later studies (patient entry from 1994 to 2002) used stents as the PCI technique. Only 1 small trial of PCI versus CABG used drug-eluting stents (116).
Nine trials limited entry to patients with single-vessel disease of the proximal left anterior descending artery, whereas the remaining 14 trials enrolled patients with multivessel disease, either predominantly (3 trials) or exclusively (11 trials).

Appendix Figure. Study flow diagram. CABG = coronary artery bypass grafting; CAD = coronary artery disease; PCI = percutaneous coronary intervention; RCT = randomized, controlled trial.

Table 1. Overview of Randomized, Controlled Trials

The quality of 21 trials was graded as A, and 1 trial (117) was graded as B. One trial (116) was graded as C because randomization may not have been properly executed (details are available elsewhere [10]). We performed sensitivity analyses by removing these studies from the analysis, and our summary results did not change statistically significantly. The average age of the trial participants was 61 years, 27% were women, and most were of European ancestry. Roughly 20% had diabetes, half had hypertension, and half had hyperlipidemia. Whereas approximately 40% of patients had a previous myocardial infarction, few had heart failure or poor left ventricular function. Among studies that enrolled patients with multivessel coronary disease, most had double-vessel rather than triple-vessel disease. Revascularization procedures were performed by using standard methods for the time the trial was conducted (Table 1). Among patients with multivessel disease, more grafts were placed during CABG than vessels were dilated during PCI. Among patients assigned to PCI, stents were commonly used in the recent studies, but in the earlier trials, balloon angioplasty was standard. Among patients assigned to CABG, arterial grafting with the left internal mammary artery was frequently done, especially in more recent trials. Some studies used minimally invasive, direct coronary artery bypass and off-pump operations to perform CABG in patients with single-vessel left anterior descending disease (Table 1).
Short-Term and Procedural Outcomes
Survival (within 30 days of the procedure) was high for both procedures: 98.9% for PCI and 98.2% for CABG. When data from all trials were combined, the survival difference between PCI and CABG was small and not statistically significant (0.2% [95% CI, −0.3% to 0.6%]) (Figure 1
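The pooling approach described in the Methods (random-effects summary risk differences with I² heterogeneity) can be sketched in a few lines of Python using the standard DerSimonian-Laird estimator; no commercial meta-analysis package is needed for the core arithmetic. The per-trial inputs below are hypothetical, for illustration only.

```python
import math

def pool_risk_differences(rds, ses):
    """DerSimonian-Laird random-effects pooling of per-trial risk
    differences (rds) with their standard errors (ses).
    Returns pooled RD, its Wald 95% CI, and the I^2 statistic (%)."""
    w = [1 / se**2 for se in ses]                    # fixed-effect weights
    rd_fe = sum(wi * ri for wi, ri in zip(w, rds)) / sum(w)
    q = sum(wi * (ri - rd_fe)**2 for wi, ri in zip(w, rds))  # Cochran's Q
    df = len(rds) - 1
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    c = sum(w) - sum(wi**2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)                    # between-study variance
    w_re = [1 / (se**2 + tau2) for se in ses]        # random-effects weights
    rd = sum(wi * ri for wi, ri in zip(w_re, rds)) / sum(w_re)
    se_rd = math.sqrt(1 / sum(w_re))
    return rd, (rd - 1.96 * se_rd, rd + 1.96 * se_rd), i2

# Three hypothetical trials reporting risk differences with standard errors
rd, (lo, hi), i2 = pool_risk_differences([0.20, 0.25, 0.30], [0.05, 0.05, 0.05])
```

When I² exceeds 50%, as the Methods note, the pooled estimate should be probed by removing studies one at a time.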


Medical Care | 2006

Quality improvement strategies for hypertension management: a systematic review.

Judith M. E. Walsh; Kathryn M McDonald; Kaveh G. Shojania; Vandana Sundaram; Smita Nayak; Robyn Lewis; Douglas K Owens; Mary K. Goldstein

Background: Care remains suboptimal for many patients with hypertension.

Purpose: The purpose of this study was to assess the effectiveness of quality improvement (QI) strategies in lowering blood pressure.

Data Sources: MEDLINE, Cochrane databases, and article bibliographies were searched for this study.

Study Selection: Trials, controlled before–after studies, and interrupted time series evaluating QI interventions targeting hypertension control and reporting blood pressure outcomes were studied.

Data Extraction: Two reviewers abstracted data and classified QI strategies into categories: provider education, provider reminders, facilitated relay of clinical information, patient education, self-management, patient reminders, audit and feedback, team change, or financial incentives.

Data Synthesis: Forty-four articles reporting 57 comparisons underwent quantitative analysis. Patients in the intervention groups experienced median reductions in systolic blood pressure (SBP) and diastolic blood pressure (DBP) that were 4.5 mm Hg (interquartile range [IQR]: 1.5 to 11.0) and 2.1 mm Hg (IQR: −0.2 to 5.0) greater than those observed for control patients. Median increases in the percentage of individuals achieving target goals for SBP and DBP were 16.2% (IQR: 10.3 to 32.2) and 6.0% (IQR: 1.5 to 17.5). Interventions that included team change as a QI strategy were associated with the largest reductions in blood pressure outcomes. All team change studies included assignment of some responsibilities to a health professional other than the patient's physician.

Limitations: Not all QI strategies have been assessed equally, which limits the power to compare differences in effects between strategies.

Conclusion: QI strategies are associated with improved hypertension control. A focus on hypertension by someone in addition to the patient's physician was associated with substantial improvement.
Future research should examine the contributions of individual QI strategies and their relative costs.
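The summary statistics quoted above (medians with interquartile ranges across comparisons) can be reproduced for any set of effect sizes with Python's standard library; the blood pressure reductions below are made-up values for illustration only, not the review's data.

```python
import statistics

def median_iqr(effects):
    """Median and interquartile range (25th to 75th percentile)
    of a list of per-comparison effect sizes."""
    q1, _, q3 = statistics.quantiles(effects, n=4, method="inclusive")
    return statistics.median(effects), (q1, q3)

# Hypothetical net SBP reductions (mm Hg) across five comparisons
sbp_reductions = [1.5, 3.0, 4.5, 7.0, 11.0]
med, (q1, q3) = median_iqr(sbp_reductions)  # median 4.5, IQR 3.0 to 7.0
```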


Annals of Internal Medicine | 2011

Systematic Review: Benefits and Harms of In-Hospital Use of Recombinant Factor VIIa for Off-Label Indications

Veronica Yank; C Vaughan Tuohy; Aaron C Logan; Dena M. Bravata; Kristan Staudenmayer; Robin Eisenhut; Vandana Sundaram; Donal McMahon; Ingram Olkin; Kathryn M McDonald; Douglas K Owens; Randall S. Stafford

BACKGROUND Recombinant factor VIIa (rFVIIa), a hemostatic agent approved for hemophilia, is increasingly used for off-label indications.

PURPOSE To evaluate the benefits and harms of rFVIIa use for 5 off-label, in-hospital indications: intracranial hemorrhage, cardiac surgery, trauma, liver transplantation, and prostatectomy.

DATA SOURCES Ten databases (including PubMed, EMBASE, and the Cochrane Library) queried from inception through December 2010. Articles published in English were analyzed.

STUDY SELECTION Two reviewers independently screened titles and abstracts to identify clinical use of rFVIIa for the selected indications and identified all randomized, controlled trials (RCTs) and observational studies for full-text review.

DATA EXTRACTION Two reviewers independently assessed study characteristics and rated study quality and indication-wide strength of evidence.

DATA SYNTHESIS 16 RCTs, 26 comparative observational studies, and 22 noncomparative observational studies met inclusion criteria. Identified comparators were limited to placebo (RCTs) or usual care (observational studies). For intracranial hemorrhage, mortality was not improved with rFVIIa use across a range of doses. Arterial thromboembolism was increased with medium-dose rFVIIa use (risk difference [RD], 0.03 [95% CI, 0.01 to 0.06]) and high-dose rFVIIa use (RD, 0.06 [CI, 0.01 to 0.11]). For adult cardiac surgery, there was no mortality difference, but there was an increased risk for thromboembolism (RD, 0.05 [CI, 0.01 to 0.10]) with rFVIIa. For body trauma, there were no differences in mortality or thromboembolism, but there was a reduced risk for the acute respiratory distress syndrome (RD, −0.05 [CI, −0.08 to −0.02]). Mortality was higher in observational studies than in RCTs.

LIMITATIONS The amount and strength of evidence were low for most outcomes and indications. Publication bias could not be excluded.

CONCLUSION Limited available evidence for 5 off-label indications suggests no mortality reduction with rFVIIa use. For some indications, rFVIIa increases the risk for thromboembolism.
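The risk differences with confidence intervals reported in the Data Synthesis section have a simple closed form for a single two-arm comparison; this sketch shows the Wald approximation (the pooled estimates above come from meta-analysis across studies, not from one table). All counts below are hypothetical.

```python
import math

def risk_difference(events_tx, n_tx, events_ctrl, n_ctrl, z=1.96):
    """Risk difference (treatment minus control) with a Wald 95% CI."""
    p1 = events_tx / n_tx
    p0 = events_ctrl / n_ctrl
    rd = p1 - p0
    # Wald standard error: sum of the two binomial variances
    se = math.sqrt(p1 * (1 - p1) / n_tx + p0 * (1 - p0) / n_ctrl)
    return rd, (rd - z * se, rd + z * se)

# Hypothetical: 8/100 thromboembolic events with rFVIIa vs. 3/100 with usual care
rd, (lo, hi) = risk_difference(8, 100, 3, 100)  # rd ≈ 0.05
```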


Annals of Internal Medicine | 2006

Systematic Review: A Century of Inhalational Anthrax Cases from 1900 to 2005

Jon-Erik C Holty; Dena M. Bravata; Hau Liu; Richard A. Olshen; Kathryn M McDonald; Douglas K Owens

Key Summary Points

Initiation of antibiotic or anthrax antiserum therapy during the prodromal phase of inhalational anthrax is associated with improved short-term survival.

Multidrug antibiotic regimens are associated with decreased mortality, especially when they are administered during the prodromal phase.

Most surviving patients will probably require drainage of reaccumulating pleural effusions.

Despite modern intensive care, fulminant-phase anthrax is rarely survivable.

The 2001 anthrax attack demonstrated the vulnerability of the United States to anthrax bioterrorism. The mortality rate observed during the 2001 U.S. attack (45%) was considerably lower than that historically reported for inhalational anthrax (89% to 96%) (1, 2). This reduction generally is attributed to the rapid provision of antibiotics and supportive care in modern intensive care units (3). However, no comprehensive reviews of reports of inhalational anthrax cases (including those from 2001) that evaluate how patient factors and therapeutic interventions affect disease progression and mortality have been published. Before the introduction of antibiotics, anthrax infection was primarily treated with antiserum (4). Anthrax antiserum reportedly decreased mortality by 75% compared with no treatment (5-8), and its efficacy is supported by recent animal data (9). Later, effective antibiotics, such as penicillin and chloramphenicol, were added to anthrax treatment strategies (10, 11). Currently, combination antibiotic therapy with ciprofloxacin (or doxycycline), rifampin, and clindamycin is recommended on the basis of anecdotal evidence from the U.S. 2001 experience (1, 12, 13).
Historically, the clinical course of untreated inhalational anthrax has been described as biphasic, with an initial benign prodromal latent phase, characterized by a nonspecific flu-like syndrome, followed by a severe fulminant acute phase, characterized by respiratory distress and shock that usually culminates in death (2, 14). The duration of the prodromal phase has been reported to range from 1 to 6 days (14, 15), whereas that of the fulminant phase has been described as less than 24 hours (14, 16). A 1957 study confirmed these estimates of disease progression but was based on only 6 patients (17). Because a report synthesizing the data from all reported cases of inhalational anthrax (including those from 2001) has not been published, we do not have accurate estimates of the time course associated with disease progression or a clear understanding of the extent to which patient characteristics and treatment factors affect disease progression and mortality. This information is important for developing appropriate treatment and prophylaxis protocols and for accurately simulating anthrax-related illness to inform planning efforts for bioterrorism preparedness. We systematically reviewed published cases of inhalational anthrax between 1900 and 2005 to evaluate the effects of patient factors (for example, age and sex) and therapeutic factors (for example, time to onset of treatment) on disease progression and mortality.

Methods

Literature Sources and Search Terms
We searched MEDLINE to identify case reports of inhalational anthrax (January 1966 to June 2005) by using the Medical Subject Heading (MeSH) terms anthrax and case reports.
Because many reports were published before 1966 (the earliest publication date referenced in MEDLINE), we performed additional comprehensive searches of retrieved bibliographies and the indexes of 14 selected journals from 1900 to 1966 (for example, New England Journal of Medicine, The Lancet, La Presse Médicale, Deutsche Medizinische Wochenschrift, and La Semana Médica) to obtain additional citations. We considered all case reports of inhalational anthrax to be potentially eligible for inclusion, regardless of language.

Study Selection
We considered a case report to be eligible for inclusion if its authors established a definitive diagnosis of inhalational anthrax. Appendix Table 1 presents the details of our inclusion criteria. We excluded articles that described cases presenting before 1900 because Bacillus anthracis was not identified as the causative agent of clinical inhalational anthrax until 1877 (18) and because reliable microscopic (19) and culture examination techniques (20) to confirm the diagnosis were not developed until the late 19th century.

Appendix Table 1. Inclusion Criteria

Data Abstraction
One author screened potentially relevant articles to determine whether they met inclusion criteria. Two authors independently abstracted data from each included English-language article and reviewed bibliographies for additional potentially relevant studies. One author abstracted data from non-English-language articles. We resolved abstraction discrepancies by repeated review and discussion. If 2 or more studies presented the same data from 1 patient, we included these data only once in our analyses.
We abstracted 4 types of data from each included article: year of disease onset; patient information (that is, age, sex, and nationality); symptom and disease progression information (for example, time of onset of symptoms, fulminant phase, and recovery or death, and whether the patient developed meningitis); and treatment information (for example, time and disease stage of the initiation of appropriate treatment and hospitalization). We based our criteria for determining whether a patient had progressed from the prodromal phase to the fulminant phase on distinguishing clinical features of five 2001 (3, 21, 22) and five 1957 (17) cases of fulminant inhalational anthrax. The fulminant phase is described historically as severe symptomatic disease characterized by abrupt respiratory distress (for example, dyspnea, stridor, and cyanosis) and shock. Meningoencephalitis has been reported to occur in up to 50% of cases of fulminant inhalational anthrax (23). We considered any patient who had marked cyanosis with respiratory failure, who needed mechanical ventilation, who had meningoencephalitis, or who died as having been in the fulminant phase of disease. We used the reported time of an acute change in symptoms or a deteriorating clinical picture to estimate when a confirmed fulminant case had progressed from the prodromal phase. We considered therapy for inhalational anthrax to be appropriate if either an antibiotic to which anthrax is susceptible was given (by oral, intramuscular, or intravenous routes) (24-27) or anthrax antiserum therapy was initiated. We classified patients who received only antibiotics to which strains of B. anthracis are resistant (<70% susceptibility) as having received no antibiotics. If treatment with antibiotics or antiserum was given, we assumed that the treatment was appropriately dosed and administered.
Statistical Analyses
We used univariate analyses with SAS software, version 9.1 (SAS Institute Inc., Cary, North Carolina), to summarize the key patient and treatment characteristics. We compared categorical variables with the Fisher exact test and continuous variables with a 2-tailed Wilcoxon–Mann–Whitney test. For single comparisons, we considered a P value less than 0.05 to be statistically significant. When comparing U.S. 2001 with pre-2001 cases (or comparing patients who lived with those who died), we applied a Bonferroni correction to account for multiple comparisons (we considered P < 0.025 to be statistically significant: 0.05/2 = 0.025). We computed correlations for pairs of predictors available for each case at the beginning of the course of disease.

Adjustments for Censored Data
Infectious disease data are subject to incomplete observations of event times (that is, to censoring), particularly in the presence of therapeutic interventions. This can lead to invalid estimation of relevant event time distributions. For example, patients with longer prodromal stage durations are more likely to receive antibiotics than patients with shorter prodromal stage durations, and they may therefore be less likely to progress to the fulminant stage or death. To account for censoring of our time data, we computed maximum likelihood estimates by using both Weibull and log-normal distributions (28). The Appendix provides a detailed description of these analyses.

Evaluating Predictors of Disease Progression and Mortality
We used a multivariate Cox proportional hazards model to evaluate the prognostic effects of the following features on survival: providing antibiotics or antiserum (a time-dependent covariate in 3 categories: none, single-drug regimen, or multidrug regimen); the stage during which treatment with antibiotics or antiserum was initiated (prodromal stage vs.
fulminant stage or no therapy); age (continuous variable); sex; if therapy was given, whether patients received a multidrug regimen (for example, 2 appropriate antibiotics or combination antibiotic–anthrax antiserum therapy); the use of pleural fluid drainage (a time-dependent covariate); development of anthrax meningoencephalitis (a time-dependent covariate); and whether the case was from the 2001 U.S. attack. We assessed each variable by stepwise backward regression using a P value cutoff of 0.100 or less. We excluded 8 adult patients for whom age was not reported. Although we did not perform extensive goodness-of-fit tests of our models, we did at least fit models in which we entered time not only linearly but also quadratically. Improvement in fit, as judged by conventional Wald and other tests, did not result, nor did including quadratic time variables further explain the data. To estimate mortality as a function of duration from symptom onset to antibiotic initiation, we first calculated a disease progression curve describing the time from symptom onset to fulminant phase among untreated patients by using the Weibull maximum likelihood estimates from the 71 cases for which time estimates were known. We then assigned a mortality rate to patients who had treatmen
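The censoring adjustment described above fits Weibull distributions by maximum likelihood. As a rough illustration (not the paper's code), the Weibull likelihood with right-censored observations can be maximized in plain Python by profiling: for any fixed shape k, the scale that maximizes the likelihood has a closed form, so a grid search over k is enough. All times below are hypothetical.

```python
import math

def weibull_mle(times, observed, k_grid=None):
    """Weibull MLE with right-censoring via profile likelihood.
    times    : event or censoring times (must be > 0)
    observed : 1 if the event was observed, 0 if censored
    For a fixed shape k, the likelihood-maximizing scale is
    (sum(t^k) / d)^(1/k), where d is the number of observed events;
    we grid-search the shape. Assumes at least one observed event.
    Returns (shape, scale)."""
    if k_grid is None:
        k_grid = [0.1 * i for i in range(1, 101)]  # shapes 0.1 .. 10.0
    d = sum(observed)
    best = None
    for k in k_grid:
        lam = (sum(t**k for t in times) / d) ** (1 / k)  # profile scale
        # log-likelihood: density terms for events, survival terms for all
        loglik = (d * math.log(k) - d * k * math.log(lam)
                  + (k - 1) * sum(math.log(t) for t, o in zip(times, observed) if o)
                  - sum((t / lam) ** k for t in times))
        if best is None or loglik > best[0]:
            best = (loglik, k, lam)
    return best[1], best[2]

# Example: days from symptom onset to fulminant phase; 0 marks a patient
# whose progression time was censored (for example, by treatment)
shape, scale = weibull_mle([1.0, 2.0, 2.5, 3.0, 4.0, 6.0], [1, 1, 1, 0, 1, 0])
```

The paper also fit log-normal distributions; the same profile-and-search idea does not carry over there, but the censored log-likelihood (density terms for events, survival terms for censored cases) has the identical structure.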


Annals of Internal Medicine | 1997

Cost-effectiveness of implantable cardioverter defibrillators relative to amiodarone for prevention of sudden cardiac death.

Douglas K Owens; Gillian D Sanders; Ryan A. Harris; Kathryn M McDonald; Paul A. Heidenreich; Anne D. Dembitzer; Mark A. Hlatky

Sudden cardiac death struck approximately 360 000 persons in the United States in 1990 [1, 2]. Defined as death that occurs within 1 hour of the onset of symptoms, sudden cardiac death accounts for about one half of all deaths from heart disease [2]. Sudden cardiac death occurs primarily in patients who have an established history of heart disease, particularly those with a history of severe congestive heart failure, myocardial infarction, or sustained ventricular arrhythmia [3]. The therapeutic alternatives are treatment with antiarrhythmic drugs or treatment with an implantable cardioverter defibrillator (ICD) [4]. Type Ia antiarrhythmic agents (for example, procainamide) were previously a mainstay of pharmacologic therapy, but recent evidence has raised concern about their effectiveness and potential toxicity [5-8]. In 1991, it was found that the type Ic agents encainide and flecainide increased mortality when used to suppress ventricular ectopy after myocardial infarction. This unexpected finding further limited the choice of antiarrhythmic drugs [9]. Amiodarone is one of the most promising pharmacologic alternatives [10-14]. However, amiodarone therapy is complicated by lengthy loading regimens; persistence of the drug in adipose tissue for long periods; and severe adverse effects, including pulmonary fibrosis and thyroid abnormalities [15, 16]. With the development of ICDs that can be implanted without thoracotomy (and possibly during an outpatient procedure), use of an ICD is now a practical therapeutic alternative to antiarrhythmic drug therapy [4]. Although ICDs are remarkably effective in terminating ventricular arrhythmias [17-20], they are expensive ($40 000 to $60 000 for implantation), and the extent to which they extend life is unknown [21]. Ongoing or planned randomized, controlled trials [19, 20, 22-27] will clarify the role of ICDs and drug therapy for the prevention of sudden cardiac death, but their results will not be available for several years. Economic analyses have suggested that ICDs have favorable cost-effectiveness ratios [28-32], but these analyses were based on controversial assumptions about efficacy in improving survival [28, 30, 31]; compared the implantation of an ICD with an expensive alternative, such as electrophysiologically guided therapy [29, 31, 32]; or limited the use of ICDs to extremely high-risk patients [28-32]. In this study, we used data from ongoing randomized trials and data on the costs of third-generation ICDs to evaluate the cost-effectiveness of treatment with an ICD (implanted without thoracotomy) relative to empirical therapy with amiodarone. We determined the reduction in total mortality that ICD use would have to confer to reach specified cost-effectiveness ratios. Because the indications for ICD use may expand the use of this therapy into new patient populations, we evaluated how the cost-effectiveness of treatment with an ICD varies when the device is used in a population that has a lower risk for sudden cardiac death than do survivors of cardiac arrest.

Methods
We used a decision model to estimate the quality-adjusted length of life and expenditures for a population of patients who received amiodarone or an ICD. We used the perspective of society and incorporated benefits and costs accordingly. We examined three treatment strategies (Figure 1). Patients who received the ICD-only regimen began treatment with an ICD and continued to receive this therapy regardless of subsequent arrhythmic events. Patients who received amiodarone only began treatment with amiodarone and continued to receive this drug as sole therapy regardless of subsequent arrhythmic events.
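The core quantity in a cost-effectiveness analysis like this is the incremental cost-effectiveness ratio (ICER): the extra cost of the more expensive strategy divided by the quality-adjusted life-years (QALYs) it gains. A minimal sketch with made-up inputs (not the paper's estimates):

```python
def icer(cost_new, qalys_new, cost_ref, qalys_ref):
    """Incremental cost-effectiveness ratio of a new strategy versus a
    reference strategy: extra dollars spent per QALY gained."""
    delta_qalys = qalys_new - qalys_ref
    if delta_qalys <= 0:
        # The new strategy is dominated or no better; the ratio is not meaningful
        raise ValueError("new strategy gains no QALYs; ICER is undefined")
    return (cost_new - cost_ref) / delta_qalys

# Hypothetical: ICD strategy costs $100 000 and yields 6.0 QALYs;
# amiodarone costs $60 000 and yields 5.5 QALYs
ratio = icer(100_000, 6.0, 60_000, 5.5)  # $80 000 per QALY gained
```

Running the calculation backward over a range of assumed mortality reductions, as the study describes, shows how large a survival benefit the ICD must confer to stay under a given dollars-per-QALY threshold.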


Annals of Internal Medicine | 2013

The Top Patient Safety Strategies That Can Be Encouraged for Adoption Now

Paul G. Shekelle; Peter J. Pronovost; Robert M. Wachter; Kathryn M McDonald; Karen M Schoelles; Sydney M. Dy; Kaveh G. Shojania; James Reston; Alyce S. Adams; Peter B. Angood; David W. Bates; Leonard Bickman; Pascale Carayon; Liam Donaldson; Naihua Duan; Donna O. Farley; Trisha Greenhalgh; John Haughom; Eileen T. Lake; Richard Lilford; Kathleen N. Lohr; Gregg S. Meyer; Marlene R. Miller; D Neuhauser; Gery W. Ryan; Sanjay Saint; Stephen M. Shortell; David P. Stevens; Kieran Walshe

40 000 to


Annals of Internal Medicine | 2011

Advancing the science of patient safety

Paul G. Shekelle; Peter J. Pronovost; Robert M. Wachter; Stephanie L. Taylor; Sydney M. Dy; Robbie Foy; Susanne Hempel; Kathryn M McDonald; John Øvretveit; Lisa V. Rubenstein; Alyce S. Adams; Peter B. Angood; David W. Bates; Leonard Bickman; Pascale Carayon; Liam Donaldson; Naihua Duan; Donna O. Farley; Trisha Greenhalgh; John Haughom; Eileen T. Lake; Richard Lilford; Kathleen N. Lohr; Gregg S. Meyer; Marlene R. Miller; D Neuhauser; Gery W. Ryan; Sanjay Saint; Kaveh G. Shojania; Stephen M. Shortell

60 000 for implantation), and the extent to which they extend life is unknown [21]. Ongoing or planned randomized, controlled trials [19, 20, 22-27] will clarify the role of ICDs and drug therapy for the prevention of sudden cardiac death, but their results will not be available for several years. Economic analyses have suggested that ICDs have favorable cost-effectiveness ratios [28-32], but these analyses were based on controversial assumptions about efficacy in improving survival [28, 30, 31]; compared the implantation of an ICD with an expensive alternative, such as electrophysiologically guided therapy [29, 31, 32]; or limited the use of ICDs to extremely high-risk patients [28-32]. In this study, we used data from ongoing randomized trials and data on the costs of third-generation ICDs to evaluate the cost-effectiveness of treatment with an ICD (implanted without thoracotomy) relative to empirical therapy with amiodarone. We determined the reduction in total mortality that ICD use would have to confer to reach specified cost-effectiveness ratios. Because the indications for ICD use may expand the use of this therapy into new patient populations, we evaluated how the cost-effectiveness of treatment with an ICD varies when the device is used in a population that has a lower risk for sudden cardiac death than do survivors of cardiac arrest. Methods We used a decision model to estimate the quality-adjusted length of life and expenditures for a population of patients who received amiodarone or an ICD. We used the perspective of society and incorporated benefits and costs accordingly. We examined three treatment strategies (Figure 1). Patients who received the ICD-only regimen began treatment with an ICD and continued to receive this therapy regardless of subsequent arrhythmic events. Patients who received amiodarone only began treatment with amiodarone and continued to receive this drug as sole therapy regardless of subsequent arrhythmic events. 
They crossed over to receive an ICD only if they had intolerable side effects as a result of amiodarone use. Patients who received the amiodarone-to-ICD therapy began treatment with amiodarone and crossed over to ICD if they were subsequently resuscitated from ventricular fibrillation (all survivors) or from ventricular tachycardia (50% of survivors) or if severe drug toxicity occurred. We did not evaluate treatment strategies that used ICD and amiodarone simultaneously because evidence was not sufficient to assess the efficacy of this combined therapy. We discounted health benefits and costs using a 3% annual discount rate, as recommended by a panel on cost-effectiveness analysis in health care [33], and we did sensitivity analyses on all model variables. Figure 1. Schematic representation of the decision model. Decision Model We developed a Markov model [34, 35] (Figure 1, Appendix) using SMLTree software (version 2.9, J. Hollenberg, New York, New York); the model tracked a hypothetical cohort of patients over time. Each cohort began receiving one of the three therapeutic regimens: ICD only, amiodarone only, or amiodarone-to-ICD. Each month, a patient was at risk for ventricular fibrillation, ventricular tachycardia, nonarrhythmic cardiac death, noncardiac death, and illness or death from drug toxicity (the latter was applicable only to patients who received amiodarone). Patients who had an ICD were also at risk for perioperative death. If a patient had ventricular fibrillation or ventricular tachycardia, the patient either died, survived with neurologic impairment, or survived without neurologic impairment (Figure 2). The model included a decrement in quality of life for patients who survived an arrhythmic event with neurologic sequelae [36-39]. Patients who were treated with amiodarone were at risk for acute drug toxicity (Figure 2). Figure 2. Decision model subtrees. Top. Upper middle. Lower middle. Bottom. 
Quality of Life The Markov model incorporated adjustments for quality of life associated with current health, ICD or amiodarone therapy, arrhythmic events, ICD discharges (shocks), and amiodarone toxicity. We used the time-tradeoff technique to calculate quality-adjusted life-years [40, 41]. In the base-case analysis, we assumed that the quality of life of current health was 0.75 [39], and we assumed that quality of life did not change as a result of ICD or amiodarone therapy. In sensitivity analyses, we evaluated the importance of changes in quality of life caused by ICD or amiodarone therapy. Effectiveness of Implantable Cardioverter Defibrillators We assumed that treatment with an ICD did not affect the frequency of arrhythmias but did increase the chance for surviving an arrhythmic event if one occurred. Evidence from randomized trials and patient registries indicates that ICDs successfully treat life-threatening ventricular arrhythmias. In a registry that contained more than 600 patients with third-generation ICDs [17], ventricular tachycardia was terminated successfully in 98.7% of cases and ventricular fibrillation was converted in 98.9% of cases. The Cardiac Arrest Survivors in Hamburg (CASH) study, a randomized trial that compared ICD with pharmacologic therapy in survivors of cardiac arrest, reported a cardiac death rate of 0% at 1 year among 59 patients treated with an ICD [16, 19, 20, 42]. Other studies [18, 43-46] have also reported low rates of sudden cardiac death in ICD recipients. In our base-case analysis, we assumed that ICDs successfully terminated arrhythmias at rates similar to those reported in the patient registry [17] (Table 1). Table 1. Input Variables and Sources* The effect of ICD use on total mortality is less clear [67-70]. 
At approximately 1 year of follow-up, total mortality rates in the CASH study were 14.3% in the group that received ICDs and 14.7% in the group that received amiodarone; the difference between the groups was not statistically significant [16]. However, this trial is incomplete and relatively small: Each treatment group contained fewer than 60 patients. In the Multicenter Automatic Defibrillator Implantation Trial (MADIT), 196 patients who had a history of previous Q-wave infarction, documented nonsustained ventricular tachycardia, an ejection fraction of 0.35 or less, and inducible sustained ventricular tachycardia (shown on electrophysiologic testing) that could not be suppressed by the infusion of procainamide were randomly assigned to receive either an ICD or conventional pharmacologic care [25]. The trial was stopped early because 15 deaths occurred in the ICD group and 39 occurred in the usual care group, producing a hazard ratio for total mortality of 0.46 (95% CI, 0.26 to 0.82). This corresponds to Kaplan-Meier survival rates of 87% in the ICD group and 65% in the usual care group at approximately 2 years (Moss AJ for the MADIT Investigators. Multicenter Automatic Implantable Defibrillator Trial [MADIT]. 17th Annual Scientific Session of the North American Society of Pacing and Electrophysiology. Seattle; 1996). The risk reduction found in MADIT may particularly favor ICD because selected patients were not suppressed by drug therapy and because the trial was discontinued early. Other large randomized trials of ICD treatment are still ongoing and have not reported any results. Cost-effectiveness studies [28, 30, 31] (with one exception [32]) have assumed that patients who receive ICDs have substantial survival advantages. This assumption has been based on the results of nonrandomized trials, which are subject to selection bias because healthier patients may have had ICD implantation. 
Such bias precludes definitive inferences about the effect of ICDs on total mortality [21, 71]. For our base-case analysis, therefore, we assumed that ICD use would reduce total mortality at 1 year by 20% to 40% relative to amiodarone therapy in patients who survive ICD implantation. This reduction was approximately constant over time. In sensitivity analyses, we examined reductions in total mortality by ICDs compared with amiodarone that varied from 5% to 60%. A reduction in total mortality of 30% is a reasonable point estimate, given the current evidence. We assumed that the implantation of an ICD was associated with a perioperative mortality rate of 1.


Value in Health | 2012

Model transparency and validation: a report of the ISPOR-SMDM Modeling Good Research Practices Task Force--7

David M. Eddy; William Hollingworth; J. Jaime Caro; Joel Tsevat; Kathryn M McDonald; John Wong

Over the past 12 years, since the publication of the Institute of Medicines report, “To Err is Human: Building a Safer Health System,” improving patient safety has been the focus of considerable public and professional interest. Although such efforts required changes in policies; education; workforce; and health care financing, organization, and delivery, the most important gap has arguably been in research. Specifically, to improve patient safety we needed to identify hazards, determine how to measure them accurately, and identify solutions that work to reduce patient harm. A 2001 report commissioned by the Agency for Healthcare Research and Quality, “Making Health Care Safer: A Critical Analysis of Patient Safety Practices” (1), helped identify some early evidence-based safety practices, but it also highlighted an enormous gap between what was known and what needed to be known.
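The monthly-cycle Markov cohort calculation used in the ICD cost-effectiveness study above can be sketched in a few lines. The 3% annual discount rate and the 0.75 quality-of-life weight come from the study's description; every transition probability, cost figure, and the `run_cohort` helper itself are illustrative placeholders, not the authors' model (which also tracked arrhythmic events, crossover, and drug toxicity).

```python
# Minimal sketch (not the published model): a monthly-cycle Markov cohort
# comparing a hypothetical ICD strategy with amiodarone, discounting health
# benefits and costs at 3% per year and weighting life-years by a
# quality-of-life factor of 0.75. All probabilities and costs are made up.

def run_cohort(monthly_death_prob, start_cost, monthly_cost,
               months=240, annual_discount=0.03, qol_weight=0.75):
    """Track one cohort; return (discounted QALYs, discounted cost) per patient."""
    monthly_discount = (1 + annual_discount) ** (1 / 12) - 1
    alive = 1.0                      # fraction of the cohort still alive
    qalys, cost = 0.0, start_cost    # up-front cost accrues undiscounted at t=0
    for month in range(months):
        d = 1 / (1 + monthly_discount) ** month   # discount factor this cycle
        qalys += alive * (qol_weight / 12) * d    # quality-adjusted time accrued
        cost += alive * monthly_cost * d          # follow-up costs accrued
        alive *= 1 - monthly_death_prob           # deaths this cycle
    return qalys, cost

# Placeholder inputs: ICD halves monthly mortality but costs $50 000 up front.
icd_qalys, icd_cost = run_cohort(monthly_death_prob=0.005,
                                 start_cost=50_000, monthly_cost=200)
amio_qalys, amio_cost = run_cohort(monthly_death_prob=0.010,
                                   start_cost=0, monthly_cost=300)

icer = (icd_cost - amio_cost) / (icd_qalys - amio_qalys)
print(f"Incremental cost-effectiveness: ${icer:,.0f} per QALY gained")
```

Sweeping `monthly_death_prob` for the ICD arm over a range mirrors the study's sensitivity analysis on the 5% to 60% mortality reduction.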

Collaboration


Dive into Kathryn M McDonald's collaboration.

Top Co-Authors

Dena M Bravata (American Medical Association)
Jeffrey Geppert (Boston Children's Hospital)
Patrick S Romano (Boston Children's Hospital)
Aaron C Logan (University of California)