Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Robert M. Wachter is active.

Publication


Featured research published by Robert M. Wachter.


The New England Journal of Medicine | 2010

Accountability Measures — Using Measurement to Promote Quality Improvement

Mark R. Chassin; Jerod M. Loeb; Stephen Schmaltz; Robert M. Wachter

Measuring the quality of health care and using those measurements to promote improvements in the delivery of care, to influence payment for services, and to increase transparency are now commonplace. These activities, which now involve virtually all U.S. hospitals, are migrating to ambulatory and other care settings and are increasingly evident in health care systems worldwide. Many constituencies are pressing for continued expansion of programs that rely on quality measurement and reporting. In this article, we review the origins of contemporary standardized quality measurement, with a focus on hospitals, where such programs have reached their most highly developed state. We discuss some lessons learned from recent experience and propose a conceptual framework to guide future developments in this fast-moving field. Although many of the points we make are relevant to all kinds of quality measurement, including outcome measures, we focus our comments on process measures, both because these account for most of the measures in current use and because outcome measures have additional scientific challenges surrounding the need for case-mix adjustment. We write not as representatives of the Joint Commission articulating a specific new position of that group, but rather as individuals who have worked in the fields of quality measurement and improvement in a variety of roles and settings over many years.


The New England Journal of Medicine | 2009

Balancing “No Blame” with Accountability in Patient Safety

Robert M. Wachter; Peter J. Pronovost

The authors argue that in the context of appropriate efforts to reduce medical errors by correcting problems in care-delivery systems, health care organizations have underemphasized individual responsibility. They propose punishing providers who repeatedly do not adhere to procedures for improving patient safety, such as hand washing.


Annals of Internal Medicine | 2013

The Top Patient Safety Strategies That Can Be Encouraged for Adoption Now

Paul G. Shekelle; Peter J. Pronovost; Robert M. Wachter; Kathryn M. McDonald; Karen M. Schoelles; Sydney M. Dy; Kaveh G. Shojania; James Reston; Alyce S. Adams; Peter B. Angood; David W. Bates; Leonard Bickman; Pascale Carayon; Liam Donaldson; Naihua Duan; Donna O. Farley; Trisha Greenhalgh; John Haughom; Eileen T. Lake; Richard Lilford; Kathleen N. Lohr; Gregg S. Meyer; Marlene R. Miller; D. Neuhauser; Gery W. Ryan; Sanjay Saint; Stephen M. Shortell; David P. Stevens; Kieran Walshe

Over the past 12 years, since the publication of the Institute of Medicine's report, “To Err Is Human: Building a Safer Health System,” improving patient safety has been the focus of considerable public and professional interest. Although such efforts required changes in policies; education; workforce; and health care financing, organization, and delivery, the most important gap has arguably been in research. Specifically, to improve patient safety we needed to identify hazards, determine how to measure them accurately, and identify solutions that work to reduce patient harm. A 2001 report commissioned by the Agency for Healthcare Research and Quality, “Making Health Care Safer: A Critical Analysis of Patient Safety Practices” (1), helped identify some early evidence-based safety practices, but it also highlighted an enormous gap between what was known and what needed to be known.


Annals of Internal Medicine | 2002

Implementation of a Voluntary Hospitalist Service at a Community Teaching Hospital: Improved Clinical Efficiency and Patient Outcomes

Andrew D. Auerbach; Robert M. Wachter; Patricia P. Katz; Jonathan Showstack; Robert B. Baron; Lee Goldman

Context Many studies suggest that hospitalists reduce average length of stay and costs but have little or no effect on patient survival. Contribution This 2-year cohort study from a community-based urban teaching hospital found that patients cared for by faculty hospitalists rather than community physicians had shorter lengths of stay, lower costs, and better in-hospital and 1- and 2-month survival rates. Implications Length of stay and cost benefits were apparent only in year 2 of the study, which suggests that experience is an important aspect of successful care by hospitalists. Cautions The study was retrospective, was done at a single site, and involved only five hospitalists. The Editors The organization of inpatient services has been transformed with the development of the hospitalist (1). Traditionally, primary care physicians have cared for their own inpatients. In the hospitalist model, a hospitalist becomes the patient's attending physician during hospitalization and the outpatient physician resumes supervision of the patient after discharge (2). Several studies have demonstrated improved clinical efficiency in the hospitalist model, but these studies have focused largely on academic centers or health maintenance organizations, or have not used concurrent controls or reported longer periods of follow-up (3-7). One published study examining a hospitalist system at a community-based teaching hospital suggested improvement in clinical efficiency and a reduction in readmissions (8). However, analytic limitations open these findings to many interpretations. To examine the effects of implementation of a hospitalist service on resource utilization and patient outcomes over time, we studied 5308 consecutive patients admitted to an urban community teaching hospital in San Francisco, California. Methods Study Site Mount Zion Hospital (San Francisco, California) was a 280-bed community-based teaching hospital affiliated with the University of California, San Francisco. Mount Zion's inpatient facilities were closed in November 1999 because of financial pressures. During the year before closure, all physicians were aware of the hospital's financial difficulties, but no individual or group was made a focus of efforts to improve clinical efficiency. Discussions about possible closure began 1 month after this study ended, and the hospital closed 5 months later. Medical patients at Mount Zion Hospital were admitted to one of four medical teams composed of a resident, one to two interns, and zero to three medical students. Mount Zion medical teams cared for common inpatient diagnoses, as well as specialty-associated diagnoses such as cancer, acute myocardial infarction, and cerebrovascular accidents. Housestaff wrote all orders and provided 24-hour coverage to inpatients. Each team had a ward attending physician who before 1 July 1997 was a full-time faculty member serving in this role for 1 month each year. Community-based physicians remained the physician of record for most patients and worked with house officers in the care of their hospitalized patients. On 1 July 1997, Mount Zion implemented a voluntary hospitalist service. Hospitalists, who were University of California, San Francisco, faculty based at Mount Zion, served as ward attendings 6 to 8 months per year and spent their remaining time in ambulatory practice or teaching.
Hospitalists cared for patients without primary physicians, patients with faculty or house officer primary care physicians, and patients whose community-based physician chose to use the hospitalist service. Rotating nonhospitalist faculty continued to provide some inpatient care after implementation of the hospitalist service. Patients were admitted to rotating faculty according to the same criteria used for hospitalist services. There were no differences in other inpatient care systems available to community, rotating, or hospitalist physicians (for example, level of housestaff coverage, computer systems, case managers, social workers, or nursing staff). Patients Between 1 July 1997 and 30 June 1999, 5907 patients 18 years of age or older were admitted to the medical service at Mount Zion Hospital. We excluded patients who were admitted for chemotherapy or as part of a research protocol (n = 167) and those for whom some data on primary diagnosis were missing (n = 30). The resulting cohort was composed of 5710 patients, of whom 3693 (65%) were cared for by community-based physicians, 1615 (28%) were cared for by hospitalists, and 402 (7%) were cared for by rotating faculty. Data Management At Mount Zion Hospital, data were drawn from TSI (Transition Systems, Inc., Boston, Massachusetts) administrative databases, a cost-accounting system that collects data abstracted from patient charts at discharge. These databases contain information on sociodemographic characteristics, principal diagnosis (in the form of International Classification of Diseases, 9th revision, codes), diagnosis-related group, length of stay, costs, number of consultations, and whether the patient was in an intensive care unit during hospitalization. Data were manually screened for validity of physician designation as hospitalist or community physician by using previously published definitions of hospitalist physician characteristics (1, 2). Discharge summaries of patients who died during hospitalization were examined to validate deaths. An additional 200 discharge summaries of randomly selected patients discharged alive were also reviewed, revealing no errors. Information regarding physician characteristics and board certification was obtained from hospital credentialing databases. Patient survival to points in time after hospitalization was determined by using data from the California State Death Index (for patients admitted before 1 January 1999) and Social Security death indexes (for patients admitted on or after 1 January 1999 and for those who did not reside in California). Statistical Analysis To satisfy normality requirements and stabilize variance of residuals, we explored two methods of transforming skewed data on cost and length of stay: logarithmic conversion and truncation at the mean + 3 SDs. Since both techniques yielded similar results, we chose to present results by using truncation, as has been done in previous studies of inpatient costs and utilization (4, 9-11). All costs were adjusted to 1999 U.S. dollars by using an annual inflation rate of 3% (4). Primary analyses compared 5308 patients cared for by community or hospital-based physicians; we excluded the few patients cared for by rotating physicians from core analyses. 
This method was chosen to maximize our ability to discern differences in rare outcomes (such as death or readmission), to determine trends in frequent outcomes (such as length of stay), and to maintain focus on our primary question: hospitalist-directed versus community physician-directed inpatient care. For bivariable comparisons, we used the Fisher exact test or the Wilcoxon rank-sum test. Unadjusted survival rates were estimated by using Kaplan-Meier product-limit methods. We then used multivariable models to determine the independent effect of hospitalist care on patient outcomes. Using automated forward and stepwise selection techniques along with manually entered variables, we fit multivariable linear regression models to determine the independent association of hospitalist care with length of stay and costs. Items were selected on the basis of the statistical significance of their association with the outcome or on observed confounding with other independent variables, or to maintain face validity of the model. Similar methods were used in fitting logistic models of readmission; use of consultations; and Cox proportional-hazards models of survival to discharge, 30 days, and 60 days. All analyses were performed by using SAS software, version 8.0 for Windows (SAS Institute, Inc., Cary, North Carolina). Multivariable models contained adjustment for patient age, sex, ethnicity, insurance type, source of admission (for example, emergency department), site of discharge, whether a cardiovascular procedure was performed during hospitalization, whether the patient received care in an intensive care unit during hospitalization, and case-mix measures. For case-mix measures, specific diagnoses were defined by using International Classification of Diseases, 9th revision, codes for pneumonia, asthma, congestive heart failure, acute myocardial infarction, angina, unstable angina, chest pain, cancer, gastrointestinal hemorrhage, HIV infection, and cerebrovascular accident. Models also contained a variable indexed to admission date to adjust for secular trends. Trends in adjusted outcomes were tested by using variables dummy-coded to indicate service and year of admission. Because patients were not randomly assigned to hospitalists or community physicians, we performed secondary analyses using a propensity score (12, 13). In our analyses, the propensity score represents the likelihood that any given patient would be admitted to a hospitalist attending physician. The propensity score was calculated in a logistic regression model with attending designation [that is, hospitalist vs. community physician] as the dependent variable. The model contained all covariates in core models, as well as variables found to contribute to nonrandom allocation of patients to specialty care at a P value less than or equal to 0.20. The propensity score was then used in analyses of cost, length of stay, and mortality in two ways: 1) multivariable analyses stratified within tertiles of propensity score and 2) multivariable analyses using the score as a continuous adjustment variable. Results Physician Characteristics One hundred thirteen community physicians, 20 rotating physicians, and 5 hospitalist physicians admitted patients to Mount Zion Hospital during the 2 years of this study. The mean age was 34 years for hospital
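
The statistical workflow described above pairs truncation of skewed cost data at the mean + 3 SDs with a propensity score for hospitalist care estimated by logistic regression and then used in tertile-stratified analyses. The sketch below illustrates those two steps on entirely hypothetical data; the column names and covariates are placeholders, not the study's actual TSI variables or model specification.

# Minimal sketch (hypothetical data and column names) of two steps described
# above: truncating skewed costs at the mean + 3 SDs, and estimating a
# propensity score for hospitalist care with logistic regression.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 500
patients = pd.DataFrame({
    "cost": rng.lognormal(mean=8.5, sigma=1.0, size=n),   # right-skewed costs
    "age": rng.integers(18, 95, size=n),
    "icu_stay": rng.integers(0, 2, size=n),
    "hospitalist": rng.integers(0, 2, size=n),             # attending designation
})

# Truncate costs at the mean + 3 SDs to stabilize the variance of residuals.
cap = patients["cost"].mean() + 3 * patients["cost"].std()
patients["cost_truncated"] = patients["cost"].clip(upper=cap)

# Propensity score: modeled probability of being admitted to a hospitalist,
# given the available covariates (placeholders, not the study's covariate set).
covariates = patients[["age", "icu_stay"]]
ps_model = LogisticRegression(max_iter=1000).fit(covariates, patients["hospitalist"])
patients["propensity"] = ps_model.predict_proba(covariates)[:, 1]

# Stratify into tertiles of the propensity score for stratified comparisons.
patients["ps_tertile"] = pd.qcut(patients["propensity"], q=3, labels=["low", "mid", "high"])
print(patients[["cost_truncated", "propensity", "ps_tertile"]].head())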


Annals of Internal Medicine | 1999

An Introduction to the Hospitalist Model

Robert M. Wachter

Motivated by a search for improved quality and efficiency, increasing numbers of hospitals and physicians are moving from systems in which all primary care providers manage their own hospitalized p...


Annals of Internal Medicine | 2011

“July Effect”: Impact of the Academic Year-End Changeover on Patient Outcomes: A Systematic Review

John Q. Young; Sumant R. Ranji; Robert M. Wachter; Connie M. Lee; Brian Niehaus; Andrew D. Auerbach

BACKGROUND It is commonly believed that the quality of health care decreases during trainee changeovers at the end of the academic year. PURPOSE To systematically review studies describing the effects of trainee changeover on patient outcomes. DATA SOURCES Electronic literature search of PubMed, Educational Research Information Center (ERIC), EMBASE, and the Cochrane Library for English-language studies published between 1989 and July 2010. STUDY SELECTION Title and abstract review followed by full-text review to identify studies that assessed the effect of the changeover on patient outcomes and that used a control group or period as a comparator. DATA EXTRACTION Using a standardized form, 2 authors independently abstracted data on outcomes, study setting and design, and statistical methods. Differences between reviewers were reconciled by consensus. Studies were then categorized according to methodological quality, sample size, and outcomes reported. DATA SYNTHESIS Of the 39 included studies, 27 (69%) reported mortality, 19 (49%) reported efficiency (length of stay, duration of procedure, hospital charges), 23 (59%) reported morbidity, and 6 (15%) reported medical error outcomes; all studies focused on inpatient settings. Most studies were conducted in the United States. Thirteen (33%) were of higher quality. Studies with higher-quality designs and larger sample sizes more often showed increased mortality and decreased efficiency at time of changeover. Studies examining morbidity and medical error outcomes were of lower quality and produced inconsistent results. LIMITATIONS The review was limited to English-language reports. No study focused on the effect of changeovers in ambulatory care settings. The definition of changeover, resident role in patient care, and supervision structure varied considerably among studies. Most studies did not control for time trends or level of supervision or use methods appropriate for hierarchical data. CONCLUSION Mortality increases and efficiency decreases in hospitals because of year-end changeovers, although heterogeneity in the existing literature does not permit firm conclusions about the degree of risk posed, how changeover affects morbidity and rates of medical errors, or whether particular models are more or less problematic. PRIMARY FUNDING SOURCE National Heart, Lung, and Blood Institute.
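
As a simple arithmetic check, the outcome counts reported in the abstract can be re-expressed as percentages of the 39 included studies; the short sketch below (hypothetical variable names, figures taken directly from the abstract) reproduces that calculation.

# Recompute the reported percentages from the counts in the abstract.
total_studies = 39
outcome_counts = {
    "mortality": 27,
    "efficiency": 19,
    "morbidity": 23,
    "medical error": 6,
    "higher-quality design": 13,
}
for outcome, count in outcome_counts.items():
    print(f"{outcome}: {count}/{total_studies} = {100 * count / total_studies:.0f}%")
# Prints 69%, 49%, 59%, 15%, and 33%, matching the abstract.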


Annals of Internal Medicine | 2008

Public reporting of antibiotic timing in patients with pneumonia: lessons from a flawed performance measure.

Robert M. Wachter; Scott A. Flanders; Christopher Fee; Peter J. Pronovost

Improving health care quality depends on having valid ways to measure quality. Unfortunately, there are few validated quality outcome measurements, because valid and feasible case-mix adjustors are lacking and patients are difficult to follow over time for clinically important outcomes, such as death. Processes of care are easier to identify and measure, but some of these measures will be proven invalid or inappropriate because their scientific rationale was flawed from the start, unanticipated consequences emerge after implementation, or later studies undermine them. We review how these issues played out in the measure of time to first antibiotic dose (TFAD), also called door-to-needle time, for patients presenting to the hospital with community-acquired pneumonia (CAP). We also propose lessons that can be learned from the experience. TFAD as a Quality Measure Community-acquired pneumonia is one of the most common admitting diagnoses in U.S. hospitals, accounting for more than 1 million hospitalizations yearly (1), with short-term mortality rates ranging from 0.5% to 27.1% (2). Given its risk, frequency, and perceived outcome variations, CAP was an obvious candidate for quality measurement and improvement initiatives. Because outcome measurement in CAP was problematic for the usual reasons (data collection burden, case-mix adjustment, and need for posthospital follow-up), investigators sought process measures associated with higher quality. During the 1990s, the notion of time-based quality measures gained favor because evidence emerged that rapid treatment of myocardial infarction, and later trauma, stroke, and sepsis, improved outcomes (3-7). Naturally, investigators began to examine whether rapid administration of antibiotics might improve CAP outcomes. In 1997, a retrospective study of 14,069 Medicare patients hospitalized for CAP found that, after adjustment for severity (2) and demographic factors, administration of antibiotics within 8 hours was associated with a lower 30-day mortality rate (odds ratio [OR], 0.85 [95% CI, 0.75 to 0.96]) (8). Patients were included if they had chest radiography results within 2 days of admission consistent with pneumonia and an initial working diagnosis of pneumonia. In 2004, a second retrospective study of 13,771 Medicare patients (age ≥65 years) hospitalized for CAP (9) also found that, among the 75% of patients without evidence of prehospital receipt of antibiotics, administration of antibiotics within 4 hours was associated with a lower 30-day mortality rate (OR, 0.85 [CI, 0.76 to 0.95]). Extrapolating these data to a hypothetical national Medicare sample, the authors estimated that achieving TFAD by 4 hours after presentation to the hospital would save more than 1200 lives yearly. The 2 studies reported that patients who received their first dose of antibiotics in the first hour of their emergency department stay had a higher mortality rate than those who received antibiotics later; however, this finding was attributed to incomplete adjustment for severity of CAP and was therefore not felt to challenge the main conclusion about TFAD (8, 9). Two smaller studies of CAP found no association between early antibiotic administration and outcomes (10, 11). Nevertheless, the authors of the 2004 study (9) editorialized that the 4-hour TFAD quality measure was still valid (12, 13).
Translation into a Performance Standard Almost exclusively on the basis of results from the 1997 study, the Medicare National Pneumonia Project endorsed first antibiotics within 8 hours of hospital arrival as a CAP quality measure in 1998. The Medicare National Pneumonia Project tightened its TFAD window to 4 hours in 2002 on the basis of the prepublication results of the 2004 study by Houck and colleagues (9, 12). In 2003, the Infectious Diseases Society of America (IDSA) also endorsed a 4-hour timeframe (14). With support from the Medicare National Pneumonia Project and IDSA and subsequent endorsement by the National Quality Forum, The Joint Commission and The Centers for Medicare & Medicaid Services (CMS) chose the 4-hour TFAD measure as 1 of their initial core measures of quality (measure PN-5b). Since 2002, this measure has been publicly reported for all U.S. hospitals. In 2006, it became part of a measure set tied to additional payments under several pilot pay-for-performance programs (15). The Response from Emergency Medicine The emergency medicine community began raising red flags about the TFAD measure soon after its formulation, and complaints from this community markedly increased after TFAD was publicly reported and became the subject of pay-for-performance programs (16). Published studies challenging the measure soon followed. Although some questioned the association itself, most focused on the issue of diagnostic uncertainty. One study found that 22% of 86 randomly selected patients with pneumonia had uncertain presentations and often lacked infiltrates on chest radiography, which could have appropriately led to delayed antibiotic administration (17). Another study documented cases that were labeled poor-quality care, in which delayed use of antibiotics was clinically appropriate (18), whereas still another found that maneuvers to improve TFAD were not very cost-effective (19). In fact, many eligible patients with a working diagnosis of CAP who did not receive antibiotics within 4 hours had no radiographic evidence of pneumonia in the emergency department and did not have a final emergency department diagnosis of CAP (20, 21). Moreover, other studies showed that TFAD measurement led to administration of antibiotics in many patients who proved not to have pneumonia or another infectious disease (22, 23). Finally, a recent systematic review concluded that evidence from observational studies fails to confirm decreased mortality with early administration of antibiotics in stable patients with [CAP] (24). On the basis of these studies, analyses, and considerable anecdotal evidence, editorials in the emergency medicine literature argued vigorously for relaxing the TFAD standards (25, 26), pointing out that the measure was skewing emergency department triage priorities and promoting unnecessary antibiotic use (18). The Response from Payers, Regulators, and Professional Societies Within months of the critical publications, The Joint Commission and CMS revisited measure PN-5b. In October 2006, patients eligible for the measure had to have a final emergency department diagnosis of pneumonia (rather than an initial working diagnosis) and objective radiographic findings sometime during the hospitalization. Unfortunately, although the revised criteria solved some of the problems associated with PN-5b, they created new ones. 
For example, Fee and colleagues (27) worried that the new measures would generate pressure to administer antibiotics before patients were sent for computed tomography to rule out pulmonary embolism (even in the face of nondiagnostic chest radiographs) or to avoid writing pneumonia as the final emergency department diagnosis. In March 2007, IDSA and the American Thoracic Society issued joint guidelines that abolished time-specific goals for CAP treatment, now recommending that patients receive their first dose of antibiotics as soon as possible after a definitive diagnosis of CAP, preferably in the emergency department (28). One month later, The Joint Commission created a test measure (PN-5c) that relaxed the antibiotic administration window to 6 hours. That same month, the National Quality Forum withdrew its endorsement of measure PN-5b and endorsed PN-5c, which became the publicly reported measure in April 2008. In addition, The Joint Commission created a new data element, “diagnostic uncertainty,” which may exclude patients from TFAD measurement (29, 30). Whether all of these revisions will solve the problems associated with measure PN-5b is unknown; no study has yet shown a benefit from a 6-hour rule, and the diagnostic uncertainty construct has not, to our knowledge, been field-tested and validated. Unanticipated Consequences of TFAD Measurement and Reporting Prompt administration of antibiotics to patients with documented pneumonia makes sense, and seeking ironclad evidence to prove its value might seem to be analogous to requiring proof of the value of parachutes (31). Moreover, a randomized trial that withheld early antibiotic treatment in some patients with CAP would be unethical. It was therefore inevitable that decisions about the timing of antibiotic administration in CAP would be based on imperfect retrospective studies, out of necessity (8, 9). However, the TFAD measure was enacted largely on the evidence derived from 2 large studies, in which conditions (retrospective review of patients with working diagnoses of pneumonia) replicate only in part the predicament that busy emergency medicine physicians face daily: evaluating scores of patients with cough, fever, dyspnea, weakness, dizziness, confusion, or abdominal pain. As Pines (26) has written, “Most ED [emergency department] patients do not present at triage with a sign on their forehead that reads, ‘I have pneumonia; give me antibiotics now!’” Unlike myocardial infarction, in which there is palpable clinical urgency to confirm the diagnosis and a series of tests (cardiac biomarker measurement and electrocardiography) available to reliably do so, no gold standard test for pneumonia exists. Although a triage rule of obtaining an electrocardiogram in any patient whose symptoms, signs, or risk factors make myocardial infarction even a remote possibility makes perfect sense, a similar strategy for chest radiography would be resource intensive, often confusing (given the relatively poor sensitivity and specificity of the test in CAP [32]), impractical, and even potentially harmful (because of radiation exposure). In the days before measurement of TFAD, patients with uncertain diagnoses would continue to be evaluated until th
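
At its core, the TFAD measure discussed above is a timestamp comparison: the interval from emergency department arrival to the first antibiotic dose, judged against a fixed window (8 hours, later 4, then 6). Below is a minimal sketch of that computation; the function and field names are illustrative and do not reproduce any actual CMS or Joint Commission measure specification.

# Minimal sketch: does the first antibiotic dose fall within the TFAD window?
from datetime import datetime, timedelta

def within_tfad_window(arrival: datetime, first_antibiotic: datetime,
                       window_hours: float = 4.0) -> bool:
    """Return True if the first antibiotic dose was given within the window."""
    return (first_antibiotic - arrival) <= timedelta(hours=window_hours)

arrival = datetime(2008, 3, 1, 10, 0)          # ED arrival at 10:00
first_dose = datetime(2008, 3, 1, 13, 30)      # first antibiotic at 13:30
print(within_tfad_window(arrival, first_dose))                    # True under the 4-hour rule
print(within_tfad_window(arrival, first_dose, window_hours=6.0))  # True under the relaxed 6-hour rule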


Annals of Internal Medicine | 2011

Advancing the science of patient safety

Paul G. Shekelle; Peter J. Pronovost; Robert M. Wachter; Stephanie L. Taylor; Sydney M. Dy; Robbie Foy; Susanne Hempel; Kathryn M. McDonald; John Øvretveit; Lisa V. Rubenstein; Alyce S. Adams; Peter B. Angood; David W. Bates; Leonard Bickman; Pascale Carayon; Liam Donaldson; Naihua Duan; Donna O. Farley; Trisha Greenhalgh; John Haughom; Eileen T. Lake; Richard Lilford; Kathleen N. Lohr; Gregg S. Meyer; Marlene R. Miller; D. Neuhauser; Gery W. Ryan; Sanjay Saint; Kaveh G. Shojania; Stephen M. Shortell

Despite a decade's worth of effort, patient safety has improved slowly, in part because of the limited evidence base for the development and widespread dissemination of successful patient safety practices. The Agency for Healthcare Research and Quality sponsored an international group of experts in patient safety and evaluation methods to develop criteria to improve the design, evaluation, and reporting of practice research in patient safety. This article reports the findings and recommendations of this group, which include greater use of theory and logic models, more detailed descriptions of interventions and their implementation, enhanced explanation of desired and unintended outcomes, and better description and measurement of context and of how context influences interventions. Using these criteria and measuring and reporting contexts will improve the science of patient safety.


JAMA | 2008

The Wisdom and Justice of Not Paying for “Preventable Complications”

Peter J. Pronovost; Christine A. Goeschel; Robert M. Wachter

FAR TOO MANY PATIENTS EXPERIENCE PREVENTABLE HARM from medical care in US hospitals. To promote quality and safety, many employers and insurers are linking financial incentives to clinical performance. These programs, often called pay for performance, use a carrot (pay more for better quality) or a stick (pay less for lower quality). To date, most pay-for-performance programs have encouraged physicians to use evidence-based interventions or improve patient satisfaction. The Centers for Medicare & Medicaid Services (CMS) has taken the lead, with many insurers following, in linking pay for performance to reducing harm. In October 2008, hospitals will no longer derive additional payments they sometimes receive when Medicare patients develop 1 of the following 8 preventable complications: objects (such as surgical instruments or sponges) left in patients after surgery, hospital-acquired urinary tract infections, central line–associated bloodstream infections, administration of incompatible blood products, air embolism, patient falls, mediastinitis after cardiac surgery, and pressure ulcers. In addition, CMS has published that conditions being considered for 2009 expansion of the list include ventilator-associated pneumonia, Staphylococcus aureus septicemia, and deep venous thrombosis or pulmonary embolism. The tacit assumption of the “not paid for preventable complications” approach is that an error occurred in a patient’s care that, if avoided, would have prevented the harm and ensuing costs. For one complication on the CMS list, foreign objects inadvertently left in patients after surgery, this is undeniably true. Linking errors to harm for the remaining complications is more complex. For strategies built around the “not paid for preventable complications” concept to be clinically and morally acceptable and to achieve the policy goal of improving quality of care, it must be certain that preventable complications are important and measurable and truly are preventable. In this Commentary, we discuss the CMS initiative in the context of these metrics. Complications Should Be Important and Measurable


BMJ Quality & Safety | 2011

What context features might be important determinants of the effectiveness of patient safety practice interventions?

Stephanie L. Taylor; Sydney M. Dy; Robbie Foy; Susanne Hempel; Kathryn M. McDonald; John Øvretveit; Peter J. Pronovost; Lisa V. Rubenstein; Robert M. Wachter; Paul G. Shekelle

Background Differences in contexts (eg, policies, healthcare organisation characteristics) may explain variations in the effects of patient safety practice (PSP) implementations. However, knowledge of which contextual features are important determinants of PSP effectiveness is limited and consensus is lacking on a taxonomy of which contexts matter. Methods Iterative, formal discussions were held with a 22-member technical expert panel composed of experts or leaders in patient safety, healthcare systems, and methods. First, potentially important contextual features were identified, focusing on five PSPs. Then, two surveys were conducted to determine the context likely to influence PSP implementations. Results The panel reached a consensus on a taxonomy of four broad domains of contextual features important for PSP implementations: safety culture, teamwork and leadership involvement; structural organisational characteristics (eg, size, organisational complexity or financial status); external factors (eg, financial or performance incentives or PSP regulations); and availability of implementation and management tools (eg, training, organisational incentives). Panelists also tended to rate specific patient safety culture, teamwork and leadership contexts as high priority for assessing their effects on PSP implementations, but tended to rate specific organisational characteristic contexts as high priority only for use in PSP evaluations. Panelists appeared split on whether specific external factors and implementation/management tools were important for assessment or only description. Conclusion This work can guide research commissioners and evaluators on the contextual features of PSP implementations that are important to report or evaluate. It represents a first step towards developing guidelines on contexts in PSP implementation evaluations. However, the science of context measurement needs maturing.

Collaboration


Dive into Robert M. Wachter's collaborations.

Top Co-Authors

John M. Luce

University of California

Lee Goldman

University of California

Arpana R. Vidyarthi

National University of Singapore
