Publication


Featured research published by Stephen K. Plume.


JAMA | 1996

A regional intervention to improve the hospital mortality associated with coronary artery bypass graft surgery. The Northern New England Cardiovascular Disease Study Group.

Gerald T. O'Connor; Stephen K. Plume; Elaine M. Olmstead; Morton; Christopher T. Maloney; William C. Nugent; Felix Hernandez; Robert A. Clough; Bruce J. Leavitt; Laurence H. Coffin; Charles A. S. Marrin; D. Wennberg; John D. Birkmeyer; David C. Charlesworth; David J. Malenka; Hebe B. Quinton; Joseph F. Kasper

OBJECTIVE: To determine whether an organized intervention including data feedback, training in continuous quality improvement techniques, and site visits to other medical centers could improve the hospital mortality rates associated with coronary artery bypass graft (CABG) surgery.

DESIGN: Regional intervention study. Patient demographic and historical data, body surface area, cardiac catheterization results, priority of surgery, comorbidity, and status at hospital discharge were collected on CABG patients in Northern New England between July 1, 1987, and July 31, 1993.

SETTING: This study included all 23 cardiothoracic surgeons practicing in Maine, New Hampshire, and Vermont during the study period.

PATIENTS: Data were collected on 15,095 consecutive patients undergoing isolated CABG procedures in Maine, New Hampshire, and Vermont during the study period.

INTERVENTIONS: A three-component intervention aimed at reducing CABG mortality was fielded in 1990 and 1991. The interventions included feedback of outcome data, training in continuous quality improvement techniques, and site visits to other medical centers.

MAIN OUTCOME MEASURE: A comparison of the observed and expected hospital mortality rates during the postintervention period.

RESULTS: During the postintervention period, we observed the outcomes for 6,488 consecutive cases of CABG surgery. There were 74 fewer deaths than would have been expected. This 24% reduction in the hospital mortality rate was statistically significant (P = .001). The reduction in mortality rate was relatively consistent across patient subgroups and was temporally associated with the interventions.

CONCLUSION: We conclude that a multi-institutional, regional model for the continuous improvement of surgical care is feasible and effective. This model may have applications in other settings.
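The headline figures imply an expected-death count that the abstract does not state directly. A minimal Python sketch of the back-calculation, inferring expected and observed deaths from the reported 74 averted deaths and 24% reduction:

```python
# Back-of-the-envelope check of the reported CABG mortality reduction.
# From the abstract: 6,488 post-intervention cases, 74 fewer deaths than
# expected, a 24% reduction. Expected deaths are inferred, not reported.

cases = 6488
deaths_averted = 74
reported_reduction = 0.24

expected_deaths = deaths_averted / reported_reduction   # ~308
observed_deaths = expected_deaths - deaths_averted      # ~234

print(f"expected deaths: {expected_deaths:.0f} "
      f"({expected_deaths / cases:.1%} expected mortality)")
print(f"observed deaths: {observed_deaths:.0f} "
      f"({observed_deaths / cases:.1%} observed mortality)")
print(f"observed/expected ratio: {observed_deaths / expected_deaths:.2f}")
```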


Annals of Internal Medicine | 1998

Building Measurement and Data Collection into Medical Practice

Eugene C. Nelson; Mark E. Splaine; Paul B. Batalden; Stephen K. Plume

Physicians are taught the scientific method in medical school, and they use it daily to care for patients as they observe and assimilate clinical data and recommend a course of action. Active engagement in the scientific method gives physicians the opportunity not only to deliver care effectively to individual patients but also to improve care for future patients by measuring results and considering whether better ways to measure them may exist. However, physicians often have little time to reflect on their practices and collect data systematically over time to enhance their understanding of the processes and outcomes of care. Nonetheless, improvement requires measurement. If physicians are not actively involved in data collection and measurement to improve the quality and value of their own work, who will be [1]? We present case examples of clinicians who used data for improvement, and we offer guidance for building measurement into daily practice.

A Measurement Story

Good measurement can help physicians improve the care they provide. The following case describes the experience of busy clinicians who used measurement for improvement. An internist in a multispecialty group practice with several locations learned about a colleague in another community who had used a telephone protocol to streamline care for women with recurrent urinary tract infection. The internist was curious and a little skeptical but decided to try the protocol out for herself. After consulting with a partner who had gathered several protocols, she brought together her clinical team to organize a small test of protocol care and measure the results.

The internist and her team targeted a specific population (women 18 years of age or older who telephoned the office with symptoms of dysuria), established a broad aim (improvement of clinical outcomes and patient satisfaction and reduction of costs of care), and selected a balanced set of outcome measures to evaluate the protocol (clinical outcomes, including symptom resolution, side effects, and complications; costs, including those of urine cultures, office visits, and first-line antibiotics; and patient satisfaction). When they analyzed and discussed their existing care process, members of the team learned that different physicians handled similar patients in very different ways. For example, they varied in methods of risk assessment, use of diagnostic tests, choice of antibiotics, and approach to patient follow-up. The protocol that the team adopted was based on a combination of their experience, their colleagues' work, and the scientific literature [2, 3]. The protocol divided women into high-risk and low-risk groups; low-risk patients received telephone treatment by a nurse-administered algorithm and a follow-up telephone call at 7 days to assess results. Before embarking on full-scale testing, the team had a single nurse test the protocol on 10 patients and revised it on the basis of that experience.

When the team studied their results for the first 130 consecutive patients with urinary tract infections (mean age, 55 years; high-risk patients, 52%), they found that they had used the protocol with 9 patients per month, that 21% of patients were given same-day office visits, that 44% of patients had received a urine culture (which had been universal procedure before), and that 60% of patients were treated with the first-line antibiotic suggested by the protocol. Telephone follow-up was achieved for 100 of the 130 patients. Of these patients, 87% had symptom resolution in 7 days, 11% had side effects of medication, and 1% had a clinical complication. All of the patients whom nurses managed by telephone with the protocol reported high satisfaction. Many patients volunteered that they were delighted to receive treatment without having to make an office visit.

Measurement and Improvement

Measurement and improvement are inextricably intertwined. The preceding case shows how measurement can support clinical improvement in local settings [4]. The urinary tract infection team began with curiosity about a novel clinical approach [5]. They wrote a broad aim statement that called for a balanced set of outcome measures and built a structured observation point into the patient follow-up routine to gather information on clinical outcomes, costs, and patient satisfaction. After promptly running a small pilot test, they began to use the new protocol and collected data as the change was taking place. They analyzed both qualitative and quantitative results to assess the impact of their innovation and to determine whether the new approach should be adopted, modified, or abandoned.

Measurement and improvement are two sides of the same coin. The connections are evident in the simple model for improvement that was presented in the introductory paper in this series [6]. The model comprises three questions. Aim: What are we trying to accomplish? Measures: How will we know that a change is an improvement? Changes: What changes can we make that we think will lead to an improvement? The model also incorporates the Plan-Do-Study-Act cycle (plan the change, do the change, study the results, and act on the basis of what has been learned). The second question in the model specifically calls for measurement, but data collection is also integral to all of the steps in the Plan-Do-Study-Act cycle: measurement methods are described in the Plan step; data are gathered in the Do step; information is analyzed in the Study step; and key measures are monitored in the Act step.

Principles of Measurement

For measurement to be helpful in the improvement effort, a few simple principles can act as guides.

Principle 1: Seek usefulness, not perfection, in the measurement. The urinary tract infection team focused on key clinical results and patient feedback, even though they could have chosen to cover more territory. They skipped baseline data collection and opted for a prompt feasibility test on 10 patients, reflecting an emphasis on practicality rather than comprehensiveness. It helps to begin with a small, useful data set that fits your work environment, time limitations, and cost constraints. The utility of data is directly related to the timeliness of feedback and the appropriateness of its level of detail for the persons who use it. The choice of measures should be strongly influenced by considering who will use the data and what they will use it for. It may be helpful to gather baseline data; however, gathering data over time is often sufficient to spot effects in time series analyses. The goal is continuous improvement with concurrent, ongoing measurement of impact.

Principle 2: Use a balanced set of process, outcome, and cost measures. The urinary tract infection team wanted to do more than just cut costs. They sought a better way to treat infections that would yield better clinical outcomes, fewer side effects, higher patient satisfaction, and lower treatment costs. They wanted to measure clinical value: that is, outcomes in relation to costs [7]. Medical care systems comprise subprocesses that interact, flow into and out of one another, and contain feedback loops. They produce a fluid family of results that include clinical outcomes, functional status, risk level, patient satisfaction, and costs. This complexity has important implications for tracking attempts to make improvements; most important, it requires a mix or balance of measures to do it justice. Balanced measures may cover upstream processes and downstream outcomes, to link causes with effects; anticipated positive outcomes and potential adverse outcomes; results of interest to different stakeholders (such as patient, family, employer, community, payor, and clinician), because participants have differing viewpoints on the relative importance of the many manifestations of care; and cumulative results related to the overall aim as well as specific outcomes for a particular change cycle [8]. More detailed explorations of balanced measures of quality and value have been published elsewhere [9, 10].

Principle 3: Keep measurement simple; think big, but start small. The urinary tract infection team's broad aim was to improve outcomes and lower costs, but they selected a sparse set of outcome measures. Principles 1 and 2 pull in different directions, creating the need for principle 3. Anyone who wants to improve a system in the real world must balance a fast start and lean measurement with a broader understanding of the complex web of causation [11]. We recommend that you recognize and discuss the true complexity of data collection, but when you are ready to make the data collection plan, strive for simplicity amidst the clutter and focus on a limited, manageable, meaningful set of starter measures.

Principle 4: Use qualitative and quantitative data. The urinary tract infection team used quantitative data to create tension for change and to measure impact on clinical behavior. They used qualitative data to learn how the physicians, nurses, and patients felt about the new system. Data and measures are meant to reflect reality, but they are not reality itself. Reality has an objective and a subjective face, and both are important. Quantitative measures are better at capturing the objective world, whereas qualitative measures are better at reflecting subjective issues [12].

Principle 5: Write down the operational definitions of the measures. The urinary tract infection team wrote operational definitions for clinical outcomes and medical costs. For example, to measure symptom resolution, nurses telephoned patients 7 days after their index date and asked, “Are you still bothered by your urinary tract symptoms? Please answer yes or no.” The clarity of the signal sent by measures depends on how well everyone doing the measurement understands operational definitions and on how consistently they are used [13]. An operational definition provides a clear method for scoring or
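The nurse-administered telephone algorithm described above lends itself to a small decision sketch. In this hypothetical Python version, only the protocol population (women 18 years or older with dysuria), the high-risk/low-risk split, and the 7-day follow-up call come from the text; the specific risk criteria are invented placeholders:

```python
from dataclasses import dataclass

# Hypothetical sketch of a nurse-administered telephone triage protocol
# of the kind described above. The risk criteria below are placeholders;
# the article's actual stratification rules are not given in this excerpt.

@dataclass
class DysuriaCall:
    age: int
    pregnant: bool = False
    fever: bool = False
    flank_pain: bool = False
    diabetes: bool = False

def is_high_risk(call: DysuriaCall) -> bool:
    # Placeholder stratification: any complicating factor means high risk.
    return call.pregnant or call.fever or call.flank_pain or call.diabetes

def triage(call: DysuriaCall) -> str:
    if call.age < 18:
        return "outside protocol population: refer to clinician"
    if is_high_risk(call):
        return "high risk: schedule same-day office visit"
    # Low-risk patients get telephone treatment plus a day-7 follow-up
    # call capturing symptom resolution, side effects, and satisfaction.
    return "low risk: telephone treatment; follow-up call at day 7"

print(triage(DysuriaCall(age=34)))
```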


The Joint Commission journal on quality improvement | 1996

Improving Health Care, Part 1: The Clinical Value Compass

Eugene C. Nelson; Julie J. Mohr; Paul B. Batalden; Stephen K. Plume

CLINICAL VALUE COMPASS APPROACH: The Clinical Value Compass, named to reflect its similarity in layout to a directional compass, has at its four cardinal points (1) functional status, risk status, and well-being; (2) costs; (3) satisfaction with health care and perceived benefit; and (4) clinical outcomes. To manage and improve the value of health care services, providers will need to measure the value of care for similar patient populations, analyze the internal delivery processes, run tests of changed delivery processes, and determine whether these changes lead to better outcomes and lower costs.

GETTING STARTED (OUTCOMES AND AIM): In the case example, the team's aim is to find ways to continually improve the quality and value of care for patients with acute myocardial infarction (AMI).

VALUE MEASURES (SELECT A SET OF OUTCOME AND COST MEASURES): Four to 12 outcome and cost measures are sufficient to get started. In the case example, the team chose one or more measures for each quadrant of the value compass.

OPERATIONAL DEFINITION OF MEASURES: An operational definition is a clearly specified method explaining how to measure a variable. Measures in the case example were based on information from the medical record, administrative and financial records, and patient reports and ratings at eight weeks postdischarge.

COMMENTS: Measurement systems that quantify the quality of processes and results of care are often add-ons to routine care delivery. However, the process of measurement should be intertwined with the process of care delivery, so that front-line providers are involved both in managing the patient and in measuring the process and the related outcomes and costs.
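For concreteness, the four quadrants could be carried as a simple per-cohort record. A minimal Python sketch, assuming nothing about the authors' actual instruments; the field names are illustrative, not the article's schema:

```python
from dataclasses import dataclass, field

# Minimal sketch of a "clinical value compass" record for one patient
# cohort. The four quadrants come from the article; the field names are
# illustrative placeholders, not a schema the authors define.

@dataclass
class ValueCompass:
    functional: dict = field(default_factory=dict)    # functional/risk status, well-being
    costs: dict = field(default_factory=dict)         # direct and indirect costs
    satisfaction: dict = field(default_factory=dict)  # satisfaction, perceived benefit
    clinical: dict = field(default_factory=dict)      # clinical outcomes

# Placeholder entries for an AMI cohort measured at 8 weeks postdischarge;
# values are left as None because no figures are reported in the abstract.
ami_cohort = ValueCompass(
    functional={"physical_function_score_8wk": None},
    costs={"total_direct_cost_usd": None},
    satisfaction={"overall_satisfaction_rating": None},
    clinical={"in_hospital_mortality": None},
)
print(ami_cohort)
```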


Frontiers of health services management | 1998

Building a quality future.

Eugene C. Nelson; Paul B. Batalden; Julie J. Mohr; Stephen K. Plume

SUMMARY: How can healthcare leaders stay ahead of the curve? What can they do to see what the future holds and to secure a place for their employees and their organizations? They must begin doing today what they need to do to survive tomorrow. Furthermore, they must take wise action today or there will be no tomorrow. This article looks into the future and connects it with what we must see and do today. The article begins with a glimpse of the future and with an exploration of what people really want from health and healthcare. Next, it examines what appear to be inexorable megatrends and healthcare trends that are sweeping through society. This leads us to consider the quality and value imperatives that must be faced to secure a stake in healthcare delivery. We then discuss a model for managing care for individual patients and small populations by focusing on where patients, populations, and caregivers meet: at the front lines of patient care. We conclude with some advice on how to build sustainable organizations by exploiting the inevitable.


The Joint Commission journal on quality improvement | 1995

Report Cards or Instrument Panels: Who Needs What?

Eugene C. Nelson; Paul B. Batalden; Stephen K. Plume; Nancy T. Mihevc; William Swartz

BACKGROUND: The report card movement in health care is a positive response to legitimate customer needs and requirements for comparative information on quality and costs. At the same time, providers have a legitimate concern about potential problems with gathering and using valid data in a prudent manner. Report cards have problems that often detract from their potentially constructive uses. In response to this concern, the authors propose instrument panels, a newer concept in health care that projects an action-oriented, decision-making image, in contrast to the static, judgmental image of report cards.

EXAMPLES: Descriptions are given of three types of instrument panels based on work in progress in the Dartmouth-Hitchcock health care system, a regional, integrated delivery system that serves the population of New Hampshire and parts of Vermont and Massachusetts: a 450-physician group practice (The Hitchcock Clinic), which provides more than one million visits per year in more than 25 locations; a tertiary health care facility (Mary Hitchcock Memorial Hospital) with more than 300,000 patient days; and a prepaid health plan (Matthew Thornton Health Plan) with approximately 120,000 members.

SUMMARY: It would be wise and efficient for providers to design instrument panel data collection systems that can feed directly into report cards, leading to the triple benefit of enhancing accuracy, reducing total costs, and increasing overall utility to both providers and their customers.


Quality management in health care | 1997

Continually improving the health and value of health care for a population of patients: the panel management process.

Paul B. Batalden; Julie J. Mohr; Eugene C. Nelson; Stephen K. Plume; Baker Gr; John H. Wasson; Stoltz Pk; Mark E. Splaine; Wisniewski Jj

Today's primary care provider faces the challenge of caring for individual patients as well as for populations of patients. This article offers a model, the panel management process, for understanding and managing these activities and relationships. The model integrates lessons learned during the past decade as we have worked to understand the continual improvement of health care by first understanding that care as a process and system.


QRB - Quality Review Bulletin | 1992

A Methodology for QI in the Coronary Artery Bypass Grafting Procedure Involving Comparative Process Analysis

Joseph F. Kasper; Stephen K. Plume; Gerald T. O’Connor

This report describes an example of applying comparative process analysis to improve surgical procedures. This approach to health care quality improvement relies on combining techniques from the technical disciplines of systems analysis and systems engineering with concepts embodied in the philosophies of total quality management. Coronary artery bypass grafting (CABG) has been examined in a cooperative observational study involving an engineer, cardiac surgeons, perfusionists, nurses, and an anesthesiologist. A baseline process flow for the CABG procedure was developed, against which interinstitutional variations among the five participating medical centers have been identified. On the basis of analysis of the variations, efforts are under way to develop a strategy for incremental continuous improvement in the CABG procedure in each of the five institutions. On the basis of the perceived success of the first phase of the activity, a second phase, wider in scope, has been undertaken.
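One way to mechanize the interinstitutional comparison is to diff each center's observed step sequence against the baseline process flow. A Python sketch under that assumption, with hypothetical step names (the study's actual process maps are far more detailed):

```python
import difflib

# Illustrative sketch of comparative process analysis: diff one center's
# observed CABG process flow against a shared baseline flow. Step names
# are hypothetical placeholders, not taken from the study.

baseline = ["anesthesia induction", "cannulation", "initiate bypass",
            "construct grafts", "wean from bypass", "decannulation"]

center_a = ["anesthesia induction", "cannulation", "cool patient",
            "initiate bypass", "construct grafts", "wean from bypass",
            "decannulation"]

matcher = difflib.SequenceMatcher(None, baseline, center_a)
for tag, i1, i2, j1, j2 in matcher.get_opcodes():
    if tag != "equal":  # report only the interinstitutional variations
        print(f"{tag}: baseline {baseline[i1:i2]} -> center_a {center_a[j1:j2]}")
```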


The Joint Commission journal on quality improvement | 1996

Improving Health Care, Part 3: Clinical Benchmarking for Best Patient Care

Julie J. Mohr; Christina C. Mahoney; Eugene C. Nelson; Paul B. Batalden; Stephen K. Plume

BACKGROUND: Benchmarking, which shows that a much better way of doing something may be possible, stimulates local interest in making changes previously thought impossible.

A PLANNING WORKSHEET: The worksheet has five basic steps: identify measures; determine the resources needed to find the best of the best; design a data collection method and gather data; measure the best against your own performance to determine the gap; and identify the best practices producing best-in-class results.

CASE EXAMPLE (BOWEL SURGERY): The Accelerating Clinical Improvement Bowel Surgery Team at Dartmouth-Hitchcock Medical Center (Lebanon, NH) was formed in November 1994 to improve the care of patients with diagnosis-related group (DRG) 148 or 149. Consulting two large administrative databases and the medical literature, the team found that a substantial gap existed between its bowel surgery delivery process and the best results, as far as they were known, among comparable organizations. After flowcharting the delivery process, the team identified the high-leverage steps: same-day services, the general surgery clinic, and routine care. The team then planned three successive PDCA (plan-do-check-act) cycles: use of same-day services for all elective surgery patients, establishment of a standardized preoperative bowel preparation, and use of pre- and postoperative routine care orders. These improvement cycles reduced length of stay from 9.66 to 8.29 days; implementation of a critical pathway produced a further reduction, to 5.04 days.

CONCLUSION: Benchmarking can play an integral role in clinical improvement work; it can stimulate wise clinical changes and promote measured improvements in quality and value.
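The reported length-of-stay trajectory is easy to sanity-check. In this short Python sketch, the day counts come from the abstract, while the percentage reductions are computed here for illustration, not reported there:

```python
# Length-of-stay figures from the abstract; percentage reductions
# relative to baseline are computed here, not stated in the text.
baseline_los = 9.66  # days

for label, los in [("after three PDCA cycles", 8.29),
                   ("after critical pathway", 5.04)]:
    drop = (baseline_los - los) / baseline_los
    print(f"{label}: {los:.2f} days ({drop:.0%} below the 9.66-day baseline)")
```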


Quality management in health care | 1997

Transforming patient feedback into strategic action plans.

April S. Levine; Stephen K. Plume; Eugene C. Nelson

Patients' perceptions provide valuable insight into areas for improvement and opportunities for strategic planning. Using both quantitative and qualitative research methods, the authors examined what drives patient satisfaction, what delights patients, and what disappoints patients. A case study approach was used to develop strategic recommendations for two market segments. For primary care patients, the recommendations revolve around “provider caring” and “choice.” For specialty patients, they concentrate on “provider caring,” “provider competence,” and “office wait time.”


The Joint Commission journal on quality improvement | 1996

Improving Health Care, Part 2: A Clinical Improvement Worksheet and Users' Manual

Eugene C. Nelson; Paul B. Batalden; Stephen K. Plume; Julie J. Mohr

BACKGROUND: Small tests of change can be conducted in everyday clinical practice, thereby turning the health care delivery team into reflective practitioners who can learn from, and improve on, their work.

CLINICAL IMPROVEMENT WORKSHEET AND USERS' MANUAL (CASE STUDY): The worksheet has been designed as a simple tool for applying clinical improvement to the core clinical delivery process. A carpal tunnel surgery (CTS) team was formed to improve outcomes and reduce costs for patients and to promote improvements in quality and value. The team wanted to determine whether surgical patients treated with local anesthesia in an ambulatory setting have superior satisfaction with care, comparable clinical and functional outcomes, and lower medical (and social) costs. For the first time, standardized assessments of patient case mix, treatment processes, and health outcomes were designed into the delivery process by gathering data from the patient and from the surgeon presurgery and at 4 weeks and 12 weeks postsurgery. Results for the first 49 of 50 to 100 consecutive patients show improved outcomes and reductions in costs, from

Collaboration


Dive into Stephen K. Plume's collaborations.

Top Co-Authors

Paul B. Batalden

The Dartmouth Institute for Health Policy and Clinical Practice


Felix Hernandez

Eastern Maine Medical Center
