Mark E. Splaine
Dartmouth College
Publication
Featured research published by Mark E. Splaine.
Annals of Internal Medicine | 1998
Eugene C. Nelson; Mark E. Splaine; Paul B. Batalden; Stephen K. Plume
Physicians are taught the scientific method in medical school, and they use it daily to care for patients as they observe and assimilate clinical data and recommend a course of action. Active engagement in the scientific method gives physicians the opportunity not only to deliver care effectively to individual patients but also to improve care for future patients by measuring results and considering whether better ways to measure them may exist. However, physicians often have little time to reflect on their practices and collect data systematically over time to enhance their understanding of the processes and outcomes of care. Nonetheless, improvement requires measurement. If physicians are not actively involved in data collection and measurement to improve the quality and value of their own work, who will be [1]? We present case examples of clinicians who used data for improvement, and we offer guidance for building measurement into daily practice.

A Measurement Story

Good measurement can help physicians improve the care they provide. The following case describes the experience of busy clinicians who used measurement for improvement. An internist in a multispecialty group practice with several locations learned about a colleague in another community who had used a telephone protocol to streamline care for women with recurrent urinary tract infection. The internist was curious and a little skeptical but decided to try the protocol out for herself. After consulting with a partner who had gathered several protocols, she brought together her clinical team to organize a small test of protocol care and measure the results.
The internist and her team targeted a specific population (women 18 years of age or older who telephoned the office with symptoms of dysuria), established a broad aim (improvement of clinical outcomes and patient satisfaction and reduction of costs of care), and selected a balanced set of outcome measures to evaluate the protocol (clinical outcomes, including symptom resolution, side effects, and complications; costs, including those of urine cultures, office visits, and first-line antibiotics; and patient satisfaction). When they analyzed and discussed their existing care process, members of the team learned that different physicians handled similar patients in very different ways. For example, they varied in methods of risk assessment, use of diagnostic tests, choice of antibiotics, and approach to patient follow-up. The protocol that the team adopted was based on a combination of their experience, their colleagues' work, and the scientific literature [2, 3]. The protocol divided women into high-risk and low-risk groups; low-risk patients received telephone treatment by a nurse-administered algorithm and a follow-up telephone call at 7 days to assess results. Before embarking on full-scale testing, a single nurse tested the protocol on 10 patients, and the team revised it on the basis of that experience. When the team studied their results for the first 130 consecutive patients with urinary tract infections (mean age, 55 years; high-risk patients, 52%), they found that they had used the protocol with 9 patients per month, that 21% of patients were given same-day office visits, that 44% of patients had received a urine culture (which had been universal procedure before), and that 60% of patients were treated with the first-line antibiotic suggested by the protocol. Telephone follow-up was achieved for 100 of the 130 patients. Of these patients, 87% had symptom resolution in 7 days, 11% had side effects of medication, and 1% had a clinical complication.
All of the patients whom nurses managed by telephone with the protocol reported high satisfaction. Many patients volunteered that they were delighted to receive treatment without having to make an office visit.

Measurement and Improvement

Measurement and improvement are inextricably intertwined. The preceding case shows how measurement can support clinical improvement in local settings [4]. The urinary tract infection team began with curiosity about a novel clinical approach [5]. They wrote a broad aim statement that called for a balanced set of outcome measures and built a structured observation point into the patient follow-up routine to gather information on clinical outcomes, costs, and patient satisfaction. After promptly running a small pilot test, they began to use the new protocol and collected data as the change was taking place. They analyzed both qualitative and quantitative results to assess the impact of their innovation and to determine whether the new approach should be adopted, modified, or abandoned. Measurement and improvement are two sides of the same coin. The connections are evident in the simple model for improvement that was presented in the introductory paper in this series [6]. The model comprises three questions. Aim: What are we trying to accomplish? Measures: How will we know that a change is an improvement? Changes: What changes can we make that we think will lead to an improvement? The model also incorporates the Plan-Do-Study-Act cycle (plan the change, do the change, study the results, and act on the results on the basis of what has been learned). The second question in the model specifically calls for measurement, but data collection is also integral to all of the steps in the Plan-Do-Study-Act cycle. Measurement methods are described in the Plan step; data are gathered in the Do step; information is analyzed in the Study step; and key measures are monitored in the Act step.
Principles of Measurement

For measurement to be helpful in the improvement effort, a few simple principles can act as guides.

Principle 1: Seek usefulness, not perfection, in the measurement.

The urinary tract infection team focused on key clinical results and patient feedback, even though they could have chosen to cover more territory. They skipped baseline data collection and opted for a prompt feasibility test on 10 patients. This reflects an emphasis on practicality rather than comprehensiveness. It helps to begin with a small, useful data set that fits your work environment, time limitations, and cost constraints. The utility of data is directly related to the timeliness of feedback and the appropriateness of its level of detail for the persons who use it. The choice of measures should be strongly influenced by considering who will use the data and what they will use it for. It may be helpful to gather baseline data; however, gathering data over time is often sufficient to spot effects in time series analyses. The goal is continuous improvement with concurrent, ongoing measurement of impact.

Principle 2: Use a balanced set of process, outcome, and cost measures.

The urinary tract infection team wanted to do more than just cut costs. They sought a better way to treat infections that would yield better clinical outcomes, fewer side effects, higher patient satisfaction, and lower treatment costs. They wanted to measure clinical value: that is, outcomes in relation to costs [7]. Medical care systems comprise subprocesses that interact, flow into and out of one another, and contain feedback loops. They produce a fluid family of results that include clinical outcomes, functional status, risk level, patient satisfaction, and costs. This complexity has important implications for tracking attempts to make improvements; most important, it requires a mix or balance of measures to do it justice.
Balanced measures may cover upstream processes and downstream outcomes to link causes with effects; anticipated positive outcomes and potential adverse outcomes; results of interest to different stakeholders (such as patient, family, employer, community, payor, and clinician) because participants have differing viewpoints on the relative importance of the many manifestations of care; and cumulative results related to the overall aim as well as specific outcomes for a particular change cycle [8]. More detailed explorations of balanced measures of quality and value have been published elsewhere [9, 10].

Principle 3: Keep measurement simple; think big, but start small.

The urinary tract infection team's broad aim was to improve outcomes and lower costs, but they selected a sparse set of outcome measures. Principles 1 and 2 operate in different directions, creating the need for principle 3. Anyone who wants to improve a system in the real world must balance a fast start and lean measurement with a broader understanding of the complex web of causation [11]. We recommend that you recognize and discuss the true complexity of data collection, but when you are ready to make the data collection plan, strive for simplicity amidst the clutter and focus on a limited, manageable, meaningful set of starter measures.

Principle 4: Use qualitative and quantitative data.

The urinary tract infection team used quantitative data to create tension for change and to measure impact on clinical behavior. They used qualitative data to learn how the physicians, nurses, and patients felt about the new system. Data and measures are meant to reflect reality, but they are not reality itself. Reality has an objective and subjective face, and both are important. Quantitative measures are better at capturing the objective world, whereas qualitative measures are better at reflecting subjective issues [12].

Principle 5: Write down the operational definitions of the measures.
The urinary tract infection team wrote operational definitions for clinical outcomes and medical costs. For example, to measure symptom resolution, nurses telephoned patients 7 days after their index date and asked, "Are you still bothered by your urinary tract symptoms? Please answer yes or no." The clarity of the signal sent by measures depends on how well everyone doing the measurement understands the operational definitions and on how consistently they are used [13]. An operational definition provides a clear method for scoring or
Quality of Life Research | 2002
Robert J. Ferguson; Amy B. Robinson; Mark E. Splaine
The SF-36 Health Survey is the most widely used self-report measure of functional health. It is commonly used in both randomized controlled trials (RCT) and non-controlled evaluation of medical or other health services. However, determining a clinically significant change in SF-36 outcomes from pre- to post-intervention, in contrast to statistically significant differences, is often not a focus of medical outcomes research. We propose use of the Reliable Change Index (RCI) in combination with SF-36 norms as one method for researchers, provider groups, and health care policy makers to determine clinically significant healthcare outcomes when the SF-36 is used as a primary measure. The RCI is a statistic that determines the magnitude of change on a given self-report measure necessary to be considered statistically reliable. The RCI has been used to determine clinically significant change in mental health and behavioral medicine outcomes research, but is not widely applied to medical outcomes research. A usable table of RCIs for the SF-36 has been calculated and is presented. Instruction and a case illustration of how to use the RCI table are also provided. Finally, limitations and cautionary guidelines on using SF-36 norms and the RCI to determine clinically significant outcome are discussed.
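The RCI described above follows the standard Jacobson-Truax formulation: divide the observed change score by the standard error of the difference, which is derived from the measure's norm-group standard deviation and its test-retest reliability. A minimal sketch follows; the standard deviation and reliability values are illustrative placeholders, not the SF-36 norms tabulated in the paper.

```python
import math

def reliable_change_index(pre_score, post_score, norm_sd, reliability):
    """Jacobson-Truax Reliable Change Index for a pre/post score pair.

    norm_sd     -- standard deviation of the measure in the norm group
    reliability -- test-retest reliability coefficient of the measure
    """
    standard_error = norm_sd * math.sqrt(1 - reliability)  # SE of measurement
    s_diff = math.sqrt(2 * standard_error ** 2)            # SE of the difference score
    return (post_score - pre_score) / s_diff

# Illustrative values only (not actual SF-36 norms):
rci = reliable_change_index(pre_score=40, post_score=52, norm_sd=10, reliability=0.84)
# |RCI| > 1.96 suggests the change is unlikely (p < .05) to reflect measurement error alone
print(round(rci, 2), abs(rci) > 1.96)  # prints: 2.12 True
```

With these assumed norms, a 12-point improvement exceeds the 1.96 threshold, while a 1-point change would not; a precomputed table like the one in the paper simply tabulates this threshold for each SF-36 scale.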
Quality management in health care | 2002
Paul B. Batalden; Mark E. Splaine
We have witnessed the separation of the care for an individual patient and the concern for the health of a population. As we anticipate the twenty-first century, we see the wisdom of reconnecting these concerns. The knowledge and skills that we will address can help bridge the gap. First, we offer background on seminal work during the twentieth century that set the foundation for the improvement of health care. Next, we describe two major challenges for the continual improvement of health care that lie ahead. Third, we suggest an approach that leaders might use to address major challenges. Fourth, we offer a set of knowledge domains that outline the knowledge and skills that leaders of the improvement of health care will need. Finally, we note two special issues that require additional mention and should not be overlooked. We believe that the combination of these ideas can provide a framework for knowledge building, action taking, and reflection needed by health care leaders in the coming century.
Implementation Science | 2013
Theodore Speroff; Paul V Miles; Denise Dougherty; Brian S. Mittman; Mark E. Splaine
Nearly two decades ago, the International Scientific Symposium on Improving the Quality and Value of Health Care was created to foster the mission of building a scholarly foundation for quality improvement education and research. The program of presentations at the inaugural symposium was composed almost entirely of non-experimental, before-after studies that are subject to significant problems with reliability and validity. The passing years have seen an evolving trend toward robust study designs with more rigorous statistical methods such as process control and interrupted time-series analyses. The foundation for quality improvement established over the past twenty years has created a repository of evidence. Quality improvement publications have proliferated in peer-reviewed journals. Discipline-specific outlets have emerged such as the British Medical Journal of Quality and Safety in Health Care, Implementation Science, Joint Commission Journal on Quality and Patient Safety, Journal for Healthcare Quality, and Journal of Nursing Care Quality. However, the methodology in quality improvement and implementation science must be brought on par with the methods in health economics and related areas of health services research if quality improvement research is to materialize into evidence-based health care and contribute to health care reform. Practitioners and researchers in the field of quality improvement recognize the need to elevate the relevance and value of their research. The potential contribution of implementation science as a strategic approach for obtaining safe, efficient, and effective health care is too easily overlooked without a solid record of publication in the scientific literature. The need to generate the best evidence for systematic quality improvement was the driving factor for a conference on Advancing the Methods for Healthcare Quality Improvement Research, held on May 7-8, 2012 in Arlington, Virginia.
The aims of the conference were 1) to reflect on the past two decades of quality improvement and quality improvement research and appreciate the emergence of methods and techniques in the literature today, and 2) to define quality improvement research, characterize its strengths, and propose future directions for quality improvement research and implementation science. This conference set out to further the goal of elevating quality improvement research to a higher level of validity and value. The conference presented information on the conduct of quality improvement research, including study design, data registries, comparative effectiveness, and the issue of health care disparities. In addition, there were several talks on sophisticated methods including robust quasi-experimental designs, hybrid research designs, and risk-adjusted statistical process control. Through keynote speakers, a call for abstracts for podium and poster presentations, and case-based learning using examples of best methods in the literature, the conference served as a forum on the current state and future needs of quality improvement research and its methodological and technical issues. The proceedings of this conference are presented in this supplement to provide a series of concise papers that describe the key messages of the presentations and a commentary about the contribution to advancing the field of quality improvement and implementation science. Session one was a set of three talks on the architecture of study design for doing quality improvement research presented by Dr. Duncan Neuhauser, Dr. Mark Bauer, and Dr. Peter Margolis. Session two comprised two talks on the implementation of a practice-based learning registry for quality improvement (Dr. Richard Colletti) and advanced statistical process control methods for reporting findings from quality surveillance registries (Dr. Michael Matheny).
Sessions three and four introduced new frontiers for quality improvement research in comparative effectiveness (Dr. David Atkins), disparities research (Dr. Donald Goldmann), implementation science (Dr. John Ovretveit), and the contribution to government contract initiatives (Dr. Joanne Lynn). In session five, Dr. Brian Mittman delivered an eloquent presentation on what the future holds for quality improvement research. Sessions six and seven concluded the conference with audience roundtable discussions on evaluating strengths and weaknesses of quality improvement research using two examples of best practices and making recommendations for the future direction of quality improvement and research. Our objective for this program was to formulate the types of questions relevant to advancing quality improvement and implementation science and their appropriate study designs. Our overall goal is to bring about more effective application of quality improvement science in health care delivery and to identify new areas for application of quality improvement research. With this conference proceeding, we disseminate the highlights. Voice-over recordings of the full presentations, as well as a listing of recommended readings, are available at the Academy for Healthcare Improvement web site (http://www.A4HI.org). By sharing the knowledge gained from this conference, we hope to inspire the spirit of initiative, innovation, motivation, and thinking that fuels the passion for improving health and delivery of health care.
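The statistical process control methods mentioned above for quality surveillance registries often begin with a simple p-chart: plot each period's event proportion against 3-sigma limits derived from the overall rate. A minimal sketch, using made-up monthly counts rather than any data from the conference:

```python
import math

def p_chart_limits(events, denominators):
    """3-sigma control limits for a p-chart of proportion data.

    events       -- event counts per period
    denominators -- cases observed per period
    """
    p_bar = sum(events) / sum(denominators)  # centre line: overall proportion
    limits = []
    for n in denominators:
        sigma = math.sqrt(p_bar * (1 - p_bar) / n)
        limits.append((max(0.0, p_bar - 3 * sigma), min(1.0, p_bar + 3 * sigma)))
    return p_bar, limits

# Made-up monthly complication counts and case volumes:
events = [12, 9, 14, 8, 11, 30]
cases = [100, 95, 110, 90, 105, 100]
p_bar, limits = p_chart_limits(events, cases)

# Flag periods whose observed proportion falls outside the control limits
signals = [e / n > hi or e / n < lo
           for e, n, (lo, hi) in zip(events, cases, limits)]
print(signals)  # prints: [False, False, False, False, False, True]
```

Only the final month, whose rate jumps well above the centre line, signals special-cause variation; the earlier months' fluctuation stays within common-cause limits, which is the distinction these charts exist to make.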
The Journal of ambulatory care management | 1998
Arlene S. Bierman; Magari Es; Jette Am; Mark E. Splaine; John H. Wasson
Understanding the barriers to obtaining care that the population of people age 80 and older (80+) experiences is one of the first steps toward developing organizational and clinical strategies aimed at improving care. This article reviews the data from the 80+ Projects survey to assess the prevalence of barriers to care and identify the characteristics that place the 80+ population at risk. Barriers to access for older adults occur on many levels. Ultimately, the ability to improve health outcomes through reducing barriers to care is dependent on the effectiveness and quality of care received. By recognizing the barriers to care that limit access, health care professionals can begin to develop strategies to eliminate these barriers and improve the health care of older adult patients.
Quality management in health care | 1997
Paul B. Batalden; Julie J. Mohr; Eugene C. Nelson; Stephen K. Plume; Baker Gr; John H. Wasson; Stoltz Pk; Mark E. Splaine; Wisniewski Jj
Today's primary care provider faces the challenge of caring for individual patients as well as caring for populations of patients. This article offers a model—the panel management process—for understanding and managing these activities and relationships. The model integrates some of the lessons learned during the past decade as we have worked to gain an understanding of the continual improvement of health care after we have understood that care as a process and system.
Quality management in health care | 2002
Mark E. Splaine; David C. Aron; Robert S. Dittus; Catarina I. Kiefe; C. Seth Landefeld; Gary E. Rosenthal; William B. Weeks; Paul B. Batalden
In 1998, the Veterans Health Administration invested in the creation of the Veterans Administration National Quality Scholars Fellowship Program (VAQS) to train physicians in new ways to improve the quality of health care. We describe the curriculum for this program and the lessons learned from our experience to date. The VAQS Fellowship program has developed a core improvement curriculum to train postresidency physicians in the scholarship, research, and teaching of the improvement of health care. The curriculum covers seven domains of knowledge related to improvement: health care as a process; variation and measurement; customer/beneficiary knowledge; leading, following, and making changes in health care; collaboration; social context and accountability; and developing new, locally useful knowledge. We combine specific knowledge about the improvement of health care with the use of adult learning strategies, interactive video, and development of learner competencies. Our program provides insights for medical education to better prepare physicians to participate in and lead the improvement of health care.
Quality management in health care | 2004
Eugene C. Nelson; Mark E. Splaine; Stephen K. Plume; Paul B. Batalden
Purpose: To provide guidance on using measurement to support the conduct of local quality improvement projects that will strengthen the evaluation of results and increase their potential for publication. Target Group: Individuals leading quality improvement efforts who wish to enhance their use of measurement. Procedures to Promote Good Measurement: Eleven procedures are offered to promote intelligent measurement in quality improvement research that may become publishable:
1. Start with an important topic.
2. Develop a clear aim statement.
3. Turn the aim statement into key questions.
4. Develop a theory about causes and effects, process changes, and predictable sources of variation.
5. Construct a research design and accompanying dummy data displays to answer your primary research questions.
6. Develop and use operational definitions for each variable needed to make your dummy data displays.
7. Design a data collection plan to gather information on each variable that will enable you to generate reliable, valid, and sensitive measures related to each research question.
8. Pilot test the data collection plan, construct preliminary data displays, and revise your methods based on what you learn.
9. Stay close to the data collection process as the data plan goes from idea to execution.
10. Perform data analysis and display results in a way that answers your key questions.
11. Review and document the strengths and limitations of your measurement work and use this knowledge to guide intelligent interpretation of the observed results.
BMJ Quality & Safety | 2012
Jeremiah R. Brown; Peter A. McCullough; Mark E. Splaine; Louise Davies; Cathy S. Ross; Harold L. Dauerman; John F. Robb; Richard A Boss; David Goldberg; Frank Fedele; Mirle A. Kellett; William Phillips; Peter Ver Lee; Eugene C. Nelson; Todd A. MacKenzie; Gerald T. O'Connor; Mark J. Sarnak; David J. Malenka
Objectives This study evaluates the variation in practice patterns associated with contrast-induced acute kidney injury (CI-AKI) and identifies clinical practices that have been associated with a reduction in CI-AKI. Background CI-AKI is recognised as a complication of invasive cardiovascular procedures and is associated with cardiovascular events, prolonged hospitalisation, end-stage renal disease, and all-cause mortality. Reducing the risk of CI-AKI is a patient safety objective set by the National Quality Forum. Methods This study prospectively collected quantitative and qualitative data from 10 centres, which participate in the Northern New England Cardiovascular Disease Study Group PCI Registry. Quantitative data were collected from the PCI Registry. Qualitative data were obtained through clinical team meetings to map care processes related to CI-AKI and focus groups to understand attitudes towards CI-AKI prophylaxis. Fixed- and random-effects modelling was conducted to test the differences across centres. Results Significant variation in rates of CI-AKI was found across the 10 medical centres. Both fixed effects and mixed effects logistic regression demonstrated significant variability across centres, even after adjustment for baseline covariates (p<0.001 for both modelling approaches). Distinct patterns in reported processes and clinical leadership were identified at centres with lower rates of CI-AKI. These included reducing nil by mouth (NPO) time to 4 h prior to the case, and standardising volume administration protocols in combination with administering three to four high doses of N-acetylcysteine (1200 mg) for each patient. Conclusions These data suggest that clinical leadership and institution-focused efforts to standardise preventive practices can help reduce the incidence of CI-AKI.
Circulation-cardiovascular Quality and Outcomes | 2014
Jeremiah R. Brown; Richard Solomon; Mark J. Sarnak; Peter A. McCullough; Mark E. Splaine; Louise Davies; Cathy S. Ross; Harold L. Dauerman; Janette Stender; Sheila M. Conley; John F. Robb; Kristine Chaisson; Richard A Boss; Peggy Lambert; David Goldberg; Deborah Lucier; Frank Fedele; Mirle A. Kellett; Susan R. Horton; William Phillips; Cynthia Downs; Alan Wiseman; Todd A. MacKenzie; David J. Malenka
Background—Contrast-induced acute kidney injury (CI-AKI) is associated with increased morbidity and mortality after percutaneous coronary interventions and is a patient safety objective of the National Quality Forum. However, no formal quality improvement program to prevent CI-AKI had been conducted. Therefore, we sought to determine whether a 6-year regional multicenter quality improvement intervention could reduce CI-AKI after percutaneous coronary interventions. Methods and Results—We conducted a prospective multicenter quality improvement study to prevent CI-AKI (serum creatinine increase ≥0.3 mg/dL within 48 hours or ≥50% during hospitalization) among 21 067 nonemergent patients undergoing percutaneous coronary interventions at 10 hospitals between 2007 and 2012. Six intervention hospitals participated in the quality improvement intervention. Two hospitals with significantly lower baseline rates of CI-AKI served as benchmark sites and were used to develop the intervention, and 2 hospitals not receiving the intervention were used as controls. Using time series analysis and multilevel Poisson regression with clustering at the hospital level, we calculated adjusted risk ratios for CI-AKI comparing the intervention period to baseline. Adjusted rates of CI-AKI were significantly reduced in hospitals receiving the intervention by 21% (risk ratio, 0.79; 95% confidence interval: 0.67–0.93; P=0.005) for all patients and by 28% in patients with baseline estimated glomerular filtration rate <60 mL/min per 1.73 m2 (risk ratio, 0.72; 95% confidence interval: 0.56–0.91; P=0.007). Benchmark hospitals had no significant changes in CI-AKI. Key qualitative system factors associated with improvement included multidisciplinary teams, limiting contrast volume, standardized fluid orders, intravenous fluid bolus, and patient education about oral hydration.
Conclusions—Simple cost-effective quality improvement interventions can prevent as many as 1 in 5 CI-AKI events in patients undergoing nonemergent percutaneous coronary interventions.
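The study's CI-AKI definition (serum creatinine increase ≥0.3 mg/dL within 48 hours, or ≥50% during hospitalization) translates directly into code. A minimal sketch with hypothetical creatinine values; the function name and parameters are illustrative, not from the registry:

```python
def is_ci_aki(baseline_cr, peak_cr_48h, peak_cr_hospital):
    """Classify contrast-induced AKI per the study definition.

    baseline_cr      -- serum creatinine before the procedure (mg/dL)
    peak_cr_48h      -- highest creatinine within 48 hours (mg/dL)
    peak_cr_hospital -- highest creatinine during the hospitalization (mg/dL)
    """
    absolute_rise = peak_cr_48h - baseline_cr >= 0.3   # >=0.3 mg/dL within 48 h
    relative_rise = peak_cr_hospital >= 1.5 * baseline_cr  # >=50% during stay
    return absolute_rise or relative_rise

# Hypothetical patients:
print(is_ci_aki(1.0, 1.4, 1.4))  # absolute rise of 0.4 within 48 h -> True
print(is_ci_aki(1.0, 1.1, 1.6))  # 60% rise during hospitalization -> True
print(is_ci_aki(1.0, 1.1, 1.2))  # neither criterion met -> False
```

Encoding the outcome definition this way is what allows a registry to compute consistent event rates across 10 hospitals before and after an intervention.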
Collaboration
The Dartmouth Institute for Health Policy and Clinical Practice