Network


Latest external collaborations at the country level. Dive into details by clicking on the dots.

Hotspot


Dive into the research topics where Paul B. Batalden is active.

Publication


Featured research published by Paul B. Batalden.


The Joint Commission journal on quality improvement | 2002

Microsystems in Health Care: Part 1. Learning from High-Performing Front-Line Clinical Units

Eugene C. Nelson; Paul B. Batalden; Thomas P. Huber; Julie J. Mohr; Marjorie M. Godfrey; Linda A. Headrick; John H. Wasson

BACKGROUND Clinical microsystems are the small, functional, front-line units that provide most health care to most people. They are the essential building blocks of larger organizations and of the health system. They are the place where patients and providers meet. The quality and value of care produced by a large health system can be no better than the services generated by the small systems of which it is composed. METHODS A wide net was cast to identify and study a sampling of the best-quality, best-value small clinical units in North America. Twenty microsystems, representing different component parts of the health system, were examined from December 2000 through June 2001, using qualitative methods supplemented by medical record and finance reviews. RESULTS The study of the 20 high-performing sites generated many best practice ideas (processes and methods) that microsystems use to accomplish their goals. Nine success characteristics were related to high performance: leadership, culture, macro-organizational support of microsystems, patient focus, staff focus, interdependence of care team, information and information technology, process improvement, and performance patterns. These success factors were interrelated and together contributed to the microsystems' ability to provide superior, cost-effective care and at the same time create a positive and attractive working environment. CONCLUSIONS A seamless, patient-centered, high-quality, safe, and efficient health system cannot be realized without the transformation of the essential building blocks that combine to form the care continuum.


Quality & Safety in Health Care | 2008

Publication guidelines for quality improvement in health care: evolution of the SQUIRE project

Frank Davidoff; Paul B. Batalden; D Stevens; Greg Ogrinc

In 2005, draft guidelines were published for reporting studies of quality improvement interventions as the initial step in a consensus process for development of a more definitive version. This article contains the full revised version of the guidelines, which the authors refer to as SQUIRE (Standards for QUality Improvement Reporting Excellence). This paper also describes the consensus process, which included informal feedback from authors, editors and peer reviewers who used the guidelines; formal written commentaries; input from a group of publication guideline developers; ongoing review of the literature on the epistemology of improvement and methods for evaluating complex social programmes; a two-day meeting of stakeholders for critical discussion and debate of the guidelines’ content and wording; and commentary on sequential versions of the guidelines from an expert consultant group. Finally, the authors consider the major differences between SQUIRE and the initial draft guidelines; limitations of and unresolved questions about SQUIRE; ancillary supporting documents and alternative versions that are under development; and plans for dissemination, testing and further development of SQUIRE.


The Joint Commission journal on quality improvement | 1993

A Framework for the Continual Improvement of Health Care: Building and Applying Professional and Improvement Knowledge to Test Changes in Daily Work

Paul B. Batalden; Patricia K. Stoltz

We seem to lack a well-defined, comprehensive, and shared understanding of what is required for the continual improvement of health care, at both the organizational and the industry levels. This article presents a framework that defines the new body of knowledge which, when joined with the professional knowledge of health care workers, can make continual improvement possible, and gives requirements for building and applying this knowledge to bring about improvement in health care organizations.


Quality & Safety in Health Care | 2005

Toward stronger evidence on quality improvement. Draft publication guidelines: the beginning of a consensus project

Frank Davidoff; Paul B. Batalden

In contrast with the primary goals of science, which are to discover and disseminate new knowledge, the primary goal of improvement is to change performance. Unfortunately, scholarly accounts of the methods, experiences, and results of most medical quality improvement work are not published, either in print or electronic form. In our view this failure to publish is a serious deficiency: it limits the available evidence on efficacy, prevents critical scrutiny, deprives staff of the opportunity and incentive to clarify thinking, slows dissemination of established improvements, inhibits discovery of innovations, and compromises the ethical obligation to return valuable information to the public. The reasons for this failure are many: competing service responsibilities of and lack of academic rewards for improvement staff; editors’ and peer reviewers’ unfamiliarity with improvement goals and methods; and lack of publication guidelines that are appropriate for rigorous, scholarly improvement work. We propose here a draft set of guidelines designed to help with writing, reviewing, editing, interpreting, and using such reports. We envisage this draft as the starting point for collaborative development of more definitive guidelines. We suggest that medical quality improvement will not reach its full potential unless accurate and transparent reports of improvement work are published frequently and widely.


BMJ Quality & Safety | 2016

SQUIRE 2.0 (Standards for QUality Improvement Reporting Excellence): revised publication guidelines from a detailed consensus process

Greg Ogrinc; Louise Davies; Daisy Goodman; Paul B. Batalden; Frank Davidoff; David P. Stevens

Since the publication of Standards for QUality Improvement Reporting Excellence (SQUIRE 1.0) guidelines in 2008, the science of the field has advanced considerably. In this manuscript, we describe the development of SQUIRE 2.0 and its key components. We undertook the revision between 2012 and 2015 using (1) semistructured interviews and focus groups to evaluate SQUIRE 1.0 plus feedback from an international steering group, (2) two face-to-face consensus meetings to develop interim drafts and (3) pilot testing with authors and a public comment period. SQUIRE 2.0 emphasises the reporting of three key components of systematic efforts to improve the quality, value and safety of healthcare: the use of formal and informal theory in planning, implementing and evaluating improvement work; the context in which the work is done; and the study of the intervention(s). SQUIRE 2.0 is intended for reporting the range of methods used to improve healthcare, recognising that they can be complex and multidimensional. It provides common ground to share these discoveries in the scholarly literature (http://www.squire-statement.org).


Quality & Safety in Health Care | 2002

Improving safety on the front lines: the role of clinical microsystems

Julie J. Mohr; Paul B. Batalden

The clinical microsystem puts medical error and harm reduction into the broader context of safety and quality of care by providing a framework to assess and evaluate the structure, process, and outcomes of care. Eight characteristics of clinical microsystems emerged from a qualitative analysis of interviews with representatives from 43 microsystems across North America. These characteristics were used to develop a tool for assessing the function of microsystems. Further research is needed to assess microsystem performance, outcomes, and safety, and how to replicate “best practices” in other settings.


BMJ Quality & Safety | 2016

Coproduction of healthcare service

Maren Batalden; Paul B. Batalden; Peter A. Margolis; Michael Seid; Gail Armstrong; Lisa Opipari-Arrigan; Hans Hartung

Efforts to ensure effective participation of patients in healthcare are called by many names—patient centredness, patient engagement, patient experience. Improvement initiatives in this domain often resemble the efforts of manufacturers to engage consumers in designing and marketing products. Services, however, are fundamentally different from products; unlike goods, services are always ‘coproduced’. Failure to recognise this unique character of a service and its implications may limit our success in partnering with patients to improve health care. We trace a partial history of the coproduction concept, present a model of healthcare service coproduction and explore its application as a design principle in three healthcare service delivery innovations. We use the principle to examine the roles, relationships and aims of this interdependent work. We explore the principle's implications and challenges for health professional development, for service delivery system design and for understanding and measuring benefit in healthcare services.


BMJ | 2009

Publication guidelines for quality improvement studies in health care: evolution of the SQUIRE project

Frank Davidoff; Paul B. Batalden; David P. Stevens; Greg Ogrinc; Susan E Mooney

In 2005 we published draft guidelines for reporting studies of quality improvement, as the initial step in a consensus process for development of a more definitive version. The current article contains the revised version, which we refer to as standards for quality improvement reporting excellence (SQUIRE). This narrative progress report summarises the special features of improvement that are reflected in SQUIRE, and describes major differences between SQUIRE and the initial draft guidelines. It also briefly describes the guideline development process; considers the limitations of and unresolved questions about SQUIRE; describes ancillary supporting documents and alternative versions under development; and discusses plans for dissemination, testing, and further development of SQUIRE.


Annals of Internal Medicine | 1998

Building Measurement and Data Collection into Medical Practice

Eugene C. Nelson; Mark E. Splaine; Paul B. Batalden; Stephen K. Plume

Physicians are taught the scientific method in medical school, and they use it daily to care for patients as they observe and assimilate clinical data and recommend a course of action. Active engagement in the scientific method gives physicians the opportunity not only to deliver care effectively to individual patients but also to improve care for future patients by measuring results and considering whether better ways to measure them may exist. However, physicians often have little time to reflect on their practices and collect data systematically over time to enhance their understanding of the processes and outcomes of care. Nonetheless, improvement requires measurement. If physicians are not actively involved in data collection and measurement to improve the quality and value of their own work, who will be [1]? We present case examples of clinicians who used data for improvement, and we offer guidance for building measurement into daily practice.

A Measurement Story

Good measurement can help physicians improve the care they provide. The following case describes the experience of busy clinicians who used measurement for improvement. An internist in a multispecialty group practice with several locations learned about a colleague in another community who had used a telephone protocol to streamline care for women with recurrent urinary tract infection. The internist was curious and a little skeptical but decided to try the protocol out for herself. After consulting with a partner who had gathered several protocols, she brought together her clinical team to organize a small test of protocol care and measure the results.

The internist and her team targeted a specific population (women 18 years of age or older who telephoned the office with symptoms of dysuria), established a broad aim (improvement of clinical outcomes and patient satisfaction and reduction of costs of care), and selected a balanced set of outcome measures to evaluate the protocol (clinical outcomes, including symptom resolution, side effects, and complications; costs, including those of urine cultures, office visits, and first-line antibiotics; and patient satisfaction). When they analyzed and discussed their existing care process, members of the team learned that different physicians handled similar patients in very different ways. For example, they varied in methods of risk assessment, use of diagnostic tests, choice of antibiotics, and approach to patient follow-up. The protocol that the team adopted was based on a combination of their experience, their colleagues' work, and the scientific literature [2, 3]. The protocol divided women into high-risk and low-risk groups; low-risk patients received telephone treatment by a nurse-administered algorithm and a follow-up telephone call at 7 days to assess results. Before embarking on full-scale testing, the team had a single nurse test the protocol on 10 patients and revised it on the basis of that experience. When the team studied their results for the first 130 consecutive patients with urinary tract infections (mean age, 55 years; high-risk patients, 52%), they found that they had used the protocol with 9 patients per month, that 21% of patients were given same-day office visits, that 44% of patients had received a urine culture (which had been universal procedure before), and that 60% of patients were treated with the first-line antibiotic suggested by the protocol. Telephone follow-up was achieved for 100 of the 130 patients. Of these patients, 87% had symptom resolution in 7 days, 11% had side effects of medication, and 1% had a clinical complication.

All of the patients whom nurses managed by telephone with the protocol reported high satisfaction. Many patients volunteered that they were delighted to receive treatment without having to make an office visit.

Measurement and Improvement

Measurement and improvement are inextricably intertwined. The preceding case shows how measurement can support clinical improvement in local settings [4]. The urinary tract infection team began with curiosity about a novel clinical approach [5]. They wrote a broad aim statement that called for a balanced set of outcome measures and built a structured observation point into the patient follow-up routine to gather information on clinical outcomes, costs, and patient satisfaction. After promptly running a small pilot test, they began to use the new protocol and collected data as the change was taking place. They analyzed both qualitative and quantitative results to assess the impact of their innovation and to determine whether the new approach should be adopted, modified, or abandoned. Measurement and improvement are two sides of the same coin. The connections are evident in the simple model for improvement that was presented in the introductory paper in this series [6]. The model comprises three questions. Aim: What are we trying to accomplish? Measures: How will we know that a change is an improvement? Changes: What changes can we make that we think will lead to an improvement? The model also incorporates the Plan-Do-Study-Act cycle (plan the change, do the change, study the results, and act on the results on the basis of what has been learned). The second question in the model specifically calls for measurement, but data collection is also integral to all of the steps in the Plan-Do-Study-Act cycle. Measurement methods are described in the Plan step; data are gathered in the Do step; information is analyzed in the Study step; and key measures are monitored in the Act step.

Principles of Measurement

For measurement to be helpful in the improvement effort, a few simple principles can act as guides.

Principle 1. Seek usefulness, not perfection, in the measurement.

The urinary tract infection team focused on key clinical results and patient feedback, even though they could have chosen to cover more territory. They skipped baseline data collection and opted for a prompt feasibility test on 10 patients. This reflects an emphasis on practicality rather than comprehensiveness. It helps to begin with a small, useful data set that fits your work environment, time limitations, and cost constraints. The utility of data is directly related to the timeliness of feedback and the appropriateness of its level of detail for the persons who use it. The choice of measures should be strongly influenced by considering who will use the data and what they will use it for. It may be helpful to gather baseline data; however, gathering data over time is often sufficient to spot effects in time series analyses. The goal is continuous improvement with concurrent, ongoing measurement of impact.

Principle 2. Use a balanced set of process, outcome, and cost measures.

The urinary tract infection team wanted to do more than just cut costs. They sought a better way to treat infections that would yield better clinical outcomes, fewer side effects, higher patient satisfaction, and lower treatment costs. They wanted to measure clinical value: that is, outcomes in relation to costs [7]. Medical care systems comprise subprocesses that interact, flow into and out of one another, and contain feedback loops. They produce a fluid family of results that include clinical outcomes, functional status, risk level, patient satisfaction, and costs. This complexity has important implications for tracking attempts to make improvements; most important, it requires a mix or balance of measures to do it justice. Balanced measures may cover upstream processes and downstream outcomes to link causes with effects; anticipated positive outcomes and potential adverse outcomes; results of interest to different stakeholders (such as patient, family, employer, community, payor, and clinician) because participants have differing viewpoints on the relative importance of the many manifestations of care; and cumulative results related to the overall aim as well as specific outcomes for a particular change cycle [8]. More detailed explorations of balanced measures of quality and value have been published elsewhere [9, 10].

Principle 3. Keep measurement simple; think big, but start small.

The urinary tract infection team's broad aim was to improve outcomes and lower costs, but they selected a sparse set of outcome measures. Principles 1 and 2 operate in different directions, creating the need for principle 3. Anyone who wants to improve a system in the real world must balance a fast start and lean measurement with a broader understanding of the complex web of causation [11]. We recommend that you recognize and discuss the true complexity of data collection, but when you are ready to make the data collection plan, strive for simplicity amidst the clutter and focus on a limited, manageable, meaningful set of starter measures.

Principle 4. Use qualitative and quantitative data.

The urinary tract infection team used quantitative data to create tension for change and to measure impact on clinical behavior. They used qualitative data to learn how the physicians, nurses, and patients felt about the new system. Data and measures are meant to reflect reality, but they are not reality itself. Reality has an objective and subjective face, and both are important. Quantitative measures are better at capturing the objective world, whereas qualitative measures are better at reflecting subjective issues [12].

Principle 5. Write down the operational definitions of the measures.

The urinary tract infection team wrote operational definitions for clinical outcomes and medical costs. For example, to measure symptom resolution, nurses telephoned patients 7 days after their index date and asked, "Are you still bothered by your urinary tract symptoms? Please answer yes or no." The clarity of the signal sent by measures depends on how well everyone doing the measurement understands operational definitions and on how consistently they are used [13]. An operational definition provides a clear method for scoring or


The Joint Commission Journal on Quality and Patient Safety | 2008

Clinical Microsystems, Part 1. The Building Blocks of Health Systems

Eugene C. Nelson; Marjorie M. Godfrey; Paul B. Batalden; Scott A. Berry; Albert E. Bothe; Karen E. McKinley; Craig N. Melin; Stephen E. Muething; L. Gordon Moore; Thomas W. Nolan; John H. Wasson

BACKGROUND Wherever, however, and whenever health care is delivered, no matter the setting or population of patients, the body of knowledge on clinical microsystems can guide and support innovation and peak performance. Many health care leaders and staff at all levels of their organizations in many countries have adapted microsystem knowledge to their local settings. CLINICAL MICROSYSTEMS, A PANORAMIC VIEW: HOW DO CLINICAL MICROSYSTEMS FIT TOGETHER? As the patient's journey of care seeking and care delivery takes place over time, he or she will move into and out of an assortment of clinical microsystems, such as a family practitioner's office, an emergency department, and an intensive care unit. This assortment of clinical microsystems, combined with the patient's own actions to improve or maintain health, can be viewed as the patient's unique health system. This patient-centric view of a health system is the foundation of second-generation development for clinical microsystems. LESSONS FROM THE FIELD These lessons, which are not comprehensive, can be organized under the familiar commands that are used to start a race: On Your Mark, Get Set, Go! ... with a fourth category added: Reflect: Reviewing the Race. These insights are intended as guidance to organizations ready to strategically transform themselves. CONCLUSION Beginning to master and make use of microsystem principles and methods to attain macrosystem peak performance can help us knit together care in a fragmented health system, eschew archipelago building in favor of nation-building strategies, achieve safe and efficient care with reliable handoffs, and provide the best possible care and attain the best possible health outcomes.

Collaboration


Dive into Paul B. Batalden's collaborations.

Top Co-Authors

Frank Davidoff

The Dartmouth Institute for Health Policy and Clinical Practice

David P. Stevens

The Dartmouth Institute for Health Policy and Clinical Practice
