Greg Ogrinc
Dartmouth College
Publications
Featured research published by Greg Ogrinc.
Quality & Safety in Health Care | 2008
Frank Davidoff; Paul B. Batalden; D Stevens; Greg Ogrinc
In 2005, draft guidelines were published for reporting studies of quality improvement interventions as the initial step in a consensus process for development of a more definitive version. This article contains the full revised version of the guidelines, which the authors refer to as SQUIRE (Standards for QUality Improvement Reporting Excellence). This paper also describes the consensus process, which included informal feedback from authors, editors and peer reviewers who used the guidelines; formal written commentaries; input from a group of publication guideline developers; ongoing review of the literature on the epistemology of improvement and methods for evaluating complex social programmes; a two-day meeting of stakeholders for critical discussion and debate of the guidelines’ content and wording; and commentary on sequential versions of the guidelines from an expert consultant group. Finally, the authors consider the major differences between SQUIRE and the initial draft guidelines; limitations of and unresolved questions about SQUIRE; ancillary supporting documents and alternative versions that are under development; and plans for dissemination, testing and further development of SQUIRE.
BMJ Quality & Safety | 2016
Greg Ogrinc; Louise Davies; Daisy Goodman; Paul B. Batalden; Frank Davidoff; David P. Stevens
Since the publication of Standards for QUality Improvement Reporting Excellence (SQUIRE 1.0) guidelines in 2008, the science of the field has advanced considerably. In this manuscript, we describe the development of SQUIRE 2.0 and its key components. We undertook the revision between 2012 and 2015 using (1) semistructured interviews and focus groups to evaluate SQUIRE 1.0 plus feedback from an international steering group, (2) two face-to-face consensus meetings to develop interim drafts and (3) pilot testing with authors and a public comment period. SQUIRE 2.0 emphasises the reporting of three key components of systematic efforts to improve the quality, value and safety of healthcare: the use of formal and informal theory in planning, implementing and evaluating improvement work; the context in which the work is done; and the study of the intervention(s). SQUIRE 2.0 is intended for reporting the range of methods used to improve healthcare, recognising that they can be complex and multidimensional. It provides common ground to share these discoveries in the scholarly literature (http://www.squire-statement.org).
BMJ | 2009
Frank Davidoff; Paul B. Batalden; David P. Stevens; Greg Ogrinc; Susan E Mooney
In 2005 we published draft guidelines for reporting studies of quality improvement, as the initial step in a consensus process for development of a more definitive version. The current article contains the revised version, which we refer to as standards for quality improvement reporting excellence (SQUIRE). This narrative progress report summarises the special features of improvement that are reflected in SQUIRE, and describes major differences between SQUIRE and the initial draft guidelines. It also briefly describes the guideline development process; considers the limitations of and unresolved questions about SQUIRE; describes ancillary supporting documents and alternative versions under development; and discusses plans for dissemination, testing, and further development of SQUIRE.
Journal of General Internal Medicine | 2004
Greg Ogrinc; Linda A. Headrick; Laura J. Morrison; Tina C. Foster
We designed, implemented, and evaluated a 4-week practice-based learning and improvement (PBLI) elective. Eleven internal medicine residents from 2 separate residency programs participated in the PBLI elective, and 22 other residents formed a comparison group. Residents in each group had similar pretest Quality Improvement Knowledge Application Tool scores, but after the PBLI elective, participant scores were significantly higher. Also, participants’ self-assessed ratings of PBLI skills increased after the rotation and remained elevated 6 months afterward. In this curriculum, residents completed a project to improve patient care and demonstrated knowledge on an evaluation tool that was superior to that of nonparticipants.
BMJ Quality & Safety | 2011
Paul Glasziou; Greg Ogrinc; Steve Goodman
The considerable gap between what we know from research and what is done in clinical practice is well known. Proposed responses include Evidence-Based Medicine (EBM) and Clinical Quality Improvement (QI). EBM has focused more on ‘doing the right things’, based on external research evidence, whereas QI has focused more on ‘doing things right’, based on local processes. However, the two are complementary, and in combination they direct us toward ‘doing the right things right’. This article examines the differences and similarities between the two approaches and proposes that integrating their bedside application, methodological development, and training would benefit both disciplines.
The Joint Commission Journal on Quality and Patient Safety | 2003
Julia Neily; Greg Ogrinc; Peter D. Mills; Rodney Williams; Erik Stalhandske; James P. Bagian; William B. Weeks
The authors describe the use of aggregate root cause analysis, which provides a systematic process for analyzing high-priority, frequent events.
Journal of Nursing Education | 2009
Greg Ogrinc; Paul B. Batalden
Health professions education researchers continually search for tools to measure, evaluate, and disseminate the findings from educational interventions. Clinical teaching, particularly teaching about the improvement of care and systems, is marked by complexity and is invariably influenced by the context into which the intervention is placed. The traditional research framework states that interventions should be adjudicated through a yes or no decision to determine effectiveness. In reality, educational interventions and the study of the interventions rarely succumb to such a simple yes or no question. The realist evaluation framework from Pawson and Tilley provides an explanatory model that links the context, mechanisms, and outcome patterns that are discovered during implementation of a project. This article describes the unique qualities of the realist evaluation, the basic components and steps in a realist evaluation, and an example that uses this technique to evaluate teaching about improvement of care in a clinical setting.
Academic Medicine | 2014
Mamta Singh; Greg Ogrinc; Karen R. Cox; Mary A. Dolansky; Julie Brandt; Laura J. Morrison; Beth G. Harwood; Greg Petroski; Al West; Linda A. Headrick
Purpose: Quality improvement (QI) has been part of medical education for over a decade, yet assessment of QI learning remains challenging. The Quality Improvement Knowledge Application Tool (QIKAT), developed a decade ago, is widely used despite its subjective nature and inconsistent reliability. From 2009 to 2012, the authors developed and validated a revised QIKAT, the “QIKAT-R.”
Method: Phase 1: Using an iterative, consensus-building process, a national group of QI educators developed a scoring rubric with defined language and elements. Phase 2: Five scorers pilot tested the QIKAT-R to assess validity and inter- and intrarater reliability, using responses to four scenarios, each with three levels of response quality: “excellent,” “fair,” and “poor.” Phase 3: Eighteen scorers from three countries used the QIKAT-R to assess the same sets of student responses.
Results: Phase 1: The QI educators developed a nine-point scale that uses dichotomous (yes/no) answers for each of three QIKAT-R subsections: Aim, Measure, and Change. Phase 2: The QIKAT-R discriminated strongly between “poor” and “excellent” responses, and intra- and interrater reliability were strong. Phase 3: The discriminative validity of the instrument remained strong between excellent and poor responses; the intraclass correlation was 0.66 for the total nine-point scale.
Conclusions: The QIKAT-R is a user-friendly instrument that maintains the content and construct validity of the original QIKAT while providing greatly improved interrater reliability. The clarity of the key subsections aligns the assessment closely with QI knowledge application for students and residents.
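The scoring arithmetic is simple enough to sketch. Below is a minimal illustration in Python, assuming that each of the three subsections (Aim, Measure, Change) contributes three yes/no items to the nine-point total; the abstract does not give the item wording, so the item structure here is a hypothetical reading of the rubric, not the published instrument.

```python
# Hypothetical sketch of the QIKAT-R nine-point scoring scheme described above.
# Assumption: three dichotomous (yes/no) items per subsection; item content is
# a placeholder, since the abstract does not specify the rubric wording.

from dataclasses import dataclass


@dataclass
class SubsectionScore:
    """Three yes/no judgments for one QIKAT-R subsection (hypothetical items)."""
    item1: bool
    item2: bool
    item3: bool

    def points(self) -> int:
        # Each "yes" earns one point, so a subsection contributes 0-3 points.
        return sum((self.item1, self.item2, self.item3))


def qikat_r_total(aim: SubsectionScore,
                  measure: SubsectionScore,
                  change: SubsectionScore) -> int:
    """Total score on the nine-point scale (0-9)."""
    return aim.points() + measure.points() + change.points()


# Example: a response judged strong on Aim, mixed on Measure, weak on Change.
score = qikat_r_total(
    aim=SubsectionScore(True, True, True),
    measure=SubsectionScore(True, False, True),
    change=SubsectionScore(False, False, False),
)
print(score)  # 5
```

Framing each judgment as a dichotomy is what the abstract credits for the improved interrater reliability: scorers agree more easily on yes/no items than on holistic ratings.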
Academic Medicine | 2005
Patricia A. Carney; Greg Ogrinc; Beth G. Harwood; Jennifer S. Schiffman; Nancy E. Cochran
Purpose: Many medical schools have revised their curricula to include longitudinal clinical training in the first and second years, placing an extra burden on academic teaching faculty and expanding the use of community-based preceptors for clinical teaching. Little is known about the impact of different learning settings on clinical skills development.
Method: In 2002–03 and 2003–04, the authors evaluated the clinical skills of two sequential cohorts of second-year medical students at Dartmouth Medical School (n = 155) at the end of a two-year longitudinal clinical course designed to prepare them for their clerkship year. Students’ objective structured clinical examination (OSCE) scores on a cardiopulmonary and an endocrine case were compared by precepting site (academic medical center [AMC] clinics, AMC-affiliated office-based clinics, or community-based primary care offices), assessing core communication, history-taking, physical examination, and patient education skills. Study groups were compared using descriptive statistics and analysis of variance (mixed model).
Results: Ninety-five students (61%) had community-based preceptors, 31 (20%) had AMC clinic-based preceptors, and 29 (19%) had AMC-affiliated office-based preceptors. Students’ performance did not differ among clinical learning sites: overall scores on the cardiopulmonary case were 61.2% in AMC clinics, 63.3% in office-based AMC-affiliated clinics, and 64.9% in community-based offices (p = .20); scores on the endocrine case were 65.5%, 68.5%, and 66.4%, respectively (p = .59).
Conclusions: Students’ early clinical skill development is not influenced by educational setting. Thus, using clinicians in any of these settings for early clinical training is appropriate.
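For readers unfamiliar with this analytic setup, here is a minimal sketch of a mixed-model comparison of the same shape, run on synthetic data. Everything in it is an illustrative assumption (the site proportions, a random intercept per student for the two repeated cases, and the absence of a true site effect, mirroring the null finding); the abstract specifies only “analysis of variance (mixed model),” not the exact model.

```python
# Illustrative mixed-model comparison on synthetic data (not the study data).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_students = 155
sites = rng.choice(
    ["AMC_clinic", "AMC_affiliated_office", "community_office"],
    size=n_students,
    p=[0.20, 0.19, 0.61],  # assumed to match the reported site proportions
)

rows = []
for sid, site in enumerate(sites):
    student_effect = rng.normal(0, 3)  # random intercept: ability varies by student
    for case in ("cardiopulmonary", "endocrine"):
        # No true site effect is simulated, mirroring the study's null finding.
        rows.append({
            "student": sid,
            "site": site,
            "case": case,
            "score": 64 + student_effect + rng.normal(0, 6),
        })
df = pd.DataFrame(rows)

# Fixed effects for site and case; a random intercept for each student
# accounts for the two repeated measures (cases) per student.
fit = smf.mixedlm("score ~ site + case", df, groups=df["student"]).fit()
print(fit.summary())
```

The random intercept is the key design choice: because each student is scored on two cases, the two observations are correlated, and an ordinary one-way ANOVA on pooled scores would understate the uncertainty of the site comparison.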
American Journal of Medical Quality | 2015
Greg Ogrinc; Louise Davies; Daisy Goodman; Paul B. Batalden; Frank Davidoff; David P. Stevens
In the past several years, the science of health care improvement has advanced considerably. In this article, we describe the development of SQUIRE 2.0 and its key components. We undertook the revision between 2012 and 2015 using (1) interviews and focus groups to evaluate SQUIRE 1.0 plus feedback from an international steering group, (2) face-to-face consensus meetings to develop interim drafts, and (3) pilot testing with authors and a public comment period. SQUIRE 2.0 emphasizes 3 key components of systematic efforts to improve the quality, value, and safety of health care: formal and informal theory in planning, implementing, and evaluating improvement work; the context in which the work is done; and the study of the intervention(s). SQUIRE 2.0 is intended for reporting the range of methods used to improve health care, recognizing that they can be complex and multidimensional. It provides common ground to share these discoveries in the scholarly literature (www.squire-statement.org).
Collaboration
Dive into Greg Ogrinc’s collaborations.
The Dartmouth Institute for Health Policy and Clinical Practice