Publication


Featured research published by Eric S. Holmboe.


Medical Teacher | 2010

Competency-based medical education: theory to practice

Jason R. Frank; Linda Snell; Olle ten Cate; Eric S. Holmboe; Carol Carraccio; Susan R. Swing; Peter Harris; Nicholas Glasgow; Craig Campbell; Deepak Dath; Ronald M. Harden; William Iobst; Donlin M. Long; Rani Mungroo; Denyse Richardson; Jonathan Sherbino; Ivan Silver; Sarah Taber; Martin Talbot; Kenneth A. Harris

Although competency-based medical education (CBME) has attracted renewed interest in recent years among educators and policy-makers in the health care professions, there is little agreement on many aspects of this paradigm. We convened a unique partnership – the International CBME Collaborators – to examine conceptual issues and current debates in CBME. We engaged in a multi-stage group process and held a consensus conference with the aim of reviewing the scholarly literature of competency-based medical education, identifying controversies in need of clarification, proposing definitions and concepts that could be useful to educators across many jurisdictions, and exploring future directions for this approach to preparing health professionals. In this paper, we describe the evolution of CBME from the outcomes movement in the 20th century to a renewed approach that, focused on accountability and curricular outcomes and organized around competencies, promotes greater learner-centredness and de-emphasizes time-based curricular design. In this paradigm, competence and related terms are redefined to emphasize their multi-dimensional, dynamic, developmental, and contextual nature. CBME therefore has significant implications for the planning of medical curricula and will have an important impact in reshaping the enterprise of medical education. We elaborate on this emerging CBME approach and its related concepts, and invite medical educators everywhere to enter into further dialogue about the promise and the potential perils of competency-based medical curricula for the 21st century.


JAMA | 2009

Tools for direct observation and assessment of clinical skills of medical trainees: a systematic review.

Jennifer R. Kogan; Eric S. Holmboe; Karen E. Hauer

Context: Direct observation of medical trainees with actual patients is important for performance-based clinical skills assessment. Multiple tools for direct observation are available, but their characteristics and outcomes have not been compared systematically.
Objectives: To identify observation tools used to assess medical trainees' clinical skills with actual patients and to summarize the evidence of their validity and outcomes.
Data Sources: Electronic literature search of PubMed, ERIC, CINAHL, and Web of Science for English-language articles published between 1965 and March 2009 and review of references from article bibliographies.
Study Selection: Included studies described a tool designed for direct observation of medical trainees' clinical skills with actual patients by educational supervisors. Tools used only in simulated settings or assessing surgical/procedural skills were excluded. Of 10 672 citations, 199 articles were reviewed and 85 met inclusion criteria.
Data Extraction: Two authors independently abstracted studies using a modified Best Evidence Medical Education coding form to inform judgment of key psychometric characteristics. Differences were reconciled by consensus.
Results: A total of 55 tools were identified. Twenty-one tools were studied with students and 32 with residents or fellows. Two were used across the educational continuum. Most (n = 32) were developed for formative assessment. Rater training was described for 26 tools. Only 11 tools had validity evidence based on internal structure and relationship to other variables. Trainee or observer attitudes about the tool were the most commonly measured outcomes. Self-assessed changes in trainee knowledge, skills, or attitudes (n = 9) or objectively measured change in knowledge or skills (n = 5) were infrequently reported. The strongest validity evidence has been established for the Mini Clinical Evaluation Exercise (Mini-CEX).
Conclusion: Although many tools are available for the direct observation of clinical skills, validity evidence and description of educational outcomes are scarce.


Annals of Internal Medicine | 2004

Effects of training in direct observation of medical residents' clinical competence: a randomized trial.

Eric S. Holmboe; Richard E. Hawkins; Stephen J. Huot

Context: Reliable methods for evaluating clinical competence of medical trainees are needed.
Contribution: This cluster randomized trial involving 16 internal medicine programs evaluated a 4-day course that taught faculty direct observation methods for evaluating clinical competence. Eight months later, course participants reported greater comfort with direct observation to evaluate residents and rated videotaped clinical encounters between standardized residents and patients more stringently than did faculty not taking the course.
Cautions: Larger studies that use ratings of actual residents and that include resident feedback are needed to establish the transportability and efficacy of the program. (The Editors)

Medical educators have a major responsibility to evaluate the clinical competence of medical students and residents and to provide them with timely, useful feedback to ensure continued progress and correction of deficiencies. Despite tremendous advances in technology, the clinical skills of interviewing, physical examination, and counseling remain essential to the successful care of patients. The Association of American Medical Colleges (AAMC), Accreditation Council for Graduate Medical Education (ACGME), and American Board of Internal Medicine (ABIM) strongly endorse the evaluation of students and residents in these clinical skills through direct observation (1-3). Numerous studies continue to document substantial deficiencies in the clinical skills of medical interviewing, physical examination, communication, and counseling among students, residents, and practicing physicians (4-24). For example, Mangione and Nieman (19) demonstrated that students and residents could successfully identify fewer than one third of 12 important cardiac sounds, and Braddock and colleagues (10) found that practicing physicians performed core elements of informed decision making in fewer than 10% of patient encounters. Direct observation of a student or resident performing these skills is mandatory for reliable and valid assessment and is essential for providing formative feedback to improve these skills. Residency is the last structured experience to ensure that young physicians have sufficient clinical skills, but evaluation of these skills by faculty is often neglected. The AAMC, for example, visited 97 medical schools between 1993 and 1998 and found that faculty rarely observed student interactions with patients and that most of a student's evaluation was based on faculty and resident impressions of student presentation skills and knowledge (3). Similar findings have been reported for residents (25-27). Furthermore, when faculty observe the clinical skills of trainees, the quality of the evaluation is often poor. In a study of faculty ratings using the long form of the clinical evaluation exercise (CEX), faculty failed to detect 68% of errors committed by a resident when observing a videotape scripted to depict marginal performance (28). The use of checklists prompting faculty to look for specific skills increased error detection from 32% to 64% but did not improve overall accuracy; approximately two thirds of faculty still rated the overall performance of the marginal resident as satisfactory or superior. Other studies have also attested to the low reliability of faculty observations (28-33).
Perhaps the most notable finding in these studies is that brief faculty training interventions (for example, a 30-minute description of the CEX and its purpose) failed to improve the quality of faculty evaluation (28-31). Given the growing concerns over patient safety and quality of care, both public and professional organizations are calling for a renewed emphasis on the teaching and evaluation of clinical skills. To address these concerns, better methods for training faculty in evaluation of clinical competence are urgently needed. The purpose of our study was to evaluate a novel approach to training faculty in direct observation skills (direct observation of competence training) in 3 domains: faculty satisfaction with the training, change in comfort in performing direct observation, and changes in faculty rating behavior.

Methods

Study Design: Our study was a cluster-designed randomized, controlled trial of an intensive 4-day faculty development course in evaluation of clinical competence (34). Twenty-one programs were approached, and 16 were enrolled (Figure). Invited programs were chosen from 2 regions, the Northeast (Connecticut, Massachusetts, and Rhode Island) and the Mid-Atlantic (Maryland, Virginia, and the District of Columbia), and were balanced to represent both university and community-based programs. Of the 5 excluded programs, 3 declined to participate, 1 could not identify at least 2 faculty members to participate, and 1 was in the process of closing. The final cohort consisted of 5 university and 11 community-based programs. Randomization was stratified by program type and location and was blinded by using sealed envelopes. One smaller university program in the Mid-Atlantic group was randomly assigned along with the community-based programs because of the unbalanced number of university programs (n = 3) in this region.

Figure. Flow of participants through the study.

Participants: Forty faculty members participated in the study. Each program director was required to identify a minimum of 2 and a maximum of 4 faculty members before randomization, and program directors were encouraged to participate. Choice of study participants was left to the discretion of the program directors, but the study guidelines asked directors to choose faculty who were active in teaching and evaluation and who the program director believed would be willing to serve as agents to promote change in their local program's evaluation practices. Program directors were not informed of other institutional allocations before participation. The Yale University Human Investigation Committee and the Uniformed Services University of the Health Sciences Institutional Review Board approved the study. Participants gave informed consent on the morning of the baseline assessment and before any research activities.

Evaluation Videotapes: Two sets of 9 videotapes were specifically produced for assessment of direct observation skills, 1 set each for the baseline and follow-up assessments. Scripts were written for standardized patients and standardized residents for each of 3 clinical skills: history taking, physical examination, and counseling. The baseline history skill videotapes depicted a male resident interviewing a 64-year-old woman presenting to the emergency department with acute shortness of breath and chest pain due to a pulmonary embolism. The physical examination skill videotapes depicted a male resident examining a 69-year-old man with progressive shortness of breath secondary to ischemic cardiomyopathy.
The counseling skill videotapes portrayed a female resident counseling a 48-year-old man about treatment options for his recently diagnosed hypertension. The videotapes were scripted to demonstrate 3 levels of competence, and all contained some errors in clinical performance; none were designed to be a gold standard. Level 1 videotapes were scripted to represent unequivocally poor performance and contained the most errors of omission and commission. Level 3 videotapes depicted the fewest number of errors. For example, on the baseline level 1 videotape for history taking, the resident fails to introduce himself and fails to ask about key thromboembolic risk factors and the patient's symptoms of leg swelling. In the physical examination videotapes, examples of errors include poor technique in pulmonary auscultation and failure to assess jugular venous distention. For each clinical skill, the average number of errors per videotape was 12 for level 1 videotapes, 6 for level 2 videotapes, and 2 for level 3 videotapes. The scripts were developed by 1 of the authors and were then reviewed and edited independently by the other 2 authors. The scripts were revised until consensus was reached among the 3 authors. Scripts were then sent to an outside reviewer for additional comments before the final edits. The standardized resident and patient rehearsed the scenarios before the final videotapes were made. For the follow-up assessment, the medical interview videotapes depicted a male resident interacting with a 69-year-old man who presented to an emergency department with chest pain secondary to coronary artery disease. The physical examination videotapes showed a different male resident examining a 55-year-old man with cough and shortness of breath secondary to right-middle-lobe pneumonia, and the counseling videotape showed a female resident counseling a 69-year-old woman about therapeutic options for hyperlipidemia. The same standardized residents from the baseline videotapes appeared on the medical interview and counseling follow-up videotapes, while all patients in the follow-up videotapes were different.

Baseline Assessment: In October 2001, all participants observed and rated the 9 baseline clinical encounter videotapes in random order. Participants rated the videotapes using a modified version of the ABIM's 9-point mini-CEX form, where 1 to 3 denotes unsatisfactory performance, 4 to 6 denotes satisfactory performance, and 7 to 9 denotes superior performance. Participants were instructed to rate any of the 7 dimensions of clinical competence listed on the mini-CEX form if they felt the videotape had given them sufficient information to do so. After completion of the baseline assessment, all participants received a comprehensive 3-inch notebook toolkit that included reviews of several evaluation methods, the ABIM's mini-CEX evaluation booklet, a CD-ROM and paper copy of the ABIM's portfolio, and a floppy disk with multiple prefabricated evaluation tools and forms. No specific instruction on use of the comprehensive notebook was provided to either group. The control g
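For illustration only, the sketch below shows the 9-point mini-CEX banding described above (1 to 3 unsatisfactory, 4 to 6 satisfactory, 7 to 9 superior). The helper function and the sample ratings are hypothetical and are not taken from the study's materials.

```python
# Hypothetical helper illustrating the 9-point mini-CEX bands described above.
# Nothing here comes from the study's actual forms or data.

def mini_cex_band(score: int) -> str:
    """Map a single mini-CEX item score (1-9) to its performance band."""
    if not 1 <= score <= 9:
        raise ValueError("mini-CEX item scores range from 1 to 9")
    if score <= 3:
        return "unsatisfactory"
    if score <= 6:
        return "satisfactory"
    return "superior"

# Raters could score any of the 7 mini-CEX dimensions they felt able to judge;
# dimensions they skipped are simply absent from this made-up example.
ratings = {"medical interviewing": 4, "physical examination": 3, "counseling": 7}
print({dimension: mini_cex_band(score) for dimension, score in ratings.items()})
```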


Medical Teacher | 2010

Competency-based medical education in postgraduate medical education

William Iobst; Jonathan Sherbino; Olle ten Cate; Denyse Richardson; Deepak Dath; Susan R. Swing; Peter Harris; Rani Mungroo; Eric S. Holmboe; Jason R. Frank

With the introduction of Tomorrow's Doctors in 1993, medical education began the transition from a time- and process-based system to a competency-based training framework. Implementing competency-based training in postgraduate medical education poses many challenges but ultimately requires a demonstration that the learner is truly competent to progress in training or to the next phase of a professional career. Making this transition requires change at virtually all levels of postgraduate training. Key components of this change include the development of valid and reliable assessment tools such as work-based assessment using direct observation, frequent formative feedback, and learner self-directed assessment; active involvement of the learner in the educational process; and intensive faculty development that addresses curricular design and the assessment of competency.


Academic Medicine | 2016

Entrustment Decision Making in Clinical Training

Olle ten Cate; Danielle Hart; Felix Ankel; Jamiu O. Busari; Robert Englander; Nicholas Glasgow; Eric S. Holmboe; William Iobst; Elise Lovell; Linda Snell; Claire Touchie; Elaine Van Melle

The decision to trust a medical trainee with the critical responsibility to care for a patient is fundamental to clinical training. When carefully and deliberately made, such decisions can serve as significant stimuli for learning and also shape the assessment of trainees. Holding back entrustment decisions too much may hamper the trainee’s development toward unsupervised practice. When carelessly made, however, they jeopardize patient safety. Entrustment decision-making processes, therefore, deserve careful analysis. Members (including the authors) of the International Competency-Based Medical Education Collaborative conducted a content analysis of the entrustment decision-making process in health care training during a two-day summit in September 2013 and subsequently reviewed the pertinent literature to arrive at a description of the critical features of this process, which informs this article. The authors discuss theoretical backgrounds and terminology of trust and entrustment in the clinical workplace. The competency-based movement and the introduction of entrustable professional activities force educators to rethink the grounds for assessment in the workplace. Anticipating a decision to grant autonomy at a designated level of supervision appears to align better with health care practice than do most current assessment practices. The authors distinguish different modes of trust and entrustment decisions and elaborate five categories, each with related factors, that determine when decisions to trust trainees are made: the trainee, supervisor, situation, task, and the relationship between trainee and supervisor. The authors’ aim in this article is to lay a theoretical foundation for a new approach to workplace training and assessment.


Medical Education | 2014

Seeing the ‘black box’ differently: assessor cognition from three research perspectives

Andrea Gingerich; Jennifer R. Kogan; Peter Yeates; Marjan J. B. Govaerts; Eric S. Holmboe

Performance assessments, such as workplace‐based assessments (WBAs), represent a crucial component of assessment strategy in medical education. Persistent concerns about rater variability in performance assessments have resulted in a new field of study focusing on the cognitive processes used by raters, or more inclusively, by assessors.


Medical Teacher | 2011

Twelve tips for implementing tools for direct observation of medical trainees’ clinical skills during patient encounters

Karen E. Hauer; Eric S. Holmboe; Jennifer R. Kogan

Background: Direct observation of medical trainees by their supervisors with actual patients is essential for trainees to develop clinical skills competence. Despite the many available tools for direct observation of trainees by supervisors, it is unclear how educators should identify an appropriate tool for a particular clinical setting and implement the tool to maximize educational benefits for trainees in a manner that is feasible for faculty. Aims and methods: Based on our previous systematic review of the literature, we provide 12 tips for selecting and incorporating a tool for direct observation into a medical training program. We focus specifically on direct observation that occurs in clinical settings with actual patients. Results: Educators should focus on the existing tools for direct observation that have evidence of validity. Tool implementation must be a component of an educational program that includes faculty development about rating performance, providing meaningful feedback, and developing action plans collaboratively with learners. Conclusions: Educators can enhance clinical skills education with strategic incorporation of tools for direct observation into medical training programs. Identification of a psychometrically sound instrument and attention to faculty development and the feedback process are critical to the success of a program of direct observation.


Medical Education | 2015

Implementation of competency‐based medical education: are we addressing the concerns and challenges?

Richard E. Hawkins; Catherine M. Welcher; Eric S. Holmboe; Lynne M. Kirk; John J. Norcini; Kenneth B. Simons; Susan E. Skochelak

Competency‐based medical education (CBME) has emerged as a core strategy to educate and assess the next generation of physicians. Advantages of CBME include: a focus on outcomes and learner achievement; requirements for multifaceted assessment that embraces formative and summative approaches; support of a flexible, time‐independent trajectory through the curriculum; and increased accountability to stakeholders with a shared set of expectations and a common language for education, assessment and regulation.


Medical Teacher | 2010

Competency-based continuing professional development

Craig Campbell; Ivan Silver; Jonathan Sherbino; Olle ten Cate; Eric S. Holmboe

Competence is traditionally viewed as the attainment of a static set of attributes rather than a dynamic process in which physicians continuously use their practice experiences to “progress in competence” toward the attainment of expertise. A competency-based continuing professional development (CPD) model is premised on a set of learning competencies that include the ability to (a) use practice information to identify learning priorities and to develop and monitor CPD plans; (b) access information sources for innovations in development and new evidence that may potentially be integrated into practice; (c) establish a personal knowledge management system to store and retrieve evidence and to select and manage learning projects; (d) construct questions, search for evidence, and record and track conclusions for practice; and (e) use tools and processes to measure competence and performance and develop action plans to enhance practice. Competency-based CPD emphasizes self-directed learning processes and promotes the role of assessment as a professional expectation and obligation. Various approaches to defining general competencies for practice require the creation of specific performance metrics to be meaningful and relevant to the lifelong learning strategies of physicians. This paper describes the assumptions, advantages, and challenges of establishing a CPD system focused on competencies that improve physician performance and the quality and safety of patient care. Implications for competency-based CPD are discussed from an individual and organizational perspective, and a model to bridge the transition from residency to practice is explored.


JAMA Internal Medicine | 2014

The association between residency training and internists' ability to practice conservatively

Brenda E. Sirovich; Rebecca S. Lipner; Mary M. Johnston; Eric S. Holmboe

Importance: Growing concern about rising costs and potential harms of medical care has stimulated interest in assessing physicians' ability to minimize the provision of unnecessary care.
Objective: To assess whether graduates of residency programs characterized by low-intensity practice patterns are more capable of managing patients' care conservatively, when appropriate, and whether graduates of these programs are less capable of providing appropriately aggressive care.
Design, Setting, and Participants: Cross-sectional comparison of 6639 first-time takers of the 2007 American Board of Internal Medicine certifying examination, aggregated by residency program (n = 357).
Exposures: Intensity of practice, measured using the End-of-Life Visit Index, which is the mean number of physician visits within the last 6 months of life among Medicare beneficiaries 65 years and older in the residency program's hospital referral region.
Main Outcomes and Measures: The mean score by program on the Appropriately Conservative Management (ACM) (and Appropriately Aggressive Management [AAM]) subscales, comprising all American Board of Internal Medicine certifying examination questions for which the correct response represented the least (or most, respectively) aggressive management strategy. Mean scores on the remainder of the examination were used to stratify programs into 4 knowledge tiers. Data were analyzed by linear regression of ACM (or AAM) scores on the End-of-Life Visit Index, stratified by knowledge tier.
Results: Within each knowledge tier, the lower the intensity of health care practice in the hospital referral region, the better residency program graduates scored on the ACM subscale (P < .001 for the linear trend in each tier). In knowledge tier 4 (poorest), for example, graduates of programs in the lowest-intensity regions had a mean ACM score in the 38th percentile compared with the 22nd percentile for programs in the highest-intensity regions; in tier 2, ACM scores ranged from the 75th to the 48th percentile in regions from lowest to highest intensity. Graduates of programs in low-intensity regions tended, more weakly, to score better on the AAM subscale (in 3 of 4 knowledge tiers).
Conclusions and Relevance: Regardless of overall medical knowledge, internists trained at programs in hospital referral regions with lower-intensity medical practice are more likely to recognize when conservative management is appropriate. These internists remain capable of choosing an aggressive approach when indicated.
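The core analysis described above is a within-tier linear regression of program-level ACM scores on regional practice intensity (the End-of-Life Visit Index). The sketch below shows that kind of tier-stratified regression on synthetic data; every variable name and value is hypothetical and not taken from the study.

```python
# Hypothetical sketch of a tier-stratified linear regression, loosely mirroring
# the analysis described above. All data below are synthetic; the real study
# regressed program-level ACM scores on the End-of-Life Visit Index.
import numpy as np

rng = np.random.default_rng(42)

# Simulate 30 residency programs per knowledge tier: (tier, intensity, acm_score).
records = []
for tier in range(1, 5):
    intensity = rng.uniform(10, 40, size=30)               # hypothetical EOL Visit Index
    acm = 80 - 5 * tier - 0.8 * intensity + rng.normal(0, 5, size=30)
    records.extend((tier, x, y) for x, y in zip(intensity, acm))

# Within each knowledge tier, fit ordinary least squares of ACM score on intensity.
for tier in range(1, 5):
    x = np.array([r[1] for r in records if r[0] == tier])
    y = np.array([r[2] for r in records if r[0] == tier])
    slope, intercept = np.polyfit(x, y, 1)
    print(f"tier {tier}: estimated ACM change per unit of practice intensity = {slope:.2f}")
```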

Collaboration


Dive into Eric S. Holmboe's collaborations.

Top Co-Authors

Rebecca S. Lipner, American Board of Internal Medicine
William Iobst, American Board of Internal Medicine
Karen E. Hauer, University of California
Steven J. Durning, Uniformed Services University of the Health Sciences
Furman S. McDonald, American Board of Internal Medicine
Jennifer R. Kogan, University of Pennsylvania
Richard E. Hawkins, Uniformed Services University of the Health Sciences