Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Peter J. Katsufrakis is active.

Publication


Featured research published by Peter J. Katsufrakis.


Academic Medicine | 2009

Remediation of the Deficiencies of Physicians Across the Continuum From Medical School to Practice: A Thematic Review of the Literature

Karen E. Hauer; Andrea Ciccone; Thomas R. Henzel; Peter J. Katsufrakis; Stephen H. Miller; William A. Norcross; Maxine A. Papadakis; David M. Irby

Despite widespread endorsement of competency-based assessment of medical trainees and practicing physicians, methods for identifying those who are not competent and strategies for remediation of their deficits are not standardized. This literature review describes the published studies of deficit remediation at the undergraduate, graduate, and continuing medical education levels. Thirteen studies primarily describe small, single-institution efforts to remediate deficient knowledge or clinical skills of trainees or below-standard-practice performance of practicing physicians. Working from these studies and research from the learning sciences, the authors propose a model that includes multiple assessment tools for identifying deficiencies, individualized instruction, deliberate practice followed by feedback and reflection, and reassessment. The findings reveal a paucity of evidence to guide best practices of remediation in medical education at all levels. There is an urgent need for multi-institutional, outcomes-based research on strategies for remediation of less than fully competent trainees and physicians, with long-term follow-up to determine the impact on future performance.


Academic Medicine | 2011

Faculty development in assessment: the missing link in competency-based medical education.

Eric S. Holmboe; Denham S. Ward; Richard K. Reznick; Peter J. Katsufrakis; Karen Leslie; Vimla L. Patel; Donna D. Ray; Elizabeth A. Nelson

As the medical education community celebrates the 100th anniversary of the seminal Flexner Report, medical education is once again experiencing significant pressure to transform. Multiple reports from many of medicine's specialties and external stakeholders highlight the inadequacies of current training models to prepare a physician workforce to meet the needs of an increasingly diverse and aging population. This transformation, driven by competency-based medical education (CBME) principles that emphasize outcomes, will require more effective evaluation and feedback by faculty. Substantial evidence suggests, however, that current faculty are insufficiently prepared for this task across both the traditional competencies of medical knowledge, clinical skills, and professionalism and the newer competencies of evidence-based practice, quality improvement, interdisciplinary teamwork, and systems. The implication of these observations is that the medical education enterprise urgently needs an international initiative of faculty development around CBME and assessment. In this article, the authors outline the current challenges and provide suggestions on where faculty development efforts should be focused and how such an initiative might be accomplished. The public, patients, and trainees need the medical education enterprise to improve training and outcomes now.


Medical Teacher | 2009

Assessment of medical professionalism: Who, what, when, where, how, and ... why?

Richard E. Hawkins; Peter J. Katsufrakis; Matthew C. Holtman; Brian E. Clauser

Medical professionalism is increasingly recognized as a core competence of medical trainees and practitioners. Although the general and specific domains of professionalism are thoroughly characterized, procedures for assessing them are not well-developed. This article outlines an approach to designing and implementing an assessment program for medical professionalism that begins and ends with asking and answering a series of critical questions about the purpose and nature of the program. The process of exposing an assessment program to a series of interrogatives that comprise an integrated and iterative framework for thinking about the assessment process should lead to continued improvement in the quality and defensibility of that program.


Academic Medicine | 2010

The quality of written comments on professional behaviors in a developmental multisource feedback program.

Colleen Canavan; Matthew C. Holtman; Margaret Richmond; Peter J. Katsufrakis

Background Written feedback on professional behaviors is an important part of medical training, but little attention has been paid to the quality of written feedback and its expected impact on learning. A large body of research suggests that feedback is most beneficial when it is specific, clear, and behavioral. Analysis of feedback comments may reveal opportunities to improve the value of feedback. Method Using a directed content analysis, the authors coded and analyzed feedback phrases collected as part of a pilot of a developmental multisource feedback program. The authors coded feedback on various dimensions, including valence (positive or negative) and whether feedback was directed at the level of the self or of behavioral performance. Results Most feedback comments were positive and self-oriented, and they lacked actionable information that would make them useful to learners. Conclusions Comments often lack effective feedback characteristics. Opportunities exist to improve the quality of comments provided in multisource feedback.


Advances in Health Sciences Education | 2012

Validity Considerations in the Assessment of Professionalism.

Brian E. Clauser; Melissa J. Margolis; Matthew C. Holtman; Peter J. Katsufrakis; Richard E. Hawkins

During the last decade, interest in assessing professionalism in medical education has increased exponentially and has led to the development of many new assessment tools. Efforts to validate the scores produced by tools designed to assess professionalism have lagged well behind the development of these tools. This paper provides a structured framework for collecting evidence to support the validity of assessments of professionalism. The paper begins with a short history of the concept of validity in the context of psychological assessment. It then describes Michael Kane’s approach to validity as a structured argument. The majority of the paper then focuses on how Kane’s framework can be applied to assessments of professionalism. Examples are provided from the literature, and recommendations for future investigation are made in areas where the literature is deficient.


JAMA | 2013

The Evolution of the United States Medical Licensing Examination (USMLE): Enhancing Assessment of Practice-Related Competencies

Steven A. Haist; Peter J. Katsufrakis; Gerard F. Dillon

The United States Medical Licensing Examination (USMLE) is the only route to medical licensure for graduates of Liaison Committee on Medical Education (LCME)–accredited medical schools in the United States and for all graduates of international medical schools. It currently consists of 4 examinations: Step 1, assessing application of foundational science; Step 2 Clinical Skills, assessing communication, physical examination, and data interpretation skills; Step 2 Clinical Knowledge, assessing knowledge of clinical medicine; and Step 3, assessing application of clinical knowledge and patient management. In the early 1990s, the USMLE replaced the National Board of Medical Examiners (NBME) certification examinations and the Federation Licensing Examination (FLEX) program. Since its inception, the USMLE has undergone gradual evolution in design and format, with some major changes that include computerized examination delivery and the use of computerized patient simulations in 1999, and standardized patients introduced in 2004 to assess clinical skills. In 2004, USMLE undertook an in-depth review of the program's purpose, design, and format. The review resulted in 5 major recommendations adopted in 2009: (1) to focus on assessments that support state licensing authorities' decisions about a physician's readiness to provide patient care at entry into supervised practice and entry into unsupervised practice; (2) to adopt a general competencies schema consistent with national standards for the overall design, development, and scoring of USMLE; (3) to emphasize the scientific foundations of medicine in all components of USMLE; (4) to continue and enhance the assessment of clinical skills important to medical practice; and (5) to introduce assessment of an examinee's ability to obtain, interpret, and apply scientific and clinical information. Why change the USMLE? Medicine and clinical practice evolve, mandating revision in education and assessment. These changes have included increasing the use of clinical cases to facilitate problem-based learning early in the educational process,1 widespread use of standardized patients to teach and assess trainees,2 revisiting the basic sciences during the fourth year of medical school,3 and evolving technology resulting in increased adoption of high-fidelity simulations for teaching and assessment.4,5 The mandate to USMLE was that remaining relevant to evolving practice requires an evolution in the focus and design of assessment that parallels educational change. The USMLE review also identified unintended consequences of the current examination program. As examples, many medical students prepared for Step 1 with a "binge and purge" mentality; because students failed to recognize the value of the basic sciences in medical practice, many memorized information for short-term retention. Planned changes to emphasize basic sciences throughout USMLE (the third recommendation) strive to change this mentality by reinforcing a physician's ability to apply foundational science in patient care throughout the USMLE. In the Step 2 Clinical Skills examination, scoring of the standardized patient cases via a history checklist caused many examinees to ask as many questions as possible in the shortest amount of time. This "shotgun" approach to history taking does not resemble how medical educators and communication experts teach patient communication and does not reflect a behavior that is in the best interest of patients.
Consistent with the first recommendation for development of licensure and certification examinations that reflect practice readiness,6 the NBME undertook a series of analyses to inform changes to the USMLE. Subsequent activities of successful examinees compiled via questionnaires, analyses of health care records, and direct observation are used in practice analyses, along with expert judgment. Five national databases were analyzed and surveys were conducted of beginning interns7 and newly licensed physicians. Only 15% of the interns' experiences were ambulatory-based; interns were required to perform a variety of procedures, often with general attending supervision. Also common were complex communication tasks; information retrieval, evaluation, and integration; and ordering and interpreting as well as performing a variety of procedures. Moonlighting activities among residents were prevalent. The practice analyses suggest modifying the practice setting reflected in the examinations, assessing knowledge about a variety of procedures, and expanding represented competencies to include complex communication tasks and evidence-based medicine skills. In response to the second recommendation, the USMLE adopted the Accreditation Council for Graduate Medical Education general medical competency-based schema. Under this schema, the 6 competencies (medical knowledge, patient care, communication and interpersonal skills, practice-based learning and improvement, professionalism, and systems-based practice) and associated subcompetencies are used to guide test design, content development, and score reporting, and to organize content within examinations. The competencies will provide a framework for feedback to examinees and schools and will shape the USMLE research agenda. Recent and future changes to USMLE are outlined in the Table. In response to the fourth recommendation, the Step 2 Clinical Skills examination continues to evolve.


Academic Medicine | 2013

Enhancement of the Assessment of Physician–patient Communication Skills in the United States Medical Licensing Examination

Ruth B. Hoppe; Ann M. King; Kathleen M. Mazor; Gail E. Furman; Penelope Wick-Garcia; Heather Corcoran-Ponisciak; Peter J. Katsufrakis

The National Board of Medical Examiners (NBME) reviewed all components of the United States Medical Licensing Examination as part of a strategic planning activity. One recommendation generated from the review called for enhancements of the communication skills component of the Step 2 Clinical Skills (Step 2 CS) examination. To address this recommendation, the NBME created a multidisciplinary team that comprised experts in communication content, communication measurement, and implementation of standardized patient (SP)-based examinations. From 2007 through 2012, the team reviewed literature in physician–patient communication, examined performance characteristics of the Step 2 CS exam, observed case development and quality assurance processes, interviewed SPs and their trainers, and reviewed video recordings of examinee–SP interactions. The authors describe perspectives gained by their team from the review process and outline the resulting enhancements to the Step 2 CS exam, some of which were rolled out in June 2012.


Academic Medicine | 2016

The Residency Application Process: Pursuing Improved Outcomes Through Better Understanding of the Issues.

Peter J. Katsufrakis; Tara Uhler; Lee D. Jones

The residency application process requires that applicants, their schools, and residency programs exchange and evaluate information to accomplish successful matching of applicants to postgraduate training positions. The different motivations of these stakeholders influence both the types of information provided by medical schools and the perceived value and completeness of information received by residency programs. National standards have arisen to shape the type and format of information reported by medical schools about their students, though criticisms about the candor and completeness of the information remain. Growth in the number of applicants without proportional expansion of training positions and continued increases in the number of applications submitted by each applicant contribute to increases in the absolute number of applications each year, as well as the difficulty of evaluating applicants. Few standardized measures exist to facilitate comparison of applicants, and the heterogeneous nature of provided information limits its utility. Residency programs have been accused of excluding qualified applicants through use of numerical screening methods, such as United States Medical Licensing Examination (USMLE) Step 1 scores. Applicant evaluation includes review of standardized measurements such as USMLE Step 1 scores and other surrogate markers of future success. Proposed potential improvements to the residency application process include limiting applications; increasing the amount and/or types of information provided by applicants and by residency programs; shifting to holistic review, with standardization of metrics for important attributes; and fundamental reanalysis of the residency application process. A solution remains elusive, but these approaches may merit further consideration.


Academic Medicine | 2011

The relationship between direct observation, knowledge, and feedback: results of a national survey

Kathleen M. Mazor; Matthew M. Holtman; Yakov Shchukin; Janet Mee; Peter J. Katsufrakis

Background Multisource feedback can provide a comprehensive picture of a medical trainee's performance. The utility of a multisource feedback system could be undermined by lack of direct observation and accurate knowledge. Method The National Board of Medical Examiners conducted a national survey of medical students, interns, residents, chief residents, and fellows to learn the extent to which certain behaviors were observed, to examine beliefs about knowledge of each other's performance, and to assess feedback. Results Increased direct observation is associated with the perception of more accurate knowledge, which is associated with increased feedback. Some evaluators provide feedback in the absence of accurate knowledge of a trainee's performance, and others who have accurate knowledge miss opportunities for feedback. Conclusions Direct observation is a key component of an effective multisource feedback system. Medical educators and residency directors may be well advised to establish explicit criteria specifying a minimum number of observations for evaluations.


Journal of Graduate Medical Education | 2011

Feasibility of Implementing a Standardized Multisource Feedback Program in the Graduate Medical Education Environment

Margaret Richmond; Colleen Canavan; Matthew C. Holtman; Peter J. Katsufrakis

BACKGROUND Multisource feedback (MSF) is emerging as a central assessment method for several medical education competencies. Planning and resource requirements for a successful implementation can be significant. Our goal was to examine barriers and challenges to a successful multisite MSF implementation and to identify the benefits of MSF as perceived by participants. METHODS We analyzed the 2007-2008 field trial implementation of the Assessment of Professional Behaviors, an MSF program of the National Board of Medical Examiners, conducted with 8 residency and fellowship programs at 4 institutions. We used a multimethod analysis that draws on quantitative process indicators and qualitative participant experience data. Process indicators include program attrition, completion of implementation milestones, number of participants at each site, number of MSF surveys assigned and completed, and adherence to an experimental rater training protocol. Qualitative data include communications with each program and semistructured interviews conducted with key field trial staff to elicit their experiences with implementation. RESULTS Several implementation challenges were identified, including communication gaps and difficulty scheduling implementation and training workshops. Participant interviews indicate several program changes that should enhance feasibility, including increasing communication and streamlining the training process. CONCLUSIONS Multisource feedback is a complex educational intervention that has the potential to provide users with a better understanding of performance expectations in the graduate medical education environment. Standardization of the implementation processes and tools should reduce the burden on program administrators and participants. Further study is warranted to broaden our understanding of the resource requirements for a successful MSF implementation and to show how outcomes change as MSF gains broader acceptance.

Collaboration


Dive into Peter J. Katsufrakis's collaborations.

Top Co-Authors

Matthew C. Holtman
National Board of Medical Examiners

Gerard F. Dillon
National Board of Medical Examiners

Steven A. Haist
National Board of Medical Examiners

Brian E. Clauser
National Board of Medical Examiners

Kathleen M. Mazor
University of Massachusetts Medical School

Margaret Richmond
National Board of Medical Examiners

David M. Irby
University of California

Donald E. Melnick
National Board of Medical Examiners

Donna D. Ray
University of South Carolina