
Publication


Featured research published by Rebecca S. Lipner.


Annals of Internal Medicine | 2008

Performance during Internal Medicine Residency Training and Subsequent Disciplinary Action by State Licensing Boards

Maxine A. Papadakis; Gerald K. Arnold; Linda L. Blank; Eric S. Holmboe; Rebecca S. Lipner

The American Board of Internal Medicine (ABIM) sets the standards for, and certifies the competence of, physicians who train in internal medicine and its subspecialties. Residency program directors annually assess medical residents' performance, and medical knowledge is further assessed by the ABIM certification examination. The validity of these assessments for predicting performance in professionalism in practicing physicians is assumed but has not been tested. The Accreditation Council for Graduate Medical Education (ACGME) has historically accredited residency programs on the basis of their ability to educate residents. In 1999, the ACGME endorsed the measurement of a program's accomplishments by residents' success in attaining educational outcomes. The organization designated 6 competencies as measures of a residency program's effectiveness, one of which was professionalism (1). Much thought has gone into how best to teach and measure professionalism across specialties during graduate medical education (2-12). Once a physician is in clinical practice, the maintenance of certification is a measurement of professionalism (13-15). In previous studies (16, 17), we have shown that physicians who are disciplined by state licensing boards are more likely to have demonstrated unprofessional behavior in medical school. Thus, for some students, patterns of unprofessional behavior are recognized early and are long-lived. To investigate whether similar predictors of future problems could be found during residency training, we studied a cohort of all physicians who entered internal medicine residency training in the United States between 1990 and 2000 and subsequently became diplomates. We took advantage of the fact that the same ABIM criteria and instruments are used to assess medical residents' performance throughout the United States and that the ABIM gathers these assessments. In addition, internal medicine residents receive a grade for professionalism, unlike medical students, whose professionalism component may be embedded in the overall grade of their clerkship. A program director's specific assessment of a resident's professionalism imparts confidence that a considered judgment has been made on this competence.

Methods: We performed a retrospective cohort study of internal medicine residents to determine whether measures of performance during residency training were associated with disciplinary action by state licensing boards after the residents became diplomates and practicing physicians. Our sample comprised 66,171 residents who were trainees from 1990 to 2000 in any of the approximately 425 ACGME-accredited internal medicine residency programs. We excluded physicians in preliminary or transitional internship programs, 109 physicians who received disciplinary actions by state licensing boards before or during residency training (because disciplinary action would precede the performance indicator variables), and nondiplomates (physicians who entered an internal medicine residency but did not receive specialty certification).

Measurements (Performance Predictor Variables): We used predictor variables from the longitudinal records maintained by the ABIM to measure residents' performance. These included ratings in 6 components of the ABIM Residents Evaluation Summary, score on the first attempt of the ABIM internal medicine certification examination, years of residency training, and number of attempts on the ABIM internal medicine certification examination.
We obtained information on the residents' sex, age, country of birth, country of medical school, and internal medicine subspecialty certification (for example, gastroenterology or nephrology).

ABIM Residents Evaluation Summary: The ABIM Residents Evaluation Summary is a standardized, Web-based, global rating of clinical competence. Program directors must submit this evaluation annually to the ABIM. The components of the form changed during the study interval; however, the following 6 components were present throughout: medical interviewing, physical examination skills, procedural skills, medical knowledge, professionalism, and overall clinical competence. Each component has descriptive anchors that enumerate characteristics of best and worst performance and a 9-point scale in which residents are rated as unsatisfactory (score of 1 to 3), satisfactory (score of 4 to 6), or superior (score of 7 to 9). For example, in the 1997 and 2000 versions of the Residents Evaluation Summary, the description of the lowest rating of the professionalism component was "lacks altruism, accountability, integrity, commitment to excellence, duty, service, honor; disrespectful to other health care professionals; irresponsible; unreliable; not punctual; ineffective communicator; disruptive; disorganized; records tardy and/or illegible." The description for the highest rating was "aspires to altruism, accountability, excellence, duty, service, honor, integrity and respect for others; is responsive, reliable, punctual, and cooperative; displays initiative; provides effective leadership; maintains legible and timely records." At the completion of residency, a satisfactory rating in all components is required to take the ABIM certification examination. The reliability and validity of these ratings are supported by their correlation with certification examination scores and physician peer ratings (18, 19). Specifically, overall clinical competence ratings from program directors correlate with physician peer ratings of competence (r = 0.25; P < 0.010). In addition, examinees who did not pass the internal medicine certification examination on their initial attempt received lower ratings of clinical competence, on average, than other examinees. An internal assessment by the ABIM found that although a rating of 4 allowed the examinee to sit for the examination, program directors viewed it as a marginal rather than a satisfactory rating. This process of internal assessment included feedback from the ABIM Visit Program, discussions with the Association of Program Directors in Internal Medicine (APDIM) and program directors, joint ABIM/APDIM workshops on problem residents, and comprehensive policy discussions by the ABIM Committee on Evaluation of Clinical Competence. We therefore defined a rating of 4 or less as low for the competencies on the ABIM Residents Evaluation Summary.

ABIM Certification Examination Score: We made scores on the internal medicine certification examination comparable across examination years by using the Tucker linear equating process, a statistical procedure used in standardized testing to ensure that scores from multiple test administrations can be used interchangeably (20). The equated certification scores were transformed into standardized scores (z scores) and then entered as a continuous variable into the models that predicted future disciplinary action.

Outcome Variable: The outcome variable was disciplinary action by a state medical licensing board (Table 1). Table 1. Basis for Disciplinary Actions Taken by State Licensing Boards.

Disciplinary Action and Basis Categories: We examined U.S. state licensing board disciplinary (prejudicial) actions against physicians from 1 January 1990 through 20 November 2006. The study follow-up period for physicians who received disciplinary action began on the date of entry into residency training and ended on the date of the last disciplinary action before 20 November 2006. For those without disciplinary action, the follow-up period began on the date of entry into residency training and extended to 20 November 2006. Information about disciplined physicians and other public information concerning the disciplinary mandates by state medical boards were supplied to the ABIM by the American Board of Medical Specialties, which obtains its data from the Federation of State Medical Boards. No investigator outside of the ABIM had access to the names of the study physicians. The reason that a physician is disciplined by a state licensing board is called the basis for disciplinary action. Common examples include inappropriate prescribing of controlled substances, fraudulent billing practices, or failure to meet continuing medical education requirements. State licensing boards may impose penalties of varying degrees of severity. Category A, the most severe type of disciplinary action, is loss of the physician's license (21). Category B actions are restrictions of the physician's medical license, for example, in the form of probation. Category C actions are usually monetary fines, such as for failure to comply with continuing medical education requirements. Two investigators who were familiar with the designations of the state licensing boards and were blinded to each diplomate's predictor variables reviewed the information on each physician's disciplinary action and designated whether it demonstrated unprofessional behavior, incompetence, or neither/undeterminable (Table 1) (17). If a physician had only 1 disciplinary action, we designated the basis for that action as unprofessional behavior, incompetence, or neither/undeterminable. If a physician had more than 1 disciplinary action, we reviewed the basis for the action in the most severe category and made the designation. For physicians who received more than 1 action in the most severe category, we designated the most representative basis for action in that category. We further categorized whether the bases for disciplinary action reflected diminished quality of patient care or affected patient safety, such as inappropriate prescribing, negligence, or sexual misconduct. The kappa statistic for these judgments showed a high level of agreement between investigators (0.96 [CI, 0.95 to 0.96]). An independent expert in patient safety adjudicated designation disagreements.

Statistical Analysis: We first compared characteristics of internal medicine diplomates with and without disciplinary actions by using chi-square tests for proportions, C
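The score-equating, standardization, and inter-rater agreement steps described in this abstract can be illustrated with a short sketch. This is a minimal illustration only: it substitutes a generic linear (mean-sigma) equating for the Tucker procedure, uses invented arrays in place of the ABIM score data, and relies on scipy/scikit-learn routines; none of these variable names or numbers come from the study.

```python
# Minimal sketch (not the study's code): (1) put scores from two examination
# administrations on a common scale with a simple linear equating, (2) convert
# the equated scores to z scores, and (3) compute inter-rater agreement with
# Cohen's kappa. All data below are invented for illustration.
import numpy as np
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(0)

# Invented first-attempt certification scores from two examination years.
scores_year_a = rng.normal(500, 90, size=1000)   # reference form
scores_year_b = rng.normal(480, 100, size=1000)  # new form to be equated

# Simple linear equating of year B onto the year A scale (a simplification of
# the Tucker method, which also adjusts for group differences via an anchor test).
equated_b = scores_year_a.mean() + scores_year_a.std(ddof=1) * (
    (scores_year_b - scores_year_b.mean()) / scores_year_b.std(ddof=1)
)

# Standardize the pooled, equated scores to z scores for use as a continuous
# predictor in a regression model.
pooled = np.concatenate([scores_year_a, equated_b])
z_scores = (pooled - pooled.mean()) / pooled.std(ddof=1)

# Agreement between two reviewers who each label the basis for disciplinary
# action as "unprofessional", "incompetence", or "neither".
rater_1 = ["unprofessional", "incompetence", "neither", "unprofessional", "neither"]
rater_2 = ["unprofessional", "incompetence", "neither", "unprofessional", "incompetence"]
print(cohen_kappa_score(rater_1, rater_2))
```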


JAMA Internal Medicine | 2008

Association Between Maintenance of Certification Examination Scores and Quality of Care for Medicare Beneficiaries

Eric S. Holmboe; Yun Wang; Thomas P. Meehan; Janet P. Tate; Shih-Yieh Ho; Katie S. Starkey; Rebecca S. Lipner

BACKGROUND: The relationship between physicians' cognitive skill and the delivery of evidence-based processes of care is not well characterized. Therefore, we set out to determine associations between general internists' performance on the American Board of Internal Medicine maintenance of certification examination and the receipt of important processes of care by Medicare patients.

METHODS: Physicians were grouped into quartiles based on their performance on the American Board of Internal Medicine examination. Hierarchical generalized linear models examined associations between examination scores and the receipt of processes of care by Medicare patients. The main outcome measures were the associations between the physicians' performance on the American Board of Internal Medicine examination and diabetes care (using a composite measure of hemoglobin A1c and lipid testing and retinal screening), mammography, and lipid testing in patients with cardiovascular disease, adjusted for the number of Medicare patients with diabetes and cardiovascular disease in a physician's practice panel; frequency of visits; patient comorbidity, age, and ethnicity; and physician training history and type of practice.

RESULTS: Physicians scoring in the top quartile were more likely to perform processes of care for diabetes (composite measure odds ratio [OR], 1.17; 95% confidence interval [CI], 1.07-1.27) and mammography screening (OR, 1.14; 95% CI, 1.08-1.21) than physicians in the lowest quartile, even after adjustment for multiple factors. There was no significant difference among the groups in lipid testing of patients with cardiovascular disease (OR, 1.00; 95% CI, 0.91-1.10).

CONCLUSION: Our findings suggest that physician cognitive skills, as measured by a maintenance of certification examination, are associated with higher rates of processes of care for Medicare patients.
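As a rough illustration of the quartile analysis summarized above, the sketch below fits an ordinary logistic regression and exponentiates the coefficients to obtain odds ratios with 95% confidence intervals. It is a deliberate simplification: the study used hierarchical generalized linear models, and the data frame, column names, and values here are all invented.

```python
# Simplified sketch (not the study's model): odds ratios for receiving a care
# process by examination-score quartile, from a plain logistic regression.
# The study itself used hierarchical GLMs; data and column names are invented.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 2000
df = pd.DataFrame({
    "received_process": rng.integers(0, 2, n),   # 1 = process of care delivered
    "score_quartile": rng.integers(1, 5, n),      # 1 (lowest) to 4 (highest)
    "patient_age": rng.normal(75, 7, n),
    "n_visits": rng.poisson(4, n),
})

# Logistic model with the lowest quartile as the reference category.
model = smf.logit(
    "received_process ~ C(score_quartile, Treatment(reference=1)) + patient_age + n_visits",
    data=df,
).fit(disp=False)

# Exponentiated coefficients are odds ratios; exponentiated confidence limits
# give the 95% CI for each odds ratio.
odds_ratios = np.exp(model.params).rename("OR")
or_ci = np.exp(model.conf_int()).rename(columns={0: "2.5%", 1: "97.5%"})
print(pd.concat([odds_ratios, or_ci], axis=1))
```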


Academic Medicine | 2002

The value of patient and peer ratings in recertification.

Rebecca S. Lipner; Linda L. Blank; Brian F. Leas; Gregory S. Fortna

Recertification of practicing physicians, also termed "maintenance of certification," is now at the forefront of activities for virtually all member boards of the American Board of Medical Specialties. The goal of recertification is to maintain high standards of medical practice that protect the public by using fair, valid, and reliable methods to assess professional competence. To fulfill this goal, a comprehensive framework that integrates self-evaluation and practice improvement with a secure, proctored examination defines the recertification process for the 21st century. Concern about the inability of proctored examinations to assess the full spectrum of clinical competence, including humanistic qualities, professionalism, and communication skills, stimulated the American Board of Internal Medicine (ABIM) to introduce the "patient and peer assessment module," a practice-based assessment tool, into its new recertification program called Continuous Professional Development (CPD). Whereas residents seeking initial Board certification are required by the ABIM to achieve satisfactory ratings of the core components of clinical competence from their program directors, there is no parallel method for practicing physicians who seek recertification. The ABIM's CPD program is composed of three components: self-evaluation, a secure examination, and verification of credentials; the physician pays the fee for the program. The first component, self-evaluation, comprises a series of modular examinations taken at home. Its purpose is both to stimulate study in the disciplines of internal medicine and to encourage improvement of one's practice. The second component, the secure examination, is a traditional proctored examination featuring single-best-answer questions designed to evaluate clinical knowledge and judgment about essential aspects of patient care that a physician should have without reference to medical resources. The third component, verification of credentials, requires both good standing in a hospital or health care delivery system and maintenance of an unchallenged, unrestricted license to practice medicine. As part of the self-evaluation component, physicians may select, as an elective, the patient and peer assessment module, which incorporates confidential, anonymous surveys of patient and peer ratings pertaining to physician–patient communication and peer assessment of clinical performance. The module also requires completing self-rating surveys and a quality improvement plan (QUIP). The ratings are administered through a touch-tone telephone, using a toll-free number and an automated voice-response system. Once the self-ratings and the required number of patient and peer ratings are achieved, ABIM provides performance feedback; there is no passing standard associated with this module. After submitting the QUIP, the diplomate receives credit for the module. The feedback and QUIP are intended to stimulate diplomates to self-reflect and improve the quality of the medical care they provide. Prior to implementation, a pilot study assessed the feasibility of the module using 100 volunteers. Participants highly approved of the survey questions, and more than two thirds agreed that the module was a valuable learning experience. The technology used to record the survey ratings performed well. The purpose of this study was to assess the value of the patient and peer assessment module. Specifically, we raised four measurement questions:


Academic Medicine | 2000

Certification and specialization: do they matter in the outcome of acute myocardial infarction?

John J. Norcini; Harry R. Kimball; Rebecca S. Lipner

Purpose To learn whether there are differences among certified and self-designated cardiologists, internists, and family practitioners in terms of the mortality of their patients with acute myocardial infarction (AMI). Method Data on all patients admitted with AMI were collected for calendar year 1993 by the Pennsylvania Health Care Cost Containment Council and analyzed. Certified and self-designated family practitioners, internists, and cardiologists (n = 4,546) were compared with respect to the characteristics of their patients' illnesses. In addition, a regression model was fitted in which mortality was the dependent measure and the independent variables were the probability of death, hospital characteristics (location and the availability of advanced cardiac care), and physician characteristics (patient volume, years since graduation from medical school, specialty, and certification status). Results On average, cardiologists treated more patients than did generalists, and their patients were less severely ill. In the regression analysis, all variables were statistically significant except the availability of advanced cardiac care. Holding all other variables constant, treatment by a certified physician was associated with a 15% reduction in mortality among patients with AMI. Conclusions Lower patient mortality was associated with treatment by physicians who were cardiologists, cared for larger numbers of AMI patients, were closer to their graduation from medical school, and were certified.


Annals of Internal Medicine | 2006

Who Is Maintaining Certification in Internal Medicine—and Why? A National Survey 10 Years after Initial Certification

Rebecca S. Lipner; Wayne H. Bylsma; Gerald K. Arnold; Gregory S. Fortna; John Tooker; Christine K. Cassel

Context: Maintenance of certification (MOC) by the American Board of Internal Medicine (ABIM) requires participation in its Continuous Professional Development program. Understanding the attitudes and perceptions of internists regarding the MOC process would be helpful in increasing participation in quality improvement efforts.

Contribution: Diplomates whose ABIM certificates were dated to expire by December 2002 were surveyed regarding reasons for participating or not participating in the program. The most common reasons for participation were to improve professional image and to update knowledge. Nonparticipants perceived MOC as too time-consuming.

Implications: In general, physicians seem to value the MOC process for its effort to improve quality of care and patient safety. (The Editors)

Improving the quality of patient care dominates the health care agenda (1-4). Recently, a great deal of attention has focused on redesigning health care delivery systems to make them more fail-safe, but there is no denying that state-of-the-art knowledge on the part of the individual physician remains a key factor in ensuring quality care (5). Professional societies and certifying boards exist to improve and assess the quality of health care provided by an individual physician. Professional societies, such as the American College of Physicians (ACP), provide continuing education to translate medical knowledge into best practices and strive to foster excellence and professionalism in the practice of medicine. The 24 certifying boards of the American Board of Medical Specialties (ABMS) now issue time-limited certificates to physicians who meet rigorous standards through a process that recognizes that medical knowledge and practice must be renewed to demonstrate ongoing competence in an environment with rapidly changing medical information and technology (6-9). The American Board of Internal Medicine (ABIM), the ABMS certifying board that issues the largest number of certificates, offers certificates in general internal medicine, 9 subspecialties, and 5 areas of added qualifications. In 2002, the ABMS adopted a framework in conjunction with the Accreditation Council for Graduate Medical Education's Outcome Project (10) and the General Competencies Project (11) for all boards to evaluate physician competence at the conclusion of training (initial certification) and throughout their careers (Maintenance of Certification [MOC]). The overarching goal for certification and MOC is to protect the public and patients by attesting to the quality, safety, and effectiveness of U.S. medical practitioners (6). In the 1970s and 1980s, the ABIM had a program for voluntary recertification of lifetime certificates, which drew relatively few participants. Consequently, in 1990, the ABIM began issuing certificates with a 10-year duration. These certificates must be renewed through the MOC program to remain valid. The ABIM's MOC program, called Continuous Professional Development (CPD), began in 1995. As of December 2003, 77% of physicians holding 10-year certificates in internal medicine only (general internists) had enrolled in the program. Eighty-six percent of physicians with 10-year certificates in both internal medicine and a subspecialty or added qualifications (subspecialists) enrolled in the program for their subspecialty, and 60% of this same cohort enrolled for their internal medicine certificate.
Because board-certified physicians (called diplomates) lose their certification status after 10 years, both the ABIM, which administers the program, and the ACP, whose membership encompasses approximately 119,000 internal medicine generalists, subspecialists, and students, wished to understand why 23% of general internists and 40% of subspecialists are not renewing their internal medicine certificate and why 14% of subspecialists are not renewing their subspecialty or added qualifications certificate. Because little is known about the forces that drive participation in MOC, the ABIM and ACP conducted a national survey of ABIM diplomates who earned certificates in internal medicine, a subspecialty, or an area of added qualifications in 1990, 1991, or 1992. This group represents an early cohort of diplomates with 10-year certificates who had had sufficient time to renew them. This study aimed to identify factors that influence participation in MOC and explore how diplomates perceive the value of the MOC process. We describe practice characteristics, perceptions, and attitudes about MOC and reasons for maintaining or not maintaining certification. We compare attitudes of general internists with those of subspecialists and of diplomates who have completed, have enrolled in but have not completed, or have never enrolled in MOC. We conclude with implications for MOC programs and the quality movement.

Methods (Program Description): The ABIM's MOC program has 3 components: 1) verification of credentials, 2) proctored examination, and 3) self-evaluation (12). Verification of credentials means physicians must have a valid and unrestricted license and provide a recommendation from an officer of a hospital or health care organization about their professional standing in the community. The proctored examination measures medical knowledge in a discipline, requires a passing grade, is given at computer testing sites, and may be taken as early as 5 years before a certificate expires. Self-evaluations encourage lifelong learning in medical knowledge or skills and practice-based performance and improvement. During the period of the study, most diplomates completed self-evaluations consisting of open-book, take-home modules of 60 multiple-choice questions in internal medicine, a subspecialty, or an area of added qualifications. As the MOC program evolves, there are a greater number of options and more flexibility. Physicians are encouraged to complete the program over 10 years. Continuing medical education (CME) credit accompanies successful completion of the proctored examination and self-evaluation modules. On average, diplomates receive 120 CME credits for completing the program requirements (ABIM internal report, November 2004. Unpublished data.).

Study Design and Participants: The sampling frame of 23,108 diplomates included those initially certified by ABIM in 1990 or afterward whose certificate would expire by December 2002. These diplomates held a total of 24,344 time-limited certificates as of 24 February 2004. To ensure a representative sample of participants who completed the MOC, each diplomate was assigned to 1 of 39 internal medicine, subspecialty, or added qualifications groups on the basis of the certificate or certificates earned in 1990, 1991, or 1992; the kind of MOC sought; and status in MOC at the time of the survey.
The 3 kinds of MOC include 1) general internists eligible to renew a time-limited internal medicine certificate: 5,898 diplomates who earned an internal medicine certificate in 1990, 1991, or 1992 and no other certificates in later years; 2) subspecialists eligible to renew a time-limited internal medicine certificate: 7,367 diplomates who earned an internal medicine certificate in 1990, 1991, or 1992 and a subspecialty or added qualifications certificate in later years; and 3) subspecialists eligible to renew a time-limited subspecialty or added qualifications certificate: 9,843 diplomates who earned a subspecialty or added qualifications certificate in 1990, 1991, or 1992 (most possess an internal medicine certificate without an expiration date). Status in MOC was also divided into 3 categories: 1) 13,455 physicians who completed the program, 2) 3,656 who enrolled but had not completed the program, and 3) 5,997 who had never enrolled. Diplomates who could have enrolled for multiple areas (for example, diplomates who earned an internal medicine and 2 different subspecialty certificates) were randomly assigned to 1 of their possible groups. A stratified random sample of 3,500 diplomates was selected so that percentage-point estimates within each kind or status group would have only a 5% margin of error. Some subspecialty groups were oversampled to ensure a 95% probability of collecting responses from at least 2 physicians in each group. Those not enrolled in MOC were oversampled because they were regarded as being less likely to respond. Sample size requirements and oversampling rates were determined by using the Power Analysis and Sample Size (PASS 2000) software (13) and the PROBHYPR (cumulative hypergeometric) function in SAS, version 9.0 (SAS Institute, Cary, North Carolina). Detailed analyses of the 95% CIs for all estimates show that the accuracy of the primary estimates was within the range limits originally planned for the study. (Sample size calculations used in the planning stage of the study were based on the assumption that survey percentage estimates around 50% would have 95% CIs of 5 percentage points [that is, 10% of the estimate]. As expected, the median 95% CI for estimates between 45% and 55% [n = 88] was 5% with a range of 2% to 12%. For estimates ranging between 25% and 75% [n = 552], the median 95% CI was 6% with a range of 2% to 18% based on an estimate of 50%. Of primary concern was precision of estimates related to the major objectives of the study. These estimates [n = 478] were taken from the questions related to the reasons why physicians did or did not participate in the MOC program [questions 4, 5, 11, and 12] and whether physicians planned to participate in future MOC programs [questions 6 and 13]. The median 95% CI for these survey objectives estimates based on a sample estimate of 50% was 6% with a range of 1% to 85%.) Survey data were collected between mid-March and 6 August 2004. A prenotification letter was sent to the entire sample on 2 March 2004. A 4-page self-administered questionnaire was mailed on 12 March 2004, followed by a postcard reminder on 19 March, and second and third questionnaires on 22 April and 7 June, respectively
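The per-stratum sample sizes described above follow from the usual margin-of-error formula for a proportion. The sketch below reproduces that arithmetic with an optional finite-population correction; it is illustrative only, and the stratum size used is invented rather than taken from the study.

```python
# Sketch of the sample-size arithmetic behind a +/-5 percentage-point margin of
# error at 95% confidence for a proportion near 50%, with an optional
# finite-population correction. Illustrative only; the stratum size is invented.
import math

def required_n(margin, p=0.5, z=1.96, population=None):
    """Sample size for estimating a proportion p within +/- margin at ~95% confidence."""
    n0 = (z ** 2) * p * (1 - p) / margin ** 2
    if population is not None:                      # finite-population correction
        n0 = n0 / (1 + (n0 - 1) / population)
    return math.ceil(n0)

# Infinite-population case: about 385 respondents for a 5-point margin around 50%.
print(required_n(0.05))

# With a hypothetical stratum of 600 eligible diplomates, the correction
# reduces the requirement substantially (about 235 respondents).
print(required_n(0.05, population=600))
```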


JAMA | 2008

Assessing Quality of Care: Knowledge Matters

Eric S. Holmboe; Rebecca S. Lipner; Ann Greiner

Significant quality-of-care gaps are well documented in the United States. These reports have focused mostly on underuse of important processes of care, as captured by performance measures, and on some outcomes of care. Others have argued that the cause of underuse of these evidence-based processes of care is usually not deficient physician knowledge about whether to perform the examination or order the test, but rather poorly designed, dysfunctional microsystems of care unable to deliver effective, efficient, and reliable care. Consequently, much of the recent work in quality improvement has focused on changing microsystems of care "to deliver the right care for the right patient at the right time, all the time." What is often overlooked in quality improvement, but equally important, is that effective microsystems must have highly competent clinicians, who possess sufficient knowledge and clinical skills to make and execute evidence-based decisions, exercise informed clinical judgment, and deal effectively with uncertainty. Clinical judgment and the ability to deal with uncertainty are especially critical with respect to misuse and overuse of processes of care. Misuse and overuse of processes of care (eg, overprescribing antibiotics and unnecessary imaging and procedures) put patients at greater risk for unnecessary complications. Physician knowledge and clinical judgment also are central to making correct diagnoses. The majority of current performance measures assume a correct diagnosis; more than that, current measures cover only a fraction of the myriad health problems seen by physicians on a daily basis and likely will never address unusual or less common but no less important or serious conditions. Furthermore, many symptoms and signs that prompt patients to see physicians are often not well defined, and a diagnosis often remains uncertain after the initial visit. Clinical judgment is crucial in determining when further intervention is necessary or when watchful waiting may be the best approach. Even when an accurate diagnosis is made, prudent clinical judgment is necessary to determine appropriate care, including the correct diagnostic tests, critical to the efficiency and effectiveness aspects of quality. Our objectives in this Commentary are to discuss the relationship between medical knowledge and quality and how the secure examination component of specialty board certification—with its primary focus on assessing physician knowledge, diagnostic acumen, and clinical judgment—is an important complement to current performance measures. Recognizing this importance, in 2006 the American Board of Internal Medicine instituted a new requirement for all physicians with time-limited certificates to evaluate their performance in practice to address physician competence in practice-based learning and improvement and systems-based practice. We hope this discussion will stimulate dialogue about the need for more comprehensive physician performance measurement in the era of public reporting.


Journal of Continuing Education in The Health Professions | 2013

Specialty Board Certification in the United States: Issues and Evidence

Rebecca S. Lipner; Brian J. Hess; Robert L. Phillips

Background: The American Board of Medical Specialties (ABMS) certification and maintenance of certification (MOC) programs strive to provide the public with guidance about a physician's competence. This study summarizes the literature on the effectiveness of these programs. Method: A literature search was conducted for studies published between 1986 and April 2013 and limited to ABMS certification. A modified version of Kirkpatrick's 4 levels of program evaluation included the reaction of stakeholders to certification, the extent to which physicians are encouraged to improve, the relationship between performance in the programs and nonclinical external measures of physician competence, and the relationship of performance in the programs with clinical quality measures. Results: Patients and hospitals place a high value on board certification and on physician participation in MOC. Physicians are conflicted as to whether the effort involved is worth its value. Self-reported evidence shows improvement in knowledge, practice infrastructure, communication with patients and peers, and clinical care. Certification performance is generally related to nonclinical external measures such as types of training, practice characteristics, demographics, and disciplinary actions. In general, physicians who are board certified provide better patient care, albeit the results have modest effect sizes and are not unequivocal. Conclusions: Certification boards should continuously try to improve their programs in response to feedback from stakeholders, changes in the way physicians practice, and the growth in the fields of measurement and technology. Keeping pace with these changes in a responsible and evidence-based way is important.


Journal of Continuing Education in The Health Professions | 2013

American Board of Medical Specialties Maintenance of Certification: Theory and Evidence Regarding the Current Framework

Richard E. Hawkins; Rebecca S. Lipner; Hazen P. Ham; Robin Wagner; Eric S. Holmboe

The American Board of Medical Specialties Maintenance of Certification Program (ABMS MOC) is designed to provide a comprehensive approach to physician lifelong learning, self-assessment, and quality improvement (QI) through its 4-part framework and coverage of the 6 competencies previously adopted by the ABMS and the Accreditation Council for Graduate Medical Education (ACGME). In this article, the theoretical rationale and exemplary empiric data regarding the MOC program and its individual parts are reviewed. The value of each part is considered in relation to 4 criteria about the relationship of the competencies addressed within that part to (1) patient outcomes, (2) physician performance, (3) validity of the assessment or educational methods utilized, and (4) learning or improvement potential. Overall, a sound theoretical rationale and a respectable evidence base exist to support the current structure and elements of the MOC program. However, it is incumbent on the ABMS and ABMS member boards to continue to examine their programs moving forward to assure the public and the profession that they are meeting expectations, are clinically relevant, and provide value to patients and participating physicians, and to refine and improve them as ongoing research indicates.


Annals of Internal Medicine | 1985

An Analysis of the Knowledge Base of Practicing Internists as Measured by the 1980 Recertification Examination

John J. Norcini; Rebecca S. Lipner; John A. Benson; George D. Webster

The performance of practicing internists on the American Board of Internal Medicine's 1980 Recertification Examination was examined in two studies. In the first study, a psychometric common-item equating technique was used to compare the performance of 1980 recertification candidates with that of 1979 certification candidates. Results showed that the knowledge base of practicing internists was similar to that of residents completing training. The second study analyzed the performance of 1980 recertification candidates to determine whether being certified or having an interest in a subspecialty affects a physician's performance on items in that area. The results showed that subspecialists do significantly better than general internists on items pertaining to their area of specialization. Similar outcomes were found for internists with a special interest in a subspecialty area. These findings establish the importance of continued periodic evaluation and support the development of an evaluation tool tailored to the physician's area of concentration.


Simulation in healthcare : journal of the Society for Simulation in Healthcare | 2010

A technical and cognitive skills evaluation of performance in interventional cardiology procedures using medical simulation.

Rebecca S. Lipner; John C. Messenger; Roberta Kangilaski; Donald S. Baim; David R. Holmes; David O. Williams; Spencer B. King

Introduction: Interventional cardiology, with large numbers of complex procedures and potentially serious complications, stands out as an obvious discipline in which to apply simulation to help prevent medical errors. The objective of the study was to determine whether it is feasible to develop a valid and reliable evaluation approach using medical simulation to assess technical and cognitive skills of physicians performing coronary interventions. Methods: Clinical case scenarios were developed by a committee of subject matter experts, who defined key decision nodes, such as stent positioning, and introduced unanticipated complications, such as coronary perforation. Subjects were 115 physicians from 10 U.S. healthcare institutions at three levels of expertise: novice, skilled, or expert. Subjects completed a questionnaire, one practice case and six test cases on a SimSuite simulator (Medical Simulation Corporation, Denver, CO), and an opinion survey. Clinical specialists rated subjects' procedural skills. Results: A technical and cognitive skills evaluation of performance in interventional cardiology procedures using medical simulation yielded results that distinguished between a novice group and skilled or expert groups (P < 0.001), and scores correlated moderately with clinical specialist ratings of subjects' procedural skills and with the number and complexity of procedures performed in practice during the previous year. Approximately 90% of subjects generally thought that the cases were well simulated and presented situations encountered in practice. Conclusions: This study suggests that an evaluation approach using high-fidelity medical simulation to assess technical and cognitive skills of physicians performing interventional cardiology procedures can be used to identify physicians who are extremely poor performers and not likely to be providing appropriate patient care. We believe that use of a high-fidelity simulator incorporating situations with multiple events, immediate feedback, and high sensory load would complement the results of traditional written examinations of medical knowledge to provide a more comprehensive assessment of physician ability in interventional cardiology.
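The two findings reported in this abstract, a difference across expertise groups and a moderate correlation between simulator scores and specialist ratings, can be illustrated with standard routines. The sketch below uses invented scores, and a Kruskal-Wallis test plus a Spearman correlation serve as generic stand-ins; the abstract does not specify which tests the study actually used.

```python
# Illustrative sketch only (invented data): compare simulator scores across three
# expertise groups and correlate scores with external ratings, as generic
# stand-ins for the kinds of analyses summarized in the abstract.
import numpy as np
from scipy.stats import kruskal, spearmanr

rng = np.random.default_rng(2)
novice = rng.normal(55, 10, 40)
skilled = rng.normal(70, 10, 40)
expert = rng.normal(74, 10, 35)

# Do the three groups differ in simulator performance?
stat, p = kruskal(novice, skilled, expert)
print(f"Kruskal-Wallis H = {stat:.2f}, p = {p:.4f}")

# How strongly do simulator scores track specialist ratings of the same subjects?
scores = np.concatenate([novice, skilled, expert])
ratings = scores / 10 + rng.normal(0, 1.5, scores.size)  # invented ratings, loosely related
rho, p = spearmanr(scores, ratings)
print(f"Spearman rho = {rho:.2f}, p = {p:.4f}")
```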

Collaboration


Top co-authors of Rebecca S. Lipner and their affiliations:

Brian J. Hess, American Board of Internal Medicine
Weifeng Weng, American Board of Internal Medicine
Lorna A. Lynn, American Board of Internal Medicine
Bradley M. Gray, American Board of Internal Medicine
Gerald K. Arnold, American Board of Internal Medicine
Jonathan L. Vandergrift, American Board of Internal Medicine
Steven A. Haist, National Board of Medical Examiners
Steven J. Durning, Uniformed Services University of the Health Sciences