Janet Mee
National Board of Medical Examiners
Publications
Featured research published by Janet Mee.
Academic Medicine | 2006
Melissa J. Margolis; Brian E. Clauser; Monica M. Cuddy; Andrea Ciccone; Janet Mee; Polina Harik; Richard E. Hawkins
Background Multivariate generalizability analysis was used to investigate the performance of a commonly used clinical evaluation tool. Method Practicing physicians were trained to use the mini-Clinical Evaluation Exercise (mini-CEX) rating form to rate performances from the United States Medical Licensing Examination Step 2 Clinical Skills examination. Results Differences in rater stringency made the greatest contribution to measurement error; having more raters rate each examinee, even on fewer occasions, could enhance score stability. Substantial correlated error across the competencies suggests that decisions about one scale unduly influence those on others. Conclusions Given the appearance of a halo effect across competencies, score interpretations that assume assessment of distinct dimensions of clinical performance should be made with caution. If the intention is to produce a single composite score by combining results across competencies, the presence of these effects may be less critical.
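The design trade-off described above (more raters versus more occasions) can be sketched as a generalizability-theory D-study projection. The variance components below are hypothetical illustrations, not the study's actual estimates:

```python
# D-study sketch: projected relative error variance for alternative
# measurement designs. Variance components are hypothetical.
def relative_error_variance(var_rater, var_occasion, var_residual,
                            n_raters, n_occasions):
    """Relative error variance for a design with n_raters raters
    observing each examinee on n_occasions occasions."""
    return (var_rater / n_raters
            + var_occasion / n_occasions
            + var_residual / (n_raters * n_occasions))

# Hypothetical components; rater stringency dominates, as in the study.
var_r, var_o, var_res = 0.30, 0.05, 0.20

few_raters = relative_error_variance(var_r, var_o, var_res,
                                     n_raters=1, n_occasions=4)
more_raters = relative_error_variance(var_r, var_o, var_res,
                                      n_raters=4, n_occasions=1)

# When rater variance is the largest component, adding raters
# reduces error more than adding occasions.
print(more_raters < few_raters)  # True
```

Under these assumed components, four raters on one occasion yields less error (0.175) than one rater on four occasions (0.3625), matching the abstract's conclusion about rater stringency.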
Academic Medicine | 2011
Mark R. Raymond; Janet Mee; Ann King; Steven A. Haist; Marcia L. Winward
Background Studies completed over the past decade suggest the presence of a gap between what students learn during medical school and their clinical responsibilities as first-year residents. The purpose of this survey was to verify on a large scale the responsibilities of residents during their initial months of training. Method Practice analysis surveys were mailed in September 2009 to 1,104 residency programs for distribution to an estimated 8,793 first-year residents. Surveys were returned by 3,003 residents from 672 programs; 2,523 surveys met inclusion criteria and were analyzed. Results New residents performed a wide range of activities, from routine but important communications (obtain informed consent) to complex procedures (thoracentesis), often without the attending physician present or otherwise involved. Conclusions Medical school curricula and the content of competence assessments prior to residency should consider more thorough coverage of the complex knowledge and skills required early in residency.
Journal of Continuing Education in The Health Professions | 2009
Richard E. Hawkins; Beatrix Roemheld-Hamm; Andrea Ciccone; Janet Mee; Alfred F. Tallia
Introduction: Deficiencies in physician competence play an important role in medical errors and poor-quality health care. National trends toward implementation of continuous assessment of physicians hold potential for significant impact on patient care because minor deficiencies can be identified before patient safety is threatened. However, the availability of assessment methods and the quality of existing tools vary, and a better understanding of the types of deficiencies seen in physicians is required to prioritize the development and enhancement of assessment and remediation methods. Methods: Surveys of physicians and licensing authorities and analysis of the Federation of State Medical Boards (FSMB) Board Action Data Bank were used to collect information describing the nature and types of problems seen in practicing physicians. Focus groups, in-depth interviews with key professional stakeholders, and state medical board site visits provided additional information about deficiencies in physician competence. Results: Quantitative and qualitative analyses identified (1) communication skills as a priority target for assessment approaches that should also focus on professional behaviors, knowledge, clinical judgment, and health-care quality; and (2) that differences between the regulatory approaches of licensing and certifying bodies contribute to a culture that limits effective self-assessment and continuous quality improvement. System problems impacting physician performance emerged as an important theme in the qualitative analysis. Discussion: Considering alternative perspectives from the regulatory, education, and practice communities helps to define assessment priorities for physicians, facilitating development of a coherent and defensible approach to assessment and continuing professional development that promises to provide a more comprehensive solution to problems of health-care quality in the United States.
International Journal of Testing | 2013
Brian E. Clauser; Janet Mee; Melissa J. Margolis
This study investigated the extent to which the format of performance data affected data use in Angoff standard-setting exercises. Judges from two standard-setting exercises (a total of five panels) were randomly assigned to one of two groups. The full-data group received two types of data: (1) the proportion of examinees selecting each option and (2) plots showing the proportion of examinees selecting the correct answer by deciles defined by total test score. The options-only group received only the option data. Results indicated that judgments in the full-data group were in substantially closer alignment with the empirical data than those in the options-only group. This suggests that either the decile data alone or the combination of both pieces of data leads to a greater reliance on the data. The results are discussed from the perspective of the validity/credibility of the resulting cut scores.
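The "full-data" display described above, proportion answering correctly within deciles of total test score, can be sketched from item response data. The toy response model below is an assumption for illustration, not the study's data:

```python
# Sketch: proportion selecting the keyed answer by total-score decile,
# the second data type shown to the full-data group. Data are simulated.
import numpy as np

rng = np.random.default_rng(0)
total_score = rng.normal(size=1000)           # simulated total test scores
# Toy model: probability of a correct answer rises with ability.
correct = rng.random(1000) < 1 / (1 + np.exp(-total_score))

# Assign each examinee to a decile of the total-score distribution.
cutpoints = np.quantile(total_score, np.linspace(0.1, 0.9, 9))
deciles = np.digitize(total_score, cutpoints)

p_correct_by_decile = [correct[deciles == d].mean() for d in range(10)]
```

For a well-functioning item, judges would see this proportion rise across deciles, which is the empirical pattern their Angoff estimates were compared against.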
Academic Medicine | 2008
Brian E. Clauser; Polina Harik; Melissa J. Margolis; Janet Mee; Kimberly A. Swygert; Thomas Rebbecchi
Background This research examined various sources of measurement error in the documentation score component of the United States Medical Licensing Examination (USMLE) Step 2 Clinical Skills examination. Method A generalizability theory framework was employed to examine the documentation ratings for 847 examinees who completed the USMLE Step 2 Clinical Skills examination during an eight-day period in 2006. Each patient note was scored by two different raters allowing for a persons-crossed-with-raters-nested-in-cases design. Results The results suggest that inconsistent performance on the part of raters makes a substantially greater contribution to measurement error than case specificity. Double scoring the notes significantly increases precision. Conclusions The results provide guidance for improving operational scoring of the patient notes. Double scoring of the notes may produce an increase in the precision of measurement equivalent to that achieved by lengthening the test by more than 50%. The study also cautions researchers that when examining sources of measurement error, inappropriate data-collection designs may result in inaccurate inferences.
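The precision gain from double scoring described above can be sketched with the standard generalizability coefficient, where averaging over more ratings shrinks the error term. The variance components below are hypothetical, not the study's estimates:

```python
# Sketch: generalizability coefficient as a function of the number of
# ratings per note. Variance components are hypothetical.
def g_coefficient(var_person, var_error, n_ratings):
    """E-rho-squared = var_p / (var_p + var_err / n_ratings)."""
    return var_person / (var_person + var_error / n_ratings)

var_p, var_e = 0.40, 0.60

single = g_coefficient(var_p, var_e, 1)  # each note scored once
double = g_coefficient(var_p, var_e, 2)  # each note scored twice

# Averaging two ratings halves the error variance, raising precision.
print(double > single)  # True
```

Under these assumed components, double scoring raises the coefficient from 0.40 to about 0.57, illustrating how averaging over raters can substitute for lengthening the test.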
Academic Medicine | 2009
Brian E. Clauser; Kevin P. Balog; Polina Harik; Janet Mee; Nilufer Kahraman
Background In clinical skills assessment, scores on closely related skills are often combined to form a composite score. For example, history-taking and physical examination scores are typically combined. Interestingly, there is relatively little research to support this practice. Method Multivariate generalizability theory was employed to examine the relationship between history-taking and physical examination scores from the United States Medical Licensing Examination (USMLE) Step 2 Clinical Skills examination. These two proficiencies are currently combined into a data-gathering score. Results The physical examination score is less generalizable than the score for history taking, and there is only a modest to moderate relationship between these two proficiencies. Conclusions A decision about combining physical examination and history-taking proficiencies into one composite score, as well as the weighting of these components, should be driven by the intended use of the score. The choice of weights in combining physical examination and history taking makes a substantial difference in the precision of the resulting score.
Evaluation & the Health Professions | 2004
Howard Wainer; Janet Mee
A primary question that must be resolved in the development of tasks to assess the quality of physicians’ clinical judgment is, “What is the outcome variable?” One natural choice would seem to be the correctness of the clinical decision. In this article, we use data on the diagnosis of urinary tract infections among young girls to illustrate why, in many clinical situations, this is not a useful variable. We propose instead a judgment weighted by the relative costs of an error. This variable has the disadvantage of requiring expert judgment for scoring, but the advantage of measuring the construct of interest.
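The cost-weighted judgment proposed above can be sketched as a scoring rule in which each decision-by-outcome pair carries an expert-assigned cost, so that errors of different kinds are penalized differently. The cost values below are illustrative assumptions, not those elicited in the paper:

```python
# Sketch of cost-weighted scoring for a diagnostic decision.
# Costs are hypothetical; in practice they come from expert judgment.
COST = {
    ("treat", "no_uti"): 1.0,    # unnecessary treatment: low cost
    ("no_treat", "uti"): 10.0,   # missed infection: high cost
    ("treat", "uti"): 0.0,       # correct decisions carry no cost
    ("no_treat", "no_uti"): 0.0,
}

def weighted_score(decision, true_state):
    """Higher is better: penalize each error by its relative cost."""
    return -COST[(decision, true_state)]

# Under this rule, two wrong answers are not equally wrong:
print(weighted_score("treat", "no_uti") > weighted_score("no_treat", "uti"))  # True
```

This captures the paper's argument: simple right/wrong scoring treats both errors identically, while the cost-weighted variable measures the quality of the clinical judgment itself.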
Journal of Medical Regulation | 2014
Mark R. Raymond; Janet Mee; Steven A. Haist; Aaron Young; Gerard F. Dillon; Peter J. Katsufrakis; Suzanne M. McEllhenney; David A. Johnson
ABSTRACT To investigate the practice characteristics of newly licensed physicians for the purpose of identifying the knowledge and skills expected of those holding the general, unrestricted license to practice medicine, a questionnaire was mailed in May 2012 to 8,001 U.S. physicians who had been granted an unrestricted license to practice medicine between 2007 and 2011. The questionnaire requested information on stage of training, moonlighting, and practice setting; it also listed 58 clinical procedures and asked respondents to indicate whether they had ordered, performed, or interpreted the results of each procedure since obtaining their unrestricted license. A strategy was implemented to identify the relevance of each clinical activity for undifferentiated medical practice. The response rate was 37%. More than two-thirds of newly licensed physicians still practiced within a training environment; nearly one-half of those in training reported moonlighting, mostly in inpatient settings or emergency departm...
Journal of Educational Measurement | 2009
Brian E. Clauser; Janet Mee; Su G. Baldwin; Melissa J. Margolis; Gerard F. Dillon
Educational Measurement: Issues and Practice | 2013
Janet Mee; Brian E. Clauser; Melissa J. Margolis