Stephen G. Clyman
National Board of Medical Examiners
Publications
Featured research published by Stephen G. Clyman.
Computers in Human Behavior | 1999
Ronald H. Stevens; J Ikeda; Adrian M. Casillas; J Palacio-Cayetano; Stephen G. Clyman
We have explored the ability of artificial neural network technologies to generate performance models of complex problem-solving tasks without detailed a priori knowledge of the nature of the task. To test the generalizability of this approach we applied the analysis to two diverse content domains: high school genetics and clinical patient management. In both domains, the artificial neural networks, using only the sequence of actions taken while performing the task, generated multiple classification groups defining different levels of competence. The validity of these neural network performance groupings was further established by the good concordance of these classifications with independently derived expert ratings.
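The paper itself does not publish code; as a rough illustration of the general approach it describes (unsupervised neural-network clustering of performances from their action sequences), the sketch below trains a small self-organizing map on toy data. The action vocabulary, map size, and training schedule are assumptions for the illustration, not the authors' settings.

```python
# Illustrative sketch only: a tiny self-organizing map (SOM) that clusters
# problem-solving performances from their action sequences. The action
# vocabulary, map size, and training parameters are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical action vocabulary for a problem-solving task.
ACTIONS = ["order_test_a", "order_test_b", "consult", "treat", "review_chart"]

def encode(sequence):
    """Encode an action sequence as a bag-of-actions count vector."""
    vec = np.zeros(len(ACTIONS))
    for action in sequence:
        vec[ACTIONS.index(action)] += 1
    return vec

class SOM:
    def __init__(self, rows, cols, dim):
        self.rows, self.cols = rows, cols
        self.weights = rng.normal(size=(rows, cols, dim))

    def winner(self, x):
        """Return the grid node whose weight vector is closest to x."""
        d = np.linalg.norm(self.weights - x, axis=2)
        return np.unravel_index(np.argmin(d), d.shape)

    def train(self, data, epochs=50, lr=0.5, sigma=1.5):
        grid = np.stack(
            np.meshgrid(np.arange(self.rows), np.arange(self.cols), indexing="ij"),
            axis=-1,
        )
        for t in range(epochs):
            frac = t / epochs
            lr_t, sigma_t = lr * (1 - frac), sigma * (1 - frac) + 0.5
            for x in rng.permutation(data):
                w = np.array(self.winner(x))
                # Neighborhood function: nodes near the winner move toward x.
                h = np.exp(-np.sum((grid - w) ** 2, axis=-1) / (2 * sigma_t ** 2))
                self.weights += lr_t * h[..., None] * (x - self.weights)

# Toy performances: each is the sequence of actions taken while solving a case.
performances = [
    ["review_chart", "order_test_a", "treat"],
    ["order_test_b", "order_test_b", "consult", "treat"],
    ["review_chart", "order_test_a", "order_test_a", "treat"],
]
data = np.array([encode(p) for p in performances])

som = SOM(3, 3, data.shape[1])
som.train(data)
for i, x in enumerate(data):
    print(f"performance {i} -> node {som.winner(x)}")
```

Performances mapped to the same node form a candidate competence group, which is the kind of classification the abstract describes being compared against expert ratings.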
Academic Medicine | 2011
Orit Karnieli-Miller; T. Robert Vu; Richard M. Frankel; Matthew C. Holtman; Stephen G. Clyman; Siu L. Hui; Thomas S. Inui
Purpose To examine the relationship between learner experience in the “hidden curriculum” and student attribution of such experiences to professionalism categories. Method Using the output of a thematic analysis of 272 consecutive narratives recorded by 135 students on a medical clerkship from June through November 2007, the authors describe the frequency of these experiences within and across student-designated Association of American Medical Colleges–National Board of Medical Examiners professionalism categories and employ logistic regression to link varieties of experience to specific professionalism categories. Results Thematic analysis uncovered two main domains of student experience: medical–clinical interaction and teaching-and-learning experiences. From a student perspective, the critical incident stories evoked all professionalism categories. The most frequently checked-off categories were caring/compassion/communication (77%) and respect (69%). Logistic regression suggested that student experiences within the teaching-and-learning environment were associated with the professionalism categories of excellence, leadership, and knowledge and skills, whereas those involving medical–clinical interactions were associated with respect, responsibility and accountability, altruism, and honor and integrity. Experiences of communicating and working within teams had the broadest association with learning about professionalism. Conclusions Student narratives touched on all major professionalism categories and illuminated the contexts in which critical experiences emerged. Linked qualitative and quantitative analysis identified the experiences that were associated with learning about particular aspects of professionalism. Experiences of teamwork were especially relevant to student learning about professionalism in action.
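The study's regression models are not reproduced in the abstract; the sketch below only illustrates the kind of analysis named there, a logistic regression relating the experience domain of a narrative to whether a given professionalism category was checked off. The data and the use of scikit-learn are assumptions for the illustration.

```python
# Illustrative only: logistic regression of the kind the abstract describes,
# relating experience domain (teaching-and-learning vs. medical-clinical
# interaction) to whether a narrative was tagged with a given category.
# The data below are fabricated; scikit-learn is not necessarily the authors' tool.
import numpy as np
from sklearn.linear_model import LogisticRegression

# One row per narrative: 1 = teaching-and-learning, 0 = medical-clinical interaction.
domain = np.array([[1], [1], [0], [0], [1], [0], [1], [0], [1], [0]])
# 1 = narrative checked off the "excellence" category (toy labels).
tagged_excellence = np.array([1, 1, 0, 0, 1, 0, 1, 0, 0, 0])

model = LogisticRegression().fit(domain, tagged_excellence)
odds_ratio = np.exp(model.coef_[0][0])
print(f"odds ratio, teaching-and-learning vs. clinical interaction: {odds_ratio:.2f}")
```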
Academic Medicine | 2002
Gerard F. Dillon; Stephen G. Clyman; Brian E. Clauser; Melissa J. Margolis
In the early to mid-1990s, the National Board of Medical Examiners (NBME) examinations were replaced by the United States Medical Licensing Examination (USMLE). The USMLE, which was designed to have three components or Steps, was administered as a paper-and-pencil test until the late 1990s, when it moved to a computer-based testing (CBT) format. The CBT format provided the opportunity to realize the results of simulation research and development that had occurred during the prior two decades. A milestone in this effort occurred in November 1999 when, with the implementation of the computer-delivered USMLE Step 3 examination, the Primum Computer-based Case Simulations (CCSs) were introduced. In the year preceding this introduction and in the more than two years of operational use since, numerous challenges have been addressed. Preliminary results of this initial experience have been promising. This paper introduces the relevant issues, describes some pertinent research findings, and identifies next steps for research.
Advances in Health Sciences Education | 2000
Adrian M. Casillas; Stephen G. Clyman; Yihua V. Fan; Ronald H. Stevens
This study applied an unsupervised neural network modeling process to test data from the National Board of Medical Examiners (NBME) Computer-based Case Simulations (CCS) to identify new performance categories and validate this process as a scoring technique. The classifications resulting from this neural network modeling were consistent with the NBME model in that highly rated NBME performances (ratings of 7 or 8) were clustered together on the neural network output grid. Very low performance ratings appeared to share few common features and were accordingly classified at isolated nodes. This clustering was reproducible across three separately trained networks, with greater than 80% agreement in two of the three networks trained. However, the neural network also contained performance clusters in which disparate NBME-based ratings ranged from 1 (worst) to 8 (best). Here, agreement between networks was less than 60%. Through visualization of the search strategies (search path mapping), this neural network clustering was found to be sensitive to quantitative and qualitative test selections, such as excessive use of irrelevant tests, reflecting broader behavioral classifications in some instances. A disparity between NBME ratings and an independent human rating system was detected by the neural network model, since disagreement among raters was also reflected in a lack of neural network performance clustering. Agreement between rating systems, however, was correlated with neural network clustering for 92% of the highly rated performances.
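The agreement figures above refer to how consistently separately trained networks grouped the same performances. As an illustration of one way such agreement could be quantified, the sketch below computes pairwise co-clustering agreement between two toy cluster assignments; the metric and the data are assumptions, not the study's actual procedure.

```python
# Illustrative only: agreement between the cluster assignments produced by two
# separately trained networks, measured as the fraction of performance pairs on
# which the two clusterings agree about whether the pair shares a cluster.
from itertools import combinations

def coclustering_agreement(labels_a, labels_b):
    """Fraction of pairs on which both clusterings make the same same-cluster decision."""
    pairs = list(combinations(range(len(labels_a)), 2))
    agree = sum(
        (labels_a[i] == labels_a[j]) == (labels_b[i] == labels_b[j])
        for i, j in pairs
    )
    return agree / len(pairs)

# Toy cluster labels for ten performances from two independently trained networks.
net1 = [0, 0, 1, 1, 2, 2, 2, 0, 1, 2]
net2 = [0, 0, 1, 1, 2, 2, 1, 0, 1, 2]
print(f"agreement: {coclustering_agreement(net1, net2):.0%}")
```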
Academic Medicine | 1996
Brian E. Clauser; David B. Swanson; Stephen G. Clyman
No abstract available.
Academic Medicine | 1997
Brian E. Clauser; Linette P. Ross; Ronald J. Nungester; Stephen G. Clyman
No abstract available.
Academic Medicine | 1996
Brian E. Clauser; Stephen G. Clyman; Melissa J. Margolis; Linette P. Ross
No abstract available.
Academic Pediatrics | 2014
Patricia J. Hicks; Alan Schwartz; Stephen G. Clyman; David G. Nichols
From the Department of Clinical Pediatrics, The Children’s Hospital of Philadelphia, Perelman School of Medicine at the University of Pennsylvania, Philadelphia, Pa (Dr Hicks); Department of Medical Education, Department of Pediatrics, University of Illinois, Chicago, Ill (Dr Schwartz); Center for Innovation at the National Board of Medical Examiners (Dr Clyman); and American Board of Pediatrics (Dr Nichols). The views expressed in this report are those of the authors and do not necessarily represent those of the Accreditation Council for Graduate Medical Education, the American Board of Pediatrics, the Association of Pediatric Program Directors, or the Academic Pediatric Association. The authors declare that they have no conflict of interest. Publication of this article was supported by the American Board of Pediatrics Foundation and the Association of Pediatric Program Directors. Address correspondence to Patricia J. Hicks, MD, MHPE, Children’s Hospital of Philadelphia, 34th & Civic Center Blvd, 12NW96, Philadelphia, PA 19104 (e-mail: [email protected]).
Academic Medicine | 2003
Kimberly Swygert; Melissa J. Margolis; Ann King; Tim Siftar; Stephen G. Clyman; Richard E. Hawkins; Brian E. Clauser
Problem Statement and Background. The purpose of the present study was to examine the extent to which an automated scoring procedure that emulates expert ratings with latent semantic analysis could be used to score the written patient note component of the proposed clinical skills examination (CSE). Method. Human ratings for four CSE cases collected in 2002 were compared to automated holistic scores and to regression-based scores built from the automated holistic and component scores. Results and Conclusions. The regression-based scores account for approximately half of the variance in the human ratings and are more highly correlated with the ratings than the scores produced by the automated algorithm alone. Implications of this study and suggestions for follow-up research are discussed.
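The operational scoring system is not specified in enough detail to reproduce; the sketch below only illustrates a generic latent semantic analysis pipeline of the kind the abstract names: TF-IDF, truncated SVD, similarity to a highly rated note as an automated holistic score, and a regression that combines automated scores to emulate expert ratings. The corpus, dimensionality, and library choices are assumptions.

```python
# Illustrative only: a generic latent-semantic-analysis scoring pipeline.
# The notes, ratings, feature choices, and parameters are fabricated for
# the sketch and do not reproduce the NBME procedure.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.linear_model import LinearRegression
from sklearn.metrics.pairwise import cosine_similarity

# Toy patient notes with expert holistic ratings (fabricated).
notes = [
    "chest pain radiating to left arm, diaphoresis, order ECG and troponin",
    "chest discomfort, order ECG",
    "headache, order ECG",
    "crushing chest pain, diaphoretic, ECG, troponin, aspirin given",
]
ratings = np.array([7.0, 5.0, 2.0, 8.0])

# LSA: TF-IDF followed by truncated SVD into a low-dimensional semantic space.
tfidf = TfidfVectorizer()
X = tfidf.fit_transform(notes)
lsa = TruncatedSVD(n_components=2, random_state=0)
Z = lsa.fit_transform(X)

# Automated holistic score: similarity to the top-rated note in LSA space.
best = Z[np.argmax(ratings)].reshape(1, -1)
holistic = cosine_similarity(Z, best).ravel()

# Regression-based score: combine automated features to emulate expert ratings.
note_length = np.asarray(X.sum(axis=1)).ravel()  # crude length/coverage proxy
features = np.column_stack([holistic, note_length])
reg = LinearRegression().fit(features, ratings)
print("predicted ratings:", np.round(reg.predict(features), 1))
```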
Academic Medicine | 2016
Patricia J. Hicks; Melissa J. Margolis; Sue E. Poynter; Christa N. Chaffinch; Rebecca Tenney-Soeiro; Teri L. Turner; Linda A. Waggoner-Fountain; Robin Lockridge; Stephen G. Clyman; Alan Schwartz
Purpose To report on the development of content and user feedback regarding the assessment process and utility of the workplace-based assessment instruments of the Pediatrics Milestones Assessment Pilot (PMAP). Method One multisource feedback instrument and two structured clinical observation instruments were developed and refined by experts in pediatrics and assessment to provide evidence for nine competencies based on the Pediatrics Milestones (PMs) and chosen to inform residency program faculty decisions about learners’ readiness to serve as pediatric interns in the inpatient setting. During the 2012–2013 PMAP study, 18 U.S. pediatric residency programs enrolled interns and subinterns. Faculty, residents, nurses, and other observers used the instruments to assess learner performance through direct observation during a one-month rotation. At the end of the rotation, data were aggregated for each learner, milestone levels were assigned using a milestone classification form, and feedback was provided to learners. Learners and site leads were surveyed and/or interviewed about their experience as participants. Results Across the sites, 2,338 instruments assessing 239 learners were completed by 630 unique observers. Regarding end-of-rotation feedback, 93% of learners (128/137) agreed the assessments and feedback “helped me understand how those with whom I work perceive my performance,” and 85% (117/137) agreed they were “useful for constructing future goals or identifying a developmental path.” Site leads identified several benefits and challenges to the assessment process. Conclusions PM-based instruments used in workplace-based assessment provide a meaningful and acceptable approach to collecting evidence of learner competency development. Learners valued feedback provided by PM-based assessment.
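As a purely hypothetical illustration of the aggregation step described above (observer ratings aggregated per learner and mapped to a milestone level), the sketch below averages toy multisource scores by competency and applies invented cut points; it does not reproduce the PMAP instruments or milestone classification form.

```python
# Illustrative only: toy aggregation of multisource workplace-based assessment
# items into a per-competency milestone level. Competencies, scale, and cut
# points are invented for the sketch.
from collections import defaultdict
from statistics import mean

# Each observation: (observer role, competency, item score on a 1-5 scale).
observations = [
    ("faculty", "interpersonal communication", 4),
    ("nurse", "interpersonal communication", 5),
    ("resident", "interpersonal communication", 4),
    ("faculty", "clinical reasoning", 3),
    ("nurse", "clinical reasoning", 4),
]

by_competency = defaultdict(list)
for _, competency, score in observations:
    by_competency[competency].append(score)

def to_milestone_level(avg):
    """Map an average item score to a coarse milestone level (hypothetical cut points)."""
    return min(5, 1 + int(avg))  # levels 1-5

for competency, scores in by_competency.items():
    avg = mean(scores)
    print(f"{competency}: mean {avg:.1f} -> milestone level {to_milestone_level(avg)}")
```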