
Publication


Featured research published by Stephen J. Lurie.


Academic Medicine | 2009

Measurement of the general competencies of the accreditation council for graduate medical education: a systematic review.

Stephen J. Lurie; Christopher J. Mooney; Jeffrey M. Lyness

Purpose: To evaluate published evidence that the Accreditation Council for Graduate Medical Education's six general competencies can each be measured in a valid and reliable way. Method: In March 2008, the authors conducted searches of Medline and ERIC using combinations of the search terms "ACGME," "Accreditation Council for Graduate Medical Education," "core competencies," "general competencies," and the specific competencies "systems-based practice" (SBP) and "practice-based learning and improvement" (PBLI). Included were all publications since 1999 presenting new qualitative or quantitative data about specific assessment modalities related to the general competencies; opinion pieces, review articles, and reports of consensus conferences were excluded. The search yielded 127 articles, of which 56 met inclusion criteria. Articles were subdivided into four categories: (1) quantitative/psychometric evaluations, (2) preliminary studies, (3) studies of SBP and PBLI, and (4) surveys. Results: Quantitative/psychometric studies of evaluation tools failed to develop measures reflecting the six competencies in a reliable or valid way. Few preliminary studies led to published quantitative data regarding reliability or validity. Only two published surveys met quality criteria. Studies of SBP and PBLI generally operationalized these competencies as properties of systems, not of individual trainees. Conclusions: The peer-reviewed literature provides no evidence that current measurement tools can assess the competencies independently of one another. Because further efforts are unlikely to be successful, the authors recommend using the competencies to guide and coordinate specific evaluation efforts rather than attempting to develop instruments to measure the competencies directly.


Medical Education | 2012

History and practice of competency‐based assessment

Stephen J. Lurie

Medical Education 2012: 46: 49–57


Academic Medicine | 2009

Social network analysis as a method of assessing institutional culture: three case studies.

Stephen J. Lurie; Thomas T. Fogg

Purpose: To describe the basic concepts of social network analysis (SNA), which assesses the unique structure of interrelationships among individuals and programs, and to introduce some applications of this technique in assessing aspects of institutional culture at a medical center. Method: The authors applied SNA to three settings at their institution: team function in the intensive care unit (ICU), the interdisciplinary composition of advisory committees for 53 federal career development awardees, and relationships between key function directors at an institution-wide Clinical Translational Sciences Institute (CTSI). (Key functions are the major administrative units of the CTSI.) Results: In the ICU setting, SNA provides interpretable summaries of aspects of clinical team functioning. When applied to membership on mentorship committees, it allows for summary descriptions of the degree of interdisciplinarity of various clinical departments. Finally, when applied to relationships among leaders of an institution-wide research enterprise, it highlights potential problem areas in relationships among academic departments. In all cases, data collection is relatively rapid and simple, thereby allowing for the possibility of frequent repeated analyses over time. Conclusions: SNA provides a useful and standardized set of tools for measuring important aspects of team function, interdisciplinarity, and organizational culture that may otherwise be difficult to measure in an objective way.
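
A minimal, hypothetical sketch of the kind of SNA computation the abstract describes, written in Python with the networkx library; the paper itself reports no code, and the team members, ties, and metric choices below are invented for illustration.

# Hypothetical SNA sketch: nodes are ICU team members, edges are
# reported working relationships. Names and ties are invented.
import networkx as nx

G = nx.Graph()
G.add_edges_from([
    ("attending", "senior_resident"),
    ("attending", "nurse_manager"),
    ("senior_resident", "intern"),
    ("senior_resident", "nurse_manager"),
    ("intern", "pharmacist"),
])

# Degree centrality: the share of the team each person is tied to.
# Hubs and poorly connected members are the kinds of structural
# features the abstract says SNA can summarize.
for person, score in sorted(nx.degree_centrality(G).items(),
                            key=lambda kv: -kv[1]):
    print(f"{person}: {score:.2f}")

# Density summarizes the overall interconnectedness of the team.
print("density:", round(nx.density(G), 2))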


Medical Education | 2006

Temporal and group-related trends in peer assessment amongst medical students

Stephen J. Lurie; Anne C. Nofziger; Sean Meldrum; Christopher J. Mooney; Ronald M. Epstein

Context: Peer assessment has been increasingly recommended as a way to evaluate the professional competencies of medical trainees. Prior studies have assessed only single groups measured at a single time point. Thus, neither the longitudinal stability of such ratings nor differences between groups using the same peer-assessment instrument have been reported previously.


Academic Medicine | 2011

Commentary: pitfalls in assessment of competency-based educational objectives.

Stephen J. Lurie; Christopher J. Mooney; Jeffrey M. Lyness

Requirements for accreditation of medical professionals are increasingly cast in the language of general competencies. Because the language of these competencies is generally shaped by negotiations among stakeholders, however, it has proven difficult to attain consensus on precise definitions. This lack of clarity is amplified when attempting to measure these essentially political constructs in individual learners. The authors of this commentary frame these difficulties within modern views of test validity. The most significant obstacle to valid measurement is not necessarily a lack of useful tools but, rather, a general unwillingness to question whether the competencies themselves represent valid measurement constructs. Although competencies may prove useful in defining an overall social mission for organizations, such competencies should not be mistaken for measurable and distinct attributes that people can demonstrate in the context of their actual work.


Journal of General Internal Medicine | 2007

Relationship Between Peer Assessment During Medical School, Dean’s Letter Rankings, and Ratings by Internship Directors

Stephen J. Lurie; David R. Lambert; Anne C. Nofziger; Ronald M. Epstein; Tana A. Grady-Weliky

Background: It is not known to what extent the dean's letter (medical student performance evaluation [MSPE]) reflects peer-assessed work habits (WH) and/or interpersonal attributes (IA) of students. Objective: To compare peer ratings of WH and IA of second- and third-year medical students with later MSPE rankings and ratings by internship program directors. Design and Participants: Participants were 281 medical students from the classes of 2004, 2005, and 2006 at a private medical school in the northeastern United States who had participated in peer assessment exercises in the second and third years of medical school. For students from the class of 2004, we also compared peer assessment data against later evaluations obtained from internship program directors. Results: Peer-assessed WH were predictive of later MSPE groups in both the second (F = 44.90, P < .001) and third (F = 29.54, P < .001) years of medical school. Interpersonal attributes were not related to MSPE rankings in either year. MSPE rankings for a majority of students were predictable from peer-assessed WH scores. Internship directors' ratings were significantly related to second- and third-year peer-assessed WH scores (r = .32 [P = .15] and r = .43 [P = .004], respectively), but not to peer-assessed IA. Conclusions: Peer assessment of WH, as early as the second year of medical school, can predict later MSPE rankings and internship performance. Although peer-assessed IA can be measured reliably, they are unrelated to either outcome.


Medical Education | 2006

Effects of rater selection on peer assessment among medical students

Stephen J. Lurie; Anne C. Nofziger; Sean Meldrum; Christopher J. Mooney; Ronald M. Epstein

Context: Although peer assessment appears promising as a method of assessing interpersonal skills among medical students, results may be biased by the method of peer selection, particularly if different kinds of classmates are systematically assigned by different methods. It is also unclear whether students with lower interpersonal skills are more negative towards their classmates than students with higher interpersonal skills and, if so, how much bias this may introduce into the results of peer assessment. Finally, it is unclear whether low-rated students are more likely to ask to rate one another.


Teaching and Learning in Medicine | 2007

Relationship Between Dean's Letter Rankings and Later Evaluations by Residency Program Directors

Stephen J. Lurie; David R. Lambert; Tana A. Grady-Weliky

Background: It is not known how well dean's letter rankings predict later performance in residency. Purpose: To assess the accuracy of dean's letter rankings in predicting clinical performance in internship. Method: Participants were medical students who graduated from the University of Rochester School of Medicine and Dentistry in the classes of 2003 and 2004. In their dean's letter, each student was ranked as either "Outstanding" (upper quartile), "Excellent" (second quartile), "Very good" (lower two quartiles), or "Good" (lowest few percentiles). We compared these dean's letter rankings against the results of questionnaires sent to program directors 9 months after graduation. Results: The response rate to the questionnaire was 58.9% (109 of 185 eligible graduates). There were no differences in response rate across the four dean's letter ranking categories. Program directors rated students in the top two categories of dean's letter rankings significantly higher than those in the "Very good" group, and students in all three of these groups were rated significantly higher than those in the "Good" group, F(3, 105) = 13.37, p < .001. Students in the "Very good" group were the most variable in their ratings by program directors, with many receiving ratings as high as those of students in the upper two groups. There were no differences by gender or specialty. Conclusion: Dean's letter rankings are a significant predictor of later performance in internship among graduates of our medical school. Students in the bottom half of the class are the most likely either to underperform or to overperform in internship.
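
As an illustration of the one-way ANOVA reported above (F(3, 105) = 13.37), a short Python sketch with scipy; the ratings below are invented for illustration, not the study's data.

# Hypothetical one-way ANOVA comparing program directors' ratings
# across the four dean's letter categories. All values are invented.
from scipy.stats import f_oneway

outstanding = [8.8, 9.1, 8.9, 9.3, 8.7]
excellent   = [8.5, 8.9, 8.6, 8.8, 8.4]
very_good   = [6.8, 9.0, 6.4, 8.7, 7.2]  # most variable group, per the abstract
good        = [5.6, 6.1, 5.9, 6.3, 5.7]

f_stat, p_value = f_oneway(outstanding, excellent, very_good, good)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")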


Academic Medicine | 2010

Standardizing and Personalizing Science in Medical Education

David R. Lambert; Stephen J. Lurie; Jeffrey M. Lyness; Denham S. Ward

In the century since the initial publication of the Flexner Report, medical education has emphasized a broad knowledge of science and a fundamental understanding of the scientific method, which medical educators believe are essential to the practice of medicine. The enormous growth of scientific knowledge that underlies clinical practice has challenged medical schools to accommodate this new information within their curricula. Although innovative educational modalities and new curricula have partly addressed this growth, the authors argue for a systematic restructuring of the content and structure of science education from the premedical setting through clinical practice. The overarching goal of science education is to provide students with a broad, solid foundation applicable to medicine, a deep understanding of the scientific method, and the attitudes and skills needed to apply new knowledge to patient care throughout their careers. The authors believe that to accomplish this successfully, the following changes must occur across the three major stages of medical education: (1) a reshaping of the scientific preparation that all students complete before medical school, (2) an increase in individualized science education during medical school, and (3) an emphasis on knowledge acquisition skills throughout graduate medical education and beyond to ensure lifelong scientific learning. As students progress through the educational continuum, the balance of standardized and personalized scientific knowledge will shift toward personalization. Greater personalization demands that physicians possess well-refined skills in information acquisition, interpretation, and application for optimal lifelong learning and effective clinical practice.


Journal of General Internal Medicine | 2009

Association Between Hand-off Patients and Subject Exam Performance in Medicine Clerkship Students

Valerie J. Lang; Christopher J. Mooney; Alec B. O’Connor; Donald R. Bordley; Stephen J. Lurie

Background: Teaching hospitals increasingly rely on transfers of patient care to another physician (hand-offs) to comply with duty hour restrictions. Little is known about the impact of hand-offs on medical students. Objective: To evaluate the impact of hand-offs on the types of patients students see and the association with their subsequent Medicine Subject Exam performance. Design: Observational study over 1 year. Participants: Third-year medical students in an inpatient medicine clerkship at five hospitals with night float systems. Measurements: The primary outcome was the Medicine Subject Exam score at the end of the clerkship; explanatory variables were the number of fresh (without prior evaluation) and hand-off patients, diagnoses, subspecialty patients, and full evaluations performed during the clerkship, and United States Medical Licensing Examination (USMLE) Step 1 scores. Main Results: Of the 2,288 patients followed by 89 students, 990 (43.3%) were hand-offs. In a linear regression model, the only variables significantly associated with students' Subject Exam percentile rankings were USMLE Step 1 scores (B = 0.26, P < 0.001) and the number of full evaluations completed on fresh patients (B = 0.20, P = 0.048; model r2 = 0.58). In other words, for each additional fresh patient evaluated, Subject Exam percentile rankings increased 0.2 points. For students in the highest quartile of Subject Exam percentile rankings, only Step 1 scores showed a significant association (B = 0.22, P = 0.002; r2 = 0.5). For students in the lowest quartile, only fresh patient evaluations demonstrated a significant association (B = 0.27, P = 0.03; r2 = 0.34). Conclusions: Hand-offs constitute a substantial portion of students' patients and may have less educational value than "fresh" patients, especially for lower performing students.
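
To make the regression model concrete, a hedged Python sketch using pandas and statsmodels; the column names and simulated data are assumptions for illustration, not the study's dataset.

# Hypothetical sketch of the reported linear regression: Subject Exam
# percentile ranking on USMLE Step 1 score and the number of full
# evaluations of fresh patients. All data below are simulated.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 89  # the study followed 89 students
df = pd.DataFrame({
    "step1": rng.normal(220, 15, n),
    "fresh_evals": rng.poisson(10, n),
})
# Simulate an outcome loosely consistent with the reported coefficients
# (B = 0.26 for Step 1 scores, B = 0.20 for fresh evaluations).
df["subject_exam_pct"] = (
    0.26 * df["step1"] + 0.20 * df["fresh_evals"]
    + rng.normal(0, 5, n)
)

model = smf.ols("subject_exam_pct ~ step1 + fresh_evals", data=df).fit()
print(model.params)    # unstandardized B coefficients
print(model.rsquared)  # analogous to the reported model r2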

Collaboration


Dive into Stephen J. Lurie's collaborations.

Top Co-Authors

Sean Meldrum

University of Rochester
