Publication


Featured research published by Louis J. Grosso.


Assessment & Evaluation in Higher Education | 1987

Assessment of clinical competence: written and computer-based simulations

David B. Swanson; John J. Norcini; Louis J. Grosso

Written and computer-based clinical simulations have been used in the health professions to assess aspects of clinical competence for many years. However, this review of the dozens of studies of their psychometric characteristics finds little evidence to justify their continued use. While studies of the fidelity of simulations have demonstrated that examinees feel they are realistic and have good face validity, reliability studies have repeatedly shown that scores are too imprecise for meaningful interpretation, unless impractically large numbers of simulations are included in a test. Validity studies have demonstrated that simulations have the expected relationships with a host of criterion measures, but it appears that similar assessment information can be obtained using clinically-oriented multiple choice questions in much less testing time. Some common methodological weaknesses in study design and analysis are identified, and some research directions are suggested to improve the psychometric characteristics of these simulations.
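
The reliability problem described here follows from the Spearman-Brown prophecy formula, which projects score reliability as a test is lengthened. A minimal sketch in Python, with invented numbers (the review does not report these exact figures):

    def spearman_brown(rel_one, k):
        """Projected reliability when test length is multiplied by k."""
        return k * rel_one / (1 + (k - 1) * rel_one)

    def length_needed(rel_one, target):
        """Lengthening factor required to reach a target reliability."""
        return target * (1 - rel_one) / (rel_one * (1 - target))

    rel_per_case = 0.10  # hypothetical reliability of one simulation score
    print(spearman_brown(rel_per_case, 10))   # ~0.53 with 10 cases
    print(length_needed(rel_per_case, 0.80))  # 36.0: cases needed to reach 0.80

With case-to-case consistency this low, dozens of lengthy simulations would be needed to reach conventional reliability targets, which is what makes the format impractical.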


Medical Education | 1985

Reliability, validity and efficiency of multiple choice question and patient management problem item formats in assessment of clinical competence

John J. Norcini; David B. Swanson; Louis J. Grosso; George D. Webster

Summary. Despite a lack of face validity, there continues to be heavy reliance on objective paper-and-pencil measures of clinical competence. Among these measures, the most common item formats are patient management problems (PMPs) and three types of multiple choice questions (MCQs): one-best-answer (A-types); matching questions (M-types); and multiple true/false questions (X-types). The purpose of this study is to compare the reliability, validity and efficiency of these item formats with particular focus on whether MCQs and PMPs measure different aspects of clinical competence. Analyses revealed reliabilities of 0.72 or better for all item formats; the MCQ formats were most reliable. Similarly, efficiency analyses (reliability per unit of testing time) demonstrated the superiority of MCQs. Evidence for validity obtained through correlations of both programme directors’ ratings and criterion group membership with item format scores also favoured MCQs. More important, however, is whether MCQs and PMPs measure the same or different aspects of clinical competence. Regression analyses of the scores on the validity measures (programme directors’ ratings and criterion group membership) indicated that MCQs and PMPs seem to be measuring predominantly the same thing. MCQs contribute a small unique variance component over and above PMPs, while PMPs contribute even less beyond MCQs. As a whole, these results indicate that MCQs are more efficient, reliable and valid than PMPs.
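
The "reliability per unit of testing time" comparison can be made concrete by projecting each format's reliability onto a common amount of testing time with the Spearman-Brown formula. A sketch with hypothetical inputs (the study reports reliabilities of 0.72 or better, but its exact testing times are not given in this summary):

    def project(rel, time_actual, time_common):
        """Spearman-Brown projection of reliability to a common testing time."""
        k = time_common / time_actual
        return k * rel / (1 + (k - 1) * rel)

    # hypothetical: MCQ section, 2 h at r = 0.85; PMP section, 4 h at r = 0.75
    for name, rel, hours in [("MCQ", 0.85, 2.0), ("PMP", 0.75, 4.0)]:
        print(name, round(project(rel, hours, 1.0), 2))
    # MCQ 0.74 vs PMP 0.43 per hour of testing: MCQs deliver more
    # reliability per unit of testing time despite the shorter section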


Journal of General Internal Medicine | 1987

The relationship between features of residency training and ABIM certifying examination performance

John J. Norcini; Louis J. Grosso; Judy A. Shea; George D. Webster

The purpose of this study was to provide data concerning the relationship between features of residency training and performance on a test of cognitive achievement administered at the end of residency. To accomplish this, data collected in the late 1970s by three national organizations were joined and analyzed with the aid of experts in internal medicine. Although graduate medical education has evolved since this information was gathered, it does provide a baseline for assessing the impact of changes on the cognitive skills of residents. The findings suggest that better program performance on the examination is associated with attracting more knowledgeable residents to begin with and that programs are able to maintain the advantage of their residents throughout training. Moreover, program characteristics have an impact on the cognitive skills of residents over and above what would be predicted by test scores at the end of medical school. Programs with better examination performance tend to provide residents with an extensive, well-supervised educational experience stressing ambulatory care.


Journal of General Internal Medicine | 1990

Residents’ perception of evaluation procedures used by their training program

Susan C. Day; Louis J. Grosso; John J. Norcini; Linda L. Blank; David B. Swanson; Muriel H. Horne

Objective: To determine the methods of evaluation used routinely by training programs and to obtain information concerning the frequencies with which various evaluation methods were used. Design: Survey of residents who had recently completed internal medicine training. Participants: 5,693 respondents who completed residencies in 1987 and 1988 and were registered as first-time takers for the 1988 Certifying Examination in Internal Medicine. This constituted a 76% response rate. Main results: Virtually all residents were aware that routine evaluations were submitted on inpatient rotations, but were more uncertain about the evaluation process in the outpatient setting and the methods used to assess their humanistic qualities. Most residents had undergone a Clinical Evaluation Exercise (CEX); residents’ clinical skills were less likely to be evaluated by direct observation of history or physical examination skills. Resident responses were aggregated within training programs to determine the pattern of evaluation across programs. The majority of programs used Advanced Cardiac Life Support (ACLS) certification, medical record audit, and the national In-Training Examination to assess most of their residents. Performance-based tests were used selectively by a third or more of the programs. Breast and pelvic examination skills and ability to perform sigmoidoscopy were thought not to be adequately assessed by the majority of residents in almost half of the programs. Conclusions: While most residents are receiving routine evaluation, including a CEX, increased efforts to educate residents about their evaluation system, to strengthen evaluation in the outpatient setting, and to evaluate certain procedural skills are recommended.


Evaluation & the Health Professions | 1984

A Comparison of Knowledge, Synthesis, and Clinical Judgment Multiple-Choice Questions in the Assessment of Physician Competence

John J. Norcini; David B. Swanson; Louis J. Grosso; Judy A. Shea; George D. Webster

This study compares the reliability, validity, and efficiency of three multiple-choice question (MCQ) ability scales with patient management problems (PMPs). Data are from the 1980, 1981, and 1982 American Board of Internal Medicine Certifying Examinations. The MCQ ability scales were constructed by classifying the one best answer and multiple-true/false questions in each examination as measuring predominantly clinical judgment, synthesis, or knowledge. Clinical judgment items require prioritizing or weighing management decisions; synthesis items require the integration of findings into a diagnostic decision; and knowledge items stress recall of factual information. Analyses indicate that the MCQ ability scales are more reliable and valid per unit of testing time than are PMPs and that the clinical judgment and synthesis scales are slightly more correlated with PMPs than is the knowledge scale. Additionally, all MCQ ability scales seem to be measuring the same aspects of competence as PMPs.
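
The unique-variance reasoning in this and the preceding abstracts amounts to hierarchical regression: fit the criterion on one format's scores, add the other format, and read off the increment in R-squared. A self-contained sketch on synthetic data (not the Board's data; the effect sizes are invented to mirror the reported pattern):

    import numpy as np

    rng = np.random.default_rng(0)
    n = 500
    mcq = rng.normal(size=n)
    pmp = 0.7 * mcq + 0.3 * rng.normal(size=n)   # formats correlate strongly
    rating = 0.6 * mcq + 0.1 * pmp + rng.normal(scale=0.5, size=n)

    def r_squared(y, *predictors):
        # ordinary least squares with an intercept column
        X = np.column_stack([np.ones_like(y), *predictors])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        return 1 - (y - X @ beta).var() / y.var()

    full = r_squared(rating, mcq, pmp)
    print("MCQ unique:", full - r_squared(rating, pmp))  # larger increment
    print("PMP unique:", full - r_squared(rating, mcq))  # near zero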


Teaching and Learning in Medicine | 2013

Assessment in the context of licensure and certification

John J. Norcini; Rebecca S. Lipner; Louis J. Grosso

Over the past 25 years, three major forces have had a significant influence on licensure and certification: the shift in focus from educational process to educational outcomes, the increasing recognition of the need for learning and assessment throughout a physician’s career, and the changes in technology and psychometrics that have opened new vistas for assessment. These forces have led to significant changes in assessment for licensure and certification. To respond to these forces, licensure and certification programs have improved the ways in which their examinations are constructed, scored, and delivered. In particular, we note the introduction of adaptive testing; automated item creation, scoring, and test assembly; assessment engineering; and data forensics. Licensure and certification programs have also expanded their repertoire of assessments with the rapid development and adoption of simulation and workplace-based assessment. Finally, they have invested in research intended to validate their programs in four ways: (a) the acceptability of the program to stakeholders, (b) the extent to which stakeholders are encouraged to learn and improve, (c) the extent to which there is a relationship between performance in the programs and external measures, and (d) the extent to which there is a relationship between performance as measured by the assessment and performance in practice. Over the past 25 years, changes in licensure and certification have been driven by the educational outcomes movement, the need for lifelong learning, and advances in technology and psychometrics. Over the next 25 years, we expect these forces to continue to exert pressure for change, which will lead to additional improvement and expansion in examination processes, methods of assessment, and validation research.
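
Of the developments listed, adaptive testing is the most algorithmic: after each response the examinee's ability estimate is updated and the next item is chosen to be maximally informative at that estimate. A toy select-respond-update loop under the Rasch model, with all parameters invented:

    import math, random

    items = [{"b": b} for b in [-2, -1, -0.5, 0, 0.5, 1, 2]]  # difficulties

    def p_correct(theta, b):
        # Rasch model: probability of a correct response
        return 1 / (1 + math.exp(-(theta - b)))

    true_theta, theta = 0.8, 0.0
    for _ in range(5):
        # for the Rasch model, item information peaks where b is closest to theta
        item = min(items, key=lambda it: abs(theta - it["b"]))
        items.remove(item)
        correct = random.random() < p_correct(true_theta, item["b"])
        theta += 0.5 if correct else -0.5  # crude fixed-step update, not MLE
        print(f"b={item['b']:+.1f} correct={correct} theta={theta:+.1f}")

Operational systems use maximum-likelihood or Bayesian ability updates and item-exposure controls; the fixed step here only illustrates the cycle.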


Journal of General Internal Medicine | 1993

The relevance to clinical practice of the certifying examination in internal medicine

John J. Norcini; Susan C. Day; Louis J. Grosso; Lynn O. Langdon; Harry R. Kimball; Richard L. Popp; Stephen E. Goldfinger

Objective: To determine the relevance of the initial certifying examination to the practice of internal medicine and the suitability of items used in initial certification for recertification. Design: Using a matrix-sampling approach, items from the 1991 Certifying Examination were assigned to two sets of judges: directors of the American Board of Internal Medicine (ABIM) and practicing general internists. Each judge rated the relevance of items on a five-point scale. Participants: 54 current or former directors of the ABIM and 72 practicing general internists; practitioners were nominated by directors and their ratings were included if they spent > 80% of their time in direct patient care. Results: The directors’ mean rating of all 576 items was 3.98 (SD=0.62); the practitioners’ mean rating was 4.11 (SD=0.82). The directors assigned ratings of less than 3 to 27 items and the practitioners assigned ratings of less than 3 to 42 items; seven of these items received low ratings from both groups. There were differences in the two groups’ ratings of the relevance of various medical content categories, but the mean rating of core items was higher than that of noncore items and the mean rating of items testing clinical judgment was higher than that of items testing knowledge or synthesis. Conclusions: These findings suggest that the initial certifying examination is relevant to clinical practice and that many of the examination items are suitable for use in recertification. Differences in perception appear to exist between practitioners and directors, and the use of practitioner ratings is likely to be a routine part of judging the suitability of items for Board examinations in the future.
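
The rating analysis itself is straightforward aggregation: average each item's five-point ratings within a judge group and flag items whose mean falls below 3. A sketch with fabricated ratings (the study's item-level data are not public):

    import statistics

    ratings = {  # item id -> judges' five-point relevance ratings (fabricated)
        "item_001": [5, 4, 4, 5],
        "item_002": [2, 3, 2, 2],
        "item_003": [4, 4, 5, 3],
    }
    for item, r in sorted(ratings.items()):
        mean = statistics.mean(r)
        flag = "  <- low relevance" if mean < 3 else ""
        print(f"{item}: mean={mean:.2f} sd={statistics.stdev(r):.2f}{flag}")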


Journal of General Internal Medicine | 1994

Certification in internal medicine: 1989-1992

John J. Norcini; Harry R. Kimball; Louis J. Grosso; Susan C. Day; Rebecca A. Baranowski; Muriel W. Horne

Objective: To determine whether changes in the demographic/educational mix of those entering internal medicine from 1986 to 1989 were associated with differences among them at the time of certification. Participants: Included in the study were all candidates for the 1989 to 1992 American Board of Internal Medicine certifying examinations in internal medicine. Measurements: Demographic information and medical school, residency training, and examination experience were available for each candidate. Data defining quality, size, and number of subspecialties were available for internal medicine training programs. Results: From 1990 to 1992, the total number of men and women candidates increased, as did the numbers of foreign-citizen non-U.S. medical school graduates and osteopathic medical school graduates; the number of U.S. medical school graduates remained nearly constant and the number of U.S.-citizen graduates of non-U.S. medical schools declined. The pass rates for all groups of first-time examination takers decreased, while the ratings of program directors remained relatively constant. Program quality, size, and number of subspecialty programs had modest positive relationships with examination performance. Conclusions: Changes in the characteristics of those entering internal medicine from 1986 to 1989 were associated with declines in performance at the time of certification. These declines occurred in all content areas of the test and were apparent regardless of program quality. These data identify some of the challenges internal medicine faces in the years ahead.


Evaluation & the Health Professions | 1988

A Criterion-Referenced Examination of Physician Competence

John J. Norcini; E. William Hancock; George D. Webster; Louis J. Grosso; Judy A. Shea

Despite the growing popularity of performance tests, scores on such measures have rarely been interpreted from a criterion-referenced perspective. This paper describes a test of skill in reading electrocardiograms (ECGs). Using generalizability theory, the errors of measurement and standard setting were estimated both alone and together from a criterion-referenced perspective. Performance on this test was also compared with a measure of the quality of training plus the multiple-choice questions and patient-management problems used in a medical certifying examination. Generalizability analyses produced positive results for the standard-setting procedure and the ECGs, both separately and together. The preliminary validity evidence for scores was encouraging. The criterion-referenced ECGs ranked groups of examinees as expected based on prior education and examination experience. The criterion-referenced ECGs also had modest correlations with traditional measures of physician competence.
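
In a generalizability analysis like the one described, variance components for persons, items, and residual are estimated from the mean squares of a crossed persons-by-items design, and the absolute error of measurement follows from the non-person components. A toy G-study on synthetic data (the ECG study's actual design and numbers are not reproduced here):

    import numpy as np

    rng = np.random.default_rng(1)
    n_p, n_i = 50, 12
    person = rng.normal(0, 1.0, (n_p, 1))               # true proficiency
    item = rng.normal(0, 0.5, (1, n_i))                 # item difficulty effects
    X = person + item + rng.normal(0, 1.0, (n_p, n_i))  # observed scores

    ms_p = n_i * X.mean(axis=1).var(ddof=1)             # person mean square
    ms_i = n_p * X.mean(axis=0).var(ddof=1)             # item mean square
    resid = X - X.mean(1, keepdims=True) - X.mean(0, keepdims=True) + X.mean()
    ms_res = (resid ** 2).sum() / ((n_p - 1) * (n_i - 1))

    var_res = ms_res                       # person-by-item residual
    var_p = (ms_p - ms_res) / n_i          # universe-score (person) variance
    var_i = (ms_i - ms_res) / n_p          # item variance
    sem_abs = np.sqrt((var_i + var_res) / n_i)  # absolute SEM, 12-item score
    print(var_p, var_i, var_res, sem_abs)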


Journal of General Internal Medicine | 1993

A core component of the certification examination in internal medicine

Lynn O. Langdon; Louis J. Grosso; Susan C. Day; John J. Norcini; Harry R. Kimball; Suzanne W. Fletcher

Objective: To develop and test the psychometric characteristics of an examination of core content in internal medicine. Design: A cross-sectional pilot test comparing the core examination with the 1988 certifying examination and two pretest examinations. Setting: The 1988 certifying examination of the American Board of Internal Medicine. Participants: A random sample of 2,975 candidates from the 8,968 candidates who took the 1988 certifying examination were given the core examination; similarly drawn samples were each given one of two pretests of traditional questions. Interventions: A framework for developing an examination of core internal medicine questions was designed and used to develop a 92-question core test with an absolute pass/fail standard. Results: Candidates correctly answered 74% of core internal medicine questions, compared with 64%, 52%, and 53% of traditional questions on the 1988 certifying examination and the two pretests. The discriminating ability of the core internal medicine examination was lower than that of the certifying examination (r-values were 0.28 and 0.34, respectively). The pass rate was 83% for the core internal medicine examination and 57% for the certifying examination; 27% passed the core examination and failed the certifying examination; 1% passed the certifying examination and failed the core examination. Conclusion: Core internal medicine questions were easier than but almost as discriminating as traditional questions of the certifying examination. A small percentage of candidates passed the certifying examination but failed the core examination.
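
The reported pass rates determine the full two-by-two pass/fail table; a few lines of arithmetic recover the unstated cells (pass both, fail both) from the figures in the abstract:

    # figures from the abstract
    pass_core, pass_cert = 0.83, 0.57
    core_only = 0.27                            # passed core, failed certifying

    both = pass_core - core_only                # 0.56 passed both
    cert_only = pass_cert - both                # 0.01, matching the reported 1%
    neither = 1 - both - core_only - cert_only  # 0.16 failed both
    print(f"both={both:.2f} core_only={core_only:.2f} "
          f"cert_only={cert_only:.2f} neither={neither:.2f}")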

Collaboration


Dive into Louis J. Grosso's collaborations.

Top Co-Authors

David B. Swanson (American Board of Internal Medicine)
Harry R. Kimball (American Board of Internal Medicine)
Judy A. Shea (University of Pennsylvania)
Susan C. Day (University of Pennsylvania)
Lynn O. Langdon (American Board of Internal Medicine)
Eric S. Holmboe (American Board of Internal Medicine)
F. Daniel Duffy (American Board of Internal Medicine)
Halyna Didura (American Board of Internal Medicine)