Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Susan M. Case is active.

Publication


Featured research published by Susan M. Case.


Teaching and Learning in Medicine | 1993

Extended‐matching items: A practical alternative to free‐response questions

Susan M. Case; David B. Swanson

This article describes an item format, termed extended matching, that is currently used for the United States Medical Licensing Examination (USMLE). Extended‐matching items are organized in sets that include several items with a single option list. In one common form of extended‐matching items, each item in a set describes a patient with the same chief complaint and requires the examinee to select the most likely diagnosis from a list of diagnoses associated with that chief complaint. This article outlines procedures for writing and reviewing these items, discusses the test development and psychometric advantages of the format, and reviews practical issues that arise in test administration and scoring. If items provide fairly detailed descriptions of patient situations and require examinees to make a diagnosis or specify the next step in patient care, the extended‐matching format can be used to challenge important clinical decision‐making skills. With its long option list, the format provides a good compr...
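The structure described above, several item stems sharing one long option list, with number-correct scoring, can be sketched as a simple data structure. The option list, patient stems, and keys below are invented for illustration and are not actual USMLE content:

```python
# Hypothetical extended-matching set for the chief complaint "fatigue/anemia".
# All options, stems, and keys are illustrative, not real exam material.
option_list = [
    "Iron deficiency anemia",
    "Vitamin B12 deficiency",
    "Sickle cell disease",
    "Thalassemia trait",
    "Anemia of chronic disease",
]

# Every item in the set shares option_list; the key is an index into it.
items = [
    {"stem": "A 25-year-old woman with heavy menses and microcytic anemia ...",
     "key": 0},
    {"stem": "A 70-year-old man with macrocytic anemia and decreased vibratory sense ...",
     "key": 1},
]

def score(responses, items):
    """Number-correct score: one point per item whose chosen option matches the key."""
    return sum(1 for r, item in zip(responses, items) if r == item["key"])

print(score([0, 1], items))  # both items answered correctly -> prints 2
```

Because every item reuses the same option list, adding items to the set is cheap for test developers, which is part of the format's test-development advantage noted above.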


Journal of Educational and Behavioral Statistics | 2002

Analysis of Differential Item Functioning (DIF) Using Hierarchical Logistic Regression Models

David B. Swanson; Brian E. Clauser; Susan M. Case; Ronald J. Nungester; Carol Morrison Featherman

Over the past 25 years a range of parametric and nonparametric methods have been developed for analyzing Differential Item Functioning (DIF). These procedures are typically performed for each item individually or for small numbers of related items. Because the analytic procedures focus on individual items, it has been difficult to pool information across items to identify potential sources of DIF analytically. In this article, we outline an approach to DIF analysis using hierarchical logistic regression that makes it possible to combine results of logistic regression analyses across items to identify consistent sources of DIF, to quantify the proportion of explained variation in DIF coefficients, and to compare the predictive accuracy of alternate explanations for DIF. The approach can also be used to improve the accuracy of DIF estimates for individual items by applying empirical Bayes techniques, with DIF-related item characteristics serving as collateral information. To illustrate the hierarchical logistic regression procedure, we use a large data set derived from recent computer-based administrations of Step 2, the clinical science component of the United States Medical Licensing Examination (USMLE®). Results of a small Monte Carlo study of the accuracy of the DIF estimates are also reported.
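The two-stage logic sketched in the abstract, per-item logistic regressions whose group coefficients are then pooled across items, can be illustrated with simulated data. The sketch below is a deliberate simplification of the hierarchical model: the "empirical Bayes" step shrinks toward the pooled mean using an assumed common sampling variance (a placeholder, not an estimate from the fits), and all numbers are made up:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Simulated data: all values are illustrative, not USMLE results.
rng = np.random.default_rng(0)
n_examinees, n_items = 3000, 10
ability = rng.normal(size=n_examinees)          # proxy for the matching variable
group = rng.integers(0, 2, size=n_examinees)    # 0 = reference, 1 = focal group
true_dif = rng.normal(0.0, 0.4, size=n_items)   # item-level uniform DIF effects

# Stage 1: separate logistic regression per item, P(correct) ~ ability + group.
# The group coefficient is that item's (uniform) DIF estimate.
X = np.column_stack([ability, group])
dif_estimates = []
for i in range(n_items):
    logits = ability + true_dif[i] * group
    y = rng.binomial(1, 1.0 / (1.0 + np.exp(-logits)))
    fit = LogisticRegression(C=1e6).fit(X, y)   # large C ~ near-unpenalized fit
    dif_estimates.append(fit.coef_[0][1])
dif_estimates = np.array(dif_estimates)

# Stage 2 (hierarchical flavor): shrink item DIF estimates toward the pooled
# mean, empirical-Bayes style. sampling_var is an assumed placeholder.
sampling_var = 0.01
between_var = max(dif_estimates.var(ddof=1) - sampling_var, 1e-6)
w = between_var / (between_var + sampling_var)
shrunken = w * dif_estimates + (1 - w) * dif_estimates.mean()
```

A full hierarchical analysis would model the DIF coefficients as a function of item characteristics and estimate the sampling variances jointly; this sketch only shows why pooling is possible once each item yields a comparable coefficient.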


Advances in Health Sciences Education | 1999

Clinical skills assessment with standardized patients in high-stakes tests: A framework for thinking about score precision, equating, and security

David B. Swanson; Brian E. Clauser; Susan M. Case

Over the past decade, there has been a dramatic increase in the use of standardized patients (SPs) for assessment of clinical skills in high-stakes testing situations. This paper provides a framework for thinking about three inter-related issues that remain problematic in high-stakes use of SP-based tests: methods for estimating the precision of scores; procedures for placing (equating) scores from different test forms onto the same scale; and threats to the security of SP-based exams. While generalizability theory is now commonly employed to analyze factors influencing the precision of test scores, it is very common for investigators to use designs that do not appropriately represent the complexity of SP-based test administration. Development of equating procedures for SP-based tests is in its infancy, largely utilizing methods adapted from multiple-choice testing. Despite the obvious importance of adjusting scores on alternate test forms to reduce measurement error and ensure equitable treatment of examinees, equating procedures are not typically employed. Research on security to date has been plagued by serious methodological problems, and procedures that seem likely to aid in maintaining security tend to increase the complexity of test construction and administration, as well as the analytic methods required to examine precision and equate scores across test forms. Recommendations are offered for improving research and use of SP-based assessment in high-stakes tests.

Over the past decade, the use of standardized patients (SPs) for assessment of clinical skills has increased dramatically. In North America, it is now common for SPs to be used in high-stakes tests. Dozens of medical schools have now instituted "Clinical Practice Exams" that students take during their senior year (Association of American Medical Colleges, 1998). At many of these schools students must pass these exams to graduate; those who fail are typically assigned to remedial work before retesting.
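As a toy illustration of the generalizability-theory analysis mentioned above, the sketch below estimates variance components for a fully crossed persons x SPs (p x i) design from simulated scores and forms a generalizability coefficient for relative decisions. The design and every number are assumptions; real SP-based administrations are usually more complex, which is precisely the paper's point:

```python
import numpy as np

# Simulated p x i data: 200 examinees, each scored once by each of 8 SP cases.
rng = np.random.default_rng(0)
n_p, n_i = 200, 8
person = rng.normal(0, 1.0, size=(n_p, 1))   # true person (examinee) effects
sp = rng.normal(0, 0.5, size=(1, n_i))       # SP (case) difficulty effects
scores = 70 + person + sp + rng.normal(0, 1.0, size=(n_p, n_i))

grand = scores.mean()
p_means = scores.mean(axis=1, keepdims=True)
i_means = scores.mean(axis=0, keepdims=True)

# Mean squares from the two-way ANOVA without replication
ms_p = n_i * ((p_means - grand) ** 2).sum() / (n_p - 1)
ms_i = n_p * ((i_means - grand) ** 2).sum() / (n_i - 1)
ms_res = ((scores - p_means - i_means + grand) ** 2).sum() / ((n_p - 1) * (n_i - 1))

# Variance components and the G coefficient for relative decisions,
# averaging over n_i SP encounters per examinee
var_res = ms_res
var_p = max((ms_p - ms_res) / n_i, 0.0)
g_coef = var_p / (var_p + var_res / n_i)
```

A design that better reflected real SP testing would add facets such as raters and occasions; the abstract's criticism is that published analyses often omit exactly those facets.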


Teaching and Learning in Medicine | 1996

Conceptual and methodological issues in studies comparing assessment formats

Geoffrey R. Norman; David B. Swanson; Susan M. Case

Background: There is an extensive literature, dating back several decades, comparing written item formats (i.e., multiple-choice versus free-response questions). Most studies conclude that different formats measure different underlying skills or competencies. Purpose: The purpose of this article was to critically examine methodological issues that arise in studies comparing alternate written assessment methods, and to identify conditions that must be met before it can be concluded that different formats assess different aptitudes or traits. Methods: Using a hypothetical study as a focus, we critically examined the requirements for an unambiguous conclusion of a format difference. Results: We identified a total of eight criteria that must be met for research on item formats. Conclusions: Although we made no attempt to systematically review the literature, a reasonable conclusion is that few studies have met these criteria and have sufficient rigor to support the claim that different formats a...


Academic Medicine | 1996

Retention of basic science information by fourth-year medical students.

David B. Swanson; Susan M. Case; Richard M. Luecht; G. F. Dillon

No abstract available.


Academic Medicine | 1994

Comparison of items in five-option and extended-matching formats for assessment of diagnostic skills

Susan M. Case; David B. Swanson; D. R. Ripkey

No abstract available.


Academic Medicine | 1996

Performance of the class of 1994 in the new era of USMLE.

Susan M. Case; David B. Swanson; D. R. Ripkey; L. T. Bowles; D. E. Melnick

No abstract available.


Academic Medicine | 1997

The effects of psychiatry clerkship timing and length on measures of performance

Susan M. Case; D. R. Ripkey; David B. Swanson

No abstract available.


Academic Medicine | 1996

Verbosity, window dressing, and red herrings: do they make a better test item?

Susan M. Case; David B. Swanson; D. F. Becker

No abstract available.


Academic Medicine | 1997

Predicting performances on the NBME Surgery Subject Test and USMLE Step 2: the effects of surgery clerkship timing and length

D. R. Ripkey; Susan M. Case; David B. Swanson

No abstract available.

Collaboration


Dive into Susan M. Case's collaborations.

Top Co-Authors

David B. Swanson (National Board of Medical Examiners)
Douglas R. Ripkey (National Board of Medical Examiners)
Brian E. Clauser (National Board of Medical Examiners)
D. E. Melnick (National Board of Medical Examiners)
L. T. Bowles (National Board of Medical Examiners)
Matthew C. Holtman (National Board of Medical Examiners)
Ronald J. Nungester (National Board of Medical Examiners)
Carol MacLaren (University of Washington)
Carol Morrison Featherman (National Board of Medical Examiners)