Network


Latest external collaborations at the country level.

Hotspot


Research topics in which Michael E. Walker is active.

Publication


Featured research published by Michael E. Walker.


Applied Measurement in Education | 2012

Determining the Anchor Composition for a Mixed-Format Test: Evaluation of Subpopulation Invariance of Linking Functions

Sooyeon Kim; Michael E. Walker

This study examined the appropriateness of the anchor composition in a mixed-format test, which includes both multiple-choice (MC) and constructed-response (CR) items, using subpopulation invariance indices. Linking functions were derived in the nonequivalent groups with anchor test (NEAT) design using two types of anchor sets: (a) MC only and (b) a mix of MC and CR. In each anchor condition, the linking functions were also derived separately for males and females, and those subpopulation functions were compared to the total group function. In the MC-only condition, the difference between the subpopulation functions and the total group function was not trivial in a score region that included cut scores, leading to inconsistent pass/fail decisions for low-performing examinees in particular. Overall, the mixed anchor was a better choice than the MC-only anchor to achieve subpopulation invariance between males and females. The research reinforces subpopulation invariance indices as a means of determining the adequacy of the anchor.
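The subpopulation invariance check described above is commonly summarized with a standardized root-mean-square difference (RMSD) between subgroup and total-group linking functions. The sketch below illustrates that computation in Python; the linking functions, subgroup weights, reference-form standard deviation, cut score, and threshold are hypothetical placeholders, not values from the study.

```python
import numpy as np

# Hypothetical linking functions evaluated at each new-form raw score point.
# In the study these would come from NEAT-design linkings (MC-only vs. mixed
# anchor); the numbers below are placeholders for illustration only.
scores = np.arange(0, 61)                # assumed new-form raw score range
link_total = 0.95 * scores + 2.0         # total-group linking function
link_male = 0.96 * scores + 1.6          # male subgroup linking function
link_female = 0.94 * scores + 2.5        # female subgroup linking function

w = {"male": 0.48, "female": 0.52}       # assumed subgroup proportions
sd_ref = 9.0                             # assumed SD of reference-form scores

# Root-mean-square difference between subgroup and total-group linkings,
# standardized by the reference-form SD (Dorans-Holland style index).
sq_diff = (w["male"] * (link_male - link_total) ** 2
           + w["female"] * (link_female - link_total) ** 2)
rmsd = np.sqrt(sq_diff) / sd_ref

# Flag a hypothetical cut score where the linking difference exceeds a
# "difference that matters" threshold (here half a standardized score unit).
cut_score = 40
dtm = 0.5 / sd_ref
print("RMSD at cut score:", rmsd[scores == cut_score][0])
print("Invariance violated near cut?", bool(rmsd[scores == cut_score][0] > dtm))
```

In practice, the same comparison would be run once with the MC-only anchor and once with the mixed anchor, and the two RMSD profiles compared in the cut-score region.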


Applied Measurement in Education | 2012

Investigating Repeater Effects on Chained Equipercentile Equating with Common Anchor Items

Sooyeon Kim; Michael E. Walker

This study investigated the impact of repeat takers of a licensure test on the equating functions in the context of a nonequivalent groups with anchor test (NEAT) design. Examinees who had taken a new, to-be-equated form of the test were divided into three subgroups according to their previous testing experience: (a) repeaters who previously took the reference form, to which the new form would be equated; (b) repeaters who previously took any form other than the reference form; and (c) first-time test-takers for whom the new form was the first exposure to the test. Equating functions remained essentially invariant when all repeaters were compared with first-time test-takers, supporting the score equatability of the two forms. However, when the repeater group was subdivided based on the particular form examinees had taken previously, subgroup equating functions differed substantially from the total-group equating function, indicating subgroup dependency of score equating. The results indicate that repeater membership needs to be more clearly specified to assess the impact of repeaters on score equating. Such clarification may be especially necessary for high-stakes licensure tests because repeaters tend to perform more poorly on such tests than first-time test-takers.
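For readers unfamiliar with the method named in the title, the sketch below shows the basic chaining idea behind chained equipercentile equating: new-form scores are linked to the anchor through percentile ranks in the new-form group, and the anchor is then linked to the reference form through percentile ranks in the reference-form group. The helper function and the simulated score distributions are placeholders for illustration only, not the study's data or full procedure (which would include smoothing and continuization).

```python
import numpy as np

def equipercentile_link(x_scores, y_scores, x_points):
    """Map score points on X to the Y scale by matching percentile ranks.

    A bare-bones equipercentile link (no presmoothing or continuization),
    used here only to illustrate the chaining idea.
    """
    x_sorted = np.sort(x_scores)
    y_sorted = np.sort(y_scores)
    # Percentile rank of each x point in the X distribution.
    pr = np.searchsorted(x_sorted, x_points, side="right") / len(x_sorted)
    # Score on Y with the same percentile rank.
    idx = np.clip((pr * len(y_sorted)).astype(int), 0, len(y_sorted) - 1)
    return y_sorted[idx]

rng = np.random.default_rng(0)

# Simulated groups (placeholders): group P took the new form plus anchor,
# group Q took the reference form plus anchor.
new_P = rng.normal(30, 8, 5000).round()
anchor_P = rng.normal(15, 4, 5000).round()
anchor_Q = rng.normal(16, 4, 5000).round()
ref_Q = rng.normal(32, 8, 5000).round()

points = np.arange(0, 61)
# Chain: new form -> anchor (in group P), then anchor -> reference form (in group Q).
anchor_equiv = equipercentile_link(new_P, anchor_P, points)
ref_equiv = equipercentile_link(anchor_Q, ref_Q, anchor_equiv)
print(ref_equiv[:10])
```

In the study, a chain like this would be recomputed within each repeater subgroup and compared with the total-group chain to check for subgroup dependency.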


International Journal of Testing | 2012

Examining Possible Construct Changes to a Licensure Test by Evaluating Equating Requirements

Sooyeon Kim; Michael E. Walker; Kevin Larkin

We demonstrate how to assess the potential changes to a test's score scale necessitated by changes to the test specifications when a field study is not feasible. We used a licensure test, which is currently under revision, as an example. We created two research forms from an actual form of the test. One research form was developed with the current specifications and the other was developed with the redesigned (new) specifications in terms of the proportion (not number) of items in each category. We examined whether the current and redesigned tests measure the same construct and have the same level of reliability. Then we used subpopulation invariance indices to assess the equatability of the redesigned test to the current test using data sets from actual operational administrations of the current test. The results suggest that the change in test specifications might be great enough that the current and redesigned test scores could not be considered exchangeable across the score range. However, the score scales for the two tests appear to coincide in the cut-score region.
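As a rough companion to the reliability and construct comparison described above, the sketch below contrasts coefficient alpha for two research forms and the disattenuated correlation between their total scores, assuming item-level score matrices are available. The simulated data, form lengths, and variable names are placeholders, not values from the licensure test.

```python
import numpy as np

def cronbach_alpha(items):
    """Coefficient alpha for an examinee-by-item score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

rng = np.random.default_rng(1)

# Simulated examinees answering both research forms (placeholder data):
# a single common ability drives correct/incorrect responses on each form.
n, k_current, k_redesigned = 2000, 60, 60
theta = rng.normal(size=(n, 1))
current = (rng.normal(size=(n, k_current)) < theta).astype(int)
redesigned = (rng.normal(size=(n, k_redesigned)) < theta).astype(int)

alpha_cur = cronbach_alpha(current)
alpha_new = cronbach_alpha(redesigned)

# Disattenuated correlation between total scores as a rough construct-similarity check.
x, y = current.sum(axis=1), redesigned.sum(axis=1)
r = np.corrcoef(x, y)[0, 1]
r_disattenuated = r / np.sqrt(alpha_cur * alpha_new)

print(f"alpha (current spec):      {alpha_cur:.3f}")
print(f"alpha (redesigned spec):   {alpha_new:.3f}")
print(f"disattenuated correlation: {r_disattenuated:.3f}")
```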


Journal of Educational Measurement | 2010

Comparisons among Designs for Equating Mixed‐Format Tests in Large‐Scale Assessments

Sooyeon Kim; Michael E. Walker; Frederick McHale


Journal of Educational Measurement | 2010

Investigating the Effectiveness of Equating Designs for Constructed-Response Tests in Large-Scale Assessments

Sooyeon Kim; Michael E. Walker; Frederick McHale


ETS Research Report Series | 2008

Equating of Mixed‐Format Tests in Large‐Scale Assessments

Sooyeon Kim; Michael E. Walker; Frederick McHale


ETS Research Report Series | 2004

New SAT Writing Prompt Study: Analyses of Group Impact and Reliability

Hunter M. Breland; Melvin Y. Kubota; Kristine Nickerson; Catherine Trapani; Michael E. Walker


ETS Research Report Series | 2009

Evaluating Subpopulation Invariance of Linking Functions to Determine the Anchor Composition for a Mixed-Format Test

Sooyeon Kim; Michael E. Walker


ETS Research Report Series | 2010

Linking Mixed-Format Tests Using Multiple Choice Anchors

Michael E. Walker; Sooyeon Kim


ETS Research Report Series | 2009

Effect of Repeaters on Score Equating in a Large-Scale Licensure Test

Sooyeon Kim; Michael E. Walker

Collaboration


Michael E. Walker's collaborations.
