Adelaide Ariel
University of Twente
Publication
Featured research published by Adelaide Ariel.
Journal of Educational and Behavioral Statistics | 2006
Wim J. van der Linden; Adelaide Ariel; Bernard P. Veldkamp
Test-item writing efforts typically result in item pools with an undesirable correlational structure between the content attributes of the items and their statistical information. If such pools are used in computerized adaptive testing (CAT), the algorithm may be forced to select items that have less than optimal information, violate the content constraints, and/or have unfavorable exposure rates. Although at first sight somewhat counterintuitive, it is shown that if the CAT pool is assembled as a set of linear test forms, undesirable correlations can be broken down effectively. It is proposed to assemble such pools using a mixed integer programming model with constraints that guarantee that each test meets all content specifications and an objective function that requires the tests to have maximal information at a well-chosen set of ability values. An empirical example with a previous master pool from the Law School Admission Test (LSAT) yielded a CAT with nearly uniform bias and mean-squared error functions for the ability estimator and item-exposure rates that satisfied the target for all items in the pool.
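The selection problem sketched in the abstract can be illustrated in miniature: pick a fixed-length test form that maximizes summed Fisher information at a set of ability values while meeting content constraints. This is a hedged toy sketch only; the item parameters, content areas, and brute-force search below are invented for illustration and stand in for the paper's actual mixed integer programming model.

```python
from itertools import combinations
from math import exp

# Hypothetical 2PL item pool: (content_area, a, b). Values are illustrative only.
POOL = [
    ("logic",   1.2, -0.5), ("logic",   0.8,  0.3), ("logic",   1.5,  1.0),
    ("reading", 1.0, -1.0), ("reading", 1.3,  0.2), ("reading", 0.9,  0.8),
]

def info(a, b, theta):
    """Fisher information of a 2PL item at ability theta: a^2 * P * (1 - P)."""
    p = 1.0 / (1.0 + exp(-a * (theta - b)))
    return a * a * p * (1.0 - p)

def assemble(pool, test_len, per_area, thetas):
    """Pick test_len items maximizing summed information at the target
    ability values, with exactly per_area items per content area.
    Exhaustive search stands in for a mixed integer programming solver."""
    best, best_val = None, -1.0
    for combo in combinations(range(len(pool)), test_len):
        counts = {}
        for i in combo:
            counts[pool[i][0]] = counts.get(pool[i][0], 0) + 1
        if any(counts.get(area, 0) != n for area, n in per_area.items()):
            continue  # violates the content specification
        val = sum(info(pool[i][1], pool[i][2], t) for i in combo for t in thetas)
        if val > best_val:
            best, best_val = combo, val
    return best, best_val

form, value = assemble(POOL, 4, {"logic": 2, "reading": 2}, thetas=[-1.0, 0.0, 1.0])
print(form, round(value, 3))
```

In the actual model, binary decision variables mark item membership and a solver handles pools far too large for enumeration; the objective and content constraints, however, have the same shape as above.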
Elements of adaptive testing | 2009
Krista Breithaupt; Adelaide Ariel; Donovan R. Hare
A natural tension exists between the goal of building an item bank large enough to preserve the equivalence and security of test questions and the goal of reducing the cost and effort of inventory creation.
Applied Psychological Measurement | 2006
Adelaide Ariel; Bernard P. Veldkamp; Krista Breithaupt
Computerized multistage testing (MST) designs require sets of test questions (testlets) to be assembled to meet strict, often competing criteria. Rules that govern testlet assembly may dictate the number of questions on a particular subject or may describe desirable statistical properties for the test, such as measurement precision. In an MST design, testlets of differing difficulty levels must be created. Statistical properties for assembly of the testlets can be expressed using item response theory (IRT) parameters. The testlet test information function (TIF) value can be maximized at a specific point on the IRT ability scale. In practical MST designs, parallel versions of testlets are needed, so sets of testlets with equivalent properties are built according to equivalent specifications. In this project, the authors study the use of a mathematical programming technique to simultaneously assemble testlets to ensure equivalence and fairness to candidates who may be administered different testlets.
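The abstract's central quantity, the testlet test information function (TIF), is additive over the item information functions of an IRT model. The following sketch computes 3PL item information and compares the TIF of an easy and a hard testlet at two ability levels; the item parameters are hypothetical and chosen only to illustrate how testlets of differing difficulty peak at different points on the ability scale.

```python
from math import exp

def p3pl(theta, a, b, c):
    """3PL probability of a correct response at ability theta."""
    return c + (1.0 - c) / (1.0 + exp(-a * (theta - b)))

def item_info(theta, a, b, c):
    """Fisher information of a 3PL item: a^2 * (q/p) * ((p - c)/(1 - c))^2."""
    p = p3pl(theta, a, b, c)
    q = 1.0 - p
    return (a ** 2) * (q / p) * ((p - c) / (1.0 - c)) ** 2

def tif(theta, items):
    """Testlet information function: item information is additive."""
    return sum(item_info(theta, a, b, c) for a, b, c in items)

# Hypothetical testlets, (a, b, c) per item -- illustrative values only.
easy = [(1.1, -1.2, 0.2), (0.9, -0.8, 0.2), (1.3, -1.0, 0.2)]
hard = [(1.1,  1.2, 0.2), (0.9,  0.8, 0.2), (1.3,  1.0, 0.2)]

# The easy testlet is more informative for low-ability candidates,
# the hard testlet for high-ability candidates.
print(round(tif(-1.0, easy), 3), round(tif(-1.0, hard), 3))
print(round(tif(1.0, easy), 3), round(tif(1.0, hard), 3))
```

Maximizing the TIF at a chosen ability point, as the abstract describes, amounts to selecting items whose information functions peak near that point while the parallel-forms constraints keep alternate testlets statistically equivalent.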
Journal of Educational Measurement | 2004
Adelaide Ariel; Bernard P. Veldkamp; Wim J. van der Linden
International Journal of Testing | 2005
Krista Breithaupt; Adelaide Ariel; Bernard P. Veldkamp
Journal of Educational Measurement | 2006
Adelaide Ariel; Wim J. van der Linden; Bernard P. Veldkamp
Advances in Psychology Research | 2002
Bernard P. Veldkamp; Wim J. van der Linden; Adelaide Ariel
Archive | 2004
Krista Breithaupt; Adelaide Ariel; Bernard P. Veldkamp
Annual Meeting of the National Council on Measurement in Education (NCME) 2004 | 2004
Krista Breithaupt; Adelaide Ariel; Bernard P. Veldkamp
OMD research report | 2002
Bernard P. Veldkamp; Adelaide Ariel