

Publication


Featured research published by Cornelis A.W. Glas.


Archive | 2000

Computerized adaptive testing: theory and practice

Willem J. van der Linden; Cornelis A.W. Glas

Preface.

Part 1: Item Selection and Ability Estimation.
1. Item selection and ability estimation in adaptive testing (W.J. van der Linden, P.J. Pashley).
2. Constrained adaptive testing with shadow tests (W.J. van der Linden).
3. Principles of multidimensional adaptive testing (D.O. Segall).

Part 2: Applications in Large-Scale Testing Programs.
4. The GRE computer adaptive test: Operational issues (G.N. Mills, M. Steffen).
5. MATHCAT: A flexible testing system in mathematics education for adults (A.J. Verschoor, G.J.J.M. Straetmans).
6. Computer-adaptive sequential testing in the US Medical Licensing Examination (R.M. Luecht, R.J. Nungester).

Part 3: Item Pool Development and Maintenance.
7. Innovative item types for computerized testing (C.G. Parshall, et al.).
8. Designing item pools for computerized adaptive testing (B.P. Veldkamp, W.J. van der Linden).
9. Methods of controlling the exposure of items in CAT (M.L. Stocking, C. Lewis).

Part 4: Item Calibration and Model Fit.
10. Item calibration and parameter drift (C.A.W. Glas).
11. Detecting person misfit in adaptive testing using statistical process control techniques (E.M.L.A. van Krimpen-Stoop, R.R. Meijer).
12. The assessment of differential item functioning in computer adaptive tests (R. Zwick).

Part 5: Testlet-Based Adaptive Testing.
13. Testlet response theory: An analog for the 3PL model useful in testlet-based adaptive testing (H. Wainer, et al.).
14. MML and EAP estimation in testlet-based adaptive testing (C.A.W. Glas, et al.).
15. Testlet-based adaptive mastery testing (H.J. Vos, C.A.W. Glas).

Author Index. Subject Index.


Psychometrika | 2001

MCMC estimation and some model-fit analysis of multidimensional IRT models

Anton Béguin; Cornelis A.W. Glas

A Bayesian procedure to estimate the three-parameter normal ogive model and a generalization of the procedure to a model with multidimensional ability parameters are presented. The procedure is a generalization of a procedure by Albert (1992) for estimating the two-parameter normal ogive model. The procedure supports analyzing data from multiple populations and incomplete designs. It is shown that restrictions can be imposed on the factor matrix for testing specific hypotheses about the ability structure. The technique is illustrated using simulated and real data.
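The three-parameter normal ogive model that the procedure above generalizes gives the probability of a correct response as P(θ) = c + (1 − c)Φ(a(θ − b)). A minimal sketch of this response function and of simulating item response data from it (illustrative only, not the authors' Bayesian estimation procedure; all parameter values are made up):

```python
import math
import random

def normal_cdf(x):
    # Standard normal CDF via the error function
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def p_correct(theta, a, b, c):
    # 3-parameter normal ogive: slope a, difficulty b, guessing floor c
    return c + (1.0 - c) * normal_cdf(a * (theta - b))

def simulate_responses(thetas, items, rng):
    # items: list of (a, b, c) tuples; returns a 0/1 response matrix
    return [[1 if rng.random() < p_correct(t, a, b, c) else 0
             for (a, b, c) in items]
            for t in thetas]

rng = random.Random(42)
items = [(1.2, 0.0, 0.2), (0.8, -1.0, 0.25), (1.5, 1.0, 0.2)]
thetas = [rng.gauss(0, 1) for _ in range(1000)]
data = simulate_responses(thetas, items, rng)
```

Data of this form, possibly from multiple populations or incomplete designs, is the input the estimation procedure in the paper operates on.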


Statistics for Social and Behavioral Sciences | 2010

Elements of adaptive testing

Willem J. van der Linden; Cornelis A.W. Glas

Item selection and ability estimation in adaptive testing (Wim J. van der Linden and Peter J. Pashley).
Constrained adaptive testing with shadow tests (Wim J. van der Linden).
Principles of multidimensional adaptive testing (Daniel O. Segall).
Multidimensional adaptive testing with Kullback-Leibler information item selection (Wim J. van der Linden and Joris Mulder).
Sequencing an adaptive test battery (Wim J. van der Linden).
Adaptive tests for measuring anxiety and depression (Otto B. Walter).
MATHCAT: A flexible testing system in mathematics education for adults (Alfred J. Verschoor and Gerard J.J.M. Straetmans).
Implementing the Graduate Management Admission Test computerized adaptive test (Lawrence M. Rudner).
Designing and implementing a multistage adaptive test: The Uniform CPA Exam (Gerald J. Melican, Krista Breithaupt, and Yanwei Zhang).
A Japanese adaptive test of English as a foreign language: Developmental and operational aspects (Yasuko Nogami and Norio Hayashi).
Innovative items for computerized testing (Cynthia G. Parshall, J. Christine Harmes, Tim Davey, and Peter J. Pashley).
Designing item pools for adaptive testing (Bernard P. Veldkamp and Wim J. van der Linden).
Assembling an inventory of multistage adaptive testing systems (Krista Breithaupt, Adelaide A. Ariel, and Donovan R. Hare).
Item parameter estimation and item fit analysis (Cees A.W. Glas).
Estimation of the parameters in an item-cloning model for adaptive testing (Cees A.W. Glas, Wim J. van der Linden, and Hanneke Geerlings).
Detecting person misfit in adaptive testing using statistical process control techniques (Edith M.L.A. van Krimpen-Stoop and Rob R. Meijer).
The assessment of differential item functioning in computer adaptive tests (Rebecca Zwick).
Multi-stage testing: Issues, designs, and research (April Zenisky, Ronald K. Hambleton, and Richard M. Luecht).
Three-category adaptive classification testing (Theo J.H.M. Eggen).
Testlet-based adaptive mastery testing (Hans J. Vos and Cees A.W. Glas).
Adaptive mastery testing using a multidimensional IRT model (Cees A.W. Glas and Hans J. Vos).


Psychometrika | 1999

Modification indices for the 2-PL and the nominal response model

Cornelis A.W. Glas

In this paper, it is shown that various violations of the 2-PL model and the nominal response model can be evaluated using the Lagrange multiplier test or the equivalent efficient score test. The tests presented here focus on violation of local stochastic independence and misspecification of the form of the item characteristic curves. Primarily, the tests are item-oriented diagnostic tools, but taken together they also serve to evaluate global model fit. A useful feature of Lagrange multiplier statistics is that they are evaluated using maximum likelihood estimates of the null model only; that is, the parameters of alternative models need not be estimated. As numerical examples, an application to real data and some power studies are presented.


Applied Psychological Measurement | 2003

A comparison of item-fit statistics for the three-parameter logistic model

Cornelis A.W. Glas; Juan Carlos Suarez Falcon

In this article, the Type I error rate and the power of a number of existing and new tests of fit to the three-parameter logistic model (3PLM) are investigated. The first test is a generalization of a test for the evaluation of the fit to the two-parameter logistic model (2PLM) based on the Lagrange multiplier (LM) test or the equivalent efficient score test. This technique is applied to two model violations: deviation from the 3PLM item characteristic curve and violation of local stochastic independence. The LM test for the first violation is compared with the Q1–G²j and S–G²j tests, respectively. The LM test for the second violation is compared with the Q3 test and a new test, the S3 test, which can be viewed as a generalization of the approach of the S–G²j test to the evaluation of violation of local independence. The results of simulation studies indicate that all tests except the Q1–G²j test have a Type I error rate that is acceptably close to the nominal significance level, and good power to detect the model violations they are targeted at. When, however, misfitting items are present in a test, the proportion of items that are incorrectly flagged as misfitting can become undesirably high, especially for short tests.
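An observed-versus-expected item-fit diagnostic of the general kind compared above can be sketched as follows. This is a simplified illustration, not the actual S–G²j statistic: examinees are grouped by their rest score (sum score excluding the studied item), and the expected count in each group is taken as the sum of the model probabilities, rather than being derived from the model-implied score distribution as the real statistic requires:

```python
import math
from collections import defaultdict

def g2_item_fit(responses, probs, item):
    """Rough item-fit diagnostic for one item: group examinees by rest
    score and compare observed vs model-expected counts correct with a
    G^2 (likelihood-ratio) sum over the groups."""
    groups = defaultdict(list)  # rest score -> list of (observed, expected)
    for resp, p in zip(responses, probs):
        rest = sum(resp) - resp[item]
        groups[rest].append((resp[item], p[item]))
    g2 = 0.0
    for pairs in groups.values():
        n = len(pairs)
        if n < 5:  # skip sparse score groups
            continue
        obs = sum(o for o, _ in pairs)
        exp = sum(e for _, e in pairs)
        # G^2 contribution from the correct and incorrect cells
        for o, e in ((obs, exp), (n - obs, n - exp)):
            if o > 0 and e > 0:
                g2 += 2.0 * o * math.log(o / e)
    return g2
```

Large values of the statistic indicate that, within score groups, observed proportions correct deviate from what the fitted model predicts; the papers cited above calibrate such statistics against reference distributions to obtain Type I error control.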


Applied Psychological Measurement | 2003

A Bayesian Approach to Person Fit Analysis in Item Response Theory Models

Cornelis A.W. Glas; Rob R. Meijer

A Bayesian approach to the evaluation of person fit in item response theory (IRT) models is presented. In a posterior predictive check, the observed value on a discrepancy variable is positioned in its posterior distribution. In a Bayesian framework, a Markov chain Monte Carlo procedure can be used to generate samples of the posterior distribution of the parameters of interest. These draws can also be used to compute the posterior predictive distribution of the discrepancy variable. The procedure is worked out in detail for the three-parameter normal ogive model, but it is also shown that the procedure can be directly generalized to many other IRT models. Type I error rate and the power against some specific model violations are evaluated using a number of simulation studies. Index terms: Bayesian statistics, item response theory, person fit, model fit, 3-parameter normal ogive model, posterior predictive check, power studies, Type I error.
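The posterior predictive check described above can be sketched as follows for a single person, using the log-likelihood of the response pattern as the discrepancy variable. This is an illustrative simplification: item parameters are treated as known, and theta_draws stands in for draws that would come from an MCMC sampler:

```python
import math
import random

def irf(theta, a, b, c):
    # 3-parameter normal ogive item response function
    return c + (1 - c) * 0.5 * (1 + math.erf(a * (theta - b) / math.sqrt(2)))

def log_lik(pattern, theta, items):
    # Discrepancy variable: log-likelihood of a 0/1 response pattern
    ll = 0.0
    for x, (a, b, c) in zip(pattern, items):
        p = irf(theta, a, b, c)
        ll += math.log(p) if x else math.log(1 - p)
    return ll

def ppp_value(pattern, theta_draws, items, rng):
    """Posterior predictive p-value for one person: across posterior
    draws of theta, count how often a pattern replicated under the model
    is at least as extreme (low log-likelihood) as the observed one."""
    extreme = 0
    for theta in theta_draws:
        rep = [1 if rng.random() < irf(theta, a, b, c) else 0
               for (a, b, c) in items]
        if log_lik(rep, theta, items) <= log_lik(pattern, theta, items):
            extreme += 1
    return extreme / len(theta_draws)
```

A p-value near zero flags the person as misfitting: their observed pattern is less likely than nearly all patterns the model itself generates at plausible trait values.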


British Journal of Mathematical and Statistical Psychology | 2005

Modelling non-ignorable missing data mechanisms with item response theory models

Rebecca Holman; Cornelis A.W. Glas

A model-based procedure for assessing the extent to which missing data can be ignored and handling non-ignorable missing data is presented. The procedure is based on item response theory modelling. As an example, the approach is worked out in detail in conjunction with item response data modelled using the partial credit and generalized partial credit models. Simulation studies are carried out to assess the extent to which the bias caused by ignoring the missing-data mechanism can be reduced. Finally, the feasibility of the procedure is demonstrated using data from a study to calibrate a medical disability scale.


Psychometrika | 1993

A dynamic generalization of the Rasch model

N.D. Verhelst; Cornelis A.W. Glas

In the present paper, a model for describing dynamic processes is constructed by combining the common Rasch model with the concept of structurally incomplete designs. This is accomplished by mapping each item onto a collection of virtual items, one of which is assumed to be presented to the respondent depending on the preceding responses and/or the feedback obtained. It is shown that, in the case of subject control, no unique conditional maximum likelihood (CML) estimates exist, whereas marginal maximum likelihood (MML) proves a suitable estimation procedure. A hierarchical family of dynamic models is presented, and it is shown how to test special cases against more general ones. Furthermore, it is shown that the model presented is a generalization of a class of mathematical learning models known as Luce's beta model.


Health and Quality of Life Outcomes | 2005

The Academic Medical Center Linear Disability Score (ALDS) item bank: item response theory analysis in a mixed patient population.

Rebecca Holman; Nadine Weisscher; Cornelis A.W. Glas; Marcel G. W. Dijkgraaf; Martinus Vermeulen; Rob J. de Haan; R. Lindeboom

Background: Currently, there is considerable interest in the flexible framework offered by item banks for measuring patient-relevant outcomes. However, few item banks have been developed to quantify functional status, as expressed by the ability to perform activities of daily life. This paper examines the measurement properties of the Academic Medical Center linear disability score item bank in a mixed population.

Methods: This paper uses item response theory to analyse data on 115 of 170 items from a total of 1002 respondents. These were: 551 (55%) residents of supported housing, residential care or nursing homes; 235 (23%) patients with chronic pain; 127 (13%) inpatients on a neurology ward following a stroke; and 89 (9%) patients suffering from Parkinson's disease.

Results: Of the 170 items, 115 were judged to be clinically relevant. Of these 115 items, 77 were retained in the item bank following the item response theory analysis. Of the 38 items that were excluded from the item bank, 24 had either been presented to fewer than 200 respondents or had fewer than 10% or more than 90% of responses in the category 'can carry out'. A further 11 items had different measurement properties for younger and older or for male and female respondents. Finally, 3 items were excluded because the item response theory model did not fit the data.

Conclusion: The Academic Medical Center linear disability score item bank has promising measurement characteristics for the mixed patient population described in this paper. Further studies will be needed to examine the measurement properties of the item bank in other populations.


Controlled Clinical Trials | 2003

Power analysis in randomized clinical trials based on item response theory

Rebecca Holman; Cornelis A.W. Glas; Rob J. de Haan

Patient relevant outcomes, measured using questionnaires, are becoming increasingly popular endpoints in randomized clinical trials (RCTs). Recently, interest in the use of item response theory (IRT) to analyze the responses to such questionnaires has increased. In this paper, we used a simulation study to examine the small sample behavior of a test statistic designed to examine the difference in average latent trait level between two groups when the two-parameter logistic IRT model for binary data is used. The simulation study was extended to examine the relationship between the number of patients required in each arm of an RCT, the number of items used to assess them, and the power to detect minimal, moderate, and substantial treatment effects. The results show that the number of patients required in each arm of an RCT varies with the number of items used to assess the patients. However, as long as at least 20 items are used, the number of items barely affects the number of patients required in each arm of an RCT to detect effect sizes of 0.5 and 0.8 with a power of 80%. In addition, the number of items used has more effect on the number of patients required to detect an effect size of 0.2 with a power of 80%. For instance, if only five randomly selected items are used, it is necessary to include 950 patients in each arm, but if 50 items are used, only 450 are required in each arm. These results indicate that if an RCT is to be designed to detect small effects, it is inadvisable to use very short instruments analyzed using IRT. Finally, the SF-36, SF-12, and SF-8 instruments were considered in the same framework. Since these instruments consist of items scored in more than two categories, slightly different results were obtained.
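The kind of simulation study described above can be sketched roughly as follows, under simplifying assumptions that are not the paper's: a 2PL model with known, illustrative item parameters, a crude grid-based maximum-likelihood trait estimate per patient, and a two-sample t-test on the estimated trait levels in place of the authors' test statistic:

```python
import math
import random

def p2pl(theta, a, b):
    # Two-parameter logistic item response function
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def ml_theta(resp, items, grid=None):
    # Crude maximum-likelihood trait estimate over a coarse grid
    if grid is None:
        grid = [g / 10.0 for g in range(-40, 41)]
    def ll(t):
        s = 0.0
        for x, (a, b) in zip(resp, items):
            p = p2pl(t, a, b)
            s += math.log(p) if x else math.log(1 - p)
        return s
    return max(grid, key=ll)

def simulate_power(effect, n_per_arm, items, reps, rng):
    # Proportion of replications in which a two-sample t-test on the
    # estimated trait levels rejects at (roughly) alpha = 0.05
    hits = 0
    for _ in range(reps):
        est = {0: [], 1: []}
        for arm in (0, 1):
            for _ in range(n_per_arm):
                theta = rng.gauss(arm * effect, 1.0)
                resp = [1 if rng.random() < p2pl(theta, a, b) else 0
                        for (a, b) in items]
                est[arm].append(ml_theta(resp, items))
        m0 = sum(est[0]) / n_per_arm
        m1 = sum(est[1]) / n_per_arm
        v0 = sum((x - m0) ** 2 for x in est[0]) / (n_per_arm - 1)
        v1 = sum((x - m1) ** 2 for x in est[1]) / (n_per_arm - 1)
        t = (m1 - m0) / math.sqrt(v0 / n_per_arm + v1 / n_per_arm)
        if abs(t) > 1.96:  # normal approximation to the critical value
            hits += 1
    return hits / reps

rng = random.Random(1)
items = [(1.0, b / 2.0) for b in range(-5, 5)]  # 10 illustrative items
power = simulate_power(0.8, 50, items, 20, rng)
```

Varying the number of items and patients per arm in such a loop reproduces the paper's central trade-off: short instruments inflate the error in the trait estimates, so more patients are needed to reach the same power.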

Collaboration


Dive into Cornelis A.W. Glas's collaborations.

Top Co-Authors

Erik Taal (Medisch Spectrum Twente)
Mart A.F.J. van de Laar (Radboud University Nijmegen Medical Centre)
Stephanie Nikolaus (Radboud University Nijmegen)
Qiwei He (University of Twente)