
Publications


Featured research published by Cees A. W. Glas.


Psychometrika | 2001

Bayesian Estimation of a Multilevel IRT Model Using Gibbs Sampling.

Jean-Paul Fox; Cees A. W. Glas

In this article, a two-level regression model is imposed on the ability parameters in an item response theory (IRT) model. The advantage of using latent rather than observed scores as dependent variables of a multilevel model is that it offers the possibility of separating the influence of item difficulty and ability level and modeling response variation and measurement error. Another advantage is that, contrary to observed scores, latent scores are test-independent, which offers the possibility of using results from different tests in one analysis where the parameters of the IRT model and the multilevel model can be concurrently estimated. The two-parameter normal ogive model is used for the IRT measurement model. It will be shown that the parameters of the two-parameter normal ogive model and the multilevel model can be estimated in a Bayesian framework using Gibbs sampling. Examples using simulated and real data are given.
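The two-parameter normal ogive model used as the measurement model can be sketched in a few lines. This is a minimal illustration, not code from the paper; the function names are mine, and parameterizations vary in the literature (the sketch uses P = Φ(a·(θ − b)), while some texts write Φ(a·θ − b)):

```python
import math

def normal_cdf(x):
    """Standard normal CDF, Phi(x), computed via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def ogive_prob(theta, a, b):
    """Two-parameter normal ogive model:
    P(correct | theta) = Phi(a * (theta - b)),
    with discrimination a and difficulty b."""
    return normal_cdf(a * (theta - b))

# An examinee whose ability equals the item difficulty answers
# correctly with probability one half.
print(ogive_prob(0.0, 1.0, 0.0))  # → 0.5
```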


Psychometrika | 1988

The Derivation of Some Tests for the Rasch Model from the Multinomial Distribution.

Cees A. W. Glas

The present paper is concerned with testing the fit of the Rasch model. It is shown that this can be achieved by constructing functions of the data, on which model tests can be based that have power against specific model violations. It is shown that the asymptotic distribution of these tests can be derived by using the theoretical framework of testing model fit in general multinomial and product-multinomial models. The model tests are presented in two versions: one that can be used in the context of marginal maximum likelihood estimation and one that can be applied in the context of conditional maximum likelihood estimation.
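As background, the Rasch model whose fit is being tested is a one-parameter logistic model. A minimal Python sketch (illustrative only; the paper's test statistics build on this model, not on this code):

```python
import math

def rasch_prob(theta, b):
    """Rasch model: P(X=1 | theta) = exp(theta - b) / (1 + exp(theta - b)),
    with ability theta and item difficulty b."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def pattern_loglik(theta, difficulties, responses):
    """Log-likelihood of a dichotomous response pattern under the Rasch model."""
    ll = 0.0
    for b, x in zip(difficulties, responses):
        p = rasch_prob(theta, b)
        ll += math.log(p) if x == 1 else math.log(1.0 - p)
    return ll
```

Under this model the sum score is a sufficient statistic for ability, which is what makes conditional maximum likelihood estimation possible.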


Psychometrika | 2003

Bayesian modeling of measurement error in predictor variables using item response theory

Jean-Paul Fox; Cees A. W. Glas

It is shown that measurement error in predictor variables can be modeled using item response theory (IRT). The predictor variables, which may be defined at any level of a hierarchical regression model, are treated as latent variables. The normal ogive model is used to describe the relation between the latent variables and dichotomous observed variables, which may be responses to tests or questionnaires. It is shown that the multilevel model with measurement error in the observed predictor variables can be estimated in a Bayesian framework using Gibbs sampling. In this article, handling measurement error via the normal ogive model is compared with alternative approaches using the classical true score model. Examples using real data are given.


Educational and Psychological Measurement | 2006

Application of multidimensional item response theory models to longitudinal data

Janneke M. te Marvelde; Cees A. W. Glas; Georges Van Landeghem; Jan Van Damme

The application of multidimensional item response theory (IRT) models to longitudinal educational surveys where students are repeatedly measured is discussed and exemplified. A marginal maximum likelihood (MML) method to estimate the parameters of a multidimensional generalized partial credit model for repeated measures is presented. It is shown that model fit can be evaluated using Lagrange multiplier tests. Two tests are presented: the first aims at evaluation of the fit of the item response functions and the second at the constancy of the item location parameters over time points. The outcome of the latter test is compared with an analysis using scatter plots and linear regression. An analysis of data from a school effectiveness study in Flanders (Belgium) is presented as an example of the application of these methods. In the example, it is evaluated whether the concepts “academic self-concept,” “well-being at school,” and “attentiveness in the classroom” were constant during the secondary school period.
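The generalized partial credit model mentioned above assigns probabilities to ordered response categories. A minimal Python sketch under one common parameterization (discrimination a and step parameters b_v; the function name and parameterization are illustrative assumptions, not taken from the paper):

```python
import math

def gpcm_probs(theta, a, steps):
    """Generalized partial credit model: probabilities for categories
    0..m given ability theta, discrimination a, and step parameters
    steps = [b_1, ..., b_m]."""
    logits = [0.0]  # cumulative logit for category 0
    for b in steps:
        logits.append(logits[-1] + a * (theta - b))
    denom = sum(math.exp(z) for z in logits)
    return [math.exp(z) / denom for z in logits]

# With a single step parameter the model reduces to a 2PL item:
print(gpcm_probs(0.0, 1.0, [0.0]))  # → [0.5, 0.5]
```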


Behavior Genetics | 2007

Variance Decomposition Using an IRT Measurement Model

Stéphanie Martine van den Berg; Cees A. W. Glas; Dorret I. Boomsma

Large scale research projects in behaviour genetics and genetic epidemiology are often based on questionnaire or interview data. Typically, a number of items is presented to a number of subjects, the subjects’ sum scores on the items are computed, and the variance of sum scores is decomposed into a number of variance components. This paper discusses several disadvantages of the approach of analysing sum scores, such as the attenuation of correlations amongst sum scores due to their unreliability. It is shown that the framework of Item Response Theory (IRT) offers a solution to most of these problems. We argue that an IRT approach in combination with Markov chain Monte Carlo (MCMC) estimation provides a flexible and efficient framework for modelling behavioural phenotypes. Next, we use data simulation to illustrate the potentially huge bias in estimating variance components on the basis of sum scores. We then apply the IRT approach with an analysis of attention problems in young adult twins where the variance decomposition model is extended with an IRT measurement model. We show that when estimating an IRT measurement model and a variance decomposition model simultaneously, the estimate for the heritability of attention problems increases from 40% (based on sum scores) to 73%.
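The attenuation problem described above is classical: correlations between unreliable sum scores understate the correlation between the underlying traits. Spearman's correction for attenuation, from classical test theory (shown here only to illustrate the size of the effect; it is not the paper's IRT approach), makes this concrete:

```python
import math

def disattenuate(r_observed, reliability_x, reliability_y):
    """Spearman's correction for attenuation: the estimated true-score
    correlation is r_xy / sqrt(rel_x * rel_y)."""
    return r_observed / math.sqrt(reliability_x * reliability_y)

# Sum scores with reliability 0.64 shrink an underlying correlation
# of 0.5 down to an observed 0.32:
print(disattenuate(0.32, 0.64, 0.64))  # → 0.5
```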


Journal of Documentation | 2010

Developing scales for information‐seeking behaviour

Caroline F. Timmers; Cees A. W. Glas

Purpose – The main purpose of this paper is to describe the development of an instrument designed to measure the information-seeking behaviour of undergraduate students during study assignments. Design/methodology/approach – Literature research, internal consistency and reliability computed with Cronbach's alpha (α), factor analysis with Varimax rotation, and item response theory form the approach to examining the subject. Findings – Four scales were found within a 46-item survey on information-seeking behaviour: a ten-item scale for applying search strategies (α = 0.68), a 14-item scale for evaluating information (α = 0.74), a six-item scale for referring to information (α = 0.81), and a 12-item scale for regulation activities when seeking information (α = 0.75). Originality/value – The four scales for information-seeking behaviour can be used to monitor and evaluate this behaviour of students in higher education.
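Cronbach's alpha, used above for the internal-consistency estimates, can be computed directly from a respondents-by-items score matrix. A minimal sketch (the function name is mine; it assumes sample variances with the n−1 denominator):

```python
def cronbach_alpha(scores):
    """Cronbach's alpha for a list of respondents, each a list of item
    scores: alpha = k/(k-1) * (1 - sum(item variances) / variance of
    total scores)."""
    n_items = len(scores[0])

    def var(xs):
        # Sample variance with the n-1 denominator.
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    item_vars = [var([r[j] for r in scores]) for j in range(n_items)]
    total_var = var([sum(r) for r in scores])  # assumed nonzero
    return n_items / (n_items - 1) * (1.0 - sum(item_vars) / total_var)

# Two perfectly correlated items give alpha = 1:
print(cronbach_alpha([[1, 1], [2, 2], [3, 3]]))  # → 1.0
```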


Essays on Item Response Theory | 2001

Differential Item Functioning Depending on General Covariates

Cees A. W. Glas

Item response theory (IRT) is a powerful tool for the detection of differential item functioning (DIF). It is shown that the class of IRT models with manifest predictors is a comprehensive framework for the detection of DIF. These models also support the investigation of the causes of DIF. In principle, the responses to every item in a test can be subject to DIF, and traditional IRT-based detection methods require one or more estimation runs for every single item. Therefore, Glas (1998) proposed an alternative procedure that can be performed using only a single estimate of the item parameters. This procedure is based on the Lagrange multiplier test or the equivalent Rao efficient score test. In this chapter, the procedure is generalized in various directions, the most important one being the possibility of conditioning on general covariates. A small simulation study is presented to give an impression of the power of the test. An example using real data shows how the method can be applied to the identification of main and interaction effects in DIF.


Journal of Clinical Epidemiology | 2010

The use of an item response theory-based disability item bank across diseases: accounting for differential item functioning

Nadine Weisscher; Cees A. W. Glas; Marinus Vermeulen; Rob J. de Haan

OBJECTIVE There is no single universally accepted activity of daily living (ADL) instrument available to compare disability assessments across different patient groups. We developed a generic item bank of ADL items using item response theory, the Academic Medical Center Linear Disability Scale (ALDS). When comparing outcomes of the ALDS between patient groups, item characteristics of the ALDS should be comparable across groups. The aim of the study was to assess differential item functioning (DIF) in a group of patients with various disorders to investigate comparability across these groups. STUDY DESIGN AND SETTING Cross-sectional, multicenter study including 1,283 in- and outpatients with a variety of disorders and disability levels. The sample was divided into two groups: (1) mainly neurological patients (n=497; vascular medicine, Parkinson's disease, and neuromuscular disorders) and (2) patients from internal medicine (n=786; pulmonary diseases, chronic pain, rheumatoid arthritis, and geriatric patients). RESULTS Eighteen of 72 ALDS items showed statistically significant DIF (P<0.01). However, the DIF could effectively be modeled by the introduction of disease-specific parameters. CONCLUSION In the subgroups studied, DIF could be modeled in such a way that the ensemble of the items comprised a scale applicable in both groups.


Neurology | 2007

The AMC Linear Disability Score in patients with newly diagnosed Parkinson disease

Nadine Weisscher; Bart Post; R.J. de Haan; Cees A. W. Glas; J. D. Speelman; M. Vermeulen

Objective: The aim of this study was to examine the clinimetric properties of the AMC Linear Disability Score (ALDS), a new generic disability measure based on item response theory, in patients with newly diagnosed Parkinson disease (PD). Methods: A sample of 132 patients with PD was evaluated using the Hoehn and Yahr (H&Y) staging, the Unified PD Rating Scale motor examination, the Schwab and England scale (S&E), the Short Form-36, the PD Quality of Life Questionnaire, and the ALDS. Results: The internal consistency reliability of the ALDS was good (α = 0.95), with 55 items exceeding the criterion for a sufficient item-total correlation (r > 0.20). The ALDS was correlated with other disability measures (r = 0.50 to 0.63) and less strongly associated with measures reflecting impairments (r = 0.36 to 0.37) and mental health (r = 0.23 to −0.01). With regard to known-groups validity, the ALDS indicated that patients with more severe PD (H&Y stage 3) were more disabled than patients with mild (H&Y stage 1) or moderate PD (H&Y stage 2) (p < 0.0001). The ALDS discriminated between more and less severe extrapyramidal symptoms (p = 0.001), and patients with postural instability showed lower ALDS scores than patients without postural instability (p < 0.0001). Compared with the S&E, on which 19% of patients obtained the maximum score of 100%, the ALDS showed less of a ceiling effect (5%). Conclusion: The AMC Linear Disability Score is a flexible, feasible, and clinimetrically promising instrument to assess the level of disability in patients with newly diagnosed Parkinson disease.


Journal of Educational and Behavioral Statistics | 2000

Detection of Known Items in Adaptive Testing with a Statistical Quality Control Method

Wim J. J. Veerkamp; Cees A. W. Glas

Due to previous exposure of items in adaptive testing, items may become known to a substantial portion of examinees. A disclosed item is bound to show drift in its item parameter values. This paper suggests using a statistical quality control method for the detection of known items. The method is worked out in detail for the 1-PL and 3-PL models. Adaptive test data are used to re-estimate the item parameters, and these estimates are used in a test of parameter drift. The method is illustrated in a number of simulation studies, including a power study.
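The quality-control idea can be sketched as a simple control-chart rule: flag an item when its re-estimated parameter drifts more than k standard errors from the calibrated value. This is a toy illustration under my own naming, not the paper's actual procedure, which is worked out for the 1-PL and 3-PL models:

```python
def flag_drift(calibrated, reestimated, std_errors, k=3.0):
    """Return one boolean per item: True where the re-estimated item
    parameter lies more than k standard errors from its calibrated
    value, suggesting the item may have become known."""
    return [abs(new - old) / se > k
            for old, new, se in zip(calibrated, reestimated, std_errors)]

# The second item's difficulty has drifted by 5 standard errors,
# so it is flagged:
print(flag_drift([0.0, -1.2], [0.1, -0.2], [0.2, 0.2]))  # → [False, True]
```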
