H.L.J. van der Maas
University of Amsterdam
Publication
Featured research published by H.L.J. van der Maas.
Computers & Education | 2011
S. Klinkenberg; M. Straatemeier; H.L.J. van der Maas
In this paper we present a model for computerized adaptive practice and monitoring. This model is used in the Maths Garden, a web-based monitoring system, which includes a challenging web environment for children to practice arithmetic. Using a new item response model based on the Elo (1978) rating system and an explicit scoring rule, estimates of the ability of persons and the difficulty of items are updated with every answered item, allowing for on-the-fly item calibration. In the scoring rule, both accuracy and response time are accounted for. Items are sampled with a mean success probability of .75, making the tasks challenging yet not too difficult. In a period of ten months, our sample of 3648 children completed over 3.5 million arithmetic problems. The children completed about 33% of these problems outside school hours. Results show better measurement precision, high validity and reliability, high pupil satisfaction, and many interesting options for monitoring progress, diagnosing errors and analyzing development.
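A minimal sketch of this kind of Elo-style update and item selection (illustrative only: the function names, the value of the update constant K, and the particular scoring rule below are assumptions, not the exact Maths Garden implementation):

```python
import math

K = 0.4           # assumed step size for the rating update (illustrative value)
TARGET_P = 0.75   # target success probability when sampling items, as in the abstract

def p_correct(theta, beta):
    """Rasch-type probability of a correct response for ability theta and item difficulty beta."""
    return 1.0 / (1.0 + math.exp(-(theta - beta)))

def observed_score(correct, rt, time_limit):
    """Illustrative scoring rule combining accuracy and response time:
    fast correct answers score near +1, fast errors near -1, slow answers near 0."""
    speed = max(0.0, (time_limit - rt) / time_limit)
    return (1.0 if correct else -1.0) * speed

def expected_score(theta, beta):
    """Expected score, rescaled to the [-1, 1] range of the observed score
    (a simplification; the paper derives the expectation from the full model)."""
    return 2.0 * p_correct(theta, beta) - 1.0

def update(theta, beta, correct, rt, time_limit):
    """Elo-style update: ability and difficulty shift in opposite directions,
    proportional to the observed-minus-expected score."""
    delta = K * (observed_score(correct, rt, time_limit) - expected_score(theta, beta))
    return theta + delta, beta - delta

def pick_item(theta, difficulties):
    """Select the item whose predicted success probability is closest to TARGET_P."""
    return min(difficulties, key=lambda i: abs(p_correct(theta, difficulties[i]) - TARGET_P))
```

Calling `update` after every answered item is what makes the calibration on-the-fly: both the child's ability estimate and the item's difficulty estimate change immediately, without a separate calibration study.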
Psychological Medicine | 2016
Denny Borsboom; Mijke Rhemtulla; Angélique O. J. Cramer; H.L.J. van der Maas; Marten Scheffer; Conor V. Dolan
The question of whether psychopathology constructs are discrete kinds or continuous dimensions represents an important issue in clinical psychology and psychiatry. The present paper reviews psychometric modelling approaches that can be used to investigate this question through the application of statistical models. The relation between constructs and indicator variables in models with categorical and continuous latent variables is discussed, as are techniques specifically designed to address the distinction between latent categories as opposed to continua (taxometrics). In addition, we examine latent variable models that allow latent structures to have both continuous and categorical characteristics, such as factor mixture models and grade-of-membership models. Finally, we discuss recent alternative approaches based on network analysis and dynamical systems theory, which entail that the structure of constructs may be continuous for some individuals but categorical for others. Our evaluation of the psychometric literature shows that the kinds-continua distinction is considerably more subtle than is often presupposed in research; in particular, the hypotheses of kinds and continua are not mutually exclusive or exhaustive. We discuss opportunities to go beyond current research on the issue by using dynamical systems models, intra-individual time series and experimental manipulations.
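To make the contrast concrete (a generic psychometric sketch, not equations taken from the paper), a categorical and a continuous latent structure decompose the same binary indicators in structurally different ways:

```latex
% Categorical latent structure: K discrete classes with class-specific response probabilities
P(\mathbf{X} = \mathbf{x}) \;=\; \sum_{k=1}^{K} \pi_k \prod_{j=1}^{J} P(X_j = x_j \mid C = k)

% Continuous latent structure: a latent trait with a logistic (2PL) item model
P(\mathbf{X} = \mathbf{x}) \;=\; \int \prod_{j=1}^{J}
  \frac{\exp\{x_j\,\alpha_j(\theta - \beta_j)\}}{1 + \exp\{\alpha_j(\theta - \beta_j)\}}\, f(\theta)\, d\theta
```

Factor mixture and grade-of-membership models, discussed in the paper, combine elements of both forms, which is why the kinds-continua distinction is not a simple dichotomy.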
Categorical Variables in Developmental Research: Methods of Analysis | 1996
H.L.J. van der Maas; Peter C. M. Molenaar
Multivariate Behavioral Research | 2015
Dylan Molenaar; Francis Tuerlinckx; H.L.J. van der Maas
A generalized linear modeling framework for the analysis of responses and response times is outlined. In this framework, referred to as bivariate generalized linear item response theory (B-GLIRT), separate generalized linear measurement models are specified for the responses and the response times, which are subsequently linked by cross-relations. The cross-relations can take various forms. Here, we focus on cross-relations with a linear or interaction term for ability tests, and cross-relations with a curvilinear term for personality tests. In addition, we discuss how popular existing models from the psychometric literature arise as special cases within the B-GLIRT framework, depending on the restrictions imposed on the cross-relation. This allows us to compare existing models conceptually and empirically. We discuss various extensions of the traditional models motivated by practical problems. We also illustrate the applicability of our approach using various real data examples, including data on personality and cognitive ability.
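A hedged sketch of what such a bivariate specification can look like (generic notation; the particular link functions and cross-relations below are illustrative assumptions rather than the exact B-GLIRT equations):

```latex
% Measurement model for the response of person p to item i (logit link, ability theta)
\mathrm{logit}\, P(X_{pi} = 1 \mid \theta_p) \;=\; \alpha_i \theta_p - \beta_i

% Measurement model for the log response time (identity link, speed tau, normal residual)
\ln T_{pi} \;=\; \lambda_i - \varphi_i \tau_p + \varepsilon_{pi},
\qquad \varepsilon_{pi} \sim N(0, \sigma_i^2)

% Cross-relation linking the two person parameters, for example
\tau_p \;=\; \rho\,\theta_p + \delta_p \quad \text{(linear, ability tests)}
\qquad \text{or} \qquad
\tau_p \;=\; \rho\,\theta_p^2 + \delta_p \quad \text{(curvilinear, personality tests)}
```

Restricting the form of the cross-relation in different ways is what lets existing joint models for responses and response times appear as special cases of the framework.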
Multivariate Behavioral Research | 2018
Maarten Marsman; Denny Borsboom; J. Kruis; Sacha Epskamp; R. van Bork; Lourens J. Waldorp; H.L.J. van der Maas; Gunter Maris
In recent years, network models have been proposed as an alternative representation of psychometric constructs such as depression. In such models, the covariance between observables (e.g., symptoms like depressed mood, feelings of worthlessness, and guilt) is explained in terms of a pattern of causal interactions between these observables, which contrasts with classical interpretations in which the observables are conceptualized as the effects of a reflective latent variable. However, few investigations have been directed at the question of how these different models relate to each other. To shed light on this issue, the current paper explores the relation between one of the most important network models, the Ising model from physics, and one of the most important latent variable models, the Item Response Theory (IRT) model from psychometrics. The Ising model describes the interaction between states of particles that are connected in a network, whereas the IRT model describes the probability distribution associated with item responses in a psychometric test as a function of a latent variable. Despite the divergent backgrounds of the models, we show a broad equivalence between them and also illustrate several opportunities that arise from this connection.
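For reference, the two model classes being connected can be written in their standard textbook forms (not the paper's specific parameterization):

```latex
% Ising model: joint distribution over binary node states x_j, with main effects mu_j
% and pairwise interactions sigma_jk between connected nodes
P(\mathbf{x}) \;=\; \frac{1}{Z} \exp\!\Big( \sum_{j} \mu_j x_j + \sum_{j < k} \sigma_{jk}\, x_j x_k \Big)

% Two-parameter logistic IRT model: item responses as a function of a latent variable theta
P(X_j = 1 \mid \theta) \;=\; \frac{\exp\{\alpha_j(\theta - \beta_j)\}}{1 + \exp\{\alpha_j(\theta - \beta_j)\}}
```

Roughly, the equivalence described in the abstract means that marginalizing a suitable (multidimensional) IRT model over the latent variable can reproduce a joint distribution of the Ising form, so the network and the latent variable representations can describe the same observed data.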
European Journal of Personality | 2013
Marjan Bakker; Angélique O. J. Cramer; Dora Matzke; Rogier A. Kievit; H.L.J. van der Maas; Eric-Jan Wagenmakers; Denny Borsboom
With the growing number of fraudulent and non-replicable experiments performed in laboratories worldwide, the target article proposes practices that investigators may use to increase replicability. We laud the authors for their thoughtful intentions and examine two domains: the structure of psychological science and the issue of generalizability. The former represents a methodological/statistical opportunity.
International Journal of Epidemiology | 2012
Kees-Jan Kan; Dorret I. Boomsma; Conor V. Dolan; H.L.J. van der Maas
Why is Gärtner's paper so interesting? It is for a number of reasons, but the most interesting is not the finding that phenotypic differences among animals still exist after standardization of genotype and environment (described in the first part of Gärtner's paper). These differences are expected to remain to a certain extent, because complete experimental control is impossible. More interesting is how large these differences are. Particularly interesting are the results of Gärtner's experiments in which he attempts to alter the amount of phenotypic variance by varying the amount of variance in environmental conditions and genetic influences (described in the second part). These results suggested the presence of an additional source of phenotypic variance besides the genotype and environment. Long before Gärtner's paper had been published, various researchers had speculated about the existence of such a non-genetic, non-environmental (‘third’) source of variance. These included pioneering geneticists such as Sewall Wright (see the first path diagram ever), Sir Kenneth Mather and Jinks (p. 6) and Douglas Falconer and Mackay (p. 135). As we argue below, although this third source of variance was demonstrated in experimental organisms, it is also relevant to the interpretation of human quantitative (behaviour) genetic results (see also previous research).
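In standard quantitative-genetic notation (a sketch to fix ideas, not an equation from the commentary itself), the argument amounts to adding a term to the usual decomposition of phenotypic variance:

```latex
% Classical decomposition: phenotypic variance = genetic + environmental variance
V_P \;=\; V_G + V_E

% With an additional non-genetic, non-environmental ("third") source of variance
V_P \;=\; V_G + V_E + V_3
```

Gärtner's manipulations of genetic and environmental variance left a substantial share of the phenotypic variance unaccounted for, which is what motivates the extra term.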
Psychological Inquiry | 2016
Kees-Jan Kan; H.L.J. van der Maas; Rogier A. Kievit
Kristof Kovacs and Andrew Conway (this issue) offer a new theory for the positive manifold of intelligence (PM) and thus for the presence of a statistical general factor of intelligence. This aim is highly ambitious and deserves praise, especially if the new theory, process overlap theory (POT), turns out to be true. If so, Kovacs and Conway argue, the general factor of intelligence needs to be regarded as a summary (formally, a constructivist or formative variable) rather than a realistic underlying source of individual differences in cognitive performance (a reflective variable), even in cases where a reflective measurement model is statistically tenable. In this sense, POT contrasts strongly with mainstream theories of intelligence (e.g., Cattell, 1963; Jensen, 1998; Spearman, 1904, 1924), in which the general factor of intelligence is conceptualized as representing a hypothetical yet realistic variable, dubbed g. If g-theory were true, meaning a realistic g indeed exists, then reflective modeling is not only possible but also appropriate. Despite differences in interpretation of the statistical general factor of intelligence, there are also strong commonalities between POT and g-theory. For example, in both theories the subtests' (or items') factor loadings on a general factor of intelligence are a simple function of task complexity: the more complex a task, the higher its loading on the general factor and the better it indicates intelligence. Another example is that in both POT and g-theory the general and fluid intelligence factors are strongly related. Given such commonalities, one may wonder whether the interpretation of the general factor as a realist or a constructivist variable is important, or whether the reflective versus formative measurement approach matters; prediction of work success, health, and other important life outcomes (Gottfredson, 1997) will not change, for instance. In our view the distinction between formative and reflective perspectives does matter, and increasingly so given new insights from various fields. Due to the influence of scientific reductionism, modern studies of intelligence focus increasingly on the neuronal or genetic "basis of intelligence." If the general factor of intelligence is nothing beyond a constructivist variable, the search for a simple neuronal instantiation of g ("neuro-g"; Haier et al., 2009) will not prove fruitful (e.g., Kievit et al., 2012). In addition, in the quest to detect "genes for general intelligence," lack of power will become an even bigger issue than it already is (e.g., van der Sluis, Kan, & Dolan, 2010). In other words, if a constructivist conceptualization of the higher order factor is most appropriate, this informs and constrains our search for neural and genetic antecedents: the most fruitful path in such cases would be to focus on those lower order variables that do allow for a realist, causal interpretation. Comparing the plausibility and merit of scientific theories is a complex challenge, requiring the balancing of many desiderata, including parsimony, explanatory power, internal consistency, falsifiability, and coherence across a range of settings. This is especially challenging in situations where multiple competing theories predict similar or even identical outcomes, as in the preceding examples, which has historically often been the case in the intelligence literature.
Here we focus on what we see as two possibly outstanding challenges for POT: first, internal consistency, and second, how we may go about testing (and therefore supporting or refuting) the model. In examining the consistency of POT across representations of the theory, we follow the authors and make a distinction between the theory as stated verbally (POT-V) and the theory as stated more formally, first as a structural relations model of the interindividual variance–covariance structure among intelligence test scores (POT-Structural Model [POT-S]) and second as a test theoretical model (a multidimensional item response model) in the form of Kovacs and Conway's equation (POT-Item Response Theory [POT-I]). We maintain the following position: if POT is a valid theory, POT-V, POT-S, and POT-I should align and should all explain the PM, hence the existence of a statistical general factor, together as well as individually. In addition, inconsistencies or contradictions between POT-V, POT-S, and POT-I pose a threat to the validity of POT as a whole, or at least require further investigation into which representation of POT should be considered the correct conceptualization. We agree with the authors that a strong theory of intelligence should account for more major findings than the positive manifold alone. Kovacs and Conway (this issue) identify four such findings: (a) the fact that the higher-order general factor of intelligence and the fluid intelligence factor are strongly correlated (e.g., Detterman & Daniel, 1989; Gustafsson, 1984; Kan, Kievit, Dolan, & van der Maas, 2011; Kvist & Gustafsson, 2008); (b) the finding that the positive manifold is stronger at lower levels of intelligence than at higher levels of intelligence (Detterman & Daniel, 1989; Molenaar, Dolan, Wicherts, & van
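To make the reflective-formative contrast concrete (a generic sketch, not the authors' POT-S or POT-I equations), the two readings of the general factor relate to the subtest scores in opposite directions:

```latex
% Reflective (realist g): the latent variable is a common cause of the subtest scores
y_j \;=\; \lambda_j\, g + \varepsilon_j, \qquad j = 1, \dots, J

% Formative (constructivist summary): the composite is defined by the subtest scores
g^{*} \;=\; \sum_{j=1}^{J} w_j\, y_j
```

Both readings are compatible with a positive manifold, which is why, as the commentary notes, the choice between them cannot be settled by statistical fit to the covariance structure alone.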
American Journal of Psychology | 2005
H.L.J. van der Maas; Eric-Jan Wagenmakers
Journal of Mathematical Psychology | 2012
Gilles Dutilh; Don van Ravenzwaaij; Sander Nieuwenhuis; H.L.J. van der Maas; Birte U. Forstmann; Eric-Jan Wagenmakers