Publication


Featured research published by Jean-Paul Fox.


Psychometrika | 2001

Bayesian Estimation of a Multilevel IRT Model Using Gibbs Sampling.

Jean-Paul Fox; Cees A. W. Glas

In this article, a two-level regression model is imposed on the ability parameters in an item response theory (IRT) model. The advantage of using latent rather than observed scores as dependent variables of a multilevel model is that it offers the possibility of separating the influence of item difficulty and ability level and modeling response variation and measurement error. Another advantage is that, contrary to observed scores, latent scores are test-independent, which offers the possibility of using results from different tests in one analysis where the parameters of the IRT model and the multilevel model can be concurrently estimated. The two-parameter normal ogive model is used for the IRT measurement model. It will be shown that the parameters of the two-parameter normal ogive model and the multilevel model can be estimated in a Bayesian framework using Gibbs sampling. Examples using simulated and real data are given.
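
As a rough sketch of the kind of model described here (notation and covariates are illustrative, not taken from the article), the two-parameter normal ogive measurement model combined with a two-level regression on ability can be written as

P(Y_{ijk} = 1 \mid \theta_{ij}) = \Phi(a_k \theta_{ij} - b_k),
\theta_{ij} = \beta_{0j} + \beta_{1j} x_{ij} + e_{ij}, \qquad e_{ij} \sim N(0, \sigma^2),
\beta_{qj} = \gamma_{q0} + \gamma_{q1} w_j + u_{qj}, \qquad u_j \sim N(0, \mathbf{T}),

for person i in group j and item k. Gibbs sampling then draws each block of parameters from its full conditional distribution, typically after augmenting the binary responses with underlying normal variables.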


Journal of Marketing Research | 2008

Using Item Response Theory to Measure Extreme Response Style in Marketing Research: A Global Investigation

Martijn G. de Jong; Jan-Benedict E. M. Steenkamp; Jean-Paul Fox; Hans Baumgartner

Extreme response style (ERS) is an important threat to the validity of survey-based marketing research. In this article, the authors present a new item response theory–based model for measuring ERS. This model contributes to the ERS literature in two ways. First, the method improves on existing procedures by allowing different items to be differentially useful for measuring ERS and by accommodating the possibility that an item's usefulness differs across groups (e.g., countries). Second, the model integrates an advanced item response theory measurement model with a structural hierarchical model for studying antecedents of ERS. The authors simultaneously estimate a person's ERS score and individual- and group-level (country) drivers of ERS. Through simulations, they show that the new method improves on traditional procedures. They further apply the model to a large data set consisting of 12,506 consumers from 26 countries on four continents. The findings show that the model extensions are necessary to model the data adequately. Finally, they report substantive results about the effects of sociodemographic and national-cultural variables on ERS.


Journal of Consumer Research | 2007

Relaxing Measurement Invariance in Cross-National Consumer Research Using a Hierarchical IRT Model

Martijn G. de Jong; Jan-Benedict E. M. Steenkamp; Jean-Paul Fox

With the growing interest of consumer researchers in testing measures and theories in an international context, the cross-national invariance of measurement instruments has become an important issue. At least two issues still need to be addressed. First, the ordinal nature of the rating scale is ignored. Second, when few or no items in the confirmatory factor analysis (CFA) exhibit metric and scalar invariance across all countries, comparison of results across countries is difficult. We solve these problems using a hierarchical IRT model. An empirical application is provided for susceptibility to normative influence, using a sample of 5,484 respondents from 11 countries on four continents.
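
One way to write the core idea (with illustrative notation): let item parameters be country-specific but drawn from common distributions, so that, for person i in country j and item k with ordinal response categories c,

P(Y_{ijk} \ge c \mid \theta_{ij}) = \Phi(a_{jk} \theta_{ij} - b_{jk,c}),
a_{jk} \sim N(\mu_{a_k}, \sigma_{a_k}^2), \qquad b_{jk,c} \sim N(\mu_{b_{k,c}}, \sigma_{b_k}^2).

Instead of requiring exactly equal loadings and intercepts across countries (full metric and scalar invariance), the hierarchical structure lets item parameters vary around country-invariant means, which keeps the latent scale comparable across countries.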


Psychometrika | 2003

Bayesian modeling of measurement error in predictor variables using item response theory

Jean-Paul Fox; Cees A. W. Glas

It is shown that measurement error in predictor variables can be modeled using item response theory (IRT). The predictor variables, which may be defined at any level of a hierarchical regression model, are treated as latent variables. The normal ogive model is used to describe the relation between the latent variables and dichotomous observed variables, which may be responses to tests or questionnaires. It will be shown that the multilevel model with measurement error in the observed predictor variables can be estimated in a Bayesian framework using Gibbs sampling. In this article, handling measurement error via the normal ogive model is compared with alternative approaches using the classical true score model. Examples using real data are given.
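
A minimal sketch of the setup (notation assumed here, not taken from the article): dichotomous indicators X_{ik} measure a latent predictor \xi_i through a normal ogive model, and the latent predictor, rather than an error-prone observed score, enters the regression of interest:

P(X_{ik} = 1 \mid \xi_i) = \Phi(a_k \xi_i - b_k),
y_i = \beta_0 + \beta_1 \xi_i + e_i, \qquad e_i \sim N(0, \sigma^2).

In the multilevel case the same latent variables can appear as predictors at any level of the hierarchy, and all parameters are again sampled jointly with Gibbs sampling.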


Journal of Marketing Research | 2010

Reducing social desirability bias through item randomized response: An application to measure underreported desires

Martijn G. de Jong; Rik Pieters; Jean-Paul Fox

The authors present a polytomous item randomized response model to measure socially sensitive consumer behavior. It complements established methods in marketing to correct for social desirability bias a posteriori and traditional randomized response models to prevent social desirability bias a priori. The model allows for individual-level inferences at the construct level while protecting the privacy of respondents at the item level. In addition, it is possible to incorporate covariates into various parts of the model. The proposed method is especially useful to study social issues in marketing. In the empirical application, the authors use a two-group experimental survey design and find that with the new procedure, participants report their sensitive desires more truthfully, with significant differences between socioeconomic groups. In addition, the method performs better than methods based on social desirability scales. Finally, the authors discuss truthfulness in data collection and confidentiality in data utilization.


Psychometrika | 2009

A multivariate multilevel approach to the modeling of accuracy and speed of test takers

R.H. Klein Entink; Jean-Paul Fox; W. J. van der Linden

Response times on test items are easily collected in modern computerized testing. When collecting both (binary) responses and (continuous) response times on test items, it is possible to measure the accuracy and speed of test takers. To study the relationships between these two constructs, a joint model for responses and response times is extended with a multivariate multilevel regression structure that allows the incorporation of covariates to explain the variance in speed and accuracy between individuals and groups of test takers. A Bayesian approach with Markov chain Monte Carlo (MCMC) computation enables straightforward estimation of all model parameters. Model-specific implementations of a Bayes factor (BF) and deviance information criterion (DIC) for model selection are proposed, which are easily calculated as byproducts of the MCMC computation. Results from both simulation studies and real-data examples are given to illustrate several novel analyses possible with this modeling framework.
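
A minimal sketch of such a joint model (notation illustrative): accuracy follows a normal ogive item response model and speed a lognormal response time model,

P(Y_{ik} = 1 \mid \theta_i) = \Phi(a_k \theta_i - b_k),
\ln T_{ik} = \lambda_k - \tau_i + \varepsilon_{ik}, \qquad \varepsilon_{ik} \sim N(0, \sigma_k^2),

and the person parameters (\theta_i, \tau_i) are in turn regressed on covariates in a multivariate multilevel model, for example (\theta_{ij}, \tau_{ij})' = \Gamma w_{ij} + u_{ij} with u_{ij} \sim N(0, \Sigma_P) for person i in group j.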


Applied Psychological Measurement | 2010

IRT parameter estimation with response times as collateral information

Wim J. van der Linden; Rinke Klein Entink; Jean-Paul Fox

Hierarchical modeling of responses and response times on test items facilitates the use of response times as collateral information in the estimation of the response parameters. In addition to the regular information in the response data, two sources of collateral information are identified: (a) the joint information in the responses and the response times summarized in the estimates of the second-level parameters and (b) the information in the posterior distribution of the response parameters given the response times. The latter is shown to be a natural empirical prior distribution for the estimation of the response parameters. Unlike traditional hierarchical item response theory (IRT) modeling, where the gain in estimation accuracy is typically paid for by an increase in bias, use of this posterior predictive distribution improves the accuracy and reduces the bias of IRT parameter estimates. In an empirical study, the improvements are demonstrated for the estimation of the person and item parameters in a three-parameter response model.


Psychological Methods | 2009

Evaluating cognitive theory: A joint modeling approach using responses and response times

Rinke Klein Entink; Jörg-Tobias Kuhn; Lutz F. Hornke; Jean-Paul Fox

In current psychological research, the analysis of data from computer-based assessments or experiments is often confined to accuracy scores. Response times, although an important source of additional information, are either neglected or analyzed separately. In this article, a new model is developed that allows the simultaneous analysis of accuracy scores and response times of cognitive tests with a rule-based design. The model is capable of simultaneously estimating ability and speed on the person side as well as difficulty and time intensity on the task side, thus dissociating information that is often confounded in current analysis procedures. Further, by integrating design matrices on the task side, it becomes possible to assess the effects of design parameters (e.g., cognitive processes) on both task difficulty and time intensity, offering deeper insights into the task structure. A Bayesian approach, using Markov chain Monte Carlo methods, has been developed to estimate the model. An application of the model in the context of educational assessment is illustrated using a large-scale investigation of figural reasoning ability.
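
A compact way to express the design-matrix part (illustrative notation, in the spirit of the linear logistic test model): with q_{km} indicating whether cognitive operation m is required by task k,

b_k = \sum_m q_{km} \beta_m, \qquad \lambda_k = \sum_m q_{km} \delta_m,

so each design feature contributes both to task difficulty b_k and to time intensity \lambda_k, while persons are still characterized by ability and speed as in the joint models above.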


Journal of Educational and Behavioral Statistics | 2005

Randomized Item Response Theory Models

Jean-Paul Fox

The randomized response (RR) technique is often used to obtain answers to sensitive questions. A new method is developed to measure latent variables using the RR technique because direct questioning leads to biased results. Within the RR technique, the probability of the true response is modeled by an item response theory (IRT) model. The RR technique links the observed item response with the true item response. Attitudes can be measured without knowing the true individual answers. This approach also makes a hierarchical analysis with explanatory variables possible, given observed RR data. All model parameters can be estimated simultaneously using Markov chain Monte Carlo. The randomized item response technique was applied in a study on cheating behavior of students at a Dutch university. In this study, it is of interest whether students’ cheating behavior differs across studies and whether there are indicators that can explain differences in cheating behaviors.
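
As an illustration of how the RR link works under a forced-response design (the randomizing device and its probabilities are assumptions here, not details from the study): if a respondent answers truthfully with probability p_1 and is otherwise forced to answer "yes" with probability p_2, then

P(\tilde{Y}_{ik} = 1 \mid \theta_i) = p_1 P(Y_{ik} = 1 \mid \theta_i) + (1 - p_1) p_2,

where the true-response probability P(Y_{ik} = 1 \mid \theta_i) follows an IRT model such as \Phi(a_k \theta_i - b_k). Because the misclassification probabilities are known by design, the latent attitude remains estimable without ever observing individual true answers.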


British Journal of Mathematical and Statistical Psychology | 2009

A Box–Cox normal model for response times

R.H. Klein Entink; Jean-Paul Fox; W. J. van der Linden

The log-transform has been a convenient choice in response time modelling on test items. However, motivated by a dataset from the Medical College Admission Test in which the lognormal model violated the normality assumption, the possibilities of the broader class of Box-Cox transformations for response time modelling are investigated. After an introduction and an outline of a broader framework for analysing responses and response times simultaneously, the performance of a Box-Cox normal model for describing response times is investigated using simulation studies and a real data example. A transformation-invariant implementation of the deviance information criterion (DIC) is developed that allows for comparing model fit between models with different transformation parameters. The model gives an enhanced description of the shape of the response time distributions, and its application in an educational measurement context is discussed at length.
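
For reference, the Box-Cox family replaces the fixed log-transform with a transformation parameter (written \nu here to avoid clashing with the time-intensity parameters used in the sketches above):

t^{(\nu)} = (t^{\nu} - 1)/\nu for \nu \neq 0, \qquad t^{(\nu)} = \ln t for \nu = 0,

and the transformed response times are modelled as normal, for example t_{ik}^{(\nu)} \sim N(\lambda_k - \tau_i, \sigma_k^2), so the lognormal model is recovered as the special case \nu = 0.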

Collaboration


Dive into Jean-Paul Fox's collaborations.

Top Co-Authors

Martijn G. de Jong

Erasmus University Rotterdam
