Miguel A. Sorrel
Autonomous University of Madrid
Publications
Featured research published by Miguel A. Sorrel.
Psychological Assessment | 2016
Francisco J. Abad; Miguel A. Sorrel; Francisco J. Román; Roberto Colom
IQ summary scores may not involve equivalent psychological meaning for different educational levels. Ultimately, this relates to the distinction between constructs and measurements. Here, we explore this issue studying the standardization of the Wechsler Adult Intelligence Scale-Fourth Edition (WAIS-IV) for Spain. A representative sample of 743 individuals (374 females and 369 males) who completed the 15 subtests comprising this intelligence battery was considered. We analyzed (a) the best latent factor structure for modeling WAIS-IV subtest performance, (b) measurement invariance across educational levels, and (c) the relationships of educational level/attainment with latent factors, Full Scale IQ (FSIQ), and index factor scores. These were the main findings: (a) the bifactor model provides the best fit; (b) there is partial invariance, and therefore it is concluded that the battery is a proper measure of the constructs of interest for the educational levels analyzed (nevertheless, the relevance of g decreases at high educational levels); (c) at the latent level, g and, to a lesser extent, Verbal Comprehension and Processing Speed, are positively related to educational level/attainment; (d) despite the previous finding, we find that Verbal Comprehension and Processing Speed factor index scores have reduced incremental validity beyond FSIQ; and (e) FSIQ is a slightly biased measure of g.
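For illustration only (this is not the authors' analysis code), the following minimal numpy sketch shows the bifactor structure referenced above: every subtest loads on a general factor g plus one orthogonal group factor, and the model-implied covariance matrix follows from the loadings. The six-subtest grouping and all loading values are hypothetical.

    import numpy as np

    # Hypothetical bifactor loading matrix for 6 subtests:
    # column 0 = general factor (g); columns 1-2 = orthogonal group factors
    # (say, a verbal and a speed group factor). Values are illustrative only.
    Lambda = np.array([
        [0.75, 0.40, 0.00],
        [0.70, 0.35, 0.00],
        [0.65, 0.30, 0.00],
        [0.60, 0.00, 0.45],
        [0.55, 0.00, 0.50],
        [0.50, 0.00, 0.40],
    ])

    # Uniquenesses chosen so each standardized subtest has unit variance.
    Theta = np.diag(1.0 - np.sum(Lambda**2, axis=1))

    # Model-implied covariance: Sigma = Lambda Lambda' + Theta
    # (factors are standardized and mutually orthogonal in a bifactor model).
    Sigma = Lambda @ Lambda.T + Theta

    # Explained common variance (ECV) attributable to g, one way to gauge
    # how dominant the general factor is relative to the group factors.
    ecv_g = np.sum(Lambda[:, 0]**2) / np.sum(Lambda**2)
    print(Sigma.round(2), ecv_g.round(2))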
Organizational Research Methods | 2016
Miguel A. Sorrel; Julio Olea; Francisco J. Abad; Jimmy de la Torre; David Aguado; Filip Lievens
Conventional methods for assessing the validity and reliability of situational judgment test (SJT) scores have proven to be inadequate. For example, factor analysis techniques typically lead to nonsensical solutions, and assumptions underlying Cronbach’s alpha coefficient are violated due to the multidimensional nature of SJTs. In the current article, we describe how cognitive diagnosis models (CDMs) provide a new approach that not only overcomes these limitations but also offers extra advantages for scoring and better understanding SJTs. The analysis of the Q-matrix specification, model fit, and model parameter estimates provides a greater wealth of information than traditional procedures do. Our proposal is illustrated using data taken from a 23-item SJT that presents situations about student-related issues. Results show that CDMs are useful tools for scoring tests, like SJTs, in which multiple knowledge, skills, abilities, and other characteristics are required to correctly answer the items. SJT classifications were reliable and significantly related to theoretically relevant variables. We conclude that CDMs might help explore the nature of the constructs underlying SJTs, one of the principal challenges in SJT research.
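To make the Q-matrix and CDM scoring logic concrete, here is a minimal sketch of the DINA model, one reduced member of the CDM family (the article evaluates several models; the Q-matrix and item parameters below are hypothetical and do not come from the 23-item SJT).

    import numpy as np

    # Hypothetical Q-matrix: 4 items x 3 attributes; Q[j, k] = 1 means
    # item j requires attribute k (a knowledge, skill, or other characteristic).
    Q = np.array([
        [1, 0, 0],
        [1, 1, 0],
        [0, 1, 1],
        [1, 0, 1],
    ])

    # Hypothetical DINA item parameters: guessing (g) and slip (s) per item.
    guess = np.array([0.20, 0.15, 0.25, 0.10])
    slip = np.array([0.10, 0.20, 0.15, 0.10])

    def dina_prob(alpha):
        """P(correct) for each item given a binary attribute profile alpha.

        eta_j = 1 only if the respondent masters every attribute item j requires;
        then P(X_j = 1) = (1 - s_j)**eta_j * g_j**(1 - eta_j).
        """
        eta = np.all(alpha >= Q, axis=1).astype(float)
        return (1 - slip) ** eta * guess ** (1 - eta)

    # Example: a respondent mastering the first two attributes but not the third.
    print(dina_prob(np.array([1, 1, 0])))

Under a model like this, scoring amounts to finding the attribute profile whose implied response probabilities best account for an examinee's observed responses, which is what yields the attribute classifications discussed in the abstract.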
International Journal of Selection and Assessment | 2015
Filip Lievens; Jan Corstjens; Miguel A. Sorrel; Francisco J. Abad; Julio Olea; Vicente Ponsoda
Despite the globalization of HRM, there is a dearth of research on the potential use of contextualized selection instruments such as situational judgment tests (SJTs) in countries other than those where the instruments were originally developed. Therefore, two studies were conducted to examine the transportability of an integrity SJT originally developed in the United States to a Spanish context. Study 1 showed that most SJT scenarios (16 out of 19) developed in the United States were also considered realistic in a Spanish context. In Study 2, the item option endorsement patterns converged with the original scoring scheme, with the exception of two items. In addition, there were high correlations between the original US empirical scoring scheme and two empirical scoring schemes that were tailored to the Spanish context (i.e., mode consensus scoring and proportional consensus scoring). Finally, correlations between the SJT integrity scores and ratings on a self-report integrity measure did not differ significantly from each other according to the type of scoring key (original US scoring vs. Spanish scoring keys). Overall, these results shed light on potential issues and solutions related to the cross-cultural use of contextualized selection instruments such as SJTs.
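As a sketch of the two consensus-based keys mentioned above, under the usual conventions assumed here (not quoted from the article): mode consensus scoring credits a response only when it matches the sample's modal option, and proportional consensus scoring credits it with the proportion of the sample endorsing that option. The data below are simulated.

    import numpy as np

    # Hypothetical responses: rows = respondents, columns = SJT items,
    # values = chosen option index (0-3) for each item.
    rng = np.random.default_rng(0)
    responses = rng.integers(0, 4, size=(200, 5))

    def consensus_keys(responses, n_options=4):
        """Derive mode- and proportion-based keys from the sample itself."""
        n_resp, n_items = responses.shape
        # Proportion of the sample endorsing each option, per item.
        props = np.stack([
            np.bincount(responses[:, j], minlength=n_options) / n_resp
            for j in range(n_items)
        ])                              # shape: (n_items, n_options)
        modal_option = props.argmax(axis=1)
        return modal_option, props

    def score(responses, modal_option, props):
        # Mode consensus: 1 point whenever the chosen option is the modal one.
        mode_scores = (responses == modal_option).sum(axis=1)
        # Proportional consensus: sum of endorsement proportions of chosen options.
        prop_scores = props[np.arange(props.shape[0]), responses].sum(axis=1)
        return mode_scores, prop_scores

    modal_option, props = consensus_keys(responses)
    mode_scores, prop_scores = score(responses, modal_option, props)
    print(np.corrcoef(mode_scores, prop_scores)[0, 1])

With real SJT data, the keys would typically be derived in one sample and applied to another to avoid capitalizing on chance.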
Applied Psychological Measurement | 2017
Miguel A. Sorrel; Francisco J. Abad; Julio Olea; Jimmy de la Torre; Juan Ramón Barrada
Research related to the fit evaluation at the item level involving cognitive diagnosis models (CDMs) has been scarce. According to the parsimony principle, balancing goodness of fit against model complexity is necessary. General CDMs require a larger sample size to be estimated reliably, and can lead to worse attribute classification accuracy than the appropriate reduced models when the sample size is small and the item quality is poor, which is typically the case in many empirical applications. The main purpose of this study was to systematically examine the statistical properties of four inferential item-fit statistics: S − X², the likelihood ratio (LR) test, the Wald (W) test, and the Lagrange multiplier (LM) test. To evaluate the performance of the statistics, a comprehensive set of factors, namely, sample size, correlational structure, test length, item quality, and generating model, is systematically manipulated using Monte Carlo methods. Results show that the S − X² statistic has unacceptable power. Type I error and power comparisons favor LR and W tests over the LM test. However, all the statistics are highly affected by the item quality. With a few exceptions, their performance is only acceptable when the item quality is high. In some cases, this effect can be ameliorated by an increase in sample size and test length. This implies that using the above statistics to assess item fit in practical settings when the item quality is low remains a challenge.
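As an illustration of the likelihood ratio logic behind one of these item-fit statistics (a sketch, not the study's simulation code), the LR test compares, for a single item, the general CDM with a reduced model nested within it, and refers twice the log-likelihood difference to a chi-square distribution with degrees of freedom equal to the difference in the number of item parameters. The numbers below are hypothetical.

    from scipy.stats import chi2

    def lr_item_test(loglik_general, loglik_reduced, df_diff):
        """Likelihood ratio test for one item.

        loglik_general: maximized log-likelihood when the item follows the
        general CDM; loglik_reduced: log-likelihood under the reduced model
        nested within it; df_diff: difference in number of item parameters.
        """
        lr = 2.0 * (loglik_general - loglik_reduced)
        p_value = chi2.sf(lr, df_diff)
        return lr, p_value

    # Hypothetical log-likelihoods for a two-attribute item: the general model
    # has 4 item parameters, the reduced model (e.g., DINA) has 2.
    lr, p = lr_item_test(loglik_general=-1030.4, loglik_reduced=-1033.1, df_diff=2)
    print(f"LR = {lr:.2f}, p = {p:.4f}")  # reject the reduced model if p < alpha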
Assessment | 2016
Francisco J. Abad; Miguel A. Sorrel; Luis F. García; Anton Aluja
Contemporary models of personality assume a hierarchical structure in which broader traits contain narrower traits. Individual differences in response styles also constitute a source of score variance. In this study, the bifactor model is applied to separate these sources of variance for personality subscores. The procedure is illustrated using data for two personality inventories: the NEO Personality Inventory–Revised and the Zuckerman–Kuhlman–Aluja Personality Questionnaire. The inclusion of the acquiescence method factor generally improved the fit to acceptable levels for the Zuckerman–Kuhlman–Aluja Personality Questionnaire, but not for the NEO Personality Inventory–Revised. This effect was larger in subscales where the numbers of direct and reversed items are not balanced. Loadings on the specific factors were usually smaller than the loadings on the general factor. In some cases, part of a subscale's variance was due to domains other than its main one. This information is of particular interest to researchers because they can identify which subscale scores have more potential to increase predictive validity.
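As an illustration of the acquiescence method factor described above (hypothetical loadings, not the inventories' actual structure), acquiescence can be modeled as an extra factor on which every item loads equally, while content loadings keep the sign of each item's keying; in an unbalanced subscale, omitting this factor lets the shared acquiescence covariance be absorbed by the content factor.

    import numpy as np

    # Hypothetical single content factor with 4 direct and 2 reversed items
    # (an unbalanced subscale). Content loadings carry the keying sign when
    # raw (non-recoded) responses are modeled.
    content = np.array([0.6, 0.6, 0.6, 0.6, -0.6, -0.6])

    # Acquiescence method factor: the same positive loading on every item,
    # regardless of keying direction.
    acquiescence = np.full(6, 0.3)

    Lambda = np.column_stack([content, acquiescence])
    Theta = np.diag(1.0 - np.sum(Lambda**2, axis=1))

    # Implied covariances with and without the method factor (orthogonal
    # factors assumed). The difference is the covariance contributed by
    # acquiescence, which an unbalanced scale misattributes to content
    # when the method factor is left out.
    Sigma_full = Lambda @ Lambda.T + Theta
    Sigma_no_acq = np.outer(content, content) + Theta
    print((Sigma_full - Sigma_no_acq).round(2))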
Journal of Environmental Psychology | 2016
Silvia Collado; Henk Staats; Miguel A. Sorrel
Journal of Environmental Psychology | 2015
Silvia Collado; Gary W. Evans; José Antonio Corraliza; Miguel A. Sorrel
Methodology | 2017
Miguel A. Sorrel; Jimmy de la Torre; Francisco J. Abad; Julio Olea
Journal of Environmental Psychology | 2017
Silvia Collado; Gary W. Evans; Miguel A. Sorrel