Pedro M. Hontangas
University of Valencia
Publications
Featured research published by Pedro M. Hontangas.
Multivariate Behavioral Research | 2000
José M. Tomás; Pedro M. Hontangas; Amparo Oliver
Two models for confirmatory factor analysis of multitrait-multimethod (MTMM) data were assessed: the correlated traits-correlated methods (CTCM) model and the correlated traits-correlated uniquenesses (CTCU) model. Two Monte Carlo experiments (100 replications per cell) were performed to study the behavior of these models in terms of the magnitude and direction of bias and the accuracy of estimates. Study one included a single indicator per trait-method combination and manipulated three independent variables: matrix type (from three traits-three methods to six traits-six methods), correlation among method factors (from zero to .6), and model type (CTCM vs. CTCU). Study two included simulated MTMM matrices with two or more indicators per trait-method combination. Again, three independent variables were manipulated: number of indicators per trait-method combination (from 2 to 5), correlation among methods, and model type (CTCM vs. CTCU). The results from study one showed that the CTCU model performed very well for MTMM designs with a single indicator per trait-method combination, and consistently better than the CTCM model. However, the results from study two showed that the CTCM model worked reasonably well, and better than the CTCU model, when more than two indicators per trait-method combination were available. Although the CTCM model allows for correlations between methods, results pointed to better estimates when methods were orthogonal. The main conclusion of the present article is that CTCU models can be recommended for the situations described in study one and CTCM models for those represented in study two.
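For reference, the two competing specifications can be written in a generic textbook form (the notation is standard, not necessarily the authors' exact one). Let $x_{ij}$ be the indicator for trait $i$ measured by method $j$:
CTCM: $x_{ij} = \lambda_{ij} T_i + \gamma_{ij} M_j + e_{ij}$, with trait factors $T_i$, method factors $M_j$, and uncorrelated uniquenesses $e_{ij}$.
CTCU: $x_{ij} = \lambda_{ij} T_i + e_{ij}$, with no method factors; method effects are instead absorbed into correlated uniquenesses, $\mathrm{Cov}(e_{ij}, e_{i'j}) \neq 0$, among indicators sharing method $j$.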
European Journal of Psychological Assessment | 2007
Víctor J. Rubio; David Aguado; Pedro M. Hontangas; José M. Hernández
Item response theory (IRT) provides valuable methods for the analysis of the psychometric properties of a psychological measure. However, IRT has mainly been used for assessing achievement and ability rather than personality factors. This paper presents an application of IRT to a personality measure: the psychometric properties of a new emotional adjustment measure, consisting of 28 items with six graded response categories, are shown. Classical test theory (CTT) analyses as well as IRT analyses are carried out. Samejima's (1969) graded response model has been used for estimating item parameters. Results show that the bank of items fulfills the model assumptions and fits the data reasonably well, demonstrating the suitability of IRT models for the description and use of data originating from personality measures. In this sense, the model fulfills the expectations that IRT has undoubted advantages: (1) the invariance of the estimated parameters, (2) the treatment given to the standard error of measurement, …
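As a minimal numerical sketch of Samejima's graded response model (the parameter values are illustrative, not taken from the paper):

    import numpy as np

    def grm_category_probs(theta, a, b):
        # Samejima's GRM: cumulative probabilities P*(X >= k) follow a 2PL form,
        # and category probabilities are differences of adjacent cumulative curves.
        p_star = 1.0 / (1.0 + np.exp(-a * (theta - np.asarray(b))))  # K-1 thresholds
        bounds = np.concatenate(([1.0], p_star, [0.0]))  # P(X >= 0) = 1, P(X >= K) = 0
        return bounds[:-1] - bounds[1:]

    # A six-category item, as in the adjustment measure described above
    probs = grm_category_probs(theta=0.5, a=1.2, b=[-2.0, -1.0, 0.0, 1.0, 2.0])
    print(probs, probs.sum())  # six category probabilities; they sum to 1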
Applied Psychological Measurement | 2015
Pedro M. Hontangas; Jimmy de la Torre; Vicente Ponsoda; Iwin Leenen; Daniel Morillo; Francisco J. Abad
This article explores how traditional scores obtained from different forced-choice (FC) formats relate to their true scores and item response theory (IRT) estimates. Three FC formats are considered: from a block of items, respondents are asked to (a) pick the item that describes them most (PICK), (b) choose the two items that describe them the most and the least (MOLE), or (c) rank all the items in order of how well they describe them (RANK). The multi-unidimensional pairwise-preference (MUPP) model, extended to more than two items per block and to the different FC formats, is applied to generate the responses to each item block. Traditional and IRT (i.e., expected a posteriori) scores are computed from each data set and compared. The aim is to clarify the conditions under which simpler traditional scoring procedures for FC formats may be used in place of the more appropriate IRT estimates for the purpose of inter-individual comparisons. Six independent variables are considered: response format, number of items per block, correlation between the dimensions, item discrimination level, and the sign heterogeneity and variability of the item difficulty parameters. Results show that the RANK response format outperforms the other formats for both the IRT estimates and the traditional scores, although it is only slightly better than the MOLE format. The highest correlations between true and traditional scores are found when the test has a large number of blocks, the dimensions assessed are independent, items have high discrimination and highly dispersed location parameters, and the test contains blocks formed by positive and negative items.
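To make the three formats concrete, the sketch below implements one plausible family of traditional scoring rules, crediting points to the dimension measured by each chosen item. The point schemes (k-1 points for a top rank or a "most" choice, midpoint credit for unchosen MOLE items) are illustrative assumptions, not necessarily the rules used in the paper:

    from collections import defaultdict

    def score_rank(block_dims, ranking):
        # RANK: ranking lists item indices from most to least descriptive;
        # the item at position p earns (k - 1) - p points for its dimension.
        k = len(block_dims)
        scores = defaultdict(float)
        for pos, item in enumerate(ranking):
            scores[block_dims[item]] += (k - 1) - pos
        return dict(scores)

    def score_mole(block_dims, most, least):
        # MOLE: only the "most" and "least" choices are observed; unchosen
        # items get midpoint credit (an assumption made for this illustration).
        k = len(block_dims)
        scores = defaultdict(float)
        for item in range(k):
            if item == most:
                scores[block_dims[item]] += k - 1
            elif item == least:
                scores[block_dims[item]] += 0.0
            else:
                scores[block_dims[item]] += (k - 1) / 2
        return dict(scores)

    def score_pick(block_dims, picked):
        # PICK: one point to the dimension of the single item picked.
        return {block_dims[picked]: 1.0}

    # A four-item block whose items measure dimensions E, A, C, and O
    print(score_rank(["E", "A", "C", "O"], ranking=[2, 0, 3, 1]))
    print(score_mole(["E", "A", "C", "O"], most=2, least=1))
    print(score_pick(["E", "A", "C", "O"], picked=2))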
European Journal of Psychological Assessment | 2000
Pedro M. Hontangas; Vicente Ponsoda; Julio Olea; Steven L. Wise
Summary: The difficulty level choices made by examinees during a self-adapted test were studied. A positive correlation between estimated ability and difficulty choice was found. The mean difficulty level selected by the examinees increased nonlinearly as the testing session progressed. Regression analyses showed that the best predictors of difficulty choice were examinee ability, the difficulty of the previous item, and the score on the previous item. Four strategies for selecting difficulty levels were examined, and examinees were classified into subgroups based on the best-fitting strategy. The subgroups differed with regard to ability, pretest anxiety, number of items passed, and mean difficulty level chosen. The self-adapted test was found to reduce state anxiety for only some of the strategy groups.
European Journal of Psychological Assessment | 2004
Pedro M. Hontangas; Julio Olea; Vicente Ponsoda; Javier Revuelta; Steven L. Wise
Abstract: A new type of self-adapted test (S-AT), called the Assisted Self-Adapted Test (AS-AT), is presented. It differs from an ordinary S-AT in that, prior to selecting a difficulty category, the computer advises examinees on their best difficulty-category choice, based on their previous performance. Three tests (a computerized adaptive test [CAT], an AS-AT, and an S-AT) were compared regarding both their psychometric (precision and efficiency) and psychological (anxiety) characteristics. The tests were administered in an actual assessment situation, in which test scores determined 20% of term grades. A sample of 173 high school students participated. No differences in posttest anxiety or in ability were found. Concerning precision, the AS-AT was as precise as the CAT, and both were more precise than the S-AT. It was concluded that the AS-AT behaved as a CAT concerning precision. Some hints of a psychological similarity between the AS-AT and the S-AT, though not conclusive support, were also found.
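The abstract does not spell out the advice rule; purely as an illustration, one natural implementation would recommend the difficulty category whose items are most informative at the examinee's provisional ability estimate, e.g., under a 2PL model (all names and values below are hypothetical):

    import numpy as np

    def fisher_info_2pl(theta, a, b):
        # Fisher information of a 2PL item at ability theta: a^2 * p * (1 - p)
        p = 1.0 / (1.0 + np.exp(-a * (theta - b)))
        return a**2 * p * (1.0 - p)

    def advise_category(theta_hat, category_b, a=1.0):
        # Recommend the category whose typical difficulty maximizes information
        # at the provisional ability estimate (illustrative rule, not the paper's).
        info = [fisher_info_2pl(theta_hat, a, b) for b in category_b]
        return int(np.argmax(info))

    # Five difficulty categories with typical item difficulties from easy to hard
    print(advise_category(theta_hat=0.7, category_b=[-2.0, -1.0, 0.0, 1.0, 2.0]))  # -> 3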
Psicothema | 2016
Pedro M. Hontangas; Iwin Leenen; Jimmy de la Torre; Vicente Ponsoda; Daniel Morillo; Francisco J. Abad
BACKGROUND Forced-choice tests (FCTs) were proposed to minimize the response biases associated with Likert-format items. It remains unclear whether scores based on traditional methods for scoring FCTs are appropriate for between-subjects comparisons. Recently, Hontangas et al. (2015) explored the extent to which traditional scoring of FCTs relates to true scores and IRT estimates. The authors found certain conditions under which traditional scores (TS) can be used with FCTs when the underlying IRT model was an unfolding model. In this study, we examine to what extent those results are preserved when the underlying process is a dominance model. METHOD The independent variables analyzed in a simulation study are: forced-choice format, number of blocks, discrimination of items, polarity of items, variability of intra-block difficulty, range of difficulty, and correlation between dimensions. RESULTS A similar pattern of results was observed for both models; however, the correlations between TS and true thetas are higher, and the differences between TS and IRT estimates less discrepant, when a dominance model is involved. CONCLUSIONS A dominance model produces a linear relationship between TS and true scores, and subjects with extreme thetas are better measured.
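For reference, the two response processes differ in the shape of the item response function (schematic standard forms, not the paper's exact parameterizations): a dominance model assumes a monotone curve such as the 2PL, $P(x = 1 \mid \theta) = 1 / (1 + e^{-a(\theta - b)})$, so endorsement keeps rising with the trait, whereas an unfolding (ideal-point) model assumes a single-peaked curve, schematically $P(x = 1 \mid \theta) \propto e^{-a(\theta - b)^2}$, so endorsement falls off as $\theta$ moves away from the item location $b$ in either direction.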
Applied Psychological Measurement | 2016
Daniel Morillo; Iwin Leenen; Francisco J. Abad; Pedro M. Hontangas; Jimmy de la Torre; Vicente Ponsoda
Forced-choice questionnaires have been proposed as a way to control some response biases associated with traditional questionnaire formats (e.g., Likert-type scales). Whereas classical scoring methods have issues of ipsativity, item response theory (IRT) methods have been claimed to accurately account for the latent trait structure of these instruments. In this article, the authors propose the multi-unidimensional pairwise preference two-parameter logistic (MUPP-2PL) model, a variant within Stark, Chernyshenko, and Drasgow's MUPP framework for items that are assumed to fit a dominance model. They also introduce a Markov chain Monte Carlo (MCMC) procedure for estimating the model's parameters. The authors present the results of a simulation study, which shows appropriate goodness of recovery in all studied conditions. A comparison of the newly proposed model with Brown and Maydeu-Olivares' Thurstonian IRT model led to the conclusion that both models are theoretically very similar, and that the Bayesian estimation procedure of the MUPP-2PL may provide a slightly better recovery of the latent space correlations and a more reliable assessment of the latent trait estimation errors. An application of the model to a real data set shows convergence between the two estimation procedures. However, there is also evidence that the MCMC may be advantageous regarding the item parameters and the latent trait correlations.
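A minimal sketch of the pairwise-preference idea under a dominance model, assuming (as a simplification of the paper's parameterization) that the probability of choosing the first item of a block is a logistic function of the difference between the two items' 2PL logits:

    import numpy as np

    def mupp_2pl_choice_prob(theta, a, b, dims):
        # Probability of preferring item 1 over item 2 in a pair, modeled as a
        # logistic function of the difference of the items' 2PL logits.
        # theta: vector of latent traits; dims: trait index measured by each item.
        logit = a[0] * (theta[dims[0]] - b[0]) - a[1] * (theta[dims[1]] - b[1])
        return 1.0 / (1.0 + np.exp(-logit))

    # A respondent high on trait 0 and low on trait 1
    theta = np.array([1.0, -0.5])
    print(mupp_2pl_choice_prob(theta, a=[1.3, 1.1], b=[0.2, -0.1], dims=[0, 1]))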
Acta de Investigación Psicológica | 2015
José-Manuel Tomás; Amparo Oliver; Pedro M. Hontangas; Patricia Sancho; Laura Galiana
Abstract: Rosenberg's Self-Esteem Scale (RSES) has been extensively used in all areas of psychology to assess global self-esteem (Rosenberg, 1965, 1979). Its construct validity, and specifically its factor structure, has been under debate almost from the beginning. More than four decades after its creation, the accumulated evidence indicates that the scale measures a single trait (self-esteem) confounded by a method factor associated with negatively worded items. The aim of this study is to examine the measurement invariance of the RSES across gender and to test potential gender differences at the latent (trait and method) variable level, while controlling for method effects, in a sample of Spanish students. A series of completely a priori structural models were specified, with a standard invariance routine implemented for the male and female samples. The results lead to several conclusions: a) the scale seems gender invariant for both the trait and method factors; b) there were small but significant differences between males and females in self-esteem, differences that favored male respondents; and c) there were statistically non-significant differences between men and women in the method factor's latent means.
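In the standard formulation of such models (generic notation, not necessarily the authors'), each item loads on the substantive trait, and negatively worded items additionally load on a method factor: $x_i = \lambda_i \, SE + \delta_i \, M + \varepsilon_i$, with $\delta_i = 0$ for positively worded items. Invariance across gender is then tested by progressively constraining the loadings, intercepts, and latent means to be equal across the male and female groups.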
Journal of Vocational Behavior | 2016
Enrique Merino-Tejedor; Pedro M. Hontangas; Joan Boada-Grau
Psicothema | 2005
David Aguado; Víctor J. Rubio; Pedro M. Hontangas; José M. Hernández