Publication


Featured research published by José-Luis Padilla.


Psicothema | 2014

Validity evidence based on response processes

José-Luis Padilla; Isabel Benítez

BACKGROUND Validity evidence based on response processes was first introduced explicitly as a source of validity evidence in the latest edition of the Standards for Educational and Psychological Testing. In this paper, we present the theory, the relationship with other sources of validity evidence, and the methods available for validation studies aimed at obtaining validity evidence about response processes. METHOD A comprehensive review of the literature along with theoretical and practical proposals. RESULTS The article provides arguments for determining when validity evidence based on response processes is critical for supporting the use of a test for a particular purpose, and examples of how to perform a validation study to obtain such evidence. CONCLUSIONS Methods exist for obtaining validity evidence based on response processes. Special attention should be paid to validation studies using the cognitive interview method, given its features and possibilities. Future research must address how to combine data from different methods, qualitative and quantitative, to develop complete validity arguments that support the use of a test for a particular purpose.


Field Methods | 2011

Design and Analysis of Cognitive Interviews for Comparative Multinational Testing

Kristen Miller; Rory Fitzgerald; José-Luis Padilla; Stephanie Willson; Sally Widdop; Rachel Caspar; Martin Dimov; Michelle Gray; Cátia Nunes; Peter Prüfer; Nicole Schöbi; Alisu Schoua-Glusberg

This article summarizes the work of the Comparative Cognitive Testing Workgroup, an international coalition of survey methodologists interested in developing an evidence-based methodology for examining the comparability of survey questions within cross-cultural or multinational contexts. To meet this objective, it was necessary to ensure that the cognitive interviewing (CI) method itself did not introduce method bias. Therefore, the workgroup first identified specific characteristics inherent in CI methodology that could undermine the comparability of CI evidence. The group then developed and implemented a protocol addressing those issues. In total, 135 cognitive interviews were conducted by participating countries. Through the process, the group identified various interpretive patterns resulting from sociocultural and language-related differences among countries as well as other patterns of error that would impede comparability of survey data.


Methodology: European Journal of Research Methods for the Behavioral and Social Sciences | 2009

Efficacy of Effect Size Measures in Logistic Regression

Juana Gómez-Benito; M. Dolores Hidalgo; José-Luis Padilla

Statistical techniques based on logistic regression (LR) are adequate for the detection of differential item functioning (DIF) in dichotomous items. Nevertheless, they return more false positives (FPs) than do other DIF detection techniques. This paper compares the efficacy of DIF detection using the LR significance test with that of the effect-size estimate these procedures provide, Nagelkerke's R2. The variables manipulated were sample size, focal-to-reference group sample size ratio, amount of DIF, test length, and percentage of test items with DIF. In addition, examinee responses were generated to simulate both uniform and nonuniform DIF (symmetric and asymmetric). In all cases, dichotomous response tests were used. The results show that using R2 as a strategy for detecting DIF yielded lower correct detection percentages than the significance tests. Moreover, the LR significance test showed adequate control of FP rates, close to the nominal 5%, ...
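The effect-size strategy evaluated in the study can be sketched numerically. The snippet below computes Nagelkerke's R2 from model log-likelihoods and takes the difference between an augmented model (adding group and score-by-group terms) and a base model, as in standard LR DIF practice; all log-likelihood values here are hypothetical, not taken from the study.

```python
import math

def nagelkerke_r2(ll_null: float, ll_model: float, n: int) -> float:
    """Nagelkerke's R2: Cox-Snell R2 rescaled so its maximum is 1."""
    cox_snell = 1.0 - math.exp(2.0 * (ll_null - ll_model) / n)
    max_r2 = 1.0 - math.exp(2.0 * ll_null / n)
    return cox_snell / max_r2

# Hypothetical log-likelihoods for n = 400 examinees:
n = 400
ll_null = -277.26   # intercept-only model
ll_base = -250.00   # item response ~ total score
ll_aug = -244.00    # item response ~ total score + group + score*group

# DIF effect size: gain in R2 when the group terms are added.
delta_r2 = nagelkerke_r2(ll_null, ll_aug, n) - nagelkerke_r2(ll_null, ll_base, n)
print(round(delta_r2, 4))
```

Common guidelines flag an item only when the LR test is significant *and* the R2 gain exceeds a cutoff, which is why the paper compares the detection rates of the two strategies.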


Journal of Mixed Methods Research | 2014

Analysis of Nonequivalent Assessments across Different Linguistic Groups Using a Mixed Methods Approach: Understanding the Causes of Differential Item Functioning by Cognitive Interviewing.

Isabel Benítez; José-Luis Padilla

Differential item functioning (DIF) can undermine the validity of cross-lingual comparisons. While many efficient statistics for detecting DIF are available, few general findings explain DIF results. The objective of this article was to study DIF sources by using a mixed methods design. The design involves a quantitative phase in which DIF was analyzed, followed by a qualitative phase in which cognitive interviews were conducted. To illustrate the proposal, polytomous DIF was analyzed in scales from the PISA (Programme for International Student Assessment) Student Questionnaire (Organisation for Economic Co-operation and Development). The evidence obtained allowed DIF to be connected with differences in the interpretation patterns of participants from the different linguistic groups. Finally, the benefits of a mixed methods design for analyzing equivalence in cross-lingual assessments are discussed.


Research in Nursing & Health | 2012

Psychometric properties of the Spanish version of the health-promoting lifestyle profile II†

Adriana Pérez-Fortis; Sara Ulla Díez; José-Luis Padilla

The Health-Promoting Lifestyle Profile II (HPLPII) has been psychometrically validated across several linguistic and cultural groups; however, the Spanish version has not been psychometrically tested with the Spanish population. The purpose of this research was to evaluate the reliability and factor structure of the Spanish version of the HPLPII for Spanish people. Principal component analysis (PCA) revealed that a six-component model for 44 items accounted for 40% of the variance, and the scale had an internal consistency of .87. Confirmatory factor analysis demonstrated a better fit for the six-component structure that emerged from the PCA than for the model proposed in the original version of the HPLPII, suggesting that the health-promoting lifestyle might be sensitive to context and culture.
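The internal-consistency figure reported above (.87) is Cronbach's alpha. A minimal sketch of the computation, using a tiny made-up response matrix rather than HPLPII data:

```python
def cronbach_alpha(items):
    """Cronbach's alpha from a list of items, each a list of respondent scores."""
    def var(xs):  # sample variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    k = len(items)                               # number of items
    totals = [sum(col) for col in zip(*items)]   # per-respondent total score
    return (k / (k - 1)) * (1 - sum(var(it) for it in items) / var(totals))

# Made-up responses: 3 items x 4 respondents; items agree only partially.
alpha = cronbach_alpha([[1, 2, 3, 4], [2, 1, 4, 3], [1, 3, 2, 4]])
print(round(alpha, 3))
```

When the items rank respondents identically, the item variances sum to a small fraction of the total-score variance and alpha approaches 1; noisier items inflate that fraction and pull alpha down.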


Applied Psychological Measurement | 2011

EASY-DIF: Software for Analyzing Differential Item Functioning Using the Mantel-Haenszel and Standardization Procedures

Andrés González; José-Luis Padilla; M. Dolores Hidalgo; Juana Gómez-Benito; Isabel Benítez

Practitioners are increasingly becoming interested in differential item functioning (DIF) for improving the validity of test and scale interpretations. Among the statistical procedures available to assess DIF in dichotomous and polytomous items, the Mantel-Haenszel chi-square (1959) and other standardization procedures may be particularly attractive to practitioners. Some of the statistics associated with the Mantel-Haenszel procedure can be computed using specific software such as EZ-DIF (Waller, 1998), DIFAS (Penfield, 2005), and MH-DIF (Fidalgo, 1994). However, using these programs requires familiarity with the statistics of DIF procedures in order to understand the output. In addition, key characteristics of the Mantel-Haenszel procedures, such as the matching strategy (thick or thin), purification of the matching criteria, and so on, are not currently available in all statistical software packages. As a result, EASY-DIF was developed to provide easy-to-use software for performing the most common and useful MH and standardization procedures, with a view to guiding practitioners through the analyses and helping them interpret the output. EASY-DIF analyzes uniform and nonuniform DIF for the total sample and separately for low-performing and high-performing groups (Clauser, Mazor, & Hambleton, 1994). Users can explore possible cancellation and amplification DIF effects by establishing different cut scores for each group. Up to six different matching strategies based on total score distributions can be implemented: thin matching, equal interval, percentage of total sample, percentage of focal sample, censored matching, and minimum cell frequency (Donoghue & Allen, 1993).
In addition, users can apply a purification procedure to the matching criteria through the selection of a valid subtest. For dichotomous items, EASY-DIF computes the Mantel-Haenszel chi-square (Holland & Thayer, 1988; Mantel & Haenszel, 1959), the Mantel-Haenszel common odds ratio (Camilli & Shepard, 1994; Mantel & Haenszel, 1959), the MH-Delta DIF, the standard error of MH-Delta ... (Computer Program Exchange)
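The core Mantel-Haenszel quantities reported by software like EASY-DIF can be sketched as follows; the implementation and data below are illustrative, not taken from EASY-DIF itself. Each stratum is one 2x2 table (reference/focal by right/wrong) at a level of the matching score.

```python
import math

def mantel_haenszel(strata):
    """strata: list of (A, B, C, D) = (ref right, ref wrong, focal right, focal wrong).
    Returns (common odds ratio, MH-Delta, MH chi-square with continuity correction)."""
    num = sum(a * d / (a + b + c + d) for a, b, c, d in strata)
    den = sum(b * c / (a + b + c + d) for a, b, c, d in strata)
    alpha_mh = num / den                   # MH common odds ratio
    delta_mh = -2.35 * math.log(alpha_mh)  # ETS delta metric

    a_sum = e_sum = v_sum = 0.0
    for a, b, c, d in strata:
        n = a + b + c + d
        a_sum += a
        e_sum += (a + b) * (a + c) / n     # expected A under no DIF
        v_sum += (a + b) * (c + d) * (a + c) * (b + d) / (n * n * (n - 1))
    chi2 = (abs(a_sum - e_sum) - 0.5) ** 2 / v_sum
    return alpha_mh, delta_mh, chi2

# Illustrative item: the reference group outperforms the matched focal group
# in both score strata, so the item is a DIF candidate.
tables = [(30, 10, 20, 20), (30, 10, 20, 20)]
alpha_mh, delta_mh, chi2 = mantel_haenszel(tables)
print(round(alpha_mh, 2), round(delta_mh, 2), round(chi2, 2))
```

A negative MH-Delta indicates the item favors the reference group; roughly, |Delta| above 1.5 together with a significant chi-square corresponds to the large-DIF (C) category in the usual ETS classification.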


Applied Measurement in Education | 2016

Using Mixed Methods to Interpret Differential Item Functioning

Isabel Benítez; José-Luis Padilla; María Dolores Hidalgo Montesinos; Stephen G. Sireci

ABSTRACT Analysis of differential item functioning (DIF) is often used to determine whether cross-lingual assessments are equivalent across languages. However, evidence on the causes of cross-lingual DIF remains elusive. Expert appraisal is a qualitative method useful for obtaining detailed information about problematic elements in the different linguistic versions of items. In this article we propose and explore a mixed methods approach that integrates quantitative results from DIF analysis and qualitative findings from expert appraisal to discover why items exhibit DIF across languages. First, polytomous DIF was analyzed in responses to the U.S. and Spanish versions of scales from the PISA Student Questionnaire using differential step functioning and ordinal logistic regression. Items flagged by both methods were selected for study to interpret DIF causes. Second, experts were asked about non-comparable elements in the items. The experts provided qualitative evidence on problematic issues (different interpretation patterns or response processes) that may have been the cause of the DIF. The integration of results from both methods was aimed at relating the type of DIF to the expert appraisal findings.


IEEE Antennas and Propagation Magazine | 2017

An Embedded Lightweight Folded Printed Quadrifilar Helix Antenna: UAV telemetry and remote control systems.

Jose-Manuel Fernandez Gonzalez; Pablo Padilla; Juan F. Valenzuela-Valdés; José-Luis Padilla; Manuel Sierra-Perez

An embedded folded printed quadrifilar helix antenna (FPQHA) with wide-angle coverage for unmanned aerial vehicle (UAV) telemetry and remote control systems is presented in this article. The novelty of this design is that the FPQHA must be designed carefully around UAV tail dimension and weight constraints, while maintaining high performance, so that it can be integrated into the inner part of the UAV tail fuselage to reduce aerodynamic drag. The radiating terminal, formed by a folded printed four-helix radiating section and a compact feeding network, is designed to provide left-handed circular polarization (LHCP). The complete design offers a very homogeneous pattern in azimuth with a very good axial-ratio (AR) level over a wide range of elevation angles. The use of low-loss, lightweight materials is another advantage of this design. The wide radiation pattern favors its use in multielement communication systems. Finally, the antenna performance results are obtained with the antenna mounted inside a UAV tail platform.


Psicothema | 2015

Spanish adaptation of the Green Paranoid Thought Scales.

Inmaculada Ibanez-Casas; Pedro Femia-Marzo; José-Luis Padilla; Catherine E. L. Green; Enrique de Portugal; Jorge A. Cervilla

BACKGROUND The aim of this study was to adapt and obtain validity evidence for the Spanish Green Paranoid Thought Scales (S-GPTS). METHOD 191 Spanish people responded to the S-GPTS, the Peters Delusions Inventory (PDI), and measures of psychopathology. RESULTS Principal component analyses on the polychoric correlation matrix identified two factors accounting for 71.0% of the cumulative variance. Cronbach's alphas for the S-GPTS total and its subscales were above .90 in both the clinical and non-clinical groups. The area under the receiver operating characteristic curve was higher for the S-GPTS (.898) than for the PDI (.859). The best S-GPTS threshold for discriminating between cases and non-cases was 92 (sensitivity, 97.35%; specificity, 65%). S-GPTS scores correlated positively with the PDI and with measures of anxiety and depression. CONCLUSION The S-GPTS has adequate psychometric properties to provide valid measures of delusional ideation in a Spanish population.
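The discrimination figures reported above (area under the ROC curve, plus a cutoff with its sensitivity and specificity) can be reproduced in miniature. The scores below are made up, and the cutoff rule shown (maximize Youden's J = sensitivity + specificity - 1) is one common way such thresholds are chosen; it is an assumption here, not necessarily the study's exact procedure.

```python
def auc(cases, noncases):
    """Area under the ROC curve via the Mann-Whitney pair-counting identity."""
    wins = sum(1.0 if x > y else 0.5 if x == y else 0.0
               for x in cases for y in noncases)
    return wins / (len(cases) * len(noncases))

def best_cutoff(cases, noncases):
    """Cutoff maximizing Youden's J; 'case' is predicted when score >= cutoff."""
    best_t, best_j = None, -1.0
    for t in sorted(set(cases) | set(noncases)):
        sens = sum(x >= t for x in cases) / len(cases)
        spec = sum(y < t for y in noncases) / len(noncases)
        if sens + spec - 1 > best_j:
            best_t, best_j = t, sens + spec - 1
    return best_t, best_j

# Made-up total scores for diagnosed cases and non-cases.
cases, noncases = [5, 6, 7, 8], [1, 2, 3, 6]
print(auc(cases, noncases))  # 0.90625
t, j = best_cutoff(cases, noncases)
```

Youden's J weights sensitivity and specificity equally; clinical screeners often accept lower specificity (as the 65% above suggests) in order to keep sensitivity high.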


Archive | 2017

Cognitive Interviewing and Think Aloud Methods

José-Luis Padilla; Jacqueline P. Leighton

There are no clear recommendations, established best practices, or sufficient experience in validation studies aimed at obtaining validity evidence based on response processes. Cognitive interviewing and think-aloud methods can provide such validity evidence. The overlapping labels and blurred boundary between cognitive interviewing and think-aloud methods can lead researchers to consolidate bad practices and prevent them from taking full advantage of these methods. The aim of this chapter is to help researchers make informed decisions about which method is the best option when planning a validation study to obtain response-process validity evidence. First, we describe the state of the art in conducting think-aloud and cognitive interviewing studies; second, we describe the main procedural issues in conducting both methods. Similarities and differences between the two methods will be evident throughout the chapter.

Collaboration


Dive into José-Luis Padilla's collaborations.

Top Co-Authors
M. Sierra-Castañer

Technical University of Madrid


Kristen Miller

National Center for Health Statistics
