Rodney A. McCloy
University of Minnesota
Publications
Featured research published by Rodney A. McCloy.
Journal of Applied Psychology | 1994
Rodney A. McCloy; John P. Campbell; Robert Cudeck
The total variance in any observed measure of performance can be attributed to 3 sources: (a) the correlation of the measure with the latent variable of interest (i.e., true score variance), (b) reliable but irrelevant variance due to contamination, and (c) error. A model is proposed that specifies 3, and only 3, determinants of the relevant variance: declarative knowledge, procedural knowledge and skill, and volitional choice (motivation). The 3 determinants are defined, and their implications for performance measurement are discussed. Using data from the U.S. Army Selection and Classification Project (Project A), the authors found that the model fits a simplex pattern to the criterion data matrix. The predictor-determinant correlations are also estimated.
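The abstract's two formal claims can be written compactly. The following is a sketch in our own notation, not the authors': X is the observed measure, PC a performance component, DK declarative knowledge, PKS procedural knowledge and skill, and M motivation.

```latex
% Variance decomposition of an observed performance measure:
\sigma^2_{X} = \sigma^2_{\text{true}} + \sigma^2_{\text{contamination}} + \sigma^2_{\text{error}}
% Performance as a function of three, and only three, determinants:
PC = f(DK,\ PKS,\ M)
```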
Journal of Applied Psychology | 2006
Eric D. Heggestad; Morgan Morrison; Charlie L. Reeve; Rodney A. McCloy
Recent research suggests multidimensional forced-choice (MFC) response formats may provide resistance to purposeful response distortion on personality assessments. It remains unclear, however, whether these formats provide the normative trait information required for selection contexts. The current research evaluated score correspondences between an MFC format measure and 2 Likert-type measures in honest and instructed-faking conditions. In honest response conditions, scores from the MFC measure appeared to be valid indicators of normative trait standing. Under faking conditions, the MFC measure showed less score inflation than the Likert measure at the group level of analysis. In the individual-level analyses, however, the MFC measure was as affected by faking as was the Likert measure. Results suggest the MFC format is not a viable method to control faking.
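The group-level versus individual-level distinction drives the conclusion here, and a toy simulation makes it concrete. This is our illustration with made-up numbers, not the study's data or analysis:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Hypothetical trait scores under honest and instructed-faking conditions.
honest = rng.normal(0.0, 1.0, n)
faked = honest + rng.normal(0.5, 0.7, n)  # assumed inflation plus noise

# Group-level analysis: standardized mean score inflation (Cohen's d).
d = (faked.mean() - honest.mean()) / np.sqrt(
    (faked.var(ddof=1) + honest.var(ddof=1)) / 2
)

# Individual-level analysis: how much each person's score moved.
shift = faked - honest
print(f"group-level inflation d = {d:.2f}")
print(f"share of individuals inflating by > 0.5 SD = {(shift > 0.5).mean():.2%}")
```

A format can look well behaved on the first index while the second reveals substantial person-level distortion, which is the pattern the abstract reports for the MFC measure.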
Organizational Research Methods | 2005
Rodney A. McCloy; Eric D. Heggestad; Charlie L. Reeve
This article presents a psychometric approach for extracting normative information from multidimensional forced-choice (MFC) formats while retaining the method's faking-resistant property. The approach draws on concepts from Coombs's unfolding models and modern item response theory to develop a theoretical model of the judgment process used to answer MFC items, which is then used to develop a scoring system that provides estimates of normative trait standings.
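To make the unfolding idea concrete: in such models a respondent tends to endorse the statement whose scale location is nearest his or her own trait standing. Below is a minimal single-trait sketch of one such choice rule; it is our simplification, not the authors' exact model, which handles statements drawn from different dimensions within a full IRT treatment.

```python
import numpy as np

def p_choose_a(theta, loc_a, loc_b):
    """Unfolding-style choice probability for a forced-choice pair.

    The respondent is assumed to prefer the statement whose location is
    closer to theta, his or her standing on the underlying trait; a
    logistic function of the difference in squared distances turns that
    preference into a probability. Illustrative only.
    """
    dist_a = (theta - loc_a) ** 2
    dist_b = (theta - loc_b) ** 2
    return 1.0 / (1.0 + np.exp(dist_a - dist_b))

# A respondent at theta = 0.8 choosing between statements located at 1.0 and -0.5.
print(p_choose_a(0.8, 1.0, -0.5))
```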
Journal of Applied Psychology | 2008
Dan J. Putka; Huy Le; Rodney A. McCloy; Tirso Diaz
Organizational research and practice involving ratings are rife with what the authors term ill-structured measurement designs (ISMDs): designs in which raters and ratees are neither fully crossed nor nested. This article explores the implications of ISMDs for estimating interrater reliability. The authors first provide a mock example that illustrates potential problems that ISMDs create for common reliability estimators (e.g., Pearson correlations, intraclass correlations). Next, the authors propose an alternative reliability estimator, G(q,k), that resolves problems with traditional estimators and is equally appropriate for crossed, nested, and ill-structured designs. Using Monte Carlo simulation, the authors evaluate the accuracy of traditional reliability estimators compared with that of G(q,k) for ratings arising from ISMDs. Regardless of condition, G(q,k) yielded estimates as precise as or more precise than those of traditional estimators. The advantage of G(q,k) over the traditional estimators became more pronounced with increases in the (a) overlap between the sets of raters that rated each ratee and (b) ratio of rater main effect variance to true score variance. Discussion focuses on implications of this work for organizational research and practice.
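In generalizability-theory terms, the estimator weights the rater main effect variance by a quantity q that reflects how much the rater sets overlap across ratees. A minimal sketch under that reading follows; it is our reconstruction from the abstract, and the paper itself gives the exact computation of q from the rater-ratee design.

```python
def g_qk(var_ratee, var_rater, var_resid, q, k):
    """Sketch of a G(q,k)-style reliability coefficient.

    var_ratee : ratee (true score) variance component
    var_rater : rater main effect variance component
    var_resid : residual (ratee x rater, error) variance component
    q         : weight on rater main effect variance; near 0 in a fully
                crossed design, near 1/k in a fully nested design, and
                in between for ill-structured designs
    k         : average number of raters per ratee
    """
    return var_ratee / (var_ratee + q * var_rater + var_resid / k)

# Hypothetical variance components: more rater overlap (smaller q)
# yields a higher reliability estimate.
print(g_qk(0.40, 0.20, 0.40, q=0.05, k=3))
print(g_qk(0.40, 0.20, 0.40, q=1 / 3, k=3))
```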
Organizational Research Methods | 2011
Dan J. Putka; Charles E. Lance; Huy Le; Rodney A. McCloy
The authors illustrate a problem with confirmatory factor analysis (CFA)-based strategies to model disaggregated multitrait-multirater (MTMR) data: the potential to find markedly different results with the same sample of ratees simply as a result of how one selects and identifies raters within the data set one has gathered for analysis. Using performance ratings gathered as part of a large criterion-related validation study, the authors show how such differences manifest themselves in several ways, including variation in (a) covariance matrices that serve as input for the modeling effort, (b) model convergence, (c) admissibility of solutions, (d) overall model fit, (e) model parameter estimates, and (f) model selection. Implications of this study for past research and recommendations for future CFA-based MTMR modeling efforts are discussed.
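Point (a) is easy to demonstrate in miniature: the covariance matrix fed to the CFA changes with the rule used to label columns "rater 1" and "rater 2." Here is a toy single-trait simulation, our construction with hypothetical numbers rather than the study's data:

```python
import numpy as np

rng = np.random.default_rng(1)
n_ratees, n_raters = 200, 6

# Hypothetical ratings with rater main effects (illustrative numbers).
true_score = rng.normal(0, 1, n_ratees)
rater_bias = rng.normal(0, 0.5, n_raters)
ratings = (true_score[:, None] + rater_bias[None, :]
           + rng.normal(0, 0.8, (n_ratees, n_raters)))

def cfa_input(selection):
    """Build the 2-column 'rater 1'/'rater 2' covariance matrix a
    disaggregated CFA would take as input."""
    cols = np.take_along_axis(ratings, selection, axis=1)
    return np.cov(cols, rowvar=False)

# Rule A: always label the two lowest-numbered raters as raters 1 and 2.
sel_a = np.tile([0, 1], (n_ratees, 1))
# Rule B: an equally arbitrary per-ratee assignment from the same pool.
sel_b = np.stack([rng.choice(n_raters, 2, replace=False) for _ in range(n_ratees)])

print(cfa_input(sel_a))  # the two input matrices differ, so downstream
print(cfa_input(sel_b))  # fit and estimates depend on the selection rule
```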
Journal of Applied Psychology | 1990
Leaetta M. Hough; Newell K. Eaton; John Kamp; Rodney A. McCloy
Archive | 2012
Paul R. Sackett; Dan J. Putka; Rodney A. McCloy
Archive | 1996
Teresa L. Russell; Jennifer L. Crafts; Felicity A. Tagliareni; Rodney A. McCloy; Pamela Barkley
International Journal of Selection and Assessment | 2011
Deborah L. Whetzel; Rodney A. McCloy; Amy Hooper; Teresa L. Russell; Shonna Waters; Wanda J. Campbell; Robert A. Ramos
Technology-Enhanced Assessment of Talent | 2011
Rodney A. McCloy; Robert E. Gibby