Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Chad H. Van Iddekinge is active.

Publication


Featured research published by Chad H. Van Iddekinge.


Academy of Management Journal | 2011

Acquiring and Developing Human Capital in Service Contexts: The Interconnectedness of Human Capital Resources

Chad H. Van Iddekinge; William I. MacKenzie

Past research has not adequately considered the importance of interconnected human capital resources. Drawing on the resource-based view, we propose a dynamic model in which changes in generic huma...


Journal of Applied Psychology | 2008

The Team Role Test: Development and Validation of a Team Role Knowledge Situational Judgment Test

Troy V. Mumford; Chad H. Van Iddekinge; Frederick P. Morgeson; Michael A. Campion

The main objectives in this research were to introduce the concept of team role knowledge and to investigate its potential usefulness for team member selection. In Study 1, the authors developed a situational judgment test, called the Team Role Test, to measure knowledge of 10 roles relevant to the team context. The criterion-related validity of this measure was examined in 2 additional studies. In a sample of academic project teams (N = 93), team role knowledge predicted team member role performance (r = .34). Role knowledge also provided incremental validity beyond mental ability and the Big Five personality factors in the prediction of role performance. The results of Study 2 revealed that the predictive validity of role knowledge generalizes to team members in a work setting (N = 82, r = .30). The implications of the results for selection in team environments are discussed.


Journal of Applied Psychology | 2011

Are you interested? A meta-analysis of relations between vocational interests and employee performance and turnover.

Chad H. Van Iddekinge; Philip L. Roth; Dan J. Putka; Stephen E. Lanivich

A common belief among researchers is that vocational interests have limited value for personnel selection. However, no comprehensive quantitative summaries of interests validity research have been conducted to substantiate claims for or against the use of interests. To help address this gap, we conducted a meta-analysis of relations between interests and employee performance and turnover using data from 74 studies and 141 independent samples. Overall validity estimates (corrected for measurement error in the criterion but not for range restriction) for single interest scales were .14 for job performance, .26 for training performance, -.19 for turnover intentions, and -.15 for actual turnover. Several factors appeared to moderate interest-criterion relations. For example, validity estimates were larger when interests were theoretically relevant to the work performed in the target job. The type of interest scale also moderated validity, such that corrected validities were larger for scales designed to assess interests relevant to a particular job or vocation (e.g., .23 for job performance) than for scales designed to assess a single, job-relevant realistic, investigative, artistic, social, enterprising, or conventional (i.e., RIASEC) interest (.10) or a basic interest (.11). Finally, validity estimates were largest when studies used multiple interests for prediction, either by using a single job or vocation focused scale (which tend to tap multiple interests) or by using a regression-weighted composite of several RIASEC or basic interest scales. Overall, the results suggest that vocational interests may hold more promise for predicting employee performance and turnover than researchers may have thought.


Journal of Applied Psychology | 2009

Effects of Selection and Training on Unit-Level Performance Over Time: A Latent Growth Modeling Approach

Chad H. Van Iddekinge; Gerald R. Ferris; Pamela L. Perrewé; Alexa A. Perryman; Fred R. Blass; Thomas D. Heetderks

Surprisingly few data exist concerning whether and how utilization of job-related selection and training procedures affects different aspects of unit or organizational performance over time. The authors used longitudinal data from a large fast-food organization (N = 861 units) to examine how change in use of selection and training relates to change in unit performance. Latent growth modeling analyses revealed significant variation in both the use and the change in use of selection and training across units. Change in selection and training was related to change in 2 proximal unit outcomes: customer service performance and retention. Change in service performance, in turn, was related to change in the more distal outcome of unit financial performance (i.e., profits). Selection and training also affected financial performance, both directly and indirectly (e.g., through service performance). Finally, results of a cross-lagged panel analysis suggested the existence of a reciprocal causal relationship between the utilization of the human resources practices and unit performance. However, there was some evidence to suggest that selection and training may be associated with different causal sequences, such that use of the training procedure appeared to lead to unit performance, whereas unit performance appeared to lead to use of the selection procedure.


Journal of Applied Psychology | 2012

The criterion-related validity of integrity tests: an updated meta-analysis.

Chad H. Van Iddekinge; Philip L. Roth; Patrick H. Raymark; Heather N. Odle-Dusseau

Integrity tests have become a prominent predictor within the selection literature over the past few decades. However, some researchers have expressed concerns about the criterion-related validity evidence for such tests because of a perceived lack of methodological rigor within this literature, as well as a heavy reliance on unpublished data from test publishers. In response to these concerns, we meta-analyzed 104 studies (representing 134 independent samples), which were authored by a similar proportion of test publishers and non-publishers, whose conduct was consistent with professional standards for test validation, and whose results were relevant to the validity of integrity-specific scales for predicting individual work behavior. Overall mean observed validity estimates and validity estimates corrected for unreliability in the criterion (respectively) were .12 and .15 for job performance, .13 and .16 for training performance, .26 and .32 for counterproductive work behavior, and .07 and .09 for turnover. Although data on restriction of range were sparse, illustrative corrections for indirect range restriction did increase validities slightly (e.g., from .15 to .18 for job performance). Several variables appeared to moderate relations between integrity tests and the criteria. For example, corrected validities for job performance criteria were larger when based on studies authored by integrity test publishers (.27) than when based on studies from non-publishers (.12). In addition, corrected validities for counterproductive work behavior criteria were larger when based on self-reports (.42) than when based on other-reports (.11) or employee records (.15).


Journal of Applied Psychology | 2005

Assessing Personality With a Structured Employment Interview: Construct-Related Validity and Susceptibility to Response Inflation

Chad H. Van Iddekinge; Patrick H. Raymark; Philip L. Roth

The authors evaluated the extent to which a personality-based structured interview was susceptible to response inflation. Interview questions were developed to measure facets of agreeableness, conscientiousness, and emotional stability. Interviewers administered mock interviews to participants instructed to respond honestly or like a job applicant. Interviewees completed scales of the same 3 facets from the NEO Personality Inventory, under the same honest and applicant-like instructions. Interviewers also evaluated interviewee personality with the NEO. Multitrait-multimethod analysis and confirmatory factor analysis provided some evidence for the construct-related validity of the personality interviews. As for response inflation, analyses revealed that the scores from the applicant-like condition were significantly more elevated (relative to honest condition scores) for self-report personality ratings than for interviewer personality ratings. In addition, instructions to respond like an applicant appeared to have a detrimental effect on the structure of the self-report and interview ratings, but not interviewer NEO ratings.


Journal of Applied Psychology | 2011

Reconsidering vocational interests for personnel selection: the validity of an interest-based selection test in relation to job knowledge, job performance, and continuance intentions.

Chad H. Van Iddekinge; Dan J. Putka; John P. Campbell

Although vocational interests have a long history in vocational psychology, they have received extremely limited attention within the recent personnel selection literature. We reconsider some widely held beliefs concerning the (low) validity of interests for predicting criteria important to selection researchers, and we review theory and empirical evidence that challenge such beliefs. We then describe the development and validation of an interests-based selection measure. Results of a large validation study (N = 418) reveal that interests predicted a diverse set of criteria—including measures of job knowledge, job performance, and continuance intentions—with corrected, cross-validated Rs that ranged from .25 to .46 across the criteria (mean R = .31). Interests also provided incremental validity beyond measures of general cognitive aptitude and facets of the Big Five personality dimensions in relation to each criterion. Furthermore, with a couple exceptions, the interest scales were associated with small to medium subgroup differences, which in most cases favored women and racial minorities. Taken as a whole, these results appear to call into question the prevailing thought that vocational interests have limited usefulness for selection.


Journal of Management | 2016

Social Media in Employee-Selection-Related Decisions: A Research Agenda for Uncharted Territory

Philip L. Roth; Philip Bobko; Chad H. Van Iddekinge; Jason Bennett Thatcher

Social media (SM) pervades our society. One rapidly growing application of SM is its use in personnel decision making. Organizations are increasingly searching SM (e.g., Facebook) to gather information about potential employees. In this article, we suggest that organizational practice has outpaced the scientific study of SM assessments in an area that has important consequences for individuals (e.g., being selected for work), organizations (e.g., successfully predicting job performance or withdrawal), and society (e.g., consequent adverse impact/diversity). We draw on theory and research from various literatures to advance a research agenda that addresses this gap between practice and research. Overall, we believe this is a somewhat rare moment in the human resources literature when a new class of selection methods arrives on the scene, and we urge researchers to help understand the implications of using SM assessments for personnel decisions.


Journal of Management | 2007

Antecedents of Impression Management Use and Effectiveness in a Structured Interview

Chad H. Van Iddekinge; Lynn A. McFarland; Patrick H. Raymark

The authors examine personality variables and interview format as potential antecedents of impression management (IM) behaviors in simulated selection interviews. The means by which these variables affect ratings of interview performance is also investigated. The altruism facet of agreeableness predicted defensive IM behaviors, the vulnerability facet of emotional stability predicted self- and other-focused behaviors, and interview format (behavior description vs. situational questions) predicted self-focused and defensive behaviors. Consistent with theory and research on situational strength, antecedent—IM relations were consistently weaker in a strong situation in which interviewees had an incentive to manage their impressions. There was also evidence that IM partially mediated the effects of personality and interview format on interview performance in the weak situation.


Journal of Applied Psychology | 2010

If at first you don't succeed, try, try again: understanding race, age, and gender differences in retesting score improvement.

Deidra J. Schleicher; Chad H. Van Iddekinge; Frederick P. Morgeson; Michael A. Campion

This article explores the intersection of 2 critical and timely concerns in personnel selection-applicant retesting and subgroup differences-by exploring demographic differences in retest effects across multiple assessments. Results from large samples of applicants taking 3 written tests (N = 7,031) and 5 performance tests (N = 2,060) revealed that Whites showed larger retest score improvements than Blacks or Hispanics on several of the assessments. However, the differential improvement of Whites was greater on the written tests than on the performance tests. In addition, women and applicants under 40 years of age showed larger improvements with retesting than did men and applicants over 40. We offer some preliminary theoretical explanations for these demographic differences in retesting gains, including differences in ability, testing attitudes and motivation, and receptivity to feedback. In terms of practical implications, the results suggest that allowing applicants to retake selection tests may, in some cases, exacerbate levels of adverse impact, which can have distinct implications for retesting policy and practices in organizations.

Collaboration


Dive into Chad H. Van Iddekinge's collaborations.

Top Co-Authors

Huy Le

University of Texas at San Antonio


Liwen Zhang

Florida State University
