Dan J. Putka
Ohio University
Publications
Featured research published by Dan J. Putka.
Journal of Applied Psychology | 2002
Jeffrey B. Vancouver; Charles M. Thompson; E. Casey Tischner; Dan J. Putka
Although hundreds of studies have found a positive relationship between self-efficacy and performance, several studies have found a negative relationship when the analysis is done across time (repeated measures) rather than across individuals. W. T. Powers (1991) predicted this negative relationship based on perceptual control theory. Here, 2 studies are presented to (a) confirm the causal role of self-efficacy and (b) substantiate the explanation. In Study 1, self-efficacy was manipulated for 43 of 87 undergraduates on an analytic game. The manipulation was negatively related to performance on the next trial. In Study 2, 104 undergraduates played the analytic game and reported self-efficacy between games and confidence in the degree to which they had assessed previous feedback. As expected, self-efficacy led to overconfidence and hence increased the likelihood of committing logic errors during the game.
Journal of Applied Psychology | 2011
Chad H. Van Iddekinge; Philip L. Roth; Dan J. Putka; Stephen E. Lanivich
A common belief among researchers is that vocational interests have limited value for personnel selection. However, no comprehensive quantitative summaries of interest validity research have been conducted to substantiate claims for or against the use of interests. To help address this gap, we conducted a meta-analysis of relations between interests and employee performance and turnover using data from 74 studies and 141 independent samples. Overall validity estimates (corrected for measurement error in the criterion but not for range restriction) for single interest scales were .14 for job performance, .26 for training performance, -.19 for turnover intentions, and -.15 for actual turnover. Several factors appeared to moderate interest-criterion relations. For example, validity estimates were larger when interests were theoretically relevant to the work performed in the target job. The type of interest scale also moderated validity, such that corrected validities were larger for scales designed to assess interests relevant to a particular job or vocation (e.g., .23 for job performance) than for scales designed to assess a single, job-relevant realistic, investigative, artistic, social, enterprising, or conventional (i.e., RIASEC) interest (.10) or a basic interest (.11). Finally, validity estimates were largest when studies used multiple interests for prediction, either by using a single job- or vocation-focused scale (which tends to tap multiple interests) or by using a regression-weighted composite of several RIASEC or basic interest scales. Overall, the results suggest that vocational interests may hold more promise for predicting employee performance and turnover than researchers may have thought.
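The validity estimates above use the classical correction for attenuation applied to the criterion side only (no correction for range restriction). As a stylized illustration with purely hypothetical numbers, not values from the meta-analysis, the correction takes the form

\hat{\rho}_{xy} = \frac{r_{xy}}{\sqrt{r_{yy}}}, \qquad \text{e.g., } \frac{.10}{\sqrt{.60}} \approx .13,

where r_{xy} is the observed interest-criterion correlation and r_{yy} is the reliability of the criterion measure.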
Journal of Applied Psychology | 2011
Chad H. Van Iddekinge; Dan J. Putka; John P. Campbell
Although vocational interests have a long history in vocational psychology, they have received extremely limited attention within the recent personnel selection literature. We reconsider some widely held beliefs concerning the (low) validity of interests for predicting criteria important to selection researchers, and we review theory and empirical evidence that challenge such beliefs. We then describe the development and validation of an interests-based selection measure. Results of a large validation study (N = 418) reveal that interests predicted a diverse set of criteria (including measures of job knowledge, job performance, and continuance intentions) with corrected, cross-validated Rs that ranged from .25 to .46 across the criteria (mean R = .31). Interests also provided incremental validity beyond measures of general cognitive aptitude and facets of the Big Five personality dimensions in relation to each criterion. Furthermore, with a couple of exceptions, the interest scales were associated with small to medium subgroup differences, which in most cases favored women and racial minorities. Taken as a whole, these results appear to call into question the prevailing thought that vocational interests have limited usefulness for selection.
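As a minimal sketch of how incremental validity of this kind is typically quantified, the hierarchical regression below compares a baseline model (cognitive aptitude plus personality facets) with a full model that adds interest scales; the variable names and simulated data are hypothetical stand-ins, not the study's measures or results.

import numpy as np

rng = np.random.default_rng(0)
n = 418  # sample size borrowed from the study; the data below are simulated

# Hypothetical predictor sets: cognitive aptitude, five personality facets, three interest scales
cognitive = rng.normal(size=(n, 1))
personality = rng.normal(size=(n, 5))
interests = rng.normal(size=(n, 3))
# Hypothetical criterion loosely related to all three predictor sets
criterion = (0.3 * cognitive[:, 0] + 0.1 * personality.sum(axis=1)
             + 0.2 * interests.sum(axis=1) + rng.normal(size=n))

def r_squared(X, y):
    """OLS R^2 with an intercept column added."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

r2_base = r_squared(np.column_stack([cognitive, personality]), criterion)
r2_full = r_squared(np.column_stack([cognitive, personality, interests]), criterion)
print(f"Baseline R = {np.sqrt(r2_base):.2f}, full R = {np.sqrt(r2_full):.2f}, "
      f"delta R^2 = {r2_full - r2_base:.3f}")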
Organizational Research Methods | 2009
Huy Le; Frank L. Schmidt; Dan J. Putka
Measurement artifacts, including measurement errors and scale-specific factors, distort observed correlations between measures of psychological and organizational constructs. The authors discuss two alternative procedures, one using the generalized coefficient of equivalence and stability (GCES) and one based on structural equation modeling, to correct for the biasing effect of measurement artifacts in order to estimate construct-level relationships. Assumptions underlying the procedures are discussed and the degrees of biases resulting from violating the assumptions are examined by means of Monte Carlo simulation. They then propose an approach using cumulative knowledge in the literature about properties of measures of a construct to estimate the GCES. That approach can allow researchers to estimate relationships between constructs in most research situations. The authors apply the approach to estimate the GCES for overall job satisfaction, an important organizational construct.
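As a rough sketch of the first procedure, assuming the correction parallels the classical disattenuation formula with the GCES standing in for a conventional reliability estimate (a simplification of the article's development), the adjustment can be written as a one-line function; the numbers are hypothetical.

def correct_with_gces(r_xy, gces_x, gces_y):
    """Disattenuate an observed correlation using each measure's generalized
    coefficient of equivalence and stability (GCES), which discounts random
    response error, transient error, and scale-specific factor variance."""
    return r_xy / (gces_x * gces_y) ** 0.5

# Hypothetical values, not drawn from the article
print(round(correct_with_gces(0.30, 0.70, 0.75), 3))  # 0.414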
Journal of Applied Psychology | 2008
Dan J. Putka; Huy Le; Rodney A. McCloy; Tirso Diaz
Organizational research and practice involving ratings are rife with what the authors term ill-structured measurement designs (ISMDs): designs in which raters and ratees are neither fully crossed nor nested. This article explores the implications of ISMDs for estimating interrater reliability. The authors first provide a mock example that illustrates potential problems that ISMDs create for common reliability estimators (e.g., Pearson correlations, intraclass correlations). Next, the authors propose an alternative reliability estimator, G(q,k), that resolves problems with traditional estimators and is equally appropriate for crossed, nested, and ill-structured designs. By using Monte Carlo simulation, the authors evaluate the accuracy of traditional reliability estimators compared with that of G(q,k) for ratings arising from ISMDs. Regardless of condition, G(q,k) yielded estimates as precise as, or more precise than, those of traditional estimators. The advantage of G(q,k) over the traditional estimators became more pronounced with increases in the (a) overlap between the sets of raters that rated each ratee and (b) ratio of rater main effect variance to true score variance. Discussion focuses on implications of this work for organizational research and practice.
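For orientation, an interrater reliability coefficient of this kind for k raters per ratee has the general form below; this is a hedged reconstruction of the logic, and the precise definition of q for ill-structured designs is developed in the article.

G(q,k) = \frac{\sigma^2_T}{\sigma^2_T + q\,\sigma^2_R + \sigma^2_{TR,e}/k}

Here \sigma^2_T is ratee (true score) variance, \sigma^2_R is rater main-effect variance, \sigma^2_{TR,e} is the ratee-by-rater interaction plus residual variance, and q governs how much rater main-effect variance is treated as error: near zero when raters are fully crossed with ratees, on the order of 1/k as the design approaches full nesting, and intermediate for ISMDs with partial rater overlap.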
Organizational Research Methods | 2005
Jeffrey B. Vancouver; Dan J. Putka; Charles A. Scherbaum
To encourage the use of computational modeling in organizational behavior research, an example computational model is developed and rigorous tests of it are presented. Specifically, a computational model based on control theory was created to test the theory's explanation of the goal-level effect (e.g., higher goals lead to higher performance). Data from simulations of the model were compared with the behavior of 32 undergraduate students performing a scheduling task under various within-subject manipulations and across time. Correlational analyses indicated that the model accounted for most of the participants' data, with coefficients between the model and each participant's behavior mostly in the high .90s.
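The sketch below is a minimal negative feedback loop in the spirit of such control-theory models; it is not the authors' scheduling-task model, and its structure and parameter values are invented purely to illustrate the goal-level effect.

import numpy as np

def simulate_goal_pursuit(goal, n_steps=20, gain=0.4, noise_sd=0.05, seed=0):
    """Simple negative feedback loop: act in proportion to the goal-perception discrepancy."""
    rng = np.random.default_rng(seed)
    performance = 0.0
    history = []
    for _ in range(n_steps):
        discrepancy = goal - performance                 # compare perceived state to the goal
        effort = gain * discrepancy                      # output is proportional to the discrepancy
        performance += effort + rng.normal(0, noise_sd)  # environment turns effort into (noisy) change
        history.append(performance)
    return history

# Higher goals settle at higher performance in this toy loop (the goal-level effect)
for g in (0.5, 1.0, 2.0):
    print(f"goal={g}: final performance = {simulate_goal_pursuit(g)[-1]:.2f}")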
Journal of Applied Psychology | 2013
Dan J. Putka; Brian J. Hoffman
Though considerable research has evaluated the functioning of assessment center (AC) ratings, surprisingly little research has articulated and uniquely estimated the components of reliable and unreliable variance that underlie such ratings. The current study highlights limitations of existing research for estimating components of reliable and unreliable variance in AC ratings. It provides a comprehensive empirical decomposition of variance in AC ratings that: (a) explicitly accounts for assessee-, dimension-, exercise-, and assessor-related effects, (b) does so with 3 large sets of operational data from a multiyear AC program, and (c) avoids many analytic limitations and confounds that have plagued the AC literature to date. In doing so, results show that (a) the extant AC literature has masked the contribution of sizable, substantively meaningful sources of variance in AC ratings, (b) various forms of assessor bias largely appear trivial, and (c) there is far more systematic, nuanced variance present in AC ratings than previous research indicates. Furthermore, this study also illustrates how the composition of reliable and unreliable variance heavily depends on the level to which assessor ratings are aggregated (e.g., overall AC-level, dimension-level, exercise-level) and the generalizations one desires to make based on those ratings. The implications of this study for future AC research and practice are discussed.
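Schematically, such a decomposition treats each rating as a sum of crossed random effects; a simplified model statement (the study's operational designs were unbalanced and included further assessor-related terms) is

Y_{pdea} = \mu + \nu_p + \nu_d + \nu_e + \nu_a + \nu_{pd} + \nu_{pe} + \nu_{pa} + \nu_{de} + \ldots + \epsilon_{pdea},

where p indexes assessees, d dimensions, e exercises, and a assessors. Assessee-related components (\nu_p, \nu_{pd}, \nu_{pe}) capture general, dimension-specific, and exercise-specific performance variance, whereas assessor-related components (\nu_a, \nu_{pa}, \ldots) capture potential rater bias; which components count as reliable rather than error variance then depends on the level of aggregation and the intended generalization, as the abstract notes.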
Journal of Applied Psychology | 2005
Chad H. Van Iddekinge; Dan J. Putka; Patrick H. Raymark; Carl E. Eidson
The authors modeled sources of error variance in job specification ratings collected from 3 levels of raters across 5 organizations (N=381). Variance components models were used to estimate the variance in ratings attributable to true score (variance between knowledge, skills, abilities, and other characteristics [KSAOs]) and error (KSAO-by-rater and residual variance). Subsequent models partitioned error variance into components related to the organization, position level, and demographic characteristics of the raters. Analyses revealed that the differential ordering of KSAOs by raters was not a function of these characteristics but rather was due to unexplained rating differences among the raters. The implications of these results for job specification and validity transportability are discussed.
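In schematic form, the baseline model treats each rating as

Y_{kr} = \mu + \kappa_k + \rho_r + (\kappa\rho)_{kr} + \epsilon_{kr},

where \kappa_k is the KSAO (true score) effect, \rho_r is the rater main effect, and (\kappa\rho)_{kr} plus \epsilon_{kr} constitute the error described above; the follow-up models then ask how much of the KSAO-by-rater variance is explained by interacting the KSAO effect with rater organization, position level, and demographics. This is a hedged paraphrase of the modeling logic, not the article's exact specification.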
Organizational Research Methods | 2012
David J. Woehr; Dan J. Putka; Mark C. Bowler
For nearly three decades, the predominant approach to modeling the latent structure of multitrait–multimethod (MTMM) data in organizational research has involved confirmatory factor analysis (CFA). Despite the frequency with which CFA is used to model MTMM data, commonly used CFA models may produce ambiguous or even erroneous results. This article examines the potential of generalizability theory (G-theory) methods for modeling MTMM data and makes such methods more accessible to organizational researchers. Although G-theory methods have existed for more than half a century, the research literature has yet to provide a clear description and integration of latent models implied by univariate and multivariate G-theory with MTMM data, notions of construct validity, and CFA. To help fill this void, the authors first provide a jargon-free overview of the univariate and multivariate G-theory models and analytically demonstrate linkages between their parameters (variance and covariance components), elements of the MTMM matrices, indices of convergent and discriminant validity, and CFA. The authors conclude with a discussion and empirical illustration of a G-theory-based modeling process that helps clarify the use of G-theory methods for modeling MTMM data.
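One linkage of the kind the article demonstrates can be sketched for a fully crossed person (p) × trait (t) × method (m) random-effects design with equal variances across traits and methods; under that stylized setup the expected MTMM correlations are simple functions of the G-theory variance components:

r_{\text{monotrait-heteromethod}} = \frac{\sigma^2_p + \sigma^2_{pt}}{\sigma^2_p + \sigma^2_{pt} + \sigma^2_{pm} + \sigma^2_{ptm,e}}, \qquad r_{\text{heterotrait-monomethod}} = \frac{\sigma^2_p + \sigma^2_{pm}}{\sigma^2_p + \sigma^2_{pt} + \sigma^2_{pm} + \sigma^2_{ptm,e}}

Convergent validity thus rises with person and person-by-trait variance, whereas person-by-method variance inflates same-method correlations and attenuates cross-method ones.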
Journal of Management | 2015
Deborah E. Rupp; Brian J. Hoffman; David Bischof; William Byham; Lynn Collins; Alyssa Mitchell Gibbons; Shinichi Hirose; Martin Kleinmann; Martin Lanik; Duncan J. R. Jackson; M. S. Kim; Filip Lievens; Deon Meiring; Klaus G. Melchers; Vina G. Pendit; Dan J. Putka; Nigel Povah; Doug Reynolds; Sandra Schlebusch; John Scott; Svetlana Simonenko; George C. Thornton
The article presents professional guidelines and ethical considerations for the assessment center method. The guidelines will be of use to human resource management specialists and industrial and organizational consultants. Corporate social responsibility, legal compliance, and ethics are also explored.