
Publication

Featured research published by John E. Hunter.


Psychological Bulletin | 1998

The Validity and Utility of Selection Methods in Personnel Psychology: Practical and Theoretical Implications of 85 Years of Research Findings

Frank L. Schmidt; John E. Hunter

This article summarizes the practical and theoretical implications of 85 years of research in personnel selection. On the basis of meta-analytic findings, this article presents the validity of 19 selection procedures for predicting job performance and training performance and the validity of paired combinations of general mental ability (GMA) and the 18 other selection procedures. Overall, the 3 combinations with the highest multivariate validity and utility for job performance were GMA plus a work sample test (mean validity of .63), GMA plus an integrity test (mean validity of .65), and GMA plus a structured interview (mean validity of .63). A further advantage of the latter 2 combinations is that they can be used for both entry-level selection and selection of experienced employees. The practical utility implications of these summary findings are substantial. The implications of these research findings for the development of theories of job performance are discussed.

From the point of view of practical value, the most important property of a personnel assessment method is predictive validity: the ability to predict future job performance, job-related learning (such as amount of learning in training and development programs), and other criteria. The predictive validity coefficient is directly proportional to the practical economic value (utility) of the assessment method (Brogden, 1949; Schmidt, Hunter, McKenzie, & Muldrow, 1979). Use of hiring methods with increased predictive validity leads to substantial increases in employee performance as measured in percentage increases in output, increased monetary value of output, and increased learning of job-related skills (Hunter, Schmidt, & Judiesch, 1990). Today, the validity of different personnel measures can be determined with the aid of 85 years of research.

The most well-known conclusion from this research is that, for hiring employees without previous experience in the job, the most valid predictor of future performance and learning is general mental ability ([GMA], i.e., intelligence or general cognitive ability; Hunter & Hunter, 1984; Ree & Earles, 1992). GMA can be measured using commercially available tests. However, many other measures can also contribute to the overall validity of the selection process. These include, for example, measures of …
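
The article's point that the predictive validity coefficient is directly proportional to economic utility follows the Brogden-Cronbach-Gleser model it cites. A minimal sketch in Python; the dollar figures, tenure, and the r = .51 value for GMA alone are illustrative assumptions (only the .65 combination validity comes from the abstract):

```python
# Brogden-Cronbach-Gleser utility model (Brogden, 1949; Schmidt, Hunter,
# McKenzie, & Muldrow, 1979): expected dollar gain from a selection
# procedure is directly proportional to its validity coefficient.

def selection_utility(n_hired, tenure_years, validity, sd_y, mean_z_hired):
    """Expected gain in output value: N * T * r * SDy * z-bar,
    where z-bar is the mean predictor score (in standard-deviation
    units) of the applicants actually hired."""
    return n_hired * tenure_years * validity * sd_y * mean_z_hired

# Hypothetical figures: 100 hires, 5-year average tenure, SDy of $10,000,
# selective hiring so the mean z of those hired is 1.0.
gain_gma_alone = selection_utility(100, 5, 0.51, 10_000, 1.0)  # assumed r for GMA alone
gain_gma_combo = selection_utility(100, 5, 0.65, 10_000, 1.0)  # GMA + integrity test
print(gain_gma_combo - gain_gma_alone)  # utility gain from the higher-validity combination
```

Because utility is linear in the validity coefficient, even a modest increase in r translates into a proportional gain in the dollar value of output across all hires and their tenure.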


Psychological Bulletin | 1984

Validity and Utility of Alternative Predictors of Job Performance

John E. Hunter; Ronda F. Hunter

Meta-analysis of the cumulative research on various predictors of job performance shows that for entry-level jobs there is no predictor with validity equal to that of ability, which has a mean validity of .53. For selection on the basis of current job performance, the work sample test, with mean validity of .54, is slightly better. For federal entry-level jobs, substitution of an alternative predictor would cost from $3.12 billion (job tryout) to $15.89 billion per year (age). Hiring on ability has a utility of $15.61 billion per year, but affects minority groups adversely. Hiring on ability by quotas would decrease this utility by 5%. A third strategy—using a low cutoff score—would decrease utility by 83%. Using other predictors in conjunction with ability tests might improve validity and reduce adverse impact, but there is as yet no data base for studying this possibility.


Archive | 2004

Methods of Meta-Analysis

Frank L. Schmidt; John E. Hunter


Educational Researcher | 1986

Meta-analysis: Cumulating research findings across studies

John E. Hunter; Frank L. Schmidt; Gregg B. Jackson

Meta-analysis is a way of synthesizing previous research on a subject in order to assess what has already been learned, and even to derive new conclusions from the mass of already researched data. In the opinion of many social scientists, it offers hope for a truly cumulative social scientific knowledge.


Journal of Vocational Behavior | 1986

Cognitive ability, cognitive aptitudes, job knowledge, and job performance

John E. Hunter

This paper reviews the hundreds of studies showing that general cognitive ability predicts job performance in all jobs. The first section shows that general cognitive ability predicts supervisor ratings and training success. The second section shows that general cognitive ability predicts objective, rigorously content-valid work sample performance with even higher validity. Path analysis shows that much of this predictive power stems from the fact that general cognitive ability predicts job knowledge (r = .80 for civilian jobs) and job knowledge predicts job performance (r = .80). However, cognitive ability predicts performance beyond this value (r = .75 versus r = [.80][.80] = .64), verifying job analyses showing that most major cognitive skills are used in everyday work. The third section of the paper briefly reviews evidence showing that it is general cognitive ability and not specific cognitive aptitudes that predict performance.


Communication Research | 1993

Relationships Among Attitudes, Behavioral Intentions, and Behavior: A Meta-Analysis of Past Research, Part 2

Min-Sun Kim; John E. Hunter

In a recent meta-analysis of attitude-behavior research, the authors of this article found a strong overall attitude-behavior relationship (r = .79) when methodological artifacts are eliminated. The trend in A-B research, however, is to conceive of behavioral intentions (BI) as a mediator between attitudes (A) and behaviors (B). In this study, it is hypothesized that (a) the A-BI correlation would be higher than the A-B correlation, (b) the BI-B correlation would be higher than the A-B correlation, (c) the A-BI correlation would be higher than the BI-B correlation, (d) the variation in BI-B correlations would be greater than that of A-BI, and (e) attitudinal relevance would affect the magnitude of the A-BI correlation. A series of meta-analyses integrating the findings of 92 A-BI correlations (N = 16,785) and 47 BI-B correlations (N = 10,203), covering 19 specified categories and a variety of miscellaneous topics, was performed. The results were consistent with all five hypotheses. The theoretical and methodological implications are discussed.


International Journal of Selection and Assessment | 2000

Fixed Effects vs. Random Effects Meta-Analysis Models: Implications for Cumulative Research Knowledge

John E. Hunter; Frank L. Schmidt

Research conclusions in the social sciences are increasingly based on meta-analysis, making questions of the accuracy of meta-analysis critical to the integrity of the base of cumulative knowledge. Both fixed effects (FE) and random effects (RE) meta-analysis models have been used widely in published meta-analyses. This article shows that FE models typically manifest a substantial Type I bias in significance tests for mean effect sizes and for moderator variables (interactions), while RE models do not. Likewise, FE models, but not RE models, yield confidence intervals for mean effect sizes that are narrower than their nominal width, thereby overstating the degree of precision in meta-analysis findings. This article demonstrates analytically that these biases in FE procedures are large enough to create serious distortions in conclusions about cumulative knowledge in the research literature. We therefore recommend that RE methods routinely be employed in meta-analysis in preference to FE methods.


Journal of Applied Psychology | 1979

Impact of valid selection procedures on work-force productivity.

Frank L. Schmidt; John E. Hunter; Robert C. McKenzie; Tressie W. Muldrow


Journal of Applied Psychology | 1988

Job experience correlates of job performance.

Michael A. McDaniel; Frank L. Schmidt; John E. Hunter


Journal of Applied Psychology | 2006

Implications of Direct and Indirect Range Restriction for Meta-Analysis Methods and Findings

John E. Hunter; Frank L. Schmidt; Huy Le
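
The fixed- vs. random-effects contrast discussed above can be sketched numerically. This illustration uses the standard inverse-variance FE estimate and the DerSimonian-Laird tau² estimator for RE; it is not the specific Hunter-Schmidt procedure, and the study data are hypothetical:

```python
# Fixed-effects (FE) vs. random-effects (RE) mean effect size.
# RE adds an estimate of between-study variance (tau^2) to each study's
# sampling variance, which widens the confidence interval for the mean.

def meta_means(effects, variances):
    """Return (fe_mean, fe_var, re_mean, re_var) for a set of studies."""
    w = [1.0 / v for v in variances]
    sw = sum(w)
    fe_mean = sum(wi * e for wi, e in zip(w, effects)) / sw
    fe_var = 1.0 / sw  # FE sampling variance of the mean
    # DerSimonian-Laird estimate of between-study variance tau^2
    q = sum(wi * (e - fe_mean) ** 2 for wi, e in zip(w, effects))
    c = sw - sum(wi ** 2 for wi in w) / sw
    tau2 = max(0.0, (q - (len(effects) - 1)) / c)
    w_re = [1.0 / (v + tau2) for v in variances]
    re_mean = sum(wi * e for wi, e in zip(w_re, effects)) / sum(w_re)
    re_var = 1.0 / sum(w_re)  # RE sampling variance of the mean
    return fe_mean, fe_var, re_mean, re_var

# Five hypothetical study correlations with their sampling variances.
fe_m, fe_v, re_m, re_v = meta_means(
    [0.30, 0.55, 0.10, 0.45, 0.25],
    [0.010, 0.020, 0.015, 0.008, 0.012],
)
# Because these studies genuinely differ (tau^2 > 0), re_v exceeds fe_v:
# the FE confidence interval is too narrow, which is the overstated
# precision and Type I bias the article describes.
```

When the studies are truly homogeneous, tau² is estimated as 0 and the two models coincide; the bias arises precisely when real between-study variation exists but the FE model ignores it.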

Collaboration

Top co-authors of John E. Hunter:

Kenneth Pearlman
United States Office of Personnel Management

Ralph L. Levine
Michigan State University

Franklin J. Boster
Society of American Military Engineers