
Publications


Featured research published by Kenneth Pearlman.


Journal of Applied Psychology | 1981

Task Differences as Moderators of Aptitude Test Validity in Selection: A Red Herring

Frank L. Schmidt; John E. Hunter; Kenneth Pearlman

This article describes results of two studies, based on a total sample size of nearly 400,000, examining the traditional belief that between-job task differences cause aptitude tests to be valid for some jobs but not for others. Results indicate that aptitude tests are valid across jobs. The moderating effect of tasks is negligible even when jobs differ grossly in task makeup and is probably nonexistent when task differences are less extreme. These results have important implications for validity generalization, for the use of task-oriented job analysis in selection research, for criterion construction, for moderator research, and for proper interpretation of the Uniform Guidelines on Employee Selection Procedures. The philosophy of science and methodological assumptions historically underlying belief in the hypothesis that tasks are important moderators of test validities are examined and critiqued. It is concluded that the belief in this hypothesis can be traced to behaviorist assumptions introduced into personnel psychology in the early 1960s and that, in retrospect, these assumptions can be seen to be false.


Journal of Applied Psychology | 1993

Refinements in Validity Generalization Methods: Implications for the Situational Specificity Hypothesis

Frank L. Schmidt; Kenneth Law; John E. Hunter; Hannah R. Rothstein; Kenneth Pearlman; Michael A. McDaniel

Using a large database, this study examined three refinements of validity generalization procedures: (a) a more accurate procedure for correcting the residual SD for range restriction to estimate SDρ, (b) use of r̄ (the mean observed r) instead of individual study-observed rs in the formula for sampling error variance, and (c) removal of non-Pearson rs. The first procedure does not affect the amount of variance accounted for by artifacts. The addition of the second and third procedures increased the mean percentage of validity variance accounted for by artifacts from 70% to 82%, a 17% increase. The cumulative addition of all three procedures decreased the mean SDρ estimate from .150 to .106, a 29% decrease. Six additional variance-producing artifacts were identified that could not be corrected for. In light of these, we concluded that the obtained estimates of mean SDρ and mean validity variance accounted for were consistent with the hypothesis that the true mean SDρ value is close to zero. These findings provide further evidence against the situational specificity hypothesis.

The first published validity generalization research study (Schmidt & Hunter, 1977) hypothesized that if all sources of artifactual variance in cognitive test validities could be controlled methodologically through study design (e.g., construct validity of tests and criterion measures, computational errors) or corrected for (e.g., sampling error, measurement error), there might be no remaining variance in validities across settings. That is, not only would validity be generalizable based on 90% credibility values in the estimated true validity distributions, but all observed variance in validities would be shown to be artifactual, and the situational specificity hypothesis would be shown to be false even in its limited form. However, subsequent validity generalization research (e.g., Pearlman, Schmidt, & Hunter, 1980; Schmidt, Gast-Rosenberg, & Hunter, 1980; Schmidt, Hunter, Pearlman, & Shane, 1979) was based on data drawn from the general published and unpublished research literature, and therefore it was not possible to control or correct for the sources of artifactual variance that can generally be controlled for only through study design and execution (e.g., computational and typographical errors, study differences in criterion contamination). Not unexpectedly, many of these meta-analyses accounted for less than 100% of observed validity variance, and the average across studies was also less than 100% (e.g., see Pearlman et al., 1980; Schmidt et al., 1979). The conclusion that the validity of cognitive abilities tests in employment is generalizable is now widely accepted.
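
The refinements described above rest on the bare-bones Hunter-Schmidt computation: compare the observed variance of validity coefficients across studies with the variance that sampling error alone would be expected to produce. The following is a minimal sketch in Python, assuming the usual per-study sampling-error variance formula (1 - r̄²)² / (N - 1); the function name and the study data are hypothetical illustrations, not values from the paper.

def percent_variance_from_sampling_error(rs, ns):
    """Return (mean r, observed variance, sampling-error variance, percent
    of observed variance attributable to sampling error)."""
    total_n = sum(ns)
    # Sample-size-weighted mean observed validity
    r_bar = sum(n * r for r, n in zip(rs, ns)) / total_n
    # Sample-size-weighted observed variance of the validities
    var_obs = sum(n * (r - r_bar) ** 2 for r, n in zip(rs, ns)) / total_n
    # Expected sampling-error variance, computed from the mean r (refinement b)
    # rather than each study's own observed r, averaged across studies with
    # sample-size weights: (1 - r_bar^2)^2 / (N_i - 1)
    var_err = sum(n * (1 - r_bar ** 2) ** 2 / (n - 1) for n in ns) / total_n
    pct = 100.0 if var_obs == 0 else min(100.0, 100.0 * var_err / var_obs)
    return r_bar, var_obs, var_err, pct

# Hypothetical five-study mini meta-analysis
rs = [0.18, 0.31, 0.22, 0.40, 0.27]
ns = [68, 120, 85, 210, 95]
r_bar, var_obs, var_err, pct = percent_variance_from_sampling_error(rs, ns)
print(f"mean r = {r_bar:.3f}, observed var = {var_obs:.4f}, "
      f"sampling-error var = {var_err:.4f}, accounted for = {pct:.1f}%")

If the percentage approaches 100%, the remaining between-study variance is consistent with artifacts rather than true situational differences, which is the logic behind the situational specificity conclusion discussed in the abstract.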


Journal of Applied Psychology | 1980

Validity generalization results for tests used to predict job proficiency and training success in clerical occupations.

Kenneth Pearlman; Frank L. Schmidt; John E. Hunter


Personnel Psychology | 1985

FORTY QUESTIONS ABOUT VALIDITY GENERALIZATION AND META‐ANALYSIS: COMMENTARY ON FORTY QUESTIONS ABOUT VALIDITY GENERALIZATION AND META‐ANALYSIS

Frank L. Schmidt; John E. Hunter; Kenneth Pearlman; Hannah Rothstein Hirsh; Paul R. Sackett; Neal Schmitt; Mary L. Tenopyr; Jerard F. Kehoe; Sheldon Zedeck


Personnel Psychology | 1979

FURTHER TESTS OF THE SCHMIDT-HUNTER BAYESIAN VALIDITY GENERALIZATION PROCEDURE

Frank L. Schmidt; Kenneth Pearlman; John E. Hunter; Guy S. Shane


Personnel Psychology | 1982

ASSESSING THE ECONOMIC IMPACT OF PERSONNEL PROGRAMS ON WORKFORCE PRODUCTIVITY

Frank L. Schmidt; John E. Hunter; Kenneth Pearlman


Personnel Psychology | 2006

UNPROCTORED INTERNET TESTING IN EMPLOYMENT SETTINGS

Nancy T. Tippins; James Beaty; Fritz Drasgow; Wade M. Gibson; Kenneth Pearlman; Daniel O. Segall; William Shepherd


Psychological Bulletin | 1980

Job families: A review and discussion of their implications for personnel selection.

Kenneth Pearlman


Journal of Business and Psychology | 1996

An experimental test of the influence of selection procedures on fairness perceptions, attitudes about the organization, and job pursuit intentions

James W. Smither; Roger E. Millsap; Ronald W. Stoffey; Richard R. Reilly; Kenneth Pearlman


Journal of Applied Psychology | 1982

Progress in validity generalization: Comments on Callender and Osburn and further developments.

Frank L. Schmidt; John E. Hunter; Kenneth Pearlman

Collaboration


Dive into Kenneth Pearlman's collaborations.

Top Co-Authors

John E. Hunter, United States Office of Personnel Management
Neal Schmitt, Michigan State University
Sheldon Zedeck, University of California
Daniel O. Segall, Defense Manpower Data Center
Guy S. Shane, University of Baltimore