H. John Bernardin
Florida Atlantic University
Publications
Featured research published by H. John Bernardin.
Academy of Management Journal | 1995
Jeffrey S. Kane; H. John Bernardin; Peter Villanova; Joseph Peyrefitte
Research has identified rating leniency as one of the most troublesome of rating errors. Little is known about the extent to which the error is a stable rater tendency, although Guilford hypothesiz...
Academy of Management Journal | 1993
H. John Bernardin; Donna K. Cooke
A 1990 review of the research on the validity of integrity-honesty tests performed by the U.S. Congress's Office of Technology Assessment found no studies conducted by independent researchers in wh...
Group & Organization Management | 2005
H. John Bernardin; Peter Villanova
We report two studies using different methodologies to advance research in the understanding of rater self-efficacy. In the first study, an experimental investigation of the effects of a program designed to augment rater self-efficacy perceptions found that student raters provided with Self-Efficacy Training for Raters (SET-R) produced less elevated ratings after training and reported lower levels of performance appraisal discomfort based on responses to the Performance Appraisal Discomfort Scale (PADS). We also report the results of a survey conducted with appraisal participants to develop a rater self-efficacy scale with high fidelity to actual appraisal situations. Accordingly, appraisal circumstances that heighten raters' threat appraisals are described by four discrete behavioral dimensions. Two superordinate dimensions that exhibited significant differences between more and less experienced raters may suffice for a simpler representation of the rater self-efficacy domain. These findings hold promise for advancing efforts to understand rater self-efficacy.
Journal of Business and Psychology | 1990
H. John Bernardin; Joseph A. Orban
The effects of rating format and non-performance variables on rating leniency were studied in two law enforcement organizations. One of these variables, trust in the appraisal process, was defined as the extent to which a rater believes that fair and accurate appraisals will be made in the organization. A measure of trust in appraisal accounted for a significant proportion of variance in performance ratings. The purpose of appraisal (i.e., feedback or promotion) also accounted for rating variance. A mixed-standard rating format was less susceptible to the effects of the non-performance variables on leniency. Discussion centers on the usefulness of rater and organizational variables in performance appraisal research.
Human Resource Management Review | 1992
H. John Bernardin
An “analytic” framework is presented in the context of business-related trends that affect performance appraisal systems and their ultimate effectiveness. Survey data are reviewed to identify key areas for improvement in appraisal. A model of customer-based criterion development is presented that focuses on differentiated criteria, with quantity, quality, and other criteria defined by critical customers, and on the use of internal and external customers for appraisal on certain performance dimensions.
Academy of Management Journal | 1987
H. John Bernardin
The article discusses methods of measuring job-related discomfort. The author notes a number of obstacles to obtaining an accurate measure of negative emotions related to job performance, including...
Human Resource Management Review | 1995
H. John Bernardin; H.W. Hennessey; Joseph Peyrefitte
We examined a common expert witness theme in EEO cases: that rating bias in the form of ethnic, age, or gender differences in personnel decisions based on performance appraisals is moderated by criterion specificity or rating scale format. Few studies have investigated this issue, and the results do not support the position that more objective or specific assessment criteria result in smaller differences between groups based on age, gender, or ethnic classification.
Journal of Criminal Justice | 1992
Joan E. Pynes; H. John Bernardin
This article examines the utility of an assessment center for the selection of entry-level police officers. The assessment center successfully predicted both training academy and on-the-job performance. The predictive validities of the assessment center ratings were compared to the predictive validities of a paper-and-pencil cognitive ability test. The cognitive ability test outpredicted the assessment center ratings for training academy performance. However, the assessment center ratings outpredicted the cognitive ability test for on-the-job performance. Assuming a selection rate of 150 candidates for the cognitive ability test, there would have been adverse impact against the Black and Hispanic candidates. Basing selection on the assessment center results would have eliminated the adverse impact against Hispanics and increased the percentage of Blacks being selected. Despite the implementation costs associated with assessment centers, they are a viable alternative for selecting police officers.
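As background for the adverse impact comparison above, the sketch below illustrates the widely used four-fifths (80%) rule for screening a selection procedure for adverse impact. The group labels and applicant/hire counts are hypothetical illustrations and are not drawn from the study.

# Minimal sketch of the four-fifths (80%) rule for adverse impact screening.
# All group labels and counts are hypothetical, not data from the 1992 study.
def selection_rate(hired, applicants):
    return hired / applicants

def four_fifths_flags(rates):
    # Flag any group whose selection rate falls below 80% of the highest group's rate.
    highest = max(rates.values())
    return {group: (rate / highest) < 0.8 for group, rate in rates.items()}

rates = {
    "Group A": selection_rate(60, 100),  # hypothetical counts
    "Group B": selection_rate(25, 80),   # hypothetical counts
    "Group C": selection_rate(20, 70),   # hypothetical counts
}
print(four_fifths_flags(rates))  # True marks a group showing potential adverse impact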
Psychological Reports | 1987
H. John Bernardin
This study tested the hypothesis that “reciprocal leniency” moderated the relationship between Consideration scores on the Leader Behavior Description Questionnaire—Form XII and performance ratings. Reciprocal leniency was defined as a response style in which scores on the questionnaire are affected by harsh, lenient, or fair ratings made by the supervisor. Results partially supported the hypothesis.
Educational and Psychological Measurement | 1985
Lawrence D. Greene; H. John Bernardin; Jarold Abbott
It was noted that format comparisons for performance rating scales rarely report the most critical statistic for assessing score comparability: the disattenuated correlation of scores between formats. This study examined the degree to which scores on different formats are correlated after disattenuation. Corrected correlations ranged from .7 to above 1.00. While it appears the scales are measuring essentially the same thing, several cautionary statements are made regarding the interpretation of the results. The failure to consider the purpose of data collection in these studies was one major problem cited.
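For context, the disattenuated correlation referenced above is conventionally computed with Spearman's correction for attenuation; the reliability estimates used in this particular study are not reproduced here:

$r_{\text{disattenuated}} = \dfrac{r_{xy}}{\sqrt{r_{xx}\, r_{yy}}}$

where $r_{xy}$ is the observed correlation between scores on the two formats and $r_{xx}$, $r_{yy}$ are the reliabilities of each format's scores. Corrected values above 1.00, as reported here, can occur when sampling error or underestimated reliabilities inflate the correction.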