Publication


Featured research published by George C. Thornton.


Journal of Applied Psychology | 2003

Faking and Selection: Considering the Use of Personality From Select-In and Select-Out Perspectives

Rose A. Mueller-Hanson; Eric D. Heggestad; George C. Thornton

The effects of faking on criterion-related validity and the quality of selection decisions are examined in the present study by combining the control of an experiment with the realism of an applicant setting. Participants completed an achievement motivation measure in either a control group or an incentive group and then completed a performance task. With respect to validity, greater prediction error was found in the incentive condition among those with scores at the high end of the predictor distribution. When selection ratios were small, those in the incentive condition were more likely to be selected and had lower mean performance than those in the control group. Implications for using personality assessments from select-in and select-out strategies are discussed.
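
As a minimal illustration of the selection-ratio effect reported above, the following sketch (Python; the effect sizes and faking magnitudes are assumptions, not the authors' materials or data) simulates an applicant pool in which an incentive group inflates its predictor scores, then selects top-down at several selection ratios:

# Illustrative simulation of the faking effect described above; all
# parameter values are assumptions, not values from the study.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
trait = rng.standard_normal(n)                      # true achievement motivation
performance = 0.5 * trait + 0.87 * rng.standard_normal(n)

incentive = rng.random(n) < 0.5                     # half respond under an incentive
faking = np.where(incentive, np.abs(rng.normal(0.8, 0.4, n)), 0.0)
observed = trait + faking                           # faked scores inflate the predictor

for sr in (0.50, 0.10, 0.02):
    k = int(n * sr)
    selected = np.argsort(observed)[-k:]            # top-down selection on observed scores
    print(f"SR={sr:4.2f}: {incentive[selected].mean():5.1%} from incentive group, "
          f"mean performance of selected = {performance[selected].mean():+.2f}")

At small selection ratios the selected group is dominated by incentive-condition respondents and its mean performance drops, mirroring the pattern the study reports.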


Journal of Applied Psychology | 1992

Construct Validity of Self- and Peer Evaluations of Performance Dimensions in an Assessment Center

Ted H. Shore; Lynn M. Shore; George C. Thornton

The construct validity of final self- and peer evaluations in an assessment center was examined within a nomological network of conceptually related and unrelated variables. Data included self-, peer, and assessor evaluations, cognitive ability and personality measures, and job advancement. The evidence for construct validity was stronger for peer than for self-evaluations, and for more easily observable dimensions than for dimensions requiring greater inferential judgment. Self- and peer evaluations were associated with assessor ratings of management potential, whereas only peer evaluations predicted job advancement. Implications for the use of self- and peer evaluations in assessment centers and the need for further research are discussed.


International Journal of Selection and Assessment | 2006

Incremental Validity of Assessment Center Ratings over Cognitive Ability Tests: A Study at the Executive Management Level

Diana E. Krause; Martin Kersting; Eric D. Heggestad; George C. Thornton

Both tests of cognitive ability and assessment center (AC) ratings of various performance attributes have proven useful in personnel selection and promotion contexts. To be of theoretical or practical value, however, the AC method must show incremental predictive accuracy over cognitive ability tests, given the cost disparity between the two predictors. In the present study, we investigated this issue in the context of promoting managers in German police departments into a training academy for high-level executive positions. Candidates completed a set of cognitive ability tests and a 2-day AC. The criterion measure was the final grade at the police academy. Results indicated that AC ratings of managerial abilities were important predictors of training success, even after accounting for cognitive ability test scores. These results confirm that AC ratings make a unique contribution to the understanding and prediction of training performance for high-level executive positions, beyond cognitive ability tests.
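
Incremental validity of this kind is conventionally quantified as the gain in squared multiple correlation when AC ratings enter a hierarchical regression after cognitive ability. A minimal sketch on synthetic data (the coefficients below are illustrative assumptions, not the study's results):

# Hierarchical regression sketch: delta R^2 of AC ratings over cognitive ability.
# Synthetic data; effect sizes are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
n = 300
cognitive = rng.standard_normal(n)                     # cognitive ability test score
ac_rating = 0.3 * cognitive + rng.standard_normal(n)   # AC rating, partly overlapping
grade = 0.4 * cognitive + 0.3 * ac_rating + rng.standard_normal(n)  # academy grade

def r_squared(predictors, y):
    X = np.column_stack([np.ones(len(y))] + list(predictors))  # add intercept
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return 1 - (y - X @ beta).var() / y.var()

r2_step1 = r_squared([cognitive], grade)               # step 1: cognitive ability only
r2_step2 = r_squared([cognitive, ac_rating], grade)    # step 2: add AC ratings
print(f"R^2 cognitive only: {r2_step1:.3f}")
print(f"R^2 + AC ratings:   {r2_step2:.3f}  (delta R^2 = {r2_step2 - r2_step1:.3f})")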


Public Personnel Management | 2003

Ten Classic Assessment Center Errors: Challenges to Selection Validity

Cam Caldwell; George C. Thornton; Melissa L. Gruys

This paper summarizes 10 classic errors associated with the administration of assessment centers (ACs) for selection and promotion. The critical errors covered are: 1. Poor planning, 2. Inadequate job analysis, 3. Weakly defined dimensions, 4. Poor exercises, 5. No pre-test evaluations, 6. Unqualified assessors, 7. Inadequate assessor training, 8. Inadequate candidate preparation, 9. Sloppy behavior documentation and scoring, and 10. Misuse of results. The list of common errors is aimed at helping public human resource professionals assess the extent to which the assessment centers used by their jurisdictions comply with “best practices.” Reducing or eliminating these errors will allow municipalities to use ACs more efficiently and effectively for employee promotion and selection decisions.


Psychological Reports | 1990

Distinctiveness of Three Work Attitudes: Job Involvement, Organizational Commitment, and Career Salience

Ted H. Shore; George C. Thornton; Lynn M. Shore

Research on work commitment has treated organizational commitment, job involvement, and career salience as distinct constructs. The purpose of this study was to provide empirical evidence for the distinctiveness of these three constructs.


International Journal of Human Resource Management | 2009

Selection versus development assessment centers: an international survey of design, execution, and evaluation

George C. Thornton; Diana E. Krause

No recent survey documents differences in assessment center (AC) practices for selection and development in organizations in diverse countries. We analyze the design, execution, and evaluation of AC selection programs compared to development programs in a sample of 144 organizations in 18 countries. Our comparison identifies similarities and differences in job analysis techniques, dimensions (job requirements), observer pools, methods of assessor training, exercises and psychometric testing procedures, information provided to participants, and areas in which participants evaluate the program. Results show important differences between selection and development programs that may be explained by economic, legal, and social factors. We also point out features of ACs that should be designed differently to optimize each objective.


Journal of Management | 2015

Guidelines and ethical considerations for assessment center operations

Deborah E. Rupp; Brian J. Hoffman; David Bischof; William Byham; Lynn Collins; Alyssa Mitchell Gibbons; Shinichi Hirose; Martin Kleinmann; Martin Lanik; Duncan J. R. Jackson; M. S. Kim; Filip Lievens; Deon Meiring; Klaus G. Melchers; Vina G. Pendit; Dan J. Putka; Nigel Povah; Doug Reynolds; Sandra Schlebusch; John Scott; Svetlana Simonenko; George C. Thornton

The article presents professional guidelines and ethical considerations concerning the assessment center method. The guidelines will be of value to human resource management specialists and industrial and organizational consultants. The social responsibility of business, legal compliance, and ethics are also explored.


Law and Human Behavior | 2003

Organizational Downsizing and Age Discrimination Litigation: The Influence of Personnel Practices and Statistical Evidence on Litigation Outcomes

Peter H. Wingate; George C. Thornton; Kelly S. McIntyre; Jennifer H. Frame

The present study examined relationships between reduction-in-force (RIF) personnel practices, presentation of statistical evidence, and litigation outcomes. Policy capturing methods were used to analyze the components of 115 federal district court opinions involving age discrimination disparate treatment allegations and organizational downsizing. Univariate analyses revealed meaningful links between RIF personnel practices, use of statistical evidence, and judicial verdict. The defendant organization was awarded summary judgment in 73% of the claims included in the study. Judicial decisions in favor of the defendant organization were significantly related to such variables as formal performance appraisal systems, termination decision review within the organization, methods of employee assessment and selection for termination, and the presence of a concrete layoff policy. The use of statistical evidence in Age Discrimination in Employment Act (ADEA) disparate treatment litigation was investigated and found to be a potentially persuasive type of indirect evidence. Legal, personnel, and evidentiary ramifications are reviewed, and a framework of downsizing mechanics emphasizing legal defensibility is presented.


International Journal of Selection and Assessment | 2000

Higher Cost, Lower Validity and Higher Utility: Comparing the Utilities of Two Tests that Differ in Validity, Costs and Selectivity

George C. Thornton; Kevin R. Murphy; Tina M. Everest; Calvin C. Hoffman

Traditional approaches to comparing the utility of two tests have not systematically considered the effects of the different levels of selectivity that are feasible and appropriate in various selection situations. For example, employers who hope to avoid adverse impact often find they can be more selective with some tests than with others. We conducted two studies to compare the utilities of two tests that differ in costs, validity, and the feasible level of selectivity. First, an analytical solution was derived from a standard formula for utility. This analysis showed that, for both fixed and variable hiring costs, a higher-cost, lower-validity procedure can have higher utility than a lower-cost, higher-validity procedure when the selection ratios permissible under the two procedures are sufficiently (yet realistically) different. Second, using a computer simulation method, several combinations of the critical variables were varied systematically to detect the limits of this effect in a finite set of specific selection situations. The results showed that more severe levels of adverse impact greatly reduced the utility of a written test with relatively high validity and low cost in comparison with an assessment center with lower validity and higher cost. Both studies showed that considering selectivity can yield surprising conclusions about the comparative utility of two tests: even if one test has lower validity and higher cost than a second test, the first may yield higher utility if it allows the organization to exercise stricter levels of selectivity.
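
A standard utility formula of the kind referenced above is the Brogden-Cronbach-Gleser model, in which the payoff of a selection procedure scales with its validity and with the mean standardized predictor score of those selected, which in turn depends on the selection ratio. A minimal sketch of the trade-off (all dollar figures, validities, and selection ratios below are illustrative assumptions, not the study's values):

# Brogden-Cronbach-Gleser utility: N_s * T * r * SD_y * (phi(z)/SR) - applicants * cost.
# Illustrative parameter values only.
from math import exp, pi, sqrt
from statistics import NormalDist

def utility(validity, cost_per_applicant, selection_ratio,
            n_selected=10, tenure_years=5.0, sd_y=40_000.0):
    z_cut = NormalDist().inv_cdf(1 - selection_ratio)   # cutoff on the predictor (z-score)
    ordinate = exp(-z_cut ** 2 / 2) / sqrt(2 * pi)      # normal density at the cutoff
    mean_z_selected = ordinate / selection_ratio        # mean z-score of those selected
    n_applicants = n_selected / selection_ratio
    return (n_selected * tenure_years * validity * sd_y * mean_z_selected
            - n_applicants * cost_per_applicant)

# Cheap, more valid written test restricted to a lenient selection ratio
# versus a costly, less valid assessment center that permits strict selection:
print(f"written test:      ${utility(0.45, 50, selection_ratio=0.50):>12,.0f}")
print(f"assessment center: ${utility(0.35, 1500, selection_ratio=0.05):>12,.0f}")

Under these assumed numbers the stricter selection ratio more than compensates for the assessment center's lower validity and higher cost, which is the qualitative effect both studies demonstrate.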


Educational and Psychological Measurement | 1992

Development and Validation of a Measure of Attitudes toward Employee Drug Testing

Kevin R. Murphy; George C. Thornton

Employee drug testing, although widespread, is still controversial. In this study, a 19-item scale measuring attitudes toward employee drug testing, based on responses to actual drug testing policies and practices, was developed. The obtained coefficient alpha for this scale was .90. Results of a maximum-likelihood confirmatory factor analysis indicated a four-factor structure (one general and three content-oriented factors). Evidence of criterion-related validity and discriminant validity was offered.
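
Coefficient alpha for a k-item scale is the internal-consistency estimate alpha = k/(k-1) * (1 - sum of item variances / variance of scale totals). A minimal sketch on random placeholder data (not the study's responses):

# Coefficient (Cronbach's) alpha for a 19-item scale; placeholder data only.
import numpy as np

def cronbach_alpha(items):
    """items: (n_respondents, k_items) array of item scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()    # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)      # variance of total scores
    return (k / (k - 1)) * (1 - item_vars / total_var)

rng = np.random.default_rng(2)
latent = rng.standard_normal((200, 1))                # one common attitude factor
responses = latent + rng.standard_normal((200, 19))   # 19 items sharing the factor
print(f"alpha = {cronbach_alpha(responses):.2f}")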

Collaboration


George C. Thornton's most frequent co-authors:

Lynn M. Shore

Colorado State University

Ted H. Shore

Kennesaw State University

Diana E. Krause

Alpen-Adria-Universität Klagenfurt
