
Publications


Featured research published by Anton J. Villado.


Journal of Applied Psychology | 2006

The use of person-organization fit in employment decision making: an assessment of its criterion-related validity.

Winfred Arthur; Suzanne T. Bell; Anton J. Villado; Dennis Doverspike

Because measures of person-organization (P-O) fit are accountable to the same psychometric and legal standards used for other employment tests when they are used for personnel decision making, the authors assessed the criterion-related validity of P-O fit as a predictor of job performance and turnover. Meta-analyses resulted in estimated true criterion-related validities of .15 (k = 36, N = 5,377) for P-O fit as a predictor of job performance and .24 (k = 8, N = 2,476) as a predictor of turnover, compared with a stronger effect of .31 (k = 109, N = 108,328) for the more commonly studied relation between P-O fit and work attitudes. In contrast to the relations between P-O fit and work attitudes, the lower 95% credibility values for the job performance and turnover relations included zero. In addition, P-O fit's relations with job performance and turnover were partially mediated by work attitudes. Potential concerns pertaining to the use of P-O fit in employment decision making are discussed in light of these results.
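As a rough illustration of where such estimates and credibility intervals come from, the Python sketch below computes a Hunter-Schmidt-style "bare bones" sample-size-weighted mean validity and 95% credibility interval. The correlations and sample sizes are hypothetical, not the study's data, and the sketch omits the artifact corrections (e.g., for measurement unreliability) that a full psychometric meta-analysis like this one applies.

```python
# Minimal "bare bones" meta-analysis sketch in the Hunter-Schmidt
# tradition. The correlations (rs) and sample sizes (ns) below are
# HYPOTHETICAL, and artifact corrections are omitted.
import math

def bare_bones_meta(rs, ns):
    """Sample-size-weighted mean r and a 95% credibility interval."""
    total_n = sum(ns)
    r_bar = sum(n * r for r, n in zip(rs, ns)) / total_n
    # Sample-size-weighted observed variance of the correlations
    var_obs = sum(n * (r - r_bar) ** 2 for r, n in zip(rs, ns)) / total_n
    # Expected sampling-error variance for an average-sized study
    n_bar = total_n / len(ns)
    var_err = (1 - r_bar ** 2) ** 2 / (n_bar - 1)
    sd_res = math.sqrt(max(var_obs - var_err, 0.0))
    return r_bar, (r_bar - 1.96 * sd_res, r_bar + 1.96 * sd_res)

# Hypothetical primary studies of the P-O fit / job performance relation
rs = [0.05, 0.12, 0.18, 0.22, 0.09]
ns = [150, 220, 310, 95, 180]
r_bar, (lo, hi) = bare_bones_meta(rs, ns)
print(f"mean r = {r_bar:.2f}, 95% credibility interval = [{lo:.2f}, {hi:.2f}]")
# A lower credibility value at or below zero is the pattern the abstract
# reports for the job performance and turnover relations.
```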


Journal of Applied Psychology | 2008

The importance of distinguishing between constructs and methods when comparing predictors in personnel selection research and practice.

Winfred Arthur; Anton J. Villado

The authors highlight the importance, and discuss the criticality, of distinguishing between constructs and methods when comparing predictors. They note that comparing constructs with methods in comparative evaluations of predictors results in outcomes that are theoretically and conceptually uninterpretable and thus potentially misleading. The theoretical and practical implications of the distinction between predictor constructs and predictor methods are discussed, with three important streams of personnel psychology research being used to frame this discussion. Researchers, editors, reviewers, educators, and consumers of research are urged to carefully consider the extent to which the construct-method distinction is made and maintained in their own research and that of others, especially when predictors are being compared. It is hoped that this discussion will reorient researchers and practitioners toward a more construct-oriented approach that is aligned with a scientific emphasis in personnel selection research and practice.


Human Factors | 2005

Team task analysis: identifying tasks and jobs that are team based.

Winfred Arthur; Bryan D. Edwards; Suzanne T. Bell; Anton J. Villado; Winston Bennett

This paper presents initial information on the development and validation of three team task analysis scales. These scales were designed to quantitatively assess the extent to which a group of tasks or a job is team based. During a 2-week period, 52 male students working in 4-person teams were trained to perform a complex, highly interdependent computer-simulated combat mission consisting of both individual- and team-based tasks. Our results indicated that the scales demonstrated high levels of interrater agreement. In addition, the scales differentiated between tasks that were predetermined to be individual versus team based. Finally, the results indicated that job-level ratings of team workflow were more strongly related to team performance than were aggregated task-level ratings of team relatedness or team workflow. These results suggest that the scales presented here are an effective means of quantifying the extent to which tasks or jobs are team based. A research and practical implication of our findings is that the team task analysis scales could serve as criterion measures in the evaluation of team training interventions or as predictors of team performance.
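The abstract reports high interrater agreement for the new scales. As one illustration of how such agreement is commonly quantified, here is a minimal sketch of the James, Demaree, and Wolf (1984) r_WG index against a uniform null distribution; the ratings are hypothetical, and the paper may well have used a different agreement statistic.

```python
# Minimal sketch of the single-item r_WG interrater agreement index
# (James, Demaree, & Wolf, 1984). Ratings are HYPOTHETICAL.
from statistics import pvariance

def r_wg(ratings, n_options):
    """r_WG = 1 - (observed variance / uniform-null expected variance)."""
    sigma_eu_sq = (n_options ** 2 - 1) / 12  # variance of a uniform null
    return 1 - pvariance(ratings) / sigma_eu_sq

# Four hypothetical raters judge one task's team relatedness on a
# 5-point scale; values near 1.0 indicate strong agreement.
print(round(r_wg([4, 4, 5, 4], n_options=5), 2))  # -> 0.91
```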


International Journal of Selection and Assessment | 2010

The Magnitude and Extent of Cheating and Response Distortion Effects on Unproctored Internet-Based Tests of Cognitive Ability and Personality

Winfred Arthur; Ryan M. Glaze; Anton J. Villado; Jason E. Taylor

The use of unproctored internet-based testing (UIT) for employee selection is quite widespread. Although this mode of testing has advantages over onsite testing, researchers and practitioners continue to be concerned about potential malfeasance (e.g., cheating and response distortion) under high-stakes conditions. Therefore, the primary objective of the present study was to investigate the magnitude and extent of high- and low-stakes retest effects on the scores of a UIT speeded cognitive ability test and two UIT personality measures. These data permitted inferences about the magnitude and extent of malfeasant responding. The study objectives were accomplished by implementing two within-subjects design studies (Study 1, N = 296; Study 2, N = 318) in which test takers first completed the tests as job applicants (high-stakes) or incumbents (low-stakes) and then as research participants (low-stakes). For the speeded cognitive ability measure, the pattern of test score differences was more consonant with a psychometric practice effect than with a malfeasance explanation, a result likely due to the speeded nature of the test. For the UIT personality measures, the pattern of higher high-stakes scores compared with lower low-stakes scores is similar to that reported for proctored tests in the extant literature. Thus, our results indicate that UIT administration does not uniquely threaten personality measures: high-stakes score elevation was no greater than that observed for proctored tests in the extant literature.
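As a sketch of the core comparison in such within-subjects retest designs, the snippet below computes a paired effect size (mean difference divided by the standard deviation of the differences) contrasting high-stakes and low-stakes scores. All scores are hypothetical, and the study's actual analyses were more extensive than this.

```python
# Rough sketch of a within-subjects retest comparison: a paired
# effect size contrasting high- and low-stakes scores. Data are
# HYPOTHETICAL, not the study's.
import math

def paired_d(high, low):
    """Cohen's d for paired scores: mean difference / SD of differences."""
    diffs = [h - l for h, l in zip(high, low)]
    mean_d = sum(diffs) / len(diffs)
    var_d = sum((d - mean_d) ** 2 for d in diffs) / (len(diffs) - 1)
    return mean_d / math.sqrt(var_d)

# Hypothetical personality scale scores for five test takers under
# high-stakes (applicant) and later low-stakes (research) conditions.
high = [4.2, 4.5, 3.9, 4.8, 4.1]
low = [3.8, 4.6, 3.5, 4.4, 4.0]
print(f"d = {paired_d(high, low):.2f}")  # positive d = higher high-stakes scores
```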


Human Performance | 2010

The Effect of Distributed Practice on Immediate Posttraining and Long-Term Performance on a Complex Command-and-Control Simulation Task

Winfred Arthur; Eric Anthony Day; Anton J. Villado; Paul R. Boatman; Vanessa Kowollik; Winston Bennett; Alok Bhupatkar

Using 192 paid participants who trained on a command-and-control microworld simulation, we examined the comparative effectiveness of two distributed practice schedules in enhancing performance at the end of training as well as after an 8-week nonuse period. Longer interstudy intervals (10 hr of practice over 2 weeks) led to higher levels of skill at the end of training and after nonuse than shorter interstudy intervals (10 hr of practice over 1 week). The study begins to address gaps in the skill retention literature by using a cognitively complex task and an extended nonuse interval. The primary implication of our findings is that scheduling longer interstudy practice intervals is a viable means of enhancing immediate posttraining performance and promoting long-term skill retention for cognitively complex tasks.


Human Factors | 2012

Team Task Analysis: Differentiating Between Tasks Using Team Relatedness and Team Workflow as Metrics of Team Task Interdependence

Winfred Arthur; Ryan M. Glaze; Alok Bhupatkar; Anton J. Villado; Winston Bennett; Leah J. Rowe

Objective: As a constructive replication and extension of Arthur, Edwards, Bell, Villado, and Bennett (2005), the objective of the current study was to further investigate the efficacy of team relatedness and team workflow ratings (along with their composite) as metrics of interdependence. Background: Although an analysis of task and job interdependence has important implications and uses in domains such as job design, selection, and training, the job analysis literature has been slow to develop an effective method of identifying team-based tasks and jobs. Method: To achieve the study's objectives, 140 F-16 fighter pilots (35 four-person teams) rated 34 task and activity statements in terms of their team relatedness and team workflow. Results: The results indicated that team relatedness and team workflow effectively differentiated between tasks with varying levels of interdependency (as identified by instructor pilots who served as subject matter experts) within the same job. In addition, teams that accurately perceived the level of interdependency performed better in a four-ship F-16 flight-training program than those that did not. Conclusion: Team relatedness and team workflow ratings can effectively differentiate between tasks with varying levels of interdependency. Application: Like traditional individual task or job analysis, this information can serve as the basis for specified human resource functions and interventions, and as diagnostic indicators.


Journal of Applied Psychology | 2013

The comparative effect of subjective and objective after-action reviews on team performance on a complex task.

Anton J. Villado; Winfred Arthur


Industrial and Organizational Psychology | 2009

Unproctored Internet-Based Tests of Cognitive Ability and Personality: Magnitude of Cheating and Response Distortion

Winfred Arthur; Ryan M. Glaze; Anton J. Villado; Jason E. Taylor


Archive | 2007

Rail Crew Resource Management (CRM): The Business Case for CRM Training in the Railroad Industry

Stephen S. Roop; Curtis A. Morgan; Tobin B. Kyte; Winfred Arthur; Anton J. Villado; Ted Beneigh


Archive | 2007

Decay, Transfer, and the Reacquisition of a Complex Skill: An Investigation of Practice Schedules, Observational Rehearsal, and Individual Differences

Winfred Arthur; Eric Anthony Day; Anton J. Villado; Paul R. Boatman; Vanessa Kowollik; Winston Bennett; Alok Bhupatkar

Collaboration


Dive into Anton J. Villado's collaboration.

Top Co-Authors

Winston Bennett

Air Force Research Laboratory