Publication


Featured research published by Duncan J. R. Jackson.


Disaster Prevention and Management | 2002

Developing disaster management capability: an assessment centre approach

Douglas Paton; Duncan J. R. Jackson

Fundamental to disaster readiness planning is developing training strategies to compensate for the limited opportunities available for acquiring actual disaster response experience. With regard to communication, decision making and integrated emergency management response, the need to develop mental models capable of reconciling knowledge of multiple goals with the collective expertise of those responding represents a significant challenge for training. This paper explores the utility of the assessment centre as a developmental resource capable of achieving this goal. In addition to providing multiple, expertly evaluated simulations to facilitate the development and practice of specific skills, the ability of assessment centre methodology to promote tacit knowledge and self‐efficacy renders it an appropriate vehicle for developing the mental models that underpin the core disaster management competencies of situational awareness and naturalistic and team decision making.


Human Performance | 2005

Rating Tasks Versus Dimensions in Assessment Centers: A Psychometric Comparison

Duncan J. R. Jackson; Jennifer A. Stillman; Stephen G. Atkins

Assessment centers (ACs) have been widely criticized on the basis of measurement problems throughout the literature dating back to 1982. This study investigates whether an alternative to the prevailing trait paradigm would provide a more sensible treatment of AC ratings. All data were obtained in a real-world AC from the behavioral responses of 187 participants. Two paradigms of assessment were compared in a repeated measures design. The first model treated the AC data as though they comprised situationally specific behavioral samples. The second, more traditional model treated the data as though they were indicative of trait-based responses. Using generalizability theory, factor analysis, and confirmatory factor analysis, both models demonstrated similar psychometric characteristics, although only data treated under the situationally specific model held a conceptual justification in this study. These findings suggest that the situationally specific task-based model presents a more appropriate means by which to treat AC ratings in practice.


International Journal of Selection and Assessment | 2010

Task-Based Assessment Centers: Empirical Support for a Systems Model

Duncan J. R. Jackson; Jennifer A. Stillman; Paul Englert

Task-based assessment centers (TBACs) have been suggested to hold promise for practitioners and users of real-world ACs. However, a theoretical understanding of this approach is lacking in the literature, which leads to misunderstandings. The present study tested aspects of a systems model empirically, to help elucidate TBACs and explore their inner workings. When applied to data from an AC completed by 214 managers, canonical correlation analysis revealed that extraversion, abstract reasoning, and verbal reasoning, conceptualized as inputs into a system, explained around 21% of variance in manifest assessment center behavior. Behavior, in this regard, was found to consist of both general and situationally specific elements. Results are discussed in terms of their support for a systems model and as they pertain to the literature on TBACs.


Journal of Management | 2015

Guidelines and ethical considerations for assessment center operations

Deborah E. Rupp; Brian J. Hoffman; David Bischof; William Byham; Lynn Collins; Alyssa Mitchell Gibbons; Shinichi Hirose; Martin Kleinmann; Martin Lanik; Duncan J. R. Jackson; M. S. Kim; Filip Lievens; Deon Meiring; Klaus G. Melchers; Vina G. Pendit; Dan J. Putka; Nigel Povah; Doug Reynolds; Sandra Schlebusch; John Scott; Svetlana Simonenko; George C. Thornton

The article presents professional guidelines and ethical considerations concerning the assessment center method. The guidelines will be beneficial to human resource management specialists and to industrial and organizational consultants. The social responsibility of businesses, their legal compliance, and ethics are also explored.


Human Performance | 2007

When Traits Are Behaviors: The Relationship Between Behavioral Responses and Trait-Based Overall Assessment Center Ratings

Duncan J. R. Jackson; Andrew R. Barney; Jennifer A. Stillman; William Walton Kirkley

Interest in exercise effects commonly observed in assessment centers (ACs) has resurfaced with Lance, Lambert, Gewin, Lievens, and Conway's (2004) study. The study presented here addressed the construct validity puzzle associated with ACs by investigating whether traditional trait-based overall assessment ratings (OARs) could be explained by behavioral performance on exercises. In a sample of 208 job applicants from a real-world AC, it was found that the multivariate combination of scores from three behavioral checklists explained around 90% (p < .001) of the variance in supposedly trait-based OARs. This study adds to the AC literature by suggesting that traditional OARs are predictive of work outcomes because they reflect exercise-specific behavioral performance rather than trait-based assessments. If this is the case, validity and efficiency are best served by abandoning redundant trait ratings (dimensions) in favor of more direct behavioral ratings.


Public Personnel Management | 2005

Frame of Reference Training for Assessment Centers: Effects on Interrater Reliability When Rating Behaviors and Ability Traits

Duncan J. R. Jackson; Stephen G. Atkins; Richard B. Fletcher; Jennifer A. Stillman

Assessment centers have been widely criticized on the basis of measurement problems. The present study examined the extent to which Frame of Reference (FOR) training would increase the interrater reliability of assessment center ratings provided by non-psychologist assessors. Five managerial assessors (with no psychological training) rated the behavior and the ability traits of a contrived participant on the basis of behaviors described in two alternative vignettes (detailing critical incidents of job performance). Ratings were obtained both before and after FOR training. Agreement among assessors on their assessments of both behaviors and traits increased following the FOR training procedure. The implications of increasing the precision of assessment center ratings are discussed.


Journal of Applied Psychology | 2016

Everything that you have ever been told about assessment center ratings is confounded.

Duncan J. R. Jackson; George Michaelides; Chris Dewberry; Young-Jae Kim

Despite a substantial research literature on the influence of dimensions and exercises in assessment centers (ACs), the relative impact of these 2 sources of variance continues to raise uncertainties because of confounding. With confounded effects, it is not possible to establish the degree to which any 1 effect, including those related to exercises and dimensions, influences AC ratings. In the current study (N = 698) we used Bayesian generalizability theory to unconfound all of the possible effects contributing to variance in AC ratings. Our results show that ≤1.11% of the variance in AC ratings was directly attributable to behavioral dimensions, suggesting that dimension-related effects have no practical impact on the reliability of ACs. Even when taking aggregation level into consideration, effects related to general performance and exercises accounted for almost all of the reliable variance in AC ratings. The implications of these findings for recent dimension- and exercise-based perspectives on ACs are discussed.
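The unconfounding question above comes down to variance decomposition. As a rough, hypothetical illustration (entirely simulated data, classical expected mean squares rather than the paper's Bayesian analysis), the sketch below estimates person, exercise, and residual variance components from a fully crossed person × exercise design. Note that with one rating per cell, the person × exercise interaction is confounded with residual error, which echoes the confounding problem the abstract describes.

```python
# Hedged sketch (not the paper's code): person x exercise variance
# decomposition in the spirit of generalizability theory, via expected
# mean squares for a fully crossed random-effects design.
import random

random.seed(42)

N_PERSONS, N_EXERCISES = 500, 6
VAR_P, VAR_E, VAR_RES = 1.0, 0.5, 1.0  # simulated "true" components

# Simulate ratings: person effect + exercise effect + residual noise.
p_eff = [random.gauss(0, VAR_P ** 0.5) for _ in range(N_PERSONS)]
e_eff = [random.gauss(0, VAR_E ** 0.5) for _ in range(N_EXERCISES)]
ratings = [[p_eff[i] + e_eff[j] + random.gauss(0, VAR_RES ** 0.5)
            for j in range(N_EXERCISES)] for i in range(N_PERSONS)]

grand = sum(sum(row) for row in ratings) / (N_PERSONS * N_EXERCISES)
p_means = [sum(row) / N_EXERCISES for row in ratings]
e_means = [sum(ratings[i][j] for i in range(N_PERSONS)) / N_PERSONS
           for j in range(N_EXERCISES)]

# Sums of squares and mean squares for each facet.
ss_p = N_EXERCISES * sum((m - grand) ** 2 for m in p_means)
ss_e = N_PERSONS * sum((m - grand) ** 2 for m in e_means)
ss_tot = sum((x - grand) ** 2 for row in ratings for x in row)
ss_res = ss_tot - ss_p - ss_e

ms_p = ss_p / (N_PERSONS - 1)
ms_e = ss_e / (N_EXERCISES - 1)
ms_res = ss_res / ((N_PERSONS - 1) * (N_EXERCISES - 1))

# Expected-mean-square solutions for the variance components.
var_res_hat = ms_res  # interaction + error (confounded with one obs/cell)
var_p_hat = (ms_p - ms_res) / N_EXERCISES   # general performance
var_e_hat = (ms_e - ms_res) / N_PERSONS     # exercise (situation) effect

print(f"person={var_p_hat:.2f} exercise={var_e_hat:.2f} "
      f"residual={var_res_hat:.2f}")
```

With many persons the person and residual components are recovered closely, while the exercise component is noisy because only six exercises are sampled, which is one reason small AC designs leave effects hard to pin down.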


Journal of Occupational and Organizational Psychology | 2005

A detection theory approach to the evaluation of assessors in assessment centres

Jennifer A. Stillman; Duncan J. R. Jackson

The ratings given to job applicants in an assessment centre (AC) will be influenced both by an assessor's sensitivity to the evidence of suitability provided by the applicants, and by whether the assessor has a liberal or a conservative rating tendency. In this study we explore the usefulness of signal detection methodology for evaluating and comparing both aspects of performance. A group of eight managerial assessors in a real-world AC rated 195 applicants for retail sales positions. The sensitivity and response bias of assessors was evaluated using receiver operating characteristic (ROC) analyses, and their performance was evaluated by means of critical operating characteristic (COC) analyses. RscorePlus software (Harvey, 2002) was used for these purposes. We conclude that, in this and similar organizational contexts, such analyses are potentially useful for estimating and comparing the performances of assessors, thereby highlighting the need for, and subsequently evaluating the effectiveness of, any remedial intervention.
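As a minimal, hedged illustration of the signal detection quantities the abstract refers to (synthetic counts, not the study's data, and assuming the standard equal-variance Gaussian model rather than the RscorePlus analyses used in the paper), an assessor's sensitivity (d′) and response bias (criterion c) can be computed from hit and false-alarm rates:

```python
# Hedged illustration: equal-variance Gaussian signal detection indices
# for a single assessor's accept/reject decisions (synthetic counts).
from statistics import NormalDist

def sdt_indices(hits, misses, false_alarms, correct_rejections):
    """Return (d_prime, criterion_c) for one assessor.

    "Signal" trials are genuinely suitable applicants: a hit is rating
    a suitable applicant as suitable; a false alarm is rating an
    unsuitable applicant as suitable.
    """
    z = NormalDist().inv_cdf  # inverse standard-normal CDF
    hit_rate = hits / (hits + misses)
    fa_rate = false_alarms / (false_alarms + correct_rejections)
    d_prime = z(hit_rate) - z(fa_rate)            # sensitivity
    criterion = -0.5 * (z(hit_rate) + z(fa_rate))  # >0 = conservative
    return d_prime, criterion

# A sensitive, mildly conservative assessor vs. a liberal one:
print(sdt_indices(hits=40, misses=10, false_alarms=5, correct_rejections=45))
print(sdt_indices(hits=48, misses=2, false_alarms=30, correct_rejections=20))
```

Separating d′ from the criterion is exactly what lets one distinguish an assessor who cannot discriminate suitable from unsuitable applicants from one who simply rates everyone leniently or harshly.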


Journal of Occupational and Organizational Psychology | 2017

The internal structure of situational judgement tests reflects candidate main effects: Not dimensions or situations

Duncan J. R. Jackson; Alexander C. LoPilato; Dan Hughes; Nigel Guenole; Ali Shalfrooshan

Despite their popularity and capacity to predict performance, there is no clear consensus on the internal measurement characteristics of situational judgement tests (SJTs). Contemporary propositions in the literature focus on treating SJTs as methods, as measures of dimensions, or as measures of situational responses. However, empirical evidence relating to the internal structure of SJT scores is lacking. Using generalizability theory, we decomposed multiple sources of variance for three different SJTs used with different samples of job candidates (N1 = 2,320; N2 = 989; N3 = 7,934). Results consistently indicated that (a) the vast majority of reliable observed score variance reflected SJT-specific candidate main effects, analogous to a general judgment factor, and that (b) the contribution of dimensions and situations to reliable SJT variance was, in relative terms, negligible. These findings do not align neatly with any of the proposals in the contemporary literature; however, they do suggest an internal structure for SJTs.


Computers in Human Behavior | 2016

Simulating Déjà Vu

Duncan J. R. Jackson; Sahangsoon Kim; Choonwoo Lee; Youngjun Choi; Jihee Song

Video games offer a unique and flexible virtual environment in which to study human performance in response to virtual situational characteristics. In an experimental design, participants in the current study were presented with two conditions in an action video game environment. In Condition 1, the same virtual situation was presented on three occasions. In Condition 2, three different virtual situations were presented. Results revealed that person × situation interactions were of a notable magnitude regardless of whether the same or different situations were presented to participants, suggesting the presence of intraindividual effects across occasions. However, a general performance effect was only identifiable to a meaningful extent when different situations were presented (i.e., in Condition 2 only), suggesting that the presence of different situations is necessary for participants to exhibit general performance variability. Highlights: we present conditions of the same versus different virtual situations; person × situation effects occur regardless of condition; general performance effects depend on the presence of different situations.
