Alyssa Mitchell Gibbons
Colorado State University
Publication
Featured research published by Alyssa Mitchell Gibbons.
Journal of Occupational Health Psychology | 2016
Gwenith G. Fisher; Russell A. Matthews; Alyssa Mitchell Gibbons
The validity of organizational research relies on strong research methods, which include effective measurement of psychological constructs. The general consensus is that multiple-item measures have better psychometric properties than single-item measures. However, due to practical constraints (e.g., survey length, respondent burden) there are situations in which certain single items may be useful for capturing information about constructs that might otherwise go unmeasured. We evaluated 37 items, including 18 newly developed items as well as 19 single items selected from existing multiple-item scales based on psychometric characteristics, to assess 18 constructs frequently measured in organizational and occupational health psychology research. We examined evidence of reliability; convergent, discriminant, and content validity; and test-retest reliability at 1- and 3-month time lags for single-item measures, using a multistage and multisource validation strategy across 3 studies, including data from N = 17 occupational health subject matter experts and N = 1,634 survey respondents across 2 samples. Items selected from existing scales generally demonstrated better internal consistency reliability and convergent validity, whereas the newly developed items generally had higher levels of content validity. We offer recommendations regarding when the use of single items may be more or less appropriate, identifying 11 items that appear acceptable, 14 items that might be used with caution due to mixed results, and 12 items we do not recommend using as single-item measures. Although multiple-item measures are preferable from a psychometric standpoint, in some circumstances single-item measures can provide useful information.
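The two core checks described here reduce to simple correlations: test-retest reliability is the correlation of the same single item with itself across a time lag, and convergent validity is its correlation with the established multiple-item scale. A minimal sketch with simulated data (the variable names are illustrative placeholders, not the study's actual measures):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200

# Hypothetical data: a multi-item parent scale plus one single item
# drawn from it, administered twice with a 1-month lag.
parent_scale = rng.normal(3.5, 0.8, n)          # multi-item scale score
item_t1 = parent_scale + rng.normal(0, 0.5, n)  # single item, time 1
item_t2 = item_t1 + rng.normal(0, 0.4, n)       # same item, time 2

# Test-retest reliability: correlation of the item with itself over the lag.
test_retest = np.corrcoef(item_t1, item_t2)[0, 1]

# Convergent validity: correlation of the single item with the parent scale.
convergent = np.corrcoef(item_t1, parent_scale)[0, 1]

print(f"test-retest r = {test_retest:.2f}")
print(f"convergent r  = {convergent:.2f}")
```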
Journal of Management | 2009
Alyssa Mitchell Gibbons; Deborah E. Rupp
This article presents a historical review of how inconsistency in assessment center ratings has been regarded among AC researchers and practitioners, then compares these perspectives to views of inconsistency found in personality psychology. Based on this review, the authors argue for a return to the study of consistency as an individual difference, rather than as simple measurement error. They offer four propositions regarding the inconsistency observed in AC performance, arguing that such inconsistency presents a unique opportunity to identify individuals’ patterns of skill proficiency. Finally, they discuss ways in which differences in consistency are likely to relate to organizational interests, including implications for selection and development.
Journal of Management | 2015
Deborah E. Rupp; Brian J. Hoffman; David Bischof; William Byham; Lynn Collins; Alyssa Mitchell Gibbons; Shinichi Hirose; Martin Kleinmann; Martin Lanik; Duncan J. R. Jackson; M. S. Kim; Filip Lievens; Deon Meiring; Klaus G. Melchers; Vina G. Pendit; Dan J. Putka; Nigel Povah; Doug Reynolds; Sandra Schlebusch; John Scott; Svetlana Simonenko; George C. Thornton
The article presents professional guidelines and ethical considerations concerning the assessment center method. The guidelines will be of use to human resource management specialists and industrial and organizational consultants. The social responsibility of businesses, their legal compliance, and ethics are also explored.
International Journal of Selection and Assessment | 2013
Svetlana Simonenko; George C. Thornton; Alyssa Mitchell Gibbons; Anna Kravtcova
Controversy has revolved around whether assessment center ratings have construct validity to measure intended dimensions of managerial performance. In contrast to much recent research on the internal structure of assessment center ratings, the present studies investigated the relationship of final competency ratings derived by consensus discussion with external questionnaire measures of personality characteristics. Expanding on previous studies showing correlations of dimension scores in relation to individual trait measures, this study investigated the relationship of complex competencies with both single personality traits and with composites of personality traits. Evidence from two samples of managers in Russia shows that final competency ratings are related to predicted composites of personality factors more consistently than to single factors. Taken together, these findings provide evidence that assessment center ratings derived by consensus discussion show construct validity in relationship with predicted composites of personality characteristics.
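The composite logic here is straightforward: rather than correlating a competency rating with each personality trait alone, the hypothesized traits are standardized and averaged into a unit-weighted composite first. A hedged sketch of that comparison with simulated scores (the trait names are illustrative, not the instrument used in the study):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 150

# Simulated personality factors hypothesized to underlie one competency.
traits = {name: rng.normal(0, 1, n) for name in
          ("extraversion", "emotional_stability", "openness")}

# Simulated final competency rating driven weakly by all three traits.
competency = sum(traits.values()) * 0.3 + rng.normal(0, 1, n)

def zscore(x):
    return (x - x.mean()) / x.std()

# Unit-weighted composite of the standardized predicted traits.
composite = np.mean([zscore(t) for t in traits.values()], axis=0)

for name, t in traits.items():
    r = np.corrcoef(t, competency)[0, 1]
    print(f"single trait {name:20s} r = {r:.2f}")
print(f"trait composite {'':17s} r = "
      f"{np.corrcoef(composite, competency)[0, 1]:.2f}")
```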
The Psychologist-Manager Journal | 2011
Martin Lanik; Alyssa Mitchell Gibbons
The globalization and internationalization of the assessment center (AC) method presents numerous challenges for research and practice. Culture affects every aspect of assessment, and in a multicultural context this creates tremendous potential for bias, miscommunication, misunderstanding, and inconsistency. The authors review and synthesize the general cross-cultural literature with respect to a critical issue in AC development: assessor training. On the basis of this research, they propose seven broad guidelines for AC developers to consider when planning assessor training in a multicultural context. As far as possible, the authors offer specific examples of training approaches, materials, and resources to facilitate these processes. Their goal is to offer a useful overview for international AC practitioners and to encourage future research in this area. “In a world joined together by nails, a hammer is a more useful tool than a wrench. In a world held together by nuts and bolts, a wrench is a more useful...
Journal of Organizational Effectiveness: People and Performance | 2014
Benjamin R. Kaufman; Konstantin P. Cigularov; Peter Y. Chen; Krista Hoffmeister; Alyssa Mitchell Gibbons; Stefanie K. Johnson
Purpose – The purpose of this paper is to examine the main and interactive effects of general and safety-specific leader justice (SSLJ) (i.e., fair treatment) and leader support for safety (LSS) on safety performance. Design/methodology/approach – Two independent samples of construction workers rated their leaders with regard to fair treatment and support for safety and reported their own safety performance in a survey. Findings – In both studies, LSS significantly moderated the relationships of both general leader justice and SSLJ with safety performance. In Study 1, the relationship between general leader justice and safety performance strengthened as LSS increased. A similar pattern was found for the relationship between SSLJ and safety performance in Study 2. Practical implications – Safety interventions targeting leadership should consider training leaders in safety practices that are perceived as supportive and fair. Originality/value – The research is unique in its examination of leader justice in a safety-...
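The interactive effect reported here is the standard moderated-regression setup: safety performance is regressed on leader justice, LSS, and their product term, and the justice slope is then probed at low and high levels of the moderator. A minimal sketch of that model with simulated data (variable names and coefficients are placeholders, not the study's estimates):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 300

justice = rng.normal(0, 1, n)   # leader justice (mean-centered)
lss = rng.normal(0, 1, n)       # leader support for safety (mean-centered)

# Simulate the moderation: the justice slope grows as LSS increases.
safety_perf = (0.2 * justice + 0.3 * lss + 0.25 * justice * lss
               + rng.normal(0, 1, n))

# Design matrix: intercept, main effects, and the product (moderation) term.
X = np.column_stack([np.ones(n), justice, lss, justice * lss])
beta, *_ = np.linalg.lstsq(X, safety_perf, rcond=None)

for name, b in zip(["intercept", "justice", "LSS", "justice x LSS"], beta):
    print(f"{name:14s} b = {b:.2f}")

# Simple-slopes probe: justice effect at low vs. high LSS (+/- 1 SD).
for level in (-1, 1):
    print(f"justice slope at LSS = {level:+d} SD: "
          f"{beta[1] + beta[3] * level:.2f}")
```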
Journal of Leadership & Organizational Studies | 2018
Stefanie K. Johnson; Stefanie E. Putter; Rebecca J. Reichard; Krista Hoffmeister; Konstantin P. Cigularov; Alyssa Mitchell Gibbons; Peter Y. Chen; John Rosecrance
Leader efficacy is a key outcome of leader development, but little is known about whether and when developmental leader experiences, such as engaging in training, relate to gains in leader efficacy. We present a theoretical model of the effects of mastery goal orientation and performance during development as determinants of leader efficacy. We argue that a mastery goal orientation, whether dispositional (Studies 1, 2) or situationally induced (Study 2), can increase performance and mitigate the deleterious effects of poor performance, resulting in higher leader efficacy. Two field studies of individuals taking leader development courses largely supported these predictions. Study 1 showed that individuals with a high dispositional mastery goal orientation (dMGO) exerted more effort over time, performed better, and had higher leader efficacy than low dMGO individuals. The benefits of dMGO increased over the 4-week leader development course. Study 2 showed that a mastery goal intervention reduced the effects of low dMGO on leader efficacy.
The Psychologist-Manager Journal | 2017
Diana R. Sanchez; Saar Van Lysebetten; Alyssa Mitchell Gibbons
Workplace simulations, often used to assess or train employees, historically rely on human raters who use judgment to evaluate and score the behavior they observe (judgment-based scoring). Such judgments are often complex and holistic, raising concerns about their reliability and susceptibility to bias. Human raters are also resource-intensive; thus, organizations are interested in strategies for reducing the role of human judgment in simulations. For example, using a checklist of discrete, clearly observable behaviors with predefined point values (analytic scoring) might be expected to simplify the rating process and produce more consistent scores. With the use of good text- or voice-recognition software, such a checklist might even be amenable to automation, eliminating the need for human raters altogether. Although the possibility of such potential benefits may appeal to organizations, it is unclear how changing the scoring method in this way may affect the meaning of scores. The authors developed a framework for converting judgment-based scores to analytic scores, drawing on the automated scoring and qualitative content analysis literatures, and applied this framework to the original constructed responses of 84 managers in a workplace simulation. The responses were adapted into discrete behaviors and scored analytically. Results indicated that responses could be adequately summarized using a reasonable number of discrete behaviors, and that analytic scores converged significantly but not strongly with the original judgment-based scores from human raters. The authors discuss implications for future research and provide recommendations for practitioners considering automated scores in workplace simulations.
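The analytic-scoring idea is essentially a weighted checklist: each discrete, observable behavior carries a predefined point value, and a response's score is the sum of the behaviors it contains. A toy sketch of that conversion (the checklist entries and keyword cues are illustrative assumptions, not the authors' framework; a production system would use trained raters or NLP rather than substring matching):

```python
# Hypothetical checklist: observable behavior -> predefined point value.
CHECKLIST = {
    "asks clarifying question": 1,
    "acknowledges employee concern": 2,
    "proposes concrete next step": 3,
}

# Naive keyword cues for detecting each behavior in a text response.
CUES = {
    "asks clarifying question": ["could you clarify", "what do you mean"],
    "acknowledges employee concern": ["i understand", "i hear you"],
    "proposes concrete next step": ["let's schedule", "i will follow up"],
}

def analytic_score(response: str) -> int:
    """Sum the point values of checklist behaviors found in the response."""
    text = response.lower()
    return sum(points for behavior, points in CHECKLIST.items()
               if any(cue in text for cue in CUES[behavior]))

response = ("I understand why that deadline worries you. "
            "I will follow up with the client tomorrow.")
print(analytic_score(response))  # 2 + 3 = 5
```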
Human Performance | 2016
Adam J. Vanhove; Alyssa Mitchell Gibbons; Uma Kedharnath
Error in performance ratings is typically believed to be due to the cognitive complexity of the rating task. Distributional assessment (DA) is proposed to improve rater accuracy by reducing cognitive load. In two laboratory studies, raters reported perceptions of cognitive effort and difficulty while assessing rating targets using DA or the traditional assessment approach. Across both studies, DA raters showed greater interrater agreement, and Study 2 findings provide some support for DA being associated with greater true score rating accuracy. However, DA raters also reported experiencing greater cognitive load during the rating task, and cognitive load did not mediate the relationship between rating format and rater accuracy. These findings have important implications regarding our understanding of cognitive load in the rating process.
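Interrater agreement of the kind compared here can be indexed with the r_wg statistic: one minus the ratio of the observed within-target rating variance to the variance expected under a uniform (no-agreement) null distribution. A small sketch under that assumption (the ratings are fabricated for illustration, not the studies' data):

```python
import numpy as np

def rwg(ratings, n_options):
    """r_wg index: 1 - observed variance / uniform-null variance."""
    ratings = np.asarray(ratings, dtype=float)
    expected_var = (n_options ** 2 - 1) / 12.0  # variance of a uniform null
    return 1.0 - ratings.var(ddof=1) / expected_var

# Hypothetical ratings of one target on a 5-point scale.
traditional_raters = [2, 4, 3, 5, 2, 4]   # more spread, lower agreement
da_raters = [3, 3, 4, 3, 3, 4]            # tighter, higher agreement

print(f"traditional r_wg = {rwg(traditional_raters, 5):.2f}")
print(f"DA r_wg          = {rwg(da_raters, 5):.2f}")
```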