
Publication


Featured research published by Eric D. Heggestad.


Psychological Bulletin | 1997

Intelligence, personality, and interests: Evidence for overlapping traits.

Phillip L. Ackerman; Eric D. Heggestad

The authors review the development of the modern paradigm for intelligence assessment and application and consider the differentiation between intelligence-as-maximal performance and intelligence-as-typical performance. They review theories of intelligence, personality, and interests as a means to establish potential overlap. Consideration of intelligence-as-typical performance provides a basis for evaluation of intelligence-personality and intelligence-interest relations. Evaluation of relations among personality constructs, vocational interests, and intellectual abilities provides evidence for communality across the domains of personality and J. L. Holland's (1959) model of vocational interests. The authors provide an extensive meta-analysis of personality-intellectual ability correlations and a review of interest-intellectual ability associations. They identify 4 trait complexes: social, clerical/conventional, science/math, and intellectual/cultural.


Journal of Applied Psychology | 2003

Faking and Selection: Considering the Use of Personality From Select-In and Select-Out Perspectives

Rose A. Mueller-Hanson; Eric D. Heggestad; George C. Thornton

The effects of faking on criterion-related validity and the quality of selection decisions are examined in the present study by combining the control of an experiment with the realism of an applicant setting. Participants completed an achievement motivation measure in either a control group or an incentive group and then completed a performance task. With respect to validity, greater prediction error was found in the incentive condition among those with scores at the high end of the predictor distribution. When selection ratios were small, those in the incentive condition were more likely to be selected and had lower mean performance than those in the control group. Implications for using personality assessments from select-in and select-out strategies are discussed.


Journal of Applied Psychology | 2006

Forced-choice assessments of personality for selection: Evaluating issues of normative assessment and faking resistance.

Eric D. Heggestad; Morgan Morrison; Charlie L. Reeve; Rodney A. McCloy

Recent research suggests multidimensional forced-choice (MFC) response formats may provide resistance to purposeful response distortion on personality assessments. It remains unclear, however, whether these formats provide normative trait information required for selection contexts. The current research evaluated score correspondences between an MFC format measure and 2 Likert-type measures in honest and instructed-faking conditions. In honest response conditions, scores from the MFC measure appeared valid indicators of normative trait standing. Under faking conditions, the MFC measure showed less score inflation than the Likert measure at the group level of analysis. In the individual-level analyses, however, the MFC measure was as affected by faking as was the Likert measure. Results suggest the MFC format is not a viable method to control faking.
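The distinction between normative and ipsative information at stake here can be illustrated with a toy sketch (trait names and numbers are hypothetical, not from the study): two respondents with very different absolute trait levels produce the same forced-choice ranking, so rank-based MFC scores alone cannot recover normative standing.

```python
# Hypothetical absolute (normative) trait levels on a 1-10 scale.
high = {"conscientiousness": 9, "extraversion": 8, "agreeableness": 7}
low = {"conscientiousness": 4, "extraversion": 3, "agreeableness": 2}

def mfc_response(traits):
    """Simulate an MFC item: rank trait statements from most- to least-like-me."""
    return sorted(traits, key=traits.get, reverse=True)

print(mfc_response(high))
print(mfc_response(low))  # identical ranking despite much lower absolute levels
```

Because only the within-person ordering is recorded, the two response patterns are indistinguishable; recovering between-person (normative) differences requires additional modeling, as in the McCloy, Heggestad, and Reeve (2005) article listed below.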


Learning and Individual Differences | 1996

Motivational skills & self-regulation for learning: A trait perspective

Ruth Kanfer; Phillip L. Ackerman; Eric D. Heggestad

We report a series of investigations that focus on the nature of motivational skills and self-regulation for learning as traits, in contrast to consideration of self-regulation as resulting from particular interventions. In this context, we consider how self-report measures of motivational and self-regulation skills relate to other traits, such as ability, personality, interests, academic self-concept, and self-ratings of abilities. In addition, we discuss how such trait measures are associated with task-specific self-efficacy across tasks of varying complexity, from simple information processing to complex air traffic controller tasks. Self-regulatory and motivational skills show substantial overlap with other trait measures, as do measures of learning strategies. Motivational and domain-specific self-concepts, along with trait anxiety, appear to be strongly related to task-specific self-efficacy.


Journal of Experimental Psychology: Applied | 2005

The predictive validity of self-efficacy in training performance: little more than past performance.

Eric D. Heggestad; Ruth Kanfer

Past research on the influence of self-efficacy in training has provided mixed results. Key differences between studies pertain to whether past performance is operationalized as a residual variable or as an unadjusted variable and to the type of task used. In this study, the authors conducted a reanalysis to examine the influence of self-efficacy using both operationalizations of past performance in 2 experimental tasks. Results indicate that, regardless of task version or type, self-efficacy predicted performance only when a residual measure of past performance was used, not when past performance was unadjusted. Moreover, the findings obtained with the residual measure were likely a statistical artifact. These results suggest that self-efficacy is a consequence rather than a cause of performance in training.
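The residual-versus-unadjusted distinction can be made concrete with a simulation (all numbers and effect sizes here are hypothetical, not the study's data). When performance is driven by past performance and self-efficacy merely overlaps with past performance, controlling for raw past performance leaves self-efficacy with essentially no predictive power, while residualizing past performance on self-efficacy first can manufacture a sizable apparent self-efficacy effect:

```python
import numpy as np

# Illustrative simulated data: performance depends on past performance;
# self-efficacy overlaps with past performance but adds nothing.
rng = np.random.default_rng(0)
n = 500
past_perf = rng.normal(size=n)
self_eff = 0.6 * past_perf + rng.normal(scale=0.8, size=n)
perf = 0.7 * past_perf + rng.normal(scale=0.7, size=n)

def residualize(y, x):
    """OLS residuals: the part of y not linearly predictable from x."""
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return y - X @ beta

def partial_r_self_eff(past):
    """Correlation of self-efficacy with performance, controlling for `past`."""
    return np.corrcoef(residualize(self_eff, past), residualize(perf, past))[0, 1]

r_unadjusted = partial_r_self_eff(past_perf)  # control raw past performance
r_residual = partial_r_self_eff(residualize(past_perf, self_eff))  # control residual
print(round(r_unadjusted, 2), round(r_residual, 2))
```

In this setup the first correlation is near zero while the second is substantial, purely because residualizing past performance reassigns their shared variance to self-efficacy; this is the kind of statistical artifact the abstract describes.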


International Journal of Educational Research | 2000

Individual Differences in Trait Motivation: Development of the Motivational Trait Questionnaire.

Eric D. Heggestad; Ruth Kanfer

The development and initial evaluation of a measure of motivational traits, the Motivational Trait Questionnaire (MTQ), is described. Based upon theorizing by Kanfer and Heggestad (in B. M. Staw & L. L. Cummings (Eds.), Research in Organizational Behavior, Vol. 19, pp. 1-56, Greenwich, CT: JAI Press), development of the MTQ began by identifying and defining five motivational traits. Item pools were generated for each of the proposed traits, and initial facets were developed through a content-sorting procedure. Two studies were conducted to evaluate the MTQ at the item, facet, and scale levels. In Study 1, the facet scales were refined based on item-level factor analyses and item characteristics. An exploratory factor analysis of the refined MTQ facets provided support for three of the proposed traits. In Study 2, the facets were re-evaluated at the item level. The factor structure of the MTQ facets was similar to that found in Study 1. An extension analysis from the three trait factors to extant measures of achievement, test and trait anxiety, and personality provided construct validity evidence for the MTQ scales. Results from these studies support the multidimensional structure of motivational traits proposed by Kanfer and Heggestad. Implications for motivation research in education are discussed.


Journal of Applied Psychology | 2007

An examination of psychometric bias due to retesting on cognitive ability tests in selection settings

Filip Lievens; Charlie L. Reeve; Eric D. Heggestad

Using a latent variable approach, the authors examined whether retesting on a cognitive ability measure resulted in measurement and predictive bias. A sample of 941 candidates completed a cognitive ability test in a high-stakes context. Results of both the within-group between-occasions comparison and the between-groups within-occasion comparison indicated that no measurement bias existed during the initial testing but that retesting induced both measurement and predictive bias. Specifically, the results suggest that the factor underlying the retest scores was less saturated with g and more associated with memory than the latent factor underlying initial test scores and that these changes eliminated the test's criterion-related validity. The study's implications for retesting theory, practice, and research are discussed.


Human Factors | 2005

Interruption management: the use of attention-directing tactile cues.

Pamela J. Hopp; C. A. P. Smith; Benjamin A. Clegg; Eric D. Heggestad

Previous research has suggested that providing informative cues about interrupting stimuli aids management of multiple tasks. However, auditory and visual cues can be ineffective in certain situations. The objective of the present study was to explore whether attention-directing tactile cues aid or interfere with performance. A two-group posttest-only randomized experiment was conducted. Sixty-one participants completed a 30-min performance session consisting of aircraft-monitoring and gauge-reading computer tasks. Tactile signals were administered to a treatment group to indicate the arrival and location of interrupting tasks. Control participants had to remember to visually check for the interrupting tasks. Participants in the treatment group responded to more interrupting tasks and responded faster than did control participants. Groups did not differ on error rates for the interrupting tasks, performance of the primary task, or subjective workload perceptions. In the context of the tasks used in the present research, tactile cues allowed participants to effectively direct attention where needed without disrupting ongoing information processing. Tactile cues should be explored in a variety of other visual, interrupt-laden environments. Potential applications exist for aviation, user-interface design, vigilance tasks, and team environments.


Organizational Research Methods | 2005

A Silk Purse From the Sow's Ear: Retrieving Normative Information From Multidimensional Forced-Choice Items.

Rodney A. McCloy; Eric D. Heggestad; Charlie L. Reeve

This article presents a psychometric approach for extracting normative information from multidimensional forced-choice (MFC) formats while retaining the method's faking-resistant property. The approach draws on concepts from Coombs's unfolding models and modern item response theory to develop a theoretical model of the judgment process used to answer MFC items, which is then used to develop a scoring system that provides estimates of normative trait standings.


International Journal of Selection and Assessment | 2006

Incremental Validity of Assessment Center Ratings over Cognitive Ability Tests: A Study at the Executive Management Level

Diana E. Krause; Martin Kersting; Eric D. Heggestad; George C. Thornton

Both tests of cognitive ability and assessment center (AC) ratings of various performance attributes have proven useful in personnel selection and promotion contexts. To be of theoretical or practical value, however, the AC method must show incremental predictive accuracy over cognitive ability tests, given the cost disparities between the two predictors. In the present study, we investigated this issue in the context of promotion of managers in German police departments into a training academy for high-level executive positions. Candidates completed a set of cognitive ability tests and a 2-day AC. The criterion measure was the final grade at the police academy. Results indicated that AC ratings of managerial abilities were important predictors of training success, even after accounting for cognitive ability test scores. These results confirm that AC ratings provide a unique contribution to the understanding and prediction of training performance for high-level executive positions beyond cognitive ability tests.
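The incremental-validity logic in studies like this one is typically a two-step hierarchical regression: fit the criterion on cognitive ability alone, then add the AC ratings and compare R-squared. A minimal sketch with simulated, purely illustrative data (the effect sizes are assumptions, not the study's results):

```python
import numpy as np

# Illustrative simulated data: academy grade depends on cognitive ability
# and on AC ratings that partly overlap with it.
rng = np.random.default_rng(1)
n = 300
cog = rng.normal(size=n)                         # cognitive ability test score
ac = 0.4 * cog + rng.normal(scale=0.9, size=n)   # assessment center rating
grade = 0.5 * cog + 0.3 * ac + rng.normal(scale=0.8, size=n)

def r_squared(y, *predictors):
    """In-sample R^2 from an OLS fit with an intercept."""
    X = np.column_stack([np.ones(len(y)), *predictors])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return 1 - np.var(y - X @ beta) / np.var(y)

r2_cog = r_squared(grade, cog)       # Step 1: cognitive ability alone
r2_full = r_squared(grade, cog, ac)  # Step 2: add AC ratings
print(round(r2_full - r2_cog, 3))    # incremental R^2 of the AC ratings
```

A positive increment at Step 2 is what "incremental predictive accuracy over cognitive ability tests" means operationally; whether that increment justifies a 2-day AC is the cost question the abstract raises.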

Collaboration

Top co-authors of Eric D. Heggestad:

Linda Rhoades Shanock, University of North Carolina at Charlotte
Charlie L. Reeve, University of North Carolina at Charlotte
Steven G. Rogelberg, University of North Carolina at Charlotte
Ruth Kanfer, Georgia Institute of Technology
Alexandra M. Dunn, University of North Carolina at Charlotte
Benjamin E. Baran, Northern Kentucky University
C. A. P. Smith, Colorado State University