Kimberly A. Smith-Jentsch
University of Central Florida
Publications
Featured research published by Kimberly A. Smith-Jentsch.
Psychological Science in the Public Interest | 2012
Eduardo Salas; Scott I. Tannenbaum; Kurt Kraiger; Kimberly A. Smith-Jentsch
Organizations in the United States alone spend billions on training each year. These training and development activities allow organizations to adapt, compete, excel, innovate, produce, be safe, improve service, and reach goals. Training has successfully been used to reduce errors in such high-risk settings as emergency rooms, aviation, and the military. However, training is also important in more conventional organizations. These organizations understand that training helps them to remain competitive by continually educating their workforce. They understand that investing in their employees yields greater results. However, training is not as intuitive as it may seem. There is a science of training that shows that there is a right way and a wrong way to design, deliver, and implement a training program. The research on training clearly shows two things: (a) training works, and (b) the way training is designed, delivered, and implemented matters. This article aims to explain why training is important and how to use training appropriately. Using the training literature as a guide, we explain what training is, why it is important, and provide recommendations for implementing a training program in an organization. In particular, we argue that training is a systematic process, and we explain what matters before, during, and after training. Steps to take at each of these three time periods are listed and described and are summarized in a checklist for ease of use. We conclude with a discussion of implications for both leaders and policymakers and an exploration of issues that may come up when deciding to implement a training program. Furthermore, we include key questions that executives and policymakers should ask about the design, delivery, or implementation of a training program. Finally, we consider future research that is important in this area, including some still unanswered questions and room for development in this evolving field.
Small Group Research | 2008
Kimberly A. Smith-Jentsch; Janis A. Cannon-Bowers; Scott I. Tannenbaum; Eduardo Salas
This research investigated the effects of guided team self-correction using an empirically derived expert model of teamwork as the organizing framework. First, the authors describe the process used to define this model. Second, they report findings from two studies in which the expert model was used to structure the process of guided team self-correction. Participants were U.S. Navy command and control teams (25 in Study 1, 13 in Study 2). Results indicated that teams debriefed using the expert model-driven guided team self-correction approach developed more accurate mental models of teamwork (Study 1) and demonstrated greater teamwork processes and more effective outcomes (Study 2) than did teams debriefed using a less participative and chronologically organized approach that is more typical for these teams.
Human Factors | 2010
Stephen M. Fiore; Michael A. Rosen; Kimberly A. Smith-Jentsch; Eduardo Salas; Michael Letsky; Norman Warner
Objective: This article presents a model for predicting complex collaborative processes as they arise in one-of-a-kind problem-solving situations to predict performance outcomes. The goal is to outline a set of key processes and their interrelationship and to describe how these can be used to predict collaboration processes embedded within problem-solving contexts. Background: Teams are increasingly called upon to address complex problem-solving tasks in novel situations. This represents a domain of performance that to date has been underrepresented in the research literature. Method: Multidisciplinary theoretical and empirical literature relating to knowledge work in teams is synthesized. Results: A set of propositions developed to guide research into how teams externalize cognition and build knowledge in service of problem solving is presented. First, a brief overview of macrocognition in teams is provided to distinguish the present work from other views of team cognition. Second, a description of the foundational theoretical concepts driving the theory of macrocognition in teams presented here is provided. Third, a set of propositions described within the context of a model of macrocognition in teams is forwarded. Conclusion: The theoretical framework described in this article provides a set of empirically testable propositions that can ultimately guide practitioners in efforts to support macrocognition in teams. Application: A theory of macrocognition in teams can provide guidance for the development of training interventions and the design of collaborative tools to facilitate knowledge-based performance in teams.
Journal of Applied Psychology | 1996
Kimberly A. Smith-Jentsch; Florian Jentsch; Stephanie C. Payne; Eduardo Salas
This study examined the effects of having experienced negative events related to the purpose of a training program on learning and retention. Participants were 32 private pilots who participated in an assertiveness-training study. The purpose of the training was to prevent aviation accidents caused by human error. Structured telephone interviews were conducted to determine whether participants had previously experienced 3 types of negative events related to the purpose of training. Results indicated a linear relationship between these negative events and assertive performance in a behavioral exercise 1 week after training. The same negative events, however, were not significantly related to the performance of untrained participants in the same behavioral exercise. It is suggested that previous experiences influenced posttraining performance by increasing motivation to learn.
Theoretical Issues in Ergonomics Science | 2010
Stephen M. Fiore; Kimberly A. Smith-Jentsch; Eduardo Salas; Norman Warner; Michael Letsky
One of the significant challenges for the burgeoning field of macrocognition is the development of more sophisticated models that are able to adequately explain and predict complex cognitive processes. This is even more critical when specifying research questions involving cognition unfolding across interacting individuals, that is, macrocognition in teams. In this article, we provide a foundation for developing a model of macrocognition focusing on collaborating problem-solving teams with a measurement framework for studying macrocognitive processes in this context. We first discuss an important set of key assumptions from team measurement theory that form a critical foundation for this model. We then describe the core definitions we suggest are foundational to the conceptualisation of macrocognition in teams. We conclude with a description of the key dimensions and subcomponents of our model in order to lay the foundation for a principled approach to measuring and understanding macrocognition in teams.
Human Performance | 2007
Kimberly A. Smith-Jentsch
The research presented here investigated the impact of making targeted dimensions transparent to participants, prior to their performance of a simulation exercise, on the level of dimension ratings and their correlations with typical performance predictors. Results from two studies, both employing between-subjects designs, showed that conceptually matched typical performance predictors were more positively associated with dimension ratings when targeted dimensions were not made transparent than when they were. In addition, only when targeted dimensions were not made transparent did conceptually matched typical performance predictors correlate more positively with dimension ratings than conceptually distinct typical performance predictors. Finally, those who were made aware of targeted dimensions received higher mean ratings in Study 1 but not in Study 2.
Archive | 2011
Jessica L. Wildman; Wendy L. Bedwell; Eduardo Salas; Kimberly A. Smith-Jentsch
For decades, one of the primary goals of organizational research has been the improvement and management of organizational performance. Inherent to the goal of improving performance is the concept of performance measurement (PM). PM is the mechanism that allows managers and researchers to gain an understanding of individual, team, and overall organizational performance. Without the ability to accurately measure a construct such as performance, it is impossible to truly understand, control, or improve it. As Sink and Tuttle (1989) asserted, one cannot manage what one cannot measure. Ultimately, the effective training and management of employees, teams, and organizations in any context is contingent on the quality of PM. Accordingly, much effort has been devoted over the past several decades to exploring theories, methods, and practices associated with PM (e.g., Bititci, Turner, & Begemann, 2000; Campbell, McCloy, Oppler, & Sager, 1993; Folan & Browne, 2005; Gershoni & Rudy, 1981; Kendall & Salas, 2004; Pun & White, 2005). The PM literature can generally be categorized into three distinct perspectives: individual-level PM, team-level PM, and organizational-level PM. Very little research has simultaneously examined multiple levels. This is problematic given that actual performance in organizations takes place at all three levels simultaneously, and perhaps more important, all three levels of performance are intertwined. Teams are becoming the predominant method for achieving organizational goals. These teams are made up of individual employees, who actually engage in behaviors that lead to performance. Thus, there is a need to integrate these three streams of PM research into one comprehensive understanding of PM and its implications. To address this need, this chapter presents a multilevel perspective on the field of PM. First, we discuss the criterion problem, which represents a broad issue underscoring the importance of PM.
Next, we briefly describe five critical considerations when choosing or designing any PM system. Then, after the core underlying issues are clear, we dive into PM as described from the individual, team, and organizational perspectives. This includes the general definition of performance, key theories, and common measurement strategies used in each stream of literature. Once each perspective is discussed separately, we discuss a multilevel approach to PM. The chapter concludes with a review of current trends requiring future research and some concluding remarks. (See also Vol. 2, chap. 9, this handbook.)
Proceedings of the Human Factors and Ergonomics Society 54th Annual Meeting (HFES 2010) | 2010
Aaron S. Dietz; Sallie J. Weaver; Mary Jane Sierra; Wendy L. Bedwell; Eduardo Salas; Stephen M. Fiore; Kimberly A. Smith-Jentsch; James E. Driskell
Long-duration space flight demands prolonged exposure to a myriad of stressors that manifest and interact over time. Despite a significant body of work dedicated to identifying, mitigating, and managing the effects of stress on performance, a clear theoretical foundation explicating how interactions among stressors occur, as well as how and when stress becomes chronic, remains lacking. Additionally, it is not yet well understood how such temporal and interactive effects impact performance at multiple levels of analysis, including both individual and team performance. The current paper presents an innovative theoretical approach for unpacking these complex relationships, forming a foundation for understanding their impact on dynamic episodes of individual and team performance.
Proceedings of the Human Factors and Ergonomics Society Annual Meeting | 2002
Michelle E. Harper; Florian Jentsch; Lori Rhodenizer Van Duyne; Kimberly A. Smith-Jentsch; Alicia D. Sanchez
One way to determine training needs and to evaluate learning is to measure how trainees organize knowledge using a card sorting task. While card sorting is a valid tool for assessing knowledge organization, it can be work-intensive and error-prone when administered manually. For this reason, we developed a software tool that computerizes the card sort task. We present a study that was conducted to determine whether the computerized version of card sorting is comparable to the manual sort. One hundred eight participants completed two card sorts: either two manual sorts, one manual and one computerized sort, or two computerized sorts. No differences were found between the administration methods with respect to card sort accuracy, test-retest scores, and number of piles created. Differences between the two methods were found in administration time and length of the pile labels. These differences disappeared after one computerized administration. We conclude that a computerized card sorting task is just as effective at eliciting knowledge as a manual card sort.
Proceedings of the Human Factors and Ergonomics Society Annual Meeting | 2005
Raegan M. Hoeft; Florian Jentsch; Kimberly A. Smith-Jentsch; Clint A. Bowers
Previous research has suggested that when high-performing teams are experiencing increased workload, they will adaptively shift from explicit to more implicit forms of coordination. This is thought to occur because the team members have shared mental models (SMMs) which allow them to anticipate one another's needs. However, it is currently not known how SMMs are related to implicit coordination. Much of the research on SMMs has focused on the actual level of sharedness and, to some degree, on the accuracy of each team member's model. However, to our knowledge, none has investigated the relationship between SMMs and implicit coordination. Furthermore, one line of research that has received very little attention is the notion of perceptions of sharedness. Must team members have an accurate perception of how well they share mental models in order to exploit them via implicit coordination? The purpose of this paper is to explore these fundamental questions that drive the process of implicit coordination.