
Publications

Featured research published by Florian Jentsch.


Proceedings of the Human Factors and Ergonomics Society Annual Meeting | 2006

Understanding performance and cognitive efficiency when training for x-ray security screening

Stephen M. Fiore; Sandro Scielzo; Florian Jentsch; Megan L. Howard

We describe an experiment designed to understand the X-ray security screener task via investigation of how training environment and content influence perceptual learning. We examined both perceptual discrimination and the presence/absence of clutter during training and how this impacted performance. Overall, the data show that performance was generally better when there were clutter items in the training images. We also examined the diagnosticity of a measure of cognitive efficiency, a combinatory metric that simultaneously considers test performance and workload. In terms of cognitive efficiency, participants who trained in the difficult discrimination with clutter present experienced lower workload during the test relative to their actual performance. The discussion centers on how improved analytical techniques are better able to diagnose the relative effectiveness of training interventions.
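The "cognitive efficiency" metric mentioned above is, in the cognitive-load literature, typically computed by standardizing performance and workload scores and taking their scaled difference (the Paas and van Merriënboer formulation). As an illustration only, since the paper's exact metric may differ, a minimal Python sketch:

```python
import statistics
from math import sqrt

def cognitive_efficiency(performance, workload):
    """Combine test performance and workload into one efficiency score
    per participant.

    Scores above zero mean performance was high relative to reported
    workload; below zero means workload was high relative to performance.
    Assumes the classic (z_performance - z_workload) / sqrt(2)
    formulation; this is a hypothetical reconstruction, not the
    authors' code.
    """
    mp, sp = statistics.mean(performance), statistics.stdev(performance)
    mw, sw = statistics.mean(workload), statistics.stdev(workload)
    return [((p - mp) / sp - (w - mw) / sw) / sqrt(2)
            for p, w in zip(performance, workload)]
```

A participant whose performance z-score exceeds their workload z-score (e.g., the difficult-discrimination-with-clutter group described above) would land on the positive, "efficient" side of this index.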


Proceedings of the Human Factors and Ergonomics Society Annual Meeting | 2009

The Influence of Team Size and Communication Modality on Team Effectiveness with Unmanned Systems

Thomas Fincannon; A. William Evans; Elizabeth Phillips; Florian Jentsch; Joseph R. Keebler

This study examines the effects of team size (2 versus 3 operators) and communication modality (audio versus text) on team performance. Performance and workload measures from 112 undergraduate students from the University of Central Florida were used in this analysis. Results indicated that performance was optimal for teams of three operators using audio systems for distributed communication. Results with the NASA TLX showed patterns where workload was lower in the audio condition. Results with the Multiple Resources Questionnaire (MRQ) showed a reversed trend with a higher score in the audio condition, which was attributed to increases in items associated with audio processing.


Proceedings of the Human Factors and Ergonomics Society Annual Meeting | 2008

Interactive Effects of Backup Behavior and Spatial Abilities in the Prediction of Teammate Workload Using Multiple Unmanned Vehicles

Thomas Fincannon; A. William Evans; Florian Jentsch; Joseph R. Keebler

This study examined the interactive effects of spatial ability and team process on operator workload while using multiple unmanned vehicles. The hypotheses also focused on how these effects might change when using different measures of spatial ability. To examine this, the Guilford-Zimmerman Spatial Visualization and Spatial Orientation scores of an unmanned aerial vehicle (UAV) operator, along with the navigation support this UAV operator provided to an unmanned ground vehicle (UGV) operator, were used as variables predicting UGV operator workload during a reconnaissance task. Results indicated that the interaction of the guider's spatial visualization with navigation support and the interaction of spatial orientation with navigation support not only accounted for unique variance in the prediction of the teammate's workload, but also produced qualitatively different patterns of results. In identifying these unique contributions, the importance of using multiple spatial ability measures in (unmanned vehicle) research is highlighted.


Proceedings of the Human Factors and Ergonomics Society Annual Meeting | 2001

Mental Model Assessments: Is There Convergence Among Different Methods?

A. William Evans; Florian Jentsch; James M. Hitt; Clint A. Bowers; Eduardo Salas

Knowledge elicitation and mental model assessment methods are becoming increasingly popular in applied psychology. However, there continue to be questions about the psychometrics of knowledge elicitation methods. Specifically, more needs to be known regarding the stability and consistency of the results over time (i.e., whether the methods are reliable) and regarding the degree to which the results correctly represent the underlying knowledge structures (i.e., whether the methods are valid). This paper focuses on the convergence among three different assessment methods: (a) pairwise relatedness ratings using Pathfinder, (b) concept mapping, and (c) card sorting. Thirty-six participants completed all three assessments using the same set of twenty driving-related terms. Assessment sequences were counterbalanced, and participants were randomly assigned to one of the six assessment sequences. It was found that the three assessment methods showed very low convergence as measured by the average correlation across the three methods within the same person. Indeed, convergence was lower than the sharedness across participants (as measured by the average correlation across participants within the same assessment method). Additionally, there were order effects among the different assessment sequences. Implications for research and practice are discussed.
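The convergence index described above (the average correlation across the three methods within the same person) can be illustrated with a short sketch. The function names and the flattened relatedness-vector representation are assumptions for illustration, not the authors' code:

```python
import statistics

def pearson(x, y):
    """Pearson correlation between two equal-length score vectors."""
    mx, my = statistics.mean(x), statistics.mean(y)
    sx, sy = statistics.stdev(x), statistics.stdev(y)
    n = len(x)
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / ((n - 1) * sx * sy)

def convergence(pathfinder, concept_map, card_sort):
    """Within-person convergence: mean pairwise correlation among one
    participant's relatedness vectors from the three assessment methods.

    Hypothetical reconstruction of the index described in the abstract;
    each argument is that participant's flattened pairwise-relatedness
    scores for the same set of terms under one method.
    """
    pairs = [(pathfinder, concept_map),
             (pathfinder, card_sort),
             (concept_map, card_sort)]
    return statistics.mean(pearson(x, y) for x, y in pairs)
```

The "sharedness" comparison in the abstract would be computed the same way, but correlating different participants' vectors within a single method rather than one participant's vectors across methods.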


Proceedings of the Human Factors and Ergonomics Society Annual Meeting | 2007

Effects of 2-Dimensional and 3-Dimensional Media Exposure Training in a Tank Recognition Task

Joseph R. Keebler; Michelle Harper-Sciarini; Michael T. Curtis; David Schuster; Florian Jentsch; Meredith Bell-Carroll

This investigation explores the differences between two types of military vehicle training: a current training method (2-dimensional, military-issued cards) and a novel method using 3-dimensional 1:35 scale models. Participant performance was tested in three areas: an identification task (can you name this vehicle?), a recognition task (have you seen this vehicle before?), and a friend/foe differentiation task. All three tasks were tested in both two dimensions (training cards) and three dimensions (1:35 scale models). The performance results on these tasks support the integration of 3D training.


Proceedings of the Human Factors and Ergonomics Society Annual Meeting | 2006

Familiarity and Expertise in the Recognition of Vehicles from an Unmanned Ground Vehicle

Thomas Fincannon; Michael T. Curtis; Florian Jentsch

The purpose of this study was to examine the role of familiarity and expertise in remote perception from unmanned ground vehicles (UGVs). Fifty-two volunteers, of whom 23 were Army ROTC cadets, participated. They were first asked to identify vehicles on a written test, and scores from the test were used to predict the amount of information reported from a video recording, captured from a UGV camera, in a scaled MOUT facility. ROTC cadets were compared with the general subject pool in order to explore differences between civilian and military vehicle recognition. Results from the written vehicle recognition test indicated that all participants were most familiar with civilian vehicles and that ROTC cadets were more familiar with military vehicles than the general population. Regression analyses revealed that both ROTC experience and vehicle familiarity were predictive of the amount of information correctly reported from the UGV camera video. We believe that training for expertise and motivation should be considered in future research.


Proceedings of the Human Factors and Ergonomics Society Annual Meeting | 2004

Scoring Concept Maps: Can a Practical Method of Scoring Concept Maps be Used to Assess Trainees' Knowledge Structures?

Michelle E. Harper; Raegan M. Hoeft; A. W. Evans; Florian Jentsch

Previous research has indicated that the structure of an individual's knowledge may be just as important as the quantity of knowledge. Given this, using assessment methods that elicit trainees' knowledge structures seems imperative for predicting performance. Unfortunately, incorporation of these methods has been hindered, in part, by the complexity of the methods used to derive a score and by the belief that simpler scoring methods will not provide accurate information about an individual's knowledge. Presented here is a study that investigated whether this claim was true for one structural knowledge elicitation method, concept mapping. Twenty-six participants were run through a same-day training and assessment session. Afterward, concept maps were scored using both a simple method and a complex method. Results indicated that both scoring methods produced significantly higher scores for the trained group and significantly lower scores for the untrained group. In addition, there was a very strong, positive relationship between the two scoring methods. Finally, both methods produced a moderately high correlation with the paper-and-pencil assessment. Results and implications are discussed further within the paper.


Archive | 2004

“A Frenchman, a German, and an Englishman …”: The Impact of Cultural Heterogeneity on Teams

Florian Jentsch; Raegan M. Hoeft; Stephen M. Fiore; Clint A. Bowers

Most traditional research on work groups has studied groups and teams that are homogeneous with respect to culture. To alleviate the dearth of material on culturally heterogeneous teams, this chapter provides an overview of the impact of cultural diversity on groups and teams in today’s workforce. First, we focus on the problems involved in defining the constructs of “teams” and “culture.” Second, we provide a brief review of the cultural factors that have been identified as affecting human performance. This review serves as the basis for the third section of this chapter, which investigates if – and how – cultural heterogeneity affects team performance. Finally, we conclude with how culturally diverse workplaces can be managed and how to improve performance when faced with cultural diversity.


Proceedings of the Human Factors and Ergonomics Society Annual Meeting | 2005

Demonstration: Advancing Robotics Research Through the Use of a Scale MOUT Facility

A. William Evans; Raegan M. Hoeft; Sherri A. Rehfeld; Moshe Feldman; Michael T. Curtis; Thomas Fincannon; Jessica Ottlinger; Florian Jentsch

This demonstration serves as an introduction to the CARAT scale MOUT (Military Operations in Urban Terrain) facility developed at the Team Performance Laboratory (TPL) at the University of Central Florida (UCF). Advances in automated military vehicles require research to understand how best to allocate control of these vehicles. Whether discussing uninhabited ground vehicles (UGVs) or air vehicles (UAVs), many questions remain about the optimum level of performance with respect to the ratio of human controllers to vehicles. The scale MOUT facility at UCF allows researchers to investigate these issues without sacrificing large, costly equipment and without requiring vast physical areas within which to test such equipment. This demonstration provides an introduction to the scale MOUT facility, describes the basic need for this tool, presents its advantages over full-size counterparts, and outlines several other possible uses for the facility.


Proceedings of the Human Factors and Ergonomics Society Annual Meeting | 2002

Computerized Card Sort Training Tool: Is it Comparable to Manual Card Sorting?

Michelle E. Harper; Florian Jentsch; Lori Rhodenizer Van Duyne; Kimberly A. Smith-Jentsch; Alicia D. Sanchez

One way to determine training needs and to evaluate learning is to measure how trainees organize knowledge using a card sorting task. While card sorting is a valid tool for assessing knowledge organization, it can be labor-intensive and error-prone when administered manually. For this reason, we developed a software tool that computerizes the card sort task. We present a study that was conducted to determine whether the computerized version of card sorting is comparable to the manual sort. One hundred eight participants completed two card sorts: either two manual sorts, one manual and one computerized sort, or two computerized sorts. No differences were found between the administration methods with respect to card sort accuracy, test-retest scores, and number of piles created. Differences between the two methods were found in administration time and length of the pile labels. These differences disappeared after one computerized administration. We conclude that a computerized card sorting task is just as effective at eliciting knowledge as a manual card sort.

Collaboration

Top co-authors of Florian Jentsch:

Thomas Fincannon (University of Central Florida)
David Schuster (University of Central Florida)
Michael T. Curtis (University of Central Florida)
Stephen M. Fiore (University of Central Florida)
Clint A. Bowers (University of Central Florida)
A. William Evans (University of Central Florida)
Raegan M. Hoeft (University of Central Florida)
Scott Ososky (University of Central Florida)
Eduardo Salas (University of Southern California)