Publication


Featured research published by Daniel Barber.


Human Factors | 2015

The Psychometrics of Mental Workload: Multiple Measures Are Sensitive but Divergent

Gerald Matthews; Lauren Reinerman-Jones; Daniel Barber; Julian Abich

Objective: A study was run to test the sensitivity of multiple workload indices to the differing cognitive demands of four military monitoring task scenarios and to investigate relationships between indices. Background: Various psychophysiological indices of mental workload exhibit sensitivity to task factors. However, the psychometric properties of multiple indices, including the extent to which they intercorrelate, have not been adequately investigated. Method: One hundred fifty participants performed in four task scenarios based on a simulation of unmanned ground vehicle operation. Scenarios required threat detection and/or change detection. Both single- and dual-task scenarios were used. Workload metrics for each scenario were derived from the electroencephalogram (EEG), electrocardiogram, transcranial Doppler sonography, functional near infrared, and eye tracking. Subjective workload was also assessed. Results: Several metrics showed sensitivity to the differing demands of the four scenarios. Eye fixation duration and the Task Load Index metric derived from EEG were diagnostic of single- versus dual-task performance. Several other metrics differentiated the two single tasks but were less effective in differentiating single- from dual-task performance. Psychometric analyses confirmed the reliability of individual metrics but failed to identify any general workload factor. An analysis of difference scores between low- and high-workload conditions suggested an effort factor defined by heart rate variability and frontal cortex oxygenation. Conclusions: General workload is not well defined psychometrically, although various individual metrics may satisfy conventional criteria for workload assessment. Application: Practitioners should exercise caution in using multiple metrics that may not correspond well, especially at the level of the individual operator.
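The psychometric question at the core of this study, whether multiple workload metrics share a common latent factor, can be illustrated with a small factor-analysis sketch. The snippet below is not the authors' analysis; the metric labels and synthetic data are assumptions used only to show the shape of the check.

```python
# Illustrative sketch (not the authors' code): checking whether several
# workload metrics load on a single "general workload" factor.
# The metric set and the synthetic data are assumptions for illustration.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
n_participants = 150

# Synthetic stand-ins for per-participant workload metrics
# (e.g., EEG task-load index, heart-rate variability, frontal oxygenation,
#  eye-fixation duration, subjective ratings).
metrics = rng.normal(size=(n_participants, 5))

fa = FactorAnalysis(n_components=2, random_state=0)
fa.fit(metrics)

# If a general workload factor existed, most metrics would show large
# loadings on the same factor; weak, scattered loadings are consistent
# with metrics tapping distinct response systems instead.
print(np.round(fa.components_, 2))
```

Weak or scattered loadings across metrics would be consistent with the paper's conclusion that no general workload factor emerges.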


Human Factors | 2016

Intelligent Agent Transparency in Human–Agent Teaming for Multi-UxV Management

Joseph E. Mercado; Michael A. Rupp; Jessie Y. C. Chen; Michael J. Barnes; Daniel Barber; Katelyn Procci

Objective: We investigated the effects of level of agent transparency on operator performance, trust, and workload in a context of human–agent teaming for multirobot management. Background: Participants played the role of a heterogeneous unmanned vehicle (UxV) operator and were instructed to complete various missions by giving orders to UxVs through a computer interface. An intelligent agent (IA) assisted the participant by recommending two plans—a top recommendation and a secondary recommendation—for every mission. Method: A within-subjects design with three levels of agent transparency was employed in the present experiment. There were eight missions in each of three experimental blocks, grouped by level of transparency. During each experimental block, the IA was incorrect three out of eight times due to external information (e.g., commander’s intent and intelligence). Operator performance, trust, workload, and usability data were collected. Results: Results indicate that operator performance, trust, and perceived usability increased as a function of transparency level. Subjective and objective workload data indicate that participants’ workload did not increase as a function of transparency. Furthermore, response time did not increase as a function of transparency. Conclusion: Unlike previous research, which showed that increased transparency resulted in increased performance and trust calibration at the cost of greater workload and longer response time, our results support the benefits of transparency for performance effectiveness without additional costs. Application: The current results will facilitate the implementation of IAs in military settings and will provide useful data to the design of heterogeneous UxV teams.
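As a rough illustration of how a within-subjects measure could be compared across the three transparency levels, the sketch below runs a generic repeated-measures test on synthetic data. It is not drawn from the paper; the statistical test, sample size, and numbers are assumptions.

```python
# Illustrative sketch (synthetic data): comparing one within-subjects measure
# (e.g., per-block task performance) across three agent-transparency levels.
import numpy as np
from scipy.stats import friedmanchisquare

rng = np.random.default_rng(1)
n = 30  # hypothetical number of operators

level1 = rng.normal(0.70, 0.05, n)  # performance under low transparency
level2 = rng.normal(0.75, 0.05, n)  # medium transparency
level3 = rng.normal(0.80, 0.05, n)  # high transparency

stat, p = friedmanchisquare(level1, level2, level3)
print(f"Friedman chi-square = {stat:.2f}, p = {p:.3f}")
```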


Proceedings of the Human Factors and Ergonomics Society Annual Meeting | 2011

Defining Next-Generation Multi-Modal Communication in Human Robot Interaction

Stephanie J. Lackey; Daniel Barber; Lauren Reinerman; Norman I. Badler; Irwin Hudson

With teleoperation being the contemporary standard for Human Robot Interaction (HRI), research into multi-modal communication (MMC) has focused on the development of advanced Operator Control Units (OCU) supporting control of one or more robots. However, with advances being made to improve the perception, intelligence, and mobility of robots, a need exists to revolutionize the ways in which Soldiers interact with robotic team members. Within this future vision, mixed-initiative Soldier-Robot (SR) teams will work collaboratively, sharing information back and forth in a fluid, natural manner using combinations of communication methods. Therefore, new definitions are required to focus research efforts supporting next-generation MMC. Drawing on a thorough survey of the literature and a scientific workshop on the topic, this paper operationally defines MMC, Explicit Communication, and Implicit Communication to encompass the shifting paradigm of HRI from a controller/controlled relationship to a cooperative teammate relationship. An illustrative scenario vignette provides context and specific examples of each communication type. Finally, future research efforts are summarized.


Proceedings of the Human Factors and Ergonomics Society Annual Meeting | 2013

Towards Modeling Social-Cognitive Mechanisms in Robots to Facilitate Human-Robot Teaming

Travis J. Wiltshire; Daniel Barber; Stephen M. Fiore

For effective human-robot teaming, robots must gain the appropriate social-cognitive mechanisms that allow them to function naturally and intuitively in social interactions with humans. However, there is a lack of consensus on social cognition broadly, and how to design such mechanisms for embodied robotic systems. To this end, recommendations are advanced that are drawn from HRI, psychology, robotics, neuroscience and philosophy as well as theories of embodied cognition, dual process theory, ecological psychology, and dynamical systems. These interdisciplinary and multi-theoretic recommendations are meant to serve as integrative and foundational guidelines for the design of robots with effective social-cognitive mechanisms.


Proceedings of the Human Factors and Ergonomics Society Annual Meeting | 2005

Anthropomorphism of Robotic Forms: A Response to Affordances?

Valerie K. Sims; Matthew G. Chin; David J. Sushil; Daniel Barber; Tatiana Ballion; Bryan Clark; Keith Garfield; Michael J. Dolezal; Randall Shumaker; Neal Finkelstein

Participants rated robotic forms on three scales: perceived aggression, intelligence, and animation. The robot bodies varied along five dimensions: Types of edges (beveled or squared), method of movement (wheels, legs, spider legs, or treads), number of movement generators (2 or 4), body position (upright or down), and presence of arms (present or absent). Across ratings, movement method and presence of arms were the strongest predictors of participant perceptions. Legs and arms, both human characteristics, were associated with more positive attributions. Minimal affective characteristics, as displayed by the body design, are important in user perceptions of use and ability.
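A minimal sketch of the kind of analysis implied here, regressing a perception rating on dummy-coded form dimensions to see which features predict it, is shown below. The column names, factor levels, and data are hypothetical; the study's actual coding and statistics may differ.

```python
# Illustrative sketch (assumed analysis, synthetic data): which robot-form
# dimensions predict a perception rating? Column names are hypothetical.
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(2)
n = 200
df = pd.DataFrame({
    "edges": rng.choice(["beveled", "squared"], n),
    "movement": rng.choice(["wheels", "legs", "spider_legs", "treads"], n),
    "arms": rng.choice(["present", "absent"], n),
    "intelligence_rating": rng.normal(4.0, 1.0, n),
})

# Dummy-code the categorical form dimensions and fit a linear model.
X = pd.get_dummies(df[["edges", "movement", "arms"]], drop_first=True)
model = LinearRegression().fit(X, df["intelligence_rating"])

# Larger absolute coefficients indicate stronger predictors of the rating.
for name, coef in zip(X.columns, model.coef_):
    print(f"{name:25s} {coef:+.2f}")
```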


Proceedings of the Human Factors and Ergonomics Society Annual Meeting | 2015

Field Assessment of Multimodal Communication for Dismounted Human-Robot Teams

Daniel Barber; Julian Abich; Elizabeth Phillips; Andrew B. Talone; Florian Jentsch; Susan G. Hill

A field assessment of multimodal communication (MMC) was conducted as part of a program integration demonstration to support and enable bi-directional communication between a dismounted Soldier and a robot teammate. In particular, the assessment was focused on utilizing auditory and visual/gesture based communications. The task involved commanding a robot using semantically-based MMC. Initial participant data indicates a positive experience with the multimodal interface (MMI) prototype. The results of the experiment inform recommendations for multimodal designers regarding perceived usability and functionality of the currently implemented MMI.


Human Factors | 2015

Toward a Tactile Language for Human–Robot Interaction: Two Studies of Tacton Learning and Performance

Daniel Barber; Lauren Reinerman-Jones; Gerald Matthews

Objective: Two experiments were performed to investigate the feasibility for robot-to-human communication of a tactile language using a lexicon of standardized tactons (tactile icons) within a sentence. Background: Improvements in autonomous systems technology and a growing demand within military operations are spurring interest in communication via vibrotactile displays. Tactile communication may become an important element of human–robot interaction (HRI), but it requires the development of messaging capabilities approaching the communication power of the speech and visual signals used in the military. Method: In Experiment 1 (N = 38), we trained participants to identify sets of directional, dynamic, and static tactons and tested performance and workload following training. In Experiment 2 (N = 76), we introduced an extended training procedure and tested participants’ ability to correctly identify two-tacton phrases. We also investigated the impact of multitasking on performance and workload. Individual difference factors were assessed. Results: Experiment 1 showed that participants found dynamic and static tactons difficult to learn, but the enhanced training procedure in Experiment 2 produced competency in performance for all tacton categories. Participants in the latter study also performed well on two-tacton phrases and when multitasking. However, some deficits in performance and elevation of workload were observed. Spatial ability predicted some aspects of performance in both studies. Conclusions: Participants may be trained to identify both single tactons and tacton phrases, demonstrating the feasibility of developing a tactile language for HRI. Application: Tactile communication may be incorporated into multi-modal communication systems for HRI. It also has potential for human–human communication in challenging environments.
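To make the idea of a tacton lexicon and two-tacton phrases concrete, here is a minimal sketch of one possible data structure. The tacton names, categories, and pulse patterns are illustrative assumptions, not the study's standardized set.

```python
# Minimal sketch, assuming a lexicon with directional, dynamic, and static
# tacton categories as described above. Names and patterns are hypothetical.
from dataclasses import dataclass

@dataclass(frozen=True)
class Tacton:
    name: str
    category: str   # "directional", "dynamic", or "static"
    pattern: tuple  # hypothetical (actuator_index, duration_ms) pulse sequence

LEXICON = {
    "move_forward": Tacton("move_forward", "directional", ((0, 200), (1, 200))),
    "rally":        Tacton("rally",        "dynamic",     ((2, 100), (2, 100))),
    "halt":         Tacton("halt",         "static",      ((3, 600),)),
}

def compose_phrase(*names: str) -> list:
    """Concatenate tactons into a phrase, e.g. a command followed by a direction."""
    return [LEXICON[n] for n in names]

phrase = compose_phrase("halt", "move_forward")
print([t.name for t in phrase])
```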


Proceedings of SPIE | 2014

Speech and gesture interfaces for squad-level human-robot teaming

Jonathan Harris; Daniel Barber

As the military increasingly adopts semi-autonomous unmanned systems for military operations, utilizing redundant and intuitive interfaces for communication between Soldiers and robots is vital to mission success. Currently, Soldiers use a common lexicon to verbally and visually communicate maneuvers between teammates. In order for robots to be seamlessly integrated within mixed-initiative teams, they must be able to understand this lexicon. Recent innovations in gaming platforms have led to advancements in speech and gesture recognition technologies, but the reliability of these technologies for enabling communication in human robot teaming is unclear. The purpose of the present study is to investigate the performance of Commercial-Off-The-Shelf (COTS) speech and gesture recognition tools in classifying a Squad Level Vocabulary (SLV) for a spatial navigation reconnaissance and surveillance task. The SLV for this study was based on findings from a survey conducted with Soldiers at Fort Benning, GA. The items of the survey focused on the communication between the Soldier and the robot, specifically in regard to verbally instructing the robot to execute reconnaissance and surveillance tasks. Resulting commands, identified from the survey, were then converted to equivalent arm and hand gestures, leveraging existing visual signals (e.g., U.S. Army Field Manual for Visual Signaling). A study was then run to test the ability of commercially available automated speech recognition technologies and a gesture recognition glove to classify these commands in a simulated intelligence, surveillance, and reconnaissance task. This paper presents the classification accuracy of these devices for both speech and gesture modalities independently.
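The per-modality scoring described in the last sentence can be sketched as follows, assuming hypothetical trial data; the command names and resulting accuracy values are placeholders, not results from the study.

```python
# Illustrative sketch (made-up trials): how often a recognizer's output
# matches the intended Squad Level Vocabulary command, scored separately
# for the speech and gesture modalities.
from collections import defaultdict

# (modality, intended command, recognized command)
trials = [
    ("speech",  "move_forward", "move_forward"),
    ("speech",  "halt",         "halt"),
    ("speech",  "rally",        "move_forward"),
    ("gesture", "halt",         "halt"),
    ("gesture", "move_forward", "move_forward"),
    ("gesture", "rally",        "rally"),
]

correct = defaultdict(int)
total = defaultdict(int)
for modality, intended, recognized in trials:
    total[modality] += 1
    correct[modality] += int(intended == recognized)

for modality in total:
    print(f"{modality}: {correct[modality] / total[modality]:.0%} accuracy")
```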


Proceedings of SPIE | 2013

Visual and tactile interfaces for bi-directional human robot communication

Daniel Barber; Stephanie J. Lackey; Lauren Reinerman-Jones; Irwin Hudson

Seamless integration of unmanned systems and Soldiers in the operational environment requires robust communication capabilities. Multi-Modal Communication (MMC) facilitates achieving this goal due to redundancy and levels of communication superior to single-mode interaction, using auditory, visual, and tactile modalities. Visual signaling using arm and hand gestures is a natural method of communication between people. Visual signals standardized within the U.S. Army Field Manual and in use by Soldiers provide a foundation for developing gestures for human to robot communication. Emerging technologies using Inertial Measurement Units (IMU) enable classification of arm and hand gestures for communication with a robot without the requirement of line-of-sight needed by computer vision techniques. These devices improve the robustness of interpreting gestures in noisy environments and are capable of classifying signals relevant to operational tasks. Closing the communication loop between Soldiers and robots necessitates that robots be able to return equivalent messages. Existing visual signals from robots to humans typically require highly anthropomorphic features not present on military vehicles. Tactile displays tap into an unused modality for robot to human communication. Typically used for hands-free navigation and cueing, existing tactile display technologies are used to deliver equivalent visual signals from the U.S. Army Field Manual. This paper describes ongoing research to collaboratively develop tactile communication methods with Soldiers, measure classification accuracy of visual signal interfaces, and provide an integration example including two robotic platforms.
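One way to picture the "equivalent message" idea, delivering a standard visual signal as a vibrotactile pattern, is the minimal mapping sketch below. The actuator layout, signal set, and pulse timings are assumptions for illustration only, not the project's actual encoding.

```python
# Sketch of the mapping idea only: translating a standard visual signal
# (e.g., from the U.S. Army Field Manual for Visual Signaling) into an
# equivalent vibrotactile pattern on a torso-worn display.
SIGNAL_TO_TACTON = {
    # signal name: ordered (actuator_index, duration_ms) pulses
    "halt":         [(0, 500)],
    "move_forward": [(0, 150), (1, 150), (2, 150)],            # front-to-back sweep
    "rally":        [(0, 100), (2, 100), (4, 100), (6, 100)],  # circular sweep
}

def render(signal: str) -> None:
    """Print the pulse sequence a tactile belt driver would play for a signal."""
    for actuator, duration in SIGNAL_TO_TACTON[signal]:
        print(f"pulse actuator {actuator} for {duration} ms")

render("move_forward")
```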


Proceedings of the Human Factors and Ergonomics Society Annual Meeting | 2014

Psychophysiological Metrics for Workload are Demand-Sensitive but Multifactorial

Lauren Reinerman-Jones; Gerald Matthews; Daniel Barber; Julian Abich

Various psychophysiological indices of mental workload exhibit sensitivity to task demand factors, but the psychometrics of indices has been neglected. In particular, the extent to which different metrics converge on a common latent factor is unclear. In the present study, 150 participants performed in four task scenarios based on a simulation of unmanned vehicle operation. Scenarios required threat detection and/or change detection. Both single- and dual-task scenarios were used. Workload metrics were derived from the electroencephalogram (EEG), electrocardiogram (ECG), transcranial Doppler sonography (TCD), functional Near Infra-Red (fNIR) and eyetracking. Subjective workload was also assessed. Several metrics were appropriately sensitive to the differing levels of task load presented by the four scenarios. However, factor analysis identified multiple factors, each of which was associated with a single response system only, with no general factor. Caution should be used in assessing workload in the individual operator.
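A small sketch of per-metric demand sensitivity, computed independently for each response system on synthetic data, is given below; the metric labels and effect sizes are illustrative, not values reported in the paper.

```python
# Illustrative sketch (synthetic data): gauging each metric's demand
# sensitivity as a standardized within-subject difference between a
# low-demand and a high-demand scenario, computed per metric.
import numpy as np

rng = np.random.default_rng(3)
n = 150
metrics = ["EEG", "ECG", "TCD", "fNIR", "eye_tracking"]

low  = {m: rng.normal(0.0, 1.0, n) for m in metrics}  # single-task scenario
high = {m: rng.normal(0.4, 1.0, n) for m in metrics}  # dual-task scenario

for m in metrics:
    diff = high[m] - low[m]
    d = diff.mean() / diff.std(ddof=1)  # within-subject effect size
    print(f"{m:12s} d = {d:+.2f}")
```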

Collaboration


Dive into Daniel Barber's collaborations.

Top Co-Authors

Julian Abich, University of Central Florida
Denise Nicholson, University of Central Florida
Stephanie J. Lackey, University of Central Florida
Gerald Matthews, University of Central Florida
Grace Teo, University of Central Florida
Jonathan Harris, University of Central Florida
Joseph Mercado, University of Central Florida
Larry Davis, University of Central Florida