
Publications

Featured research published by Gloria L. Calhoun.


Proceedings of SPIE, the International Society for Optical Engineering | 2005

Synthetic Vision System for Improving Unmanned Aerial Vehicle Operator Situation Awareness

Gloria L. Calhoun; Mark H. Draper; Mike Abernathy; Frank J. Delgado; Michael Patzek

The Air Force Research Laboratory's Human Effectiveness Directorate (AFRL/HE) supports research addressing human factors associated with Unmanned Aerial Vehicle (UAV) operator control stations. Recent research, in collaboration with Rapid Imaging Software, Inc., has focused on determining the value of combining synthetic vision data with live camera video presented on a UAV control station display. Information is constructed from databases (e.g., terrain, cultural features, pre-mission plan, etc.), as well as numerous information updates via networked communication with other sources (e.g., weather, intel). This information is overlaid conformally, in real time, onto the dynamic camera video image display presented to operators. Synthetic vision overlay technology is expected to improve operator situation awareness by highlighting key spatial information elements of interest directly onto the video image, such as threat locations, expected locations of targets, landmarks, emergency airfields, etc. Also, it may help maintain an operator's situation awareness during periods of video datalink degradation/dropout and when operating in conditions of poor visibility. Additionally, this technology may serve as an intuitive means of distributed communications between geographically separated users. This paper discusses the tailoring of synthetic overlay technology for several UAV applications. Pertinent human factors issues are detailed, as well as the usability, simulation, and flight test evaluations required to determine how best to combine synthetic visual data with live camera video presented on a ground control station display and validate that a synthetic vision system is beneficial for UAV applications.
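The core of a conformal overlay is projecting a georeferenced point (e.g., a threat location) into the pixel coordinates of the live camera frame so the symbol stays registered with the scene. A minimal pinhole-camera sketch of that projection step is shown below; the function name, focal length, and image-center values are illustrative assumptions, not the actual Rapid Imaging Software implementation.

```python
import numpy as np

def project_to_image(point_world, cam_pos, cam_R, f=800.0, cx=320.0, cy=240.0):
    """Project a georeferenced 3-D point into camera pixel coordinates
    using a pinhole model. cam_R is the 3x3 world-to-camera rotation,
    f the focal length in pixels, (cx, cy) the image center.
    Returns (u, v) pixel coordinates, or None if the point is behind
    the camera and should not be drawn on the overlay.
    """
    p_cam = cam_R @ (np.asarray(point_world, dtype=float)
                     - np.asarray(cam_pos, dtype=float))
    if p_cam[2] <= 0:   # behind the image plane: omit from overlay
        return None
    u = cx + f * p_cam[0] / p_cam[2]
    v = cy + f * p_cam[1] / p_cam[2]
    return (u, v)
```

A point directly on the camera's optical axis lands at the image center, e.g. `project_to_image((0, 0, 10), (0, 0, 0), np.eye(3))` returns `(320.0, 240.0)`; re-running this for each symbol on every video frame keeps the overlay conformal as the camera moves.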


Proceedings of the Human Factors and Ergonomics Society Annual Meeting | 2012

Adaptable and Adaptive Automation for Supervisory Control of Multiple Autonomous Vehicles

Brian Kidwell; Gloria L. Calhoun; Heath A. Ruff; Raja Parasuraman

Supervisory control of multiple autonomous vehicles raises many issues concerning the balance of system autonomy with human interaction for optimal operator situation awareness and system performance. An unmanned vehicle simulation designed to manipulate the application of automation was used to evaluate participants’ performance on image analysis tasks under two automation control schemes: adaptable (level of automation directly manipulated by participant throughout trials) and adaptive (level of automation adapted as a function of participants’ performance on four types of tasks). The results showed that while adaptable automation increased workload, it also improved change detection, as well as operator confidence in task-related decision-making.
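The adaptive scheme described above, where the level of automation (LOA) changes as a function of operator performance, can be sketched as a simple threshold rule over a sliding window of task scores. The class, LOA scale, and threshold values below are hypothetical illustrations of the concept, not the study's actual implementation.

```python
class AdaptiveAutomation:
    """Adjust level of automation (LOA) from recent operator performance.

    LOA 1 = fully manual ... LOA 4 = highly automated (hypothetical scale).
    Under the adaptable scheme, the operator would instead set `self.loa`
    directly; here the system adapts it from performance.
    """

    def __init__(self, min_loa=1, max_loa=4, window=5):
        self.loa = min_loa
        self.min_loa = min_loa
        self.max_loa = max_loa
        self.window = window
        self.scores = []  # most recent task scores, each in [0, 1]

    def record(self, score):
        """Log one task score and re-evaluate the automation level."""
        self.scores.append(score)
        self.scores = self.scores[-self.window:]
        self._adapt()

    def _adapt(self):
        avg = sum(self.scores) / len(self.scores)
        # Poor performance -> raise automation; sustained good performance
        # -> hand control back to the operator (thresholds illustrative).
        if avg < 0.5 and self.loa < self.max_loa:
            self.loa += 1
        elif avg > 0.8 and self.loa > self.min_loa:
            self.loa -= 1
```

Feeding a run of low scores (`0.3, 0.2, 0.4`) steps the LOA up toward fully automated; a run of high scores steps it back down, mirroring the performance-contingent manipulation in the adaptive condition.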


Proceedings Fourth Annual Symposium on Human Interaction with Complex Systems | 1998

Hands-free input devices for wearable computers

Gloria L. Calhoun; Grant R. McMillan

The advent of wearable computers marks a potential revolution in human-machine interaction and necessitates an expansion of control and display capabilities. Several emerging technologies can provide operators with a variety of new channels for interacting with wearable computers. Enabling technologies use signals from the brain, muscles, voice, lips, head position, eye position, and gestures for the control of computers. These hands-free, head-up controllers may be required to fully exploit the advantages of wearable computers. This paper describes several hands-free controllers that are candidate input devices, either individually or as part of a multimodal interface. Controller design, task-controller mapping and other application issues are also presented.


International Journal of Speech Technology | 2005

Commercial Speech Recognition Technology in the Military Domain: Results of Two Recent Research Efforts

David T. Williamson; Mark H. Draper; Gloria L. Calhoun; Timothy P. Barry

While speech recognition technology has long held the potential for improving the effectiveness of military operations, it has only been within the last several years that speech systems have enabled the realization of that potential. Commercial speech recognition technology developments aimed at improving robustness for automotive and cellular phone applications have capabilities that can be exploited in various military systems. This paper discusses the results of two research efforts directed toward applying commercial-off-the-shelf speech recognition technology in the military domain. The first effort discussed is the development and evaluation of a speech recognition interface to the Theater Air Planning system responsible for the generation of air tasking orders in a military Air Operations Center. The second effort examined the utility of speech versus conventional manual input for tasks performed by operators in an unmanned aerial vehicle control station simulator. Both efforts clearly demonstrate the military benefits obtainable from the proper application of speech technology.


Proceedings of the Human Factors and Ergonomics Society Annual Meeting | 2015

Video Game Experience and Gender as Predictors of Performance and Stress During Supervisory Control of Multiple Unmanned Aerial Vehicles

Jinchao Lin; Ryan Wohleber; Gerald Matthews; Peter Y. Chiu; Gloria L. Calhoun; Heath A. Ruff; Gregory J. Funke

To keep pace with increasing applications of Unmanned Aerial Vehicles (UAVs), recruitment of operators will need to be expanded to include groups not traditionally engaged in UAV pilot training. The present study may inform this process as it investigated the effects of video game experience and gender on performance of imaging and weapon release tasks in a simulated multi-UAV supervisory control station. Each of 101 participants completed a 60-minute experimental trial. Workload and Level of Automation (LOA) were manipulated. Video gaming expertise correlated with performance on a demanding surveillance task component. Video gamers also placed more trust in the automation in demanding conditions and exhibited higher subjective task engagement and lower distress and worry. Results may encourage recruitment of UAV operators from nontraditional populations. Gamers may have a particular aptitude, and with gaming experience controlled, women show no disadvantage relative to men.


Proceedings of the Human Factors and Ergonomics Society Annual Meeting | 2014

Human-Machine Interface Development for Common Airborne Sense and Avoid Program

Mark H. Draper; Jessica S. Pack; Sara J. Darrah; Sean N. Moulton; Gloria L. Calhoun

Unmanned aerial systems (UAS) are starting to access manned airspace today and this trend will grow substantially as the number of UAS and their associated missions expand. A key challenge to safely integrating UAS into the National Airspace System (NAS) is providing a reliable means for UAS to sense and avoid (SAA) other aircraft. The US Air Force is addressing this challenge through the Common Airborne Sense and Avoid (C-ABSAA) program. C-ABSAA is developing a sophisticated “sense-and-avoid” capability that will be integrated onboard larger UAS. This paper summarizes human factors activities associated with enabling this revolutionary capability. Existing knowledge was reviewed and crosschecked to formulate a first draft set of minimum information requirements for SAA tasks. A gap analysis spawned an intruder depiction study and an operator requirements survey. Finally, operator interface prototypes were designed to support: 1) a minimum information set for SAA, as well as 2) the availability of several advanced situation assessment and maneuver guidance aids. Through collaboration with NASA’s UAS in the NAS project, these concepts were incorporated into a UAS ground control station for formal evaluation through a high fidelity human-in-the-loop simulation.


Archive | 2017

Operator-Autonomy Teaming Interfaces to Support Multi-Unmanned Vehicle Missions

Gloria L. Calhoun; Heath A. Ruff; Kyle J. Behymer; Elizabeth M. Mersch

Advances in automation technology are leading to the development of operational concepts in which a single operator teams with multiple autonomous vehicles. This requires the design and evaluation of interfaces that support operator-autonomy collaborations. This paper describes interfaces designed to support a base defense mission performed by a human operator and heterogeneous unmanned vehicles. Flexible operator-autonomy teamwork is facilitated with interfaces that highlight the tradeoffs of autonomy-generated plans, support allocation of assets to tasks, and communicate mission progress. The interfaces include glyphs and video gaming type icons that present information in a concise, integrated manner and multi-modal controls that augment an adaptable architecture to enable seamless transition across control levels, from manual to fully autonomous. Examples of prototype displays and controls are provided, as well as usability data collected from multi-task simulation evaluations.


Proceedings of the Human Factors and Ergonomics Society Annual Meeting | 2016

Automation Reliability and Other Contextual Factors in Multi-UAV Operator Selection

Jinchao Lin; Gerald Matthews; Ryan Wohleber; C.-Y. Peter Chiu; Gloria L. Calhoun; Gregory J. Funke; Heath A. Ruff

Multi-unmanned air vehicle (UAV) operation requires a unique set of skills, and high demand for new operators requires selection from populations without previous flight training. To support developing criteria for multi-UAV operator selection, the present study investigated the role of multiple individual difference factors in performance under different multi-UAV specific contexts. Specifically, we compared performance under fatigue using a high- and a low-reliability automated aid. Accuracy on surveillance tasks, as well as reliance on automation, were assessed. Video gaming expertise was associated with reduced stress and less reliance on a low-reliability automated aid. Distress was the most robust predictor of performance accuracy, but high distress was harmful only when reliability was low. Personality correlates of performance varied with both automation reliability and gender. Our findings suggest that multi-UAV operator selection should take into account the reliability of the automated systems.


AIAA Infotech@Aerospace (I@A) Conference | 2013

Human-Computer Interface Concepts for Verifiable Mission Specification, Planning, and Management

Clayton Rothwell; Alexa Eggert; Michael Patzek; George Bearden; Gloria L. Calhoun; Laura R. Humphrey

Operators of unmanned aerial vehicles (UAVs) may soon be controlling multiple sophisticated autonomous systems in complex, dynamic mission contexts. A vital element for successful collaboration between human and autonomous agents is communication, which could be improved through the use of formal methods such as model checking. In model checking, the desired behavior of an autonomous system is specified using temporal logic, and a tool called a model checker is used to verify that the desired behaviors can be carried out by the system. Formal methods themselves, however, are challenging to learn and use but could be adapted to improve usability. This paper introduces a tool for writing formal specifications in a natural language representation, automatically translating them to temporal logic, and interfacing with model checking software. This tool is implemented within a UAV ground control station testbed and can be used for mission planning and monitoring. The tool and overall system are described and qualitatively compared to other tools designed to increase the usability of formal methods via translation of temporal logic.
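The natural-language-to-temporal-logic translation described above can be sketched as a small table of requirement templates mapped to LTL skeletons. The phrase patterns, operator choices, and propositions below are generic illustrations of the approach, not the tool's actual grammar or the C-ABSAA/ground-station interface.

```python
import re

# Map simple English requirement templates to LTL skeletons (illustrative).
# G = "globally" (always), F = "finally" (eventually), U = "until".
PATTERNS = [
    (r"always (.+)",     r"G (\1)"),       # invariant
    (r"eventually (.+)", r"F (\1)"),       # liveness
    (r"never (.+)",      r"G !(\1)"),      # safety
    (r"(.+) until (.+)", r"(\1) U (\2)"),  # ordering constraint
]

def to_ltl(requirement):
    """Translate a templated English requirement into an LTL formula
    suitable for handing to a model checker."""
    req = requirement.strip().lower()
    for pattern, template in PATTERNS:
        m = re.fullmatch(pattern, req)
        if m:
            return m.expand(template)
    raise ValueError(f"no pattern matches: {requirement!r}")
```

For example, `to_ltl("never enter_no_fly_zone")` yields `G !(enter_no_fly_zone)`; the resulting formula would then be passed to a model checker to verify the mission plan against the specification, which is the workflow the paper's tool automates.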


Theoretical Issues in Ergonomics Science | 2018

Human-autonomy teaming interface design considerations for multi-unmanned vehicle control

Gloria L. Calhoun; H.A. Ruff; K.J. Behymer; E.M. Frost

Future applications are envisioned in which a single human operator manages multiple heterogeneous unmanned vehicles (UVs) by working together with an autonomy teammate that consists of several intelligent decision-aiding agents/services. This article describes recent advancements in developing a new interface paradigm that will support human-autonomy teaming for air, ground, and surface (sea craft) UVs in defence of a military base. Several concise and integrated candidate control station interfaces are described by which the operator determines the role of autonomy in UV management using an adaptable automation control scheme. An extended play-calling-based control approach is used to support human-autonomy communication and teaming in managing how UV assets respond to potential threats (e.g. asset allocation, routing, and execution details). The design process for the interfaces is also described, including: analysis of a base defence scenario used to guide this effort, consideration of ecological interface design constructs, and generation of UV and task-related pictorial symbology.

Collaboration


Explore Gloria L. Calhoun's collaborations.

Top Co-Authors

Mark H. Draper, Air Force Research Laboratory
Gerald Matthews, University of Central Florida
Gregory J. Funke, Air Force Research Laboratory
Jessica Bartik, Air Force Research Laboratory
Jinchao Lin, University of Central Florida
Ryan Wohleber, University of Central Florida
Michael Patzek, Air Force Research Laboratory
Sarah Spriggs, Air Force Research Laboratory