Publications

Featured research published by Mark H. Draper.


Presence: Teleoperators and Virtual Environments | 2002

Human interaction with levels of automation and decision-aid fidelity in the supervisory control of multiple simulated unmanned air vehicles

Heath A. Ruff; S. Narayanan; Mark H. Draper

Remotely operated vehicles (ROVs) are vehicular robotic systems that are teleoperated by a geographically separated user. Advances in computing technology have enabled ROV operators to manage multiple ROVs by means of supervisory control techniques. The challenge of incorporating telepresence in any one vehicle is replaced by the need to keep the human in the loop of the activities of all vehicles. An evaluation was conducted to compare the effects of automation level and decision-aid fidelity on the number of simulated remotely operated vehicles that could be successfully controlled by a single operator during a target acquisition task. The specific ROVs instantiated for the study were unmanned air vehicles (UAVs). Levels of automation (LOAs) included manual control, management-by-consent, and management-by-exception. Levels of decision-aid fidelity (100% correct and 95% correct) were achieved by intentionally injecting error into the decision-aiding capabilities of the simulation. Additionally, the number of UAVs to be controlled varied (one, two, and four vehicles). Twelve participants acted as UAV operators. A mixed-subjects design was utilized (with decision-aid fidelity as the between-subjects factor), and participants were not informed of decision-aid fidelity prior to data collection. Dependent variables included mission efficiency, percentage correct detection of incorrect decision aids, workload and situation awareness ratings, and trust in automation ratings. Results indicate that an automation level incorporating management-by-consent had some clear performance advantages over the more autonomous (management-by-exception) and less autonomous (manual control) levels of automation. However, automation level interacted with the other factors for subjective measures of workload, situation awareness, and trust. Additionally, although a 3D perspective view of the mission scene was always available, it was used only during low-workload periods and did not appear to improve the operators' sense of presence. The implications for ROV interface design are discussed, and future research directions are proposed.
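
To make the study's two manipulations concrete, here is a minimal sketch in Python; the function and parameter names are invented for illustration, and this is not the study's actual simulation software. It shows error being injected into a decision aid at a fixed fidelity, and how the three levels of automation differ in when a recommendation is executed.

```python
import random

def decision_aid(correct_target, distractors, fidelity=0.95):
    """Recommend a target; wrong with probability 1 - fidelity (injected error)."""
    if random.random() < fidelity:
        return correct_target
    return random.choice(distractors)

def execute(recommendation, loa, operator_approves=False, operator_vetoes=False):
    """Apply a recommendation according to the level of automation (LOA)."""
    if loa == "manual":
        return None  # operator performs the task without aid-driven execution
    if loa == "management-by-consent":
        # acts only after explicit operator approval
        return recommendation if operator_approves else None
    if loa == "management-by-exception":
        # acts automatically unless the operator vetoes in time
        return None if operator_vetoes else recommendation
    raise ValueError(f"unknown LOA: {loa}")

# 95%-correct aid, as in the lower-fidelity condition of the study
rec = decision_aid("T1", ["T2", "T3"], fidelity=0.95)
print(execute(rec, "management-by-consent", operator_approves=True))
```

The asymmetry between the last two branches captures the LOA distinction: management-by-consent defaults to inaction until the operator approves, while management-by-exception defaults to action unless the operator intervenes.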


Proceedings of SPIE, the International Society for Optical Engineering | 2005

Synthetic Vision System for Improving Unmanned Aerial Vehicle Operator Situation Awareness

Gloria L. Calhoun; Mark H. Draper; Mike Abernathy; Frank J. Delgado; Michael Patzek

The Air Force Research Laboratory's Human Effectiveness Directorate (AFRL/HE) supports research addressing human factors associated with Unmanned Aerial Vehicle (UAV) operator control stations. Recent research, in collaboration with Rapid Imaging Software, Inc., has focused on determining the value of combining synthetic vision data with live camera video presented on a UAV control station display. Information is constructed from databases (e.g., terrain, cultural features, pre-mission plan, etc.), as well as numerous information updates via networked communication with other sources (e.g., weather, intel). This information is overlaid conformally, in real time, onto the dynamic camera video image display presented to operators. Synthetic vision overlay technology is expected to improve operator situation awareness by highlighting key spatial information elements of interest directly onto the video image, such as threat locations, expected locations of targets, landmarks, emergency airfields, etc. It may also help maintain an operator's situation awareness during periods of video datalink degradation/dropout and when operating in conditions of poor visibility. Additionally, this technology may serve as an intuitive means of distributed communication between geographically separated users. This paper discusses the tailoring of synthetic overlay technology for several UAV applications. Pertinent human factors issues are detailed, as well as the usability, simulation, and flight test evaluations required to determine how best to combine synthetic vision data with live camera video presented on a ground control station display and to validate that a synthetic vision system is beneficial for UAV applications.
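
At its core, the conformal overlay amounts to projecting each database item's world position into the camera's pixel frame on every video frame. The sketch below uses a simple pinhole-camera model with invented parameters; it illustrates the general technique, not AFRL's or Rapid Imaging Software's implementation.

```python
import numpy as np

def project_to_pixels(p_world, cam_pos, R_world_to_cam, f_px, cx, cy):
    """Project a 3D world point into pixel coordinates via a pinhole model."""
    p_cam = R_world_to_cam @ (np.asarray(p_world, dtype=float) - cam_pos)
    if p_cam[2] <= 0:
        return None  # point is behind the camera; draw no symbology
    u = f_px * p_cam[0] / p_cam[2] + cx  # horizontal pixel coordinate
    v = f_px * p_cam[1] / p_cam[2] + cy  # vertical pixel coordinate
    return u, v

# Camera at the origin looking down +Z, 1000 px focal length, 640x480 image:
# a point 200 m ahead and slightly off-axis lands near the image center.
print(project_to_pixels([10.0, 5.0, 200.0], np.zeros(3), np.eye(3), 1000, 320, 240))
```

Symbology drawn at the returned (u, v) stays registered to the scene only while the camera pose estimate is accurate and timely.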


15th AIAA Aviation Technology, Integration, and Operations Conference | 2015

An Evaluation of Detect and Avoid (DAA) Displays for Unmanned Aircraft Systems: The Effect of Information Level and Display Location on Pilot Performance

Lisa Fern; R. Conrad Rorie; Jessica S. Pack; R. Jay Shively; Mark H. Draper

A consortium of government, industry and academia is currently working to establish minimum operational performance standards for Detect and Avoid (DAA) and Control and Communications (C2) systems in order to enable broader integration of Unmanned Aircraft Systems (UAS) into the National Airspace System (NAS). One subset of these performance standards will need to address the DAA display requirements that support an acceptable level of pilot performance. From a pilot's perspective, the DAA task is the maintenance of self separation and collision avoidance from other aircraft, utilizing the available information and controls within the Ground Control Station (GCS), including the DAA display. The pilot-in-the-loop DAA task requires the pilot to carry out three major functions: 1) detect a potential threat, 2) determine an appropriate resolution maneuver, and 3) execute that resolution maneuver via the GCS control and navigation interface(s). The purpose of the present study was to examine two main questions with respect to DAA display considerations that could impact pilots' ability to maintain well clear of other aircraft. First, what is the effect of a minimum (or basic) information display compared to an advanced information display on pilot performance? Second, what is the effect of display location on UAS pilot performance? Two information levels (basic, advanced) were crossed with two display locations (standalone, integrated), for a total of four displays. The authors propose an eight-stage pilot-DAA interaction timeline from which several pilot response time metrics can be extracted. These metrics were compared across the four display conditions. The results indicate that the advanced displays yielded faster overall response times than the basic displays; however, there were no significant differences between the standalone and integrated displays. Implications of the findings for understanding pilot performance on the DAA task and the development of DAA display performance standards, as well as the need for future research, are discussed.
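
Since the proposed metrics are differences between timestamped events on the interaction timeline, extraction is straightforward. The sketch below is illustrative only; the stage names are invented placeholders rather than the paper's actual eight stages.

```python
from dataclasses import dataclass

@dataclass
class Trial:
    events: dict  # maps a timeline stage name to its timestamp in seconds

def response_time(trial, start_stage, end_stage):
    """Elapsed time between two timeline stages, or None if either is missing."""
    t0 = trial.events.get(start_stage)
    t1 = trial.events.get(end_stage)
    return None if t0 is None or t1 is None else t1 - t0

trial = Trial(events={
    "alert_onset": 0.0,         # DAA alert appears on the display
    "first_interaction": 3.2,   # pilot begins assessing the threat
    "maneuver_decision": 9.8,   # resolution maneuver chosen
    "maneuver_executed": 14.5,  # maneuver uploaded via the GCS interface
})
print(response_time(trial, "alert_onset", "maneuver_executed"))  # -> 14.5
```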


International Journal of Speech Technology | 2005

Commercial Speech Recognition Technology in the Military Domain: Results of Two Recent Research Efforts

David T. Williamson; Mark H. Draper; Gloria L. Calhoun; Timothy P. Barry

While speech recognition technology has long held the potential for improving the effectiveness of military operations, it is only within the last several years that speech systems have become capable of realizing that potential. Commercial speech recognition technology developments aimed at improving robustness for automotive and cellular phone applications have produced capabilities that can be exploited in various military systems. This paper discusses the results of two research efforts directed toward applying commercial-off-the-shelf speech recognition technology in the military domain. The first effort is the development and evaluation of a speech recognition interface to the Theater Air Planning system responsible for generating air tasking orders in a military Air Operations Center. The second effort examined the utility of speech versus conventional manual input for tasks performed by operators in an unmanned aerial vehicle control station simulator. Both efforts clearly demonstrate the military benefits obtainable from the proper application of speech technology.
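
The application pattern shared by both efforts can be sketched as a small command grammar that routes recognizer output to control station actions. The code below assumes some COTS recognizer has already produced a text transcript; the grammar, commands, and function names are all invented for illustration.

```python
import re

# Hypothetical command grammar: recognized phrase pattern -> station action.
COMMANDS = {
    r"zoom (in|out)":             lambda m: f"camera zoom {m.group(1)}",
    r"select waypoint (\d+)":     lambda m: f"waypoint {m.group(1)} selected",
    r"display (map|sensor) view": lambda m: f"switched to {m.group(1)} view",
}

def dispatch(transcript):
    """Match a recognized utterance against the grammar and act on it."""
    text = transcript.lower().strip()
    for pattern, action in COMMANDS.items():
        match = re.fullmatch(pattern, text)
        if match:
            return action(match)
    return "unrecognized; fall back to manual input"

print(dispatch("Select waypoint 4"))  # -> waypoint 4 selected
```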


Proceedings of the Human Factors and Ergonomics Society Annual Meeting | 2014

Human-Machine Interface Development for Common Airborne Sense and Avoid Program

Mark H. Draper; Jessica S. Pack; Sara J. Darrah; Sean N. Moulton; Gloria L. Calhoun

Unmanned aerial systems (UAS) are starting to access manned airspace today, and this trend will grow substantially as the number of UAS and their associated missions expands. A key challenge to safely integrating UAS into the National Airspace System (NAS) is providing a reliable means for UAS to sense and avoid (SAA) other aircraft. The US Air Force is addressing this challenge through the Common Airborne Sense and Avoid (C-ABSAA) program. C-ABSAA is developing a sophisticated “sense-and-avoid” capability that will be integrated onboard larger UAS. This paper summarizes human factors activities associated with enabling this capability. Existing knowledge was reviewed and cross-checked to formulate a first-draft set of minimum information requirements for SAA tasks. A gap analysis spawned an intruder depiction study and an operator requirements survey. Finally, operator interface prototypes were designed to support 1) a minimum information set for SAA and 2) several advanced situation assessment and maneuver guidance aids. Through collaboration with NASA's UAS in the NAS project, these concepts were incorporated into a UAS ground control station for formal evaluation through a high-fidelity human-in-the-loop simulation.


Proceedings of the Human Factors and Ergonomics Society Annual Meeting | 2011

Human-Automation Challenges for the Control of Unmanned Aerial Systems

Lisa Fern; R. Jay Shively; Mark H. Draper; Nancy J. Cooke; Christopher A. Miller

The continuing proliferation of Unmanned Aerial Systems (UAS) in both civil and military operations has presented a multitude of human factors challenges, from how to bridge the gap between the demand for and availability of trained operators to how to organize and present data in meaningful ways. Underlying many of these challenges is the issue of how automation capabilities can best be utilized to help human operators manage increasing complexity and workload. The purpose of this discussion panel is to examine current research and perspectives on human-automation interaction and how it relates to the future of UAS control. The panel is composed of five well-known researchers, all experts in the area of human-automation interaction. The range of topics that the panelists will discuss includes: how automation taxonomies can be applied to UAS design; opportunities to exploit automation capabilities in multi-vehicle contexts; current examples of automation research results, particularly in the area of multiple UAS control, and how they can be applied to future UAS; and how to design automation to maximize UAS mission effectiveness.


Proceedings of the Human Factors and Ergonomics Society Annual Meeting | 2011

Tools and Techniques for MOMU (Multiple Operator Multiple UAV) Environments: An Operational Perspective

Tal Oron-Gilad; Talya Porat; Lisa Fern; Mark H. Draper; R. Jay Shively; Jacob Silbiger; Michal Rottem-Hovev

Multiple operators controlling multiple unmanned aerial vehicles (MOMU) can be an efficient operational setup for reconnaissance and surveillance missions. However, it dictates switching and coordination among operators. Efficient switching is time-critical and cognitively demanding, thus vitally affecting mission accomplishment. As such, tools and techniques (T&Ts) to facilitate switching and coordination among operators are required. Furthermore, the development of metrics and test scenarios becomes essential to evaluate, refine, and adjust T&Ts to the specifics of the operational environment. To illustrate, tools that were designed and developed for MOMU operations as part of a US-Israel collaborative research project are described, and associated research findings are summarized.


Proceedings of the Human Factors and Ergonomics Society Annual Meeting | 2012

Where’s the Beef: How Pervasive Is Cognitive Engineering in Military Research & Development Today?

Cynthia O. Dominguez; Patricia L. McDermott; Lawrence G. Shattuck; Pamela Savage-Knepshield; Christopher Nemeth; Mark H. Draper; Kristin Moore

Cognitive Engineering methods were developed to enable human factors practitioners to understand and systematically support the cognitive work of people working “at the sharp end of the spear.” Military members for whom DoD acquisition organizations develop systems are the quintessential “sharp end of the spear.” This panel is proposed to share present-day experience from military and industry reflecting how pervasively Cognitive Engineering is contributing to research and development for the highly complex military systems being operated under conditions of stress, time pressure, and uncertainty today. The implications for human factors practitioners will be highlighted, both in terms of practices to continue and areas for improvement.


Proceedings of SPIE, the International Society for Optical Engineering | 2006

Simulation Assessment of Synthetic Vision Concepts for UAV Operations

Gloria L. Calhoun; Mark H. Draper; Heath A. Ruff; Jeremy T. Nelson; Austen T. Lefebvre

The Air Force Research Laboratory's Human Effectiveness Directorate supports research addressing human factors associated with Unmanned Aerial Vehicle (UAV) operator control stations. One research thrust explores the value of combining synthetic vision data with live camera video presented on a UAV control station display. Information is constructed from databases (e.g., terrain, etc.), as well as numerous information updates via networked communication with other sources. This information is overlaid conformally, in real time, onto the dynamic camera video image display presented to operators. Synthetic vision overlay technology is expected to improve operator situation awareness by highlighting elements of interest within the video image. It can also assist the operator in maintaining situation awareness of an environment if the video datalink is temporarily degraded. Synthetic vision overlays can likewise facilitate intuitive communication of spatial information between geographically separated users. This paper discusses results from a high-fidelity UAV simulation evaluation of synthetic symbology overlaid on a (simulated) live camera display. Specifically, the effects of different telemetry data update rates for synthetic visual data were examined for a representative sensor operator task. Participants controlled the zoom and orientation of the camera to find and designate targets. The results from both performance and subjective data demonstrated the potential benefit of an overlay of synthetic symbology for improving situation awareness, reducing workload, and decreasing the time required to designate points of interest. Implications of symbology update rate are discussed, as well as other human factors issues.
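
A back-of-the-envelope calculation shows why the telemetry update rate matters for overlay registration: between updates the symbology is drawn from stale aircraft state, so it drifts off its true scene location. The speeds, rates, and ranges below are our own illustrative choices, not values from the study.

```python
import math

def worst_case_misregistration_deg(speed_mps, update_hz, range_m):
    """Angular overlay error due to position lag accumulated between updates."""
    lag_m = speed_mps / update_hz  # distance flown since the last telemetry update
    return math.degrees(math.atan2(lag_m, range_m))

# A 50 m/s UAV viewing a point 1 km away: 1 Hz vs 20 Hz telemetry.
for hz in (1, 20):
    err = worst_case_misregistration_deg(50.0, hz, 1000.0)
    print(f"{hz:2d} Hz -> {err:.2f} deg of misregistration")
```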


Proceedings of the Human Factors and Ergonomics Society Annual Meeting | 2015

Exploring Performance Differences Between UAS Sense-and-Avoid Displays

Jessica S. Pack; Mark H. Draper; Sara J. Darrah; Mark P. Squire; Andrea Cooks

The desire for Unmanned Aerial Systems (UAS) to routinely access manned airspace has grown substantially due to the proliferation of UAS and their associated applications. A key challenge to safely integrating UAS into the National Airspace System (NAS) is providing a reliable means for UAS to sense and avoid (SAA) other aircraft. The US Air Force is addressing this challenge through the Common Airborne Sense and Avoid (C-ABSAA) program. C-ABSAA is developing a sophisticated SAA capability that will be integrated onboard larger UAS. This paper summarizes key human factors efforts to develop a SAA traffic display with the appropriate level of information needed to aid the pilot in successfully maintaining self-separation and collision avoidance from other aircraft. The present study examined performance differences between candidate SAA displays as well as the most efficient manner to communicate recommended maneuvers. Fifteen Class 3-5 UAS military pilots compared five stand-alone SAA displays across two weather constraint levels (no weather, weather). Results indicated that the Banding Display tended to be most effective in aiding pilot performance during a SAA situation, with faster response times, less change in response time between weather conditions, no collision avoidance alert violations, and favorable subjective feedback. Implications of these findings on determining the acceptable level of information needed on a SAA display to aid pilot performance are discussed.

Collaboration


Top co-authors of Mark H. Draper and their affiliations:

Lisa Fern | San Jose State University
Gloria L. Calhoun | Air Force Research Laboratory
Tal Oron-Gilad | Ben-Gurion University of the Negev
Talya Porat | Ben-Gurion University of the Negev
Michael Patzek | Air Force Research Laboratory
Austen T. Lefebvre | Air Force Research Laboratory