Network


Latest external collaborations at the country level. Dive into details by clicking on the dots.

Hotspot


Dive into the research topics where B. Danette Allen is active.

Publication


Featured research published by B. Danette Allen.


Archive | 2017

A Natural Interaction Interface for UAVs Using Intuitive Gesture Recognition

Meghan Chandarana; Anna C. Trujillo; Kenji Shimada; B. Danette Allen

The popularity of unmanned aerial vehicles (UAVs) is increasing as technological advancements boost their favorability for a broad range of applications. One application is science data collection. In fields like earth and atmospheric science, researchers are seeking to use UAVs to augment their current portfolio of platforms and increase their accessibility to geographic areas of interest. By increasing the number of data collection platforms, UAVs will significantly improve system robustness and allow for more sophisticated studies. Scientists would like the ability to deploy an available fleet of UAVs to traverse a desired flight path and collect sensor data without needing to understand the complex low-level controls required to describe and coordinate such a mission. A natural interaction interface for a Ground Control System (GCS) using gesture recognition is developed to allow non-expert users (e.g., scientists) to define a complex flight path for a UAV using intuitive hand gesture inputs from the constructed gesture library. The GCS calculates the combined trajectory on-line, verifies the trajectory with the user, and sends it to the UAV controller to be flown.
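
The abstract above describes a pipeline in which hand gestures select trajectory-segment primitives from a library, the GCS assembles them into a single flight path, and the operator confirms the result before it is sent to the UAV controller. The sketch below is a minimal illustration of that flow; the gesture names, segment shapes, and parameters are assumptions for illustration and are not taken from the paper.

```python
# Hypothetical sketch of the gesture-to-trajectory pipeline summarized above.
# Gesture names, segment shapes, and parameter values are illustrative only.
from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class Segment:
    shape: str    # e.g. "line", "arc"
    params: dict  # geometric parameters, e.g. {"length_m": 50.0}


# Gesture library: each recognized hand gesture maps to a trajectory-segment primitive.
GESTURE_LIBRARY: Dict[str, Callable[[], Segment]] = {
    "swipe_forward": lambda: Segment("line", {"length_m": 50.0}),
    "circle":        lambda: Segment("arc", {"radius_m": 20.0, "sweep_deg": 360.0}),
}


def build_flight_path(gestures: List[str]) -> List[Segment]:
    """Combine gesture-defined segment primitives into one ordered flight path."""
    path = []
    for gesture in gestures:
        if gesture not in GESTURE_LIBRARY:
            raise ValueError(f"Unrecognized gesture: {gesture}")
        path.append(GESTURE_LIBRARY[gesture]())
    return path


def confirm_and_send(path: List[Segment],
                     send_to_controller: Callable[[List[Segment]], None]) -> bool:
    """Echo the combined trajectory back to the operator; on approval, pass it to the UAV controller."""
    print("Planned path:", [(s.shape, s.params) for s in path])
    if input("Fly this trajectory? [y/N] ").strip().lower() == "y":
        send_to_controller(path)
        return True
    return False
```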


15th AIAA Aviation Technology, Integration, and Operations Conference | 2015

Collaborating with Autonomous Agents

Anna C. Trujillo; Charles D. Cross; Henry Fan; Lucas E. Hempley; Mark A. Motter; James H. Neilan; Garry Qualls; Paul M. Rothhaar; Loc Tran; B. Danette Allen

With the anticipated increase of small unmanned aircraft systems (sUAS) entering into the National Airspace System, it is highly likely that vehicle operators will be teaming with fleets of small autonomous vehicles. The small vehicles may consist of sUAS, which weigh 55 pounds or less and typically fly at altitudes of 400 feet and below, and small ground vehicles typically operating in buildings or defined small campuses. Typically, the vehicle operators are not concerned with manual control of the vehicle; instead they are concerned with the overall mission. In order for this vision of high-level mission operators working with fleets of vehicles to come to fruition, many human factors related challenges must be investigated and solved. First, the interface between the human operator and the autonomous agent must be at a level that the operator needs and the agents can understand. This paper details the natural language human factors efforts that NASA Langley's Autonomy Incubator is focusing on. In particular, these efforts focus on allowing the operator to interact with the system using speech and gestures rather than a mouse and keyboard. With this ability of the system to understand both speech and gestures, operators not familiar with the vehicle dynamics will be able to easily plan, initiate, and change missions using a language familiar to them rather than having to learn and converse in the vehicle's language. This will foster better teaming between the operator and the autonomous agent, which will help lower workload, increase situation awareness, and improve performance of the system as a whole.


15th AIAA Aviation Technology, Integration, and Operations Conference | 2015

Operating in "Strange New Worlds" and Measuring Success - Test and Evaluation in Complex Environments

Garry Qualls; Charles D. Cross; Matthew Mahlin; Gilbert Montague; Mark A. Motter; James H. Neilan; Paul M. Rothhaar; Loc Tran; Anna C. Trujillo; B. Danette Allen

Software tools are being developed by the Autonomy Incubator at NASA's Langley Research Center that will provide an integrated and scalable capability to support research and non-research flight operations across several flight domains, including urban and mixed indoor-outdoor operations. These tools incorporate a full range of data products to support mission planning, approval, flight operations, and post-flight review. The system can support a number of different operational scenarios that can incorporate live and archived data streams for UAS operators, airspace regulators, and other important stakeholders. Example use cases are described that illustrate how the tools will benefit a variety of users in nominal and off-nominal operational scenarios. An overview is presented for the current state of the toolset, including a summary of current demonstrations that have been completed. Details of the final, fully operational capability are also presented, including the interfaces that will be supported to ensure compliance with existing and future airspace operations environments.


international conference on unmanned aircraft systems | 2017

Speech-based natural language interface for UAV trajectory generation

Erica L. Meszaros; Meghan Chandarana; Anna C. Trujillo; B. Danette Allen

In recent years, natural language machine interfaces have become increasingly common. These interfaces allow for more intuitive communication with machines, reducing the complexity of interacting with these systems and enabling their use by non-expert users. Most of these natural language interfaces rely on speech, including such well-known devices as the iPhone's Siri application, Cortana, Amazon's Alexa and Echo devices, and others. Given their intuitive functionality, natural language interfaces have also been investigated as a method for controlling unmanned aerial vehicles (UAVs), allowing non-subject matter experts to use these tools in their scientific pursuits. This paper examines a speech-based natural language interface for defining UAV trajectories. To determine the efficacy of this interface, a user study is also presented that examines how users perform with this interface compared to a traditional mouse-based interface. The results of the user study are described in order to show how accurately users were able to define trajectories as well as user preference for using the speech-based system both before and after participating in the user study. Additional data are presented on whether users had previous experience with speech-based interfaces and how long they spent training with the interface before participating in the study. The user study demonstrates the potential of speech-based interfaces for UAV trajectory generation and suggests methods for future improvement and incorporation of natural language interfaces for UAV pilots.
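
As a companion to the description above, the following is a small, hypothetical sketch of how a transcribed voice command might be turned into a trajectory-segment description; the command vocabulary, units, and output fields are assumptions, not the grammar actually used in the paper.

```python
import re

# Hypothetical mapping from spoken keywords to segment shapes; the vocabulary,
# units, and output fields are illustrative assumptions.
COMMANDS = {
    "forward": "line",
    "climb": "vertical",
    "circle": "arc",
}


def parse_utterance(utterance: str) -> dict:
    """Turn one transcribed utterance into a rough trajectory-segment description."""
    text = utterance.lower()
    for keyword, shape in COMMANDS.items():
        if keyword in text:
            # Pull out an optional magnitude such as "100 meters" or "50 ft".
            match = re.search(r"(\d+(?:\.\d+)?)\s*(meters|meter|m|feet|ft)\b", text)
            magnitude = float(match.group(1)) if match else None
            return {"shape": shape, "magnitude": magnitude, "raw": utterance}
    raise ValueError(f"No known command found in: {utterance!r}")


print(parse_utterance("fly forward 100 meters"))
# -> {'shape': 'line', 'magnitude': 100.0, 'raw': 'fly forward 100 meters'}
```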


Proceedings of the Human Factors and Ergonomics Society Annual Meeting | 2017

Natural Language Based Multimodal Interface for UAV Mission Planning

Meghan Chandarana; Erica L. Meszaros; Anna C. Trujillo; B. Danette Allen

As the number of viable applications for unmanned aerial vehicle (UAV) systems increases at an exponential rate, interfaces that reduce the reliance on highly skilled engineers and pilots must be developed. Recent work aims to make use of common human communication modalities such as speech and gesture. This paper explores a multimodal natural language interface that uses a combination of speech and gesture input modalities to build complex UAV flight paths by defining trajectory segment primitives. Gesture inputs are used to define the general shape of a segment while speech inputs provide additional geometric information needed to fully characterize a trajectory segment. A user study is conducted in order to evaluate the efficacy of the multimodal interface.
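
The segment-building idea described above (the gesture supplies a segment's general shape, speech supplies the remaining geometric information) can be sketched as follows; the per-shape required parameters are illustrative assumptions rather than the paper's actual schema.

```python
# Illustrative fusion of the two modalities described above: the recognized gesture
# supplies the segment's general shape, and speech supplies the remaining geometry.
# The per-shape required parameters below are assumptions for illustration.
REQUIRED_PARAMS = {
    "line": ["length_m"],
    "arc": ["radius_m", "sweep_deg"],
    "spiral": ["radius_m", "height_m"],
}


def fuse_segment(gesture_shape: str, speech_params: dict) -> dict:
    """Merge a gesture-recognized shape with speech-derived geometry into one fully specified segment."""
    required = REQUIRED_PARAMS[gesture_shape]
    missing = [name for name in required if name not in speech_params]
    if missing:
        # A real interface would prompt the operator for the missing values instead of failing.
        raise ValueError(f"'{gesture_shape}' segment still needs: {missing}")
    return {"shape": gesture_shape, **{name: speech_params[name] for name in required}}


print(fuse_segment("arc", {"radius_m": 15.0, "sweep_deg": 180.0}))
# -> {'shape': 'arc', 'radius_m': 15.0, 'sweep_deg': 180.0}
```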


international geoscience and remote sensing symposium | 2010

Management of NASA's Earth Venture-1 (EV-1) airborne science selections

B. Danette Allen; Todd C. Denkins; Jon H. Kilgore

The Earth System Science Pathfinder (ESSP) Program Office (PO) is responsible for programmatic management of National Aeronautics and Space Administration's (NASA) Science Mission Directorate's (SMD) Earth Venture (EV) missions. EV is composed of both orbital and suborbital Earth science missions. The first of the Earth Venture missions is EV-1, which are Principal Investigator-led, temporally-sustained, suborbital (airborne) science investigations cost-capped at $30M each over five years. Traditional orbital procedures, processes and standards used to manage previous ESSP missions, while effective, are disproportionally comprehensive for suborbital missions. Conversely, existing airborne practices are primarily intended for smaller, temporally shorter investigations, and traditionally managed directly by a program scientist as opposed to a program office such as ESSP. ESSP has crafted a management approach for the successful implementation of the EV-1 missions within the constructs of current governance models. NASA Research and Technology Program and Project Management Requirements form the foundation of the approach for EV-1. Additionally, requirements from other existing NASA Procedural Requirements (NPRs), systems engineering guidance and management handbooks were adapted to manage programmatic, technical, schedule, cost elements and risk. The program management approach presented here for EV-1 will set the precedent for future suborbital EV missions.


International Conference on Applied Human Factors and Ergonomics | 2017

Compensating for Limitations in Speech-Based Natural Language Processing with Multimodal Interfaces in UAV Operation

Erica L. Meszaros; Meghan Chandarana; Anna C. Trujillo; B. Danette Allen

Natural language interfaces are becoming more ubiquitous. By allowing for more natural communication, reducing the complexity of interacting with machines, and enabling non-expert users, these interfaces have found homes in numerous common products. However, these natural language interfaces still have great room for growth and development in order to better reflect human speech patterns. Intuitive speech communication is often accompanied by gestural information that is currently lacking from most speech interfaces. Exclusion of gestural data reduces a machine's ability to interpret deictic information and understand some semantic intent. To allow for truly intuitive communication between humans and machines, a natural language interface must understand not only speech but also gestural data. This paper will outline the limitations and restrictions of some of the most popular and common speech-only natural language processing algorithms and systems in use today. Focus will be given to extra-linguistic communication aspects, including gestural information. Current research trends will then be presented that have been designed to compensate for these gaps by incorporating extra-linguistic information. The success of each of these trends will then be evaluated, as well as the hopefulness of continued investigative efforts. Additionally, a model multimodal interface will be presented that incorporates language and gesture data in order to demonstrate the effectiveness of such an interface. The gestural portion of this interface is included to compensate for some of the limitations of speech-only natural language interfaces. Combining these two types of natural language interfaces thereby works to reduce the limitations of natural language interfaces and increase their success. This presentation will discuss how the two interfaces work together and will specify how the speech interface limitations are addressed through the inclusion of a gestural system.


International Conference on Applied Human Factors and Ergonomics | 2017

Challenges of Using Gestures in Multimodal HMI for Unmanned Mission Planning

Meghan Chandarana; Erica L. Meszaros; Anna C. Trujillo; B. Danette Allen

As the use of autonomous systems continues to proliferate, their user base has transitioned from one primarily comprised of pilots and engineers with knowledge of the low level systems and algorithms to non-expert UAV users like scientists. This shift has highlighted the need to develop more intuitive and easy-to-use interfaces such that the strengths of the autonomous system can still be utilized without requiring any prior knowledge about the complexities of running such a system. Gesture-based natural language interfaces have emerged as a promising new alternative input modality. While on their own gesture-based interfaces can build general descriptions of desired inputs (e.g., flight path shapes), it is difficult to define more specific information (e.g., lengths, radii, height) while simultaneously preserving the intuitiveness of the interface. In order to assuage this issue, multimodal interfaces that integrate both gesture and speech can be used. These interfaces are intended to model typical human-human communication patterns which supplement gestures with speech. However, challenges arise when integrating gestures into a multimodal HMI architecture such as user perception of their ability vs. actual performance, system feedback, synchronization between input modalities, and the bounds on gesture execution requirements. We discuss these challenges, their possible causes and provide suggestions for mitigating these issues in the design of future multimodal interfaces. Although this paper discusses these challenges in the context of unmanned aerial vehicle mission planning, similar issues and solutions can be extended to unmanned ground and underwater missions.


Proceedings of SPIE | 2013

Management approach for NASA's Earth Venture-1 (EV-1) airborne science investigations

Anthony R. Guillory; Todd C. Denkins; B. Danette Allen

The Earth System Science Pathfinder (ESSP) Program Office (PO) is responsible for programmatic management of National Aeronautics and Space Administration's (NASA) Science Mission Directorate's (SMD) Earth Venture (EV) missions. EV is composed of both orbital and suborbital Earth science missions. The first of the Earth Venture missions is EV-1, which are Principal Investigator-led, temporally-sustained, suborbital (airborne) science investigations cost-capped at $30M each over five years.


advances in computer-human interaction | 2017

'Fly Like This': Natural Language Interface for UAV Mission Planning

Meghan Chandarana; Erica L. Meszaros; Anna C. Trujillo; B. Danette Allen


Collaboration


Dive into B. Danette Allen's collaboration.

Top Co-Authors

Meghan Chandarana
Carnegie Mellon University

Loc Tran
Langley Research Center

Anna Trujillo
Langley Research Center