Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Meghan Chandarana is active.

Publication


Featured research published by Meghan Chandarana.


Archive | 2017

A Natural Interaction Interface for UAVs Using Intuitive Gesture Recognition

Meghan Chandarana; Anna C. Trujillo; Kenji Shimada; B. Danette Allen

The popularity of unmanned aerial vehicles (UAVs) is increasing as technological advancements boost their favorability for a broad range of applications. One application is science data collection. In fields like earth and atmospheric science, researchers are seeking to use UAVs to augment their current portfolio of platforms and increase their accessibility to geographic areas of interest. By increasing the number of data collection platforms, UAVs will significantly improve system robustness and allow for more sophisticated studies. Scientists would like the ability to deploy an available fleet of UAVs to traverse a desired flight path and collect sensor data without needing to understand the complex low-level controls required to describe and coordinate such a mission. A natural interaction interface for a Ground Control System (GCS) using gesture recognition is developed to allow non-expert users (e.g., scientists) to define a complex flight path for a UAV using intuitive hand gesture inputs from the constructed gesture library. The GCS calculates the combined trajectory on-line, verifies the trajectory with the user, and sends it to the UAV controller to be flown.
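
The interface described above maps each recognized hand gesture to a trajectory segment primitive and chains the segments into a single flight path for verification. The sketch below illustrates only that composition step; the gesture names, segment shapes, and waypoint spacing are illustrative assumptions, not the paper's implementation.

    import numpy as np

    # Hypothetical gesture library: each recognized gesture label maps to a function
    # returning a local trajectory segment as a list of (x, y, z) waypoints.
    def straight_segment(length=10.0, n=10):
        return [np.array([i * length / n, 0.0, 0.0]) for i in range(1, n + 1)]

    def quarter_turn_segment(radius=5.0, n=10):
        angles = np.linspace(0.0, np.pi / 2, n)
        return [np.array([radius * np.sin(a), radius * (1.0 - np.cos(a)), 0.0]) for a in angles]

    GESTURE_LIBRARY = {"forward": straight_segment, "turn_left": quarter_turn_segment}

    def build_flight_path(gesture_sequence, start=(0.0, 0.0, 0.0)):
        """Chain segment primitives end-to-end into a single global waypoint list."""
        path = [np.array(start)]
        for gesture in gesture_sequence:
            origin = path[-1]
            # Translate the local segment to start where the previous one ended
            # (heading changes are ignored here for brevity).
            path.extend(origin + p for p in GESTURE_LIBRARY[gesture]())
        return path

    # Two recognized gestures become one combined trajectory of 21 waypoints.
    waypoints = build_flight_path(["forward", "turn_left"])
    print(len(waypoints), waypoints[-1])

In the workflow the abstract describes, the combined trajectory would then be shown to the user for verification before being sent to the UAV controller.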


International Conference on Unmanned Aircraft Systems | 2017

Speech-based natural language interface for UAV trajectory generation

Erica L. Meszaros; Meghan Chandarana; Anna C. Trujillo; B. Danette Allen

In recent years, natural language machine interfaces have become increasingly common. These interfaces allow for more intuitive communication with machines, reducing the complexity of interacting with these systems and enabling their use by non-expert users. Most of these natural language interfaces rely on speech, including such well-known examples as the iPhone's Siri application, Cortana, and Amazon's Alexa and Echo devices. Given their intuitive functionality, natural language interfaces have also been investigated as a method for controlling unmanned aerial vehicles (UAVs), allowing non-subject matter experts to use these tools in their scientific pursuits. This paper examines a speech-based natural language interface for defining UAV trajectories. To determine the efficacy of this interface, a user study is also presented that examines how users perform with this interface compared to a traditional mouse-based interface. The results show how accurately users were able to define trajectories, as well as user preference for the speech-based system both before and after participating in the study. Additional data are presented on whether users had previous experience with speech-based interfaces and how long they spent training with the interface before participating in the study. The user study demonstrates the potential of speech-based interfaces for UAV trajectory generation and suggests methods for future improvement and incorporation of natural language interfaces for UAV pilots.
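
As a rough illustration of how a transcribed utterance could be turned into a trajectory-segment description, the sketch below pattern-matches two command forms; the command grammar and parameter names are assumptions for illustration, not the grammar used in the paper or its user study.

    import re

    # Illustrative command patterns; the real interface's vocabulary is not shown here.
    SEGMENT_PATTERNS = [
        (re.compile(r"fly straight for (\d+(?:\.\d+)?) meters?"),
         lambda m: {"type": "line", "length_m": float(m.group(1))}),
        (re.compile(r"circle with radius (\d+(?:\.\d+)?) meters?"),
         lambda m: {"type": "circle", "radius_m": float(m.group(1))}),
    ]

    def parse_command(utterance):
        """Map a transcribed spoken command to a trajectory-segment description."""
        text = utterance.lower().strip()
        for pattern, builder in SEGMENT_PATTERNS:
            match = pattern.search(text)
            if match:
                return builder(match)
        return None  # unrecognized command; the interface would re-prompt the user

    print(parse_command("Fly straight for 20 meters"))  # {'type': 'line', 'length_m': 20.0}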


Proceedings of the Human Factors and Ergonomics Society Annual Meeting | 2017

Natural Language Based Multimodal Interface for UAV Mission Planning

Meghan Chandarana; Erica L. Meszaros; Anna C. Trujillo; B. Danette Allen

As the number of viable applications for unmanned aerial vehicle (UAV) systems increases at an exponential rate, interfaces that reduce the reliance on highly skilled engineers and pilots must be developed. Recent work aims to make use of common human communication modalities such as speech and gesture. This paper explores a multimodal natural language interface that uses a combination of speech and gesture input modalities to build complex UAV flight paths by defining trajectory segment primitives. Gesture inputs are used to define the general shape of a segment while speech inputs provide additional geometric information needed to fully characterize a trajectory segment. A user study is conducted in order to evaluate the efficacy of the multimodal interface.
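
The fusion step the abstract describes, where a gesture fixes the general shape of a segment and speech supplies the remaining geometry, can be sketched as follows; the shape names, parameter fields, and error handling are illustrative assumptions rather than the paper's design.

    # Geometric parameters assumed to be required to fully specify each shape.
    REQUIRED_PARAMS = {
        "line":   ["length_m"],
        "circle": ["radius_m"],
        "spiral": ["radius_m", "height_m"],
    }

    def fuse_segment(gesture_shape, speech_params):
        """Combine a recognized shape with spoken geometry into one segment spec."""
        missing = [p for p in REQUIRED_PARAMS[gesture_shape] if p not in speech_params]
        if missing:
            # A real interface would prompt the user to speak the missing values.
            raise ValueError(f"segment '{gesture_shape}' still needs: {missing}")
        return {"shape": gesture_shape, **speech_params}

    # Gesture says "spiral"; speech says "radius 5 meters, height 10 meters".
    print(fuse_segment("spiral", {"radius_m": 5.0, "height_m": 10.0}))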


International Conference on Applied Human Factors and Ergonomics | 2017

Compensating for Limitations in Speech-Based Natural Language Processing with Multimodal Interfaces in UAV Operation

Erica L. Meszaros; Meghan Chandarana; Anna C. Trujillo; B. Danette Allen

Natural language interfaces are becoming increasingly ubiquitous. By allowing for more natural communication, reducing the complexity of interacting with machines, and enabling non-expert users, these interfaces have found homes in numerous common products. However, these natural language interfaces still have considerable room for growth and development in order to better reflect human speech patterns. Intuitive speech communication is often accompanied by gestural information that is currently lacking from most speech interfaces. Exclusion of gestural data reduces a machine's ability to interpret deictic information and understand some semantic intent. To allow for truly intuitive communication between humans and machines, a natural language interface must understand not only speech but also gestural data. This paper outlines the limitations and restrictions of some of the most popular and common speech-only natural language processing algorithms and systems in use today. Focus is given to extra-linguistic communication aspects, including gestural information. Current research trends are then presented that have been designed to compensate for these gaps by incorporating extra-linguistic information. The success of each of these trends is evaluated, as well as the promise of continued investigative efforts. Additionally, a model multimodal interface is presented that incorporates language and gesture data in order to demonstrate the effectiveness of such an interface. The gestural portion of this interface is included to compensate for some of the limitations of speech-only natural language interfaces. Combining the two modalities thereby reduces the limitations of speech-only interfaces and increases their success. This presentation discusses how the two interfaces work together and specifies how the speech interface limitations are addressed through the inclusion of a gestural system.
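
One concrete example of the deictic gap described above: the utterance "fly over there" carries no target on its own, but a pointing gesture can supply it. The sketch below intersects an assumed pointing ray with a flat ground plane; the coordinate frames and the z = 0 plane are assumptions for illustration, not the interface presented in the paper.

    import numpy as np

    def resolve_pointing_target(hand_origin, pointing_direction, ground_z=0.0):
        """Intersect a pointing ray with a flat ground plane to recover the target."""
        o = np.asarray(hand_origin, dtype=float)
        d = np.asarray(pointing_direction, dtype=float)
        if abs(d[2]) < 1e-9:
            return None  # pointing parallel to the ground: no intersection
        t = (ground_z - o[2]) / d[2]
        return o + t * d if t > 0 else None  # accept only targets in front of the hand

    # Speech: "fly over there"; gesture: hand 1.5 m above ground, pointing down-range.
    print(resolve_pointing_target([0.0, 0.0, 1.5], [1.0, 0.2, -0.3]))  # -> [5. 1. 0.]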


International Conference on Applied Human Factors and Ergonomics | 2017

Challenges of Using Gestures in Multimodal HMI for Unmanned Mission Planning

Meghan Chandarana; Erica L. Meszaros; Anna C. Trujillo; B. Danette Allen

As the use of autonomous systems continues to proliferate, their user base has transitioned from one composed primarily of pilots and engineers with knowledge of the low-level systems and algorithms to non-expert UAV users such as scientists. This shift has highlighted the need to develop more intuitive and easy-to-use interfaces so that the strengths of the autonomous system can still be utilized without requiring any prior knowledge about the complexities of running such a system. Gesture-based natural language interfaces have emerged as a promising alternative input modality. While gesture-based interfaces on their own can build general descriptions of desired inputs (e.g., flight path shapes), it is difficult to define more specific information (e.g., lengths, radii, height) while simultaneously preserving the intuitiveness of the interface. To mitigate this issue, multimodal interfaces that integrate both gesture and speech can be used. These interfaces are intended to model typical human-human communication patterns, which supplement gestures with speech. However, challenges arise when integrating gestures into a multimodal HMI architecture, such as the gap between user perception of their ability and actual performance, system feedback, synchronization between input modalities, and the bounds on gesture execution requirements. We discuss these challenges and their possible causes, and provide suggestions for mitigating them in the design of future multimodal interfaces. Although this paper discusses these challenges in the context of unmanned aerial vehicle mission planning, similar issues and solutions can be extended to unmanned ground and underwater missions.
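
For the synchronization challenge in particular, one common mitigation is to pair each recognized gesture with the utterance that arrives within a fixed time window of it. The sketch below uses an assumed two-second window and is an illustration of that idea, not the interface described in the paper.

    def pair_modalities(gesture_events, speech_events, window_s=2.0):
        """Match each (timestamp, gesture) to the closest utterance within the window."""
        pairs = []
        for g_time, gesture in gesture_events:
            candidates = [(abs(s_time - g_time), utterance)
                          for s_time, utterance in speech_events
                          if abs(s_time - g_time) <= window_s]
            # Unmatched gestures are flagged so the system can ask the user to repeat.
            pairs.append((gesture, min(candidates)[1] if candidates else None))
        return pairs

    gestures = [(10.2, "circle"), (15.8, "line")]
    speech = [(10.9, "radius five meters"), (16.1, "length ten meters")]
    print(pair_modalities(gestures, speech))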


2018 Aviation Technology, Integration, and Operations Conference | 2018

Swarm Size Planning Tool for Multi-Job Type Missions

Meghan Chandarana; Michael Lewis; Bonnie D. Allen; Katia P. Sycara; Sebastian Scherer

As part of swarm search and service (SSS) missions, swarms are tasked with searching an area while simultaneously servicing jobs as they are encountered. Jobs must be immediately serviced and can be one of multiple types. Each type requires that vehicle(s) break off from the swarm and travel to the job site for a specified amount of time. The number of vehicles needed and the service time for each job type are known. Once a job has been successfully serviced, vehicles return to the swarm and are available for reallocation. When planning SSS missions, human operators are tasked with determining the number of vehicles needed to handle the expected job demand. The complex relationship between job type parameters makes this choice challenging. This work presents a prediction model used to estimate the swarm size necessary to achieve a given level of performance. User studies were conducted to determine the usefulness and ease of use of such a prediction model as an aid during mission planning. Results show that using the planning tool leads to 7x less missed area and a 50% cost reduction.
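
The relationship between job-type parameters and required swarm size can be illustrated with a back-of-the-envelope estimate: by Little's law, the expected number of vehicles tied up servicing each job type is its arrival rate times the vehicles it needs times its service time. The sketch below is only a rough illustration of that relationship under assumed numbers, not the prediction model evaluated in the paper.

    def estimate_swarm_size(job_types, search_vehicles, safety_margin=1.2):
        """Rough swarm-size estimate from expected servicing load plus search vehicles."""
        expected_busy = sum(rate_per_min * vehicles * service_min
                            for rate_per_min, vehicles, service_min in job_types)
        return int(round(safety_margin * (expected_busy + search_vehicles)))

    # Two job types: (expected jobs per minute, vehicles required, service time in minutes).
    job_types = [(0.5, 2, 4.0), (0.2, 1, 10.0)]
    print(estimate_swarm_size(job_types, search_vehicles=3))  # -> 11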


Advances in Computer-Human Interaction | 2017

'Fly Like This': Natural Language Interface for UAV Mission Planning

Meghan Chandarana; Erica L. Meszaros; Anna C. Trujillo; B. Danette Allen


International Conference on Unmanned Aircraft Systems | 2017

Analysis of a gesture-based interface for UAV flight path generation

Meghan Chandarana; Erica L. Meszaros; Anna C. Trujillo; B. Danette Allen


Advances in Computer-Human Interaction | 2017

Multi-Operator Gesture Control of Robotic Swarms Using Wearable Devices

Sasanka Nagavalli; Meghan Chandarana; Michael Lewis; Katia P. Sycara


16th AIAA Aviation Technology, Integration, and Operations Conference | 2016

A Safe Cooperative Framework for Atmospheric Science Missions with Multiple Heterogeneous UAS using Piecewise Bezier Curves

S. Bilal Mehdi; Javier Puig-Navarro; Ronald Choe; Venanzio Cichella; Naira Hovakimyan; Meghan Chandarana; Anna C. Trujillo; Paul M. Rothhaar; Loc Tran; James H. Neilan; B. Danette Allen

Collaboration


Dive into Meghan Chandarana's collaborations.

Top Co-Authors

Katia P. Sycara (Carnegie Mellon University)
Loc Tran (Langley Research Center)
Michael Lewis (University of Pittsburgh)
Anna C. Trujillo (Langley Research Center)