Network


Latest external collaborations at the country level.

Hotspot


Research topics where Mary L. Cummings is active.

Publication


Featured research published by Mary L. Cummings.


IEEE Transactions on Systems, Man, and Cybernetics | 2008

Predicting Controller Capacity in Supervisory Control of Multiple UAVs

Mary L. Cummings; Paul J. Mitchell

In the future vision of allowing a single operator to remotely control multiple unmanned vehicles, it is not well understood what cognitive constraints limit the number of vehicles and related tasks that a single operator can manage. This paper illustrates that, when predicting the number of unmanned aerial vehicles (UAVs) that a single operator can control, it is important to model the sources of wait times (WTs) caused by human-vehicle interaction, particularly since these times could lead to a system failure. Specifically, these sources of vehicle WTs include cognitive reorientation and interaction WT (WTI), queues for multiple-vehicle interactions, and loss of situation awareness (SA) WTs. When WTs were included, predictions using a multiple homogeneous and independent UAV simulation dropped by up to 67%, with loss of SA as the primary source of WT delays. Moreover, this paper demonstrates that even in a highly automated management-by-exception system, which should alleviate queuing and WTIs, operator capacity is still affected by the SA WT, causing a 36% decrease over the capacity model with no WT included.
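To make the wait-time argument concrete, here is a minimal sketch of a fan-out style capacity estimate in which the wait-time sources named above are simply added to the interaction time. The function, the simplified formula, and all numbers are illustrative assumptions, not the model actually fitted in the paper.

```python
# Minimal sketch of a fan-out style operator-capacity estimate (assumed
# simplification; the paper's model and parameters are more detailed).

def operator_capacity(neglect_time, interaction_time, wait_times=()):
    """Rough estimate of how many independent UAVs one operator can supervise.

    neglect_time:     average time (s) a vehicle can run unattended
    interaction_time: average time (s) the operator spends servicing a vehicle
    wait_times:       per-interaction delays (s), e.g. interaction wait time
                      (WTI), queuing delays, and loss-of-SA wait time
    """
    total_wait = sum(wait_times)
    # Classic fan-out ignores wait time; counting it shrinks the estimate.
    return 1 + neglect_time / (interaction_time + total_wait)

# Illustrative (made-up) values, in seconds.
baseline = operator_capacity(neglect_time=200, interaction_time=20)
with_waits = operator_capacity(neglect_time=200, interaction_time=20,
                               wait_times=(10, 15, 25))  # WTI, queue, SA loss
print(f"capacity ignoring wait times: {baseline:.1f}")
print(f"capacity including wait times: {with_waits:.1f}")
```

The printed values only show the direction of the effect reported above: once wait times, and especially SA-related delays, are counted, the predicted number of controllable vehicles drops sharply.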


AIAA 1st Intelligent Systems Technical Conference | 2004

Automation Bias in Intelligent Time Critical Decision Support Systems

Mary L. Cummings

Various levels of automation can be introduced by intelligent decision support systems, from fully automated, where the operator is completely left out of the decision process, to minimal levels of automation, where the automation only makes recommendations and the operator has the final say. For rigid tasks that require no flexibility in decision-making and have a low probability of system failure, higher levels of automation often provide the best solution. However, in time-critical environments with many external and changing constraints, such as air traffic control and military command and control operations, higher levels of automation are not advisable because of the risks, the complexity of the system, and the inability of the automated decision aid to be perfectly reliable. Human-in-the-loop designs, which employ automation for redundant, manual, and monotonous tasks while allowing operators active participation, provide not only safety benefits but also allow a human operator and a system to respond more flexibly to uncertain and unexpected events. However, there can be measurable costs to human performance when automation is used, such as loss of situational awareness, complacency, skill degradation, and automation bias. This paper discusses the influence of automation bias in intelligent decision support systems, particularly those in aviation domains. Automation bias occurs in decision-making because humans have a tendency to disregard or not search for contradictory information when a computer-generated solution is accepted as correct, a tendency that can be exacerbated in time-critical domains. Automated decision aids are designed to reduce human error but can actually introduce new errors in the operation of a system if they are not designed with human cognitive limitations in mind.


Human Factors | 2010

The Role of Human-Automation Consensus in Multiple Unmanned Vehicle Scheduling

Mary L. Cummings; Andrew S. Clare; Christin S. Hart

Objective: This study examined the impact of increasing automation replanning rates on operator performance and workload when supervising a decentralized network of heterogeneous unmanned vehicles. Background: Futuristic unmanned vehicle systems will invert the operator-to-vehicle ratio so that one operator can control multiple dissimilar vehicles connected through a decentralized network. Significant human-automation collaboration will be needed because of automation brittleness, but such collaboration could cause high workload. Method: Three increasing levels of replanning were tested on an existing multiple unmanned vehicle simulation environment that leverages decentralized algorithms for vehicle routing and task allocation in conjunction with human supervision. Results: Rapid replanning can cause high operator workload, ultimately resulting in poorer overall system performance. Poor performance was associated with a lack of operator consensus on when to accept the automation’s suggested prompts for new plan consideration, as well as negative attitudes toward unmanned aerial vehicles in general. Participants with video game experience tended to collaborate more with the automation, which resulted in better performance. Conclusion: In decentralized unmanned vehicle networks, operators who ignore the automation’s requests for new plan consideration and impose rapid replans both increase their own workload and reduce the ability of the vehicle network to operate at its maximum capacity. Application: These findings have implications for personnel selection and training for futuristic systems involving human collaboration with decentralized algorithms embedded in networks of autonomous systems.


Human-Robot Interaction | 2007

Developing performance metrics for the supervisory control of multiple robots

Jacob W. Crandall; Mary L. Cummings

Efforts are underway to make it possible for a single operator to effectively control multiple robots. In these high-workload situations, many questions arise, including how many robots should be in the team (fan-out), what level of autonomy the robots should have, and when this level of autonomy should change (i.e., dynamic autonomy). We propose that a set of metric classes should be identified that can adequately answer these questions. Toward this end, we present a potential set of metric classes for human-robot teams consisting of a single human operator and multiple robots. To test the usefulness and appropriateness of this set of metric classes, we conducted a user study with simulated robots. Using the data obtained from this study, we explore the ability of this set of metric classes to answer these questions.


Innovations in Intelligent Machines (1) | 2007

Predicting Operator Capacity for Supervisory Control of Multiple UAVs

Mary L. Cummings; Carl E. Nehme; Jacob W. Crandall; Paul J. Mitchell

With reduced radar signatures, increased endurance, and the removal of humans from immediate threat, uninhabited (also known as unmanned) aerial vehicles (UAVs) have become indispensable assets to militarized forces. UAVs require human guidance to varying degrees and often through several operators. However, with the current military focus on streamlining operations, increasing automation, and reducing manning, there has been an increasing effort to design systems such that the current many-to-one ratio of operators to vehicles can be inverted. An increasing body of literature has examined the effectiveness of a single operator controlling multiple uninhabited aerial vehicles. While numerous experimental studies have examined contextually how many UAVs a single operator could control, there is a distinct gap in developing predictive models for operator capacity. In this chapter, we discuss previous experimental research on multiple UAV control, as well as previous attempts to develop predictive models for operator capacity based on temporal measures. We extend this previous research by explicitly considering a cost-performance model that relates operator performance to mission costs and complexity. We conclude with a meta-analysis of the temporal methods outlined and provide recommendations for future applications.


Proceedings of the IEEE | 2012

The Impact of Human–Automation Collaboration in Decentralized Multiple Unmanned Vehicle Control

Mary L. Cummings; Jonathan P. How; Andrew K. Whitten; Olivier Toupet

For future systems that require one or a small team of operators to supervise a network of automated agents, automated planners are critical since they are faster than humans for path planning and resource allocation in multivariate, dynamic, time-pressured environments. However, such planners can be brittle and unable to respond to emergent events. Human operators can aid such systems by bringing their knowledge-based reasoning and experience to bear. Given a decentralized task planner and a goal-based operator interface for a network of unmanned vehicles in a search, track, and neutralize mission, we demonstrate with a human-on-the-loop experiment that humans guiding these decentralized planners improved system performance by up to 50%. However, those tasks that required precise and rapid calculations were not significantly improved with human aid. Thus, there is a shared space in such complex missions for human-automation collaboration.


IEEE Transactions on Robotics | 2007

Identifying Predictive Metrics for Supervisory Control of Multiple Robots

Jacob W. Crandall; Mary L. Cummings

In recent years, much research has focused on making possible single-operator control of multiple robots. In these high workload situations, many questions arise including how many robots should be in the team, which autonomy levels should they employ, and when should these autonomy levels change? To answer these questions, sets of metric classes should be identified that capture these aspects of the human-robot team. Such a set of metric classes should have three properties. First, it should contain the key performance parameters of the system. Second, it should identify the limitations of the agents in the system. Third, it should have predictive power. In this paper, we decompose a human-robot team consisting of a single human and multiple robots in an effort to identify such a set of metric classes. We assess the ability of this set of metric classes to: 1) predict the number of robots that should be in the team and 2) predict system effectiveness. We do so by comparing predictions with actual data from a user study, which is also described.


Interacting with Computers | 2013

Boredom and Distraction in Multiple Unmanned Vehicle Supervisory Control

Mary L. Cummings; C. Mastracchio; Kristopher M. Thornburg; Armen Mkrtchyan

This work was supported by Aurora Flight Sciences under the ONR Science of Autonomy program as well as the Office of Naval Research (ONR) under Code 34 and MURI [grant number N00014-08-C-070].


IEEE Intelligent Systems | 2007

Operator Performance and Intelligent Aiding in Unmanned Aerial Vehicle Scheduling

Mary L. Cummings; Amy S. Brzezinski; John D. Lee

Unmanned vehicles (UVs) are quickly becoming ubiquitous in almost every aspect of hostile-environment operations. A key challenge in designing futuristic one-controlling-many systems will be minimizing periods of excessive operator workload that can arise when critical tasks for several UVs occur simultaneously. To a certain degree, such periods can be predicted and mitigated in advance. However, actions that mitigate a particular period of high workload in the short term might create long-term episodes of high workload that were previously nonexistent. So, we need decision support that helps an operator evaluate alternative actions for managing a mission schedule in real time. To this end, we present an iterative design cycle that leverages intelligent, predictive aiding together with human judgment and pattern recognition to maximize both system and human performance in the supervision of four UAVs. Automated decision support tools that provide more local, as opposed to global, visual recommendations can produce better performance in multiple UAV scheduling.


Handbook of Automation | 2009

Collaborative Human–Automation Decision Making

Mary L. Cummings; Sylvain Bruni

A comprehensive collaborative human–computer decision-making model is needed that demonstrates not only what decision-making functions should or could be assigned to humans or computers, but also how these functions can best be served in a mutually supportive environment in which the human and the computer collaborate to arrive at a solution superior to that which either would have reached independently. To this end, we present the human–automation collaboration taxonomy (HACT), which builds on previous research by expanding the Parasuraman information processing model [26.1], specifically the decision-making component. Instead of defining a simple level of automation for decision making, we deconstruct the process into three distinct roles: the moderator, the generator, and the decider. We propose five levels of collaboration (LOCs) for each of these roles, which together form a three-tuple that can be analyzed to evaluate system collaboration and possibly identify areas for design intervention. A resource allocation mission planning case study is presented using this framework to illustrate the benefit for system designers.
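As a purely illustrative companion to the three-tuple idea above, the sketch below encodes the moderator, generator, and decider roles as a small data structure. The 1-to-5 ordinal scale and the example values are assumptions made here for illustration; the chapter defines its own five levels of collaboration for each role.

```python
# Illustrative encoding of a HACT-style three-tuple (the 1-5 scale and the
# example values are assumptions for this sketch, not the chapter's definitions).
from dataclasses import dataclass

@dataclass(frozen=True)
class HACTProfile:
    """Levels of collaboration (LOC) for the three decision-making roles.

    Each role is scored on an assumed ordinal scale of 1-5, where lower
    values mean the human dominates the role and higher values mean the
    automation dominates it.
    """
    moderator: int  # who steers the decision-making process
    generator: int  # who generates candidate solutions
    decider: int    # who selects the final solution

    def as_tuple(self):
        return (self.moderator, self.generator, self.decider)

# Example: automation generates candidate plans, while the human largely
# moderates the process and makes the final call.
mission_planning = HACTProfile(moderator=2, generator=4, decider=1)
print(mission_planning.as_tuple())  # (2, 4, 1)
```

A three-tuple like this can then be compared across interface designs to see where a system sits between human-only and automation-only decision making.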

Collaboration


Dive into Mary L. Cummings's collaborations.

Top Co-Authors

Carl E. Nehme
Massachusetts Institute of Technology

Andrew S. Clare
Massachusetts Institute of Technology

Jacob W. Crandall
Masdar Institute of Science and Technology

Sylvain Bruni
Massachusetts Institute of Technology

Jason C. Ryan
Massachusetts Institute of Technology

David Pitman
Massachusetts Institute of Technology

Luca F. Bertuccelli
Massachusetts Institute of Technology