Publication


Featured research published by Jacob W. Crandall.


Systems, Man, and Cybernetics | 2001

Experiments in adjustable autonomy

Jacob W. Crandall; Michael A. Goodrich

Human-robot interaction is becoming an increasingly important research area. In this paper, we present our work on designing a human-robot system with adjustable autonomy and describe not only the prototype interface but also the corresponding robot behaviors. In our approach, we grant the human meta-level control over the level of robot autonomy, but we allow the robot a varying amount of self-direction within each level. Within this framework of adjustable autonomy, we explore how existing robot control approaches can be adapted and extended to be compatible with adjustable autonomy.
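
The division of control described here lends itself to a small sketch. A minimal sketch, assuming a simple four-level scale and placeholder commands (none of these names come from the paper): the human exercises meta-level control by selecting the level, while the robot keeps level-dependent self-direction.

```python
from enum import Enum

class AutonomyLevel(Enum):
    """Illustrative autonomy levels; the paper's actual modes may differ."""
    TELEOPERATION = 0   # human drives directly
    SAFEGUARDED = 1     # human drives, robot vetoes unsafe commands
    SHARED = 2          # robot acts on its own plan when the human is idle
    AUTONOMOUS = 3      # robot plans and acts without input

class Robot:
    def __init__(self):
        self.level = AutonomyLevel.TELEOPERATION

    def set_level(self, level: AutonomyLevel) -> None:
        # Meta-level control: only the human operator calls this.
        self.level = level

    def step(self, operator_cmd, own_plan, is_safe: bool):
        # Within each level the robot retains some self-direction,
        # e.g., refusing unsafe commands once safeguarding is active.
        if self.level is AutonomyLevel.TELEOPERATION:
            return operator_cmd
        if self.level is AutonomyLevel.SAFEGUARDED:
            return operator_cmd if is_safe else "stop"
        if self.level is AutonomyLevel.SHARED:
            return operator_cmd if operator_cmd is not None else own_plan
        return own_plan
```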


Systems, Man, and Cybernetics | 2005

Validating human-robot interaction schemes in multitasking environments

Jacob W. Crandall; Michael A. Goodrich; Dan R. Olsen; Curtis W. Nielsen

The ability of robots to autonomously perform tasks is increasing. More autonomy in robots means that the human managing the robot may have free time; it is desirable to use this time productively, and a current trend is to use it to manage multiple robots. We present the notion of neglect tolerance as a means for determining how robot autonomy and interface design affect how free time can be used to support multitasking, in general, and multirobot teams, in particular. We use neglect tolerance to 1) identify the maximum number of robots that can be managed; 2) identify feasible configurations of multirobot teams; and 3) predict performance of multirobot teams under certain independence assumptions. We present a measurement methodology, based on a secondary-task paradigm, for obtaining neglect tolerance values that allow a human to balance workload with robot performance.
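
As a worked illustration of how neglect tolerance can bound team size, the estimate below assumes round-robin servicing and steady neglect and interaction times; the function and numbers are illustrative, whereas the paper obtains these quantities empirically through its secondary-task methodology.

```python
def fan_out(neglect_time: float, interaction_time: float) -> float:
    """Rough upper bound on how many robots one operator can manage.

    neglect_time:     how long a robot's performance stays acceptable
                      while unattended (its neglect tolerance), in seconds
    interaction_time: how long the operator must service one robot to
                      restore its performance, in seconds

    While one robot is serviced for IT seconds, every other robot can
    coast for up to NT seconds, so about NT / IT robots can wait their
    turn, plus the one currently being serviced.
    """
    return neglect_time / interaction_time + 1

# Robots that tolerate 60 s of neglect and need 15 s of attention:
print(fan_out(60, 15))   # -> 5.0
```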


Intelligent Robots and Systems | 2002

Characterizing efficiency of human robot interaction: a case study of shared-control teleoperation

Jacob W. Crandall; Michael A. Goodrich

Human-robot interaction is becoming an increasingly important research area. In this paper, we present a theoretical characterization of interaction efficiency with an aim towards designing a human-robot system with adjustable robot autonomy. In our approach, we analyze how modifying robot control schemes for a given autonomy mode can increase system performance and decrease workload demands on the human operator. We then perform a case study of the design of a shared-control teleoperation scheme and compare interaction efficiency against a traditional manual-control teleoperation scheme.
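
A minimal sketch of one way interaction efficiency might be quantified, assuming it is read as performance recovered per second of operator attention (the paper's formal characterization may differ):

```python
def interaction_efficiency(perf_after: float, perf_before: float,
                           interaction_time: float) -> float:
    """Performance recovered per second of operator attention.
    All names and numbers here are illustrative placeholders."""
    return (perf_after - perf_before) / interaction_time

# A shared-control scheme that restores the same performance in less
# interaction time is the more efficient one:
manual = interaction_efficiency(0.9, 0.4, 20.0)   # 0.025 per second
shared = interaction_efficiency(0.9, 0.4, 8.0)    # 0.0625 per second
```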


Human-Robot Interaction | 2007

Managing autonomy in robot teams: observations from four experiments

Michael A. Goodrich; Timothy W. McLain; Jeffrey D. Anderson; Ji-Sang Sun; Jacob W. Crandall

It is often desirable for a human to manage multiple robots. Autonomy is required to keep workload within tolerable ranges, and dynamically adapting the type of autonomy may be useful for responding to environment and workload changes. We identify two management styles for managing multiple robots and present results from four experiments that have relevance to dynamic autonomy within these two management styles. These experiments, which involved 80 subjects, suggest that individual and team autonomy benefit from attention management aids, adaptive autonomy, and proper information abstraction.


Human-Robot Interaction | 2007

Developing performance metrics for the supervisory control of multiple robots

Jacob W. Crandall; Mary L. Cummings

Efforts are underway to make it possible for a single operator to effectively control multiple robots. In these high-workload situations, many questions arise, including how many robots should be in the team (Fan-out), what level of autonomy the robots should have, and when this level of autonomy should change (i.e., dynamic autonomy). We propose that a set of metric classes should be identified that can adequately answer these questions. Toward this end, we present a potential set of metric classes for human-robot teams consisting of a single human operator and multiple robots. To test the usefulness and appropriateness of this set of metric classes, we conducted a user study with simulated robots. Using the data obtained from this study, we explore the ability of this set of metric classes to answer these questions.
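
A sketch of how such a set of metric classes might be organized in code, with hypothetical class names (the paper's actual taxonomy may differ):

```python
from dataclasses import dataclass

@dataclass
class MetricClasses:
    """Hypothetical metric classes for a one-operator, multi-robot
    team; the names are illustrative, not the paper's taxonomy."""
    interaction_efficiency: float  # benefit per second of operator attention
    neglect_efficiency: float      # performance a robot sustains unattended
    attention_allocation: float    # quality of choosing which robot to
                                   # service next

# Profiles like these could be compared across candidate team sizes and
# autonomy levels to address the Fan-out and dynamic-autonomy questions.
team_a = MetricClasses(0.8, 0.6, 0.7)
team_b = MetricClasses(0.5, 0.9, 0.7)
```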


Innovations in Intelligent Machines (1) | 2007

Predicting Operator Capacity for Supervisory Control of Multiple UAVs

Mary L. Cummings; Carl E. Nehme; Jacob W. Crandall; Paul J. Mitchell

With reduced radar signatures, increased endurance, and the removal of humans from immediate threat, uninhabited (also known as unmanned) aerial vehicles (UAVs) have become indispensable assets to militarized forces. UAVs require human guidance to varying degrees and often through several operators. However, with current military focus on streamlining operations, increasing automation, and reducing manning, there has been an increasing effort to design systems such that the current many-to-one ratio of operators to vehicles can be inverted. An increasing body of literature has examined the effectiveness of a single operator controlling multiple uninhabited aerial vehicles. While there have been numerous experimental studies that have examined contextually how many UAVs a single operator could control, there is a distinct gap in developing predictive models for operator capacity. In this chapter, we discuss previous experimental research for multiple UAV control, as well as previous attempts to develop predictive models for operator capacity based on temporal measures. We extend this previous research by explicitly considering a cost-performance model that relates operator performance to mission costs and complexity. We conclude with a meta-analysis of the temporal methods outlined and provide recommendations for future applications.
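
One family of temporal models in this literature extends the basic fan-out estimate by charging operator-induced wait times against each vehicle's service slot. A hedged sketch of that idea (a simplification of the models the chapter surveys, not its cost-performance model):

```python
def capacity_with_wait(neglect_time: float, interaction_time: float,
                       wait_time: float) -> float:
    """Temporal capacity estimate that penalizes wait times.

    Wait time (a vehicle degrading while queued for operator attention)
    effectively lengthens each service slot, shrinking how many UAVs
    fit in one operator's attention budget.
    """
    return (neglect_time + interaction_time) / (interaction_time + wait_time)

print(capacity_with_wait(60, 15, 0))    # -> 5.0, the wait-free fan-out
print(capacity_with_wait(60, 15, 10))   # -> 3.0, waiting cuts capacity
```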


IEEE Transactions on Robotics | 2007

Identifying Predictive Metrics for Supervisory Control of Multiple Robots

Jacob W. Crandall; Mary L. Cummings

In recent years, much research has focused on making single-operator control of multiple robots possible. In these high-workload situations, many questions arise, including how many robots should be in the team, which autonomy levels they should employ, and when these autonomy levels should change. To answer these questions, sets of metric classes should be identified that capture these aspects of the human-robot team. Such a set of metric classes should have three properties. First, it should contain the key performance parameters of the system. Second, it should identify the limitations of the agents in the system. Third, it should have predictive power. In this paper, we decompose a human-robot team consisting of a single human and multiple robots in an effort to identify such a set of metric classes. We assess the ability of this set of metric classes to: 1) predict the number of robots that should be in the team and 2) predict system effectiveness. We do so by comparing predictions with actual data from a user study, which is also described.
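
A toy version of both predictions, assuming round-robin servicing and constant attended/neglected performance levels; the closed form and the parameters are ours, whereas the paper builds its predictions from measured metric classes:

```python
def team_effectiveness(n, perf_attended, perf_neglected,
                       interaction_time, neglect_time):
    """Predicted total team output for n robots under round-robin
    servicing. Illustrative closed form only."""
    cycle = n * interaction_time              # one full service rotation
    unattended = cycle - interaction_time     # time each robot waits
    productive = min(unattended, neglect_time)
    per_robot = (interaction_time * perf_attended
                 + productive * perf_neglected) / cycle
    return n * per_robot

# Effectiveness plateaus once waiting exceeds neglect tolerance:
best_n = max(range(1, 13),
             key=lambda n: team_effectiveness(n, 1.0, 0.6, 15.0, 60.0))
print(best_n)   # -> 5 with these (made-up) parameters
```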


International Conference on Machine Learning | 2005

Learning to compete, compromise, and cooperate in repeated general-sum games

Jacob W. Crandall; Michael A. Goodrich

Learning algorithms often obtain relatively low average payoffs in repeated general-sum games with other learning agents due to a focus on myopic best-response and one-shot Nash equilibrium (NE) strategies. A less myopic approach focuses on NEs of the repeated game, which suggests that (at the least) a learning agent should possess two properties. First, an agent should never learn to play a strategy that produces average payoffs less than the minimax value of the game. Second, an agent should learn to cooperate/compromise when beneficial. No learning algorithm from the literature is known to possess both of these properties. We present a reinforcement learning algorithm (M-Qubed) that provably satisfies the first property and empirically displays (in self play) the second property in a wide range of games.
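
M-Qubed's construction is given in the paper; the sketch below only illustrates the two required properties with a generic learner of our own design: optimistic value estimates that encourage cooperative exploration, plus a loss trigger that retreats to the (pure-strategy) maximin action so that average payoff cannot sink far below the security level.

```python
import numpy as np

class SafeguardedLearner:
    """A generic learner illustrating the abstract's two properties;
    this is NOT the M-Qubed algorithm itself."""

    def __init__(self, payoffs, alpha=0.1, slack=20.0):
        self.payoffs = np.asarray(payoffs, dtype=float)
        # Optimistic initialization nudges the learner toward
        # exploring potentially cooperative actions.
        self.q = np.full(self.payoffs.shape[0], self.payoffs.max())
        self.alpha = alpha
        self.slack = slack                     # tolerated cumulative loss
        row_worst = self.payoffs.min(axis=1)   # worst case per own action
        self.maximin_action = int(row_worst.argmax())
        self.maximin_value = row_worst.max()   # pure-strategy security level
        self.loss = 0.0

    def act(self) -> int:
        # Safeguard (property 1): if cumulative losses relative to the
        # security level grow too large, retreat to the maximin action.
        if self.loss > self.slack:
            return self.maximin_action
        return int(self.q.argmax())            # otherwise act greedily

    def update(self, action: int, reward: float) -> None:
        self.q[action] += self.alpha * (reward - self.q[action])
        self.loss += self.maximin_value - reward
```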


Systems, Man, and Cybernetics | 2003

Towards predicting robot team performance

Jacob W. Crandall; Curtis W. Nielsen; Michael A. Goodrich

In this paper, we develop a method for predicting the performance of human-robot teams consisting of a single user and multiple robots. To predict the performance of a team, we first measure the neglect tolerance and interface efficiency of the interaction schemes employed by the team. We then describe a method that shows how these measurements can be used to estimate the team's performance. We validate the performance prediction algorithm by comparing predictions to actual results when a user guides three robots in an exploration and goal-finding mission; comparisons are made for various system configurations.
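
A sketch of how measured neglect-tolerance and interface-efficiency curves could feed such a prediction, assuming round-robin servicing (the paper's estimation method and its validation are more involved):

```python
import numpy as np

def predict_performance(interaction_curve, neglect_curve, n_robots):
    """Average per-robot performance over one round-robin cycle.

    interaction_curve: per-second performance while being serviced
                       (a measured interface-efficiency curve)
    neglect_curve:     per-second performance while unattended
                       (a measured neglect-tolerance curve)
    """
    interaction_curve = np.asarray(interaction_curve, dtype=float)
    neglect_curve = np.asarray(neglect_curve, dtype=float)
    service = len(interaction_curve)           # seconds spent servicing
    idle = (n_robots - 1) * service            # seconds spent unattended
    if idle <= len(neglect_curve):
        neglected = neglect_curve[:idle]
    else:                                      # extrapolate flat past data
        pad = np.full(idle - len(neglect_curve), neglect_curve[-1])
        neglected = np.concatenate([neglect_curve, pad])
    cycle = np.concatenate([interaction_curve, neglected])
    return cycle.mean()
```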


Machine Learning | 2011

Learning to compete, coordinate, and cooperate in repeated games using reinforcement learning

Jacob W. Crandall; Michael A. Goodrich

We consider the problem of learning in repeated general-sum matrix games when a learning algorithm can observe the actions but not the payoffs of its associates. Due to the non-stationarity of the environment caused by learning associates in these games, most state-of-the-art algorithms perform poorly in some important repeated games due to an inability to make profitable compromises. To make these compromises, an agent must effectively balance competing objectives, including bounding losses, playing optimally with respect to current beliefs, and taking calculated, but profitable, risks. In this paper, we present, discuss, and analyze M-Qubed, a reinforcement learning algorithm designed to overcome these deficiencies by encoding and balancing best-response, cautious, and optimistic learning biases. We show that M-Qubed learns to make profitable compromises across a wide range of repeated matrix games played with many kinds of learners. Specifically, we prove that M-Qubed’s average payoffs meet or exceed its maximin value in the limit. Additionally, we show that, in two-player games, M-Qubed’s average payoffs approach the value of the Nash bargaining solution in self play. Furthermore, it performs very well when associating with other learners, as evidenced by its robust behavior in round-robin and evolutionary tournaments of two-player games. These results demonstrate that an agent can learn to make good compromises, and hence receive high payoffs, in repeated games by effectively encoding and balancing best-response, cautious, and optimistic learning biases.
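
The maximin value referenced in the guarantee is a concrete quantity: for a matrix game it can be computed with a standard linear program. A sketch of that computation (our illustration, not code from the paper):

```python
import numpy as np
from scipy.optimize import linprog

def maximin_value(A):
    """Mixed-strategy maximin (security level) for the row player of
    payoff matrix A, via the standard LP:
        maximize v  subject to  (A^T x)_j >= v for all j,
                                sum(x) = 1,  x >= 0.
    """
    n_rows, n_cols = A.shape
    c = np.zeros(n_rows + 1)
    c[-1] = -1.0                    # linprog minimizes, so minimize -v
    A_ub = np.hstack([-A.T, np.ones((n_cols, 1))])  # v - (A^T x)_j <= 0
    b_ub = np.zeros(n_cols)
    A_eq = np.append(np.ones(n_rows), 0.0).reshape(1, -1)  # sum(x) = 1
    b_eq = [1.0]
    bounds = [(0, None)] * n_rows + [(None, None)]  # x >= 0, v free
    res = linprog(c, A_ub, b_ub, A_eq, b_eq, bounds)
    return res.x[-1], res.x[:-1]

# Row player's payoffs in a prisoner's dilemma (rows: cooperate, defect).
pd = np.array([[3.0, 0.0],
               [5.0, 1.0]])
value, strategy = maximin_value(pd)
print(value, strategy)   # -> 1.0, always defect
```

In this game the security level is the mutual-defection payoff of 1, while mutual cooperation pays 3; the self-play compromise the abstract describes (the Nash bargaining solution) is therefore strictly better than the maximin floor the algorithm also guarantees.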

Collaboration


Dive into Jacob W. Crandall's collaborations.

Top Co-Authors

Carl E. Nehme
Massachusetts Institute of Technology

Vimitha Manohar
Masdar Institute of Science and Technology

Iyad Rahwan
Massachusetts Institute of Technology

Wen Shen
University of California

Zaid Almahmoud
University of Science and Technology

Malek H. Altakrori
Masdar Institute of Science and Technology