Maayan Roth
Carnegie Mellon University
Publications
Featured research published by Maayan Roth.
Adaptive Agents and Multi-Agent Systems | 2004
Ranjit Nair; Milind Tambe; Maayan Roth; Makoto Yokoo
Distributed Partially Observable Markov Decision Problems (POMDPs) are emerging as a popular approach for modeling multiagent teamwork, in which a group of agents works together to jointly maximize a reward function. Since the problem of finding the optimal joint policy for a distributed POMDP has been shown to be NEXP-Complete if no assumptions are made about the domain conditions, several locally optimal approaches have emerged as viable solutions. However, the use of communicative actions as part of these locally optimal algorithms has been largely ignored or has been applied only under restrictive assumptions about the domain. In this paper, we show how communicative acts can be explicitly introduced in order to find locally optimal joint policies that allow agents to coordinate better through synchronization achieved via communication. Furthermore, the introduction of communication allows us to develop a novel compact policy representation that results in savings of both space and time, which are verified empirically. Finally, through the imposition of constraints on communication, such as requiring that agents never go more than K steps without communicating, even greater space and time savings can be obtained.
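The K-step communication constraint described in the abstract can be illustrated with a brief sketch (all names here are assumptions for illustration, not the authors' code): an agent executing a local policy must synchronize with its teammates at least once every K steps, which bounds how long unshared observation histories can grow.

```python
# Hypothetical sketch of a K-step communication constraint: the agent
# runs a local policy over its observation history, but synchronizes
# (communicates) at least once every k steps, at which point the
# unshared history collapses.

def execute_with_sync(local_policy, observations, k):
    """Run a local policy, forcing a synchronizing communication at
    least every k steps. Returns the actions taken and the timesteps
    at which communication occurred."""
    actions, sync_steps = [], []
    steps_since_sync = 0
    history = ()  # local observations accumulated since the last sync
    for t, obs in enumerate(observations):
        if steps_since_sync >= k:
            sync_steps.append(t)   # communicate: share history, resync
            history = ()           # histories collapse after a sync
            steps_since_sync = 0
        history = history + (obs,)
        actions.append(local_policy(history))
        steps_since_sync += 1
    return actions, sync_steps
```

Because the history can never grow longer than K entries, a policy represented over such histories needs far fewer branches, which is the intuition behind the space and time savings the abstract reports.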
Adaptive Agents and Multi-Agent Systems | 2007
Maayan Roth; Reid G. Simmons; Manuela M. Veloso
In many cooperative multiagent domains, there exist some states in which the agents can act independently and others in which they need to coordinate with their teammates. In this paper, we explore how factored representations of state can be used to generate factored policies that can, with minimal communication, be executed in a distributed fashion by a multiagent team. The factored policies indicate those portions of the state where no coordination is necessary, automatically alert the agents when they reach a state in which they do need to coordinate, and determine what the agents should communicate in order to achieve this coordination. We evaluate the success of our approach experimentally by comparing the amount of communication needed by a team executing a factored policy to that of a team that communicates at every timestep.
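The core idea above can be sketched in a few lines (a minimal illustration under assumed names, not the authors' implementation): a factored policy conditions each agent's action only on the state variables it depends on, and flags the particular variable assignments in which joint coordination is required.

```python
# Sketch of one execution step of a factored policy. `agent_policy`
# maps a tuple of relevant variable values to an action; `coord_states`
# is the set of such tuples in which the agents must coordinate, and
# `communicate` is called with exactly the values that need sharing.

def run_factored_step(agent_policy, relevant_vars, coord_states,
                      state, communicate):
    key = tuple(state[v] for v in relevant_vars)
    if key in coord_states:
        communicate(key)   # share only what is needed to coordinate
    return agent_policy[key]
```

Note how communication happens only when the relevant variables hit a coordination state, rather than at every timestep.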
Archive | 2006
Maayan Roth; Reid G. Simmons; Manuela M. Veloso
In recent years, multi-agent Partially Observable Markov Decision Processes (POMDPs) have emerged as a popular decision-theoretic framework for modeling and generating policies for the control of multi-agent teams. Teams controlled by multi-agent POMDPs can use communication to share observations and coordinate. Therefore, policies are needed to enable these teams to reason about communication. Previous work on generating communication policies for multi-agent POMDPs has focused on the question of when to communicate. In this paper, we address the question of what to communicate. We describe two paradigms for representing limitations on communication and present an algorithm that enables multi-agent teams to make execution-time decisions on how to effectively utilize available communication resources.
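The "what to communicate" decision can be sketched abstractly (a hedged illustration with assumed names; the value estimates would in practice come from the team's POMDP model): given a limited message budget, an agent ranks its candidate observations by the estimated improvement in expected team value from sharing each one, and sends only the most valuable.

```python
# Illustrative execution-time selection of what to communicate under a
# message budget. `value_of_sharing(obs)` is an assumed estimate of the
# expected gain in team value from sharing observation `obs`.

def choose_messages(candidates, value_of_sharing, budget):
    """Pick up to `budget` observations to communicate, greedily by
    estimated value, skipping any that would not help."""
    ranked = sorted(candidates, key=value_of_sharing, reverse=True)
    return [obs for obs in ranked[:budget] if value_of_sharing(obs) > 0]
```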
Intelligent Robots and Systems | 2003
Maayan Roth; Douglas L. Vail; Manuela M. Veloso
In this paper, we present in detail our approach to constructing a world model in a multi-robot team. We introduce two separate world models, namely an individual world model that stores one robot's state, and a shared world model that stores the state of the team. We present procedures to effectively merge information in these two world models in real-time. We overcome the problem of high communication latency by using shared information on an as-needed basis. The success of our world model approach is validated by experimentation in the robot soccer domain. The results show that a team using a world model that incorporates shared information is more successful at tracking a dynamic object in its environment than a team that does not use shared information.
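The as-needed merging of the two world models can be sketched as follows (an assumption-laden illustration: it supposes each model stores an (estimate, confidence) pair per tracked object, which is one common representation, not necessarily the authors' exact one).

```python
# Sketch of merging an individual and a shared world model: a
# teammate's estimate is consulted only when the robot's own
# confidence in its local estimate is low.

def merged_estimate(individual, shared, obj, confidence_threshold=0.5):
    est, conf = individual[obj]
    if conf >= confidence_threshold:
        return est                  # trust the robot's own observation
    if obj in shared:
        s_est, s_conf = shared[obj]
        if s_conf > conf:
            return s_est            # fall back on the shared model
    return est
```

Consulting shared information only when local confidence is low is what keeps the approach robust to high communication latency: most of the time the robot acts on its own fresh observations.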
Archive | 2005
Maayan Roth; Reid G. Simmons; Manuela M. Veloso
Although the presence of free communication reduces the complexity of multi-agent POMDPs to that of single-agent POMDPs, in practice, communication is not free and reducing the amount of communication is often desirable. We present a novel approach for using centralized "single-agent" policies in decentralized multi-agent systems by maintaining and reasoning over the possible joint beliefs of the team. We describe how communication is used to integrate local observations into the team belief as needed to improve performance. We show both experimentally and through a detailed example how our approach reduces communication while improving the performance of distributed execution.
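A minimal sketch of the joint-belief idea (assumed names throughout, not the authors' code): each agent tracks the set of joint beliefs the team might hold given its teammates' unshared observations, acts on that set via the centralized policy, and communicates its local observation only when sharing would change the selected action.

```python
# One decision step over possible joint beliefs. `possible_beliefs` are
# the beliefs consistent with what teammates could know; `local_belief`
# additionally incorporates this agent's own observations.

def step(possible_beliefs, local_belief, centralized_policy, broadcast):
    team_action = centralized_policy(possible_beliefs)
    my_action = centralized_policy([local_belief])
    if my_action != team_action:
        broadcast(local_belief)      # communicate: beliefs resynchronize
        return my_action, [local_belief]
    return team_action, possible_beliefs
```

Communication thus occurs only when an agent's private observations would actually alter the team's behavior, which is how the approach trades a small amount of messaging for near-centralized performance.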
Adaptive Agents and Multi-Agent Systems | 2005
Maayan Roth; Reid G. Simmons; Manuela M. Veloso
Intelligent Robots and Systems | 2002
Maayan Roth; Douglas L. Vail; Manuela M. Veloso
Archive | 2007
Reid G. Simmons; Manuela M. Veloso; Maayan Roth
Archive | 2002
Manuela M. Veloso; Scott Lenser; Douglas L. Vail; Maayan Roth; Ashley W. Stroupe; Sonia Chernova
IEEE-RAS International Conference on Humanoid Robots | 2006
Manuela M. Veloso; Nicholas Armstrong-Crews; Sonia Chernova; Colin McMillen; Maayan Roth; Douglas L. Vail