Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Ranjit Nair is active.

Publication


Featured research published by Ranjit Nair.


Adaptive Agents and Multi-Agent Systems | 2003

Role allocation and reallocation in multiagent teams: towards a practical analysis

Ranjit Nair; Milind Tambe; Stacy Marsella

Despite the success of the BDI approach to agent teamwork, initial role allocation (i.e., deciding which agents to allocate to key roles in the team) and role reallocation upon failure remain open challenges. Still missing are analysis techniques to aid human developers in quantitatively comparing different initial role allocations and competing role-reallocation algorithms. To remedy this problem, this paper makes three key contributions. First, it introduces RMTDP (Role-based Multiagent Team Decision Problem), an extension of MTDP [9], for quantitative evaluation of role allocation and reallocation approaches. Second, it illustrates an RMTDP-based methodology not only for comparing two competing algorithms for role reallocation, but also for identifying the types of domains where each algorithm is suboptimal, how much each algorithm can be improved, and at what computational cost (complexity). Such algorithmic improvements are identified via a new automated procedure that generates a family of locally optimal policies for comparative evaluation. Third, since there are combinatorially many initial role allocations, evaluating each in RMTDP to identify the best is extremely difficult. We therefore introduce methods that exploit task decompositions among subteams to significantly prune the search space of initial role allocations. We present experimental results from two distinct domains.
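
To make the pruning idea concrete, here is a minimal Python sketch, not the paper's implementation, of how a task decomposition among subteams can cut down a combinatorial space of initial role allocations: candidate allocations are enumerated per subteam, filtered locally, and only compatible partial allocations are combined. All names here (the subteams, locally_feasible, the agent and role labels) are illustrative stand-ins for a real RMTDP evaluation.

```python
# Illustrative sketch (not the paper's code): pruning the search over
# initial role allocations by exploiting a task decomposition into
# subteams, as described in the RMTDP abstract above.
from itertools import permutations, product

# Hypothetical subteam structure: each subteam owns a disjoint set of roles.
subteams = {
    "transport": ["pilot", "navigator"],
    "scouting":  ["scout"],
}
agents = ["a1", "a2", "a3"]

def subteam_allocations(roles, agents):
    """All injective assignments of this subteam's roles to agents."""
    return [dict(zip(roles, combo)) for combo in permutations(agents, len(roles))]

def locally_feasible(alloc):
    """Placeholder per-subteam check; a real RMTDP evaluation would
    score the allocation's expected team reward instead."""
    return True

# Enumerate and prune each subteam independently, so infeasible partial
# allocations are discarded before any global combination is formed.
partials = {
    name: [a for a in subteam_allocations(roles, agents) if locally_feasible(a)]
    for name, roles in subteams.items()
}

def combined_allocations(partials):
    """Merge surviving per-subteam allocations, skipping combinations
    that assign one agent to two roles."""
    for combo in product(*partials.values()):
        used = [agent for alloc in combo for agent in alloc.values()]
        if len(used) == len(set(used)):
            merged = {}
            for alloc in combo:
                merged.update(alloc)
            yield merged

for allocation in combined_allocations(partials):
    print(allocation)
```

The point of the decomposition is that the local filter runs on each subteam's small allocation space before the cross-product is taken, rather than on the full space of global allocations.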


Adaptive Agents and Multi-Agent Systems | 2004

Communications for improving policy computation in distributed POMDPs

Ranjit Nair; Milind Tambe; Maayan Roth; Makoto Yokoo

Distributed Partially Observable Markov Decision Problems (Distributed POMDPs) are emerging as a popular approach for modeling multiagent teamwork, where a group of agents works together to jointly maximize a reward function. Since finding the optimal joint policy for a distributed POMDP has been shown to be NEXP-complete if no assumptions are made about the domain conditions, several locally optimal approaches have emerged as viable alternatives. However, the use of communicative actions within these locally optimal algorithms has been largely ignored or applied only under restrictive assumptions about the domain. In this paper, we show how communicative acts can be explicitly introduced in order to find locally optimal joint policies that allow agents to coordinate better through the synchronization achieved via communication. Furthermore, the introduction of communication allows us to develop a novel compact policy representation that yields savings in both space and time, which we verify empirically. Finally, by imposing constraints on communication, such as requiring that agents never go more than K steps without communicating, even greater space and time savings can be obtained.
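
The space savings from such a communication constraint can be illustrated with a small sketch, assuming a toy domain; this is not the paper's algorithm, and the observation and action names are invented. If agents must synchronize at least once every K steps, a local policy only needs entries for observation histories since the last synchronization, whose length is bounded by K, rather than for histories over the full horizon.

```python
# Sketch: with a rule that agents must communicate (synchronize) at
# least once every K steps, a local policy is indexed only by the
# observation history since the last synchronization, which is at most
# K long.  That bound is where the claimed space savings come from.
from itertools import product

OBSERVATIONS = ["hear_noise", "silence"]                  # hypothetical domain
ACTIONS = ["listen", "open_left", "open_right", "communicate"]
K = 2                                                     # must sync within K steps

def histories(max_len):
    """All observation histories of length 0..max_len."""
    for n in range(max_len + 1):
        yield from product(OBSERVATIONS, repeat=n)

def make_policy():
    """A trivially filled-in policy table; a real solver would choose
    actions to maximize expected joint reward."""
    policy = {}
    for h in histories(K):
        # Force communication once the history reaches length K.
        policy[h] = "communicate" if len(h) == K else "listen"
    return policy

policy = make_policy()
print(f"{len(policy)} policy entries, independent of the full horizon length")
```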


Robot Soccer World Cup | 2002

Task Allocation in the RoboCup Rescue Simulation Domain: A Short Note

Ranjit Nair; Takayuki Ito; Milind Tambe; Stacy Marsella

We consider the problem of disaster mitigation in the RoboCup Rescue Simulation Environment [3] to be a task allocation problem where tasks arrive dynamically and can change in intensity. These tasks can be performed by ambulance teams, fire brigades, and police forces with the help of an ambulance center, a fire station, and a police office. However, the agents are not automatically notified of tasks as they arrive; it is therefore necessary for the agents to explore the simulated world to discover new tasks and to notify other agents of them.
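
A minimal sketch of this explore-and-notify loop, with invented task names and a made-up discovery model rather than the RoboCup Rescue simulator's API, might look as follows: each cycle, every agent explores, records any task it stumbles upon, and broadcasts the discovery to its teammates.

```python
# Toy sketch of the explore/discover/notify cycle described above.
# All names and probabilities are illustrative, not the simulator's API.
import random

random.seed(1)  # reproducible toy run

class Agent:
    def __init__(self, name):
        self.name = name
        self.known_tasks = set()

    def explore(self, world_tasks):
        """Visiting a location may reveal one of the world's tasks."""
        if world_tasks and random.random() < 0.5:
            return {random.choice(sorted(world_tasks))}
        return set()

    def notify(self, teammates, discovered):
        """Broadcast newly discovered tasks to the rest of the team."""
        for mate in teammates:
            mate.known_tasks |= discovered

def simulate(steps=6):
    world_tasks = {"fire@12", "victim@7", "blockade@3"}
    team = [Agent("ambulance"), Agent("fire_brigade"), Agent("police")]
    for t in range(steps):
        if t == 3:
            world_tasks.add("fire@20")      # tasks arrive dynamically
        for agent in team:
            found = agent.explore(world_tasks)
            agent.known_tasks |= found
            agent.notify([m for m in team if m is not agent], found)
    for agent in team:
        print(agent.name, sorted(agent.known_tasks))

simulate()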


Adaptive Agents and Multi-Agent Systems | 2005

Conflicts in teamwork: hybrids to the rescue

Milind Tambe; Emma Bowring; Hyuckchul Jung; Gal A. Kaminka; Rajiv T. Maheswaran; Janusz Marecki; Pragnesh Jay Modi; Ranjit Nair; Stephen Okamoto; Jonathan P. Pearce; Praveen Paruchuri; David V. Pynadath; Paul Scerri; Nathan Schurr; Pradeep Varakantham

Today within the AAMAS community, we see at least four competing approaches to building multiagent systems: belief-desire-intention (BDI), distributed constraint optimization (DCOP), distributed POMDPs, and auctions or game-theoretic approaches. While there is exciting progress within each approach, there is a lack of cross-cutting research. This paper highlights hybrid approaches for multiagent teamwork. In particular, for the past decade, the TEAMCORE research group has focused on building agent teams in complex, dynamic domains. While our early work was inspired by BDI, we will present an overview of recent research that uses DCOPs and distributed POMDPs in building agent teams. While DCOP and distributed POMDP algorithms provide promising results, hybrid approaches help us address problems of scalability and expressiveness. For example, in the BDI-POMDP hybrid approach, BDI team plans are exploited to improve POMDP tractability, and POMDPs improve BDI team plan performance. We present some recent results from applying this approach in a Disaster Rescue simulation domain being developed with help from the Los Angeles Fire Department.


Robot Soccer World Cup | 2002

Team Formation for Reformation in Multiagent Domains Like RoboCupRescue

Ranjit Nair; Milind Tambe; Stacy Marsella

Team formation, i.e., allocating agents to roles within a team or subteams of a team, and the reorganization of a team upon team member failure or the arrival of new tasks are critical aspects of teamwork. They are very important issues in RoboCupRescue, where many tasks need to be done jointly. While empirical comparisons (e.g., in a competition setting as in RoboCup) are useful, we need a quantitative analysis beyond the competition to understand the strengths and limitations of different approaches, and their tradeoffs as we scale up the domain or change domain properties. To this end, we need to provide complexity-optimality tradeoffs, which have been lacking not only in RoboCup but in the multiagent field in general.


Adaptive Agents and Multi-Agent Systems | 2006

Winning back the CUP for distributed POMDPs: planning over continuous belief spaces

Pradeep Varakantham; Ranjit Nair; Milind Tambe; Makoto Yokoo

Distributed Partially Observable Markov Decision Problems (Distributed POMDPs) are evolving as a popular approach for modeling multiagent systems, and many different algorithms have been proposed to obtain locally or globally optimal policies. Unfortunately, most of these algorithms have either been explicitly designed or experimentally evaluated assuming knowledge of a starting belief point, an assumption that often does not hold in complex, uncertain domains. Instead, in such domains, it is important for agents to explicitly plan over continuous belief spaces. This paper provides a novel algorithm to explicitly compute finite horizon policies over continuous belief spaces, without restricting the space of policies. By marrying an efficient single-agent POMDP solver with a heuristic distributed POMDP policy-generation algorithm, locally optimal joint policies are obtained, each of which dominates within a different part of the belief region. We provide heuristics that significantly improve the efficiency of the resulting algorithm and provide detailed experimental results. To the best of our knowledge, these are the first run-time results for analytically generating policies over continuous belief spaces in distributed POMDPs.
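
The notion of policies dominating in different parts of the belief space can be pictured with a toy two-state example (all values invented; this is not the paper's algorithm): each candidate policy's expected value is linear in the belief, so sweeping the belief simplex reveals the interval on which each policy dominates.

```python
# Toy illustration of "each policy dominates within a different part of
# the belief region": with two hidden states, a belief is a single
# number b in [0, 1], and each policy's value is linear in b.
# value(policy, b) = b * v[s0] + (1 - b) * v[s1]

candidate_policies = {            # hypothetical per-state values
    "always_listen": (2.0, 2.0),
    "act_if_left":   (5.0, -1.0),
    "act_if_right":  (-1.0, 5.0),
}

def value(v, b):
    v0, v1 = v
    return b * v0 + (1.0 - b) * v1

# Sweep the belief simplex and report which policy dominates where.
for i in range(11):
    b = i / 10
    best = max(candidate_policies, key=lambda p: value(candidate_policies[p], b))
    print(f"belief={b:.1f}  best={best}  "
          f"value={value(candidate_policies[best], b):.2f}")
```

Planning over the continuous belief space amounts to computing this dominance structure analytically rather than at a single assumed starting belief.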


Programming Multi-Agent Systems | 2004

Coordinating teams in uncertain environments: a hybrid BDI-POMDP approach

Ranjit Nair; Milind Tambe

Distributed partially observable Markov decision problems (POMDPs) have emerged as a popular decision-theoretic approach for planning for multiagent teams, where it is imperative for the agents to be able to reason about the rewards (and costs) of their actions in the presence of uncertainty. However, finding the optimal distributed POMDP policy is computationally intractable (NEXP-complete). This paper focuses on a principled way to combine the two dominant paradigms for building multiagent team plans, namely the “belief-desire-intention” (BDI) approach and distributed POMDPs. In this hybrid BDI-POMDP approach, BDI team plans are exploited to improve distributed POMDP tractability, and distributed POMDP-based analysis improves BDI team plan performance. Concretely, we focus on role allocation, a fundamental problem in BDI teams: which agents to allocate to the different roles in the team. The hybrid BDI-POMDP approach provides three key contributions. First, unlike prior work in multiagent role allocation, we describe a role allocation technique that takes into account future uncertainties in the domain. The second contribution is a novel decomposition technique, which exploits the structure in the BDI team plans to significantly prune the search space of combinatorially many role allocations. Our third key contribution is a significantly faster policy evaluation algorithm suited for our BDI-POMDP hybrid approach. Finally, we also present experimental results from two domains: mission rehearsal simulation and RoboCupRescue disaster rescue simulation. In the RoboCupRescue domain, we show that the role allocation technique presented in this paper is capable of performing at human expert levels, as demonstrated by comparing with the allocations chosen by humans in the actual RoboCupRescue simulation environment.
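
The paper's pruning exploits BDI plan structure; as a rough stand-in for the general idea, the sketch below prunes a role-allocation search with optimistic per-role upper bounds (a generic branch-and-bound, not the paper's decomposition technique). The roles, agents, and scores are all invented; in the hybrid approach, scores would come from POMDP-based policy evaluation.

```python
# Generic branch-and-bound sketch over role allocations.  An optimistic
# upper bound on the unfilled roles lets us discard partial allocations
# that cannot beat the best complete allocation found so far.
from itertools import permutations

ROLES = ["scout", "transport", "escort"]
AGENTS = ["a1", "a2", "a3"]

# Hypothetical per-(role, agent) scores; the paper would obtain these
# from evaluating the allocation's policy in the underlying model.
SCORE = {("scout", "a1"): 4, ("scout", "a2"): 1, ("scout", "a3"): 2,
         ("transport", "a1"): 2, ("transport", "a2"): 5, ("transport", "a3"): 1,
         ("escort", "a1"): 1, ("escort", "a2"): 2, ("escort", "a3"): 4}

# Optimistic bound: the best any agent could score in each role.
UPPER = {r: max(SCORE[r, a] for a in AGENTS) for r in ROLES}

best_value, best_alloc = float("-inf"), None

def search(i, remaining, partial, value):
    global best_value, best_alloc
    if i == len(ROLES):
        if value > best_value:
            best_value, best_alloc = value, dict(partial)
        return
    # Prune: even the most optimistic fill-in of the remaining roles
    # cannot beat the incumbent allocation.
    if value + sum(UPPER[r] for r in ROLES[i:]) <= best_value:
        return
    for a in list(remaining):
        partial[ROLES[i]] = a
        search(i + 1, remaining - {a}, partial, value + SCORE[ROLES[i], a])
        del partial[ROLES[i]]

search(0, set(AGENTS), {}, 0)
print(best_alloc, best_value)
```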


International Workshop on Formal Approaches to Agent-Based Systems | 2002

Computational Models for Multiagent Coordination Analysis: Extending Distributed POMDP Models

Hyuckchul Jung; Ranjit Nair; Milind Tambe; Stacy Marsella

Recently, researchers in multiagent systems have begun to focus on formal POMDP (Partially Observable Markov Decision Process) models for the analysis of multiagent coordination. However, prior work has mostly focused on the analysis of communication, such as via the COM-MTDP (Communicative Markov Team Decision Problem) model. This paper provides two extensions to this prior work that go beyond communication and analyze other aspects of multiagent coordination. In particular, we first present a formal model called R-COM-MTDP that extends COM-MTDP to analyze team formation and reorganization algorithms. R-COM-MTDP enables a rigorous and systematic analysis of complexity-optimality tradeoffs in team (re)formation approaches in different domain types. It provides worst-case complexity analysis of team (re)formation under varying conditions, and illustrates under which conditions role decomposition can provide significant reductions in computational complexity. Next, we propose COM-MTDP as a formal framework to analyze DCSP (Distributed Constraint Satisfaction Problem) strategies for conflict resolution. Different DCSP strategies are mapped onto policies in the COM-MTDP model, and agents compare strategies by evaluating their mapped policies. Thus, the two COM-MTDP-based methods could open the door to a range of novel analyses of multiagent team (re)formation, and facilitate automated selection of the most efficient strategy for a given situation.
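
The strategy-selection step can be caricatured in a few lines (a toy Monte Carlo evaluation with invented numbers, not the COM-MTDP machinery): each DCSP strategy is mapped to a policy, every policy is evaluated in the same model, and the highest-valued one is selected.

```python
# Toy sketch of "map each strategy to a policy, evaluate, pick the best".
# The strategies, reward model, and probabilities are all invented.
import random

random.seed(0)  # deterministic toy evaluation

def simulate_episode(concession_prob, steps=20):
    """Hypothetical conflict-resolution episode: conceding resolves a
    conflict with some probability, at a small per-step effort cost."""
    reward = 0.0
    for _ in range(steps):
        if random.random() < concession_prob:
            reward += 1.0                    # a conflict gets resolved
        reward -= 0.1 * concession_prob      # effort spent conceding
    return reward

# Hypothetical DCSP strategies, each mapped to a policy parameter.
strategies = {"stubborn": 0.1, "moderate": 0.5, "yielding": 0.9}

def evaluate(concession_prob, episodes=500):
    """Expected reward of the mapped policy, estimated by Monte Carlo."""
    return sum(simulate_episode(concession_prob) for _ in range(episodes)) / episodes

values = {name: evaluate(p) for name, p in strategies.items()}
for name, v in values.items():
    print(f"{name:9s} expected reward = {v:.2f}")
print("selected strategy:", max(values, key=values.get))
```

Which strategy wins depends entirely on the (here invented) reward model, which is exactly why a common evaluation framework like COM-MTDP is useful for comparing strategies across domain conditions.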


International Joint Conference on Artificial Intelligence | 2003

Taming decentralized POMDPs: towards efficient policy computation for multiagent settings

Ranjit Nair; Milind Tambe; Makoto Yokoo; David V. Pynadath; Stacy Marsella


Journal of Artificial Intelligence Research | 2005

Hybrid BDI-POMDP framework for multiagent teaming

Ranjit Nair; Milind Tambe

Collaboration


Dive into Ranjit Nair's collaborations.

Top Co-Authors

Milind Tambe
University of Southern California

Pradeep Varakantham
Singapore Management University

David V. Pynadath
University of Southern California

Hyuckchul Jung
Florida Institute for Human and Machine Cognition

Jonathan P. Pearce
University of Southern California

Nathan Schurr
University of Southern California

Praveen Paruchuri
University of Southern California

Rajiv T. Maheswaran
University of Southern California