Network


Latest external collaboration at the country level.

Hotspot


Dive into the research topics where Pradeep Varakantham is active.

Publications


Featured research published by Pradeep Varakantham.


Adaptive Agents and Multi-Agent Systems | 2004

Taking DCOP to the Real World: Efficient Complete Solutions for Distributed Multi-Event Scheduling

Rajiv T. Maheswaran; Milind Tambe; Emma Bowring; Jonathan P. Pearce; Pradeep Varakantham

Distributed Constraint Optimization (DCOP) is an elegant formalism relevant to many areas in multiagent systems, yet complete algorithms have not been pursued for real-world applications due to perceived complexity. To capture a rich class of complex problem domains, we introduce the Distributed Multi-Event Scheduling (DiMES) framework and design congruent DCOP formulations with binary constraints which are proven to yield the optimal solution. To approach real-world efficiency requirements, we obtain immense speedups by improving communication structure and precomputing best-case bounds. Heuristics for generating better communication structures and calculating bounds in a distributed manner are provided and tested on systematically developed domains for meeting scheduling and sensor networks, exemplifying the viability of complete algorithms.
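The binary-constraint DCOP formulation can be illustrated with a minimal sketch (the agents, domains, and utility values below are hypothetical, not taken from the paper): three agents each pick a meeting slot, pairwise constraints score the joint assignment, and exhaustive search recovers the optimum that a complete DCOP algorithm would return.

```python
from itertools import product

# Toy DCOP: three agents each pick a meeting slot; binary constraints
# reward agreements between connected agents (hypothetical values).
variables = ["A", "B", "C"]
domain = [0, 1, 2]  # candidate time slots

def utility(assignment):
    """Sum of binary constraint utilities over the constraint graph."""
    score = 0
    score += 2 if assignment["A"] == assignment["B"] else 0
    score += 3 if assignment["B"] == assignment["C"] else 0
    score += 1 if assignment["A"] != assignment["C"] else 0
    return score

# Complete search over all joint assignments (what a complete DCOP
# algorithm guarantees to match, just computed centrally here).
best = max(
    (dict(zip(variables, vals)) for vals in product(domain, repeat=3)),
    key=utility,
)
print(best, utility(best))  # all agents agree on one slot, utility 5
```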


Autonomous Agents and Multi-Agent Systems | 2006

Privacy Loss in Distributed Constraint Reasoning: A Quantitative Framework for Analysis and its Applications

Rajiv T. Maheswaran; Jonathan P. Pearce; Emma Bowring; Pradeep Varakantham; Milind Tambe

It is critical that agents deployed in real-world settings, such as businesses, offices, universities and research laboratories, protect their individual users’ privacy when interacting with other entities. Indeed, privacy is recognized as a key motivating factor in the design of several multiagent algorithms, such as in distributed constraint reasoning (including both algorithms for distributed constraint optimization (DCOP) and distributed constraint satisfaction (DisCSPs)), and researchers have begun to propose metrics for analysis of privacy loss in such multiagent algorithms. Unfortunately, a general quantitative framework to compare these existing metrics for privacy loss or to identify dimensions along which to construct new metrics is currently lacking. This paper presents three key contributions to address this shortcoming. First, the paper presents VPS (Valuations of Possible States), a general quantitative framework to express, analyze and compare existing metrics of privacy loss. Based on a state-space model, VPS is shown to capture various existing measures of privacy created for specific domains of DisCSPs. The utility of VPS is further illustrated through analysis of privacy loss in DCOP algorithms, when such algorithms are used by personal assistant agents to schedule meetings among users. In addition, VPS helps identify dimensions along which to classify and construct new privacy metrics and it also supports their quantitative comparison. Second, the article presents key inference rules that may be used in analysis of privacy loss in DCOP algorithms under different assumptions. Third, detailed experiments based on the VPS-driven analysis lead to the following key results: (i) decentralization by itself does not provide superior protection of privacy in DisCSP/DCOP algorithms when compared with centralization; instead, privacy protection also requires the presence of uncertainty about agents’ knowledge of the constraint graph. 
(ii) one needs to carefully examine the metrics chosen to measure privacy loss; the qualitative properties of privacy loss and hence the conclusions that can be drawn about an algorithm can vary widely based on the metric chosen. This paper should thus serve as a call to arms for further privacy research, particularly within the DisCSP/DCOP arena.
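The valuations-of-possible-states intuition can be sketched as follows (a simplified illustration with invented numbers, not the full VPS framework): an observer tracks which private states remain consistent with the messages it has seen, and privacy loss is measured as the fraction of initially possible states that get eliminated.

```python
# Possible-states privacy sketch: privacy loss as the fraction of an
# agent's initially possible private states that observers can rule out.

def privacy_loss(initial_states, remaining_states):
    """Fraction of initially possible states eliminated by inference."""
    eliminated = len(initial_states) - len(remaining_states)
    return eliminated / len(initial_states)

# Before any messages, a meeting-slot preference could be any of 8 values.
initial = set(range(8))
# After observing a rejected proposal, suppose only 3 values remain
# consistent (hypothetical inference outcome).
remaining = {2, 5, 7}
print(privacy_loss(initial, remaining))  # 0.625
```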


Adaptive Agents and Multi-Agent Systems | 2007

Letting loose a SPIDER on a network of POMDPs: generating quality guaranteed policies

Pradeep Varakantham; Janusz Marecki; Yuichi Yabu; Milind Tambe; Makoto Yokoo

Distributed Partially Observable Markov Decision Problems (Distributed POMDPs) are a popular approach for modeling multi-agent systems acting in uncertain domains. Given the significant complexity of solving distributed POMDPs, particularly as we scale up the number of agents, one popular approach has focused on approximate solutions. Though efficient, the algorithms within this approach provide no guarantees on solution quality. A second, less popular approach focuses on global optimality, but typical results are available only for two agents, and at considerable computational cost. This paper overcomes the limitations of both approaches by providing SPIDER, a novel combination of three key features for policy generation in distributed POMDPs: (i) it exploits the agent interaction structure given a network of agents (allowing easier scale-up to larger numbers of agents); (ii) it uses a combination of heuristics to speed up policy search; and (iii) it allows quality-guaranteed approximations, enabling a systematic tradeoff of solution quality for time. Experimental results show orders-of-magnitude improvement in performance compared with previous globally optimal algorithms.
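The quality-guaranteed approximation idea can be sketched with branch-and-bound search over a toy chain of agents (the pair values, bound, and epsilon parameter are all hypothetical): joint value decomposes over neighboring pairs, an optimistic bound prunes branches, and an epsilon slack trades solution quality for search time.

```python
# Branch-and-bound policy search over a chain of agents. Joint value
# decomposes over neighboring pairs; an optimistic bound prunes branches;
# epsilon > 0 would allow pruning near-optimal branches for speed while
# guaranteeing the answer is within epsilon of the optimum.

PAIR_VALUE = {  # value of (policy_i, policy_{i+1}) for neighboring agents
    (0, 0): 4, (0, 1): 1,
    (1, 0): 2, (1, 1): 3,
}
N_AGENTS = 4
POLICIES = [0, 1]
MAX_PAIR = max(PAIR_VALUE.values())

def search(prefix, value, best, epsilon):
    remaining_pairs = (N_AGENTS - 1) - max(len(prefix) - 1, 0)
    if value + remaining_pairs * MAX_PAIR <= best[0] + epsilon:
        return  # bound: branch cannot beat best by more than epsilon
    if len(prefix) == N_AGENTS:
        best[0], best[1] = value, list(prefix)
        return
    for p in POLICIES:
        gain = PAIR_VALUE[(prefix[-1], p)] if prefix else 0
        search(prefix + [p], value + gain, best, epsilon)

best = [float("-inf"), None]
search([], 0, best, epsilon=0.0)
print(best)  # exact optimum: value 12 with every agent choosing policy 0
```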


Adaptive Agents and Multi-Agent Systems | 2005

Exploiting belief bounds: practical POMDPs for personal assistant agents

Pradeep Varakantham; Rajiv T. Maheswaran; Milind Tambe

Agents or agent teams deployed to assist humans often face the challenges of monitoring the state of key processes in their environment (including the state of their human users themselves) and making periodic decisions based on such monitoring. POMDPs appear well suited to enable agents to address these challenges, given the uncertain environment and cost of actions, but optimal policy generation for POMDPs is computationally expensive. This paper introduces three key techniques to speed up POMDP policy generation that exploit the notion of progress or dynamics in personal assistant domains. Policy computation is restricted to the belief-space polytope that remains reachable given the progress structure of a domain. We introduce new algorithms, particularly one based on applying Lagrangian methods to compute a bounded belief-space support in polynomial time. Our techniques are complementary to many existing exact and approximate POMDP policy generation algorithms. Indeed, we illustrate this by enhancing two of the fastest existing algorithms for exact POMDP policy generation. The order-of-magnitude speedups demonstrate the utility of our techniques in facilitating the deployment of POMDPs within agents assisting human users.
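The reachable-belief intuition can be sketched with a toy two-state POMDP (the transition and observation probabilities are invented): from a fixed initial belief, repeated Bayesian updates reach only a small subset of the belief simplex, and policy computation can be restricted to that subset.

```python
import numpy as np

# Toy two-state POMDP with a single action (hypothetical numbers).
T = np.array([[0.9, 0.1],   # P(s' | s)
              [0.2, 0.8]])
O = np.array([[0.8, 0.2],   # P(o | s'), rows indexed by s'
              [0.3, 0.7]])

def update(belief, obs):
    """Bayesian belief update: predict, weight by observation, normalize."""
    predicted = belief @ T           # next-state distribution
    unnorm = predicted * O[:, obs]   # weight by observation likelihood
    return unnorm / unnorm.sum()

# Enumerate beliefs reachable from the initial belief to depth 3.
reachable = {(0.5, 0.5)}
frontier = [np.array([0.5, 0.5])]
for _ in range(3):
    nxt = []
    for b in frontier:
        for obs in (0, 1):
            nb = update(b, obs)
            key = tuple(np.round(nb, 6))
            if key not in reachable:
                reachable.add(key)
                nxt.append(nb)
    frontier = nxt
print(len(reachable))  # a handful of beliefs, not the whole simplex
```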


Adaptive Agents and Multi-Agent Systems | 2005

Conflicts in teamwork: hybrids to the rescue

Milind Tambe; Emma Bowring; Hyuckchul Jung; Gal A. Kaminka; Rajiv T. Maheswaran; Janusz Marecki; Pragnesh Jay Modi; Ranjit Nair; Stephen Okamoto; Jonathan P. Pearce; Praveen Paruchuri; David V. Pynadath; Paul Scerri; Nathan Schurr; Pradeep Varakantham

Today within the AAMAS community, we see at least four competing approaches to building multiagent systems: belief-desire-intention (BDI), distributed constraint optimization (DCOP), distributed POMDPs, and auctions or game-theoretic approaches. While there is exciting progress within each approach, there is a lack of cross-cutting research. This paper highlights hybrid approaches for multiagent teamwork. In particular, for the past decade, the TEAMCORE research group has focused on building agent teams in complex, dynamic domains. While our early work was inspired by BDI, we will present an overview of recent research that uses DCOPs and distributed POMDPs in building agent teams. While DCOP and distributed POMDP algorithms provide promising results, hybrid approaches help us address problems of scalability and expressiveness. For example, in the BDI-POMDP hybrid approach, BDI team plans are exploited to improve POMDP tractability, and POMDPs improve BDI team plan performance. We present some recent results from applying this approach in a Disaster Rescue simulation domain being developed with help from the Los Angeles Fire Department.


28th International Symposium on Automation and Robotics in Construction | 2011

Towards Optimization of Building Energy and Occupant Comfort Using Multi-Agent Simulation

Laura Klein; Geoffrey Kavulya; Farrokh Jazizadeh; Jun-young Kwak; Burcin Becerik-Gerber; Pradeep Varakantham; Milind Tambe

The primary consumers of building energy are heating, cooling, ventilation, and lighting systems, which maintain occupant comfort, and electronics and appliances that enable occupant functionality. The optimization of building energy is therefore a complex problem highly dependent on unique building and environmental conditions as well as on time dependent operational factors. To provide computational support for this optimization, this paper presents and implements a multi-agent comfort and energy simulation (MACES) to model alternative management and control of building systems and occupants. Human and device agents are used to explore current trends in energy consumption and management of a university test bed building. Reactive and predictive control strategies are then imposed on device agents in an attempt to reduce building energy consumption while maintaining occupant comfort. Finally, occupant agents are motivated by simulation feedback to accept more energy conscious scheduling through multi-agent negotiations. Initial results of the MACES demonstrate potential energy savings of 17% while maintaining a high level of occupant comfort. This work is intended to demonstrate a simulation tool, which is implementable in the actual test bed site and compatible with real-world input to instigate and motivate more energy conscious control and occupant behaviors.


Journal of Artificial Intelligence Research | 2012

Robust local search for solving RCPSP/max with durational uncertainty

Na Fu; Hoong Chuin Lau; Pradeep Varakantham; Fei Xiao

Scheduling problems in manufacturing, logistics and project management have frequently been modeled using the framework of Resource Constrained Project Scheduling Problems with minimum and maximum time lags (RCPSP/max). Due to the importance of these problems, providing scalable solution schedules for RCPSP/max problems is a topic of extensive research. However, all existing methods for solving RCPSP/max assume that durations of activities are known with certainty, an assumption that does not hold in real world scheduling problems where unexpected external events such as manpower availability, weather changes, etc. lead to delays or advances in completion of activities. Thus, in this paper, our focus is on providing a scalable method for solving RCPSP/max problems with durational uncertainty. To that end, we introduce the robust local search method consisting of three key ideas: (a) Introducing and studying the properties of two decision rule approximations used to compute start times of activities with respect to dynamic realizations of the durational uncertainty; (b) Deriving the expression for robust makespan of an execution strategy based on decision rule approximations; and (c) A robust local search mechanism to efficiently compute activity execution strategies that are robust against durational uncertainty. Furthermore, we also provide enhancements to local search that exploit temporal dependencies between activities. Our experimental results illustrate that robust local search is able to provide robust execution strategies efficiently.
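The robust-makespan idea can be sketched with a toy instance (durations and precedences are hypothetical): for an execute-as-early-as-possible strategy, the makespan is monotone in each activity duration, so the worst case over interval-bounded durations is the critical-path length with every duration at its maximum.

```python
# Project with precedence constraints and interval-bounded durations.
# activity -> (min_duration, max_duration), all values hypothetical
durations = {"a": (2, 4), "b": (1, 3), "c": (2, 2), "d": (1, 5)}
# activity -> predecessors that must finish first
preds = {"a": [], "b": ["a"], "c": ["a"], "d": ["b", "c"]}

def makespan(pick):
    """Critical-path makespan when each activity's duration is pick(bounds)."""
    finish = {}
    for act in ("a", "b", "c", "d"):  # topological order
        start = max((finish[p] for p in preds[act]), default=0)
        finish[act] = start + pick(durations[act])
    return max(finish.values())

best_case = makespan(lambda d: d[0])   # every duration at its minimum
robust = makespan(lambda d: d[1])      # worst-case realization
print(best_case, robust)               # 5 and 12
```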


Algorithmic Decision Theory | 2013

Optimization Approaches for Solving Chance Constrained Stochastic Orienteering Problems

Pradeep Varakantham; Akshat Kumar

Orienteering problems (OPs) are typically used to model routing and trip-planning problems. The OP is a variant of the well-known traveling salesman problem where the goal is to compute the highest-reward path that includes a subset of nodes and has an overall travel time less than a specified deadline. Stochastic orienteering problems (SOPs) extend OPs to account for uncertain travel times and are significantly harder to solve than deterministic OPs. In this paper, we contribute a scalable mixed-integer LP formulation for solving risk-aware SOPs, which is a principled approximation of the underlying stochastic optimization problem. Empirically, our approach provides significantly better solution quality than the previous best approach over a range of synthetic benchmarks and on a real-world theme park trip-planning problem.
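A deterministic OP instance makes the deadline structure concrete (the rewards, travel times, and deadline below are invented): enumerate ordered subsets of nodes from a depot and keep the highest-reward path whose total travel time fits the deadline.

```python
from itertools import permutations

# Toy orienteering instance: node rewards and symmetric travel times.
reward = {"p1": 5, "p2": 8, "p3": 6}
travel = {
    ("start", "p1"): 2, ("start", "p2"): 4, ("start", "p3"): 3,
    ("p1", "p2"): 2, ("p1", "p3"): 4, ("p2", "p3"): 1,
}
def t(a, b):
    return travel.get((a, b)) or travel[(b, a)]

DEADLINE = 4
best_reward, best_path = 0, ()
for r in range(1, len(reward) + 1):
    for path in permutations(reward, r):
        # Open path: leave the depot, no return leg required.
        time = t("start", path[0]) + sum(t(a, b) for a, b in zip(path, path[1:]))
        if time <= DEADLINE:
            total = sum(reward[n] for n in path)
            if total > best_reward:
                best_reward, best_path = total, path
print(best_path, best_reward)  # visits p3 then p2 for reward 14
```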


Journal of Scheduling | 2015

Robust execution strategies for project scheduling with unreliable resources and stochastic durations

Na Fu; Hoong Chuin Lau; Pradeep Varakantham

The resource-constrained project scheduling problem with minimum and maximum time lags (RCPSP/max) is a general model for resource scheduling in many real-world problems (such as manufacturing and construction engineering). We consider RCPSP/max problems where the durations of activities are stochastic and resources can have unforeseen breakdowns. Given a level of allowable risk,


Autonomous Agents and Multi-Agent Systems | 2014

TESLA: an extended study of an energy-saving agent that leverages schedule flexibility

Jun-young Kwak; Pradeep Varakantham; Rajiv T. Maheswaran; Yu-Han Chang; Milind Tambe; Burcin Becerik-Gerber; Wendy Wood

Collaboration


Dive into Pradeep Varakantham's collaboration.

Top Co-Authors

Milind Tambe, University of Southern California
Hoong Chuin Lau, Singapore Management University
Rajiv T. Maheswaran, University of Southern California
William Yeoh, Washington University in St. Louis
Shih-Fen Cheng, Singapore Management University
Akshat Kumar, Singapore Management University
Supriyo Ghosh, Singapore Management University
Jun-young Kwak, Carnegie Mellon University
Patrick Jaillet, Massachusetts Institute of Technology