Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Paul Scerri is active.

Publication


Featured research published by Paul Scerri.


Journal of Artificial Intelligence Research | 2002

Towards adjustable autonomy for the real world

Paul Scerri; David V. Pynadath; Milind Tambe

Adjustable autonomy refers to entities dynamically varying their own autonomy, transferring decision-making control to other entities (typically agents transferring control to human users) in key situations. Determining whether and when such transfers-of-control should occur is arguably the fundamental research problem in adjustable autonomy. Previous work has investigated various approaches to addressing this problem but has often focused on individual agent-human interactions. Unfortunately, domains requiring collaboration between teams of agents and humans reveal two key shortcomings of these previous approaches. First, these approaches use rigid one-shot transfers of control that can result in unacceptable coordination failures in multiagent settings. Second, they ignore costs (e.g., in terms of time delays or effects on actions) to an agent's team due to such transfers-of-control. To remedy these problems, this article presents a novel approach to adjustable autonomy, based on the notion of a transfer-of-control strategy. A transfer-of-control strategy consists of a conditional sequence of two types of actions: (i) actions to transfer decision-making control (e.g., from an agent to a user or vice versa) and (ii) actions to change an agent's pre-specified coordination constraints with team members, aimed at minimizing miscoordination costs. The goal is for high-quality individual decisions to be made with minimal disruption to the coordination of the team. We present a mathematical model of transfer-of-control strategies. The model guides and informs the operationalization of the strategies using Markov Decision Processes, which select an optimal strategy, given an uncertain environment and costs to the individuals and teams. The approach has been carefully evaluated, including via its use in a real-world, deployed multi-agent system that assists a research group in its daily activities.
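The trade-off the abstract describes can be illustrated with a small expected-utility calculation for one simple strategy: transfer control to the human, wait up to a deadline, then take control back. This is a hedged sketch, not the paper's MDP formulation; the response rate `lam`, the decision qualities `q_h`/`q_a`, and the quadratic waiting cost `w*t**2` are illustrative assumptions standing in for the model's response-time distribution and miscoordination costs.

```python
import math

def expected_utility(T, lam=0.5, q_h=10.0, q_a=6.0, w=1.0, steps=1000):
    """Expected utility of the one-transfer strategy: give control to the
    human, wait up to deadline T, then take control back and act.
    Human responses arrive with rate lam (exponential response time);
    q_h / q_a are the human's / agent's decision quality; the team's
    miscoordination cost while waiting grows quadratically, w * t**2."""
    dt = T / steps
    eu = 0.0
    for i in range(steps):                    # human answers at some t < T
        t = (i + 0.5) * dt
        eu += lam * math.exp(-lam * t) * (q_h - w * t * t) * dt
    # No answer by the deadline: the agent decides itself.
    eu += math.exp(-lam * T) * (q_a - w * T * T)
    return eu

def best_deadline(candidates):
    """Pick the candidate deadline with the highest expected utility."""
    return max(candidates, key=expected_utility)
```

With these illustrative numbers, a nonzero deadline wins: waiting buys the human's better decision until the growing miscoordination cost outweighs it.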


Adaptive Agents and Multi-Agent Systems | 2005

Allocating tasks in extreme teams

Paul Scerri; Alessandro Farinelli; Steven Okamoto; Milind Tambe

Extreme teams, large-scale agent teams operating in dynamic environments, are on the horizon. Such environments are problematic for current task allocation algorithms due to the lack of locality in agent interactions. We propose a novel distributed task allocation algorithm for extreme teams, called LA-DCOP, that incorporates three key ideas. First, LA-DCOP's task allocation is based on a dynamically computed minimum capability threshold which uses approximate knowledge of overall task load. Second, LA-DCOP uses tokens to represent tasks and further minimize communication. Third, it creates potential tokens to deal with inter-task constraints of simultaneous execution. We show that LA-DCOP convincingly outperforms competing distributed task allocation algorithms while using orders of magnitude fewer messages, allowing a dramatic scale-up in extreme teams, up to a fully distributed, proxy-based team of 200 agents. Varying thresholds are seen as key to outperforming competing distributed algorithms in the domain of simulated disaster rescue.
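The threshold idea can be sketched in a few lines: each task circulates as a token, and an agent keeps a token only if its capability for that task meets the threshold and it has spare capacity, otherwise it passes the token on. This is an illustration in the spirit of LA-DCOP, not the published algorithm; the function name, the capability dictionary, and the hop bound are assumptions.

```python
import random

def la_dcop_sketch(tasks, capability, threshold, capacity=1, seed=0):
    """Threshold-based token allocation sketch. `capability` maps
    (agent, task) pairs to a skill level in [0, 1]; each task token hops
    between agents until a sufficiently capable agent with spare
    capacity keeps it (or the hop bound runs out)."""
    rng = random.Random(seed)
    agents = sorted({a for a, _ in capability})
    load = {a: 0 for a in agents}
    assignment = {}
    for task in tasks:
        holder = rng.choice(agents)
        for _ in range(20 * len(agents)):     # bound the token's hops
            if capability[(holder, task)] >= threshold and load[holder] < capacity:
                assignment[task] = holder
                load[holder] += 1
                break
            holder = rng.choice([a for a in agents if a != holder])
    return assignment
```

Because tokens only ever move between teammates, the only communication is the token itself, which is the source of the message savings the abstract reports.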


AIAA Infotech@Aerospace 2007 Conference and Exhibit | 2007

Geolocation of RF Emitters by Many UAVs

Paul Scerri; Robin Glinton; Sean Owens; David Scerri; Katia P. Sycara

This paper presents an approach to using a large team of UAVs to find radio frequency (RF) emitting targets in a large area. Small, inexpensive UAVs that can collectively and rapidly determine the approximate location of intermittently broadcasting and mobile RF emitters have a range of applications in both military domains, e.g., finding SAM batteries, and civilian domains, e.g., finding lost hikers. Received Signal Strength Indicator (RSSI) sensors on board the UAVs measure the strength of RF signals across a range of frequencies. The signals, although noisy and ambiguous due to structural noise, e.g., multipath effects, overlapping signals and sensor noise, allow estimates to be made of emitter locations. Generating a probability distribution over emitter locations requires integrating multiple signals from different UAVs into a Bayesian filter, hence requiring cooperation between the UAVs. Once likely target locations are identified, EO-camera equipped UAVs must be tasked to provide a video stream of the area to allow a user to identify the emitter.
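The fusion step can be pictured as a grid-based Bayes filter over candidate emitter cells. The measurement model below (log-distance path loss, `p0 - 20*log10(d)`, with Gaussian noise) is a textbook assumption for illustration, not the paper's sensor model.

```python
import math

def rssi_update(prior, uav_pos, rssi, noise=2.0, p0=-40.0, min_dist=0.5):
    """One Bayes-filter update: reweight each candidate emitter cell by
    the likelihood of the observed RSSI given the UAV's position, then
    renormalize. `prior` maps (x, y) cells to probabilities."""
    posterior = {}
    for (x, y), p in prior.items():
        d = max(min_dist, math.hypot(x - uav_pos[0], y - uav_pos[1]))
        expected = p0 - 20.0 * math.log10(d)       # assumed path-loss model
        likelihood = math.exp(-0.5 * ((rssi - expected) / noise) ** 2)
        posterior[(x, y)] = p * likelihood
    z = sum(posterior.values()) or 1.0             # renormalize
    return {xy: p / z for xy, p in posterior.items()}
```

Applying `rssi_update` sequentially with measurements taken by different UAVs fuses their observations into a single distribution, which is why the filtering step requires cooperation between the UAVs.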


Adaptive Agents and Multi-Agent Systems | 2005

An integrated token-based algorithm for scalable coordination

Yang Xu; Paul Scerri; Bin Yu; Steven Okamoto; Michael Lewis; Katia P. Sycara

Efficient coordination among large numbers of heterogeneous agents promises to revolutionize the way in which some complex tasks, such as responding to urban disasters, can be performed. However, state-of-the-art coordination algorithms are not capable of achieving efficient and effective coordination when a team is very large. Building on recent successful token-based algorithms for task allocation and information sharing, we have developed an integrated and efficient approach to effective coordination of large scale teams. We use tokens to encapsulate anything that needs to be shared by the team, including information, tasks and resources. The tokens are efficiently routed through the team via the use of local decision theoretic models. Each token is used to improve the routing of other tokens, leading to a dramatic performance improvement when the algorithms work together. We present results from an implementation of this approach which demonstrates its ability to coordinate large teams.
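One way to picture "each token improves the routing of other tokens": an agent scores each neighbor per token type based on the tokens that neighbor previously sent, then forwards new tokens to the highest-scoring neighbor. This is a simplified stand-in for the paper's local decision-theoretic models; the class and its scoring rule are illustrative assumptions.

```python
from collections import defaultdict

class TokenRouter:
    """Sketch of local token routing: past tokens from a neighbor raise
    that neighbor's score for the token's type, and new tokens of that
    type are forwarded toward the highest-scoring neighbor."""

    def __init__(self, neighbors):
        self.neighbors = list(neighbors)
        self.score = defaultdict(float)    # (neighbor, token_type) -> score

    def observe(self, sender, token_type):
        # A neighbor sending tokens of some type suggests its part of the
        # network deals in that type; raise its routing score.
        self.score[(sender, token_type)] += 1.0

    def route(self, token_type, came_from=None):
        # Forward to the most promising neighbor, never straight back.
        candidates = [n for n in self.neighbors if n != came_from]
        return max(candidates, key=lambda n: self.score[(n, token_type)])
```

Because every agent makes this decision with purely local state, no agent needs a global view of the team, which is what lets the scheme scale.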


Adaptive Agents and Multi-Agent Systems | 2001

Adjustable autonomy in real-world multi-agent environments

Paul Scerri; David V. Pynadath; Milind Tambe

Through adjustable autonomy (AA), an agent can dynamically vary the degree to which it acts autonomously, allowing it to exploit human abilities to improve its performance, but without becoming overly dependent and intrusive in its human interaction. AA research is critical for successful deployment of multi-agent systems in support of important human activities. While most previous AA work has focused on individual agent-human interactions, this paper focuses on teams of agents operating in real-world human organizations. The need for agent teamwork and coordination in such environments introduces novel AA challenges. First, agents must be more judicious in asking for human intervention, because, although human input can prevent erroneous actions that have high team costs, one agent's inaction while waiting for a human response can lead to potential miscoordination with the other agents in the team. Second, despite appropriate local decisions by individual agents, the overall team of agents can potentially make global decisions that are unacceptable to the human team. Third, the diversity in real-world human organizations requires that agents gradually learn individualized models of the human members, while still making reasonable decisions even before sufficient data are available. We address these challenges using a multi-agent AA framework based on an adaptive model of users (and teams) that reasons about the uncertainty, costs, and constraints of decisions at all levels of the team hierarchy, from the individual users to the overall human organization. We have implemented this framework through Markov decision processes, which are well suited to reason about the costs and uncertainty of individual and team actions. Our approach to AA has proven essential to the success of our deployed multi-agent Electric Elves system that assists our research group in rescheduling meetings, choosing presenters, tracking people's locations, and ordering meals.


Adaptive Agents and Multi-Agent Systems | 2004

Scaling Teamwork to Very Large Teams

Paul Scerri; Yang Xu; Elizabeth Liao; Justin Lai; Katia P. Sycara

As a paradigm for coordinating cooperative agents in dynamic environments, teamwork has been shown to be capable of leading to flexible and robust behavior. However, when we apply teamwork to the problem of building teams with hundreds of members, fundamental limitations become apparent. We have developed a model of teamwork that addresses the limitations of existing models as they apply to very large teams. A central idea of the model is to organize team members into dynamically evolving subteams. Additionally, we present a novel approach to sharing information, leveraging the properties of small-world networks. The algorithm provides targeted, efficient information delivery. We have developed domain-independent software proxies with which we demonstrate teams at least an order of magnitude larger than previously published. Moreover, the same proxies proved effective for teamwork in two distinct domains, illustrating the generality of the approach.
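The small-world property the abstract leverages can be sketched as a ring lattice with a few random long-range shortcuts: short hop counts between arbitrary members are what make targeted information delivery cheap. The construction below is the standard Watts-Strogatz-style sketch, an assumption for illustration rather than the paper's proxy network.

```python
import random

def small_world(n, k=2, shortcuts=0.1, seed=0):
    """Build an undirected graph: a ring lattice where each node links
    to its k nearest neighbors on each side, plus a few random
    long-range shortcut edges."""
    rng = random.Random(seed)
    adj = {i: set() for i in range(n)}
    for i in range(n):
        for j in range(1, k + 1):
            adj[i].add((i + j) % n)
            adj[(i + j) % n].add(i)
    for _ in range(int(shortcuts * n)):       # sprinkle in shortcuts
        a, b = rng.randrange(n), rng.randrange(n)
        if a != b:
            adj[a].add(b)
            adj[b].add(a)
    return adj

def hops_to(adj, src, dst):
    """Breadth-first hop count from src to dst (assumes a connected
    graph, which the ring guarantees here)."""
    frontier, seen, hops = {src}, {src}, 0
    while dst not in frontier:
        frontier = {m for node in frontier for m in adj[node]} - seen
        seen |= frontier
        hops += 1
    return hops
```

In a team organized this way, a message routed greedily toward its target traverses only a few hops even in a large team, instead of flooding every member.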


Archive | 2003

Adjustable Autonomy for the Real World

Paul Scerri; David V. Pynadath; Milind Tambe

Adjustable autonomy refers to agents’ dynamically varying their own autonomy, transferring decision making control to other entities (typically human users) in key situations. Determining whether and when such transfers of control must occur is arguably the fundamental research question in adjustable autonomy. Previous work, often focused on individual agent-human interactions, has provided several different techniques to address this question. Unfortunately, domains requiring collaboration between teams of agents and humans reveal two key shortcomings of these previous techniques. First, these techniques use rigid one-shot transfers of control that can result in unacceptable coordination failures in multiagent settings. Second, they ignore costs (e.g., in terms of time delays or effects of actions) to an agent’s team due to such transfers of control.


IEEE International Conference on Evolutionary Computation | 1998

Real time genetic scheduling of aircraft landing times

V. Ciesielski; Paul Scerri

Evolutionary approaches are not usually considered for real time scheduling problems due to long computation times and uncertainty about the length of the computation time. The authors argue that for some kinds of problems, such as optimizing aircraft landing times, genetic algorithms have advantages over other methods: a best-so-far solution is always available when needed, and, since the computation is inherently parallel, more processors can be added to get higher quality solutions if necessary. Furthermore, the computation time can be decreased and the quality of the generated schedules increased by seeding the genetic algorithm from a previous population. They have performed a series of experiments on landing data for Sydney airport on the busiest day of the year. Their results show that high-quality solutions can be computed in the time window between aircraft landings.
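A toy version of the approach, including the warm-start idea: a chromosome is a vector of landing times, and fitness penalizes each aircraft's deviation from its target time plus violations of the minimum separation between consecutive landings. The encoding, operators, and constants here are illustrative assumptions, not the paper's implementation.

```python
import random

def schedule_ga(targets, separation, generations=200, pop_size=30,
                seed_pop=None, rng=None):
    """Toy genetic algorithm for landing times. `targets` are the
    preferred landing times; `separation` is the minimum gap between
    consecutive landings; `seed_pop` optionally warm-starts the search
    from a previous population."""
    rng = rng or random.Random(0)

    def cost(times):
        order = sorted(times)
        dev = sum(abs(t - g) for t, g in zip(times, targets))
        gaps = sum(max(0.0, separation - (b - a))
                   for a, b in zip(order, order[1:]))
        return dev + 100.0 * gaps              # heavy penalty for conflicts

    pop = [list(p) for p in (seed_pop or [])][:pop_size]
    while len(pop) < pop_size:                 # fill with random perturbations
        pop.append([g + rng.uniform(-2.0, 2.0) for g in targets])
    for _ in range(generations):
        pop.sort(key=cost)
        elite = pop[: pop_size // 2]           # keep the best half
        pop = elite + [[t + rng.gauss(0.0, 0.5) for t in p] for p in elite]
    return min(pop, key=cost)
```

Because the elite is retained every generation, the best-so-far schedule is always available, which is the anytime property the authors highlight; passing the final population back in as `seed_pop` when new aircraft data arrives is the seeding idea.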


Archive | 2009

Principles of Practice in Multi-Agent Systems

Jung-Jin Yang; Makoto Yokoo; Takayuki Ito; Zhi Jin; Paul Scerri

Technical Papers:
- A Market-Based Multi-Issue Negotiation Model Considering Multiple Preferences in Dynamic E-Marketplaces
- Designing Protocols for Collaborative Translation
- An Affective Agent Playing Tic-Tac-Toe as Part of a Healing Environment
- A Multi-agent Model for Emotion Contagion Spirals Integrated within a Supporting Ambient Agent Model
- Statistical Utterance Selection Using Word Co-occurrence for a Dialogue Agent
- On the Impact of Witness-Based Collusion in Agent Societies
- Efficient Methods for Multi-agent Multi-issue Negotiation: Allocating Resources
- Token Based Resource Sharing in Heterogeneous Multi-agent Teams
- Gaia Agents Implementation through Models Transformation
- ONTOMO: Development of Ontology Building Service
- Syncretic Argumentation by Means of Lattice Homomorphism
- Adaptive Adjustment of Starting Price for Agents in Continuous Double Auctions
- SIM-MADARP: An Agent-Based Tool for Dial-a-Ride Simulation
- An Empirical Study of Agent Programs
- A Multiagent Model for Provider-Centered Trust in Composite Web Services
- Memory Complexity of Automated Trust Negotiation Strategies
- Layered Distributed Constraint Optimization Problem for Resource Allocation Problem in Distributed Sensor Networks
- NegoExplorer: A Region-Based Recursive Approach to Bilateral Multi-attribute Negotiation
- Applying User Feedback and Query Learning Methods to Multiple Communities
- An Adaptive Human-Aware Software Agent Supporting Attention-Demanding Tasks
- Designing a Two-Sided Matching Protocol under Asymmetric Information
- Emotion Detection from Body Motion of Human Form Robot Based on Laban Movement Analysis
- HoneySpam 2.0: Profiling Web Spambot Behaviour

Multimedia Papers:
- A Modeling Tool for Service-Oriented Open Multiagent Systems
- Analysis, Comparison and Selection of MAS Software Engineering Processes and Tools
- A Synchronous Model of Mental Rhythm Using Paralanguage for Communication Robots
- Generating Association-Based Motion through Human-Robot Interaction
- SmartContractor: A Distributed Task Assignment System Based on the Simple Contract Net Protocol
- Participatory Simulation Environment gumonji/Q: A Network Game Empowered by Agents

Industrial Papers:
- A Multi-Agent System Based Approach to Intelligent Process Automation Systems
- Non-equity Joints among Small and Medium Enterprises and Innovation Management: An Empirical Analysis Based on Simulation
- Wide-Area Traffic Simulation Based on Driving Behavior Model
- An Agent-Based Framework for Healthcare Support System
- Interpolation System of Traffic Condition by Estimation/Learning Agents

Poster Papers:
- A Fuzzy Rule-Based System for Ontology Mapping
- Where Are All the Agents? On the Gap between Theory and Practice of Agent-Based Referral Networks
- SADE: A Development Environment for Adaptive Multi-Agent Systems
- Recursive Adaptation of Stepsize Parameter for Non-stationary Environments
- Mechanism Design Simulation for Healthcare Reform in China
- Case Learning in CBR-Based Agent Systems for Ship Collision Avoidance
- An Adaptive Agent Model for Emotion Reading by Mirroring Body States and Hebbian Learning
- Agent Evacuation Simulation Using a Hybrid Network and Free Space Models
- Designing Agent Behaviour in Agent-Based Simulation through Participatory Method
- Influence of Social Networks on Recovering Large Scale Distributed Systems
- Dynamic Evolution of Role Taxonomies through Multidimensional Clustering in Multiagent Organizations
- Adaptation and Validation of an Agent Model of Functional State and Performance for Individuals
- A Cooperation Trading Method with Hybrid Traders
- GPGCloud: Model Sharing and Execution Environment Service for Simulation of International Politics and Economics
- Creating and Using Reputation-Based Agreements in Organisational Environments
- Directory Service in the Language Grid for System Integration
- SBDO: A New Robust Approach to Dynamic Distributed Constraint Optimisation
- Evacuation Planning Assist System with Network Model-Based Pedestrian Simulator


Adaptive Agents and Multi-Agent Systems | 2005

Conflicts in teamwork: hybrids to the rescue

Milind Tambe; Emma Bowring; Hyuckchul Jung; Gal A. Kaminka; Rajiv T. Maheswaran; Janusz Marecki; Pragnesh Jay Modi; Ranjit Nair; Steven Okamoto; Jonathan P. Pearce; Praveen Paruchuri; David V. Pynadath; Paul Scerri; Nathan Schurr; Pradeep Varakantham

Today within the AAMAS community, we see at least four competing approaches to building multiagent systems: belief-desire-intention (BDI), distributed constraint optimization (DCOP), distributed POMDPs, and auctions or game-theoretic approaches. While there is exciting progress within each approach, there is a lack of cross-cutting research. This paper highlights hybrid approaches for multiagent teamwork. In particular, for the past decade, the TEAMCORE research group has focused on building agent teams in complex, dynamic domains. While our early work was inspired by BDI, we will present an overview of recent research that uses DCOPs and distributed POMDPs in building agent teams. While DCOP and distributed POMDP algorithms provide promising results, hybrid approaches help us address problems of scalability and expressiveness. For example, in the BDI-POMDP hybrid approach, BDI team plans are exploited to improve POMDP tractability, and POMDPs improve BDI team plan performance. We present some recent results from applying this approach in a Disaster Rescue simulation domain being developed with help from the Los Angeles Fire Department.

Collaboration


Dive into Paul Scerri's collaborations.

Top Co-Authors

Katia P. Sycara, Carnegie Mellon University
Michael Lewis, University of Pittsburgh
Milind Tambe, University of Southern California
Huadong Wang, University of Pittsburgh
Nathan Brooks, Carnegie Mellon University
Nathan Schurr, University of Southern California
Robin Glinton, Carnegie Mellon University
David V. Pynadath, University of Southern California