
Publication


Featured research published by Sarit Kraus.


Artificial Intelligence | 1996

Collaborative plans for complex group action

Barbara J. Grosz; Sarit Kraus

The original formulation of SharedPlans by B. Grosz and C. Sidner (1990) was developed to provide a model of collaborative planning in which it was not necessary for one agent to have intentions-to toward an act of a different agent. Unlike other contemporaneous approaches (J.R. Searle, 1990), this formulation provided for two agents to coordinate their activities without introducing any notion of irreducible joint intentions. However, it only treated activities that directly decomposed into single-agent actions, did not address the need for agents to commit to their joint activity, and did not adequately deal with agents having only partial knowledge of the way in which to perform an action. This paper provides a revised and expanded version of SharedPlans that addresses these shortcomings. It also reformulates Pollack's (1990) definition of individual plans to handle cases in which a single agent has only partial knowledge; this reformulation meshes with the definition of SharedPlans. The new definitions also allow for contracting out certain actions. The formalization that results has the features required by Bratman's (1992) account of shared cooperative activity and is more general than alternative accounts (H. Levesque et al., 1990; E. Sonenberg et al., 1992).


Artificial Intelligence | 1998

Reaching agreements through argumentation: a logical model and implementation

Sarit Kraus; Katia P. Sycara; Amir Evenchik

In a multi-agent environment, where self-motivated agents try to pursue their own goals, cooperation cannot be taken for granted. Cooperation must be planned for and achieved through communication and negotiation. We present a logical model of the mental states of the agents based on a representation of their beliefs, desires, intentions, and goals. We present argumentation as an iterative process emerging from exchanges among agents to persuade each other and bring about a change in intentions. We look at argumentation as a mechanism for achieving cooperation and agreements. Using categories identified from human multi-agent negotiation, we demonstrate how the logic can be used to specify argument formulation and evaluation. We also illustrate how the developed logic can be used to describe different types of agents. Furthermore, we present a general Automated Negotiation Agent which we implemented, based on the logical model. Using this system, a user can analyze and explore different methods to negotiate and argue in a noncooperative environment where no centralized mechanism for coordination exists. The development of negotiating agents in the framework of the Automated Negotiation Agent is illustrated with an example where the agents plan, act, and resolve conflicts via negotiation in a Blocks World environment.


Artificial Intelligence | 1997

Negotiation and cooperation in multi-agent environments

Sarit Kraus

Automated intelligent agents inhabiting a shared environment must coordinate their activities. Cooperation—not merely coordination—may improve the performance of the individual agents or the overall behavior of the system they form. Research in Distributed Artificial Intelligence (DAI) addresses the problem of designing automated intelligent systems which interact effectively. DAI is not the only field to take on the challenge of understanding cooperation and coordination. There are a variety of other multi-entity environments in which the entities coordinate their activity and cooperate. Among them are groups of people, animals, particles, and computers. We argue that in order to address the challenge of building coordinated and collaborative intelligent agents, it is beneficial to combine AI techniques with methods and techniques from a range of multi-entity fields, such as game theory, operations research, physics and philosophy. To support this claim, we describe some of our projects in which we have successfully taken an interdisciplinary approach. We demonstrate the benefits of applying multi-entity methodologies and show the adaptations, modifications and extensions necessary for solving the DAI problems.


Artificial Intelligence | 1995

Multiagent negotiation under time constraints

Sarit Kraus; Jonathan Wilkenfeld; Gilad Zlotkin

Research in distributed artificial intelligence (DAI) is concerned with how automated agents can be designed to interact effectively. Negotiation is proposed as a means for agents to communicate and compromise to reach mutually beneficial agreements. The paper examines the problems of resource allocation and task distribution among autonomous agents that can benefit from sharing a common resource or distributing a set of common tasks. We propose a strategic model of negotiation that takes the passage of time during the negotiation process itself into account. A distributed negotiation mechanism is introduced that is simple, efficient, stable, and flexible in various situations. The model considers situations characterized by complete as well as incomplete information, and ones in which some agents lose over time while others gain over time. Using this negotiation mechanism, autonomous agents have simple and stable negotiation strategies that result in efficient agreements without delays even when there are dynamic changes in the environment.
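
To make the role of delay concrete, here is a minimal backward-induction sketch of an alternating-offers split of a single resource in which each agent pays a fixed utility cost per period of delay. It illustrates the time-cost idea only and is not the paper's strategic model; the horizon and cost values are made-up assumptions.

```python
# Minimal sketch (not the paper's mechanism): backward induction for an
# alternating-offers negotiation over a resource of size 1, where each agent
# pays a fixed utility cost for every period of delay.

def equilibrium_offer(horizon, cost_first_proposer, cost_second_proposer):
    """Return (share kept, share offered) by the period-0 proposer."""
    costs = (cost_first_proposer, cost_second_proposer)
    # In the final period the responder accepts any non-negative offer,
    # so that period's proposer keeps the whole resource.
    keep = 1.0
    for t in range(horizon - 2, -1, -1):
        # The period-t responder is the period-(t+1) proposer (agent (t+1) % 2).
        responder_cost = costs[(t + 1) % 2]
        # It accepts exactly what waiting one more (costly) period would
        # give it as the next proposer.
        offer = max(0.0, keep - responder_cost)
        keep = 1.0 - offer
    return keep, 1.0 - keep

if __name__ == "__main__":
    keep, offer = equilibrium_offer(horizon=60,
                                    cost_first_proposer=0.02,
                                    cost_second_proposer=0.05)
    # With these made-up per-period costs the lower-cost agent keeps most of
    # the resource, and agreement is reached immediately at t = 0.
    print(f"period-0 proposer keeps {keep:.2f}, offers {offer:.2f}")
```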


Adaptive Agents and Multi-Agent Systems | 2008

Deployed ARMOR protection: the application of a game theoretic model for security at the Los Angeles International Airport

James Pita; Manish Jain; Janusz Marecki; Christopher Portway; Milind Tambe; Craig Western; Praveen Paruchuri; Sarit Kraus

Security at major locations of economic or political importance is a key concern around the world, particularly given the threat of terrorism. Limited security resources prevent full security coverage at all times, which allows adversaries to observe and exploit patterns in selective patrolling or monitoring, e.g., they can plan an attack that avoids existing patrols. Hence, randomized patrolling or monitoring is important, but randomization must provide distinct weights to different actions based on their complex costs and benefits. To this end, this paper describes a promising transition of the latest in multi-agent algorithms (in fact, an algorithm that represents a culmination of research presented at AAMAS) into a deployed application. In particular, it describes a software assistant agent called ARMOR (Assistant for Randomized Monitoring over Routes) that casts this patrolling/monitoring problem as a Bayesian Stackelberg game, allowing the agent to appropriately weigh the different actions in randomization, as well as uncertainty over adversary types. ARMOR combines three key features: (i) it uses the fastest known solver for Bayesian Stackelberg games, called DOBSS, where the dominant mixed strategies enable randomization; (ii) its mixed-initiative-based interface allows users to occasionally adjust or override the automated schedule based on their local constraints; (iii) it alerts the users if mixed-initiative overrides appear to degrade the overall desired randomization. ARMOR has been successfully deployed since August 2007 at the Los Angeles International Airport (LAX) to randomize checkpoints on the roadways entering the airport and canine patrol routes within the airport terminals. This paper examines the information, design choices, challenges, and evaluation that went into designing ARMOR.
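
The Stackelberg commitment idea behind ARMOR can be illustrated with a toy two-target security game. The sketch below is not DOBSS and not ARMOR's actual model: it simply grid-searches the defender's coverage probabilities for a single adversary type, with the attacker best-responding to the observed mixture; the target names and payoff numbers are invented.

```python
# Toy sketch of Stackelberg commitment in a two-target security game
# (hypothetical illustration only; not DOBSS, not ARMOR's model).
# payoff[target] = (defender if covered, defender if uncovered,
#                   attacker if covered, attacker if uncovered)
PAYOFFS = {
    "terminal_road": (1.0, -5.0, -2.0, 4.0),
    "cargo_gate":    (0.5, -3.0, -1.0, 2.0),
}

def attacker_best_response(coverage):
    """The attacker observes the mixed strategy and hits its best target."""
    def atk_value(t):
        _, _, ac, au = PAYOFFS[t]
        return coverage[t] * ac + (1 - coverage[t]) * au
    return max(PAYOFFS, key=atk_value)

def defender_value(coverage):
    t = attacker_best_response(coverage)
    dc, du, _, _ = PAYOFFS[t]
    return coverage[t] * dc + (1 - coverage[t]) * du

def best_commitment(steps=100):
    """Grid-search how to split one patrol resource across the two targets."""
    best = None
    for i in range(steps + 1):
        q = i / steps
        coverage = {"terminal_road": q, "cargo_gate": 1 - q}
        value = defender_value(coverage)
        if best is None or value > best[1]:
            best = (coverage, value)
    return best

if __name__ == "__main__":
    coverage, value = best_commitment()
    print("coverage:", coverage, "defender value: %.2f" % value)
```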


IEEE Transactions on Knowledge and Data Engineering | 1991

Combining multiple knowledge bases

Chitta Baral; Sarit Kraus; Jack Minker

Combining knowledge present in multiple knowledge base systems into a single knowledge base is discussed. A knowledge-based system can be considered an extension of a deductive database in that it permits function symbols as part of the theory. Alternative knowledge bases that deal with the same subject matter are considered. The authors define the concept of combining knowledge present in a set of knowledge bases and present algorithms to maximally combine them so that the combination is consistent with respect to the integrity constraints associated with the knowledge bases. For this, the authors define the concept of maximality and prove that the algorithms presented combine the knowledge bases to generate a maximal theory. The authors also discuss the relationships between combining multiple knowledge bases and the view update problem.
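
A brute-force toy version of the maximality notion, for intuition only: combine two small fact bases and keep every consistent subset of their union that cannot be extended without violating an integrity constraint. This propositional sketch ignores function symbols and deduction and is not the paper's algorithm; the facts and the constraint are made up.

```python
# Brute-force toy sketch of "maximally combining" knowledge bases subject to
# an integrity constraint (simplified stand-in for the paper's setting).
from itertools import combinations

KB1 = {"door_open", "alarm_armed"}
KB2 = {"door_closed", "lights_on"}

def consistent(facts):
    """Integrity constraint: a door cannot be both open and closed."""
    return not {"door_open", "door_closed"} <= facts

def maximal_consistent_combinations(kb1, kb2):
    union = kb1 | kb2
    # All consistent subsets of the combined facts.
    candidates = [set(c) for r in range(len(union), -1, -1)
                  for c in combinations(sorted(union), r)
                  if consistent(set(c))]
    # Keep only those not strictly contained in another consistent subset.
    return [c for c in candidates
            if not any(c < other for other in candidates)]

if __name__ == "__main__":
    for combo in maximal_consistent_combinations(KB1, KB2):
        print(sorted(combo))
```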


Adaptive Agents and Multi-Agent Systems | 2003

Coalition formation with uncertain heterogeneous information

Sarit Kraus; Onn Shehory; Gilad Taase

Coalition formation methods allow agents to join together and are thus necessary in cases where tasks can only be performed cooperatively by groups. This is the case in the Request For Proposal (RFP) domain, where a requester business agent issues an RFP, a complex task comprised of sub-tasks, and several service provider agents need to join together to address this RFP. In such environments the value of the RFP may be common knowledge; however, the costs that an agent incurs for performing a specific sub-task are unknown to other agents. Additionally, time for addressing RFPs is limited. These constraints make it hard to apply traditional coalition formation mechanisms, since those assume complete information, and time constraints are of lesser significance there. To address this problem, we have developed a protocol that enables agents to negotiate and form coalitions, and provide them with simple heuristics for choosing coalition partners. The protocol and the heuristics allow the agents to form coalitions in the face of time constraints and incomplete information. The overall payoff of agents using our heuristics is very close to an experimentally measured optimal value, as our extensive experimental evaluation shows.
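
For intuition, the RFP setting can be caricatured as follows (a hypothetical sketch, not the paper's protocol or heuristics): providers privately know their own sub-task costs, submit bids, and the requester greedily assigns each sub-task to the cheapest capable bidder, forming the coalition only if the RFP value covers the total cost. All names, costs, and values below are invented.

```python
# Hypothetical toy sketch of the RFP coalition-formation setting.
RFP_VALUE = 100.0
SUBTASKS = ["design", "manufacture", "ship"]

# Private costs; a missing entry means the provider cannot do that sub-task.
PROVIDER_COSTS = {
    "agent_a": {"design": 30.0, "manufacture": 55.0},
    "agent_b": {"manufacture": 40.0, "ship": 20.0},
    "agent_c": {"design": 35.0, "ship": 15.0},
}

def form_coalition(subtasks, provider_costs, rfp_value):
    assignment, total_cost = {}, 0.0
    for task in subtasks:
        bids = [(costs[task], provider)
                for provider, costs in provider_costs.items() if task in costs]
        if not bids:
            return None  # no provider can perform this sub-task
        cost, provider = min(bids)  # cheapest capable bidder
        assignment[task] = provider
        total_cost += cost
    # The coalition forms only if the RFP value covers the total cost.
    return (assignment, rfp_value - total_cost) if total_cost <= rfp_value else None

if __name__ == "__main__":
    print(form_coalition(SUBTASKS, PROVIDER_COSTS, RFP_VALUE))
```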


International Conference on Robotics and Automation | 2008

Multi-robot perimeter patrol in adversarial settings

Noa Agmon; Sarit Kraus; Gal A. Kaminka

This paper considers the problem of multi-robot patrol around a closed area in the presence of an adversary attempting to penetrate into the area. If the adversary knows the patrol scheme of the robots and the robots use a deterministic patrol algorithm, then in many cases it is possible to penetrate with probability 1. This paper therefore considers a non-deterministic patrol scheme for the robots, such that their movement is characterized by a probability p. This patrol scheme reduces the probability of penetration, even under the assumption of a strong opponent that knows the patrol scheme. We offer an optimal polynomial-time algorithm for finding the probability p such that the minimal probability of penetration detection throughout the perimeter is maximized. We describe three robotic motion models, defined by the movement characteristics of the robots. The algorithm described herein is suitable for all three models.
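
The maximin structure of the problem can be sketched numerically. The toy model below is a simplification, not the paper's polynomial-time algorithm or its three motion models: a single robot walks a discrete cyclic perimeter, moving forward with probability p and backward with 1 - p; a penetration at a segment takes T steps and is detected if the robot visits that segment during the window. Detection probabilities are estimated by Monte Carlo and p is grid-searched to maximize the worst-case segment.

```python
# Toy numeric sketch of the maximin patrol idea (simplified; not the paper's
# algorithm or motion models).  One robot patrols a cycle of N segments.
import random

N, T, TRIALS = 12, 8, 2000  # perimeter size, penetration time, samples

def detection_probability(p, target, rng):
    """Estimate P(robot visits `target` within T steps) for turn bias p."""
    detected = 0
    for _ in range(TRIALS):
        position = 0  # robot starts at segment 0
        for _ in range(T):
            position = (position + (1 if rng.random() < p else -1)) % N
            if position == target:
                detected += 1
                break
    return detected / TRIALS

def worst_case(p, rng):
    # The adversary attacks the segment with the lowest detection probability
    # (any segment except the robot's current one).
    return min(detection_probability(p, s, rng) for s in range(1, N))

if __name__ == "__main__":
    rng = random.Random(0)
    best_p, best_value = max(((p / 10, worst_case(p / 10, rng))
                              for p in range(11)),
                             key=lambda pv: pv[1])
    print(f"best p = {best_p:.1f}, worst-case detection = {best_value:.2f}")
```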


Applied Logic Series | 1999

The Evolution of SharedPlans

Barbara J. Grosz; Sarit Kraus

Rational agents often need to work together. There are jobs that cannot be done by one agent—for example, singing a duet or operating a computer network—and jobs that are more efficiently done by more than one agent—for example, hanging a door or searching the Internet. Collaborative behavior—coordinated activity in which the participants work jointly with each other to satisfy a shared goal—is more than the sum of individual acts [24, 8] and may be distinguished from both interaction and simple coordination in terms of the commitments agents make to each other [4, 10, 9]. A theory of collaboration must therefore treat not only the intentions, abilities, and knowledge about action of individual agents, but also their coordination in group planning and acting. It also must account for the ways in which plans are incrementally formed and executed by the participants.


Communications of the ACM | 2010

Can automated agents proficiently negotiate with humans?

Raz Lin; Sarit Kraus

Exciting research in the design of automated negotiators is making great progress.

Collaboration


Dive into Sarit Kraus's collaborations.

Top Co-Authors

Avi Rosenfeld

Jerusalem College of Technology

Milind Tambe

University of Southern California

Amos Azaria

Carnegie Mellon University

Praveen Paruchuri

University of Southern California
