Publications


Featured research published by Inon Zuckerman.


Synthese | 2012

Combining psychological models with machine learning to better predict people's decisions

Avi Rosenfeld; Inon Zuckerman; Amos Azaria; Sarit Kraus

Creating agents that proficiently interact with people is critical for many applications. Towards creating these agents, models are needed that effectively predict people’s decisions in a variety of problems. To date, two approaches have been suggested to generally describe people’s decision behavior. One approach creates a-priori predictions about people’s behavior, based either on theoretical rational behavior or on psychological models, including bounded rationality. A second approach focuses on creating models based exclusively on observations of people’s behavior; at the forefront of such methods are various machine learning algorithms. This paper explores how these two approaches can be compared and combined in different types of domains. In relatively simple domains, both psychological models and machine learning yield clear prediction models with nearly identical results. In more complex domains, the exact action predicted by psychological models is not even clear, and machine learning models are even less accurate. Nonetheless, we present a novel approach of creating hybrid methods that incorporate features from psychological models in conjunction with machine learning in order to create significantly improved models for predicting people’s decisions. To demonstrate these claims, we present an overview of previous and new results, taken from representative domains ranging from a relatively simple optimization problem to complex domains such as negotiation and coordination without communication.
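The hybrid idea described in the abstract can be conveyed with a minimal sketch: predictions of an a-priori psychological model are appended to the raw observation features before a standard learner is trained. Everything below (the toy aspiration rule, the data, the function names) is an illustrative assumption, not taken from the paper.

```python
# Sketch: augmenting raw observations with a feature derived from a
# psychological model before training an ordinary classifier.

def psych_prediction(offer_value, aspiration=0.5):
    """Toy bounded-rationality rule: accept iff the offer meets aspiration."""
    return 1 if offer_value >= aspiration else 0

def augment(raw_features):
    """Append the psychological model's prediction as an extra feature."""
    return [row + [psych_prediction(row[0])] for row in raw_features]

# Raw data: [offer_value, time_pressure]; label = accepted?
X = [[0.9, 0.1], [0.2, 0.8], [0.6, 0.5], [0.3, 0.2]]
y = [1, 0, 1, 0]

X_hybrid = augment(X)
# A learning algorithm would now train on X_hybrid instead of X; the extra
# column injects the a-priori model's knowledge into the learned model.
```

Any off-the-shelf classifier could then be fit on `X_hybrid` in place of `X`; the point is only that the psychological model enters as a feature, not as the final predictor.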


International Joint Conference on Artificial Intelligence | 2011

Manipulating boolean games through communication

John Grant; Sarit Kraus; Michael Wooldridge; Inon Zuckerman

We address the issue of manipulating games through communication. In the specific setting we consider (a variation of Boolean games), we assume there is some set of environment variables, the value of which is not directly accessible to players; each player has their own beliefs about these variables, and makes decisions about what actions to perform based on these beliefs. The communication we consider takes the form of (truthful) announcements about the value of some environment variables; the effect of an announcement about some variable is to modify the beliefs of the players who hear the announcement so that they accurately reflect the value of the announced variables. By choosing announcements appropriately, it is possible to perturb the game away from certain rational outcomes and towards others. We specifically focus on the issue of stabilisation: making announcements that transform a game from having no stable states to one that has stable configurations.
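The announcement mechanism can be illustrated with a toy sketch: truthfully announcing a set of environment variables overwrites each hearer's belief about exactly those variables with their true values, leaving other beliefs untouched. The data structures and names here are hypothetical, not the paper's formalism.

```python
def announce(beliefs, true_values, announced_vars):
    """Truthful announcement: every hearer's belief about an announced
    variable is replaced by that variable's true value."""
    return {player: {var: (true_values[var] if var in announced_vars
                           else belief[var])
                     for var in belief}
            for player, belief in beliefs.items()}

beliefs = {"p1": {"x": False, "y": True}, "p2": {"x": False, "y": False}}
after = announce(beliefs, {"x": True, "y": False}, {"x"})
# p1 and p2 now correctly believe x is True; beliefs about y are unchanged
```

Choosing which variables to announce is then the manipulation problem: different announcement sets shift players' rational choices toward different outcomes.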


Autonomous Agents and Multi-Agent Systems | 2016

NegoChat-A: a chat-based negotiation agent with bounded rationality

Avi Rosenfeld; Inon Zuckerman; Erel Segal-Halevi; Osnat Drein; Sarit Kraus

To date, a variety of automated negotiation agents have been created. While each of these agents has been shown to be effective in negotiating with people in specific environments, they typically lack the natural language processing support required to enable real-world types of interactions. To address this limitation, we present NegoChat-A, an agent that incorporates several significant research contributions. First, we found that simply modifying existing agents to include an natural language processing module is insufficient to create these agents. Instead, agents that support natural language must have strategies that allow for partial agreements and issue-by-issue interactions. Second, we present NegoChat-A’s negotiation algorithm. This algorithm is based on bounded rationality, and specifically anchoring and aspiration adaptation theory. The agent begins each negotiation interaction by proposing a full offer, which serves as its anchor. Assuming this offer is not accepted, the agent then proceeds to negotiate via partial agreements, proposing the next issue for negotiation based on people’s typical urgency, or order of importance. We present a rigorous evaluation of NegoChat-A, showing its effectiveness in two different negotiation roles.
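The anchoring-then-issue-by-issue strategy can be sketched in a few lines: propose a full offer first, and if it is rejected, settle issues one at a time in urgency order. The issue names, values, and the trivially rejecting partner below are hypothetical illustrations, not NegoChat-A's actual protocol.

```python
def negotiate(anchor_offer, issues_by_urgency, partner_accepts):
    """Propose a full offer as an anchor; if rejected, build a partial
    agreement issue by issue, most urgent issue first."""
    if partner_accepts(anchor_offer):
        return anchor_offer
    agreement = {}
    for issue, proposal in issues_by_urgency:
        # a real agent would exchange offers per issue; we record one step
        agreement[issue] = proposal
    return agreement

issues = [("salary", 90000), ("start_date", "June"), ("car", "yes")]
result = negotiate(dict(issues), issues, lambda offer: False)
```

The anchor matters even when rejected: it frames the subsequent per-issue proposals, which is the anchoring effect the abstract refers to.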


Autonomous Agents and Multi-Agent Systems | 2011

Using focal point learning to improve human-machine tacit coordination

Inon Zuckerman; Sarit Kraus; Jeffrey S. Rosenschein

We consider an automated agent that needs to coordinate with a human partner when communication between them is not possible or is undesirable (tacit coordination games). Specifically, we examine situations where an agent and human attempt to coordinate their choices among several alternatives with equivalent utilities. We use machine learning algorithms to help the agent predict human choices in these tacit coordination domains. Experiments have shown that humans are often able to coordinate with one another in communication-free games, by using focal points, “prominent” solutions to coordination problems. We integrate focal point rules into the machine learning process, by transforming raw domain data into a new hypothesis space. We present extensive empirical results from three different tacit coordination domains. The Focal Point Learning approach results in classifiers with a 40–80% higher correct classification rate, and shorter training time, than when using regular classifiers, and a 35% higher correct classification rate than classical focal point techniques without learning. In addition, the integration of focal points into learning algorithms results in agents that are more robust to changes in the environment. We also present several results describing various biases that might arise in Focal Point based coordination.
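The core transformation, mapping raw options into a focal-point feature space before learning, can be sketched as follows. The two properties below (centrality of position, singularity of an option's attributes) are simplified illustrations in the spirit of focal point reasoning, not the paper's exact rule set.

```python
def focal_features(options):
    """Describe each option by focal-point style features instead of its
    raw attributes: how central its position is, and how unique it is."""
    n = len(options)
    feats = []
    for i, opt in enumerate(options):
        # 1.0 at the middle position, 0.0 at the edges
        centrality = 1.0 - abs(i - (n - 1) / 2) / max((n - 1) / 2, 1)
        # 1.0 if no other option looks the same, lower if duplicated
        singularity = 1.0 / sum(1 for o in options if o == opt)
        feats.append([centrality, singularity])
    return feats

feats = focal_features(["a", "b", "a"])
# the unique, central option "b" scores highest on both features
```

A classifier trained on such features, rather than on the raw domain encoding, is what the abstract reports as yielding the 40-80% improvement in correct classification.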


Studia Logica | 2014

Manipulating Games by Sharing Information

John Grant; Sarit Kraus; Michael Wooldridge; Inon Zuckerman

We address the issue of manipulating games through communication. In the specific setting we consider (a variation of Boolean games), we assume there is some set of environment variables, the values of which are not directly accessible to players; the players have their own beliefs about these variables, and make decisions about what actions to perform based on these beliefs. The communication we consider takes the form of (truthful) announcements about the values of some environment variables; the effect of an announcement is the modification of the beliefs of the players who hear the announcement so that they accurately reflect the values of the announced variables. By choosing announcements appropriately, it is possible to perturb the game away from certain outcomes and towards others. We specifically focus on the issue of stabilisation: making announcements that transform a game from having no stable states to one that has stable configurations.


European Conference on Artificial Intelligence | 2012

Improving local decisions in adversarial search

Brandon Wilson; Inon Zuckerman; Austin Parker; Dana S. Nau

Until recently, game-tree pathology (in which a deeper game-tree search results in worse play) has been thought to be quite rare. We provide an analysis that shows that every game should have some sections that are locally pathological, assuming that both players can potentially win the game. We also modify the minimax algorithm to recognize local pathologies in arbitrary games, and cut off search accordingly (shallower search is more effective than deeper search when local pathologies occur). We show experimentally that our modified search procedure avoids local pathologies and consequently provides improved performance, in terms of decision accuracy, when compared with the ordinary minimax algorithm.
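The modification described above amounts to an extra cutoff test in minimax: when a node is judged locally pathological, the search evaluates it immediately instead of deepening. Below, `pathological` is a hypothetical stand-in for the paper's detection test, and the demo tree is invented.

```python
def minimax(node, depth, maximizing, children, evaluate, pathological):
    """Minimax that cuts off search at nodes judged locally pathological,
    where a shallower evaluation is expected to be more accurate."""
    kids = children(node)
    if depth == 0 or not kids or pathological(node):
        return evaluate(node)
    values = [minimax(k, depth - 1, not maximizing,
                      children, evaluate, pathological) for k in kids]
    return max(values) if maximizing else min(values)

# Tiny hypothetical demo: root has two leaves worth 3 and 5.
tree = {"root": ["a", "b"]}
vals = {"root": 1, "a": 3, "b": 5}
best = minimax("root", 2, True,
               lambda n: tree.get(n, []), vals.__getitem__,
               lambda n: False)   # no pathology: search reaches the leaves
```

With `pathological` returning True at the root, the search would instead return the root's static evaluation, which is exactly the "shallower is better" behavior the paper exploits.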


European Conference on Artificial Intelligence | 2012

Guiding user choice during discussion by silence, examples and justifications

Maier Fenster; Inon Zuckerman; Sarit Kraus

This paper describes an approach for guiding human choice-making by a computerized agent, in a conversational setting, where both user and agent provide meaningful input. In the proposed approach, the agent attempts to convince a person by providing examples for the person to emulate or by providing justifications for the person to internalize and build or change her preferences accordingly. The agent can take into account examples and justifications provided by the person. In a series of experiments where the task was selecting a location for a school, a computer agent interacted with subjects using a textual chat-type interface, with different agent designs being used in different experiments. The results show that the example-providing agent outperformed the justification providing agent and both, surprisingly, outperformed an agent which presented the subject with both examples and justifications. In addition, it was demonstrated that in some cases the best strategy for the agent is to keep silent.


Autonomous Agents and Multi-Agent Systems | 2012

The adversarial activity model for bounded rational agents

Inon Zuckerman; Sarit Kraus; Jeffrey S. Rosenschein

Multiagent research provides an extensive literature on formal Beliefs-Desires-Intentions (BDI) based models describing the notion of teamwork and cooperation. However, multiagent environments are often neither cooperative nor collaborative; in many cases, agents have conflicting interests, leading to adversarial interactions. This form of interaction has not yet been formally defined in terms of the agents’ mental states: beliefs, desires, and intentions. This paper presents the Adversarial Activity model, a formal Beliefs-Desires-Intentions (BDI) based model for bounded rational agents operating in a zero-sum environment. In complex environments, attempts to use classical utility-based search methods with bounded rational agents can raise a variety of difficulties (e.g. implicitly modeling the opponent as an omniscient utility maximizer, rather than leveraging a more nuanced, explicit opponent model). We define the Adversarial Activity by describing the mental states of an agent situated in such an environment. We then present behavioral axioms that are intended to serve as design principles for building such adversarial agents. We illustrate the advantages of using the model as an architectural guideline by building agents for two adversarial environments: the Connect Four game and the Risk strategic board game. In addition, we explore the application of our approach by analyzing log files of completed Connect Four games, and gain additional insights on the axioms’ appropriateness.


Web Intelligence | 2011

Reasoning about Groups: A Cognitive Model for the Social Behavior Spectrum

Inon Zuckerman; Meirav Hadad

An important aspect of social intelligence is the ability to correctly capture the social structure and use it to navigate and achieve one’s goals. In this work we suggest a mental model that provides agents with similar social capabilities. The model captures the entire social behavior spectrum, and provides design principles that allow agents to reason and change their behavior according to their perception of the cooperative/competitive nature of the society. We also describe computationally the maximum attainable benefits when agents belong to different kinds of social groups. We conclude by exploring the group membership problem as a constraint satisfaction problem, and evaluate a few heuristics.


Knowledge and Information Systems | 2017

Decision-making and opinion formation in simple networks

Matan Leibovich; Inon Zuckerman; Avi Pfeffer; Ya'akov Gal

In many networked decision-making settings, information about the world is distributed across multiple agents and agents’ success depends on their ability to aggregate and reason about their local information over time. This paper presents a computational model of information aggregation in such settings in which agents’ utilities depend on an unknown event. Agents initially receive a noisy signal about the event and take actions repeatedly while observing the actions of their neighbors in the network at each round. Such settings characterize many distributed systems such as sensor networks for intrusion detection and routing systems for Internet traffic. Using the model, we show that (1) agents converge in action and in knowledge for a general class of decision-making rules and for all network structures; (2) all networks converge to playing the same action regardless of the network structure; and (3) for particular network configurations, agents can converge to the correct action when using a well-defined class of myopic decision rules. These theoretical results are also supported by a new simulation-based open-source empirical test-bed for facilitating the study of information aggregation in general networks.
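A toy simulation conveys the flavor of the convergence results: agents hold binary actions and, each round, adopt the majority action among themselves and their network neighbors. The myopic rule, tie-breaking, and network below are illustrative assumptions, not the paper's model.

```python
def run_rounds(actions, neighbors, rounds):
    """Each round, every agent adopts the majority action among itself
    and its neighbors (ties resolve to 0 in this toy rule)."""
    for _ in range(rounds):
        actions = [1 if sum(actions[j] for j in neighbors[i]) + actions[i]
                        > (len(neighbors[i]) + 1) / 2 else 0
                   for i in range(len(actions))]
    return actions

# Complete 3-agent network; agent 2 starts with the minority signal.
complete = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
final = run_rounds([1, 1, 0], complete, rounds=3)
```

On this complete network the minority agent is outvoted in the first round and all agents converge to the same action, mirroring result (2) in the abstract; richer rules and topologies are what the open-source test-bed is for.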

Collaboration


Dive into Inon Zuckerman’s collaborations.

Top Co-Authors

Avi Rosenfeld (Jerusalem College of Technology)
Jeffrey S. Rosenschein (Hebrew University of Jerusalem)
Ariel Felner (Ben-Gurion University of the Negev)