Publications


Featured research published by Ya'akov Gal.


Artificial Intelligence | 2010

Agent decision-making in open mixed networks

Ya'akov Gal; Barbara J. Grosz; Sarit Kraus; Avi Pfeffer; Stuart M. Shieber

Computer systems increasingly carry out tasks in mixed networks, that is, in group settings in which they interact both with other computer systems and with people. Participants in these heterogeneous human-computer groups vary in their capabilities, goals, and strategies; they may cooperate, collaborate, or compete. The presence of people in mixed networks raises challenges for the design and the evaluation of decision-making strategies for computer agents. This paper describes several new decision-making models that represent, learn, and adapt to various social attributes that influence people's decision-making, and presents a novel approach to evaluating such models. It identifies a range of social attributes in an open-network setting that influence people's decision-making and thus affect the performance of computer-agent strategies, and establishes the importance of learning and adaptation to the success of such strategies. The settings vary in the capabilities, goals, and strategies that people bring into their interactions. The studies deploy a configurable system called Colored Trails (CT) that generates a family of games. CT is an abstract, conceptually simple, but highly versatile game in which players negotiate and exchange resources to enable them to achieve their individual or group goals. It provides a realistic analogue to multi-agent task domains without requiring extensive domain modeling. It is less abstract than payoff matrices, and people exhibit less strategic and more helpful behavior in CT than in the identical payoff-matrix decision-making context. By not requiring extensive domain modeling, CT enables agent researchers to focus their attention on strategy design, and it provides an environment in which the influence of social factors can be better isolated and studied.
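
The paper contains no code, but the flavor of a CT-style resource exchange can be gestured at with a minimal sketch. Everything below (color names, path encoding, trade rule) is a simplified assumption for illustration, not the actual Colored Trails implementation:

```python
# Minimal sketch of a Colored Trails-style exchange (simplified assumptions,
# not the actual CT implementation).
from collections import Counter

def chips_needed(path):
    """Chips required to traverse a path of colored squares."""
    return Counter(path)

def can_reach_goal(chips, path):
    """A player can traverse `path` only with one chip per square's color."""
    return all(chips[color] >= n for color, n in chips_needed(path).items())

def should_accept_trade(my_chips, give, receive, path):
    """Accept a trade iff it newly enables reaching the goal."""
    after = Counter(my_chips)
    after.subtract(Counter(give))
    if any(v < 0 for v in after.values()):
        return False                      # cannot give chips we do not hold
    after.update(Counter(receive))
    return can_reach_goal(after, path) and not can_reach_goal(my_chips, path)

# Example: holding two red chips, a green chip is needed to finish the path.
me = Counter({"red": 2})
print(should_accept_trade(me, give=["red"], receive=["green"],
                          path=["red", "green"]))  # True
```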


Adaptive Agents and Multi-Agent Systems | 2011

A study of computational and human strategies in revelation games

Noam Peled; Ya'akov Gal; Sarit Kraus

Many negotiations in the real world are characterized by incomplete information, and participants' success depends on their ability to reveal information in a way that facilitates agreement without compromising their individual gain. This paper presents an agent design that is able to negotiate proficiently with people in settings in which agents can choose to truthfully reveal their private information before engaging in multiple rounds of negotiation. Such settings are analogous to real-world situations in which people need to decide whether to disclose information, such as when negotiating over health plans and business transactions. The agent combined a decision-theoretic approach with traditional machine-learning techniques to reason about the effects of players' revelation decisions on people's negotiation behavior. In empirical studies spanning hundreds of subjects, it was shown to outperform people as well as agents playing the equilibrium strategy of the game. It was also more likely to reach agreement than people or agents playing equilibrium strategies. In addition, it had a positive effect on people's play, allowing them to reach significantly better performance than when playing with other people. These results are shown to generalize across two different settings that varied how players depend on each other in the negotiation.
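
As a loose illustration of the decision-theoretic core this abstract describes, here is a sketch of how an agent might decide whether to reveal, assuming a learned acceptance model. The function names and the toy model are assumptions, not the paper's agent:

```python
# Hedged sketch of a decision-theoretic revelation choice (not the paper's agent).
# Assumes a learned model p_accept(offer_value, revealed) estimating the chance
# that the human accepts an offer of that value given whether private
# information was revealed beforehand.

def expected_utility(offer_value, revealed, p_accept, fallback=0.0):
    """EU of an offer: accepted payoff weighted by its acceptance probability."""
    p = p_accept(offer_value, revealed)
    return p * offer_value + (1 - p) * fallback

def choose_revelation(offers, p_accept):
    """Reveal iff the best expected utility with revelation beats hiding."""
    best_reveal = max(expected_utility(o, True, p_accept) for o in offers)
    best_hide = max(expected_utility(o, False, p_accept) for o in offers)
    return best_reveal > best_hide

def toy_model(value, revealed):
    """Toy acceptance model: revelation builds trust; greedier asks get rejected."""
    return min(1.0, (0.9 if revealed else 0.6) / value)

print(choose_revelation([1.0, 2.0, 3.0], toy_model))  # True under the toy model
```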


Autonomous Agents and Multi-Agent Systems | 2016

Strategic advice provision in repeated human-agent interactions

Amos Azaria; Ya'akov Gal; Sarit Kraus; Claudia V. Goldman

This paper addresses the problem of automated advice provision in scenarios that involve repeated interactions between people and computer agents. This problem arises in many applications, such as route-selection systems, office assistants, and climate-control systems. To succeed in such settings, agents must reason about how their advice influences people's future actions or decisions over time. This work models such scenarios as a family of repeated bilateral interactions called “choice selection processes”, in which humans or computer agents may share certain goals but are essentially self-interested. We propose a social agent for advice provision (SAP) for such environments that generates advice using a social utility function that weights the individual utilities of both the agent and the human participant. The SAP agent models human choice selection using hyperbolic discounting and samples the model to infer the best weights for its social utility function. We demonstrate the effectiveness of SAP in two separate domains that vary in the complexity of modeling human behavior as well as in the information that is available to people when they need to decide whether to accept the agent's advice. In both of these domains, we evaluated SAP in extensive empirical studies involving hundreds of human subjects. SAP was compared to agents using alternative models of choice selection processes informed by behavioral economics and psychological models of decision-making. Our results show that in both domains the SAP agent outperformed the alternative models. This work demonstrates the efficacy of combining computational methods with behavioral economics to model how people reason about machine-generated advice, and presents a general methodology for agent design in such repeated advice settings.
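
The abstract names two concrete ingredients: hyperbolic discounting (a standard form is V = v / (1 + k·t)) and a weighted social utility. A minimal sketch of how they might fit together follows; the parameter names, option fields, and toy payoffs are assumptions, not SAP's actual implementation:

```python
# Sketch of the two ingredients named in the abstract (parameters are assumed).

def hyperbolic_discount(value, delay, k=1.0):
    """Standard hyperbolic discounting: V = v / (1 + k * t)."""
    return value / (1.0 + k * delay)

def social_utility(agent_utility, human_utility, w=0.5):
    """Weighted combination of the agent's and the human's utilities."""
    return w * agent_utility + (1.0 - w) * human_utility

def best_advice(options, w, k):
    """Pick the advice option maximizing social utility of discounted payoffs."""
    def score(opt):
        ua = hyperbolic_discount(opt["agent_payoff"], opt["delay"], k)
        uh = hyperbolic_discount(opt["human_payoff"], opt["delay"], k)
        return social_utility(ua, uh, w)
    return max(options, key=score)

options = [
    {"name": "fast route", "agent_payoff": 2.0, "human_payoff": 5.0, "delay": 0},
    {"name": "fuel-saving route", "agent_payoff": 6.0, "human_payoff": 3.0, "delay": 2},
]
print(best_advice(options, w=0.5, k=1.0)["name"])  # "fast route"
```

In the paper's terms, the weight w would be inferred by sampling the human-choice model rather than fixed by hand as it is here.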


Computers in Human Behavior | 2012

Human-agent teamwork in dynamic environments

A. van Wissen; Ya'akov Gal; Bart A. Kamphorst; M.V. Dignum

Teamwork between humans and computer agents has become increasingly prevalent. This paper presents a behavioral study of fairness and trust in a heterogeneous setting comprising both computer agents and human participants. It investigates people's choice of teammates and their commitment to their teams in a dynamic environment in which actions occur at a fast pace and decisions are made within tightly constrained time frames, under conditions of uncertainty and partial information. In this setting, participants could form teams by negotiating over the division of a reward for the successful completion of a group task. Participants could also choose to defect from their existing teams in order to join or create other teams. Results show that when people form teams, they offer significantly less reward to agents than they offer to people. The most significant factor affecting people's decisions about whether to defect from their existing teams is the extent to which they had successful previous interactions with other team members. Also, there is no significant difference in people's rate of defection from agent-led teams as compared to their defection from human-led teams. These results are significant for agent designers and behavioral researchers who study human-agent interactions.


ACM Transactions on Intelligent Systems and Technology | 2011

An Adaptive Agent for Negotiating with People in Different Cultures

Ya'akov Gal; Sarit Kraus; Michele J. Gelfand; Hilal Khashan; Elizabeth Salmon

The rapid dissemination of technology such as the Internet across geographical and ethnic lines is opening up opportunities for computer agents to negotiate with people of diverse cultural and organizational affiliations. To negotiate proficiently with people in different cultures, agents need to be able to adapt to the way the behavioral traits of other participants change over time. This article describes a new agent for repeated bilateral negotiation that was designed to model and adapt its behavior to the individual traits exhibited by its negotiation partner. The agent's decision-making model combined a social utility function that represented the behavioral traits of the other participant with a rule-based mechanism that used the utility function to make decisions in the negotiation process. The agent was deployed in a strategic setting in which both participants needed to complete their individual tasks by reaching agreements and exchanging resources, the number of negotiation rounds was not fixed in advance, and agreements were not binding. The agent negotiated with human subjects in the United States and Lebanon in situations that varied the dependency relationships between participants at the onset of negotiation. There was no prior data available about the way people in these two countries would respond to different negotiation strategies. Results showed that the agent was able to adopt a different negotiation strategy for each country. Its average performance across both countries was equal to that of people. However, the agent outperformed people in the United States, because it learned to make offers that were likely to be accepted by people while being more beneficial to the agent than to them. In contrast, the agent was outperformed by people in Lebanon, because it adopted a high reliability measure that allowed people to take advantage of it. These results provide insight for designers of computer agents in the types of multicultural settings we considered, showing that adaptation is a viable approach to designing computer agents that negotiate with people when no prior data about their behavior is available.
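
A rough sketch of the kind of trait-weighted social utility and rule-based decision the abstract describes; the trait names, weights, threshold, and update rule below are assumptions made for illustration:

```python
# Hedged sketch: trait-weighted social utility plus a rule-based accept/reject
# decision, in the spirit of the abstract (weights and rules are assumed).

def social_utility(own_gain, partner_gain, traits):
    """Score an offer given learned behavioral-trait weights for the partner."""
    return (own_gain
            + traits["cooperativeness"] * partner_gain
            - traits["exploitability"] * max(0.0, partner_gain - own_gain))

def decide(offer, traits, threshold=1.0):
    """Rule-based mechanism: accept offers whose utility clears a threshold."""
    u = social_utility(offer["own_gain"], offer["partner_gain"], traits)
    return "accept" if u >= threshold else "reject"

def update_traits(traits, partner_kept_commitment, rate=0.1):
    """Adapt trait estimates online from whether the partner honored agreements
    (recall that agreements in this setting are not binding)."""
    delta = rate if partner_kept_commitment else -rate
    traits["cooperativeness"] = min(1.0, max(0.0, traits["cooperativeness"] + delta))
    return traits

traits = {"cooperativeness": 0.5, "exploitability": 0.3}
print(decide({"own_gain": 1.0, "partner_gain": 2.0}, traits))  # "accept"
traits = update_traits(traits, partner_kept_commitment=False)
```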


Decision Support Systems | 2014

Training with automated agents improves people's behavior in negotiation and coordination tasks

Raz Lin; Ya'akov Gal; Sarit Kraus; Yaniv Mazliah

There is inconclusive evidence on whether practicing tasks with computer agents improves people's performance on those tasks. This paper studies the question empirically using extensive experiments involving bilateral negotiation and three-player coordination tasks played by hundreds of human subjects. We used different training methods for subjects, including practice interactions with other human participants, interacting with agents from the literature, and asking participants to design an automated agent to serve as their proxy in the task. Following training, we compared the performance of subjects when playing state-of-the-art agents from the literature. The results revealed that in the negotiation settings, in most cases, training with computer agents increased people's performance compared to training with other people. In the three-player coordination game, training with computer agents increased people's performance when matched with the state-of-the-art agent. These results demonstrate the efficacy of using computer agents as tools for improving people's skills when interacting in strategic settings, saving considerable effort and providing better performance than training with human counterparts.


Adaptive Agents and Multi-Agent Systems | 2006

Predicting people's bidding behavior in negotiation

Ya'akov Gal; Avi Pfeffer

This paper presents a statistical learning approach to predicting people's bidding behavior in negotiation. Our study consists of multiple two-player negotiation scenarios in which bids of multi-valued goods can be accepted or rejected. The bidding task is formalized as a selection process in which a proposer player chooses a single bid to offer to a responder player from a set of candidate proposals. Each candidate is associated with features that affect whether or not it is the chosen bid. These features represent social factors that affect people's play. We present and compare several algorithms for predicting the chosen bid and for learning a model from data. Data collection and evaluation of these algorithms are performed on both human and synthetic data sets. Results on both data sets show that an algorithm that reasons about dependencies between the features of candidate proposals is significantly more successful than an algorithm that assumes the candidates are independent. On the synthetic data set, this algorithm achieved near-optimal performance. We also study the problem of inferring the features of a proposal given the fact that it was the chosen bid. A baseline importance-sampling algorithm is first presented and then compared with several approximations that attain much better performance.
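
One standard way to model "choose one bid from a set of candidates" while letting candidates interact is a conditional-choice (softmax) model, where each candidate's probability depends on the whole set through the normalization. The sketch below is in that spirit; the feature names and weights are assumptions, and the paper's actual algorithms are not reproduced here:

```python
# Sketch of a conditional-choice model over candidate proposals (assumed form).
# Each candidate bid has a feature vector; choice probability is a softmax over
# linear scores, which couples candidates through the shared normalization.
import math

def choice_probabilities(candidates, weights):
    """P(choose bid i) proportional to exp(w . f_i), normalized over the set."""
    scores = [sum(w * f for w, f in zip(weights, c)) for c in candidates]
    m = max(scores)                      # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

# Assumed features per candidate: (proposer benefit, responder benefit, fairness gap)
candidates = [(3.0, 1.0, 2.0), (2.0, 2.0, 0.0), (1.0, 3.0, 2.0)]
weights = (1.0, 0.5, -0.8)               # e.g. people penalize unfair offers
probs = choice_probabilities(candidates, weights)
print(max(zip(probs, candidates))[1])    # most likely chosen bid: (2.0, 2.0, 0.0)
```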


KSII Transactions on Internet and Information Systems | 2013

Plan Recognition and Visualization in Exploratory Learning Environments

Ofra Amir; Ya'akov Gal

Modern pedagogical software is open-ended and flexible, allowing students to solve problems through exploration and trial and error. Such exploratory settings provide a rich educational environment for students, but they challenge teachers to keep track of students' progress and to assess their performance. This article presents techniques for recognizing students' activities in such pedagogical software and for visualizing these activities to teachers. It describes a new plan recognition algorithm that uses a recursive grammar that takes into account repetition and interleaving of activities. This algorithm was evaluated empirically using an exploratory environment for teaching chemistry that is used by thousands of students in several countries. It was always able to correctly infer students' plans when the appropriate grammar was available. We designed two methods for visualizing students' activities for teachers: one that visualizes students' inferred plans, and one that visualizes students' interactions over a timeline. Both visualization methods were preferred to, and found more helpful than, a baseline method that showed a movie of students' interactions. These results demonstrate the benefit of combining novel AI techniques with visualization methods to design collaborative systems that support students in their problem solving and teachers in their understanding of students' performance.
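
To make the grammar-based idea concrete, here is a toy recursive recognizer over a hypothetical chemistry-lab plan grammar. The grammar, action names, and the '+' repetition marker are illustrative assumptions, not the article's algorithm (which also handles interleaved activities):

```python
# Toy sketch of grammar-based plan recognition with repetition (assumptions only).
GRAMMAR = {
    "Titrate":    ["Setup", "AddDrop+", "ReadResult"],
    "Setup":      ["fill_burette", "place_flask"],
    "AddDrop":    ["add_drop"],
    "ReadResult": ["read_result"],
}

def recognize(symbol, actions, i=0):
    """Return end indices j such that actions[i:j] derives `symbol`;
    a trailing '+' on the symbol allows one or more repetitions."""
    repeat = symbol.endswith("+")
    name = symbol.rstrip("+")

    def derive_once(start):
        if name not in GRAMMAR:          # terminal: match one observed action
            ok = start < len(actions) and actions[start] == name
            return [start + 1] if ok else []
        ends = [start]
        for child in GRAMMAR[name]:      # match child activities in order
            ends = [j for e in ends for j in recognize(child, actions, e)]
        return ends

    ends, frontier = [], derive_once(i)
    while frontier:                      # extend for each extra repetition
        ends += frontier
        if not repeat:
            break
        frontier = [j for e in frontier for j in derive_once(e)]
    return sorted(set(ends))

obs = ["fill_burette", "place_flask", "add_drop", "add_drop", "read_result"]
print(len(obs) in recognize("Titrate", obs))  # True: the sequence parses as Titrate
```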


Adaptive Agents and Multi-Agent Systems | 2004

Reasoning about Rationality and Beliefs

Ya'akov Gal; Avi Pfeffer

In order to succeed, agents playing games must reason about the mechanics of the game, the strategies of other agents, other agents' reasoning about their strategies, and the rationality of agents. This paper presents a compact, natural, and highly expressive language for reasoning about the beliefs and rationality of agents' decision-making processes in games. It extends a previous version of the language in a number of important ways. Agents can reason directly about the rationality of other agents; agents' beliefs are allowed to conflict with one another, including situations in which these beliefs form a cyclic structure; and agents' play can deviate from the normative game-theoretic solution. The paper formalizes the equilibria that hold with respect to agents' models and behavior, and provides algorithms for computing them. It also shows that the language is strictly more expressive than that of Bayesian games.
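
As a loose illustration of mutually referring agent models, the sketch below uses Python object references to form the kind of cyclic belief structure the abstract mentions. The class, fields, and the toy game are assumptions, not the paper's language:

```python
# Loose sketch of mutually referring agent models (illustrative only; the
# paper's language and its equilibrium computation are not reproduced here).
class AgentModel:
    def __init__(self, name, strategy, rational=True):
        self.name = name
        self.strategy = strategy      # believed mixed strategy over actions
        self.rational = rational      # whether this agent is believed rational
        self.belief_about = None      # model of the other agent (may be cyclic)

def expected_payoff(action, other, payoffs):
    """EU of `action` against the believed strategy of the other agent."""
    return sum(p * payoffs[(action, a)] for a, p in other.strategy.items())

# A 2x2 coordination game: both players prefer matching actions.
payoffs = {("L", "L"): 2, ("L", "R"): 0, ("R", "L"): 0, ("R", "R"): 1}

alice = AgentModel("alice", {"L": 0.5, "R": 0.5})
bob = AgentModel("bob", {"L": 0.8, "R": 0.2})
alice.belief_about, bob.belief_about = bob, alice   # cyclic belief structure

best = max(["L", "R"], key=lambda a: expected_payoff(a, alice.belief_about, payoffs))
print(best)  # "L": against bob's believed 80% L, matching on L pays more
```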


Computational Science and Engineering | 2009

Modeling User Perception of Interaction Opportunities for Effective Teamwork

Ece Kamar; Ya'akov Gal; Barbara J. Grosz

This paper presents a model of collaborative decision-making for groups that involve people and computer agents. The model distinguishes between actions relating to participants' commitment to the group and actions relating to their individual tasks, and uses this distinction to decompose group decision-making into smaller problems that can be solved efficiently. It allows computer agents to reason about the benefits of their actions for a collaboration and about the ways in which human participants perceive these benefits. The model was tested in a setting in which computer agents need to decide whether to interrupt people to obtain potentially valuable information. Results show that the magnitude of the benefit of an interruption to the collaboration is a major factor influencing the likelihood that people will accept interruption requests. They further establish that people's perception of their partners' type (human or computer) significantly affected their perception of the usefulness of interruptions when the benefit of the interruption was not clear-cut. These results imply that system designers need to consider not only the possible benefits of interruptions to collaborative human-computer teams but also the way such benefits are perceived by people.
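
A back-of-the-envelope sketch of the interruption decision this abstract studies follows; the acceptance probabilities, thresholds, and the benefit/cost framing are invented for illustration, not the paper's fitted values:

```python
# Hedged sketch of an interruption decision (all numbers are assumptions).

def p_accept(benefit, partner_is_human):
    """Assumed acceptance model: people accept interruptions when the benefit
    is clearly large; perceived partner type matters mainly when the benefit
    is ambiguous, echoing the abstract's finding."""
    if benefit >= 2.0:
        return 0.9
    return 0.5 if partner_is_human else 0.3   # assumed ambiguous-case gap

def should_interrupt(benefit, cost, partner_is_human):
    """Interrupt iff expected collaborative gain exceeds the interruption cost."""
    return p_accept(benefit, partner_is_human) * benefit > cost

print(should_interrupt(benefit=1.0, cost=0.4, partner_is_human=True))   # True
print(should_interrupt(benefit=1.0, cost=0.4, partner_is_human=False))  # False
```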

Collaboration


Dive into Ya'akov Gal's collaboration.

Top Co-Authors

Avi Pfeffer (Charles River Laboratories)
Reuth Mirsky (Ben-Gurion University of the Negev)
Avi Segal (Ben-Gurion University of the Negev)
Oriel Uzan (Ben-Gurion University of the Negev)
Amos Azaria (Carnegie Mellon University)
Meir Kalech (Ben-Gurion University of the Negev)