Publication


Featured research published by Yair Zick.


IEEE Symposium on Security and Privacy | 2016

Algorithmic Transparency via Quantitative Input Influence: Theory and Experiments with Learning Systems

Anupam Datta; Shayak Sen; Yair Zick

Algorithmic systems that employ machine learning play an increasing role in making substantive decisions in modern society, ranging from online personalization to insurance and credit decisions to predictive policing. But their decision-making processes are often opaque: it is difficult to explain why a certain decision was made. We develop a formal foundation to improve the transparency of such decision-making systems. Specifically, we introduce a family of Quantitative Input Influence (QII) measures that capture the degree of influence of inputs on outputs of systems. These measures provide a foundation for the design of transparency reports that accompany system decisions (e.g., explaining a specific credit decision) and for testing tools useful for internal and external oversight (e.g., to detect algorithmic discrimination). Distinctively, our causal QII measures carefully account for correlated inputs while measuring influence. They support a general class of transparency queries and can, in particular, explain decisions about individuals (e.g., a loan decision) and groups (e.g., disparate impact based on gender). Finally, since single inputs may not always have high influence, the QII measures also quantify the joint influence of a set of inputs (e.g., age and income) on outcomes (e.g., loan decisions) and the marginal influence of individual inputs within such a set (e.g., income). Since a single input may be part of multiple influential sets, the average marginal influence of the input is computed using principled aggregation measures, such as the Shapley value, previously applied to measure influence in voting. Further, since transparency reports could compromise privacy, we explore the transparency-privacy tradeoff and prove that a number of useful transparency reports can be made differentially private with very little addition of noise.
Our empirical validation with standard machine learning algorithms demonstrates that QII measures are a useful transparency mechanism when black box access to the learning system is available. In particular, they provide better explanations than standard associative measures for a host of scenarios that we consider. Further, we show that in the situations we consider, QII is efficiently approximable and can be made differentially private while preserving accuracy.
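The Shapley-value aggregation that QII builds on can be sketched in a few lines. The two-feature "loan" model, the feature names, and the baseline point below are hypothetical illustrations, not the paper's experimental setup; the code computes each feature's exact Shapley influence by averaging its marginal effect on the model output over all orders in which features are switched from the baseline to the instance being explained.

```python
from itertools import permutations

def shapley_influence(model, baseline, instance):
    """Exact Shapley attribution: average each feature's marginal contribution
    over all orders of switching features from baseline values to the
    instance's values."""
    features = list(instance)
    phi = {f: 0.0 for f in features}
    orders = list(permutations(features))
    for order in orders:
        x = dict(baseline)
        prev = model(x)
        for f in order:
            x[f] = instance[f]
            cur = model(x)
            phi[f] += cur - prev
            prev = cur
    return {f: v / len(orders) for f, v in phi.items()}

# Hypothetical "loan" classifier: approve when 2*income + age_score >= 2.
model = lambda x: 1 if 2 * x["income"] + x["age_score"] >= 2 else 0

baseline = {"income": 0, "age_score": 0}
applicant = {"income": 1, "age_score": 1}
print(shapley_influence(model, baseline, applicant))
# income alone flips the decision, so it receives all the influence:
# {'income': 1.0, 'age_score': 0.0}
```

The exact computation enumerates all orderings and is exponential in the number of features; in practice, influence would be estimated by sampling orderings, which is where the approximability results above come in.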


Archive | 2017

Algorithmic Transparency via Quantitative Input Influence

Anupam Datta; Shayak Sen; Yair Zick

Algorithmic systems that employ machine learning are often opaque—it is difficult to explain why a certain decision was made. We present a formal foundation to improve the transparency of such decision-making systems. Specifically, we introduce a family of Quantitative Input Influence (QII) measures that capture the degree of input influence on system outputs. These measures provide a foundation for the design of transparency reports that accompany system decisions (e.g., explaining a specific credit decision) and for testing tools useful for internal and external oversight (e.g., to detect algorithmic discrimination). Distinctively, our causal QII measures carefully account for correlated inputs while measuring influence. They support a general class of transparency queries and can, in particular, explain decisions about individuals and groups. Finally, since single inputs may not always have high influence, the QII measures also quantify the joint influence of a set of inputs (e.g., age and income) on outcomes (e.g., loan decisions) and the average marginal influence of individual inputs within such a set (e.g., income) using principled aggregation measures, such as the Shapley value, previously applied to measure influence in voting.


International Joint Conference on Artificial Intelligence | 2011

The Shapley value as a function of the quota in weighted voting games

Yair Zick; Alexander Skopalik; Edith Elkind

In weighted voting games, each agent has a weight, and a coalition of players is deemed to be winning if its weight meets or exceeds the given quota. An agent's power in such games is usually measured by her Shapley value, which depends both on the agent's weight and on the quota. [Zuckerman et al., 2008] show that one can alter a player's power significantly by modifying the quota, and investigate some of the related algorithmic issues. In this paper, we answer a number of questions that were left open by [Zuckerman et al., 2008]: we show that, even though deciding whether a quota maximizes or minimizes an agent's Shapley value is coNP-hard, finding a Shapley value-maximizing quota is easy. Minimizing a player's power appears to be more difficult. However, we propose and evaluate a heuristic for this problem, which takes into account the voter's rank and the overall weight distribution. We also explore a number of other algorithmic issues related to quota manipulation.
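As a concrete (if exponential-time) illustration of how the Shapley value depends on the quota, the brute-force sketch below enumerates all player orderings; the weights and quotas are made-up examples, and this is not one of the paper's algorithms.

```python
from itertools import permutations

def shapley_wvg(weights, quota):
    """Exact Shapley values in a weighted voting game by enumerating all
    player orderings; a player's value is the fraction of orderings in
    which she is pivotal (her arrival turns a losing prefix winning)."""
    n = len(weights)
    shapley = [0.0] * n
    orders = list(permutations(range(n)))
    for order in orders:
        total = 0
        for player in order:
            if total < quota <= total + weights[player]:
                shapley[player] += 1
            total += weights[player]
    return [s / len(orders) for s in shapley]

weights = [4, 2, 1]
print(shapley_wvg(weights, quota=4))  # player 0 is a dictator: [1.0, 0.0, 0.0]
print(shapley_wvg(weights, quota=5))  # power shifts: [2/3, 1/6, 1/6]
```

Raising the quota from 4 to 5 strips player 0 of dictatorship and gives the two light players positive power, which is exactly the kind of quota manipulation the paper studies.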


Journal of Artificial Intelligence Research | 2014

Arbitration and stability in cooperative games with overlapping coalitions

Yair Zick; Evangelos Markakis; Edith Elkind

In cooperative games with overlapping coalitions (overlapping coalition formation games, or OCF games), agents may devote only some of their resources to a coalition, allowing the formation of overlapping coalition structures. Having formed coalitions and generated profits, agents must agree on some reasonable manner in which to divide the payoffs from the coalitions they are members of. In this work, we study stability in OCF games. As shown in Chalkiadakis et al. (2010), stability in OCF games strongly depends on the way non-deviators react to deviation; this is because when a set deviates, it may still interact with non-deviators post deviation. We begin by proposing a formal framework for handling deviation in OCF games, which we term arbitration functions. Given a set deviation, the arbitration function specifies the payoff each coalition involving non-deviators is willing to give the deviating set. Using our framework, we define and analyze several OCF solution concepts. We show that under some assumptions on the underlying structure of the OCF game, one can find core outcomes in OCF games in polynomial time. Finally, we provide sufficient conditions for core stability for the conservative, refined, and optimistic arbitration functions.


SIGecom Exchanges | 2014

Arbitration and stability in cooperative games

Yair Zick; Edith Elkind

We present a formal framework for handling deviation in settings where players divide resources among multiple projects, forming overlapping coalition structures. Having formed such a coalition structure, players share the revenue generated among themselves. Given a profit division, some players may decide that they are underpaid, and deviate from the outcome. The main insight of the work presented in this survey is that when players want to deviate, they must know how the non-deviators would react to their deviation: after the deviation, the deviators may still work with some of the non-deviators, which presents an opportunity for the non-deviators to exert leverage on deviators. We extend the overlapping coalition formation (OCF) model of Chalkiadakis et al. [2010] for cooperation with partial coalitions, by introducing arbitration functions, a general framework for handling deviation in OCF games. We review some interesting aspects of the model, characterizations of stability in this model, as well as methods for computing stable outcomes.


International Joint Conference on Artificial Intelligence | 2017

Which is the Fairest (Rent Division) of Them All? [Extended Abstract]

Ya'akov Gal; Moshe Mash; Ariel D. Procaccia; Yair Zick

What is a fair way to assign rooms to several housemates, and divide the rent between them? This is not just a theoretical question: many people have used the Spliddit website to obtain envy-free solutions to rent division instances. But envy freeness, in and of itself, is insufficient to guarantee outcomes that people view as intuitive and acceptable. We therefore focus on solutions that optimize a criterion of social justice, subject to the envy freeness constraint, in order to pinpoint the “fairest” solutions. We develop a general algorithmic framework that enables the computation of such solutions in polynomial time. We then study the relations between natural optimization objectives, and identify the maximin solution, which maximizes the minimum utility subject to envy freeness, as the most attractive. We demonstrate, in theory and using experiments on real data from Spliddit, that the maximin solution gives rise to significant gains in terms of our optimization objectives. Finally, a user study with Spliddit users as subjects demonstrates that people find the maximin solution to be significantly fairer than arbitrary envy-free solutions; this user study is unprecedented in that it asks people about their real-world rent division instances. Based on these results, the maximin solution has been deployed on Spliddit since April 2015.
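To make the maximin criterion concrete, here is a toy two-agent sketch with hypothetical room values, using a brute-force price search rather than the paper's polynomial-time LP framework: it fixes the welfare-maximizing room assignment and then searches integer prices for an envy-free vector that maximizes the minimum utility.

```python
def maximin_rent_division(values, total_rent):
    """Toy two-agent rent division: assign rooms to maximize welfare, then
    pick integer prices that are envy-free and maximize the minimum utility.
    A brute-force stand-in for an LP-based computation."""
    a_rooms, b_rooms = values  # values[agent][room]
    r0, r1 = list(a_rooms)
    # Welfare-maximizing assignment of the two rooms.
    if a_rooms[r0] + b_rooms[r1] >= a_rooms[r1] + b_rooms[r0]:
        assign = {0: r0, 1: r1}
    else:
        assign = {0: r1, 1: r0}
    best = None
    for p in range(total_rent + 1):  # price of agent 0's room
        prices = {assign[0]: p, assign[1]: total_rent - p}
        u0 = a_rooms[assign[0]] - prices[assign[0]]
        u1 = b_rooms[assign[1]] - prices[assign[1]]
        envy_free = (u0 >= a_rooms[assign[1]] - prices[assign[1]] and
                     u1 >= b_rooms[assign[0]] - prices[assign[0]])
        if envy_free and (best is None or min(u0, u1) > best[0]):
            best = (min(u0, u1), prices)
    return best

values = ({"A": 700, "B": 300}, {"A": 400, "B": 600})
print(maximin_rent_division(values, total_rent=1000))
# → (150, {'A': 550, 'B': 450})
```

In this instance envy freeness alone allows any price for room A between 400 and 700; the maximin solution picks 550, equalizing the two agents' utilities at 150.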


International Joint Conference on Artificial Intelligence | 2017

How to Form Winning Coalitions in Mixed Human-Computer Settings

Yair Zick; Kobi Gal; Moshe Mash

This paper proposes a new negotiation game, based on the weighted voting paradigm in cooperative game theory, where agents need to form coalitions and agree on how to share the gains. Despite the prevalence of weighted voting in the real world, there has been little work studying people's behavior in such settings. We show that solution concepts from cooperative game theory (in particular, an extension of the Deegan-Packel Index) provide a good prediction of people's decisions to join coalitions in an online version of a weighted voting game. We design an agent that combines supervised learning with decision theory to make offers to people in this game. We show that the agent was able to obtain higher shares from coalitions than people playing against other people did, without reducing the acceptance rate of its offers. We also find that people display certain biases in weighted voting settings, like creating unnecessarily large coalitions, and not rewarding strong players. These results demonstrate the benefit of incorporating concepts from cooperative game theory in the design of agents that interact with other people in weighted voting systems.
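For reference, the standard Deegan-Packel index (the paper studies an extension of it, not reproduced here) can be computed directly from the minimal winning coalitions; the weights and quota below are a made-up example.

```python
from itertools import combinations

def deegan_packel(weights, quota):
    """Standard Deegan-Packel index: each minimal winning coalition is
    treated as equally likely, and its members split credit equally."""
    n = len(weights)
    minimal = []
    for size in range(1, n + 1):
        for coalition in combinations(range(n), size):
            if sum(weights[i] for i in coalition) < quota:
                continue  # losing coalition
            # Minimal: removing any single member makes it lose.
            if all(sum(weights[i] for i in coalition) - weights[j] < quota
                   for j in coalition):
                minimal.append(coalition)
    index = [0.0] * n
    for coalition in minimal:
        for member in coalition:
            index[member] += 1 / len(coalition)
    return [x / len(minimal) for x in index]

# Three players with unequal weights can have identical power: the minimal
# winning coalitions are {0,1}, {0,2}, {1,2}, so every player gets 1/3.
print(deegan_packel([3, 2, 2], quota=4))
```

Indices like this one predict which coalitions form and how members expect to split the gains, which is the behavioral question the paper tests with human subjects.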


International Joint Conference on Artificial Intelligence | 2017

Learning Hedonic Games

Jakub Sliwinski; Yair Zick

Coalitional stability in hedonic games has usually been considered in the setting where agent preferences are fully known. We consider the setting where agent preferences are unknown; we lay the theoretical foundations for studying the interplay between coalitional stability and (PAC) learning in hedonic games. We introduce the notion of PAC stability — the equivalent of core stability under uncertainty — and examine the PAC stabilizability and learnability of several popular classes of hedonic games.


Algorithmic Game Theory | 2016

Analyzing Power in Weighted Voting Games with Super-Increasing Weights

Yuval Filmus; Joel Oren; Yair Zick

Weighted voting games (WVGs) are a class of cooperative games that capture settings of group decision making in various domains, such as parliaments or committees. Earlier work has revealed that the effective decision-making power, or influence, of agents in WVGs is not necessarily proportional to their weight. This gave rise to measures of influence for WVGs. However, recent work in the algorithmic game theory community has shown that computing agent voting power is computationally intractable. In an effort to characterize WVG instances for which polynomial-time computation of voting power is possible, several classes of WVGs have been proposed and analyzed in the literature. One of the most prominent of these is super-increasing weight sequences. Recent papers show that when agent weights are super-increasing, it is possible to compute the agents' voting power (as measured by the Shapley value) in polynomial time. We provide the first set of explicit closed-form formulas for the Shapley value for super-increasing sequences. We bound the effects of changes to the quota, and relate the behavior of voting power to a novel function. This set of results constitutes a complete characterization of the Shapley value in weighted voting games, and answers a number of open questions presented in previous work.


Adaptive Agents and Multi-Agent Systems | 2011

Arbitrators in overlapping coalition formation games

Yair Zick; Edith Elkind

Collaboration


Dive into Yair Zick's collaborations.

Top Co-Authors

Anupam Datta

Carnegie Mellon University

Evangelos Markakis

Athens University of Economics and Business


Moshe Mash

Ben-Gurion University of the Negev


Georgios Chalkiadakis

Technical University of Crete


Amit Datta

Carnegie Mellon University


Shayak Sen

Carnegie Mellon University


Jeffrey S. Rosenschein

Hebrew University of Jerusalem
