Network


Latest external collaborations at the country level. Dive into details by clicking on the dots.

Hotspot


Dive into the research topics where Hoda Heidari is active.

Publication


Featured research published by Hoda Heidari.


Games and Economic Behavior | 2014

Competitive contagion in networks

Sanjeev Goyal; Hoda Heidari; Michael J. Kearns

We develop a game-theoretic framework for the study of competition between firms who have budgets to “seed” the initial adoption of their products by consumers located in a social network. We identify a general property of the adoption dynamics — namely, decreasing returns to local adoption — for which the inefficiency of resource use at equilibrium (the Price of Anarchy) is uniformly bounded above, across all networks. We also show that if this property is violated, even the Price of Stability can be unbounded, thus yielding sharp threshold behavior for a broad class of dynamics. We provide similar results for a new notion, the Budget Multiplier, that measures the extent to which the imbalances in player budgets can be amplified at equilibrium.
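
For intuition, here is a minimal Python sketch of a stylized two-firm adoption process on a network. The round-based update and the adoption probability f(x) = x**alpha are illustrative assumptions rather than the paper's exact switching-selection dynamics; alpha < 1 corresponds to the decreasing-returns regime discussed above.

import random
from collections import defaultdict

def simulate_contagion(adj, seeds_a, seeds_b, rounds=20, alpha=0.5, rng=None):
    """Stylized two-firm contagion: each round, an unadopted node adopts with
    probability f(x) = x**alpha, where x is the fraction of its neighbors that
    have already adopted, and then copies the product of a random adopting
    neighbor. alpha < 1 models decreasing returns to local adoption."""
    rng = rng or random.Random(0)
    state = {v: 'A' for v in seeds_a}
    state.update({v: 'B' for v in seeds_b})
    for _ in range(rounds):
        for v in adj:
            if v in state:
                continue
            adopted = [u for u in adj[v] if u in state]
            if not adopted:
                continue
            x = len(adopted) / len(adj[v])
            if rng.random() < x ** alpha:              # adoption step
                state[v] = state[rng.choice(adopted)]  # selection step
    counts = defaultdict(int)
    for product in state.values():
        counts[product] += 1
    return dict(counts)

# Toy example: a 30-node ring with two seeds per firm.
adj = {i: [(i - 1) % 30, (i + 1) % 30] for i in range(30)}
print(simulate_contagion(adj, seeds_a={0, 15}, seeds_b={7, 22}))

Varying alpha gives a feel for how the shape of the local adoption function changes the outcome, which is the property the Price of Anarchy bound hinges on.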


Sociological Methods & Research | 2018

Fairness in Criminal Justice Risk Assessments: The State of the Art

Richard A. Berk; Hoda Heidari; Shahin Jabbari; Michael J. Kearns; Aaron Roth

Objectives: Discussions of fairness in criminal justice risk assessments typically lack conceptual precision. Rhetoric too often substitutes for careful analysis. In this article, we seek to clarify the trade-offs between different kinds of fairness and between fairness and accuracy. Methods: We draw on the existing literatures in criminology, computer science, and statistics to provide an integrated examination of fairness and accuracy in criminal justice risk assessments. We also provide an empirical illustration using data from arraignments. Results: We show that there are at least six kinds of fairness, some of which are incompatible with one another and with accuracy. Conclusions: Except in trivial cases, it is impossible to maximize accuracy and fairness at the same time and impossible simultaneously to satisfy all kinds of fairness. In practice, a major complication is different base rates across different legally protected groups. There is a need to consider challenging trade-offs. These lessons apply to applications well beyond criminology where assessments of risk can be used by decision makers. Examples include mortgage lending, employment, college admissions, child welfare, and medical diagnoses.
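
The base-rate complication can be made concrete with a small, entirely hypothetical numeric example: the two confusion matrices below are constructed so that both groups see identical false positive rates, false negative rates, and accuracy, yet their positive predictive values necessarily differ because their base rates differ.

def rates(tp, fp, fn, tn):
    """Per-group confusion-matrix summaries commonly compared in fairness analyses."""
    total = tp + fp + fn + tn
    return {
        "base_rate": (tp + fn) / total,
        "false_positive_rate": fp / (fp + tn),
        "false_negative_rate": fn / (tp + fn),
        "ppv": tp / (tp + fp),                 # positive predictive value
        "accuracy": (tp + tn) / total,
    }

group_1 = rates(tp=32, fp=12, fn=8, tn=48)   # base rate 0.40
group_2 = rates(tp=16, fp=16, fn=4, tn=64)   # base rate 0.20, same error rates

for key in group_1:
    print(f"{key:22s} g1={group_1[key]:.2f}  g2={group_2[key]:.2f}")

Both groups have a false positive rate of 0.20, a false negative rate of 0.20, and accuracy of 0.80, but positive predictive values of roughly 0.73 versus 0.50, illustrating why certain fairness criteria cannot all be satisfied when base rates differ.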


International World Wide Web Conferences | 2011

Toward optimal vaccination strategies for probabilistic models

Zeinab Abbassi; Hoda Heidari

Epidemic outbreaks such as the recent H1N1 influenza show how susceptible large communities are to the spread of disease. Widespread transmission raises the question of which vaccination strategies are appropriate and close to optimal. The seemingly different problem of viruses disseminating through email networks shares a common structure with disease epidemics. While it is not possible to vaccinate every individual during an outbreak, due to economic and logistical constraints, we can leverage the structure and properties of face-to-face social networks to identify individuals whose vaccination would result in fewer infections. The models studied so far [3, 4] assume that once an individual is infected, all adjacent individuals become infected with probability 1. This assumption is not realistic: if an individual is infected by a virus, neighboring individuals become infected with some probability that depends on the type of disease and the contact. This modification makes the problem more challenging, as the simpler deterministic version is already NP-complete [3]. Here we consider the following epidemiological model computationally: a number of individuals in the community are vaccinated, which makes them immune to the disease; the disease then breaks out, and some unvaccinated nodes become infected at random; these nodes can transmit the infection to their friends with some probability. We study the optimization problem in which at most k nodes may be vaccinated and the objective is to minimize the overall number of infected people. We design various algorithms that exploit the properties of social networks to select the k nodes to vaccinate, and we evaluate their effectiveness and scalability experimentally on a real dataset with 34,546 vertices and 421,578 edges.
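
As a rough illustration of the optimization problem (not of the algorithms evaluated in the paper), the sketch below pairs a Monte-Carlo estimate of expected infections under probabilistic transmission with a naive greedy selection of k nodes to vaccinate; the parameter values and the transmission model are assumptions for exposition.

import random

def expected_infections(adj, vaccinated, p_transmit=0.1, p_seed=0.01,
                        trials=200, rng=None):
    """Monte-Carlo estimate of the expected number of infections when each
    unvaccinated node is initially infected with probability p_seed and passes
    the infection along each edge with probability p_transmit."""
    rng = rng or random.Random(0)
    total = 0
    for _ in range(trials):
        infected = {v for v in adj
                    if v not in vaccinated and rng.random() < p_seed}
        frontier = list(infected)
        while frontier:
            u = frontier.pop()
            for v in adj[u]:
                if v not in vaccinated and v not in infected \
                        and rng.random() < p_transmit:
                    infected.add(v)
                    frontier.append(v)
        total += len(infected)
    return total / trials

def greedy_vaccination(adj, k, **kwargs):
    """Pick k nodes one at a time, each time vaccinating the node that most
    reduces the simulated infection count."""
    vaccinated = set()
    for _ in range(k):
        best = min((v for v in adj if v not in vaccinated),
                   key=lambda v: expected_infections(adj, vaccinated | {v}, **kwargs))
        vaccinated.add(best)
    return vaccinated

On anything beyond toy graphs this brute-force greedy loop is expensive, which is one reason structure-aware heuristics of the kind studied in the paper are attractive.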


Economics and Computation | 2015

Integrating Market Makers, Limit Orders, and Continuous Trade in Prediction Markets

Hoda Heidari; Sébastien Lahaie; David M. Pennock; Jennifer Wortman Vaughan

We provide the first concrete algorithm for combining market makers and limit orders in a prediction market with continuous trade. Our mechanism is general enough to handle both bundle orders and arbitrary securities defined over combinatorial outcome spaces. We define the notion of an ε-fair trading path, a path in security space along which no order executes at a price more than ε above its limit, and every order executes when its market price falls more than ε below its limit. We show that, under a certain supermodularity condition, a fair trading path exists for which the endpoint is efficient, but that under general conditions reaching an efficient endpoint via an ε-fair trading path is not possible. We develop an algorithm for operating a continuous market maker with limit orders that respects the ε-fairness conditions in the general case. We conduct simulations of our algorithm using real combinatorial predictions made during the 2008 US presidential election and evaluate it against a natural baseline according to trading volume, social welfare, and violations of the two fairness conditions.
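
As background (and not as the paper's combined mechanism), the snippet below implements the standard logarithmic market scoring rule (LMSR) cost-function market maker, a common building block for combinatorial prediction markets; the liquidity parameter b and the two-outcome example are arbitrary choices for illustration.

import math

def lmsr_cost(quantities, b=100.0):
    """LMSR cost function C(q) = b * log(sum_i exp(q_i / b))."""
    return b * math.log(sum(math.exp(q / b) for q in quantities))

def lmsr_prices(quantities, b=100.0):
    """Instantaneous prices are the gradient of the cost function."""
    weights = [math.exp(q / b) for q in quantities]
    total = sum(weights)
    return [w / total for w in weights]

def trade_cost(quantities, delta, b=100.0):
    """Amount a trader pays to move outstanding shares from q to q + delta."""
    after = [q + d for q, d in zip(quantities, delta)]
    return lmsr_cost(after, b) - lmsr_cost(quantities, b)

# Two-outcome market: buying 50 shares of outcome 0 raises its price.
q = [0.0, 0.0]
print(lmsr_prices(q))               # [0.5, 0.5]
print(trade_cost(q, [50.0, 0.0]))   # more than 25, since the price rises while buying
print(lmsr_prices([50.0, 0.0]))     # price of outcome 0 is now above 0.5

Combining such a market maker with a book of limit orders, so that executions stay ε-fair along the trading path, is the problem the paper addresses.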


Knowledge Discovery and Data Mining | 2018

A Unified Approach to Quantifying Algorithmic Unfairness: Measuring Individual & Group Unfairness via Inequality Indices

Till Speicher; Hoda Heidari; Nina Grgić-Hlača; Krishna P. Gummadi; Adish Singla; Adrian Weller; Muhammad Bilal Zafar

Discrimination via algorithmic decision making has received considerable attention. Prior work largely focuses on defining conditions for fairness, but does not define satisfactory measures of algorithmic unfairness. In this paper, we focus on the following question: Given two unfair algorithms, how should we determine which of the two is more unfair? Our core idea is to use existing inequality indices from economics to measure how unequally the outcomes of an algorithm benefit different individuals or groups in a population. Our work offers a justified and general framework to compare and contrast the (un)fairness of algorithmic predictors. This unifying approach enables us to quantify unfairness both at the individual and the group level. Further, our work reveals overlooked tradeoffs between different fairness notions: using our proposed measures, the overall individual-level unfairness of an algorithm can be decomposed into a between-group and a within-group component. Earlier methods are typically designed to tackle only between-group unfairness, which may be justified for legal or other reasons. However, we demonstrate that minimizing exclusively the between-group component may, in fact, increase the within-group, and hence the overall unfairness. We characterize and illustrate the tradeoffs between our measures of (un)fairness and the prediction accuracy.
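
A minimal sketch of the inequality-index idea, using the generalized entropy index with α = 2 and its exact between-group/within-group decomposition; the benefit values, the group labels, and the particular benefit definition hinted at in the comment are illustrative assumptions.

def generalized_entropy(benefits, alpha=2.0):
    """Generalized entropy index GE_alpha of a list of non-negative benefits."""
    n = len(benefits)
    mu = sum(benefits) / n
    return sum((b / mu) ** alpha - 1 for b in benefits) / (n * alpha * (alpha - 1))

def decompose_by_group(benefits, groups, alpha=2.0):
    """Split overall inequality into a between-group and a within-group term.
    `groups` maps each index to a group label; the two terms sum to the total."""
    n = len(benefits)
    mu = sum(benefits) / n
    members = {}
    for i, b in enumerate(benefits):
        members.setdefault(groups[i], []).append(b)
    # Between-group term: every individual replaced by their group's mean benefit.
    smoothed = [sum(bs) / len(bs) for bs in members.values() for _ in bs]
    between = generalized_entropy(smoothed, alpha)
    # Within-group term: weighted sum of each group's internal inequality.
    within = sum((len(bs) / n) * (sum(bs) / (len(bs) * mu)) ** alpha
                 * generalized_entropy(bs, alpha)
                 for bs in members.values())
    return between, within

# Toy benefits (e.g. benefit = prediction - label + 1, one possible choice).
benefits = [1.0, 2.0, 1.5, 0.5, 1.0, 0.5]
groups   = ["A", "A", "A", "B", "B", "B"]
b, w = decompose_by_group(benefits, groups)
print(b, w, b + w, generalized_entropy(benefits))   # b + w equals the total

Minimizing only the between-group term while letting the within-group term grow is the tradeoff the abstract warns about.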


International Joint Conference on Artificial Intelligence | 2018

Preventing Disparate Treatment in Sequential Decision Making

Hoda Heidari; Andreas Krause

We study fairness in sequential decision making environments, where at each time step a learning algorithm receives data corresponding to a new individual (e.g. a new job application) and must make an irrevocable decision about him or her (e.g. whether to hire the applicant) based on observations made so far. In order to prevent cases of disparate treatment, our time-dependent notion of fairness requires algorithmic decisions to be consistent: if two individuals are similar in the feature space and arrive during the same time epoch, the algorithm must assign them to similar outcomes. We propose a general framework for post-processing predictions made by a black-box learning model that guarantees the resulting sequence of outcomes is consistent. We show theoretically that imposing consistency will not significantly slow down learning. Our experiments on two real-world data sets illustrate and confirm this finding in practice.
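
To make the consistency requirement concrete, here is a minimal illustrative post-processor in Python; the clipping rule, the epoch bookkeeping, and the parameters eps and delta are assumptions made for exposition, not the framework proposed in the paper.

import math

class ConsistentPostProcessor:
    """Toy epoch-level consistency: within one epoch, any two individuals whose
    features are within `eps` of each other must receive scores that differ by
    at most `delta`. New scores are clipped into the interval implied by the
    decisions already made in the current epoch."""

    def __init__(self, eps=0.5, delta=0.1):
        self.eps, self.delta = eps, delta
        self.history = []                      # (features, final_score) pairs this epoch

    def new_epoch(self):
        self.history = []

    def decide(self, features, raw_score):
        lo, hi = 0.0, 1.0
        for past_x, past_s in self.history:
            if math.dist(features, past_x) <= self.eps:
                lo = max(lo, past_s - self.delta)
                hi = min(hi, past_s + self.delta)
        score = min(max(raw_score, lo), hi)    # clip into the consistent range
        self.history.append((features, score))
        return score

pp = ConsistentPostProcessor()
print(pp.decide((0.2, 0.3), raw_score=0.9))    # first decision is unconstrained
print(pp.decide((0.25, 0.3), raw_score=0.2))   # similar applicant, clipped up to 0.8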


International Conference on Machine Learning | 2014

Learning from Contagion (Without Timestamps)

Kareem Amin; Hoda Heidari; Michael J. Kearns


National Conference on Artificial Intelligence | 2013

Depth-Workload Tradeoffs for Workforce Organization

Hoda Heidari; Michael J. Kearns


arXiv: Learning | 2017

A Convex Framework for Fair Regression.

Richard A. Berk; Hoda Heidari; Shahin Jabbari; Matthew Joseph; Michael J. Kearns; Jamie Morgenstern; Seth Neel; Aaron Roth


International Conference on Machine Learning | 2016

Pricing a low-regret seller

Hoda Heidari; Mohammad Mahdian; Umar Syed; Sergei Vassilvitskii; Sadra Yazdanbod

Collaboration


Dive into Hoda Heidari's collaborations.

Top Co-Authors

Michael J. Kearns, University of Pennsylvania
Aaron Roth, University of Pennsylvania
Richard A. Berk, University of Pennsylvania
Shahin Jabbari, University of Pennsylvania