Alexandra Wood
Harvard University
Publications
Featured research published by Alexandra Wood.
Philosophical Transactions of the Royal Society A | 2018
Kobbi Nissim; Alexandra Wood
This position paper observes how different technical and normative conceptions of privacy have evolved in parallel and describes the practical challenges that these divergent approaches pose. Notably, past technologies relied on intuitive, heuristic understandings of privacy that have since been shown not to satisfy expectations for privacy protection. With computations ubiquitously integrated in almost every aspect of our lives, it is increasingly important to ensure that privacy technologies provide protection that is in line with relevant social norms and normative expectations. Similarly, it is also important to examine social norms and normative expectations with respect to the evolving scientific study of privacy. To this end, we argue for a rigorous analysis of the mapping from normative to technical concepts of privacy and vice versa. We review the landscape of normative and technical definitions of privacy and discuss specific examples of gaps between definitions that are relevant in the context of privacy in statistical computation. We then identify opportunities for overcoming their differences in the design of new approaches to protecting privacy in accordance with both technical and normative standards. This article is part of a discussion meeting issue ‘The growing ubiquity of algorithms in society: implications, impacts and innovations’.
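The abstract contrasts normative conceptions of privacy with formal, technical definitions used for statistical computation. Differential privacy, the subject of several of the works listed below, is one such technical definition; the sketch that follows is an illustrative Laplace-mechanism example and is not drawn from the paper itself (the function name, parameter values, and sample counts are assumptions made here for illustration).

```python
import numpy as np

def laplace_mechanism(true_count, sensitivity=1.0, epsilon=0.5, rng=None):
    """Release a count with Laplace noise scaled to sensitivity/epsilon.

    For counting queries this satisfies epsilon-differential privacy,
    one formal (technical) definition of privacy for statistical releases.
    """
    rng = rng or np.random.default_rng()
    noise = rng.laplace(loc=0.0, scale=sensitivity / epsilon)
    return true_count + noise

# Example: releasing the number of survey respondents with a given attribute.
noisy = laplace_mechanism(true_count=1203, epsilon=0.5)
print(round(noisy))
```

The point of such a definition is that the privacy guarantee is a property of the release mechanism itself, which is where the paper locates the gap between technical guarantees and normative expectations.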
IEEE Symposium on Security and Privacy | 2018
Micah Altman; Alexandra Wood; Effy Vayena
In this article, we recognize the profound effects that algorithmic decision making can have on people’s lives and propose a harm-reduction framework for algorithmic fairness. We argue that any evaluation of algorithmic fairness must take into account the foreseeable effects that algorithmic design, implementation, and use have on the well-being of individuals. We further demonstrate how counterfactual frameworks for causal inference developed in statistics and computer science can be used as the basis for defining and estimating the foreseeable effects of algorithmic decisions. Finally, we argue that certain patterns of foreseeable harms are unfair. An algorithmic decision is unfair if it imposes predictable harms on sets of individuals that are unconscionably disproportionate to the benefits these same decisions produce elsewhere. Also, an algorithmic decision is unfair when it is regressive, that is, when members of disadvantaged groups pay a higher cost for the social benefits of that decision.
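The article's fairness criterion turns on comparing the foreseeable harms a decision imposes on groups of individuals with the benefits it produces, including whether disadvantaged groups bear a disproportionate share. The hypothetical Python sketch below shows one way such a per-group comparison might be tabulated once harm and benefit estimates are available; the class, function, group names, and numbers are illustrative assumptions, not the article's method.

```python
from dataclasses import dataclass

@dataclass
class GroupOutcome:
    name: str
    expected_harm: float     # estimated cost the decision rule imposes on this group
    expected_benefit: float  # estimated benefit accruing to this group

def is_regressive(groups, disadvantaged):
    """Flag a decision as regressive if some disadvantaged group bears a higher
    harm-to-benefit ratio than any other group (one reading of the criterion)."""
    def ratio(g):
        return g.expected_harm / max(g.expected_benefit, 1e-9)
    dis = [g for g in groups if g.name in disadvantaged]
    rest = [g for g in groups if g.name not in disadvantaged]
    return max(ratio(g) for g in dis) > max(ratio(g) for g in rest)

groups = [
    GroupOutcome("A", expected_harm=0.30, expected_benefit=0.50),
    GroupOutcome("B", expected_harm=0.10, expected_benefit=0.60),
]
print(is_regressive(groups, disadvantaged={"A"}))  # True in this toy example
```

In the article's framing, the harm and benefit estimates themselves would come from counterfactual causal inference over the decision rule, not from the fixed numbers used in this toy example.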
Social Science Research Network | 2017
Finale Doshi-Velez; Mason Kortz; Ryan Hal Budish; Christopher T. Bavitz; Samuel J. Gershman; David O'Brien; Stuart M. Shieber; Jim Waldo; David Weinberger; Alexandra Wood
Berkman Center Research Publication | 2015
David O'Brien; Jonathan Ullman; Micah Altman; Urs Gasser; Michael Bar-Sinai; Kobbi Nissim; Salil P. Vadhan; Michael John Wojcik; Alexandra Wood
Harvard Journal of Law & Technology | 2018
Kobbi Nissim; Aaron Bembenek; Alexandra Wood; Mark Bun; Marco Gaboardi; Urs Gasser; David O'Brien; Salil P. Vadhan
Berkeley Technology Law Journal | 2016
Micah Altman; Alexandra Wood; David O'Brien; Salil P. Vadhan; Urs Gasser
Archive | 2014
Alexandra Wood; David O'Brien; Micah Altman; Alan F. Karr; Urs Gasser; Michael Bar-Sinai; Kobbi Nissim; Jonathan Ullman; Salil P. Vadhan; Michael John Wojcik
International Data Privacy Law | 2018
Micah Altman; Alexandra Wood; David R. O'Brien; Urs Gasser
Archive | 2017
Kobbi Nissim; Urs Gasser; Adam Smith; Salil P. Vadhan; David O'Brien; Alexandra Wood
Washington and Lee Law Review Online | 2016
Effy Vayena; Urs Gasser; Alexandra Wood; David R. O'Brien; Micah Altman