2019

From Individual Control to Social Protection: New Paradigms for Privacy Law in the Age of Predictive Analytics

Abstract

What comes after the control paradigm? For decades, privacy law has sought to provide individuals with notice and choice, and thereby to give them control over their personal data. But what happens when this regulatory paradigm breaks down? Predictive analytics forces us to confront this challenge. Individuals cannot understand how predictive analytics uses their surface data to infer latent, far more sensitive data about them. This prevents them from making meaningful choices about whether to share their surface data in the first place. It also creates threats (such as harmful bias, manipulation, and procedural unfairness) that go well beyond the privacy interests that the control paradigm seeks to safeguard. To protect people in the algorithmic economy, privacy law must shift from a liberalist legal paradigm focused on individual control to one in which public authorities set substantive standards that defend people against algorithmic threats. Leading scholars such as Jack Balkin (information fiduciaries), Helen Nissenbaum (contextual integrity), Danielle Citron (technological due process), Craig Mundie (use-based regulation), and others recognize the need for such a shift and propose ways to achieve it. This article ties these proposals together, views them as attempts to define a new regulatory paradigm for the age of predictive analytics, and evaluates whether each achieves this aim. It then argues that the solution may be hiding in plain sight in the form of the FTC’s Section 5 unfairness authority. It explores whether the FTC could use this authority to draw substantive lines between data analytics practices that are socially appropriate and fair and those that are inappropriate and unfair, and it examines how the Commission would make such determinations. It argues that this existing authority, which requires no new legislation, provides a comprehensive and politically legitimate way to create much-needed societal boundaries around corporate use of predictive analytics. It concludes that the Commission could use its unfairness authority to protect people from the threats that the algorithmic economy creates.

DOI 10.2139/ssrn.3449112
Language English
