Fordham Intellectual Property, Media & Entertainment Law Journal | 2019

Accountability of Algorithms in the GDPR and Beyond: A European Legal Framework on Automated Decision-Making

 

Abstract


Today, automated decision systems carry greater social and economic risks than ever before. We often have no information about a system’s design or the instructions the machine is given, which readily becomes a source of bias, error, and discrimination. Indeed, an algorithm is not neutral: it can perpetuate existing stereotypes and social segregation. For example, the underrepresentation of a minority group in historical data may reinforce discrimination against that group in future hiring or credit-scoring decisions.

This paper examines the EU’s legal framework on automated decision-making under the General Data Protection Regulation (GDPR) and certain Member State implementing laws, with particular emphasis on French law. The GDPR provides some legal remedies. However, I argue that it does not confer an individual right to an explanation of a decision based on automated decision-making: the GDPR does not give the data subject an individual right to know and understand the precise basis of the decision. Moreover, intellectual property rights and trade secrets create barriers to the effectiveness of these rights, and the GDPR places no limits on the application of such proprietary rights in the privacy context. In addition, the right not to be subject to automated decision-making is limited by several broad exceptions, which afford considerable flexibility to private and public stakeholders, including the Member States. Compounding the exceptions, the related safeguards, such as the right to obtain human intervention, do not provide a right to an explanation either; they merely guarantee a human being, rather than a machine, with whom to interact. This does not ensure a better understanding of the decision. Indeed, a human may be unable to conduct a meaningful review of a process, for instance where the process involved third-party data and algorithms, pre-trained models, or inherently opaque machine learning techniques. Finally, no supervisory body explicitly guarantees that such measures are respected. Consequently, I am skeptical of the ability of these provisions to address the opacity and discrimination problems of algorithms.

I also argue that too much flexibility has been given to the Member States. Only a few rules on automated decision-making have ultimately been adopted to implement the GDPR in national laws. As a result, the GDPR also fails to create a single standard on algorithmic transparency. This undermines the creation of a “digital single market,” one of the European Commission’s primary goals.

Volume 30
Pages 91
DOI 10.2139/ssrn.3391266
Language English
Journal Fordham Intellectual Property, Media & Entertainment Law Journal
