Fair and Responsible AI: A Focus on the Ability to Contest
Henrietta Lyons
University of Melbourne, Parkville, VIC 3010
[email protected]

Tim Miller
University of Melbourne, Parkville, VIC 3010
[email protected]

Eduardo Velloso
University of Melbourne, Parkville, VIC 3010
[email protected]
Abstract
As the use of artificial intelligence (AI) in high-stakes decision-making increases, the ability to contest such decisions is being recognised in AI ethics guidelines as an important safeguard for individuals. Yet, there is little guidance on how AI systems can be designed to support contestation. In this paper we explain that the design of a contestation process is important due to its impact on perceptions of fairness and satisfaction. We also consider design challenges, including a lack of transparency as well as the numerous design options that decision-making entities will be faced with. We argue for a human-centred approach to designing for contestability to ensure that the needs of decision subjects, and the community, are met.
Author Keywords
Contestability; explainability; algorithmic fairness; ethics.
Introduction
There is great potential for Artificial Intelligence (AI) to enhance decision-making, by making it more accurate, efficient, and scalable than human decision-making [4, 12]. To harness these benefits, AI systems should be designed responsibly, to ensure that they are fair, accountable, and transparent [6]. This is particularly important given the increasing use of AI in high-stakes decision-making, including sentencing, hiring, and loan application determination [12].

In response to calls for AI systems to be designed, developed, and deployed responsibly, numerous AI ethics guidelines have been produced. One 'safeguard' that is gaining traction within these guidelines is the ability to contest AI decisions (see sidebar for examples) [11]. Article 22(3) of the European Union's General Data Protection Regulation provides a legal 'right to contest' decisions made using solely automated processes. However, none of these documents provide guidance on how AI systems should be designed to enable contestation. In this paper, we outline why design is important, what the design challenges are, and our human-centred approach to designing for contestability.
Examples of Ethical AI Guidelines Calling for the Ability to Contest

Ethics Guidelines for Trustworthy AI (High-Level Expert Group on Artificial Intelligence): "[T]here are many different interpretations of fairness, we believe that fairness has both a substantive and a procedural dimension... The procedural dimension of fairness entails the ability to contest and seek effective redress against decisions made by AI systems and by the humans operating them." [16]

Principles on Artificial Intelligence (OECD): "There should be transparency and responsible disclosure around AI systems to ensure that people understand AI-based outcomes and can challenge them." [8]

AI Ethics Framework (Australia): (Principle 7) "Contestability: When an AI system significantly impacts a person, community, group or environment, there should be a timely process to allow people to challenge the use or output of the AI system." [15]
The importance of design
The importance of designing AI systems to enable 'contestability' has been acknowledged in the HCI and Algorithmic Accountability literature (e.g. [10, 2]). Using a legal lens, the Algorithmic Accountability work has taken a theoretical approach, proposing requirements of a contestation scheme [3] and design requirements that enable contestation [2]. Within HCI, the focus of contestability research has been on the ability of expert users to work interactively with a system to contest its output [10].

HCI researchers [12, 4, 9] have also drawn on the organisational psychology literature to study how the design of AI systems used in decision-making impacts human perceptions of procedural fairness, or 'procedural justice' [18]. The procedural justice literature indicates that having a legitimate way to contest a decision increases a person's perception of procedural fairness, which can in turn impact their perception of the fairness of the decision itself ('distributive justice'), their choice to accept or contest a decision, and their attitude towards the entity making the decision [18, 13]. In line with this literature [13], Lee et al. [12] found that having 'outcome control', the ability to correct or appeal a decision, in a cooperative group allocation task improved participants' perceptions of the fairness of the outcome.

The procedural justice literature also indicates that the design of a contestation process (not just its availability) impacts perceptions of procedural fairness. For example, having the same decision-maker assess the decision on appeal is seen as less fair than having a new decision-maker [13]. In addition, in a study of content moderation across social media platforms, Myers West [14] found that users were dissatisfied with contestation processes, reporting a lack of clear instruction about how to lodge an appeal, receiving no reply or resolution having lodged a challenge, and having no access to human intervention. These findings indicate that the design of a contestation process matters.
Design challenges
To meaningfully challenge a decision, a decision subject requires some form of information to understand the decision, to decide whether to contest it, and to use as grounds for contestation. Many AI systems used in decision-making are effectively "black boxes" [17]; their decision-making processes are hidden, either through the use of complex algorithms or techniques (e.g. deep learning) or intentionally by companies to protect trade secrets [5]. This opacity makes it difficult to understand why a decision was made, and consequently, to contest it in any meaningful way. In contrast, with human decision-making a person can generally seek an explanation from the decision-maker as to why a decision was made. Often, in high-stakes decisions, reasons must be documented during the decision-making process to mitigate the issue of an inaccurate post-hoc explanation. Promisingly, the field of explainable artificial intelligence (XAI) is progressing work on explainability [19]. To date, XAI has not focused on providing explanations for contestation specifically, which offers a new avenue of research.

A second design challenge is that there are many ways to contest a decision [13]. For example, existing contestation processes for human decisions (e.g. internal review, complaints mechanisms, external review via tribunal or court) could be adapted for decisions made using AI. However, with decisions made at scale, leaving appeal processes to a court to determine would overwhelm an already pressured system. Low perceptions of fairness are associated with procedures that are time-consuming, costly, and resource-intensive [13]. An alternative contestation process might involve a decision subject directly contesting a decision with an AI system via an interface. However, the novelty of this approach, coupled with a lack of human touch, could negatively impact perceptions of fairness. With an abundance of design choices, it is difficult to know where to begin.
Sample of preliminary findings from our current research

- AI systems are not isolated, but exist in socio-technical contexts with existing legal frameworks, political systems, and social norms that need to be considered when designing for contestation
- Different processes for contestation are likely to be required depending on the context in which a decision is being made
- Contestation processes need to be clear and easy to access
- Contestation processes should align with human rights, ensure equality, be designed for accessibility, and provide compensation
- Lack of transparency is an issue; explainability is important

We suggest that taking a human-centred approach to explore how people conceptualise contestability in relation to AI systems is a key first step in designing for meaningful contestation. To understand the needs of decision subjects, and the expectations of the community more generally, we are currently conducting a thematic analysis of submissions made to Australia's 'Artificial Intelligence: Australia's Ethics Framework', a discussion paper that proposed 'contestability' as a core ethical principle [15]. The sidebar contains a sample of our preliminary findings.
Conclusion
The increasing use of AI in high-stakes decision-making without appropriate safeguards, such as procedural fairness, has had a significant negative impact on thousands of people, from teachers losing their jobs [1] to the erroneous loss of medical benefits [7]. To reduce such negative consequences, AI systems must be responsibly designed, developed, and deployed [6]. Though the ability to contest decisions is not the only mechanism required to ensure that AI systems are 'fair', it is a crucial safeguard, and in some circumstances, a legal requirement. How access to contestation, and the contestation process itself, is designed is important given the impact on perceptions of fairness and satisfaction. Yet, there are many design challenges, including opacity and an abundance of design options. A key first step in designing for meaningful contestation is to explore and understand the needs of decision subjects, as well as the community more generally.
Acknowledgements
Henrietta Lyons is supported by the Melbourne School of Engineering Ingenium scholarship program. This research was partly funded by Australian Research Council Discovery Grant DP190103414 Explanation in Artificial Intelligence: A Human-Centred Approach. Eduardo Velloso is the recipient of an Australian Research Council Discovery Early Career Researcher Award (Project Number: DE180100315) funded by the Australian Government.
REFERENCES

[1] Houston Federation of Teachers, Local 2415, et al. v. Houston Independent School District, 251 F. Supp. 3d 1168 (2017).

[2] Marco Almada. 2019. Human intervention in automated decision-making: Toward the construction of contestable systems. In Proceedings of the Seventeenth International Conference on Artificial Intelligence and Law. 2–11.

[3] Emre Bayamlioglu. 2018. Contesting Automated Decisions. European Data Protection Law Review 4, 4 (2018).

[4] Reuben Binns, Max Van Kleek, Michael Veale, Ulrik Lyngs, Jun Zhao, and Nigel Shadbolt. 2018. 'It's Reducing a Human Being to a Percentage': Perceptions of Justice in Algorithmic Decisions. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (CHI '18).

[5] Jenna Burrell. 2016. How the machine 'thinks': Understanding opacity in machine learning algorithms. Big Data & Society 3, 1 (2016), 1–12.

[6] Virginia Dignum. 2019. Responsible Artificial Intelligence: How to Develop and Use AI in a Responsible Way. Springer.

[7] Virginia Eubanks. 2018. Automating Inequality: How High-Tech Tools Profile, Police and Punish the Poor. St. Martin's Press.

[8] OECD. 2019. Principles on Artificial Intelligence.

[9] Nina Grgić-Hlača, Muhammad Bilal Zafar, Krishna P. Gummadi, and Adrian Weller. 2018. Beyond Distributive Fairness in Algorithmic Decision Making: Feature Selection for Procedurally Fair Learning. In Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence (AAAI-18). 51–60.

[10] Tad Hirsch, Kritzia Merced, Shrikanth Narayanan, Zac E. Imel, and David C. Atkins. 2017. Designing contestability: Interaction design, machine learning, and mental health. In Proceedings of the 2017 Conference on Designing Interactive Systems. 95–99.

[11] Anna Jobin, Marcello Ienca, and Effy Vayena. 2019. The global landscape of AI ethics guidelines. Nature Machine Intelligence 1, 9 (2019), 389–399.

[12] Min Kyung Lee, Anuraag Jain, Hae Jin Cha, Shashank Ojha, and Daniel Kusbit. 2019. Procedural Justice in Algorithmic Fairness: Leveraging Transparency and Outcome Control for Fair Algorithmic Mediation. Proceedings of the ACM on Human-Computer Interaction 3, CSCW (2019), 1–26.

[13] Gerald S. Leventhal. 1980. What should be done with equity theory? In Social Exchange. Springer, 27–55.

[14] Sarah Myers West. 2018. Censored, suspended, shadowbanned: User interpretations of content moderation on social media platforms. New Media & Society 20, 11 (2018), 4366–4383.
[15] D. Dawson, E. Schleiger, J. Horton, J. McLaughlin, C. Robinson, G. Quezada, J. Scowcroft, and S. Hajkowicz. 2019. Artificial Intelligence: Australia's Ethics Framework (A Discussion Paper). Data61 CSIRO, Australia.

[16] High-Level Expert Group on Artificial Intelligence. 2019. Ethics Guidelines for Trustworthy AI. European Commission.

[17] Frank Pasquale. 2015. The Black Box Society: The Secret Algorithms That Control Money and Information. Harvard University Press.

[18] John Thibaut and Laurens Walker. 1975. Procedural Justice: A Psychological Analysis. Lawrence Erlbaum Associates, Hillsdale, NJ.

[19] Sandra Wachter, Brent Mittelstadt, and Chris Russell. 2018. Counterfactual Explanations without Opening the Black Box: Automated Decisions and the GDPR. Harvard Journal of Law & Technology 31, 2 (2018), 841–887.