AI & Society | 2021

Ethical dilemmas

 

Abstract


Smith (2021), in ‘Perhaps Ned Ludd had a point?’, alerts us to attend not only to the great ethical and theoretical issues, but also to the ways in which actual human lives are affected by what we think, say and do. We should be vigilant of a widespread tendency both to overestimate the pace and to underestimate the extent of change brought by advanced technologies in a variety of real-world contexts. It may be tempting to be consumed by a positivistic and empirical world view that focuses on ‘how humans judge machines’ rather than on ‘how humans could and should judge machines’ (Gill 2019). The human-centred ethos of AI&Society asks us to transcend this techno-centric view and explore not just the ‘how’ question, but also the ‘could’ and ‘should’ questions. In exploring these questions, we need to be mindful of the concern that, for example, algorithmic aversion may lead us to reject technology that could improve social welfare, and that we may ‘fail to recognise the consequences of technology when we show a positive bias towards algorithms’ (Gill 2020, 2021). Whilst the techno-centric paradigm tends to deliver efficiency, precision and replicability in technological innovations, the human-centred paradigm promotes creativity, flexibility and resilience. Those who seek a trade-off between efficiency and flexibility confront the ethical challenges that designers of all technologies face. AI&Society authors continue to reflect on narratives of AI ethics that range from the moral and ethical dilemmas of human judgement in the ‘heat of the moment’ of the trolley problem, to the ethical implications, such as those of opacity, explainability, reliability, trustworthiness and justice, that arise from the development and implementation of artificial intelligence (AI) technologies.
Self-driving cars open up the concrete possibility of encountering familiar moral dilemmas in the real world: for example, whether to collide with a group of children who have suddenly darted into the road, or to swerve to avoid that collision and instead collide with a single pedestrian properly using a crosswalk. The narrative on moral machines and ‘virtue ethics’ gives an insight into the relational functions of social robots, such as that of providing empathy and intimacy, or even encouragement and advice. From this perspective, moral machines must be something like the virtuous person, or at least the person aiming to become virtuous, in the sense of employing ethical reasoning to produce ethical outcomes. The argument is that what ultimately matters is the flourishing of the virtuous agent, and virtue’s benefits for society, such as trustworthiness and safety; if so, then the virtues in question are merely instrumental. It is argued that even in this case, we encounter virtuous agents in deeply social ways and wonder about their social characters: what kinds of characters they are, and what it would be like to encounter them. For proponents of the “social-relational” approach to the machine question, it is these encounters that matter. The use of predictive systems in socially and politically sensitive areas, such as crime prevention, justice management, crowd management and emotion analysis, raises ethical concerns of misclassification, for example in conviction risk assessment or in the decision-making processes used to design public policies (Gill 2020, 2021). It is argued that such automated AI decision support systems might perpetuate bias that is already present in the data used to set up the system, for example by increasing police patrols in an area and thereby discovering more crime in that area.
Although there is a general discussion about privacy and surveillance in information technology, focusing mainly on access to private and personally identifiable data, the ethical narrative of AI in surveillance goes beyond the mere accumulation of data and direction of attention: it includes the use of information to manipulate behaviour, online and offline, in a way that undermines autonomous rational choice.

Pages 1–8
DOI 10.1007/s00146-021-01260-7
