Responsible Autonomy
Virginia Dignum
Delft University of Technology
[email protected]
June 9, 2017
Abstract
As intelligent systems are increasingly making decisions that directly affect society, perhaps the most important upcoming research direction in AI is to rethink the ethical implications of their actions. Means are needed to integrate moral, societal and legal values with technological developments in AI, both during the design process and as part of the deliberation algorithms employed by these systems. In this paper, we describe leading ethics theories and propose alternative ways to ensure ethical behavior by artificial systems. Given that ethics are dependent on the socio-cultural context and are often only implicit in deliberation processes, methodologies are needed to elicit the values held by designers and stakeholders, and to make these explicit, leading to better understanding of and trust in artificial autonomous systems.
1 Introduction

It is no news that Artificial Intelligence (AI) is increasingly entering the public domain in fields such as transportation, service robots, health care, education, public safety and security, employment and the workplace, and entertainment. Developments in autonomy and learning technologies are rapidly enabling AI systems to decide and act without direct human control. As these advances continue at high speed, there is a growing awareness that a responsible approach to AI is needed to ensure the safe, beneficial and fair use of AI technologies, to consider the implications of morally relevant decision making by machines, and to address the ethical and legal consequences and status of AI. Design methods and tools are needed to elicit and represent human values, to translate these values into technical requirements, and to deal with moral overload when numerous values are to be incorporated, in order to demonstrate that design solutions realize the values wished for.

As an example, much attention has recently been given to the ethical dilemmas self-driving cars face when needing to deal with potentially life-threatening decisions [6]. This has been described as an application of the well-known trolley problem [12], a hypothetical scenario, long used in philosophy and ethics discussions, in which a runaway trolley is speeding down a track to which five people are tied and unable to move. An observer has control over a lever that can switch tracks before the trolley hits the five people, but on the alternative track there is one person tied to the tracks. The moral dilemma concerns the decision the observer should make: do nothing and allow the trolley to kill the five, or pull the lever and kill one? This is obviously a hypothetical scenario without any direct practical application. However, it can be seen as an abstraction for many dilemmas involving AI systems now and in the future. Besides the relation to self-driving cars, similar dilemmas will need to be solved by intelligent medicine dispensers faced with the need to choose between two patients when there is not enough of a needed medicine, by search and rescue robots faced with the need to prioritize victims, or, as we have recently shown, by health-care robots needing to choose between a user's desires and optimal care [10]. In this paper, we use the trolley scenario as an illustration of this wide applicability of moral dilemma deliberation.

From an ethical perspective, there are no optimal solutions to these dilemmas. In fact, different ethical theories will lead to distinct solutions, given that the values held by individuals, groups and societies put different preferences on the action to choose. Understanding these differences is essential to the design of AI systems able to deal with such dilemmas. Moreover, the responsibility does not lie solely with the individual (human or machine) that operates the lever: societal, legal and physical infrastructures are also means to determine the decision.

If we are to build AI systems that can deal with this type of ethical dilemma, and to ensure that AI is developed responsibly, incorporating social and ethical values, societal concerns about the ethics of AI must be reflected in design. AI systems should therefore be grounded on principles of Accountability, Responsibility and Transparency (ART), extending and characterizing the classic principles of Autonomy, Interactivity and Adaptability described in [11, 22].
Firstly, Accountability requires that the decision be derivable from the algorithms and data used by the system to make that decision. This includes the need for a representation of the moral values and societal norms holding in the context of operation, which the agent uses for deliberation. Secondly, even if the AI system is the direct cause of an action, the chain of Responsibility must be clear, linking the agent's decision to the user, owner, manufacturer, developer, and all other stakeholders whose actions in one way or another contribute to the decision. Finally, explanation of actions requires Transparency in terms of the algorithms and data used, their provenance and their dynamics; that is, algorithms must be designed in ways that let us inspect their workings.

This paper is organized as follows. Section 2 positions this work within the topic of Responsible Artificial Intelligence, discussing in particular the Value-Sensitive Design methodology and the development of Artificial Moral Agents. We describe relevant ethical theories in Section 3. In Section 4 we discuss how ethical reasoning in AI can differ based on these theories, and the role of value systems in moral deliberation. In Section 5, we discuss how the integration of ethical theories and value systems leads to different responses to moral dilemmas, and propose mechanisms for implementation. Finally, in Section 6 we present preliminary conclusions and directions for further research.
2 Responsible Artificial Intelligence

The central premise of Responsible AI is that in order for AI systems to be safe, accepted and trusted, the system should be designed to take ethical considerations into account and to consider the moral consequences of its actions and decisions, in accountable, responsible and transparent ways. Only then will their goals, their decisions, and the actions they take to achieve these, be closely aligned with human values.

Ethical considerations in the development of intelligent interactive systems are becoming one of the main influential areas of research in AI, and have led to several initiatives from both researchers and practitioners, including the IEEE initiative on Ethics of Autonomous Systems (http://standards.ieee.org/develop/indconn/ec/autonomous_systems), the Foundation for Responsible Robotics (http://responsiblerobotics.org/), and the Partnership on AI, which brings together the largest tech companies to advance public understanding and awareness of AI and its potential benefits and costs.

Responsible AI starts with design processes that ensure that design decisions are formulated explicitly rather than remaining implicit in procedures and objects. In particular, the values and value priorities of designers and stakeholders should be elicited in a participatory way that ensures that global aims and policies are clear, shared, and context-oriented. Value-Sensitive Design (VSD) methodologies, also known as Design for Values or Values in Design, are technology design approaches that take human values as the central focus of design [13, 28]. As such, VSD is an ideal candidate for the design of AI technology. The underlying premise is that design is never value free, and that the identification and explicit representation of the values underlying a design leads to better designs. Value-sensitive design enables engineers and developers to give conflicting social values a place in smart design and to combine them in such a way as to reach a win-win situation. Value-sensitive design is a theoretically grounded approach to the design of technology that accounts for human values in a principled, systematic and comprehensive manner. In this approach, values are linked to norms and to requirements through a for-the-sake-of specification link, describing how values are translated into norms and into requirements, thereby making the design decisions explicit. In the opposite direction, the links form an explicit constitutive relation [25], indicating which goal counts-as a norm, and which norm counts-as a value, in a given context.
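To make the for-the-sake-of and counts-as links more tangible, the following is a minimal sketch of how such a value hierarchy could be represented in code. It is an illustration only, not an implementation from the VSD literature; the classes, field names and the example requirement are hypothetical.

```python
# Hypothetical sketch of a Design-for-Values value hierarchy: values are refined
# into norms, and norms into concrete requirements ("for the sake of" going down,
# "counts-as" going up). All names and the example content are illustrative.
from dataclasses import dataclass, field
from typing import List


@dataclass
class Requirement:
    text: str                                   # concrete, verifiable design requirement


@dataclass
class Norm:
    text: str                                   # context-dependent prescription
    for_the_sake_of: str                        # the value this norm is specified for
    counts_as: List[Requirement] = field(default_factory=list)


@dataclass
class Value:
    name: str                                   # abstract value, e.g. "safety"
    norms: List[Norm] = field(default_factory=list)


# Example: translating the value "safety" for an autonomous vehicle context.
safety = Value("safety", norms=[
    Norm(
        text="the vehicle shall avoid harm to road users",
        for_the_sake_of="safety",
        counts_as=[Requirement("emergency braking activates within 100 ms")],
    )
])

# Walking the hierarchy makes the design decisions explicit and inspectable.
for norm in safety.norms:
    for req in norm.counts_as:
        print(f"'{req.text}' counts-as '{norm.text}', for the sake of {safety.name}")
```

Keeping these links explicit is what later allows a system, or its designers, to answer why a given requirement exists.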
Even though, being pieces of software and hardware, AI systems are basically tools, their increasing intelligence and interactive capabilities mean that AI systems are increasingly perceived, and expected to behave, as partners to their users, with the duties and responsibilities we expect from human teammates [8]. [30] proposes a pathway to engineering ethics in AI comprising operational, functional and full ethical behaviour. At the lowest level of ethical behaviour, Tools, such as search engines, have neither autonomy nor social awareness and are not considered to be ethical systems, but they incorporate in their design the values of their engineers, and are therefore said to have operational morality. As system autonomy and social awareness increase, Assistant systems are able to act independently in open environments with functional morality, i.e. they are sensitive to ethically relevant features of their environment, based on hard-wired ethical rules, resulting in autonomous agents that are able to adjust their actions to human norms. Most normative systems fall into this category [9, 5]. Finally, Artificial Moral Agents (AMA) are capable of self-reflection and can reason, argue and adjust their moral behavior to that of their partners and context.
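Purely as an illustration of this categorization, and not a mechanism proposed in [30], the three levels can be read as a coarse classification over a system's autonomy, social awareness and capacity for self-reflection; the thresholds and labels below are hypothetical.

```python
# Hypothetical sketch: mapping (illustrative) capability scores to the
# Tool / Assistant / AMA levels of moral agency discussed above.
from enum import Enum


class MoralAgency(Enum):
    OPERATIONAL = "operational morality (Tool)"
    FUNCTIONAL = "functional morality (Assistant)"
    FULL = "full moral agency (AMA)"


def classify(autonomy: float, social_awareness: float, can_self_reflect: bool) -> MoralAgency:
    """Map capability scores in 0..1 (thresholds are arbitrary) to a level of moral agency."""
    if can_self_reflect:
        return MoralAgency.FULL
    if autonomy > 0.5 and social_awareness > 0.5:
        return MoralAgency.FUNCTIONAL
    return MoralAgency.OPERATIONAL


print(classify(autonomy=0.2, social_awareness=0.1, can_self_reflect=False).value)
# -> operational morality (Tool)
```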
3 Ethical Theories

In order to build machines that follow ethical principles, we first need to understand the different ethical theories that can be applied to decision-making. Note that this paper focuses on ethical deliberation by AI systems, and not on other areas of AI ethics such as regulation and codes of conduct, or AI and robot rights.

Ethics (or Moral Philosophy) is concerned with questions of how people ought to act, and with the search for a definition of right conduct (identified as the one causing the greatest good) and the good life (in the sense of a life worth living, or a life that is satisfying or happy). From the perspective of understanding and applying ethical principles to the design of artificial systems, Normative Ethics (or Prescriptive Ethics) is of particular relevance. Normative ethics is the branch of ethics concerned with establishing how things should or ought to be, how to value them, which things are good or bad, and which actions are right or wrong. It attempts to develop a set of rules governing human conduct, or a set of norms for action. In the following, we briefly introduce Consequentialism, Deontology and Virtue Ethics as exemplary of the main schools of thought in Normative Ethics. For more information on Normative Ethics, we refer to e.g. the Stanford Encyclopedia of Philosophy (https://plato.stanford.edu). The aim is to show their different impact on possible agent deliberation approaches, rather than to be exhaustive. Normative ethical theories can be categorized into three main groups.

Consequentialism (or Teleological Ethics) argues that the morality of an action is contingent on the action's outcome or result. Thus, a morally right action is one that produces a good outcome or consequence. Consequentialist theories must consider questions like "What sort of consequences count as good consequences?", "Who is the primary beneficiary of moral action?", and "How are the consequences judged, and who judges them?"
Deontology is the normative ethical position that judges the morality of an action based on rules. This approach to ethics focuses on the rightness or wrongness of the action description that is used in the decision to act, as opposed to the rightness or wrongness of the consequences of those actions. It argues that decisions should be made considering one's duties and others' rights. Deontological systems are about having a set of rules to follow, i.e. they can be seen as a top-down approach to morality. Kant's Categorical Imperative roots morality in the rational capacities of people and asserts certain inviolable moral laws. Kant argues that to act in the morally right way, people must act according to duty, and that it is the motives of the person who carries out the action that make it right or wrong, not the consequences of the action.
Finally, Virtue Ethics focuses on the inherent character of a person rather than on the nature or consequences of specific actions performed. This theory identifies virtues (those habits and behaviours that allow a person to achieve well-being or a good life), counsels practical wisdom to resolve any conflicts between virtues, and claims that a lifetime of practicing these virtues leads to, or in effect constitutes, happiness and the good life. Virtue ethics indicates that regret is an appropriate response to a moral dilemma.

Note that our aim is not to provide a full landscape of ethical theories but to present the most exemplary alternatives applicable to AI reasoning. Other approaches suitable for AI, such as the principle of double effect (DDE), the principle of lesser evils and human rights ethics, can be seen as alternatives to the exemplary theories described above. These and other ethical theories are currently being considered in the discussion around the governance and legal position of AI, and have led to concrete proposals such as that of the Engineering and Physical Sciences Research Council (EPSRC) in the UK, listing a set of principles for designers, builders and users of robots in the real world, or the one currently under discussion by the European Parliament. Table 1 gives a comparison of these ethical theories.

Table 1: Comparison of Main Ethical Theories

Consequentialism
• Description: An action is right if it promotes the best consequences, i.e. where happiness is maximized.
• Central issue: The results matter, not the actions themselves.
• Guiding value: Good (often seen as maximum happiness).
• Practical reasoning: The best for most (means-ends reasoning).
• Deliberation focus: Consequences (what is the outcome of the action?).

Deontology
• Description: An action is right if it is in accordance with a moral rule or principle.
• Central issue: Persons must be ends in and of themselves and may never be used as means.
• Guiding value: Right (rationality is doing one's moral duty).
• Practical reasoning: Follow the rule (rational reasoning).
• Deliberation focus: Action (is the action compatible with the imperative?).

Virtue Ethics
• Description: An action is right if it is what a virtuous agent would do in the circumstances.
• Central issue: Emphasis on the character of the agent performing the action.
• Guiding value: Virtue (dispositions leading to the attainment of happiness).
• Practical reasoning: Practice human qualities (social practice).
• Deliberation focus: Motives (is the action motivated by virtue?).

AI systems that can deal with ethical reasoning should meet the following requirements, further discussed in Section 5:

• Representation languages rich enough to link domain knowledge and agent actions to the 'Value' central to the theory;
• Planning mechanisms appropriate to the Practical Reasoning prescribed by the theory;
• Deliberation capabilities to deal with the Focus of the theory.

Obviously, many architectures are possible that meet these requirements, and more research is needed to further elaborate on these issues. Our aim here is to provide a sketch of the possibilities rather than a full account of architectural and implementation characteristics; a minimal illustration is given at the end of this section. In Section 5, we describe the effect of the different ethical theories on the results of deliberation, taking as scenario the trolley problem introduced in Section 1.
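As a deliberately simplified illustration of these requirements, and not a proposed architecture, the sketch below encodes one possible action model and one deliberation function per theory, applied to the trolley scenario; the `Action` fields, scores and rule judgements are hypothetical.

```python
# Illustrative sketch: one deliberation function per ethical theory, each with a
# different focus (consequences, the action itself, or the motive behind it).
# The action model and all values below are invented for the example.
from dataclasses import dataclass
from typing import List


@dataclass
class Action:
    name: str
    motive: str               # e.g. "minimise harm" -- what Virtue Ethics inspects
    outcome_utility: float    # expected goodness of consequences -- Consequentialism
    violates_rules: bool      # does the action break a moral/legal rule? -- Deontology


def consequentialist(actions: List[Action]) -> Action:
    # Deliberation focus: consequences. Choose the best expected outcome.
    return max(actions, key=lambda a: a.outcome_utility)


def deontological(actions: List[Action]) -> Action:
    # Deliberation focus: the action itself. Keep only rule-compliant actions.
    permitted = [a for a in actions if not a.violates_rules]
    if not permitted:
        raise ValueError("no permissible action under the given rules")
    return permitted[0]


def virtue_based(actions: List[Action], virtues: List[str]) -> Action:
    # Deliberation focus: motives. Prefer actions motivated by a virtue.
    virtuous = [a for a in actions if a.motive in virtues]
    return virtuous[0] if virtuous else deontological(actions)


options = [
    Action("pull the lever", motive="minimise harm", outcome_utility=4.0, violates_rules=True),
    Action("do nothing", motive="respect persons", outcome_utility=1.0, violates_rules=False),
]
print(consequentialist(options).name)                   # pull the lever
print(deontological(options).name)                      # do nothing
print(virtue_based(options, ["respect persons"]).name)  # do nothing
```

Even this toy version makes the three requirements visible: the representation must carry the theory's central Value, the choice procedure mirrors its Practical Reasoning, and the attribute each function inspects is its Deliberation Focus.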
4 Ethical Reasoning in AI

Ethical theories provide an abstract account of the motives, questions and aims of moral reasoning. For their practical application, more is needed, namely an understanding of how and by whom deliberation is done, and of which moral and societal values are at the basis of deliberation. E.g. consequentialist approaches aim at 'the best for the most', but one needs to understand societal values in order to determine what counts as the 'best'. In fact, depending on the situation, this can be wealth, health, sustainability or another value. In this section, we turn our attention to the design of AI systems. We present several design options concerning who is responsible for the decision, and how decisions depend on the relative priority of different moral and societal values.
Even though most work on Artificial Moral Agents (AMA) refers to automated decision-making by the machine itself, in reality the spectrum of decision making is much wider, and in many cases the actual decision by the machine itself is limited. Depending on the level of autonomy and regulation, we identify four possible approaches to the design of decision-making mechanisms for autonomous systems, and indicate how these can be used for moral reasoning by the AI systems described in Section 2.2 (a minimal sketch of how such mechanisms might be selected is given after this list):

• Human control: in this case a person or group of persons is responsible for the decision. Different control levels can be identified, from that of an auto-pilot, where the system is in control and the human supervises, to that of a 'guardian angel', where the system supervises human action. From a design perspective, this approach requires means to ensure shared awareness of the situation, such that the person taking the decision has enough information at the time she must intervene. Such interactive control systems are also known as human-in-the-loop control systems [18]. This is the decision-making mechanism required for Tools.

• Regulation: here the decision is incorporated in, or constrained by, the systemic infrastructure of the environment. In this case, the environment ensures that the system never gets into a moral dilemma situation, i.e. the environment is regulated in such a way that deviation is made impossible, and therefore moral decisions by the autonomous system are not needed. This is the mechanism used in smart highways, linking road vehicles to their physical surroundings, where the road infrastructure controls the vehicles [20]. In this case, ethics are modeled as regulations and constraints so that systems can suffice with limited moral reasoning, as is the case for Assistants in the categorization in Section 2.2.

• Artificial Moral Agents (AMA): these are AI systems able to incorporate moral reasoning in their deliberation and to explain their behaviour in terms of moral concepts. An AMA [30] can autonomously evaluate the moral and societal consequences of its decisions and use this evaluation in its decision-making process. Here moral refers to principles regarding right and wrong, and explanation refers to algorithmic mechanisms that provide a qualitative understanding of the relationship between the system's beliefs and its decisions. This approach requires complex decision-making algorithms, based e.g. on deontic logics and/or reinforcement learning. These mechanisms ensure the full ethical behavior required of Partner systems.

• Random: the autonomous system randomly chooses its course of action when faced with a (moral) decision. The claim here is that if it is ethically problematic to choose between two wrongs, an elegant solution is to simply not make a deliberate choice (cf. Wired: https://goo.gl/FGKhE5). The Random mechanism can be seen as an approximation of human behavior and can be applied to any type of system. Interestingly, there is some empirical evidence that, under time pressure, people tend to choose justice and fairness over careful reasoning [4]. This behaviour could be implemented as a weak form of randomness.

These four classes of decision-makers differ in terms of Accountability, Responsibility and Transparency (ART).
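The sketch below, an illustration under assumed interfaces rather than an existing API, shows how the four approaches could be offered as interchangeable decision mechanisms; `ask_human`, `permitted` and `moral_score` are placeholders.

```python
# Hypothetical sketch of the four decision-making approaches as interchangeable
# strategies over a set of candidate actions. All interfaces are placeholders.
import random
from typing import Callable, List


def human_control(options: List[str], ask_human: Callable[[List[str]], str]) -> str:
    # The system ensures situational awareness and defers the decision to a person.
    return ask_human(options)


def regulation(options: List[str], permitted: Callable[[str], bool]) -> str:
    # The environment is regulated so that dilemmas do not arise: just comply.
    allowed = [o for o in options if permitted(o)]
    if not allowed:
        raise RuntimeError("the regulated environment left no permissible option")
    return allowed[0]


def artificial_moral_agent(options: List[str], moral_score: Callable[[str], float]) -> str:
    # The agent itself evaluates the moral and societal consequences of each option.
    return max(options, key=moral_score)


def random_choice(options: List[str]) -> str:
    # Deliberately refrain from making a deliberate choice.
    return random.choice(options)


options = ["pull the lever", "do nothing"]
print(artificial_moral_agent(options, moral_score=lambda o: 1.0 if o == "do nothing" else 0.5))
```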
One of the main challenges for moral reasoning is to determine which moral values to aim for and which ethical principles to adhere to in a given circumstance. Each individual and socio-cultural environment prioritizes different moral and societal values. Therefore, besides understanding how moral decisions are taken, using ethical theories, another aspect to consider is the cultural and individual values of the people and societies involved. Schwartz has demonstrated that moral values are quite consistent across cultures [23], but that cultures prioritize these values differently [24, 16]. Basic values refer to desirable goals that motivate action and transcend specific actions and situations, and can be classified along four dimensions: (i) Openness to change, (ii) Self-enhancement, (iii) Conservation, (iv) Self-transcendence. As such, values serve as criteria to guide the selection or evaluation of actions, taking into account the relative priority of values. Different value priorities will lead to different decisions in a self-driving vehicle dilemma scenario. E.g. a preference for Hedonism will more likely lead to choosing actions that protect the passenger, while Benevolence can lead to preferring actions that protect the pedestrians (a minimal illustration is sketched at the end of this section).

It is therefore important to identify the societal values that hold when determining the rules for moral deliberation by AMAs. Approaches based on crowd-sourcing or direct democracy can be used to elicit the values of the community, but should be taken with caution. In fact, as with the emperor interpreting the crowd's bidding at the circus, social acceptance does not always imply moral acceptability, and vice versa [29]. In [19] and [6] a social acceptance approach was followed to determine the most appropriate action by a robot in the trolley problem. This identified that people make different choices when put in the place of the public than in that of the vehicle owner. Moral acceptability can be determined using e.g. the Moral Foundations Questionnaire [15], which measures several ethical principles, including harm, fairness and authority. A combination of morality studies, e.g. according to ethical systems such as those described in Section 3, and the elicitation of the values held by the community can be of use here.
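To make the effect of value priorities concrete, the sketch below weighs the two dilemma actions against a value profile. The value names follow the Schwartz-style labels used above, but the weights and scores are invented for illustration.

```python
# Hypothetical sketch: different value priorities lead to different choices in the
# self-driving dilemma. Scores say how well each action serves each value.
from typing import Dict

actions: Dict[str, Dict[str, float]] = {
    "swerve (protect pedestrians)":    {"benevolence": 0.9, "hedonism": 0.1},
    "stay course (protect passenger)": {"benevolence": 0.2, "hedonism": 0.9},
}


def choose(action_scores: Dict[str, Dict[str, float]], priorities: Dict[str, float]) -> str:
    # Pick the action whose value profile best matches the given priorities.
    return max(
        action_scores,
        key=lambda a: sum(priorities.get(v, 0.0) * s for v, s in action_scores[a].items()),
    )


print(choose(actions, {"hedonism": 1.0, "benevolence": 0.3}))  # stay course (protect passenger)
print(choose(actions, {"benevolence": 1.0, "hedonism": 0.3}))  # swerve (protect pedestrians)
```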
5 Implementing Ethical Deliberation

In this section, we discuss the engineering of ethical deliberation mechanisms based on the different approaches described in Section 4.1 and how these meet the ART principles, and we discuss the effects of implementing the different ethical theories presented in Section 3 on the deliberation of AMAs.

Assuming that the development of AI systems follows a standard engineering cycle of Analysis - Design - Implement - Evaluate, taking a Design for Values approach basically means that the Analysis phase will need to include activities for (i) the identification of societal values, (ii) deciding on the moral deliberation approach (User control, Regulation or AMA), and (iii) methods to link values to formal system requirements, such as e.g. [27] or [1].

Table 2: Computational and ART consequences of ethical deliberation mechanisms

User Control
• Computational requirements: real-time reasoning; ensure situational awareness of the user; explanation capabilities; output the internal state in a way the user can understand.
• ART: delegated to the user.

Regulation
• Computational requirements: formal link from values to norms to behaviour; define institutions for monitoring and control; moral reasoning can be done off-line.
• ART: Accountability: institutional; Responsibility: institutional; Transparency: system (by requirement).

AMA
• Computational requirements: formal link from values to norms to behaviour; define reasoning rules; supervised learning of morality; real-time reasoning.
• ART: Accountability: system (by explanation); Responsibility: system (by deliberation); Transparency: system (by requirement).

Concerning how moral deliberation mechanisms could be implemented in AI systems, it should be noted that moral dilemmas do not have one optimal solution; the dilemma is exactly how to choose between two 'bad' options. As an abstract example of the many morally-oriented decisions that AMAs will need to take in all types of domains and situations, we use the classic trolley problem scenario as introduced in Section 1. That is, here the trolley scenario should be seen as a metaphor to highlight many of the ethical aspects of choices by machines, such as autonomous vehicles or care robots. The application of ethical theories to the trolley problem leads to different decisions. E.g. taking a Utilitarian, or Consequentialist, approach, the decision would be to save the largest number of lives, whereas the application of a Human Rights approach would lead to a decision not to switch the lever, as it is not for one to decide on the lives of others, given that each life is valuable in itself [26]. Moreover, to design an ethical deliberation mechanism, both Ethics (cf. Section 3) and Values (cf. Section 4.2) must be considered together. That is, the agent's capability to evaluate the context, together with its 'personality', will determine different orderings of the values and which ethical theory is most salient in a given situation. Also the order in which different aspects are evaluated or rules are applied can lead to very diverse decisions. This ordering is itself determined by the values and ethical principles that the system follows, and is influenced by the context of operation. For instance, when Conservation is the priority value, a decision based on Consequentialist theories would lead to saving the largest number of lives, whereas a Deontological approach would consider traffic laws and possibly also higher-level legal systems, such as human rights, and a Virtuous system would choose to take no action (as it would befit a virtuous person to do no harm deliberately).

From an implementation perspective, the different ethical theories differ in terms of the computational complexity of the required deliberation algorithms. To implement Consequentialist agents, reasoning about the consequences of actions is needed, which can be supported by e.g. dynamic logics. For Deontological agents, higher-order reasoning is needed to reason about the actions themselves, i.e. the agent must be aware of its own action capabilities and their relations to institutional norms, requiring e.g. deontic logics. Finally, Virtue agents need to reason about their own motives, which lead to actions, which lead to consequences; these are more complex modalities and require constructs to deal with regret, and creativity to apply learned solutions to new dilemmas.

All approaches raise their own specific computational problems, but they also raise a common problem of whether any computer (or human, for that matter) could ever gather and compare all the information that would be necessary for the theories to be applied in real time [2]. This problem seems especially acute for a Consequentialist approach, since the consequences of any action are essentially unbounded in space and time, and therefore a pragmatic decision must be taken on how far the system should go in evaluating possible consequences.
The problem does not go away for Deontological or Virtue approaches, because consistency between duties can typically also only be assessed through their effects in space and time. Reinforcement learning techniques can be applied as a means to analyze the evolution and adaptation of ethical behaviour, but this requires further research.

Most important is to understand how society will accept these decisions, and how the ART principles are addressed differently under each approach. In an empirical experiment, Malle found "differences both in the norms people impose on robots (expecting action over inaction) and the blame people assign to robots (less for acting, and more for failing to act)" [19]. As an illustration, Table 2 provides a small overview of the computational issues of the different moral deliberation approaches and how ART can be addressed. However, further research is needed to understand what the differences are in the acceptance of decisions driven by the different approaches.

Accountability requires both the function of guiding action (by forming beliefs and making decisions) and the function of explanation (by placing decisions in a broader context and by classifying them along moral values). To this effect, machine learning techniques can be used to classify states or actions as 'right' or 'wrong', basically in the same way as classifiers learn to distinguish between cats and dogs. Another approach to developing explanation methods is to apply evolutionary ethics [3] and structured argumentation models [21]. This makes it possible to create a modular explanation tree in which each node explains the nodes at lower levels, and each node encapsulates a specific reasoning module, treated as a black box. This moreover provides a model-agnostic approach, potentially able to deal with Transparency in stochastic, logic-based and data-based models in a uniform way. Further research is needed to verify this approach. Yet another approach is proposed in [14], based on pragmatic social heuristics instead of moral rules or maximization principles. This approach takes a learning perspective, integrating the initial ethical deliberation rules with adaptation to the context.
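As a purely illustrative sketch of such a modular explanation tree, and not the method of [3] or [21], each node below wraps one reasoning module, treated as a black box, and explains the nodes underneath it; module names and conclusions are hypothetical.

```python
# Hypothetical sketch of a modular, model-agnostic explanation tree: each node
# encapsulates one reasoning module (black box) and explains the nodes below it.
from dataclasses import dataclass, field
from typing import List


@dataclass
class ExplanationNode:
    module: str                      # name of the reasoning module (black box)
    conclusion: str                  # what this module concluded
    children: List["ExplanationNode"] = field(default_factory=list)

    def render(self, depth: int = 0) -> str:
        # Each level explains the level below it, regardless of whether the
        # underlying module is stochastic, logic-based or data-based.
        text = "  " * depth + f"{self.module}: {self.conclusion}\n"
        return text + "".join(child.render(depth + 1) for child in self.children)


tree = ExplanationNode(
    module="moral deliberation",
    conclusion="action 'brake' selected",
    children=[
        ExplanationNode("norm check (logic-based)", "'brake' violates no traffic norm"),
        ExplanationNode("outcome model (data-based)", "'brake' minimises expected harm"),
    ],
)
print(tree.render())
```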
6 Conclusions

In all areas of application, AI reasoning must be able to take into account societal values and moral and ethical considerations, weigh the respective priorities of values held by different stakeholders in multiple multicultural contexts, explain its reasoning, and guarantee transparency. As the capabilities for autonomous decision making grow, perhaps the most important issue to consider is the need to rethink responsibility. There is an urgent need to identify and formalize what autonomy and responsibility exactly mean when applied to machines. Whether one takes a moral agent approach, placing the whole responsibility with the developer, as advocated by some researchers [7], or an institutional regulatory approach, the fact is that the chain of responsibility is getting longer. Definitions of control and responsibility are needed that are able to deal with a larger distance between human control and system autonomy.

Nevertheless, increasingly, robots and intelligent agents will be taking decisions that can affect our lives and ways of living in smaller or larger ways. Being fundamentally artifacts, AI systems are fully under the control and responsibility of their owners or users. However, developments in autonomy and learning are rapidly enabling AI systems to decide and act without direct human control. That is, in dynamic environments, their adaptability can lead to situations in which the consequences of their decisions and actions will not always be possible to direct or predict.

In this paper, we proposed possible ways to implement ethics and human values into AI design. In particular, we proposed several approaches to responsibility: as a task of the human-in-the-loop, as part of the decision-making algorithm, or as part of the social, legal and physical infrastructures that enable interaction.

References
[1] Huib Aldewereld, S. Alvarez-Napagao, F. Dignum, and J. Vazquez-Salceda. Making norms concrete. In AAMAS 2010, pages 807–814, 2010.
[2] C. Allen, I. Smit, and W. Wallach. Artificial morality: Top-down, bottom-up, and hybrid approaches. Ethics and Information Technology, 7(3):149–155, 2005.
[3] Ken Binmore. Natural Justice. Oxford University Press, 2005.
[4] Fredrik Björklund. Differences in the justification of choices in moral dilemmas: Effects of gender, time pressure and dilemma seriousness. Scandinavian Journal of Psychology, 44(5):459–466, 2003.
[5] Guido Boella, L. van der Torre, and H. Verhagen. Introduction to normative multiagent systems. Computational & Mathematical Organization Theory, 12(2-3):71–79, 2006.
[6] Jean-François Bonnefon, Azim Shariff, and Iyad Rahwan. The social dilemma of autonomous vehicles. Science, 352(6293):1573–1576, 2016.
[7] Joanna Bryson and Philip Kime. Just an artifact: why machines are perceived as moral agents. In IJCAI'11, pages 1641–1646, 2011.
[8] Mario Conci, Virginia Dignum, Dirk Heylen, and Jonas Beskow. Roadmap for CASA (Computers As Social Actors) technologies. Final report, EIT ICT Labs activity RIHA 12124, 2012.
[9] Rosaria Conte, C. Castelfranchi, and F. Dignum. Autonomous norm acceptance. In International Workshop on Agent Theories, Architectures, and Languages, pages 99–112. Springer, 1998.
[10] Stephen Cranefield, M. Winikoff, V. Dignum, and F. Dignum. No pizza for you: Value-based plan selection in BDI agents, 2017.
[11] Luciano Floridi and J. Sanders. On the morality of artificial agents. Minds and Machines, 14(3):349–379, 2004.
[12] Philippa Foot. The problem of abortion and the doctrine of double effect. 1967.
[13] Batya Friedman, Peter Kahn, and Alan Borning. Value sensitive design and information systems. Advances in Management Information Systems, 6:348–372, 2006.
[14] Gerd Gigerenzer. Moral satisficing: Rethinking moral behavior as bounded rationality. Topics in Cognitive Science, 2(3):528–554, 2010.
[15] Jesse Graham, Brian Nosek, Jonathan Haidt, Ravi Iyer, Spassena Koleva, and Peter Ditto. Mapping the moral domain. Journal of Personality and Social Psychology, 101(2):366, 2011.
[16] Geert Hofstede and G. Hofstede. Culture's Consequences: Comparing Values, Behaviors, Institutions and Organizations. Sage, 2001.
[17] Ken Lay, O. C. Ferrell, and L. Ferrell. The responsibility and accountability of CEOs: The last interview with Ken Lay. Journal of Business Ethics, 100(2):209–219, 2011.
[18] Wenchao Li, Dorsa Sadigh, Shankar Sastry, and Sanjit Seshia. Synthesis for human-in-the-loop control systems, pages 470–484. Springer, 2014.
[19] Bertram Malle, Matthias Scheutz, Thomas Arnold, John Voiklis, and Corey Cusimano. Sacrifice one for the good of many?: People apply different moral norms to human and robot agents. In Proc. 10th ACM/IEEE Int. Conf. on Human-Robot Interaction, pages 117–124. ACM, 2015.
[20] James Misener and S. Shladover. PATH investigations in vehicle-roadside cooperation and safety: A foundation for safety and vehicle-infrastructure integration research. In Intelligent Transportation Systems Conference, 2006, pages 9–16. IEEE, 2006.
[21] Sanjay Modgil and Henry Prakken. A general account of argumentation with preferences. Artificial Intelligence, 195:361–397, 2013.
[22] Stuart Russell and P. Norvig. Artificial Intelligence: A Modern Approach. Pearson Education, 3rd edition, 2009.
[23] Shalom Schwartz. Basic human values: Theory, methods, and applications. Jerusalem Hebrew University, 2006.
[24] Shalom Schwartz. A theory of cultural value orientations: Explication and applications. Comparative Sociology, 5(2):137–182, 2006.
[25] John Searle. Making the Social World: The Structure of Human Civilization. Oxford University Press, 2010.
[26] Amartya Sen. Elements of a theory of human rights. Philosophy & Public Affairs, 32(4):315–356, 2004.
[27] Ibo van de Poel. Translating values into design requirements, pages 253–266. Springer Netherlands, Dordrecht, 2013.
[28] Jeroen van den Hoven. Design for values and values for design. Information Age +, Journal of the Australian Computer Society, 7(2):4–7, 2005.
[29] Ilse Verdiesen, Martijn Cligge, Jan Timmermans, Lennart Segers, Virginia Dignum, and Jeroen van den Hoven. MOOD: Massive open online deliberation platform, a practical application. In , pages 6–11, 2016.
[30] Wendell Wallach and Colin Allen. Moral Machines: Teaching Robots Right from Wrong. Oxford University Press, 2008.