Mark Coeckelbergh
University of Vienna
Publications
Featured research published by Mark Coeckelbergh.
Ethics and Information Technology | 2010
Mark Coeckelbergh
Should we grant rights to artificially intelligent robots? Most current and near-future robots do not meet the hard criteria set by deontological and utilitarian theory. Virtue ethics can avoid this problem with its indirect approach. However, both direct and indirect arguments for moral consideration rest on ontological features of entities, an approach which incurs several problems. In response to these difficulties, this paper taps into a different conceptual resource in order to be able to grant some degree of moral consideration to some intelligent social robots: it sketches a novel argument for moral consideration based on social relations. It is shown that to further develop this argument we need to revise our existing ontological and social-political frameworks. It is suggested that we need a social ecology, which may be developed by engaging with Western ecology and Eastern worldviews. Although this relational turn raises many difficult issues and requires more work, this paper provides a rough outline of an alternative approach to moral consideration that can assist us in shaping our relations to intelligent robots and, by extension, to all artificial and biological entities that appear to us as more than instruments for our human purposes.
Ethics and Information Technology | 2011
Mark Coeckelbergh
Nussbaum’s version of the capability approach is not only a helpful approach to development problems but can also be employed as a general ethical-anthropological framework in ‘advanced’ societies. This paper explores its normative force for evaluating information technologies, with a particular focus on the issue of human enhancement. It suggests that the capability approach can be a useful way to specify a workable and adequate level of analysis in human enhancement discussions, but argues that any interpretation of what these capabilities mean is itself dependent on (interpretations of) the techno-human practices under discussion. This challenges the capability approach’s means-end dualism concerning the relation between technology, on the one hand, and humans and capabilities, on the other. It is argued that instead of facing a choice between development and enhancement, we had better reflect on how we want to shape human-technological practices, for instance by using the language of capabilities. For this purpose, we have to engage in a cumbersome hermeneutics that interprets dynamic relations between unstable capabilities, technologies, practices, and values. This requires us to modify the capability approach by highlighting and interpreting its interpretative dimension.
Ethics and Information Technology | 2013
Mark Coeckelbergh
Ethical reflection on drone fighting suggests that this practice creates not only physical distance but also moral distance: far removed from one’s opponent, it becomes easier to kill. This paper discusses this thesis, frames it as a moral-epistemological problem, and explores the role of information technology in bridging and creating distance. Inspired by a broad range of conceptual and empirical resources, including the ethics of robotics, psychology, phenomenology, and media reports, it is first argued that drone fighting, like other long-range fighting, creates epistemic and moral distance in so far as ‘screenfighting’ implies the disappearance of the vulnerable face and body of the opponent and thus removes moral-psychological barriers to killing. However, the paper also shows that this influence is at least weakened by current surveillance technologies, which make possible a kind of ‘empathic bridging’ by which the fighter’s opponent on the ground is re-humanized, re-faced, and re-embodied. This ‘mutation’ or unintended ‘hacking’ of the practice is a problem for drone pilots and for those who order them to kill, but revealing its moral-epistemic possibilities opens up new avenues for imagining morally better ways of technology-mediated fighting.
Science, Technology, & Human Values | 2006
Mark Coeckelbergh
A prima facie analysis suggests that there are essentially two, mutually exclusive, ways in which risk arising from engineering design can be managed: by imposing external constraints on engineers or by engendering their feelings of responsibility and respecting their autonomy. The author discusses the advantages and disadvantages of both approaches. However, he then shows that this opposition is a false one and that there is no simple relation between regulation and autonomy. Furthermore, the author argues that the most pressing need is not more or less regulation but the further development of moral imagination. The enhancement of moral imagination can help engineers to discern the moral relevance of design problems, to create new design options, and to envisage the possible outcomes of their designs. The author suggests a dual program of developing regulatory frameworks that support engineers’ autonomy and responsibility simultaneously with efforts to promote their moral imagination. He describes how some existing institutional changes have moved in this direction and proposes empirical research to take this further.
International Journal of Social Robotics | 2011
Mark Coeckelbergh
This paper argues that our understanding of many human-robot relations can be enhanced by comparisons with human-animal relations and by a phenomenological approach which highlights the significance of how robots appear to humans. Some potential gains of this approach are explored by discussing the concepts of alterity, diversity, and change in human-robot relations, Heidegger’s claim that animals are ‘poor in world’, and the issue of robot-animal relations. These philosophical reflections result in a perspective on human-robot relations that may guide robot design and inspire more empirical human-robot relations research that is sensitive to how robots appear to humans in different contexts at different times.
International Journal of Social Robotics | 2009
Mark Coeckelbergh
The development of pet robots, toy robots, and sex robots suggests a near-future scenario of habitual living with ‘personal’ robots. How should we evaluate their potential impact on the quality of our lives and existence? In this paper, I argue for an approach to the ethics of personal robots that advocates a methodological turn from robots to humans, from mind to interaction, from intelligent thinking to social-emotional being, from reality to appearance, from right to good, from external criteria to goods internal to practice, and from theory to experience and imagination. First I outline what I take to be a common approach to roboethics, then I sketch the contours of an alternative methodology: ethics of personal robots as an ethics of appearance, human good, experience, and imagination. The result is a sketch of an empirically informed anthropocentric ethics that aims at understanding and evaluating what robots do to humans as social and emotional beings in virtue of their appearance, in particular how they may contribute to human good and human flourishing. Starting from concrete experience and practice and being sufficiently sensitive to individual and cultural differences, this approach invites us to be attentive to how human good emerges in human–robot interaction and to imagine possibilities of living with personal robots that help to constitute good human lives.
Ethics and Information Technology | 2010
Mark Coeckelbergh
Can we build ‘moral robots’? If morality depends on emotions, the answer seems negative. Current robots do not meet standard necessary conditions for having emotions: they lack consciousness, mental states, and feelings. Moreover, it is not even clear how we might ever establish whether robots satisfy these conditions. Thus, at most, robots could be programmed to follow rules, but it would seem that such ‘psychopathic’ robots would be dangerous since they would lack full moral agency. However, I will argue that in the future we might nevertheless be able to build quasi-moral robots that can learn to create the appearance of emotions and the appearance of being fully moral. I will also argue that this way of drawing robots into our social-moral world is less problematic than it might first seem, since human morality also relies on such appearances.
Philosophy of Engineering and Technology | 2013
Mark Coeckelbergh
Part I. Descriptive Anthropology of Vulnerability
Chapter 1. The Transhumanist Challenge
Chapter 2. An Anthropology of Vulnerability
Chapter 3. Cultures and Transformations of Vulnerability
Part II. Normative Anthropology of Vulnerability
Chapter 4. Ethics of Vulnerability (1): Implications for Ethics of Technology
Chapter 5. Ethics of Vulnerability (2): Imagining the Posthuman Future
Chapter 6. Ethics of Vulnerability (3): Vulnerability in the Information Age
Chapter 7. Politics of Vulnerability: Freedom, Justice, and the Public/Private Distinction
Chapter 8. Normative Aesthetics of Vulnerability: The Art of Coping with Vulnerability
Conclusion
AI & Society | 2009
Mark Coeckelbergh
Contemporary technology creates a proliferation of nonhuman artificial entities such as robots and intelligent information systems. Sometimes they are called ‘artificial agents’. But are they agents at all? And if so, should they be considered as moral agents and be held morally responsible? They do things to us in various ways, and what happens can be and has to be discussed in terms of right and wrong, good or bad. But does that make them agents or moral agents? And who is responsible for the consequences of their actions? The designer? The user? The robot?

Standard moral theory has difficulties in coping with these questions for several reasons. First, it generally understands agency and responsibility as individual and undistributed. I will not further discuss this issue here. Second, it is tailored to human agency and human responsibility, excluding non-humans. It makes a strong distinction between (humans as) subjects and objects, between humans and animals, between ends (aim, goal) and means (instrument), and sometimes between the moral and the empirical sphere. Moral agency is seen as an exclusive feature of (some) humans. But if non-humans (natural and artificial) have such an influence on the way we lead our lives, it is undesirable and unhelpful to exclude them from moral discourse.

In this paper, I explore how we can include artificial agents in our moral discourse without giving up the ‘folk’ intuition that humans are somehow special with regard to morality, that there is a special relation between humanity and morality—whatever that means. Giving up this view happens if we lower the threshold for moral agency (which I take Floridi and Sanders to do), or if we call artefacts ‘moral’ in virtue of what they do (which I take Verbeek to do in his interpretation of Latour and others) or in virtue of the value we ascribe to them (which I take Magnani to do). I propose an alternative route, which replaces the question about how ‘moral’ non-human agents really are by the question about the moral significance of appearance. Instead of asking about what kind of ‘mind’ or brain states non-humans really have to count as moral agents (approach 1), about what they really do to us (approach 2), or about what value they really have (approach 3), I propose to redirect our attention to the various ways in which non-humans, and in particular robots, appear to us as agents, and how they influence us in virtue of this appearance. Thus, I leave the question regarding the moral status of non-humans open and make room for a study of the moral significance of how humans perceive artificial non-humans such as robots and are influenced by that perception in their interaction with these entities and in their beliefs about these entities. In particular, I argue that humans are justified in ascribing virtual moral agency and moral responsibility to those nonhumans that appear similar to themselves—and to the extent that they appear so—and in acting according to this belief.

Thinking about non-humans implies that we reconsider our views about humans. My project in that domain is to shift at least some of our philosophical attention in moral anthropology from what we really are (as opposed to nonhumans) to anthropomorphology: the human form, what we appear to be, and how other beings appear to us given (our projections and recreations of) the human form. I want to make plausible that it is not their intentional state, but their performance, that counts morally.
Ethics and Information Technology | 2012
Mark Coeckelbergh
Can we trust robots? Responding to the literature on trust and e-trust, this paper asks if the question of trust is applicable to robots, discusses different approaches to trust, and analyses some preconditions for trust. In the course of the paper a phenomenological-social approach to trust is articulated, which provides a way of thinking about trust that puts less emphasis on individual choice and control than the contractarian-individualist approach. In addition, the argument is made that while robots are neither human nor mere tools, we have sufficient functional, agency-based, appearance-based, social-relational, and existential criteria left to evaluate trust in robots. It is also argued that such evaluations must be sensitive to cultural differences, which affect how we interpret the criteria and how we think of trust in robots. Finally, it is suggested that when it comes to shaping conditions under which humans can trust robots, fine-tuning human expectations and robotic appearances is advisable.