Merel Noorman
Royal Netherlands Academy of Arts and Sciences
Publication
Featured research published by Merel Noorman.
Ethics and Information Technology | 2014
Merel Noorman; Deborah G. Johnson
Central to the ethical concerns raised by the prospect of increasingly autonomous military robots are issues of responsibility. In this paper we examine different conceptions of autonomy within the discourse on these robots to bring into focus what is at stake when it comes to the autonomous nature of military robots. We argue that due to the metaphorical use of the concept of autonomy, the autonomy of robots is often treated as a black box in discussions about autonomous military robots. When the black box is opened up and we see how autonomy is understood and ‘made’ by those involved in the design and development of robots, the responsibility questions change significantly.
Science and Engineering Ethics | 2014
Merel Noorman
Abstract The prospect of increasingly autonomous military robots has raised concerns about the obfuscation of human responsibility. This paper argues that whether, and to what extent, human actors are and will be considered responsible for the behavior of robotic systems is and will be the outcome of ongoing negotiations between the various human actors involved. These negotiations are about what technologies should do and mean, but they are also about how responsibility should be interpreted and how it can best be assigned or ascribed. The notion of responsibility practices, as the paper shows, provides a conceptual tool to examine these negotiations as well as the interplay between technological development and the ascription of responsibility. To illustrate the dynamics of responsibility practices, the paper explores how the introduction of unmanned aerial vehicles has led to (re)negotiations about responsibility practices, focusing particularly on negotiations within the US Armed Forces.
Archive | 2014
Deborah G. Johnson; Merel Noorman
This chapter takes as its starting place that artefacts, in combination with humans, constitute human action and social practices, including moral actions and practices. Our concern is with what is regarded as a moral agent in these actions and practices. Ideas about artefactual ontology, artefactual agency, and artefactual moral agency are intertwined. Discourse on artefactual agency and artefactual moral agency seems to draw on three different conceptions of agency. The first has to do with the causal efficacy of artefacts in the production of events and states of affairs. The second can be thought of as acting for or on behalf of another entity; agents are those who perform tasks for others and/or represent others. The third conception of agency has to do with autonomy and is often used to ground discourse on morality and what it means to be human. The causal efficacy and acting-for conceptions of agency are used to ground intelligible accounts of artefactual moral agency. Accounts of artefactual moral agency that draw on the autonomy conception of agency, however, are problematic when they use an analogy between human moral autonomy and some aspect of artefacts as the basis for attributing to artefacts the status associated with moral autonomy.
Information services & use | 2014
Peter Linde; Merel Noorman; Bridgette Wessels; Thordis Sveinsdottir
In this paper we address the questions of what and where the value of open access to research data might be and how libraries and related stakeholders can contribute to achieving the benefits of freely sharing data. In particular, the emphasis is on how libraries need to acquire the competence for collaboration to train and encourage researchers and library staff to work with open data. The paper is based on the early results of the RECODE project, an EU FP7 project that addresses the drivers and barriers in developing open access to research data in Europe (http://www.recodeproject.eu).
Responsible Innovation 3 | 2017
Merel Noorman; Tsjalling Swierstra; Dorien Zandbergen
Responsible Innovation (RI) is a normative conception of technology development, which hopes to improve upon prevailing practices. One of its key principles is the active involvement of a broad range of stakeholders in deliberations in order to better embed innovations in society. In this paper, we examine the applicability of this principle in corporate settings and in smaller scale technological projects. We do so in the context of a case study focused on an innovation project of a start-up organisation with social aspirations. We describe our failed attempts to introduce RI-inspired stakeholder engagement approaches and articulate the ‘reasonable reasons’ why the organisation rejected these approaches. We then examine the methods that the organisation adopted to be responsive to various stakeholders’ needs and values. Based on our analysis, we argue that there is a need for the field of RI to explore additional and alternative ways to address issues of stakeholder commitment and inclusion, in order to make RI’s deliberative ideals more applicable to the rapid, fluid, partial, and provisional style of deliberation and decision making that we found in corporate contexts.
international conference on electronic publishing | 2014
Bridgette Wessels; Thordis Sveinsdottir; Peter Linde; Merel Noorman
In this paper we address the questions of what and where the value of open access to research data might be and how libraries and related stakeholders can contribute to achieving the benefits of freely sharing data. In particular, the emphasis is on how libraries need to acquire the competence for collaboration to train and encourage researchers and library staff to work with open data. The paper is based on the early results of the RECODE project, an EU FP7 project that addresses the drivers and barriers in developing open access to research data in Europe (http://www.recodeproject.eu).
ETHICS '14 Proceedings of the IEEE 2014 International Symposium on Ethics in Engineering, Science, and Technology | 2014
Deborah G. Johnson; Merel Noorman
A survey of popular, technical and scholarly literature suggests that autonomous artificial agents will populate the future. Although some visions may seem fanciful, autonomous artificial agents are being designed, built, and deployed in a wide range of sectors. The specter of future artificial agents - with more learning capacity and more autonomy - raises important questions about responsibility. Can anyone (any humans) be responsible for the behavior of entities that learn as they go and operate autonomously? This paper takes as its starting place that humans are and always should be held responsible for the behavior of machines, even machines that learn and operate autonomously. In order to prevent evolution to a future in which no humans are thought to be responsible for the behavior of artificial agents, four principles are proposed, principles that should be kept in mind as artificial agents are developed.
IEEE Technology and Society Magazine | 2014
Deborah G. Johnson; Merel Noorman
Proceedings of the 2008 conference on Current Issues in Computing and Philosophy | 2008
Merel Noorman