
Publication


Featured research published by Deborah G. Johnson.


Ethics and Information Technology | 2006

Computer systems: Moral entities but not moral agents

Deborah G. Johnson

After discussing the distinction between artifacts and natural entities, and the distinction between artifacts and technology, the conditions of the traditional account of moral agency are identified. While computer system behavior meets four of the five conditions, it does not and cannot meet a key condition. Computer systems do not have mental states, and even if they could be construed as having mental states, they do not have intendings to act, which arise from an agent’s freedom. On the other hand, computer systems have intentionality, and because of this, they should not be dismissed from the realm of morality in the same way that natural objects are dismissed. Natural objects behave from necessity; computer systems and other artifacts behave from necessity after they are created and deployed, but, unlike natural objects, they are intentionally created and deployed. Failure to recognize the intentionality of computer systems and their connection to human intentionality and action hides the moral character of computer systems. Computer systems are components in human moral action. When humans act with artifacts, their actions are constituted by the intentionality and efficacy of the artifact which, in turn, has been constituted by the intentionality and efficacy of the artifact designer. All three components – artifact designer, artifact, and artifact user – are at work when there is an action and all three should be the focus of moral evaluation.


Ethics and Information Technology | 2008

Un-making artificial moral agents

Deborah G. Johnson; Keith W. Miller

Floridi and Sanders' seminal work, “On the morality of artificial agents,” has catalyzed attention around the moral status of computer systems that perform tasks for humans, effectively acting as “artificial agents.” Floridi and Sanders argue that the class of entities considered moral agents can be expanded to include computers if we adopt the appropriate level of abstraction. In this paper we argue that the move to distinguish levels of abstraction is far from decisive on this issue. We also argue that adopting certain levels of abstraction out of context can be dangerous when the level of abstraction obscures the humans who constitute computer systems. We arrive at this critique of Floridi and Sanders by examining the debate over the moral status of computer systems using the notion of interpretive flexibility. We frame the debate as a struggle over the meaning and significance of computer systems that behave independently, and not as a debate about the ‘true’ status of autonomous systems. Our analysis leads to the conclusion that while levels of abstraction are useful for particular purposes, when it comes to agency and responsibility, computer systems should be conceptualized and identified in ways that keep them tethered to the humans who create and deploy them.


technical symposium on computer science education | 2002

Is diversity in computing a moral matter?

Deborah G. Johnson; Keith W. Miller

Question: Is Under-Representation Immoral?

Women and some minorities are under-represented in academic computer science and in professional computing more generally. Evidence for this assertion appears elsewhere (e.g., [2]), and we won't go into the details here. Instead, we focus on the underlying moral issue: Is there anything wrong (immoral, unfair) with so few women and minorities studying or working in computing?

First we clear away two distractions. Most of the data about this issue focus on the U.S., partly because of the special importance of equality and equal opportunity in American democracy. This paper also focuses on the U.S. Nevertheless, the arguments here can apply to many other countries (see [4, 6] in this special issue). Second, our analysis focuses on women in computing, not under-represented minorities. The circumstances of women and minorities are in general quite different: some minorities appear to be over-represented in computing and other minorities appear under-represented. We believe, however, that the core ethical issue, fairness, is important to everyone involved in computing. In this editorial, we explicitly address the issue of under-representation for women, knowing that work remains to be done on the details regarding specific minorities.

We return to our original question: Is the under-representation of women in computer science immoral?

Some Say the Answer Is No

A straightforward argument (that we disagree with) can be made for dismissing this issue on grounds that the prevailing under-representation is not the result of unfairness. The argument goes as follows: Yes, there are relatively few women going into computer science. But this is not because women are discriminated against. Rather, they choose not to go into the field, despite the fact that universities and industry are actively recruiting them. If women simply do not want to go into computing and freely choose not to, then there is nothing unethical about the current situation.

This is not a convincing argument. For one thing, it claims that women are not discriminated against in computing on grounds that universities and industry appear to be actively recruiting them. The two, however, are not incompatible. It is possible that universities and industry are making conscious efforts to recruit women while at the same time placing those women in chilly environments in which they are overtly or covertly discriminated against. Official, overt discrimination has become less likely in the U.S. because of legislation and the potential to be sued. Still, there are other ways to make under-represented groups unwelcome and uncomfortable.

Nevertheless, for the sake of analysis, let's temporarily assume that there is no unfair discrimination in hiring and no conscious effort to discourage women from entering the field of computing. What comes into focus then is the interesting possibility that there is something about the field of computing that makes it less attractive to women. Suppose for the sake of argument that women on average find computing less enjoyable and more unpleasant than other activities. If this were true and if these features were essential to computing, then we might simply accept the current situation. We could claim that there is a mismatch between the interests and desires of women and the nature of computing, and leave it at that. No need to change anything.

To use an analogy, suppose there is a particular job that requires a worker to have a shaved head, and further suppose that women were less inclined than men to shave their heads to get the job. Here we would say that an inherent feature of the work influences choices, and the low number of women is not indicative of discrimination or unfairness. However, the ethical character of the situation shifts dramatically if we discover that the shaved-head requirement is not truly essential to the job, or could be removed with a trivial design change in the workplace. Then we would wonder why this requirement was there at all. Furthermore, if we found that the requirement was in fact detrimental to the effectiveness of the employee and didn't improve the profits of the employer, we would surely wonder about the fairness and the wisdom of the requirement. This analogy suggests that the situation with regard to women in computing may be more subtle than overt discrimination. Are the disincentives women experience inherent to computing? Are the aspects of computing that women find distasteful essential? Or ...


Ethics and Information Technology | 2005

Computer Systems and Responsibility: A Normative Look at Technological Complexity

Deborah G. Johnson; Thomas M. Powers

In this paper, we focus attention on the role of computer system complexity in ascribing responsibility. We begin by introducing the notion of technological moral action (TMA). TMA is carried out by the combination of a computer system user, a system designer (developers, programmers, and testers), and a computer system (hardware and software). We discuss three sometimes overlapping types of responsibility: causal responsibility, moral responsibility, and role responsibility. Our analysis is informed by the well-known accounts provided by Hart and Hart and Honoré. While these accounts are helpful, they have misled philosophers and others by presupposing that responsibility can be ascribed in all cases of action simply by paying attention to the free and intended actions of human beings. Such accounts neglect the part played by technology in ascriptions of responsibility in cases of moral action with technology. For both moral and role responsibility, we argue that ascriptions of both causal and role responsibility depend on seeing action as complex in the sense described by TMA. We conclude by showing how our analysis enriches moral discourse about responsibility for TMA.


Ethics and Information Technology | 2014

Negotiating autonomy and responsibility in military robots

Merel Noorman; Deborah G. Johnson

Central to the ethical concerns raised by the prospect of increasingly autonomous military robots are issues of responsibility. In this paper we examine different conceptions of autonomy within the discourse on these robots to bring into focus what is at stake when it comes to the autonomous nature of military robots. We argue that due to the metaphorical use of the concept of autonomy, the autonomy of robots is often treated as a black box in discussions about autonomous military robots. When the black box is opened up and we see how autonomy is understood and ‘made’ by those involved in the design and development of robots, the responsibility questions change significantly.


Archive | 2011

Software Agents, Anticipatory Ethics, and Accountability

Deborah G. Johnson

This chapter takes up a case study of the accountability issues around increasingly autonomous computer systems. In this early phase of their development, certain computer systems are being referred to as “software agents” or “autonomous systems” because they operate in a variety of ways that are seemingly independent of human control. However, because of the responsibility and liability issues, conceptualizing these systems as autonomous seems morally problematic and likely to be legally problematic. Whether software agents and autonomous systems are used to make financial decisions, control transportation, or perform military objectives, when something goes wrong, issues of accountability will indubitably arise. While it would seem that the law will ultimately have to handle these issues, law is currently being used only minimally or indirectly to address accountability for computer software failure. This nascent discussion of computer systems “in the making” seems a good focal point for considering innovative approaches to making law, governance, and ethics more helpful with regard to new technologies. For a start, it would seem that some anticipatory reasoning as to how accountability/liability issues are likely to be handled in law could have an influence on the development of the technology (even if the anticipatory thinking is ultimately wrong). Such thinking could – in principle at least – shape the design of computer systems.


Trends in Biotechnology | 2010

The role of ethics in science and engineering

Deborah G. Johnson

It is generally thought that science and engineering should never cross certain ethical lines. The idea connects ethics to science and engineering, but it frames the relationship in a misleading way. Moral notions and practices inevitably influence and are influenced by science and engineering. The important question is how such interactions should take place. Anticipatory ethics is a new approach that integrates ethics into technological development.


IEEE Technology and Society Magazine | 2003

Virtual harms and real responsibility

Chuck Huff; Deborah G. Johnson; Keith W. Miller

This is a preliminary sorting of the ethical and metaphysical issues arising from virtual reality environments. The lessons we learned seem promising, but more analysis remains. Perhaps the most important lesson is that virtual actions and interactions have consequences for flesh-and-blood persons, and hence the flesh-and-blood controllers of virtual action, whether they control directly (as in playing a character) or indirectly (as in designing a virtual world), have responsibilities for their actions.


Communications of The ACM | 2008

Computer experts: guns-for-hire or professionals?

Deborah G. Johnson

Considering the responsibilities of those who build systems fundamental to significant social functions, institutions, and values.


acm symposium on applied computing | 2006

A dialogue on responsibility, moral agency, and IT systems

Deborah G. Johnson; Keith W. Miller

The dialogue that follows was written to express some of our ideas and remaining questions about IT systems, moral agency, and responsibility. We seem to have made some progress on some of these issues, but we haven't come to anything close to agreement on several important points. While the issues are becoming more clearly drawn, what we have discovered so far is closer to a web of connecting ideas than to formal claims or final conclusions.

Collaboration


Deborah G. Johnson's top co-authors:

Merel Noorman, Royal Netherlands Academy of Arts and Sciences

Keith W. Miller, University of Illinois at Springfield

Nael Barakat, Grand Valley State University