
Publication


Featured research published by John P. Sullins.


Ethics and Information Technology | 2010

RoboWarfare: can robots be more ethical than humans on the battlefield?

John P. Sullins

Telerobotically operated and semiautonomous machines have become a major component in the arsenals of industrial nations around the world. By the year 2015 the United States military plans to have one-third of its combat aircraft and ground vehicles robotically controlled. Although there are many reasons for the use of robots on the battlefield, perhaps the most interesting assertion is that these machines, if properly designed and used, will result in a more just and ethical implementation of warfare. This paper will focus on these claims by looking at what has been discovered about the capability of humans to behave ethically on the battlefield, and then comparing those findings with the claims made by robotics researchers that their machines are able to behave more ethically on the battlefield than human soldiers. Throughout the paper we will explore the philosophical critique of this claim and also look at how the robots of today are affecting our ability to fight wars in a just manner.


IEEE Transactions on Affective Computing | 2012

Robots, Love, and Sex: The Ethics of Building a Love Machine

John P. Sullins

This paper will explore the ethical impacts of the use of affective computing by engineers and roboticists who program their machines to mimic and manipulate human emotions in order to evoke loving or amorous reactions from their human users. We will see that it does seem plausible that some people might buy a love machine if it were created, but it is argued here that principles from machine ethics have a role to play in the design of these machines. This is best achieved by applying what is known about the philosophy of love, the ethics of loving relationships, and the philosophical value of the erotic in the early design stage of building robust artificial companions. The paper concludes by proposing certain ethical limits on the manipulation of human psychology when it comes to building sex robots and in the simulation of love in such machines. In addition, the paper argues that the attainment of erotic wisdom is an ethically sound goal and that it provides more to loving relationships than only satisfying physical desire. This fact may limit the possibility of creating a machine that can fulfill all that one should want out of erotic love unless a machine can be built that would help its user attain this kind of love.


Ethics and Information Technology | 2005

Ethics and Artificial Life: From Modeling to Moral Agents

John P. Sullins

Artificial Life (ALife) has two goals. The first attempts to describe fundamental qualities of living systems through agent-based computer models; the second studies whether or not we can artificially create living things in computational media, realized either virtually in software or through biotechnology. The study of ALife has recently branched into two further subdivisions: “dry” ALife, which is the study of living systems “in silico” through the use of computer simulations, and “wet” ALife, which uses biological material to realize what has only been simulated on computers; in effect, wet ALife uses biological material as a kind of computer. This is challenging to the field of computer ethics, as it points towards a future in which computer ethics and bioethics might have shared concerns. The emerging studies into wet ALife are likely to provide strong empirical evidence for ALife’s most challenging hypothesis: that life is a certain set of computable functions that can be duplicated in any medium. I believe this will propel ALife into the midst of the mother of all cultural battles that has been gathering around the emergence of biotechnology. Philosophers need to pay close attention to this debate and can serve a vital role in clarifying and resolving the dispute. But even if ALife is merely a computer modeling technique that sheds light on living systems, it still has a number of significant ethical implications, such as its use in the modeling of moral and ethical systems and in the creation of artificial moral agents.


Archive | 2008

Friends by Design: A Design Philosophy for Personal Robotics Technology

John P. Sullins

Small robotic appliances are beginning the process of home automation. Following the lead of the affective computing movement begun by Professor Rosalind Picard in 1995 at the MIT Media Lab, roboticists have also begun pursuing affective robotics: robotics that uses simulated emotions and other human expressions and body language to help the machine better interact with its users. Here I will trace the evolution of this design philosophy and present arguments that critique and expand it using concepts gleaned from the phenomenology of artifacts as described in the literature of the philosophy of technology.


Financial Cryptography | 2010

Ethical proactive threat research

John Aycock; John P. Sullins

Through a provocative examination of the positive effects of computer security research on regular users, we argue that traditional security research is insufficient. Instead, we turn to a largely untapped alternative, proactive threat research, a fruitful research area but an ethical minefield. We discuss practices for ethical research and dissemination of proactive research.


IEEE Symposium on Security and Privacy | 2014

A Case Study in Malware Research Ethics Education: When Teaching Bad is Good

John P. Sullins

There is growing interest in malware research in the context of cyber-security. In this paper I will present a case study that outlines the curriculum used to teach malware ethics within a computer science course that teaches students malware programming techniques. Issues from computer and information ethics that apply most closely to ethical malware research will be highlighted. The topics discussed in the course will be outlined and assessment techniques will be discussed.


ETHICS '14 Proceedings of the IEEE 2014 International Symposium on Ethics in Engineering, Science, and Technology | 2014

The ethics of coexistence: can I learn to stop worrying and love the logic bomb?

John Aycock; Anil Somayaji; John P. Sullins

Computer security attacks are frequent fodder for ethical analyses, but the ethics of computer security defenses are not often examined. We address this by considering a topical problem in computer security. In an age of so-called “advanced persistent threats” that lurk undetected on computer systems for long periods of time, it is increasingly unrealistic to expect a computer system to be permanently free of malicious software. Recognizing this, we posit the idea of a “cosecure system”: a system that, by design, would allow legitimate software and malicious software to coexist safely on the same machine. We take an unusual tack to software design and use ethical concerns to guide the design of a cosecure system, rather than building a cosecure system and then performing an ex post facto ethical analysis. The principal tenets of security that must be upheld are confidentiality, integrity, and availability, and any system purporting to be secure has an ethical duty to the system user to uphold these. This is the starting point for our design process, and we proceed to look at how a cosecure system might be implemented. What we arrive at by going through this ethics-based software design becomes a proof by contradiction: we are forced to conclude that it is not possible, in fact, for malicious and legitimate software to coexist; a cosecure system as we have described it cannot be built. This allows us to see traditional computer security defenses in a new light. If we cannot uphold key security properties in the best case, where a system is expressly designed to allow coexistence of malicious and legitimate software, what does that imply about the defenses of the actual computer systems we use? We propose that a community defense is an alternative that eludes the previous ethical issues, as well as being defensible from an information ethics point of view.


Archive | 2014

Deception and Virtue in Robotic and Cyber Warfare

John P. Sullins

Informational warfare is fundamentally about automating the human capacity for deceit and lies. This poses a significant problem for the ethics of informational warfare: if we want to maintain our commitments to just and legal warfare, how can we build systems based on what would normally be considered unethical behavior in a way that enhances, rather than degrades, our commitments to social justice? Is there such a thing as a virtuous lie in the context of warfare? Given that no war is ever fully just or ethical, and that navigating the near-instantaneous life-and-death decisions necessitated by modern conflicts fully taxes the moral intuitions of even the best-trained and well-intentioned war fighters, we need accurate analysis of whether or not we can construct informational technologies that help us make more ethical decisions on the battlefield. In this chapter I will focus on the fact that robots and other artificial agents will need to understand and utilize deception in order to be useful on the virtual and actual battlefield. At the same time, these agents must maintain the virtues required of an informational agent, such as the ability to retain the trust of all those who interact with them. To further this analysis it is important to realize that the moral virtues required of an artificial agent are very different from those required of a human moral agent. Some of the major differences are that a virtuous artificial agent need only reveal its intentions to legitimate users, and in many situations it is actually morally obliged to keep some data confidential from certain users. In many circumstances cyber warfare systems must resist the attempts of other agents, human or otherwise, to change their programming or stored data.
Given the specific virtues we must program into our cyber warfare systems, we will find that, in comparison to human agents, who have many other drives and motivations that can complicate issues of trust, artificial agents are far less complex and morally ambiguous. Thus it is conceivable that artificial agents could actually be more successful at navigating the moral paradox of the virtuous lie often necessitated by military conflict.


Archive | 2013

Roboethics and Telerobotic Weapons Systems

John P. Sullins

A technology is used ethically when it is intelligently controlled to further a moral good. From this we can extrapolate that the ethical use of telerobotic weapons technology occurs when that technology is intelligently controlled and advances a moral action. This paper deals with the first half of the conjunction: can telerobotic weapons systems be intelligently controlled? At the present time it is doubtful that these conditions are being fully met. I suggest some ways in which this situation could be improved.


ACM Sigcas Computers and Society | 1999

Artificial knowing: gender and the thinking machine

John P. Sullins


Collaboration


Top co-authors of John P. Sullins and their affiliations.

Joseph O. Chapa, United States Air Force Academy
Thomas Burri, University of St. Gallen
Filippo Santoni de Sio, Delft University of Technology