Steve Torrance
University of Sussex
Publications
Featured research published by Steve Torrance.
Ai & Society | 2008
Steve Torrance
In what ways should we include future humanoid robots, and other kinds of artificial agents, in our moral universe? We consider the Organic view, which maintains that artificial humanoid agents, based on current computational technologies, could not count as full-blooded moral agents, nor as appropriate targets of intrinsic moral concern. On this view, artificial humanoids lack certain key properties of biological organisms, which preclude them from having full moral status. Computationally controlled systems, however advanced in their cognitive or informational capacities, are, it is proposed, unlikely to possess sentience, and hence will be unable to exercise the kind of empathic rationality that is a prerequisite for being a moral agent. The Organic view also argues that sentience and teleology require biologically based forms of self-organization and autonomous self-maintenance. The Organic view may not be correct, but at least it needs to be taken seriously in the future development of the field of Machine Ethics.
Ai & Society | 2013
Steve Torrance
I discuss the realizability and the ethical ramifications of Machine Ethics, from a number of different perspectives: I label these the anthropocentric, infocentric, biocentric and ecocentric perspectives. Each of these approaches takes a characteristic view of the position of humanity relative to other aspects of the designed and the natural worlds—or relative to the possibilities of ‘extra-human’ extensions to the ethical community. In the course of the discussion, a number of key issues emerge concerning the relation between technology and ethics, and the nature of what it is to have moral status. Some radical challenges to certain technological presuppositions and ramifications of the infocentric approach will be discussed. Notwithstanding the obvious tensions between the infocentric perspective on one side and the biocentric and ecocentric perspectives on the other, we will see that there are also striking parallels in the way that each of these three approaches generates challenges to an anthropocentric ethical hegemony, and possible scope for some degree of convergence.
Archive | 2010
Steve Torrance
Ai & Society | 2008
Steve Torrance
As future AI developments become more ambitious, numerous ethical questions have come to the forefront. Advisory and decision systems in many areas have increasing moral ramifications, and ethical requirements are becoming ever more important in a wide range of professional and commercial domains. Computer-based support systems in those domains raise ever more acute questions. How far can we expect such systems to participate in difficult moral decision-making? Could we ever expect such systems to have moral autonomy in some sense, or to provide us with moral advice that we should regard as authoritative, or even as overriding human-generated moral views? What ends might be met by simulating moral decision-making in machines? What functions would be involved in any such simulation? What are the most appropriate methods for implementation? Under what circumstances might we expect computer-based decision systems to be given any kind of moral responsibility or accountability for their judgments or actions? This special issue covers the ethical ramifications of artificial agents in the widest sense. At one end of the spectrum, papers consider the practical implications of current developments in AI systems: the possibilities and limitations inherent in current or foreseeable future technologies. At the other extreme, there are discussions of broad issues concerning humanoid and other forms of possible artificial agents in the remote future, as envisaged in seminal science fiction.
Archive | 2015
Steve Torrance; Ron Chrisley
It is suggested that some limitations of current designs for medical AI systems (be they autonomous or advisory) stem from the failure of those designs to address issues of artificial (or machine) consciousness. Consciousness appears to play a key role in the expertise, particularly the moral expertise, of human medical agents: for example, in the autonomous weighting of options in diagnosis; planning treatment; the use of imaginative creativity to generate courses of action; sensorimotor flexibility and sensitivity; empathetic and morally appropriate responsiveness; and so on. Thus, it is argued, a plausible design constraint for a successful ethical machine medical or care agent is that it at least model, if not reproduce, relevant aspects of consciousness and associated abilities. In order to provide theoretical grounding for such an enterprise, we examine some key philosophical issues concerning the machine modelling of consciousness and ethics, and we show how questions relating to the first of these, the machine modelling of consciousness, are relevant to medical machine ethics. We believe that this will overcome a blanket skepticism concerning the relevance of understanding consciousness to the design and construction of artificial ethical agents for medical or care contexts. It is thus argued that it would be prudent for designers of medical machine ethics (MME) agents to reflect on issues to do with consciousness and medical (moral) expertise; to become more aware of relevant research in the field of machine consciousness; and to incorporate insights gained from these efforts into their designs.
Phenomenology and The Cognitive Sciences | 2009
Giovanna Colombetti; Steve Torrance
Archive | 2011
Steve Torrance; Tom Froese
Archive | 2007
Robert Clowes; Steve Torrance; Ron Chrisley
Archive | 2005
Marek McGann; Steve Torrance
International Journal of Machine Consciousness | 2012
Steve Torrance