Ron Chrisley
University of Sussex
Publications
Featured research published by Ron Chrisley.
Artificial Intelligence | 2003
Ron Chrisley
Mike Anderson has given us a thoughtful and useful field guide: not in the genre of a bird-watcher’s guide, which is carried in the field and contains detailed descriptions of possible sightings, but in the sense of a guide to a field (in this case embodied cognition) which aims to identify that field’s general principles and properties. I’d like to make some comments that will hopefully complement Anderson’s work, highlighting points of agreement and disagreement between his view of the field and my own, and acting as a devil’s advocate in places where further discussion seems to be required. Given the venue for this guide, we can safely restrict the discussion to embodied artificial intelligence (EAI), even if such work draws on notions of embodied cognition from the fields of philosophy, psychology and linguistics. In particular, I’ll restrict my discussion to the impact that embodiment can have on the task of creating artificial intelligent agents, either as technological ends in themselves, or as means to understanding natural intelligent systems, or both.
Minds and Machines | 1994
Ron Chrisley
Some have suggested that there is no fact of the matter as to whether or not a particular physical system realizes a particular computational description. This suggestion has been taken to imply that computational states are not “real”, and cannot, for example, provide a foundation for the cognitive sciences. In particular, Putnam has argued that every ordinary open physical system realizes every abstract finite automaton, implying that the fact that a particular computational characterization applies to a physical system does not tell one anything about the nature of that system. Putnam’s argument is scrutinized, and found inadequate because, among other things, it employs a notion of causation that is too weak. I argue that if one’s view of computation involves embeddedness (inputs and outputs) and full causality, one can avoid the universal realizability results. Therefore, the fact that a particular system realizes a particular automaton is not a vacuous one, and is often explanatory. Furthermore, I claim that computation would not necessarily be an explanatorily vacuous notion even if it were universally realizable.
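To make the embedded, input/output-driven notion of computation concrete, here is a minimal Python sketch (my own illustration, not from the paper): a two-state Mealy-style automaton whose realization conditions require that inputs causally drive transitions and outputs, so that merely relabelling the states of an arbitrary open system would not qualify. The class and the parity example are hypothetical.

```python
# A minimal sketch (not from the paper) of an "embedded" finite automaton:
# realization requires that inputs causally drive state transitions and
# that states plus inputs causally determine outputs, not merely that
# physical states can be relabelled as computational ones.

class EmbeddedAutomaton:
    """A Mealy-style machine: transitions and outputs depend on input."""

    def __init__(self, transitions, outputs, start):
        self.transitions = transitions  # (state, input) -> next state
        self.outputs = outputs          # (state, input) -> output
        self.state = start

    def step(self, inp):
        out = self.outputs[(self.state, inp)]
        self.state = self.transitions[(self.state, inp)]
        return out

# Hypothetical two-state parity checker: realizing *this* automaton is a
# substantive fact about a system, because its behaviour must co-vary
# counterfactually with every possible input sequence.
parity = EmbeddedAutomaton(
    transitions={("even", 0): "even", ("even", 1): "odd",
                 ("odd", 0): "odd", ("odd", 1): "even"},
    outputs={("even", 0): "even", ("even", 1): "odd",
             ("odd", 0): "odd", ("odd", 1): "even"},
    start="even",
)
print([parity.step(b) for b in [1, 1, 0, 1]])  # ['odd', 'even', 'even', 'odd']
```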
Artificial Intelligence in Medicine | 2008
Ron Chrisley
OBJECTIVE Consciousness is often thought to be that aspect of mind that is least amenable to being understood or replicated by artificial intelligence (AI). The first-personal, subjective, what-it-is-like-to-be-something nature of consciousness is thought to be untouchable by the computations, algorithms, processing and functions of AI methods. Since AI is the most promising avenue toward artificial consciousness (AC), the conclusion many draw is that AC is even more doomed than AI supposedly is. The objective of this paper is to evaluate the soundness of this inference. METHODS The results are achieved by means of conceptual analysis and argumentation. RESULTS AND CONCLUSIONS It is shown that pessimism concerning the theoretical possibility of artificial consciousness is unfounded, based as it is on misunderstandings of AI and a lack of awareness of the possible roles AI might play in accounting for or reproducing consciousness. This is done by making some foundational distinctions relevant to AC, and using them to show that some common reasons given for AC scepticism do not touch some of the (usually neglected) possibilities for AC, such as prosthetic, discriminative, practically necessary, and lagom (necessary-but-not-sufficient) AC. Along the way, three strands of the author's work in AC (interactive empiricism, synthetic phenomenology, and ontologically conservative heterophenomenology) are used to illustrate and motivate the distinctions and the defences of AC they make possible.
Connectionist Models: Proceedings of the 1990 Summer School | 1991
Ron Chrisley
The Connectionist Navigational Map (CNM) is a parallel distributed processing architecture for the learning and use of robot spatial maps. It is shown here how a robot can, using a recurrent network (the CNM predictive map), learn a model of its environment that allows it to predict what sensations it would have if it were to move in a particular way. It is shown how this predictive ability can be used (via the CNM orienting system) to enable the robot to determine its current location. This ability, in turn, can be used, when given a desired sensation, to generate sequences of goal states that provide a route to a place with the desired sensory properties. This sequence is given to the CNM's inverse model, which in turn generates a sequence of actions that effects the desired state transitions, thus providing a sort of “content-addressable” planning capability. Finally, the theoretical motivation behind this work is discussed.
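As a rough illustration of the predictive-map idea, the following sketch shows how a recurrent network could map a current sensation and action to a predicted next sensation, and how prediction error could support the orienting step. This is an assumption-laden sketch with an Elman-style recurrence, not the original CNM implementation; all names, sizes and weights are placeholders, and training is omitted.

```python
# A minimal sketch (illustrative, not the original CNM) of a recurrent
# "predictive map": given the current sensation and a chosen action,
# predict the next sensation.

import numpy as np

rng = np.random.default_rng(0)
N_SENSE, N_ACT, N_HIDDEN = 8, 4, 16  # hypothetical sizes

# Untrained Elman-style recurrent layer; in practice these weights would
# be learned from the robot's sensorimotor experience.
W_in = rng.normal(scale=0.1, size=(N_HIDDEN, N_SENSE + N_ACT))
W_rec = rng.normal(scale=0.1, size=(N_HIDDEN, N_HIDDEN))
W_out = rng.normal(scale=0.1, size=(N_SENSE, N_HIDDEN))

def predict(sensation, action, hidden):
    """One step of the predictive map: (s_t, a_t, h_t) -> (s_hat_{t+1}, h_{t+1})."""
    x = np.concatenate([sensation, action])
    hidden = np.tanh(W_in @ x + W_rec @ hidden)
    return W_out @ hidden, hidden

def localize(candidates, action, observed):
    """Orienting by prediction error: among candidate locations (given as
    sensation vectors), pick the one whose predicted next sensation best
    matches what the robot actually senses. Hidden state is simplified to
    zeros here."""
    errs = [np.linalg.norm(predict(s, action, np.zeros(N_HIDDEN))[0] - observed)
            for s in candidates]
    return int(np.argmin(errs))
```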
Artificial Intelligence Review | 1994
Ron Chrisley
It is claimed that there are pre-objective phenomena, which cognitive science should explain by employing the notion of non-conceptual representational content. It is argued that a match between parallel distributed processing (PDP) and non-conceptual content (NCC) not only provides a means of refuting recent criticisms of PDP as a cognitive architecture; it also provides a vehicle for NCC that is required by naturalism. A connectionist cognitive mapping algorithm is used as a case study to examine the affinities between PDP and NCC.
Archive | 2009
Amir Hussain; Igor Aleksander; Leslie S. Smith; Allan Kardec Barros; Ron Chrisley; Vassilis Cutsuridis
Brain Inspired Cognitive Systems 2008 (June 24-27, 2008; São Luís, Brazil) brought together leading scientists and engineers who use analytic, syntactic and computational methods both to understand the prodigious processing properties of biological systems and, specifically, of the brain, and to exploit such knowledge to advance computational methods towards ever higher levels of cognitive competence. This book includes the papers presented at four major symposia:
Part I - Cognitive Neuroscience
Part II - Biologically Inspired Systems
Part III - Neural Computation
Part IV - Models of Consciousness
Archive | 1998
Ron Chrisley
I show that, despite a recent argument to the contrary, connectionist representations can be non-compositional. This is not because they have context-sensitive constituents, but rather because they sometimes have no constituents at all. The argument to be rejected depends on the assumption that one can only assign propositional contents to representations if one starts by assigning sub-propositional contents to atomic representations. I give some philosophical arguments and present a counterexample to show that this assumption is mistaken.
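One way to picture a representation with no constituents: assign whole propositions to arbitrary distributed patterns, so that no part of a pattern carries sub-propositional content. This is only an illustrative sketch, not the paper's counterexample; the propositions, dimensions and decoding scheme are made up.

```python
# An illustrative sketch (not the paper's counterexample) of a
# connectionist code that assigns whole propositions to distributed
# patterns with no sub-propositional constituents: no slice of a vector
# corresponds to "John", "loves", or "Mary".

import numpy as np

rng = np.random.default_rng(1)
propositions = ["John loves Mary", "Mary loves John", "It is raining"]

# Each proposition gets an arbitrary dense pattern; content is assigned
# at the propositional level directly, not built from atomic parts.
codebook = {p: rng.normal(size=32) for p in propositions}

def decode(pattern):
    """Recover the proposition whose stored pattern best matches
    (highest dot product), without any decomposition into parts."""
    return max(codebook, key=lambda p: codebook[p] @ pattern)

noisy = codebook["Mary loves John"] + rng.normal(scale=0.3, size=32)
print(decode(noisy))  # "Mary loves John"
```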
bioRxiv | 2018
Matt Jaquiery; Nora Andermane; Ron Chrisley
People routinely fail to notice that things have changed in a visual scene if they do not perceive the changes as they occur, a phenomenon known as ‘change blindness’ (1,2). The majority of lab-based change blindness studies use static stimuli and require participants to identify simple changes such as alterations in stimulus orientation or scene composition. This study uses a ‘flicker’ paradigm adapted for dynamic stimuli, which allowed for both simple orientation changes and more complex trajectory changes. Participants were required to identify a moving rectangle which underwent one of these changes against a background of moving rectangles which did not. The results demonstrated that participants’ ability to correctly identify the target deteriorated with the presence of a visual mask and a larger number of distractor objects, consistent with findings in previous change blindness work. The study provides evidence that the flicker paradigm can be used to induce change blindness with dynamic stimuli, and that changes to predictable trajectories are detected or missed in much the same way as orientation changes.
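A minimal sketch of the trial structure such a dynamic flicker paradigm might use: display frames of moving rectangles alternate with blank mask frames, and the target changes orientation or trajectory behind the mask. Timings, counts and field names are hypothetical; rendering and response collection are omitted.

```python
# Illustrative trial generator for a dynamic flicker paradigm
# (parameters hypothetical; not the study's actual code).

import random

def make_trial(n_distractors, change_type, display_ms=240, mask_ms=80):
    """Return (target_id, frame_schedule) for one trial."""
    rects = [{"id": i,
              "angle": random.uniform(0, 180),     # orientation in degrees
              "heading": random.uniform(0, 360)}   # movement direction
             for i in range(n_distractors + 1)]
    target = random.choice(rects)
    schedule = []
    for cycle in range(10):            # alternate display and mask frames
        schedule.append(("display", display_ms, [dict(r) for r in rects]))
        schedule.append(("mask", mask_ms, None))
        if change_type == "orientation":   # change occurs behind the mask
            target["angle"] = (target["angle"] + 90) % 180
        else:                              # trajectory change
            target["heading"] = (target["heading"] + 45) % 360
    return target["id"], schedule

target_id, frames = make_trial(n_distractors=8, change_type="trajectory")
```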
Archive | 2015
Steve Torrance; Ron Chrisley
It is suggested that some limitations of current designs for medical AI systems (be they autonomous or advisory) stem from the failure of those designs to address issues of artificial (or machine) consciousness. Consciousness would appear to play a key role in the expertise, particularly the moral expertise, of human medical agents, including, for example: autonomous weighting of options in diagnosis; planning treatment; use of imaginative creativity to generate courses of action; sensorimotor flexibility and sensitivity; empathetic and morally appropriate responsiveness; and so on. Thus, it is argued, a plausible design constraint for a successful ethical machine medical or care agent is for it to at least model, if not reproduce, relevant aspects of consciousness and associated abilities. In order to provide theoretical grounding for such an enterprise, we examine some key philosophical issues that concern the machine modelling of consciousness and ethics, and we show how questions relating to the first of these, the machine modelling of consciousness, are relevant to medical machine ethics. We believe that this will overcome a blanket skepticism concerning the relevance of understanding consciousness to the design and construction of artificial ethical agents for medical or care contexts. It is thus argued that it would be prudent for designers of machine medical ethics (MME) agents to reflect on issues to do with consciousness and medical (moral) expertise; to become more aware of relevant research in the field of machine consciousness; and to incorporate insights gained from these efforts into their designs.
International Journal of Machine Consciousness | 2014
Ron Chrisley
A critique of some central themes in Pentti Haikonen's recent book, Consciousness and Robot Sentience, is offered. Haikonen maintains that the crucial question concerning consciousness is how the inner workings of the brain or an artificial system can appear, not as inner workings, but as subjective experience. It is argued here that Haikonen's own account fails to answer this question, and that the question is not in fact the right one to ask anyway. It is argued that making the required changes to the question reveals an important lacuna in Haikonen's explanation of consciousness.