Publication


Featured research published by Gillian M. Hayes.


Intelligent Robots and Systems | 2002

Talking to Godot: dialogue with a mobile robot

Christian Theobalt; Johan Bos; Tim Chapman; Arturo Espinosa-Romero; Mark Fraser; Gillian M. Hayes; Ewan Klein; Tetsushi Oka; Richard Reeve

Godot is a mobile robot platform that serves as a testbed for the interface between a sophisticated low-level robot navigation system and a symbolic high-level spoken dialogue system. The interesting feature of this combined system is that information flows in two directions: (1) the navigation system supplies landmark information from the cognitive map, which is used for the interpretation of the user's utterances in the dialogue system; and (2) the semantic content of utterances analysed by the dialogue system is used to adjust probabilities about the robot's position in the navigation system.
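As a rough illustration of this two-way flow, the sketch below reweights a position belief when the user mentions a landmark. All names (update_position_belief, landmark_likelihood) are hypothetical; this is not the Godot codebase.

```python
def update_position_belief(belief, landmark_likelihood, mentioned_landmark):
    """Bayesian-style reweighting of the robot's position belief.

    belief: dict mapping map cell -> prior probability
    landmark_likelihood: dict mapping (cell, landmark) -> P(landmark visible | cell)
    mentioned_landmark: landmark name extracted from the user's utterance
    """
    # Reweight each cell by how likely the mentioned landmark is visible there.
    posterior = {
        cell: p * landmark_likelihood.get((cell, mentioned_landmark), 0.01)
        for cell, p in belief.items()
    }
    total = sum(posterior.values())
    return {cell: p / total for cell, p in posterior.items()}
```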


International Conference on Artificial Neural Networks | 1997

Learning to Communicate Through Imitation in Autonomous Robots

Aude Billard; Gillian M. Hayes

Communication is a desirable skill for robots. We describe a method by which this skill could be learned. A connectionist control architecture combining life-long learning and predefined behaviours is developed and implemented in a physical system of two autonomous robots. A teaching scenario based on movement imitation is used to teach a basic, non-grammatical language. Teaching proceeds from a teacher robot to a student robot: while following the teacher, the student robot learns a basic vocabulary concerning its movements and location.
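A toy sketch of how such word-movement associations could accumulate while the student follows the teacher is shown below; the names are illustrative and this is not the authors' connectionist architecture.

```python
from collections import defaultdict

class VocabularyLearner:
    def __init__(self):
        # Association strength between (word, movement) pairs.
        self.assoc = defaultdict(float)

    def observe(self, heard_word, current_movement, rate=0.1):
        # Strengthen the link between the word the teacher emits and
        # the movement the student is executing while following it.
        self.assoc[(heard_word, current_movement)] += rate

    def interpret(self, word, movements):
        # Pick the movement most strongly associated with the word.
        return max(movements, key=lambda m: self.assoc[(word, m)])
```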


Adaptive Behavior | 2013

Hedonic value: enhancing adaptation for motivated agents

Ignasi Cos; Lola Cañamero; Gillian M. Hayes; Andrew Gillies

Reinforcement learning (RL) in the context of artificial agents is typically used to produce behavioral responses as a function of the reward obtained by interaction with the environment. When the problem consists of learning the shortest path to a goal, it is common to use reward functions yielding a fixed value after each decision, for example a positive value if the target location has been attained and a negative value at each intermediate step. However, this fixed strategy may be overly simplistic for agents to adapt to dynamic environments, in which resources may vary from time to time. By contrast, there is significant evidence that most living beings internally modulate reward value as a function of their context to expand their range of adaptivity. Inspired by the potential of this operation, we present a review of its underlying processes and introduce a simplified formalization for artificial agents. The performance of this formalism is tested by monitoring the adaptation of an agent endowed with a motivated actor–critic model, incorporating our formalization of value and constrained by physiological stability, to environments with different resource distributions. Our main result shows that the manner in which reward is internally processed as a function of the agent's motivational state strongly influences the adaptivity of the generated behavioral cycles and the agent's physiological stability.
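As a minimal sketch of the central idea, assuming a single physiological variable with a setpoint, reward can be scaled by the current deficit so the same resource is worth more to a needier agent; the function below is illustrative, not the paper's implementation.

```python
def hedonic_reward(raw_value, level, setpoint=1.0):
    """Internally modulated reward: the same resource is worth more
    when the physiological variable is far below its setpoint."""
    deficit = max(0.0, setpoint - level)
    return raw_value * deficit

# Example: food is worth more to a hungrier agent.
print(hedonic_reward(1.0, level=0.2))  # strong deficit -> high reward
print(hedonic_reward(1.0, level=0.9))  # near setpoint -> low reward
```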


Adaptive Behavior | 2005

An Architecture for Behavior-Based Reinforcement Learning

George Konidaris; Gillian M. Hayes

This paper introduces an integration of reinforcement learning and behavior-based control designed to produce real-time learning in situated agents. The model layers a distributed and asynchronous reinforcement learning algorithm over a learned topological map and standard behavioral substrate to create a reinforcement learning complex. The topological map creates a small and task-relevant state space that aims to make learning feasible, while the distributed and asynchronous aspects of the architecture make it compatible with behavior-based design principles. We present the design, implementation and results of an experiment that requires a mobile robot to perform puck foraging in three artificial arenas using the new model, random decision making, and layered standard reinforcement learning. The results show that our model is able to learn rapidly on a real robot in a real environment, learning and adapting to change more quickly than both alternatives. We show that the robot is able to make the best choices it can given its drives and experiences using only local decisions and therefore displays planning behavior without the use of classical planning techniques.
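The sketch below illustrates the general idea of reinforcement learning layered over a topological map, with map nodes as states and behaviors as actions. It is a single-threaded, one-step approximation with hypothetical names; it deliberately omits the distributed and asynchronous machinery that is central to the paper's architecture.

```python
import random
from collections import defaultdict

Q = defaultdict(float)  # (map node, behavior) -> learned value

def choose_behavior(node, behaviors, epsilon=0.1):
    # Epsilon-greedy selection over the behaviors available at this node.
    if random.random() < epsilon:
        return random.choice(behaviors)
    return max(behaviors, key=lambda b: Q[(node, b)])

def update(node, behavior, reward, next_node, behaviors, alpha=0.1, gamma=0.9):
    # One-step Q-learning update over topological-map states.
    best_next = max(Q[(next_node, b)] for b in behaviors)
    Q[(node, behavior)] += alpha * (reward + gamma * best_next - Q[(node, behavior)])
```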


Adaptive Behavior | 2010

Learning Affordances of Consummatory Behaviors: Motivation-Driven Adaptive Perception

Ignasi Cos; Lola Cañamero; Gillian M. Hayes

This article introduces a formalization of the dynamics between sensorimotor interaction and homeostasis, integrated in a single architecture to learn object affordances of consummatory behaviors. We also describe the principles necessary to learn grounded knowledge in the context of an agent and its surrounding environment, which we use to investigate the constraints imposed by the agent’s internal dynamics and the environment. This is tested with an embodied, situated robot, in a simulated environment, yielding results that support this formalization. Furthermore, we show that this methodology allows learned affordances to be dynamically redefined, depending on object similarity, resource availability, and the rhythms of the agent’s internal physiology. For example, if a resource becomes increasingly scarce, the value assigned by the agent to its related effect increases accordingly, encouraging a more active behavioral strategy to maintain physiological stability. Experimental results also suggest that a combination of motivation-driven and affordance learning in a single architecture should simplify its overall complexity while increasing its adaptivity.
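A small sketch of the scarcity effect described above, with assumed names: the value the agent assigns to an object's effect is scaled by the inverse of how often the related resource has recently been encountered.

```python
def effect_value(base_effect, encounter_rate, floor=0.05):
    """Scale an object's learned effect by inverse resource availability.

    encounter_rate: recent frequency of finding the resource (0..1).
    """
    scarcity = 1.0 / max(encounter_rate, floor)  # rarer -> larger multiplier
    return base_effect * scarcity

print(effect_value(0.5, encounter_rate=0.8))  # abundant -> modest value
print(effect_value(0.5, encounter_rate=0.1))  # scarce -> inflated value
```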


International Conference on Robotics and Automation | 2002

A tale of two filters: on-line novelty detection

Paul A. Crook; Stephen Marsland; Gillian M. Hayes; Ulrich Nehmzow

For mobile robots, as well as other learning systems, the ability to highlight unexpected features of their environment - novelty detection - is very useful. One particularly important application for a robot equipped with novelty detection is inspection, highlighting potential problems in an environment. In this paper two novelty filters, both of which are capable of on-line and off-line novelty detection, are compared for two robot inspection tasks, one using sonar and the other camera images. The benefits and problems of using each of the filters are discussed and demonstrated.
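For intuition, here is a much-simplified, habituation-style novelty filter, not either of the paper's two filters: percepts near an already-habituated prototype count as familiar, anything else is flagged as novel and remembered.

```python
import math

class NoveltyFilter:
    def __init__(self, threshold=1.0):
        self.prototypes = []   # list of [vector, habituation] pairs
        self.threshold = threshold

    def observe(self, x):
        # Find the nearest stored prototype.
        best, dist = None, float("inf")
        for proto in self.prototypes:
            d = math.dist(x, proto[0])
            if d < dist:
                best, dist = proto, d
        if best is None or dist > self.threshold:
            # Nothing similar seen before: flag as novel and remember it.
            self.prototypes.append([list(x), 0.0])
            return True
        # Habituate: repeated exposure makes the percept less novel.
        best[1] = min(1.0, best[1] + 0.2)
        return best[1] < 0.5
```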


Neural Processing Letters | 2000

Learning Synaptic Clusters for Nonlinear Dendritic Processing

Michael W. Spratling; Gillian M. Hayes

Nonlinear dendritic processing appears to be a feature of biological neurons and would also be of use in many applications of artificial neural networks. This paper presents a model of an initially standard linear node which uses unsupervised learning to find clusters of inputs within which inactivity at one synapse can occlude the activity at the other synapses.
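A hand-wired sketch of the resulting node is given below; the unsupervised learning rule that finds the clusters is omitted, and the grouping is supplied by hand for illustration. Within a cluster, an inactive synapse occludes the others, so each cluster acts as a soft AND before the linear sum.

```python
def cluster_node(inputs, weights, clusters):
    """inputs, weights: lists of synaptic activities and weights.
    clusters: list of index groups, e.g. [[0, 1], [2, 3, 4]]."""
    total = 0.0
    for group in clusters:
        gate = min(inputs[i] for i in group)  # inactivity occludes the cluster
        total += gate * sum(weights[i] * inputs[i] for i in group)
    return total

# The inactive synapse 1 occludes synapse 0; only the second cluster fires.
print(cluster_node([1.0, 0.0, 1.0], [0.5, 0.5, 0.5], [[0, 1], [2]]))  # 0.5
```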


Robotics and Autonomous Systems | 1998

Design, analysis and comparison of robot learners

Jeremy L. Wyatt; John Hoar; Gillian M. Hayes

This paper outlines some ideas as to how robot learning experiments might best be designed. There are three principal findings: (i) in order to evaluate robot learners we must employ multiple evaluation methods together; (ii) in order to measure the performance of a learning algorithm in any absolute way, we must characterise the complexity of the underlying decision task formed by the interaction of the agent, task and environment; (iii) this goal is in fact too difficult to attain in practice, so progress in robot learning must rely on comparative work. Four methods for agent analysis are presented. These are used to analyse a robot that learns to push boxes using reinforcement learning. Using these techniques we show that Q(λ)-learning outperforms one-step Q-learning on a typical robot task, and that the differences are statistically significant. We emphasise the importance of experimental design in order to integrate the various forms of evaluation.
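The core algorithmic difference between the two learners compared here is eligibility traces: Q(λ) propagates each temporal-difference error back along recently visited state-action pairs, while one-step Q-learning credits only the latest pair. A generic sketch (not the paper's implementation) follows.

```python
from collections import defaultdict

Q = defaultdict(float)
trace = defaultdict(float)

def q_lambda_update(s, a, reward, s_next, actions,
                    alpha=0.1, gamma=0.9, lam=0.8):
    best_next = max(Q[(s_next, b)] for b in actions)
    delta = reward + gamma * best_next - Q[(s, a)]
    trace[(s, a)] += 1.0                      # mark the visited pair
    for key in list(trace):
        Q[key] += alpha * delta * trace[key]  # credit the whole trajectory
        trace[key] *= gamma * lam             # decay older credit
```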


Simulation of Adaptive Behavior | 2008

Evolution of Valence Systems in an Unstable Environment

Matthijs Snel; Gillian M. Hayes

We compare the performance of drive- versus perception-based motivational systems in an unstable environment. We investigate the hypothesis that valence systems (systems that evaluate the positive or negative nature of events) based on internal physiology have an advantage over systems based purely on external sensory input. Results show that including internal drive levels in the valence system's input significantly improves performance. Furthermore, a valence system based purely on internal drives outperforms a system that additionally uses perceptual input. We provide arguments for why this is so and relate our architecture to brain areas involved in animal learning.
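A minimal sketch of a drive-based valence signal, with illustrative names rather than the paper's architecture: an event is positive when it reduces internal deficits and negative when it increases them.

```python
def drive_based_valence(deficits_before, deficits_after):
    """Sum of deficit reductions across all drives; the sign gives the
    positive/negative nature of the event, with no external sensing."""
    return sum(b - a for b, a in zip(deficits_before, deficits_after))

# Eating reduces the hunger deficit -> positive valence.
print(drive_based_valence([0.8, 0.2], [0.3, 0.2]))   # 0.5
# Exertion increases the fatigue deficit -> negative valence.
print(drive_based_valence([0.3, 0.2], [0.3, 0.6]))   # -0.4
```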


European Conference on Artificial Life | 2005

Valency for adaptive homeostatic agents: relating evolution and learning

Theodoros Damoulas; Ignasi Cos-Aguilera; Gillian M. Hayes; Tim Taylor

This paper introduces a novel study of valency as a vital process for achieving adaptation in agents through evolution and developmental learning. Unlike previous studies, we hypothesise that behaviour-related information must be underspecified in the genes and that additional mechanisms such as valency modulate the final behavioural responses. These processes endow the agent with the ability to adapt to dynamic environments. We have tested this hypothesis with a purpose-built model, also introduced in this paper. Experiments have been performed in static and dynamic environments to illustrate these effects. The results demonstrate the necessity of valency, and of both learning and evolution as complementary processes, for adaptation to the environment.
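The interplay the paper hypothesizes can be caricatured in a few lines: genes supply only coarse behavioural biases, and a lifetime valency signal (here, plain reward) shapes the final responses, with evolution selecting genomes whose biases learn well. Everything below is a toy illustration, not the paper's model.

```python
import random

def lifetime(genome, env_rewards, steps=100, alpha=0.1):
    """Start from genetically under-specified weights, then let a
    valency signal refine the actual behavioural responses."""
    weights = list(genome)  # coarse innate biases
    fitness = 0.0
    for _ in range(steps):
        action = max(range(len(weights)), key=lambda a: weights[a])
        r = env_rewards[action] + random.gauss(0, 0.1)
        weights[action] += alpha * (r - weights[action])  # valency-driven learning
        fitness += r
    return fitness

# Evolution then selects genomes whose biases learn well.
population = [[random.random() for _ in range(3)] for _ in range(10)]
best = max(population, key=lambda g: lifetime(g, env_rewards=[0.2, 1.0, 0.5]))
```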

Collaboration


Dive into Gillian M. Hayes's collaborations.

Top Co-Authors

Yuval Marom (University of Edinburgh)
John Demiris (University of Edinburgh)
Ignasi Cos (University of Edinburgh)
Jay Bradley (University of Edinburgh)
Lola Cañamero (University of Hertfordshire)