Chrystopher L. Nehaniv
University of Hertfordshire
Publications
Featured research published by Chrystopher L. Nehaniv.
human-robot interaction | 2006
Kerstin Dautenhahn; Mick L. Walters; Sarah Woods; Kheng Lee Koay; Chrystopher L. Nehaniv; A. Sisbot; Rachid Alami; Thierry Siméon
This paper presents the combined results of two studies that investigated how a robot should best approach and place itself relative to a seated human subject. Two live Human Robot Interaction (HRI) trials were performed involving a robot fetching an object that the human had requested, using different approach directions. Results of the trials indicated that most subjects disliked a frontal approach, except for a small minority of females, and most subjects preferred to be approached from either the left or right side, with a small overall preference for a right approach by the robot. Handedness and occupation were not related to these preferences. We discuss the results of the user studies in the context of developing a path planning system for a mobile robot.
Applied Bionics and Biomechanics | 2009
Kerstin Dautenhahn; Chrystopher L. Nehaniv; Michael L. Walters; Ben Robins; Hatice Kose-Bagci; Mike Blow
This paper provides a comprehensive introduction to the design of the minimally expressive robot KASPAR, which is particularly suitable for human-robot interaction studies. The robot combines a low-cost design using off-the-shelf components with a novel appearance inspired by a multi-disciplinary viewpoint, including comics design and Japanese Noh theatre. The design rationale of the robot and its technical features are described in detail. Three research studies that have used KASPAR extensively are then presented. Firstly, we present its application in robot-assisted play and therapy for children with autism. Secondly, we illustrate its use in human-robot interaction studies investigating the role of interaction kinesics and gestures. Lastly, we describe a study in the field of developmental robotics into computational architectures based on interaction histories for robot ontogeny. The three areas differ in how the robot is operated and in its role in social interaction scenarios. Each will be introduced briefly and examples of the results will be presented. Reflections on the specific design features of KASPAR that were important in these studies, and lessons learnt concerning the design of humanoid robots for social interaction, will also be discussed. An assessment of the robot in terms of the utility of its design for human-robot interaction experiments concludes the paper.
IEEE Transactions on Autonomous Mental Development | 2010
Angelo Cangelosi; Giorgio Metta; Gerhard Sagerer; Stefano Nolfi; Chrystopher L. Nehaniv; Kerstin Fischer; Jun Tani; Tony Belpaeme; Giulio Sandini; Francesco Nori; Luciano Fadiga; Britta Wrede; Katharina J. Rohlfing; Elio Tuci; Kerstin Dautenhahn; Joe Saunders; Arne Zeschel
This position paper proposes that the study of embodied cognitive agents, such as humanoid robots, can advance our understanding of the cognitive development of complex sensorimotor, linguistic, and social learning skills. This in turn will benefit the design of cognitive robots capable of learning to handle and manipulate objects and tools autonomously, to cooperate and communicate with other robots and humans, and to adapt their abilities to changing internal, environmental, and social conditions. Four key areas of research challenges are discussed, specifically issues related to understanding: 1) how agents learn and represent compositional actions; 2) how agents learn and represent compositional lexica; 3) the dynamics of social interaction and learning; and 4) how compositional action and language representations are integrated to bootstrap the cognitive system. The review of specific issues and progress in these areas is then translated into a practical roadmap based on a series of milestones. These milestones provide a possible set of cognitive robotics goals and test scenarios, thus acting as a research roadmap for future work on cognitive developmental robotics.
Lecture Notes in Computer Science | 2001
Alan F. Blackwell; Carol Britton; Anna L. Cox; Thomas R. G. Green; Corin A. Gurr; Gada F. Kadoda; Maria Kutar; Martin J. Loomes; Chrystopher L. Nehaniv; Marian Petre; Chris Roast; Chris P. Roe; Allan Wong; Richard M. Young
The Cognitive Dimensions of Notations framework was created to assist the designers of notational systems and information artifacts in evaluating their designs with respect to the impact that they will have on the users of those designs. The framework emphasizes the design choices available to such designers, including characterization of the users' activity, and the inevitable trade-offs that will occur between potential design options. The resulting framework has been under development for over 10 years and now has an active community of researchers devoted to it. This paper first introduces Cognitive Dimensions. It then summarizes current activity, especially the results of a one-day workshop devoted to Cognitive Dimensions in December 2000, and reviews the ways in which the framework applies to the field of Cognitive Technology.
congress on evolutionary computation | 2005
Daniel Polani; Chrystopher L. Nehaniv
The classical approach to using utility functions suffers from the drawback of having to design and tweak the functions on a case-by-case basis. Inspired by examples from the animal kingdom, the social sciences, and games, we propose empowerment, a rather universal function defined as the information-theoretic capacity of an agent's actuation channel. The concept applies to any sensorimotor apparatus. Empowerment as a measure reflects the properties of the apparatus as long as they are observable through the coupling of sensors and actuators via the environment. Using two simple experiments, we also demonstrate how empowerment influences sensor-actuator evolution.
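The central quantity in this abstract, the information-theoretic capacity of the agent's actuation channel, can be estimated for a small discrete channel with the standard Blahut-Arimoto algorithm. The sketch below is illustrative only; the toy channel and function names are assumptions, not the paper's implementation:

```python
import numpy as np

def channel_capacity(p_s_given_a, iters=200):
    """Blahut-Arimoto estimate (in bits) of the capacity of a discrete
    channel p(s|a), given as an array of shape [n_actions, n_sensor_states]."""
    P = np.asarray(p_s_given_a, dtype=float)
    p_a = np.full(P.shape[0], 1.0 / P.shape[0])    # start from uniform actions
    for _ in range(iters):
        p_s = p_a @ P                              # resulting sensor marginal
        # D(p(s|a) || p(s)) per action, treating 0*log(0) as 0
        with np.errstate(divide="ignore", invalid="ignore"):
            d = np.where(P > 0, P * np.log2(P / p_s), 0.0).sum(axis=1)
        p_a = p_a * np.exp2(d)                     # Blahut-Arimoto reweighting
        p_a /= p_a.sum()
    return float(p_a @ d)                          # I(A;S) at the fixed point

# A noiseless 2-action channel: each action reliably produces a distinct
# sensor state, so the empowerment of this toy actuator is 1 bit.
print(channel_capacity([[1.0, 0.0], [0.0, 1.0]]))  # → 1.0
# A channel whose actions have no observable effect carries 0 bits.
print(channel_capacity([[0.5, 0.5], [0.5, 0.5]]))  # → 0.0
```

The two toy channels mirror the paper's intuition: empowerment is high when actions have distinguishable sensory consequences, and zero when the environment erases their effects.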
human-robot interaction | 2006
Joe Saunders; Chrystopher L. Nehaniv; Kerstin Dautenhahn
Programming robots to carry out useful tasks is both a complex and non-trivial exercise. A simple and intuitive method to allow humans to train and shape robot behaviour is clearly a key goal in making this task easier. This paper describes an approach to this problem based on studies of social animals, in which two teaching strategies allow a human teacher to train a robot by moulding its actions within a carefully scaffolded environment. Within these environments, sets of competences can be built by constructing state/action memory maps of the robot's interaction with that environment. These memory maps are then polled using a k-nearest-neighbour algorithm to provide a generalised competence. We take a novel approach in building the memory models by allowing the human teacher to construct them hierarchically. This mechanism allows a human trainer to build and extend an action-selection mechanism through which new skills can be added to the robot's repertoire of existing competencies. These techniques are implemented on physical Khepera miniature robots and validated on a variety of tasks.
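As a rough illustration of the idea of polling a state/action memory map with a k-nearest-neighbour query: the class, names, and toy sensor task below are hypothetical, not the paper's implementation:

```python
import math
from collections import Counter

class MemoryMap:
    """Toy state/action memory map polled by k-NN (illustrative sketch)."""

    def __init__(self, k=3):
        self.k = k
        self.samples = []            # list of (state_vector, action) pairs

    def record(self, state, action):
        """Store one demonstrated state/action pair (the 'moulding' phase)."""
        self.samples.append((tuple(state), action))

    def act(self, state):
        """Generalise: majority action among the k nearest stored states."""
        nearest = sorted(self.samples,
                         key=lambda sa: math.dist(state, sa[0]))[:self.k]
        votes = Counter(action for _, action in nearest)
        return votes.most_common(1)[0][0]

# Teach a toy wall-avoidance competence; the sensor reading is a
# (normalised) distance to the wall.
mm = MemoryMap(k=3)
for d in (0.1, 0.2, 0.3):
    mm.record([d], "turn")       # close to the wall: turn away
for d in (0.8, 0.9, 1.0):
    mm.record([d], "forward")    # clear ahead: drive forward
print(mm.act([0.15]))  # → turn
print(mm.act([0.95]))  # → forward
```

A hierarchy, as described in the abstract, could then be built by letting one memory map's "action" select which lower-level map to poll next.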
european conference on artificial life | 2003
Tom Quick; Chrystopher L. Nehaniv; Kerstin Dautenhahn; Graham Roberts
We demonstrate the evolution of simple embodied Genetic Regulatory Networks (GRNs) as real-time control systems for robotic and software-based embodied Artificial Organisms, and present results from two experimental test-beds: homeostatic temperature regulation in an abstract software environment, and phototactic robot behaviour maximising exposure to light. The GRN controllers are continually coupled to the organisms’ environments throughout their lifetimes, and constitute the primary basis for the organisms’ behaviour from moment to moment. The environment in which the organisms are embodied is shown to play a significant role in the dynamics of the GRNs, and the behaviour of the organisms.
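A minimal sketch of a GRN-style controller continually coupled to its environment might look as follows; the sigmoidal update rule, names, and parameters are illustrative assumptions, not the model used in the paper:

```python
import numpy as np

def grn_step(expr, W, inputs, tau=0.2):
    """One update of a toy genetic regulatory network controller: each
    gene's expression relaxes toward a sigmoid of its weighted regulatory
    inputs (other genes plus environmental sensor values)."""
    drive = W @ np.concatenate([expr, inputs])   # regulatory influence
    target = 1.0 / (1.0 + np.exp(-drive))        # sigmoidal activation
    return expr + tau * (target - expr)          # smooth relaxation

rng = np.random.default_rng(1)
n_genes, n_sensors = 4, 2
W = rng.normal(size=(n_genes, n_genes + n_sensors))  # an evolvable 'genome'
expr = np.full(n_genes, 0.5)

# Couple the network to a (fixed) light-sensor reading each timestep;
# the last gene's expression level is read out as a motor command.
for _ in range(50):
    expr = grn_step(expr, W, inputs=np.array([0.9, 0.1]))
motor = expr[-1]
print(0.0 <= motor <= 1.0)  # → True: expression levels stay in [0, 1]
```

In an evolutionary setting, the weight matrix `W` would be the evolved genotype, while the lifetime dynamics above provide the moment-to-moment behaviour the abstract describes.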
Connection Science | 2006
Lars Olsson; Chrystopher L. Nehaniv; Daniel Polani
This article describes a developmental system based on information theory implemented on a real robot that learns a model of its own sensory and actuator apparatus. There is no innate knowledge regarding the modalities or representation of the sensory input and the actuators, and the system relies on generic properties of the robot’s world, such as piecewise smooth effects of movement on sensory changes. The robot develops the model of its sensorimotor system by first performing random movements to create an informational map of the sensors. Using this map, the robot then learns what effects the different possible actions have on the sensors. After this developmental process, the robot can perform basic visually guided movement.
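The "informational map of the sensors" idea can be illustrated with a standard information distance between discretised sensor streams: sensors carrying the same signal end up close together, unrelated sensors far apart. This sketch and its toy data are assumptions, not the article's implementation:

```python
import numpy as np
from collections import Counter

def entropy(seq):
    """Plug-in entropy estimate (in bits) of a discrete sequence."""
    n = len(seq)
    return -sum((c / n) * np.log2(c / n) for c in Counter(seq).values())

def information_distance(x, y):
    """Information metric d(X,Y) = H(X|Y) + H(Y|X), estimated from two
    discretised sensor streams of equal length."""
    hx, hy = entropy(x), entropy(y)
    hxy = entropy(list(zip(x, y)))   # joint entropy H(X,Y)
    return (hxy - hy) + (hxy - hx)

# Toy sensor streams gathered during 'random movements': s0 and s1 carry
# the same underlying signal; s2 is independent noise.
rng = np.random.default_rng(0)
s0 = list(rng.integers(0, 4, 1000))
s1 = s0[:]                          # perfectly correlated sensor
s2 = list(rng.integers(0, 4, 1000))
print(information_distance(s0, s1))  # ~0: same modality, grouped together
print(information_distance(s0, s2))  # large: unrelated sensors, far apart
```

Clustering sensors by such pairwise distances yields the kind of map from which the robot can then learn how its actions move activity across related sensors.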
european conference on artificial life | 2005
Daniel Polani; Chrystopher L. Nehaniv
The classical approach to using utility functions suffers from the drawback of having to design and tweak the functions on a case-by-case basis. Inspired by examples from the animal kingdom, the social sciences, and games, we propose empowerment, a rather universal function defined as the information-theoretic capacity of an agent's actuation channel. The concept applies to any sensorimotor apparatus. Empowerment as a measure reflects the properties of the apparatus as long as they are observable through the coupling of sensors and actuators via the environment.
robot and human interactive communication | 2006
Mike Blow; Kerstin Dautenhahn; Andrew Appleby; Chrystopher L. Nehaniv; David Lee
As robots enter everyday life and start to interact with ordinary people, the question of their appearance becomes increasingly important. Our perception of a robot can be strongly influenced by its facial appearance. Synthesizing relevant ideas from narrative art design, the psychology of face recognition, and recent HRI studies of robot faces, we discuss effects of the uncanny valley and the use of iconicity and its relationship to the self-other perceptive divide, as well as abstractness and realism, classifying existing designs along these dimensions. A new expressive HRI research robot called KASPAR is introduced, and the results of a preliminary study on human perceptions of robot expressions are discussed.