
Publication


Featured research published by Aris Alissandrakis.


Systems, Man and Cybernetics | 2007

Correspondence Mapping Induced State and Action Metrics for Robotic Imitation

Aris Alissandrakis; Chrystopher L. Nehaniv; Kerstin Dautenhahn

This paper addresses the problem of body mapping in robotic imitation where the demonstrator and imitator may not share the same embodiment [degrees of freedom (DOFs), body morphology, constraints, affordances, and so on]. Body mappings are formalized using a unified (linear) approach via correspondence matrices, which allow one to capture partial, mirror symmetric, one-to-one, one-to-many, many-to-one, and many-to-many associations between various DOFs across dissimilar embodiments. We show how metrics for matching state and action aspects of behavior can be mathematically determined by such correspondence mappings, which may serve to guide a robotic imitator. The approach is illustrated and validated in a number of simulated 3-D robotic examples, using agents described by simple kinematic models and different types of correspondence mappings.
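The correspondence-matrix idea described in the abstract above can be sketched as follows. This is a minimal illustration only; the matrix values, dimensions, and variable names are assumptions made for the example, not taken from the paper:

```python
import numpy as np

# A correspondence matrix C maps a demonstrator's joint-state vector onto
# the joints of a dissimilarly embodied imitator (here 3 DOFs -> 4 DOFs).
demo_state = np.array([0.5, -0.2, 1.0])  # hypothetical 3-DOF posture (radians)

# Row i gives the weights of demonstrator DOFs driving imitator DOF i.
# A non-square C captures one-to-many / many-to-one associations; a
# negative entry gives a mirror-symmetric correspondence.
C = np.array([
    [1.0, 0.0, 0.0],   # one-to-one
    [0.0, 0.5, 0.5],   # many-to-one (blend of two demonstrator DOFs)
    [0.0, 0.5, 0.5],   # one-to-many (the same blend drives a second DOF)
    [-1.0, 0.0, 0.0],  # mirror-symmetric copy of demonstrator DOF 0
])

imitator_target = C @ demo_state

# A simple induced state metric: distance between the imitator's current
# posture and the correspondence-mapped demonstrator posture.
imitator_state = np.zeros(4)
state_error = np.linalg.norm(imitator_target - imitator_state)
```

Minimizing such an induced metric over candidate actions is one way a correspondence mapping can guide an imitator, as the abstract suggests.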


International Journal of Advanced Robotic Systems | 2007

Self-imitation and Environmental Scaffolding for Robot Teaching

Joe Saunders; Chrystopher L. Nehaniv; Kerstin Dautenhahn; Aris Alissandrakis

Imitative learning and learning by observation are social mechanisms that allow a robot to acquire knowledge from a human or another robot. However, to be able to obtain skills in this way, the robot faces many complex issues, one of which is that of finding solutions to the correspondence problem. Evolutionary predecessors to observational imitation may have been self-imitation, where an agent avoids the complexities of the correspondence problem by learning and replicating actions it has experienced through the manipulation of its body. We investigate how a robotic control and teaching system using self-imitation can be constructed with reference to psychological models of motor control and ideas from social scaffolding seen in animals. Within these scaffolded environments, sets of competencies can be built by constructing hierarchical state/action memory maps of the robot's interaction within that environment. The scaffolding process provides a mechanism to enable learning to be scaled up. The resulting system allows a human trainer to teach a robot new skills and modify skills that the robot may possess. Additionally, the system allows the robot to notify the trainer when it is being taught skills it already has in its repertoire and to direct and focus its attention and sensor resources on relevant parts of the skill being executed. We argue that these mechanisms may be a first step towards the transformation from self-imitation to observational imitation. The system is validated on a physical Pioneer robot that is taught using self-imitation to track, follow and point to a patterned object.
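A state/action memory map of the kind described above can be sketched roughly like this. The structure and names are assumptions for illustration, not the authors' implementation:

```python
# In self-imitation, the robot records state/action pairs it experienced
# while being moulded through a task, keyed by competence name, and can
# later replay the stored actions to reproduce the taught skill.
memory_map = {}  # competence name -> ordered list of (state, action) pairs

def record(competence, state, action):
    """Store one experienced state/action pair under a competence."""
    memory_map.setdefault(competence, []).append((state, action))

def replay(competence):
    """Yield the stored actions for a learned competence, in order."""
    for _state, action in memory_map.get(competence, []):
        yield action

# Hypothetical teaching episode for a "track object" competence:
record("track", state=(0.0, 0.4), action="turn_left")
record("track", state=(0.1, 0.2), action="forward")
actions = list(replay("track"))
```

A hierarchical version would nest competences (e.g. "lay table" referring to "grasp" and "place"), but the flat map above shows the core record/replay idea.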


Robot and Human Interactive Communication | 2006

Action, State and Effect Metrics for Robot Imitation

Aris Alissandrakis; Chrystopher L. Nehaniv; Kerstin Dautenhahn

This paper addresses the problem of body mapping in robotic imitation where the demonstrator and imitator may not share the same embodiment (degrees of freedom (DOFs), body morphology, constraints, affordances and so on). Body mappings are formalized using a unified (linear) approach via correspondence matrices, which allow one to capture partial, mirror symmetric, one-to-one, one-to-many, many-to-one and many-to-many associations between various DOFs across dissimilar embodiments. We show how metrics for matching state and action aspects of behaviour can be mathematically determined by such correspondence mappings, which may serve to guide a robotic imitator. The approach is illustrated in a number of examples, using agents described by simple kinematic models and different types of correspondence mappings. Also, focusing on aspects of displacement and orientation of manipulated objects, a selection of metrics is presented, towards a characterization of the space of effect metrics.


Computational Intelligence in Robotics and Automation | 2005

An Approach for Programming Robots by Demonstration: Generalization Across Different Initial Configurations of Manipulated Objects

Aris Alissandrakis; Chrystopher L. Nehaniv; Kerstin Dautenhahn; Joe Saunders

Imitation is a powerful learning tool that can be used by a robotic agent to socially learn new skills and tasks. One of the fundamental problems in imitation is the correspondence problem: how to map between the actions, states and effects of the model and imitator agents, when the embodiment of the agents is dissimilar. In our approach, the matching depends on different metrics and granularity. Focusing on object manipulation and arrangement demonstrated by a human, this paper presents Jabberwocky, a system that uses different metrics and granularity to produce action command sequences that, when executed by an imitating agent, can achieve corresponding effects (manipulandum absolute/relative position, displacement, rotation and orientation). Based on a single demonstration of an object manipulation task by a human and using a combination of effect metrics, the system is shown to produce correspondence solutions that are then performed by an imitating agent, generalizing with respect to different initial object positions and orientations in the imitator's workspace. Depending on the particular metrics and granularity used, the corresponding effects will differ (shown in examples), making the appropriate choice of metrics and granularity depend on the task and context.
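Effect metrics of the kind the abstract mentions (absolute/relative position, displacement, orientation) can be sketched as simple dissimilarity functions. The function names, weights and example values below are assumptions for illustration, not the Jabberwocky implementation:

```python
import math

def position_metric(demo_pos, imit_pos):
    """Euclidean distance between final object positions (absolute effect)."""
    return math.dist(demo_pos, imit_pos)

def displacement_metric(demo_start, demo_end, imit_start, imit_end):
    """Compare relative displacements, ignoring absolute placement."""
    demo_disp = [e - s for s, e in zip(demo_start, demo_end)]
    imit_disp = [e - s for s, e in zip(imit_start, imit_end)]
    return math.dist(demo_disp, imit_disp)

def orientation_metric(demo_angle, imit_angle):
    """Smallest angular difference between final orientations (radians)."""
    d = (imit_angle - demo_angle) % (2 * math.pi)
    return min(d, 2 * math.pi - d)

# The imitator reproduces the demonstrated displacement exactly, but in a
# different part of its workspace: the displacement metric is zero even
# though the absolute position metric is not.
demo = {"start": (0.0, 0.0), "end": (1.0, 2.0)}
imit = {"start": (5.0, 5.0), "end": (6.0, 7.0)}
d_disp = displacement_metric(demo["start"], demo["end"], imit["start"], imit["end"])
d_pos = position_metric(demo["end"], imit["end"])
```

Which metric counts as "similar enough" is exactly the task- and context-dependent choice the abstract points to: a table-laying task may care about relative displacement, a docking task about absolute position.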


Human-Robot Interaction | 2006

Evaluation of robot imitation attempts: comparison of the system's and the human's perspectives

Aris Alissandrakis; Chrystopher L. Nehaniv; Kerstin Dautenhahn; Joe Saunders

Imitation is a powerful learning tool when humans and robots interact in a social context. A series of experimental runs and a small pilot user study were conducted to evaluate the performance of a system designed for robot imitation. Performance assessments of similarity of imitative behaviours were carried out by machines and by humans: the system was evaluated quantitatively (from a machine-centric perspective) and qualitatively (from a human perspective) in order to study the reconciliation of these views. The experimental results presented here illustrate how the number of exceptions can be used as a performance measure by a robotic or software imitator of an object manipulation behaviour. (In this context, exceptions are events when the optimal displacement and/or rotation that minimize the dissimilarity metrics used to generate a corresponding imitative behaviour cannot be directly achieved in the particular context.) Results of the user study giving similarity judgments on imitative behaviours were used to examine how the quantitative measure of the number of exceptions (from a robot's perspective) corresponds to the qualitative evaluation of similarity (from a human's perspective) for the imitative behaviours generated by the Jabberwocky system. Results suggest that there is a good alignment between this quantitative system-centered assessment and the more qualitative human-centered assessment of imitative performance.


Intelligent Robots and Systems | 2010

Full-body gesture recognition using inertial sensors for playful interaction with small humanoid robot

Martin D. Cooney; Christian Becker-Asano; Takayuki Kanda; Aris Alissandrakis; Hiroshi Ishiguro

People like to play, and robotic technology offers the opportunity to interact with artifacts in new ways. Robots co-existing with humans in domestic and public environments are expected to behave as companions, also engaging in playful interaction. If a robot is small, we foresee that people will want to be able to pick it up and express their intentions playfully by hugging, shaking and moving it around in various ways. Such robots will need to recognize these gestures, which we call "full-body gestures" because they affect the robot's full body. Inertial sensors inside the robot could be used to detect these gestures, in order to avoid having to rely on external sensors in the environment. However, it is not obvious which gestures typically occur during play, and which of these can be reliably detected. We therefore investigate full-body gesture recognition using Sponge Robot, a small humanoid robot equipped with inertial sensors and designed for playful human-robot interaction.
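The inertial-sensing idea can be illustrated with a toy classifier. This is a deliberately crude sketch under assumed thresholds and labels, not the recognizer used with Sponge Robot:

```python
import statistics

def classify_gesture(accel_magnitudes, shake_threshold=4.0):
    """Return a coarse gesture label from a window of |acceleration| samples.

    High variance suggests vigorous motion (shaking); low variance with a
    steady magnitude suggests the robot is being held against the body.
    The threshold is an arbitrary assumption for this example.
    """
    var = statistics.pvariance(accel_magnitudes)
    if var > shake_threshold:
        return "shaking"
    return "held/hugged"

shaking_window = [2.0, 15.0, 1.0, 14.0, 3.0, 16.0]    # erratic samples
hugging_window = [10.1, 10.0, 10.2, 10.1, 10.0, 10.1]  # steady, near 1 g
```

A real system would use richer features and a trained classifier over many gesture classes; the point here is only that onboard inertial data alone, without external sensors, already separates some play gestures.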


Human-Robot Interaction | 2008

Human to robot demonstrations of routine home tasks: exploring the role of the robot's feedback

Nuno Otero; Aris Alissandrakis; Kerstin Dautenhahn; Chrystopher L. Nehaniv; Dag Sverre Syrdal; Kheng Lee Koay

In this paper, we explore some conceptual issues relevant for the design of robotic systems aimed at interacting with humans in domestic environments. More specifically, we study the role of the robot's feedback (positive or negative acknowledgment of understanding) on a human teacher's demonstration of a routine home task (laying a table). Both the human's and the system's perspectives are considered in the analysis and discussion of results from a human-robot user study, highlighting some important conceptual and practical issues. These include the lack of explicitness and consistency in people's demonstration strategies. Furthermore, we discuss the need to investigate design strategies to elicit people's knowledge about the task and also successfully advertise the robot's abilities in order to promote people's ability to provide appropriate demonstrations.


Computational Intelligence in Robotics and Automation | 2003

Synchrony and perception in robotic imitation across embodiments

Aris Alissandrakis; Chrystopher L. Nehaniv; Kerstin Dautenhahn

Social robotics opens up the possibility of individualized social intelligence in member robots of a community, and allows us to harness not only individual learning by the individual robot, but also the acquisition of new skills by observing other members of the community (robot, human, or virtual). We describe ALICE (Action Learning for Imitation via Correspondences between Embodiments), an implemented generic mechanism for solving the correspondence problem between differently embodied robots. ALICE enables a robotic agent to learn a behavioral repertoire suitable for performing a task by observing a model agent, possibly having a different type of body, different joints, a different number of degrees of freedom, etc. Previously we demonstrated that the character of imitation achieved will depend on the granularity of subgoal matching, and on the metrics used to evaluate success. In this work, we implement ALICE for simple robotic arm agents in simulation using various metrics for evaluating success according to actions, states, effects, or weighted combinations of these. We examine the roles of synchronization, looseness of perceptual match, and of proprioceptive matching by a series of experiments. As a complement to the social developmental aspects suggested by developmental psychology, our results show that synchronization and loose perceptual matching also allow for faster acquisition of behavioral competencies at low error rates. We also discuss the use of social learning mechanisms like ALICE for transmission of skills between robots, and give the first example of transmission of a skill through a chain of robots, despite differences in embodiment of the agents involved. This simple example demonstrates that by using social learning and imitation, cultural transmission is possible among robots, even heterogeneous groups of robots.
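The "weighted combinations" of action, state and effect metrics mentioned above can be sketched as a simple scoring rule. The weights, names and candidate values are assumptions for illustration, not ALICE's actual parameters:

```python
def combined_metric(action_err, state_err, effect_err,
                    w_action=0.2, w_state=0.3, w_effect=0.5):
    """Weighted combination of action, state and effect dissimilarities.

    Shifting the weights changes the character of the imitation: weighting
    effects favours reproducing outcomes, weighting actions favours
    reproducing the movements themselves.
    """
    return w_action * action_err + w_state * state_err + w_effect * effect_err

# Score hypothetical candidate imitative actions, each with its
# (action, state, effect) dissimilarities, and pick the best match.
candidates = {"a1": (0.9, 0.2, 0.1), "a2": (0.1, 0.8, 0.9)}
best = min(candidates, key=lambda k: combined_metric(*candidates[k]))
```

With effect-heavy weights, candidate "a1" wins despite its poor action match; an action-heavy weighting would instead prefer "a2". This is the granularity/metric dependence the abstract describes.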


International Journal of Social Robotics | 2014

Designing Enjoyable Motion-Based Play Interactions with a Small Humanoid Robot

Martin D. Cooney; Takayuki Kanda; Aris Alissandrakis; Hiroshi Ishiguro

Robots designed to co-exist with humans in domestic and public environments should be capable of interacting with people in an enjoyable fashion in order to be socially accepted. In this research, we seek to set up a small humanoid robot with the capability to provide enjoyment to people who pick up the robot and play with it by hugging, shaking and moving the robot in various ways. Inertial sensors inside a robot can capture how its body is moved when people perform such “full-body gestures”. Unclear is how a robot can recognize what people do during play, and how such knowledge can be used to provide enjoyment. People’s behavior is complex, and naïve designs for a robot’s behavior based only on intuitive knowledge from previous designs may lead to failed interactions. To solve these problems, we model people’s behavior using typical full-body gestures observed in free interaction trials, and devise an interaction design based on avoiding typical failures observed in play sessions with a naïve version of our robot. The interaction design is completed by investigating how a robot can provide “reward” and itself suggest ways to play during an interaction. We then verify experimentally that our design can be used to provide enjoyment during a playful interaction. By describing the process of how a small humanoid robot can be designed to provide enjoyment, we seek to move one step closer to realizing companion robots which can be successfully integrated into human society.


IEEE-RAS International Conference on Humanoid Robots | 2011

Interaction design for an enjoyable play interaction with a small humanoid robot

Martin D. Cooney; Takayuki Kanda; Aris Alissandrakis; Hiroshi Ishiguro

Robots designed to act as companions are expected to be able to interact with people in an enjoyable fashion. In particular, our aim is to enable small companion robots to respond in a pleasant way when people pick them up and play with them. To this end, we developed a gesture recognition system capable of recognizing play gestures which involve a person moving a small humanoid robot's full body ("full-body gestures"). However, such recognition by itself is not enough to provide a nice interaction. We find that interactions with an initial, naïve version of our system frequently fail. The question then becomes: what more is required? That is, what sort of design is required in order to create successful interactions? To answer this question, we analyze typical failures which occur and compile a list of guidelines. Then, we implement this model in our robot, proposing strategies for how a robot can provide "reward" and suggest goals for the interaction. Finally, we conduct a validation experiment. We find that our interaction design with "persisting intentions" can be used to establish an enjoyable play interaction.

Collaboration


Dive into Aris Alissandrakis's collaborations.

Top Co-Authors

Kerstin Dautenhahn
University of Hertfordshire

Joe Saunders
University of Hertfordshire

Yoshihiro Miyake
Tokyo Institute of Technology

Dag Sverre Syrdal
University of Hertfordshire