
Publications


Featured research published by Guido Schillaci.


International Journal of Social Robotics | 2013

Evaluating the Effect of Saliency Detection and Attention Manipulation in Human-Robot Interaction

Guido Schillaci; Saša Bodiroža; Verena V. Hafner

The ability to share attention with another individual is essential for intuitive interaction. Two relatively simple but important prerequisites for this, saliency detection and attention manipulation by the robot, are identified in the first part of the paper. By creating a saliency-based attentional model combined with a robot ego-sphere and by adopting attention manipulation skills, the robot can engage in an interaction with a human and start an interaction game including objects, as a first step towards joint attention. We set up an interaction experiment in which participants could physically interact with a humanoid robot equipped with mechanisms for saliency detection and attention manipulation. We tested our implementation in four combinations of activated parts of the attention system, which resulted in four different behaviours. Our aim was to identify those physical and behavioural characteristics that need to be emphasised when implementing attentive mechanisms in robots, and to measure the user experience when interacting with a robot equipped with such mechanisms. We adopted two techniques for evaluating saliency detection and attention manipulation in human-robot interaction: user experience, as measured by qualitative and quantitative questions in questionnaires, and proxemics, estimated from recorded videos of the interactions. The robot's level of interactiveness was found to be positively correlated with user experience factors such as excitement and robot factors such as lifelikeness and intelligence, suggesting that robots must give as much feedback as possible in order to increase the intuitiveness of the interaction, even when performing only attentive behaviours. This was also confirmed by the proxemics analysis: participants reacted more frantically when the interaction was perceived as less satisfying.
Improving the robot's feedback capability could increase user satisfaction and decrease the probability of unexpected or incomprehensible user movements. Finally, multi-modal interaction (through arm and head movements) increased the level of interactiveness perceived by participants. A positive correlation was also found between the elegance of robot movements and user satisfaction.
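
The saliency-driven attention mechanism combined with an ego-sphere, as described above, can be sketched as a winner-take-all selection over a saliency map with inhibition of return. The direction labels and saliency values below are hypothetical stand-ins; the real system derives saliency from camera input rather than a fixed dictionary.

```python
def most_salient(ego_sphere):
    """Pick the direction with the highest saliency value."""
    return max(ego_sphere, key=ego_sphere.get)

def attend(ego_sphere, n_shifts, inhibition=0.5):
    """Shift attention between salient spots: after attending a target,
    suppress its saliency (inhibition of return) so the gaze moves on."""
    sphere = dict(ego_sphere)  # work on a copy
    visited = []
    for _ in range(n_shifts):
        target = most_salient(sphere)
        visited.append(target)
        sphere[target] *= inhibition
    return visited

# Hypothetical saliency values for a few directions on the ego-sphere.
sphere = {"left": 0.9, "center": 0.6, "right": 0.8}
```

With these values, three attention shifts visit "left", then "right", then "center", since each attended direction is suppressed before the next selection.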


Human-Robot Interaction | 2011

Random movement strategies in self-exploration for a humanoid robot

Guido Schillaci; Verena V. Hafner

Motor babbling has been identified as a self-exploring behaviour adopted by infants and is fundamental for the development of more complex behaviours, self-awareness and social interaction skills. Here, we adopt this paradigm for the learning strategies of a humanoid robot, which maps its random arm movements to its head movements as determined by the perception of its own body. Finally, we analyse three random movement strategies and experimentally test on a humanoid robot how they affect the learning speed.
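
A minimal babbling loop can illustrate how such random movement strategies generate training data. The two-link planar arm, its link lengths, and the "uniform" versus "walk" strategies below are toy assumptions standing in for the robot's actual arm and self-perception, not the paper's exact setup.

```python
import math
import random

def forward_kinematics(q1, q2, l1=1.0, l2=1.0):
    """Toy two-link planar arm: joint angles -> hand position (a stand-in
    for the robot perceiving its own hand through vision)."""
    x = l1 * math.cos(q1) + l2 * math.cos(q1 + q2)
    y = l1 * math.sin(q1) + l2 * math.sin(q1 + q2)
    return x, y

def babble(n_samples, strategy="uniform", step=0.2, seed=0):
    """Collect (arm posture, observed hand position) training pairs.
    'uniform' draws independent random postures; 'walk' performs a small
    random walk in joint space (two of several possible strategies)."""
    rng = random.Random(seed)
    q1, q2 = 0.0, 0.0
    samples = []
    for _ in range(n_samples):
        if strategy == "uniform":
            q1 = rng.uniform(-math.pi, math.pi)
            q2 = rng.uniform(-math.pi, math.pi)
        else:
            q1 += rng.uniform(-step, step)
            q2 += rng.uniform(-step, step)
        samples.append(((q1, q2), forward_kinematics(q1, q2)))
    return samples

data = babble(100, strategy="walk")
```

The choice of strategy changes how the collected samples cover the workspace: a random walk yields locally correlated postures, while uniform draws spread samples over the whole joint space.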


HBU'12: Proceedings of the Third International Conference on Human Behavior Understanding | 2012

Internal simulations for behaviour selection and recognition

Guido Schillaci; Bruno Lara; Verena V. Hafner

In this paper, we present internal simulations as a methodology for human behaviour recognition and understanding. The internal simulations consist of pairs of inverse and forward models representing sensorimotor actions. The main advantage of this method is that it serves for action selection and prediction as well as for recognition. We present several human-robot interaction experiments in which the robot recognizes the behaviour of a human reaching for objects.
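
The recognition side of this idea (internally simulate each known behaviour and keep the model that best predicts the observation) can be sketched in a few lines. The one-dimensional "reach" and "withdraw" models below are hypothetical stand-ins for learned sensorimotor model pairs.

```python
def simulate(model, state, n_steps):
    """Roll a forward model out from a start state (internal simulation)."""
    trajectory = [state]
    for _ in range(n_steps):
        state = model(state)
        trajectory.append(state)
    return trajectory

def recognize(observed, models):
    """Return the behaviour whose internal simulation, started from the
    first observed state, best matches the whole observation."""
    errors = {}
    for name, model in models.items():
        predicted = simulate(model, observed[0], len(observed) - 1)
        errors[name] = sum(abs(p - o) for p, o in zip(predicted, observed))
    return min(errors, key=errors.get)

# Two toy behaviours over a scalar hand position:
# "reach" approaches a target at 1.0, "withdraw" decays towards 0.
models = {
    "reach": lambda s: s + 0.1 * (1.0 - s),
    "withdraw": lambda s: 0.9 * s,
}
observed = simulate(models["reach"], 0.2, 10)  # pretend this was observed
```

Here the same model pairs do double duty: run forward they select and predict actions, and compared against an observed trajectory they recognize it.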


Frontiers in Robotics and AI | 2016

Exploration Behaviors, Body Representations, and Simulation Processes for the Development of Cognition in Artificial Agents

Guido Schillaci; Verena V. Hafner; Bruno Lara

Sensorimotor control and learning are fundamental prerequisites for cognitive development in humans and animals. Evidence from the behavioural sciences and neuroscience suggests that motor and brain development are strongly intertwined with the experiential process of exploration, through which internal body representations are formed and maintained over time. In order to guide our movements, our brain must hold an internal model of our body and constantly monitor its configuration state. How can sensorimotor control using such low-level body representations enable the development of more complex cognitive and motor capabilities? Although a clear answer to this question has not yet been found, several studies suggest that processes of mental simulation of action-perception loops are likely to be executed in our brain and depend on internal body representations. The capability to re-enact sensorimotor experience might therefore represent a key mechanism behind the implementation of higher cognitive capabilities, such as behaviour recognition, arbitration and imitation, sense of agency, and self-other distinction. Addressed mainly to researchers on autonomous motor and mental development in artificial agents, this work aims at gathering the latest developments in the study of exploration behaviours, internal body representations, internal models, and mechanisms for internal sensorimotor simulation. Relevant studies in human and animal sciences are discussed and a parallel to similar investigations in robotics is presented.


Human-Robot Interaction | 2012

Coupled inverse-forward models for action execution leading to tool-use in a humanoid robot

Guido Schillaci; Verena V. Hafner; Bruno Lara

We propose a computational model based on inverse-forward model pairs for the simulation and execution of actions. The models are implemented on a humanoid robot and are used to control reaching actions with the arms. In the experimental setup, a tool has been attached to the left arm of the robot, extending its covered action space. The preliminary investigations carried out aim at studying how the use of tools modifies the body scheme of the robot. The system performs action simulations before the actual executions. For each of the arms, the predicted end-effector position is compared with the desired one, and the internal pair presenting the lowest error is selected for action execution. This allows the robot to decide whether to perform an action with its bare hand or with the hand holding the attached tool.
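
The selection step can be sketched as follows. The reachable-radius geometry, the identity forward model, and the two-pair setup are simplifying assumptions made for illustration; the robot's actual models are learned, not hand-coded.

```python
import math

def make_pair(radius):
    """Hypothetical inverse-forward pair for an arm with a given reach:
    the inverse model proposes the closest reachable point as a 'motor
    command'; the forward model predicts the resulting end-effector pose."""
    def inverse(target):
        d = math.hypot(*target)
        if d <= radius:
            return target
        scale = radius / d
        return (target[0] * scale, target[1] * scale)
    return {"inverse": inverse, "forward": lambda command: command}

def reach_error(pair, target):
    """Simulate the reach internally and return the predicted error."""
    command = pair["inverse"](target)
    predicted = pair["forward"](command)
    return math.dist(predicted, target)

def select_arm(pairs, target):
    """Pick the inverse-forward pair with the lowest simulated error."""
    errors = {name: reach_error(p, target) for name, p in pairs.items()}
    return min(errors, key=errors.get)

# The tool on the left arm extends its reach beyond the bare right hand.
pairs = {"right_hand": make_pair(1.0), "left_hand_tool": make_pair(1.5)}
```

For a target beyond the bare hand's reach but within the tool's, the simulated error of the tool pair is lower, so the tool arm is selected before any movement is executed.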


Intelligent Robots and Systems | 2010

An adaptive probabilistic approach to goal-level imitation learning

Haris Dindo; Guido Schillaci

Imitation learning has been recognized as a promising technique to teach robots advanced skills. It is based on the idea that robots could learn new behaviors by observing and imitating the behaviors of other skilled actors. We propose an adaptive probabilistic graphical model which copes with three core issues of any imitative behavior: observation, representation and reproduction of skills. Our model, Growing Hierarchical Dynamic Bayesian Network (GHDBN), is hierarchical (i.e. able to characterize structured behaviors at different levels of abstraction), and growing (i.e. skills are learned or updated incrementally - and at each level of abstraction - every time a new observation sequence is available). A GHDBN, once trained, is able to recognize skills being observed and to reproduce them by exploiting the generative power of the model. The system has been successfully tested in simulation, and initial tests have been conducted on a NAO humanoid robot platform.


Joint IEEE International Conferences on Development and Learning and Epigenetic Robotics (ICDL-EpiRob) | 2014

Online learning of visuo-motor coordination in a humanoid robot. A biologically inspired model

Guido Schillaci; Verena V. Hafner; Bruno Lara

Coordinating vision with movements of the body is a fundamental prerequisite for the development of complex motor and cognitive skills. Visuo-motor coordination seems to rely on processes that map spatial vision onto patterns of muscular contraction. In this paper, we investigate the formation and the coupling of sensory maps in the humanoid robot Aldebaran Nao. We propose a biologically inspired model for coding internal representations of sensorimotor experience that can be fed with data coming from different motor and sensory modalities, such as visual, auditory and tactile. The model is inspired by the self-organising properties of areas in the human brain, whose topologies are structured by the information produced through the interaction of the individual with the external world. In particular, the Dynamic Self-Organising Maps (DSOMs) proposed by Rougier et al. [1] are adopted together with a Hebbian paradigm for online and continuous learning on both static and dynamic data distributions. Results show how the humanoid robot improves the quality of its visuo-motor coordination over time, starting from an initial configuration in which it has no knowledge of how to visually follow its arm movements. Moreover, the plasticity of the proposed model is tested. At a certain point in the developmental timeline, damage to the system is simulated by adding a perturbation to the motor command used for training the model. The performance of the visuo-motor coordination initially degrades as a consequence, and then improves again as the proposed model adapts to the new mapping.
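
A minimal scalar version of the DSOM update rule (after Rougier et al.) is sketched below. The one-dimensional data, map size, and elasticity value are arbitrary illustration choices, not the robot's actual visuo-motor spaces; the key property is that the learning rate depends on the current fit to the data, which is what lets the map re-adapt after the simulated damage described above.

```python
import math
import random

def dsom_step(weights, positions, x, eps=0.1, elasticity=2.0):
    """One Dynamic Self-Organising Map update: each unit's effective
    learning rate scales with its distance to the input, so the map
    keeps adapting when the data distribution changes."""
    winner = min(range(len(weights)), key=lambda i: abs(x - weights[i]))
    d_win = abs(x - weights[winner])
    for i in range(len(weights)):
        if d_win == 0.0:
            h = 1.0 if i == winner else 0.0  # input already matched exactly
        else:
            grid = abs(positions[i] - positions[winner])
            h = math.exp(-(grid ** 2) / (elasticity ** 2 * d_win ** 2))
        # elasticity-modulated, input-distance-scaled move towards x
        weights[i] += eps * abs(x - weights[i]) * h * (x - weights[i])
    return winner

# Train a 10-unit map online on a stream of scalar "sensory" samples.
rng = random.Random(1)
positions = list(range(10))
weights = [rng.random() for _ in positions]
for _ in range(2000):
    dsom_step(weights, positions, rng.random())
```

Unlike a classic SOM, there is no decaying learning-rate schedule: units that already fit the data barely move, while poorly fitting units stay plastic, which is what supports continuous, online learning.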


Archive | 2014

Sensorimotor learning and simulation of experience as a basis for the development of cognition in robotics

Guido Schillaci

State-of-the-art robots are still not properly able to learn from, adapt to, and react to unexpected circumstances, nor to operate autonomously and safely in uncertain environments. Researchers in developmental robotics address these issues by building artificial systems capable of acquiring motor and cognitive capabilities by interacting with their environment, inspired by human development. This thesis adopts a similar approach in identifying some of the basic behavioural components that may allow for the autonomous development of sensorimotor and social skills in robots. Here, sensorimotor interactions are investigated as a means for the acquisition of experience. Experiments on exploration behaviours for the acquisition of arm movements, tool-use and interactive capabilities are presented. The development of social skills is also addressed, in particular joint attention, the capability to share the focus of attention between individuals. Two prerequisites of joint attention are investigated: imperative pointing gestures and visual saliency detection. The established framework of internal models is adopted for coding sensorimotor experience in robots. In particular, inverse and forward models are trained with different configurations of low-level sensory and motor data generated by the robot through exploration behaviours, observed from a human demonstrator, or acquired through kinaesthetic teaching. The internal models framework allows the generation of simulations of sensorimotor cycles. This thesis also investigates how basic cognitive skills can be implemented in a humanoid robot by allowing it to recreate the perceptual and motor experience gathered in past interactions with the external world. In particular, simulation processes are used as a basis for implementing cognitive skills such as action selection, tool-use, behaviour recognition and self-other distinction.


Human-Robot Interaction | 2013

Is that me? Sensorimotor learning and self-other distinction in robotics

Guido Schillaci; Verena V. Hafner; Bruno Lara; Marc Grosjean

In order to have robots interact with other agents, it is important that they are able to recognize their own actions. The research reported here relates to the use of internal models for self-other distinction. We demonstrate how a humanoid robot, which acquires a sensorimotor scheme through self-exploration, can produce and predict simple trajectories that have particular characteristics. Comparing these predictions to incoming sensory information provides the robot with a basic tool for distinguishing between self and other.
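
The comparison at the heart of this distinction can be sketched as a prediction-error test. The linear forward model, the threshold, and the toy trajectories below are invented for illustration; in the paper the forward model is acquired through self-exploration on the real robot.

```python
def is_self(forward_model, commands, observations, threshold=0.1):
    """Label a trajectory as self-generated when the forward model's
    predictions stay close to the incoming sensory stream."""
    errors = [abs(forward_model(c) - o)
              for c, o in zip(commands, observations)]
    return sum(errors) / len(errors) < threshold

# Hypothetical forward model acquired through self-exploration:
# the sensory effect of a motor command is simply twice its value.
forward_model = lambda command: 2.0 * command

commands = [0.1, 0.2, 0.3]
own_motion = [2.0 * c for c in commands]            # matches predictions
other_motion = [2.0 * c + 0.5 for c in commands]    # someone else moving
```

Motion that the robot itself caused is well predicted by its own forward model, while externally caused motion accumulates prediction error and is classified as "other".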


Human-Robot Interaction | 2014

Learning hand-eye coordination for a humanoid robot using SOMs

Ivana Kajić; Guido Schillaci; Saša Bodiroža; Verena V. Hafner

Hand-eye coordination is an important motor skill acquired in infancy which precedes pointing behavior. Pointing facilitates social interactions by directing the attention of engaged participants. It is thus essential for the natural flow of human-robot interaction. Here, we attempt to explain how pointing emerges from the sensorimotor learning of hand-eye coordination in a humanoid robot. During a body-babbling phase with a random-walk strategy, the robot learned mappings of joints for different arm postures. The arm joint configurations were used to train biologically inspired models consisting of self-organising maps (SOMs). We show that such a model implemented on a robotic platform accounts for pointing behavior when a human presents objects out of reach of the robot's hand.
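
A classic SOM over babbled joint configurations, plus a best-matching-unit lookup, is enough to sketch the mechanism. The 2-D joint space, map size, and training schedule below are toy assumptions, not the paper's actual architecture.

```python
import math
import random

def som_train(data, n_units=5, epochs=50, lr=0.5, radius=1.5, seed=0):
    """Classic one-dimensional SOM: units self-organise to cover the
    distribution of babbled joint configurations."""
    rng = random.Random(seed)
    units = [list(rng.choice(data)) for _ in range(n_units)]
    for e in range(epochs):
        a = lr * (1 - e / epochs)                  # decaying learning rate
        r = max(radius * (1 - e / epochs), 0.5)    # shrinking neighbourhood
        for x in data:
            bmu = min(range(n_units), key=lambda i: math.dist(units[i], x))
            for i in range(n_units):
                h = math.exp(-((i - bmu) ** 2) / (2 * r ** 2))
                for d in range(len(x)):
                    units[i][d] += a * h * (x[d] - units[i][d])
    return units

def recall_posture(units, target):
    """Best-matching unit: the stored posture closest to the desired one.
    For an out-of-reach target this yields the nearest learned posture,
    i.e. an arm configuration oriented towards the target."""
    return min(units, key=lambda u: math.dist(u, target))

# Joint configurations gathered during body babbling (toy 2-D joint space).
rng = random.Random(2)
data = [(rng.random(), rng.random()) for _ in range(200)]
units = som_train(data)
```

Because the map can only recall postures from its learned repertoire, querying it with an unreachable target naturally produces a reach-like posture towards that target, which reads as pointing.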

Collaboration


Dive into Guido Schillaci's collaborations.

Top Co-Authors

Verena V. Hafner (Humboldt University of Berlin)
Bruno Lara (Universidad Autónoma del Estado de Morelos)
Antonio Pico (Humboldt University of Berlin)
Saša Bodiroža (Humboldt University of Berlin)
Bruno Lara-Guzmán (Universidad Autónoma del Estado de Morelos)
Esaú Escobar-Juárez (Universidad Autónoma del Estado de Morelos)
Jorge Hermosillo-Valadez (Universidad Autónoma del Estado de Morelos)
Pierre Letier (Université libre de Bruxelles)