Network

External collaborations at the country level.

Hotspot

Research topics where Alessandro Roncone is active.
Publication


Featured research published by Alessandro Roncone.


Robotics: Science and Systems | 2016

A Cartesian 6-DoF gaze controller for humanoid robots

Alessandro Roncone; Ugo Pattacini; Giorgio Metta; Lorenzo Natale

In robotic systems with moving cameras, control of gaze allows for image stabilization, tracking, and attention switching. Proper integration of these capabilities lets the robot exploit the kinematic redundancy of the oculomotor system to improve tracking performance and extend the field of view, while at the same time stabilizing vision to reduce image blur induced by the robot's own movements. Gaze may be driven not only by vision but also by other sensors (e.g. inertial sensors or motor encoders) that carry information about the robot's own movement. Humanoid robots have sophisticated oculomotor systems and usually mount inertial devices, and are therefore an ideal platform to study this problem. We present a complete architecture for gaze control of a humanoid robot. Our system is able to control the neck and the eyes in order to track a 3D Cartesian fixation point in space. The redundancy of the kinematic problem is exploited to implement additional behaviors, namely passive gaze stabilization, saccadic movements, and the vestibulo-ocular reflex. We implement this framework on the iCub's head, which is equipped with a 3-DoF neck and a 3-DoF eye system and includes an inertial unit that provides feedback on the acceleration and angular speed of the head. The framework presented in this work can be applied to any robot equipped with an anthropomorphic head. In addition, we provide an open-source, modular implementation, which has already been ported to other robotic platforms.
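
As an illustration of the redundancy resolution such a controller needs, the sketch below shows a generic damped-least-squares gaze step with a null-space secondary objective. It is a minimal sketch, not the paper's controller: forward_fixation, jacobian, and the gains are placeholders the caller must supply.

```python
import numpy as np

def gaze_step(q, x_target, forward_fixation, jacobian,
              damping=1e-2, q_preferred=None, k_null=0.1):
    """One velocity-level step driving the fixation point to x_target.

    q                : joint angles of the neck+eye chain, shape (n,)
    x_target         : desired 3D fixation point, shape (3,)
    forward_fixation : callable q -> 3D fixation point (placeholder)
    jacobian         : callable q -> (3, n) fixation-point Jacobian
    """
    J = jacobian(q)
    e = x_target - forward_fixation(q)        # Cartesian tracking error
    # Damped least-squares (singularity-robust) resolution of the task.
    dq = J.T @ np.linalg.solve(J @ J.T + damping * np.eye(3), e)
    if q_preferred is not None:
        # Use the kinematic redundancy: push toward a preferred posture
        # in the null space of the gaze task.
        N = np.eye(q.shape[0]) - np.linalg.pinv(J) @ J
        dq += k_null * N @ (q_preferred - q)
    return dq
```

The null-space term is where behaviors such as gaze stabilization can live without disturbing the primary fixation task.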


IEEE-RAS International Conference on Humanoid Robots | 2014

3D stereo estimation and fully automated learning of eye-hand coordination in humanoid robots

Sean Ryan Fanello; Ugo Pattacini; Ilaria Gori; Vadim Tikhanoff; Marco Randazzo; Alessandro Roncone; Francesca Odone; Giorgio Metta

This paper deals with the problem of 3D stereo estimation and eye-hand calibration in humanoid robots. We first show how to implement a complete 3D stereo vision pipeline, enabling online and real-time eye calibration. We then introduce a new formulation for the problem of eye-hand coordination. We developed a fully automated procedure that does not require human supervision. The end-effector of the humanoid robot is automatically detected in the stereo images, providing large amounts of training data for learning the vision-to-kinematics mapping. We report exhaustive experiments using different machine learning techniques; we show that a mixture of linear transformations can achieve the highest accuracy in the shortest amount of time, while guaranteeing real-time performance. We demonstrate the application of the proposed system in two typical robotic scenarios: (1) object grasping and tool use; (2) 3D scene reconstruction. The platform of choice is the iCub humanoid robot.
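
The "mixture of linear transformations" idea can be illustrated with a short sketch: cluster the visual inputs, then fit one affine vision-to-kinematics map per cluster. This is an assumed, simplified reading of the approach (k-means plus per-cluster least squares), not the authors' implementation.

```python
import numpy as np

def fit_mixture_of_linear_maps(X, Y, n_clusters=5, iters=20, seed=0):
    """X: (m, d_in) visual features; Y: (m, d_out) kinematic targets."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), n_clusters, replace=False)].astype(float)
    for _ in range(iters):                       # plain k-means on inputs
        labels = np.linalg.norm(X[:, None] - centers[None], axis=2).argmin(axis=1)
        for k in range(n_clusters):
            if np.any(labels == k):
                centers[k] = X[labels == k].mean(axis=0)
    Xh = np.hstack([X, np.ones((len(X), 1))])    # homogeneous coordinates
    maps = []
    for k in range(n_clusters):
        idx = labels == k
        if not np.any(idx):                      # empty cluster: zero map
            maps.append(np.zeros((X.shape[1] + 1, Y.shape[1])))
            continue
        W, *_ = np.linalg.lstsq(Xh[idx], Y[idx], rcond=None)
        maps.append(W)                           # affine map for cluster k
    return centers, maps

def predict(x, centers, maps):
    k = np.linalg.norm(centers - x, axis=1).argmin()
    return np.append(x, 1.0) @ maps[k]
```

Because each local map is a closed-form least-squares fit, training stays fast even with the large amounts of automatically collected data the paper describes.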


PLOS ONE | 2016

Peripersonal Space and Margin of Safety around the Body: Learning Visuo-Tactile Associations in a Humanoid Robot with Artificial Skin

Alessandro Roncone; Matej Hoffmann; Ugo Pattacini; Luciano Fadiga; Giorgio Metta

This paper investigates a biologically motivated model of peripersonal space through its implementation on a humanoid robot. Guided by the present understanding of the neurophysiology of the fronto-parietal system, we developed a computational model inspired by the receptive fields of polymodal neurons identified, for example, in brain areas F4 and VIP. The experiments on the iCub humanoid robot show that the peripersonal space representation i) can be learned efficiently and in real-time via a simple interaction with the robot, ii) can lead to the generation of behaviors like avoidance and reaching, and iii) can contribute to understanding the biological principle of motor equivalence. More specifically, with respect to i) the present model contributes a hypothesis about the learning mechanisms underlying peripersonal space. In relation to point ii), we show how a relatively simple controller can exploit the learned receptive fields to generate either avoidance or reaching of an incoming stimulus, and for iii) we show how the robot can select arbitrary body parts as the controlled end-point of an avoidance or reaching movement.
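
A minimal sketch of the avoidance-versus-reaching duality in ii): the same peripersonal-space activation drives the selected body part away from or toward a stimulus by flipping a sign. The receptive-field shape and gains below are invented for illustration; the paper learns them from visuo-tactile data.

```python
import numpy as np

def pps_activation(distance, max_range=0.45):
    """Toy receptive field: activation grows as the stimulus approaches.
    (The paper learns this mapping; this linear ramp is an assumption.)"""
    return float(np.clip(1.0 - distance / max_range, 0.0, 1.0))

def control_velocity(x_bodypart, x_stimulus, gain=0.2, reach=False):
    """Velocity for the selected body part (3D points, meters).
    reach=False moves away from the stimulus; reach=True moves toward it."""
    direction = x_stimulus - x_bodypart
    dist = np.linalg.norm(direction)
    if dist < 1e-9:
        return np.zeros(3)
    sign = 1.0 if reach else -1.0                # the sign flip
    return sign * gain * pps_activation(dist) * direction / dist
```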


International Conference on Robotics and Automation | 2017

Transparent role assignment and task allocation in human robot collaboration

Alessandro Roncone; Olivier Mangin; Brian Scassellati

Collaborative robots represent a clear added value to manufacturing, as they promise to increase productivity and improve working conditions in such environments. Although modern robotic systems have become safe and reliable enough to operate close to human workers on a day-to-day basis, the workload is still skewed toward a limited contribution from the robot's side, and a significant cognitive load is allotted to the human. We believe the transition from robots as recipients of human instruction to robots as capable collaborators hinges on the implementation of transparent systems, where mental models about the task are shared between peers and the human partner is freed from the responsibility of taking care of both actors. In this work, we implement a transparent task planner that can be deployed in realistic, near-future applications. The proposed framework provides basic reasoning capabilities for role assignment and task allocation, and it interfaces with the human partner at the level of abstraction they are most comfortable with. The system is readily available to non-expert users and programmable with high-level commands in an intuitive interface. Our results demonstrate an overall improvement in terms of completion time, as well as a reduced cognitive load for the human partner.
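
As a toy illustration of role assignment and task allocation (not the paper's planner), the sketch below greedily assigns each subtask to whichever agent is expected to finish it sooner, keeping the assignment explicit so a human partner could inspect or override it. Task names and durations are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    name: str
    busy_until: float = 0.0
    durations: dict = field(default_factory=dict)  # task -> est. seconds

def allocate(tasks, agents):
    """Greedy allocation: tasks arrive in precedence order; each goes to
    the agent with the earliest estimated finish time."""
    plan = []
    for task in tasks:
        best = min(agents,
                   key=lambda a: a.busy_until + a.durations.get(task, 1e9))
        start = best.busy_until
        best.busy_until = start + best.durations[task]
        plan.append((task, best.name, start, best.busy_until))
    return plan

# Hypothetical flat-pack assembly subtasks and duration estimates.
human = Agent("human", durations={"fetch leg": 10, "screw": 20})
robot = Agent("robot", durations={"fetch leg": 25, "screw": 15})
for task, who, t0, t1 in allocate(["fetch leg", "screw"], [human, robot]):
    print(f"{task!r} -> {who} [{t0:.0f}s, {t1:.0f}s]")
```

Exposing the plan as an explicit, human-readable assignment is the "transparency" ingredient: the shared mental model is a data structure both partners can read.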


IEEE/RSJ International Conference on Intelligent Robots and Systems | 2015

Learning peripersonal space representation through artificial skin for avoidance and reaching with whole body surface

Alessandro Roncone; Matej Hoffmann; Ugo Pattacini; Giorgio Metta

With robots leaving factory environments and entering less controlled domains, possibly sharing living space with humans, safety needs to be guaranteed. To this end, some form of awareness of their body surface and the space surrounding it is desirable. In this work, we present a unique method that lets a robot learn a distributed representation of the space around its body (or peripersonal space) by exploiting a whole-body artificial skin and physical contact with the environment. Every taxel (tactile element) has a visual receptive field anchored to it. Starting from an initially blank state, the distance of every object entering this receptive field is visually perceived and recorded, together with whether the object eventually contacted that particular skin area. This gives rise to a set of probabilities, updated incrementally, that carry information about the likelihood of particular events in the environment contacting a particular set of taxels. The learned representation naturally serves the purpose of predicting contacts with the whole body of the robot, which is of clear behavioral relevance. Furthermore, we devised a simple avoidance controller that is triggered by this representation, thus endowing the robot with a “margin of safety” around its body. Finally, simply reversing the sign in this controller gives rise to simple “reaching” for objects in the robot's vicinity, which automatically proceeds with the most activated (closest) body part.
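
The incremental learning rule described above can be sketched directly: per taxel, keep per-distance-bin counts of objects seen and objects that went on to touch the skin, and read contact probabilities off the ratio. Bin layout and ranges below are assumptions, not the paper's parameters.

```python
import numpy as np

class TaxelReceptiveField:
    """Discretized receptive field of one taxel (illustrative sketch)."""

    def __init__(self, max_range=0.45, n_bins=9):
        self.edges = np.linspace(0.0, max_range, n_bins + 1)
        self.seen = np.zeros(n_bins)          # objects observed per bin
        self.hits = np.zeros(n_bins)          # ...that ended in contact

    def _bin(self, distance):
        i = np.searchsorted(self.edges, distance, side="right") - 1
        return int(np.clip(i, 0, len(self.seen) - 1))

    def update(self, trajectory_distances, contacted):
        """Record one approaching object: every distance at which it was
        perceived, plus whether it eventually touched this taxel."""
        for d in trajectory_distances:
            b = self._bin(d)
            self.seen[b] += 1
            if contacted:
                self.hits[b] += 1

    def p_contact(self, distance):
        """Learned likelihood that an object at this distance will hit."""
        b = self._bin(distance)
        return self.hits[b] / self.seen[b] if self.seen[b] else 0.0
```

Starting from the "blank state" in the abstract, every interaction only increments counters, so learning is incremental and runs in real time.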


IEEE-RAS International Conference on Humanoid Robots | 2014

Gaze stabilization for humanoid robots: A comprehensive framework

Alessandro Roncone; Ugo Pattacini; Giorgio Metta; Lorenzo Natale

Gaze stabilization is an important requisite for humanoid robots. Previous work on this topic has focused on the integration of inertial and visual information. Little attention has been given to a third component: the knowledge the robot has about its own movement. In this work we propose a comprehensive framework for gaze stabilization in a humanoid robot, focusing on the problem of compensating for disturbances induced in the cameras by self-generated movements of the robot. We employ two separate signals for stabilization: (1) an anticipatory term obtained from the velocity commands sent to the joints while the robot moves autonomously; (2) a feedback term from the onboard gyroscope, which compensates for unpredicted external disturbances. We first provide the mathematical formulation to derive the forward and the differential kinematics of the fixation point of the stereo system. We then test our method on the iCub robot. We show that the stabilization consistently reduces the residual optical flow during the movement of the robot and in the presence of external disturbances. We also demonstrate that proper integration of the neck DoF is crucial to achieve correct stabilization.
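
A minimal sketch of the two-signal scheme, assuming placeholder Jacobians that map joint velocities to camera angular velocity: the anticipatory term is predicted from the commanded body-joint velocities, while the gyroscope feedback handles whatever rotation remains uncancelled in closed loop. None of the symbols or gains below are from the paper.

```python
import numpy as np

def stabilize_eyes(J_body, J_eyes, dq_body_cmd, gyro_rate, k_fb=0.8):
    """Eye-joint velocities that counter-rotate the cameras.

    J_body      : (3, nb) body-joint velocities -> camera angular velocity
    J_eyes      : (3, ne) eye-joint velocities  -> camera angular velocity
    dq_body_cmd : (nb,) velocities currently commanded to the body joints
    gyro_rate   : (3,) residual head rotation measured by the gyro (rad/s)
    """
    w_ff = J_body @ dq_body_cmd       # anticipated self-generated rotation
    # In closed loop the gyro sees only what the feedforward term failed
    # to cancel, so the two corrections can simply be summed.
    w_cancel = -(w_ff + k_fb * gyro_rate)
    # Least-squares mapping of the desired counter-rotation to the eyes.
    return np.linalg.pinv(J_eyes) @ w_cancel
```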


ACM/IEEE International Conference on Human-Robot Interaction | 2018

Compact Real-time Avoidance on a Humanoid Robot for Human-robot Interaction

Dong Hai Phuong Nguyen; Matej Hoffmann; Alessandro Roncone; Ugo Pattacini; Giorgio Metta

With robots leaving factories and entering less controlled domains, possibly sharing space with humans, safety is paramount and multimodal awareness of the body surface and the surrounding environment is fundamental. Taking inspiration from peripersonal space representations in humans, we present a framework on a humanoid robot that dynamically maintains such a protective safety zone, composed of the following main components: (i) a human 2D keypoint estimation pipeline employing a deep-learning-based algorithm, extended here into 3D using disparity; (ii) a distributed peripersonal space representation around the robot's body parts; (iii) a reaching controller that incorporates all obstacles entering the robot's safety zone on the fly into the task. Pilot experiments demonstrate that an effective safety margin between the robot's and the human's body parts is kept. The proposed solution is flexible and versatile, since the safety zone around individual robot and human body parts can be selectively modulated; here we demonstrate stronger avoidance of the human head compared to the rest of the body. Our system works in real time and is self-contained, using onboard cameras only and no external sensory equipment.
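
The selective modulation can be sketched as a per-body-part gain on the protective zone, so that e.g. the head triggers avoidance at a larger distance than a hand. The gains, radii, and keypoint format below are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Assumed per-part expansion gains on the protective zone.
PART_GAIN = {"head": 2.0, "torso": 1.2, "hand": 1.0}

def repulsive_velocity(x_robot, keypoints_3d, base_radius=0.3, k=0.5):
    """Sum of repulsive contributions from all 3D human keypoints.

    x_robot      : 3D position of the controlled robot body part
    keypoints_3d : dict part-name -> 3D position from the stereo pipeline
    """
    v = np.zeros(3)
    for part, x_h in keypoints_3d.items():
        radius = base_radius * PART_GAIN.get(part, 1.0)
        d_vec = x_robot - x_h
        d = np.linalg.norm(d_vec)
        if 1e-9 < d < radius:
            # Repulsion grows linearly as the keypoint enters the zone.
            v += k * (radius - d) / radius * d_vec / d
    return v
```

Feeding this repulsive term into the reaching controller lets avoidance and the nominal task coexist, with the head's larger radius producing the stronger avoidance reported above.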


International Conference on Social Robotics | 2016

Physiologically Inspired Blinking Behavior for a Humanoid Robot

Hagen Lehmann; Alessandro Roncone; Ugo Pattacini; Giorgio Metta

Blinking behavior is an important part of human nonverbal communication: it signals the psychological state of the social partner. In this study, we implemented different blinking behaviors for a humanoid robot with pronounced physical eyes. The blinking patterns implemented were either statistical or based on human physiological data. In an online study, we investigated the influence of the different behaviors on how human users perceive the robot, using the Godspeed questionnaire. Our results showed that, in the condition with human-like blinking behavior, the robot was perceived as more intelligent than with no blinking or statistical blinking. As we will argue, this finding represents a starting point for the design of a ‘holistic’ social robotic behavior.
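
Purely as an illustration of the two conditions, the sketch below schedules blinks from either a memoryless (statistical) interval distribution or a log-normal stand-in for a human-like one; the paper's actual distributions are not reproduced here, and all parameters are assumptions.

```python
import random

def next_blink_interval(mode="physiological"):
    """Seconds until the next blink (toy distributions)."""
    if mode == "statistical":
        return random.expovariate(1 / 4.0)     # mean ~4 s, memoryless
    # Log-normal stand-in for a human-like inter-blink distribution.
    return random.lognormvariate(1.2, 0.5)     # median ~3.3 s

def blink_schedule(duration_s, mode="physiological"):
    """Blink timestamps over a session of the given length."""
    t, times = 0.0, []
    while True:
        t += next_blink_interval(mode)
        if t > duration_s:
            return times
        times.append(round(t, 2))

print(blink_schedule(30.0))
```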


Applications of Natural Language to Data Bases | 2018

Toward Human-Like Robot Learning

Sergei Nirenburg; Marjorie McShane; Stephen Beale; Peter Wood; Brian Scassellati; Olivier Mangin; Alessandro Roncone

We present an implemented robotic system that learns elements of its semantic and episodic memory through language interaction with its human users. This human-like learning can happen because the robot can extract, represent, and reason over the meaning of the user's natural language utterances. The application domain is collaborative assembly of flat-pack furniture. This work facilitates a bi-directional grounding: of implicit robotic skills in explicit ontological and episodic knowledge, and of ontological symbols in the robot's real-world actions. In so doing, it provides an example of successful integration of robotic and cognitive architectures.
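
A toy sketch of the semantic/episodic split (in no way the paper's cognitive architecture): semantic memory stores concept meanings extracted from utterances, episodic memory stores time-stamped events, and recall combines the two. All names and records are hypothetical.

```python
import time
from dataclasses import dataclass, field

@dataclass
class Memory:
    semantic: dict = field(default_factory=dict)   # concept -> meaning
    episodic: list = field(default_factory=list)   # (t, event) records

    def learn_concept(self, name, meaning):
        """e.g. the user says: 'A bracket is the small L-shaped part.'"""
        self.semantic[name] = meaning

    def record_event(self, event):
        self.episodic.append((time.time(), event))

    def recall(self, concept):
        """Combine what the concept means with when it was encountered."""
        meaning = self.semantic.get(concept)
        episodes = [e for _, e in self.episodic if concept in e]
        return meaning, episodes

m = Memory()
m.learn_concept("bracket", "small L-shaped part")
m.record_event("attached bracket to leg A")
print(m.recall("bracket"))
```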


International Conference on Robotics and Automation | 2014

Automatic kinematic chain calibration using artificial skin: Self-touch in the iCub humanoid robot

Alessandro Roncone; Matej Hoffmann; Ugo Pattacini; Giorgio Metta

Collaboration


An overview of Alessandro Roncone's collaborations.

Top Co-Authors

Giorgio Metta, Istituto Italiano di Tecnologia
Ugo Pattacini, Istituto Italiano di Tecnologia
Matej Hoffmann, Istituto Italiano di Tecnologia
Lorenzo Natale, Istituto Italiano di Tecnologia
Dong Hai Phuong Nguyen, Istituto Italiano di Tecnologia
Hagen Lehmann, Istituto Italiano di Tecnologia