Eris Chinellato
University of Leeds
Publications
Featured research published by Eris Chinellato.
International Journal of Humanoid Robotics | 2004
Antonio Morales; Eris Chinellato; Andrew H. Fagg; Angel P. del Pobil
Manipulation skills are a key issue for a humanoid robot. Here, we are interested in a vision-based grasping system able to deal with previously unknown objects in real time and in an intelligent manner. Starting from a number of feasible candidate grasps, we focus on the problem of predicting their reliability using the knowledge acquired in previous grasping experiences. A set of visual features which take into account physical properties that can affect the stability and reliability of a grasp are defined. A humanoid robot obtains its grasping experience by repeating a large number of grasping actions on different objects. An experimental protocol is established in order to classify grasps according to their reliability. Two prediction/classification strategies are defined which allow the robot to predict the outcome of a grasp by analyzing only its visual features. The results indicate that these strategies are adequate to predict the reliability of a grasp and to generalize to different objects.
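The abstract does not spell out the feature definitions or prediction strategies; the following is a minimal Python sketch of the kind of visual grasp features such an approach could rely on. The three example features (contact-triangle area, centroid offset, perimeter) and all names are hypothetical illustrations, not the paper's actual feature set.

```python
# Illustrative sketch only: hypothetical visual features for a three-finger
# grasp candidate, computed from finger contact points in the image and the
# object silhouette centroid. Not the paper's actual feature definitions.
import numpy as np

def grasp_features(contact_points: np.ndarray, centroid: np.ndarray) -> np.ndarray:
    """Toy feature vector for a three-finger grasp candidate.

    contact_points: (3, 2) image coordinates of the planned finger contacts.
    centroid:       (2,)   centroid of the object silhouette.
    """
    a, b, c = contact_points
    # Spread of the grasp: area of the triangle formed by the three contacts.
    triangle_area = 0.5 * abs((b[0] - a[0]) * (c[1] - a[1])
                              - (b[1] - a[1]) * (c[0] - a[0]))
    # Offset between the grasp centre and the object centroid; a large offset
    # tends to produce torques that destabilise the grasp.
    grasp_center = contact_points.mean(axis=0)
    centroid_offset = np.linalg.norm(grasp_center - centroid)
    # Perimeter of the contact triangle, a rough proxy for hand aperture.
    perimeter = (np.linalg.norm(a - b) + np.linalg.norm(b - c)
                 + np.linalg.norm(c - a))
    return np.array([triangle_area, centroid_offset, perimeter])
```

Feature vectors of this kind, labelled by the observed outcome of past grasps, would then feed the prediction/classification strategies described in the abstract.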
Journal of Experimental Psychology: Human Perception and Performance | 2012
Anna Stenzel; Eris Chinellato; Maria A. Tirado Bou; Angel P. del Pobil; Markus Lappe; Roman Liepelt
In human-human interactions, corepresenting a partner's actions is crucial to successfully adjust and coordinate actions with others. Current research suggests that action corepresentation is restricted to interactions between human agents, facilitating social interaction with conspecifics. In this study, we investigated whether action corepresentation, as measured by the social Simon effect (SSE), is present when we share a task with a real humanoid robot. Further, we tested whether the believed humanness of the robot's functional principle modulates the extent to which robotic actions are corepresented. We described the robot to participants either as functioning in a biologically inspired, human-like way or in a purely deterministic, machine-like manner. The SSE was present in the human-like but not in the machine-like robot condition. These findings suggest that humans corepresent the actions of nonbiological robotic agents when they start to attribute human-like cognitive processes to the robot. Our findings provide novel evidence for top-down modulation effects on action corepresentation in human-robot interaction situations.
Journal of Vision | 2007
Anthony Singhal; Jody C. Culham; Eris Chinellato; Melvyn A. Goodale
Previous kinematic research suggests that visually guided grasping employs an accurate real-time control system in the dorsal stream, whereas delayed grasping relies on less accurate stored information derived by the perceptual system in the ventral stream. We explored these ideas in two experiments combining visually guided and delayed grasping with auditory tasks involving perception-based imagery and semantic memory. In both experiments, participants were cued to grasp three-dimensional objects of varying sizes. During visually guided trials, objects were visible during the interval between the cue and movement onset. During delayed trials, objects were occluded at the time of the cue. In Experiment 1, the second task required participants to listen to object names and vocally respond if the objects were of a particular shape. In Experiment 2, participants studied a paired-associates list prior to testing and then performed cued recall while grasping. The results of these experiments showed that there was reciprocal interference on both tasks, which was consistently greater during delayed grasping. Experiment 2 showed that the introduction of the second task resulted in larger grip apertures during delayed grasping. This supports the idea that delayed grasping involves processing of stored perception-based information that shares resources with cross-modal tasks involving imagery and memory.
IEEE Transactions on Autonomous Mental Development | 2011
Eris Chinellato; Marco Antonelli; Beata J. Grzyb; A.P. del Pobil
Primates often perform coordinated eye and arm movements, contextually fixating and reaching towards nearby objects. This combination of looking and reaching to the same target is used by infants to establish an implicit visuomotor representation of the peripersonal space, useful for both oculomotor and arm motor control. In this work, taking inspiration from such behavior and from primate visuomotor mechanisms, a shared sensorimotor map of the environment, built on a radial basis function framework, is configured and trained by the coordinated control of eye and arm movements. Computational results confirm that the approach is well suited to the problem at hand and to its implementation on a real humanoid robot. By exploratory gazing and reaching actions, either free or goal-based, the artificial agent learns to perform direct and inverse transformations between stereo vision, oculomotor, and joint-space representations. The integrated sensorimotor map, which allows the peripersonal space to be contextually represented through different vision and motor parameters, is never made explicit but rather emerges through the interaction of the agent with the environment.
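A minimal sketch of the radial basis function idea, assuming (hypothetically) that the shared map relates a gaze configuration (pan, tilt, vergence) to arm joint angles learned from exploratory look-and-reach samples; it is only an illustration of the framework, not the authors' implementation.

```python
# Hypothetical RBF regression mapping a 3-D gaze vector to 4 arm joint angles,
# trained on synthetic "exploration" samples standing in for robot experience.
import numpy as np

class RBFMap:
    def __init__(self, centers: np.ndarray, sigma: float):
        self.centers = centers          # (K, d_in) fixed RBF centres
        self.sigma = sigma
        self.weights = None             # (K, d_out) learned linear readout

    def _phi(self, x: np.ndarray) -> np.ndarray:
        # Gaussian activations of each centre for inputs x of shape (N, d_in).
        d2 = ((x[:, None, :] - self.centers[None, :, :]) ** 2).sum(axis=2)
        return np.exp(-d2 / (2.0 * self.sigma ** 2))

    def fit(self, x: np.ndarray, y: np.ndarray) -> None:
        # Least-squares fit of the readout weights on exploration samples.
        self.weights, *_ = np.linalg.lstsq(self._phi(x), y, rcond=None)

    def predict(self, x: np.ndarray) -> np.ndarray:
        return self._phi(x) @ self.weights

# Usage: gaze (pan, tilt, vergence) -> arm joint angles, with a stand-in
# nonlinear "kinematics" generating the training targets.
rng = np.random.default_rng(0)
gaze = rng.uniform(-1, 1, size=(500, 3))
joints = np.tanh(gaze @ rng.normal(size=(3, 4)))
rbf = RBFMap(centers=rng.uniform(-1, 1, size=(50, 3)), sigma=0.5)
rbf.fit(gaze, joints)
predicted_joints = rbf.predict(gaze[:5])
```

The inverse transformation (joints to gaze) could be learned the same way by swapping inputs and outputs, which is the sense in which one shared set of basis functions can support both direct and inverse mappings.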
The Journal of Neuroscience | 2010
Annalisa Bosco; Rossella Breveglieri; Eris Chinellato; Claudio Galletti; Patrizia Fattori
Reaching and grasping an object is an action that can be performed in light as well as in darkness. Area V6A is a visuomotor area of the medial posterior parietal cortex involved in the control of reaching movements. It contains reaching neurons as well as neurons modulated by passive somatosensory and visual stimulations. In the present work we analyze the effect of visual feedback on the reaching activity of V6A neurons. Three macaques were trained to execute reaching movements in two conditions: in darkness, where only the reaching target was visible, and in full light, where the monkey also saw its own moving arm and the environment. Approximately 85% of V6A neurons (127/149) were significantly related to the task in at least one of the two conditions. The majority of task-related cells (69%) showed reach-related activity in both visual conditions, some were modulated only in light (15%), and others only in darkness (16%). The sight of the moving arm often dramatically changed a cell's response to arm movements: in some cases the reaching activity was enhanced, in others it was reduced or disappeared altogether. These neuronal properties may reflect differences in the degree to which cells are influenced by feedback control versus feedforward movement planning. On average, reach-related modulations were stronger in light than in darkness, a phenomenon similar to that observed in brain imaging experiments in the human medial posterior parietal cortex, a region likely homologous to macaque area V6A.
IEEE Transactions on Autonomous Mental Development | 2013
Maxime Petit; Stéphane Lallée; Jean-David Boucher; Grégoire Pointeau; Pierrick Cheminade; Dimitri Ognibene; Eris Chinellato; Ugo Pattacini; Ilaria Gori; Uriel Martinez-Hernandez; Hector Barron-Gonzalez; Martin Inderbitzin; Andre L. Luvizotto; Vicky Vouloutsi; Yiannis Demiris; Giorgio Metta; Peter Ford Dominey
One of the defining characteristics of human cognition is our outstanding capacity to cooperate. A central requirement for cooperation is the ability to establish a “shared plan”—which defines the interlaced actions of the two cooperating agents—in real time, and even to negotiate this shared plan during its execution. In the current research we identify the requirements for cooperation, extending our earlier work in this area. These requirements include the ability to negotiate a shared plan using spoken language, to learn new component actions within that plan based on visual observation and kinesthetic demonstration, and finally to coordinate all of these functions in real time. We present a cognitive system that implements these requirements, and demonstrate the system's ability to allow a Nao humanoid robot to learn a nontrivial cooperative task in real time. We further provide a concrete demonstration of how the real-time learning capability can be easily deployed on a different platform, in this case the iCub humanoid. The results are considered in the context of how the development of language in the human infant provides a powerful lever in the development of cooperative plans from lower-level sensorimotor capabilities.
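A hedged sketch of what a "shared plan" of interlaced actions might look like as a data structure; the representation and the toy negotiation step below are hypothetical, since in the paper plans are built and renegotiated through spoken language and demonstration rather than hard-coded structures.

```python
# Hypothetical shared-plan structure: an ordered list of actions, each assigned
# to one of the two cooperating agents, with a toy "negotiation" operation.
from dataclasses import dataclass, field

@dataclass
class Action:
    name: str
    agent: str            # "robot" or "human"

@dataclass
class SharedPlan:
    steps: list = field(default_factory=list)

    def add_step(self, name: str, agent: str) -> None:
        self.steps.append(Action(name, agent))

    def swap_roles(self, index: int) -> None:
        # Minimal stand-in for negotiation: reassign one step to the other agent.
        step = self.steps[index]
        step.agent = "human" if step.agent == "robot" else "robot"

# Example: an interlaced two-agent plan for a toy assembly task.
plan = SharedPlan()
plan.add_step("pick box", "human")
plan.add_step("hold box", "robot")
plan.add_step("insert toy", "human")
plan.swap_roles(1)   # renegotiate who holds the box during execution
```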
IEEE Transactions on Autonomous Mental Development | 2014
Marco Antonelli; Agostino Gibaldi; Frederik Beuth; Angel Juan Duran; Andrea Canessa; Manuela Chessa; Fabio Solari; Angel P. Del Pobil; Fred H. Hamker; Eris Chinellato; Silvio P. Sabatini
Reaching a target object in an unknown and unstructured environment is easily performed by human beings. However, designing a humanoid robot that executes the same task requires the implementation of complex abilities, such as identifying the target in the visual field, estimating its spatial location, and precisely driving the motors of the arm to reach it. While research usually tackles the development of such abilities individually, in this work we integrate a number of computational models into a unified framework, and demonstrate in a humanoid torso the feasibility of an integrated working representation of its peripersonal space. To achieve this goal, we propose a cognitive architecture that connects several models inspired by neural circuits of the visual, frontal and posterior parietal cortices of the brain. The outcome of the integration process is a system that allows the robot to create its internal model and its representation of the surrounding space by interacting with the environment directly, through a mutual adaptation of perception and action. The robot is eventually capable of executing a set of tasks, such as recognizing, gazing at, and reaching target objects, which can be performed separately or combined to support more structured and effective behaviors.
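For illustration only, a toy sketch of the integration idea: separate perceptual and motor modules reading from and writing to one shared peripersonal-space representation. Module names, interfaces, and the placeholder computations are hypothetical and do not reflect the project's actual architecture.

```python
# Hypothetical module pipeline sharing one workspace: recognition labels the
# target, localization estimates its 3-D position, and the reacher derives an
# arm command from the shared estimate. All computations are placeholders.
import numpy as np

class PeripersonalMap:
    """Shared workspace: target identity plus a 3-D estimate of its position."""
    def __init__(self):
        self.target_label = None
        self.target_xyz = None

class Recognizer:
    def update(self, workspace: PeripersonalMap, image_patch) -> None:
        workspace.target_label = "red_ball"          # stand-in for recognition

class Localizer:
    def update(self, workspace: PeripersonalMap, disparity: float) -> None:
        # Stand-in for a stereo-vision depth estimate (larger disparity = closer).
        workspace.target_xyz = np.array([0.3, 0.1, 1.0 / max(disparity, 1e-3)])

class Reacher:
    def act(self, workspace: PeripersonalMap) -> np.ndarray:
        # Stand-in arm command derived from the shared spatial estimate.
        return workspace.target_xyz * 0.9

workspace = PeripersonalMap()
Recognizer().update(workspace, image_patch=None)
Localizer().update(workspace, disparity=2.0)
command = Reacher().act(workspace)
```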
intelligent robots and systems | 2003
Antonio Morales; Eris Chinellato; Andrew H. Fagg; A.P. del Pobil
This paper deals with visually guided grasping of unmodeled objects by robots that exhibit adaptive behavior based on their previous experience. Nine features are proposed to characterize three-finger grasps. They are computed from the object image and the kinematics of the hand. Real experiments on a humanoid robot with a Barrett hand are carried out to provide experimental data. These data are employed by a classification strategy, based on the k-nearest neighbour estimation rule, to predict the reliability of a grasp configuration in terms of five different performance classes. Prediction results suggest the methodology is adequate.
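A minimal sketch of the classification step, assuming (as the abstract states) nine-dimensional grasp feature vectors labelled with one of five performance classes; the data below are synthetic placeholders, and scikit-learn's k-nearest-neighbour classifier stands in for the paper's estimation rule.

```python
# Hypothetical k-NN prediction of grasp reliability: nine features per grasp,
# five performance classes, synthetic training data as placeholders.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(1)
X_train = rng.normal(size=(200, 9))            # nine visual/kinematic features
y_train = rng.integers(0, 5, size=200)         # five reliability classes

knn = KNeighborsClassifier(n_neighbors=5)
knn.fit(X_train, y_train)

new_grasp = rng.normal(size=(1, 9))            # features of a candidate grasp
predicted_class = knn.predict(new_grasp)[0]    # predicted performance class
```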
IEEE Robotics & Automation Magazine | 2017
Nick Hawes; Christopher Burbridge; Ferdian Jovan; Lars Kunze; Bruno Lacerda; Lenka Mudrová; Jay Young; Jeremy L. Wyatt; Denise Hebesberger; Tobias Körtner; Rares Ambrus; Nils Bore; John Folkesson; Patric Jensfelt; Lucas Beyer; Alexander Hermans; Bastian Leibe; Aitor Aldoma; Thomas Faulhammer; Michael Zillich; Markus Vincze; Eris Chinellato; Muhannad Al-Omari; Paul Duckworth; Yiannis Gatsoulis; David C. Hogg; Anthony G. Cohn; Christian Dondrup; Jaime Pulido Fentanes; Tomas Krajnik
Thanks to the efforts of the robotics and autonomous systems community, the myriad applications and capacities of robots are ever increasing. There is increasing demand from end users for autonomous service robots that can operate in real environments for extended periods. In the Spatiotemporal Representations and Activities for Cognitive Control in Long-Term Scenarios (STRANDS) project (http://strandsproject.eu), we are tackling this demand head-on by integrating state-of-the-art artificial intelligence and robotics research into mobile service robots and deploying these systems for long-term installations in security and care environments. Our robots have been operational for a combined duration of 104 days over four deployments, autonomously performing end-user-defined tasks and traversing 116 km in the process. In this article, we describe the approach we used to enable long-term autonomous operation in everyday environments and how our robots are able to use their long run times to improve their own performance.
international symposium on neural networks | 2009
Beata J. Grzyb; Eris Chinellato; Grzegorz M. Wojcik; Wieslaw A. Kaminski
The separation ability and computational efficiency of Liquid State Machines depend on the neural model employed and on the connection density in the liquid column. A simple model of part of the mammalian visual system, consisting of one hypercolumn, was examined. The system was stimulated by two different input patterns, and the Euclidean distance, as well as the partial and global entropy of the liquid column responses, was calculated. Interesting insights could be drawn regarding the properties of the different neural models used in the liquid hypercolumn and the effect of connection density on the information representation capability of the system.
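A hedged sketch of the two separation measures named in the abstract, computed on hypothetical liquid-column responses represented as per-neuron firing-rate vectors; the spiking simulation of the hypercolumn itself is not reproduced here.

```python
# Hypothetical separation measures for a liquid column: Euclidean distance
# between the responses to two input patterns, and the Shannon entropy of the
# firing-rate distribution across the column. Responses are synthetic.
import numpy as np

def euclidean_separation(resp_a: np.ndarray, resp_b: np.ndarray) -> float:
    """Distance between the liquid states evoked by two input patterns."""
    return float(np.linalg.norm(resp_a - resp_b))

def response_entropy(resp: np.ndarray, bins: int = 16) -> float:
    """Shannon entropy of the firing-rate distribution across the column."""
    hist, _ = np.histogram(resp, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

# Example with synthetic responses of a 100-neuron hypercolumn.
rng = np.random.default_rng(2)
resp_pattern_1 = rng.gamma(2.0, 5.0, size=100)
resp_pattern_2 = rng.gamma(2.5, 5.0, size=100)
print(euclidean_separation(resp_pattern_1, resp_pattern_2))
print(response_entropy(resp_pattern_1))
```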