Lorenzo Vannucci
Sant'Anna School of Advanced Studies
Publications
Featured research published by Lorenzo Vannucci.
IEEE-RAS International Conference on Humanoid Robots | 2014
Lorenzo Vannucci; Nino Cauli; Egidio Falotico; Alexandre Bernardino; Cecilia Laschi
Nowadays, increasingly complex robots are being designed. As the complexity of robots increases, traditional methods for robotic control fail, as the problem of finding the appropriate kinematic functions can easily become intractable. For this reason, the use of neuro-controllers, controllers based on machine learning methods, has risen at a rapid pace. Controllers of this kind are especially useful in the field of humanoid robotics, where it is common for the robot to perform hard tasks in a complex environment. A basic task for a humanoid robot is to visually pursue a target using eye-head coordination. In this work we present an adaptive model based on a neuro-controller for visual pursuit. This model allows the robot to follow a moving target with no delay (zero phase lag) using a predictor of the target motion. The results show that the new controller can reach a target placed at a starting distance of 1.2 meters in less than 100 control steps (1 second) and can follow a moving target at low to medium frequencies (0.3 to 0.5 Hz) with zero lag and a small position error (less than 4 cm along the main motion axis). The controller also has adaptive capabilities, being able to reach and follow a target even when some joints of the robot are clamped.
Proceedings of the 2015 Joint MORSE/VAO Workshop on Model-Driven Robot Software Engineering and View-based Software-Engineering | 2015
Georg Hinkel; Henning Groenda; Lorenzo Vannucci; Oliver Denninger; Nino Cauli; Stefan Ulbrich
Although robotics has made progress with respect to adaptability and interaction in natural environments, it cannot match the capabilities of biological systems. A promising approach to this problem is to create biologically plausible robot controllers that use detailed neuronal networks. However, this approach yields a large gap between the neuronal network and its connection to the robot on the one side and the technical implementation on the other. Existing approaches neglect to bridge this gap between the disciplines and their different abstraction layers, instead hand-crafting the simulations manually. This makes tight technical integration cumbersome and error-prone, impairing round-trip validation and academic advancement. Our approach maps the problem to model-driven engineering techniques and defines a domain-specific language (DSL) for integrating biologically plausible neuronal networks into robot control algorithms. It provides different levels of abstraction and sets an interface standard for integration. Our approach is implemented in the Neurorobotics Platform (NRP) of the Human Brain Project (HBP). Its practical applicability is validated in a minimalist experiment inspired by the Braitenberg vehicles, based on the simulation of a four-wheeled Husky robot controlled by a neuronal network.
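A hypothetical sketch of the abstraction such a DSL provides: a declared "transfer function" links a named neuron population to a robot actuator, so the glue code no longer has to be hand-crafted. The decorator name, registry, and signatures below are illustrative, not the NRP's actual API.

```python
# Illustrative DSL-style registry: each decorated function translates a
# neuron population's firing rate into one actuator command. Names are
# hypothetical, not the NRP's actual transfer-function API.

TRANSFER_FUNCTIONS = []

def neuron_to_robot(population, actuator):
    """Register a function translating a population's firing rate (Hz)
    into a command for one robot actuator."""
    def decorator(fn):
        TRANSFER_FUNCTIONS.append((population, actuator, fn))
        return fn
    return decorator

@neuron_to_robot(population="left_sensors", actuator="right_wheel")
def braitenberg_cross(rate):
    # Braitenberg-vehicle-style contralateral coupling, clamped to [0, 1]
    return min(1.0, rate / 100.0)

def control_step(firing_rates):
    """Evaluate every registered transfer function for one simulation step."""
    return {actuator: fn(firing_rates[population])
            for population, actuator, fn in TRANSFER_FUNCTIONS}

commands = control_step({"left_sensors": 50.0})
```

The point of the declarative form is that the same specification can be checked, visualized, or regenerated for different simulators, which is what the model-driven approach buys over hand-written glue code.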
Frontiers in Neurorobotics | 2017
Egidio Falotico; Lorenzo Vannucci; Alessandro Ambrosano; Ugo Albanese; Stefan Ulbrich; Juan Camilo Vasquez Tieck; Georg Hinkel; Jacques Kaiser; Igor Peric; Oliver Denninger; Nino Cauli; Murat Kirtay; Arne Roennau; Gudrun Klinker; Axel Von Arnim; Luc Guyot; Daniel Peppicelli; Pablo Martínez-Cañada; Eduardo Ros; Patrick Maier; Sandro Weber; Manuel J. Huber; David A. Plecher; Florian Röhrbein; Stefan Deser; Alina Roitberg; Patrick van der Smagt; Rüdiger Dillman; Paul Levi; Cecilia Laschi
Combined efforts in the fields of neuroscience, computer science, and biology have made it possible to design biologically realistic models of the brain based on spiking neural networks. For a proper validation of these models, an embodiment in a dynamic and rich sensory environment, where the model is exposed to a realistic sensory-motor task, is needed. Because of the complexity of these brain models, which at the current stage cannot meet real-time constraints, it is not possible to embed them in a real-world task; rather, the embodiment has to be simulated as well. While adequate tools exist to simulate either complex neural networks or robots and their environments, so far no tool makes it easy to establish communication between brain and body models. The Neurorobotics Platform is a new web-based environment that aims to fill this gap by offering scientists and technology developers a software infrastructure allowing them to connect brain models to detailed simulations of robot bodies and environments and to use the resulting neurorobotic systems for in silico experimentation. To simplify the workflow and reduce the level of required programming skill, the platform provides editors for the specification of experimental sequences and conditions, environments, robots, and brain–body connectors. In addition, a variety of existing robots and environments are provided. This work presents the architecture of the first release of the Neurorobotics Platform, developed in subproject 10 "Neurorobotics" of the Human Brain Project (HBP). At the current state, the Neurorobotics Platform allows researchers to design and run basic experiments in neurorobotics using simulated robots and simulated environments linked to simplified versions of brain models.
We illustrate the capabilities of the platform with three example experiments: a Braitenberg task implemented on a mobile robot, a sensory-motor learning task based on a robotic controller, and a visual tracking task embedding a retina model on the iCub humanoid robot. These use cases allow us to assess the applicability of the Neurorobotics Platform to robotic tasks as well as to neuroscientific experiments.
IEEE-RAS International Conference on Humanoid Robots | 2015
Lorenzo Vannucci; Alessandro Ambrosano; Nino Cauli; Ugo Albanese; Egidio Falotico; Stefan Ulbrich; Lars Pfotzer; Georg Hinkel; Oliver Denninger; Daniel Peppicelli; Luc Guyot; Axel Von Arnim; Stefan Deser; Patrick Maier; Rüdiger Dillman; Gudrun Klinker; Paul Levi; Alois Knoll; Marc-Oliver Gewaltig; Cecilia Laschi
Developing neuro-inspired computing paradigms that mimic nervous system function is an emerging field of research that fosters our understanding of the biological system and targets technical applications in artificial systems. The computational power of simulated brain circuits makes them a very promising tool for the development of brain-controlled robots. Early phases of robotic controller development make extensive use of simulators, as they are easy, fast, and cheap tools. However, to develop robotic controllers that encompass brain models, a tool that includes both neural simulation and physics simulation has been missing. Such a tool would require the capability of orchestrating and synchronizing both simulations as well as managing the exchange of data between them. The Neurorobotics Platform (NRP) aims at filling this gap through an integrated software toolkit enabling an experimenter to design and execute a virtual experiment with a simulated robot using customized brain models. As a use case for the NRP, the iCub robot has been integrated into the platform and connected to a spiking neural network. In particular, visual tracking experiments have been conducted to demonstrate the potential of such a platform.
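The orchestration requirement can be illustrated with a toy closed loop that advances a brain simulator and a physics simulator in lockstep with a shared timestep, exchanging sensor and motor data every step. Both classes below are stand-ins, not NRP components.

```python
# Minimal sketch of the orchestration problem described above: advance a
# brain simulator and a physics simulator in lockstep with a shared
# timestep, exchanging sensor and motor data every step.

class ToyBrain:
    """Stand-in neural controller: leaky integrator of its input."""
    def __init__(self):
        self.drive = 0.0

    def step(self, dt, sensor_input):
        self.drive += dt * (sensor_input - self.drive)
        return self.drive  # motor command

class ToyWorld:
    """Stand-in physics: position integrates the motor command."""
    def __init__(self):
        self.position = 0.0

    def step(self, dt, motor_command):
        self.position += dt * motor_command
        return self.position  # next sensory reading

def run_closed_loop(brain, world, dt=0.02, steps=100):
    sensor = 0.0
    for _ in range(steps):
        # Both simulations advance by the same dt, then data is exchanged
        motor = brain.step(dt, 1.0 - sensor)  # drive toward position 1.0
        sensor = world.step(dt, motor)
    return world.position

final_position = run_closed_loop(ToyBrain(), ToyWorld())
```

In a real setup, each `step` call would delegate to a neural simulator and a physics engine respectively; the hard part the platform solves is keeping the two clocks synchronized while the data exchange happens.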
IEEE International Conference on Biomedical Robotics and Biomechatronics | 2016
Lorenzo Vannucci; Silvia Tolu; Egidio Falotico; Paolo Dario; Henrik Hautop Lund; Cecilia Laschi
Two main classes of reflexes relying on the vestibular system are involved in the stabilization of the human gaze: the vestibulocollic reflex (VCR), which stabilizes the head in space, and the vestibulo-ocular reflex (VOR), which stabilizes the visual axis to minimize retinal image motion. The VOR works in conjunction with the opto-kinetic reflex (OKR), a visual feedback mechanism for moving the eye at the same speed as the observed scene. Together they keep the image stationary on the retina. In this work we present the first complete model of gaze stabilization based on the coordination of the VCR, VOR, and OKR. The model, inspired by neuroscientific cerebellar theories, is provided with learning and adaptation capabilities based on internal models. Tests on a simulated humanoid platform confirm the effectiveness of our approach.
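As a rough illustration of the reflex coordination described above (not the paper's cerebellar learning model), the eye command can be seen as the sum of a vestibular feedforward term (VOR) and a visual feedback term (OKR), with a separate neck command (VCR); all gains are illustrative.

```python
import math

# Toy sketch of the reflex coordination: gains and signal conventions
# are illustrative; the paper's model adds cerebellar-style learning
# and internal models on top of this structure.

def vor(head_velocity, gain=1.0):
    """Vestibulo-ocular reflex: counter-rotate the eye against head motion."""
    return -gain * head_velocity

def okr(retinal_slip, gain=0.5):
    """Opto-kinetic reflex: visual feedback on residual image motion."""
    return -gain * retinal_slip

def vcr(head_velocity, gain=0.8):
    """Vestibulocollic reflex: neck command stabilizing the head in space."""
    return -gain * head_velocity

def eye_command(head_velocity, retinal_slip):
    # Vestibular feedforward (VOR) plus visual feedback (OKR)
    return vor(head_velocity) + okr(retinal_slip)

# With an ideal unit-gain VOR, a sinusoidal head disturbance leaves
# zero residual retinal slip even before the OKR contributes.
slips = []
for k in range(100):
    head_vel = math.sin(0.1 * k)
    eye_vel = eye_command(head_vel, retinal_slip=0.0)
    slips.append(head_vel + eye_vel)
```

In practice the VOR gain is imperfect and frequency-dependent, which is exactly why the OKR feedback term and the adaptive internal models are needed.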
Conference on Biomimetic and Biohybrid Systems | 2016
Alessandro Ambrosano; Lorenzo Vannucci; Ugo Albanese; Murat Kirtay; Egidio Falotico; Pablo Martínez-Cañada; Georg Hinkel; Jacques Kaiser; Stefan Ulbrich; Paul Levi; Christian A. Morillas; Alois Knoll; Marc-Oliver Gewaltig; Cecilia Laschi
The ‘red-green’ pathway of the retina is classically recognized as one of the retinal mechanisms allowing humans to gather color information from light, by combining information from L-cones and M-cones in an opponent way. The precise retinal circuitry that allows the opponency process to occur is still uncertain, but it is known that signals from L-cones and M-cones, having a widely overlapping spectral response, contribute with opposite signs. In this paper, we simulate the red-green opponency process using a retina model based on linear-nonlinear analysis to characterize context adaptation and exploiting an image-processing approach to simulate the neural responses in order to track a moving target. Moreover, we integrate this model within a visual pursuit controller implemented as a spiking neural network to guide eye movements in a humanoid robot. Tests conducted in the Neurorobotics Platform confirm the effectiveness of the whole model. This work is the first step towards a bio-inspired smooth pursuit model embedding a retina model using spiking neural networks.
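The opponency stage can be sketched in linear-nonlinear (LN) style: a linear combination of overlapping L- and M-cone responses with opposite signs, followed by a static rectifying nonlinearity. The cone weights below are illustrative, not taken from the retina model used in the paper.

```python
# Hedged sketch of an L/M opponent channel in linear-nonlinear (LN)
# style: a linear stage with opposite-sign cone contributions, then a
# static rectifying nonlinearity. Cone weights are illustrative.

def cone_responses(r, g, b):
    """Crude L- and M-cone activations from an RGB pixel, reflecting
    their widely overlapping spectral sensitivity (both respond to red
    and to green)."""
    L = 0.7 * r + 0.3 * g
    M = 0.3 * r + 0.7 * g
    return L, M

def red_green_opponent(r, g, b):
    """'Red-ON / green-OFF' channel: L and M contribute with opposite
    signs (the opponency), then half-wave rectification (the N stage)."""
    L, M = cone_responses(r, g, b)
    return max(0.0, L - M)
```

A saturated red pixel yields a positive response, a green one is rectified to zero, and an achromatic pixel cancels out; this cancellation is what lets the channel isolate a red target for tracking.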
Bioinspiration & Biomimetics | 2017
Lorenzo Vannucci; Egidio Falotico; Silvia Tolu; Vito Cacucciolo; Paolo Dario; Henrik Hautop Lund; Cecilia Laschi
Gaze stabilization is essential for clear vision; it is the combined effect of two reflexes relying on vestibular inputs: the vestibulocollic reflex (VCR), which stabilizes the head in space, and the vestibulo-ocular reflex (VOR), which stabilizes the visual axis to minimize retinal image motion. The VOR works in conjunction with the opto-kinetic reflex (OKR), a visual feedback mechanism that allows the eye to move at the same speed as the observed scene. Together they keep the image stationary on the retina. In this work, we implement on a humanoid robot a model of gaze stabilization based on the coordination of the VCR, VOR, and OKR. The model, inspired by neuroscientific cerebellar theories, is provided with learning and adaptation capabilities based on internal models. We present the results of three sets of experiments conducted on the SABIAN robot and on the iCub simulator, validating the robustness of the proposed control method. The first set of experiments focused on the controller's response to a set of disturbance frequencies along the vertical plane. The second shows the performance of the system under three-dimensional disturbances. The last set of experiments tested the capability of the proposed model to stabilize the gaze in locomotion tasks. The results confirm that the proposed model is beneficial in all cases, reducing the retinal slip (the velocity of the image on the retina) and keeping the orientation of the head stable.
International Conference on Advanced Robotics | 2015
Egidio Falotico; Lorenzo Vannucci; Nicola Di Lecce; Paolo Dario; Cecilia Laschi
Humans are able to track a moving visual target by generating voluntary smooth pursuit eye movements. The purpose of smooth pursuit eye movements is to minimize the target velocity projected onto the retina (retinal slip). This is not achievable by control based on negative feedback alone, due to the delays in visual information processing. In this paper we propose a model, suitable for robotic implementation, that integrates the main characteristics of visual feedback and predictive control of smooth pursuit. The model is composed of an inverse dynamics controller for the feedback control, a neural predictor for the anticipation of the target motion, and a weighted sum module that combines the two in a proper way. Our results, tested on a simulated eye model of our humanoid robot, show that this model can use prediction for zero-lag visual tracking, use feedback-based control for “unpredictable” target pursuit, and combine these two approaches, properly switching from one to the other depending on the target dynamics, in order to guarantee stable visual pursuit.
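The combination stage can be sketched as follows, assuming a simple confidence weight derived from the recent prediction error; this weighting rule is a simplifying stand-in, not the paper's mechanism, and the gains are illustrative.

```python
# Illustrative sketch of the model's structure: a feedback controller,
# a neural-predictor stand-in, and a weighting stage that shifts
# between them according to how predictable the target currently is.

def feedback_command(retinal_slip, gain=0.9):
    """Negative-feedback pursuit: act on the measured retinal slip."""
    return gain * retinal_slip

def predictive_command(predicted_velocity):
    """Anticipatory pursuit: match the predicted target velocity."""
    return predicted_velocity

def weighted_sum(retinal_slip, predicted_velocity, prediction_error,
                 error_scale=0.5):
    """Blend the two commands, trusting prediction more when it has
    recently been accurate (small prediction_error)."""
    w = 1.0 / (1.0 + prediction_error / error_scale)  # weight in (0, 1]
    return (w * predictive_command(predicted_velocity)
            + (1.0 - w) * feedback_command(retinal_slip))
```

With a perfect predictor (prediction error zero) the output is purely predictive, giving zero-lag tracking; as the prediction degrades, the command falls back smoothly to visual feedback.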
Frontiers in Neuroscience | 2017
Lorenzo Vannucci; Egidio Falotico; Cecilia Laschi
Connecting biologically inspired neural simulations to physical or simulated embodiments can be useful both in robotics, for the development of a new kind of bio-inspired controller, and in neuroscience, to test detailed brain models in complete action-perception loops. The aim of this work is to develop a fully spike-based, biologically inspired mechanism for the translation of proprioceptive feedback. The translation is achieved by implementing a computational model of the neural activity of type Ia and type II afferent fibers of muscle spindles, the primary source of proprioceptive information, which, in mammals, is regulated through fusimotor activation and provides the necessary adjustments during voluntary muscle contractions. As such, both static and dynamic γ-motoneuron activities are taken into account in the proposed model. Information from the actual proprioceptive sensors (i.e., motor encoders) is then used to simulate spindle contraction and relaxation, and therefore to drive the neural activity. To assess the feasibility of this approach, the model is implemented on the NEST spiking neural network simulator and on the SpiNNaker neuromorphic hardware platform, and tested on simulated and physical robotic platforms. The results demonstrate that the model can be used in both simulated and real-time robotic applications to translate encoder values into biologically plausible neural activity. Thus, this model provides a completely spike-based building block, suitable for neuromorphic platforms, that will enable the development of sensory-motor closed loops which could include neural simulations of areas of the central nervous system or of low-level reflexes.
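The translation idea can be sketched as follows; the rate equations, constants, and joint-to-muscle mapping are hypothetical stand-ins for the paper's spindle model, and a real implementation would emit spikes through NEST or SpiNNaker rather than return rates.

```python
# Illustrative sketch of translating encoder readings into muscle
# spindle afferent rates. Constants and equations are hypothetical
# stand-ins, not the paper's model.

def encoder_to_stretch(angle, angle_min, angle_max):
    """Normalize an encoder angle into [0, 1] muscle stretch,
    assuming a monotonic joint-to-muscle mapping."""
    return (angle - angle_min) / (angle_max - angle_min)

def spindle_rates(stretch, stretch_velocity,
                  gamma_static=1.0, gamma_dynamic=1.0):
    """Return (Ia, II) firing rates in Hz. Type Ia is length- and
    velocity-sensitive and modulated by dynamic gamma drive; type II
    is length-sensitive and modulated by static gamma drive."""
    ia = max(0.0, gamma_dynamic * 4.3 * stretch_velocity
             + gamma_static * 2.0 * stretch
             + 10.0)  # background rate
    ii = max(0.0, gamma_static * 13.5 * stretch + 10.0)
    return ia, ii

# A mid-range, stationary joint produces moderate afferent activity
ia_rate, ii_rate = spindle_rates(encoder_to_stretch(0.5, 0.0, 1.0), 0.0)
```

These rates would then parameterize spike generators (e.g. Poisson sources) in the neural simulator, closing the sensory side of the action-perception loop.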
Conference on Biomimetic and Biohybrid Systems | 2016
Lorenzo Vannucci; Egidio Falotico; Silvia Tolu; Paolo Dario; Henrik Hautop Lund; Cecilia Laschi
Two main classes of reflexes relying on the vestibular system are involved in the stabilization of the human gaze: the vestibulocollic reflex (VCR), which stabilizes the head in space, and the vestibulo-ocular reflex (VOR), which stabilizes the visual axis to minimize retinal image motion. Together they keep the image stationary on the retina.