Egidio Falotico
Sant'Anna School of Advanced Studies
Publication
Featured research published by Egidio Falotico.
Intelligent Robots and Systems | 2012
Kenji Hashimoto; Hyun Jin Kang; Masashi Nakamura; Egidio Falotico; Hun-ok Lim; Atsuo Takanishi; Cecilia Laschi; Paolo Dario; Alain Berthoz
This paper describes a walking stabilization control for a humanoid robot on soft ground, based on gait analysis. There are many studies of gait analysis on hard ground, but few analyses of how humans walk on soft ground. We therefore conducted anthropometric measurements on soft ground using a motion capture system. By analyzing the experimental data, we obtained two findings. First, although there are no significant differences in step width and step length, step height tends to increase to avoid collision between the feet and the soft ground. Second, there are no significant differences in the lateral CoM trajectories, but the vertical CoM amplitude increases when walking on soft ground. Based on these findings, we developed a walking stabilization control that stabilizes the CoM motion in the lateral direction on soft ground. The proposed control is verified through experiments with the human-sized humanoid robot WABIAN-2R. The experimental videos are supplemented.
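A minimal sketch of how the reported findings could translate into gait-parameter adjustments: step height and vertical CoM amplitude are increased on soft ground, and the lateral CoM motion is stabilized with a simple feedback term. The gains and function names are illustrative placeholders, not values or interfaces from the paper.

```python
import numpy as np

def adjust_gait_for_soft_ground(step_height, com_vert_amp,
                                height_gain=0.15, amp_gain=0.10):
    """Increase step height and vertical CoM amplitude on soft ground.

    The gains are illustrative placeholders, not values from the paper.
    """
    return step_height * (1.0 + height_gain), com_vert_amp * (1.0 + amp_gain)

def lateral_com_correction(com_y_ref, com_y_meas, com_y_vel, kp=40.0, kd=6.0):
    """PD-style correction of the lateral CoM motion (illustrative gains)."""
    return kp * (com_y_ref - com_y_meas) - kd * com_y_vel

# Example: nominal gait parameters adapted for a compliant surface.
step_h, amp = adjust_gait_for_soft_ground(step_height=0.04, com_vert_amp=0.01)
corr = lateral_com_correction(com_y_ref=0.0, com_y_meas=0.012, com_y_vel=0.05)
print(step_h, amp, corr)
```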
IEEE-RAS International Conference on Humanoid Robots | 2014
Lorenzo Vannucci; Nino Cauli; Egidio Falotico; Alexandre Bernardino; Cecilia Laschi
Nowadays, increasingly complex robots are being designed. As the complexity of robots increases, traditional methods for robotic control fail, as the problem of finding the appropriate kinematic functions can easily become intractable. For this reason, the use of neuro-controllers, controllers based on machine learning methods, has risen at a rapid pace. This kind of controller is especially useful in the field of humanoid robotics, where the robot commonly has to perform hard tasks in a complex environment. A basic task for a humanoid robot is to visually pursue a target using eye-head coordination. In this work we present an adaptive model based on a neuro-controller for visual pursuit. The model allows the robot to follow a moving target with no delay (zero phase lag) by using a predictor of the target motion. The results show that the new controller can reach a target placed at a starting distance of 1.2 meters in less than 100 control steps (1 second) and can follow a moving target at low to medium frequencies (0.3 to 0.5 Hz) with zero lag and small position error (less than 4 cm along the main motion axis). The controller also has adaptive capabilities, being able to reach and follow a target even when some joints of the robot are clamped.
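A minimal sketch of the prediction-plus-pursuit idea described above, assuming a 100 Hz control rate. The constant-velocity extrapolator stands in for the learned predictor of the paper, and the gain and class names are illustrative placeholders.

```python
class TargetPredictor:
    """Naive constant-velocity extrapolator of the target direction.

    Placeholder for the learned predictor used in the paper.
    """
    def __init__(self, dt):
        self.dt = dt
        self.prev = None

    def predict(self, angle, horizon_steps=1):
        vel = 0.0 if self.prev is None else (angle - self.prev) / self.dt
        self.prev = angle
        return angle + vel * horizon_steps * self.dt

def pursuit_command(joint_angle, predicted_target_angle, gain=0.8):
    """Drive the eye/head joint toward the predicted target direction."""
    return gain * (predicted_target_angle - joint_angle)

# One control step at an assumed 100 Hz rate (dt = 0.01 s).
pred = TargetPredictor(dt=0.01)
cmd = pursuit_command(joint_angle=0.0,
                      predicted_target_angle=pred.predict(angle=0.05))
print(cmd)
```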
Robot and Human Interactive Communication | 2010
Egidio Falotico; Davide Zambrano; Giovanni Gerardo Muscolo; Laura Marazzato; Paolo Dario; Cecilia Laschi
The purpose of this work is to investigate the applicability of a visual tracking model on humanoid robots in order to achieve human-like predictive behavior. In humans, for moving targets the oculomotor system uses a combination of smooth pursuit eye movements and saccadic movements, namely "catch-up" saccades, to fixate the object of interest.
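A minimal sketch of the pursuit/saccade combination mentioned above: when the retinal position error exceeds a threshold a catch-up saccade cancels it, otherwise a smooth-pursuit velocity command tracks the target. The threshold and gain are illustrative placeholders, not parameters of the model in the paper.

```python
def tracking_command(retinal_error_deg, target_vel_deg_s,
                     saccade_threshold_deg=2.0, pursuit_gain=0.9):
    """Choose between smooth pursuit and a catch-up saccade (illustrative)."""
    if abs(retinal_error_deg) > saccade_threshold_deg:
        # Catch-up saccade: jump the gaze to cancel the position error.
        return ("saccade", retinal_error_deg)
    # Smooth pursuit: velocity command proportional to target velocity.
    return ("pursuit", pursuit_gain * target_vel_deg_s)

print(tracking_command(retinal_error_deg=3.5, target_vel_deg_s=10.0))
print(tracking_command(retinal_error_deg=0.5, target_vel_deg_s=10.0))
```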
IEEE International Conference on Biomedical Robotics and Biomechatronics | 2016
Mariangela Manti; Andrea Pratesi; Egidio Falotico; Matteo Cianchetti; Cecilia Laschi
The authors focus on the possibility of adapting technologies and basic concepts of soft robotics to build a new generation of soft modular manipulators for assistive robotics, able to safely come into direct contact with humans in a challenging scenario: the bathing activity. The paper starts with the presentation of the concept of the modular manipulator and then moves to a detailed description of one of its modules. The idea is to develop a manipulator whose actuation system combines McKibben-based flexible fluidic actuators with motor-driven cables, while addressing technological issues related to effectiveness and reliability. Shortening, elongation and bending capabilities have been assessed by testing different activation patterns. These measurements allowed the estimation of the single module's performance and its workspace. These outcomes represent the starting point for the development of a novel modular manipulator to be used as a shower arm for bathing activities.
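For orientation, a sketch of how a module's shortening and bending could be summarized kinematically under the usual piecewise-constant-curvature assumption (one common convention; the paper itself characterizes the module experimentally, so this is only illustrative and the actuator geometry is assumed).

```python
import numpy as np

def module_arc_parameters(l1, l2, l3, d):
    """Constant-curvature arc parameters from three actuator lengths.

    l1..l3: actuator lengths [m]; d: actuator distance from the module axis [m].
    Returns arc length, curvature, and bending-plane angle (one common
    convention; not taken from the paper).
    """
    s = (l1 + l2 + l3) / 3.0                                  # mean arc length
    g = l1**2 + l2**2 + l3**2 - l1*l2 - l2*l3 - l1*l3
    kappa = 2.0 * np.sqrt(g) / (d * (l1 + l2 + l3))           # curvature [1/m]
    phi = np.arctan2(np.sqrt(3.0) * (l3 - l2), l2 + l3 - 2.0*l1)  # bend plane
    return s, kappa, phi

# Example: one actuator shortened -> the module bends toward it.
print(module_arc_parameters(0.095, 0.100, 0.100, d=0.02))
```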
IEEE-RAS International Conference on Humanoid Robots | 2009
Egidio Falotico; Matteo Taiana; Davide Zambrano; Alexandre Bernardino; José Santos-Victor; Paolo Dario; Cecilia Laschi
In humans, the tracking of a moving visual target across occlusions is not performed with continuous smooth pursuit. The tracking stops when the object is occluded and one or two saccades are made to the other side of the occluder to anticipate when and where the object reappears. This paper describes a methodology for implementing such a behavior on a robotic platform, the iCub. We use the RLS algorithm for on-line estimation and prediction of the target trajectory and a vision-based object tracker capable of detecting the occlusion and the reappearance of an object. This system demonstrates predictive ability for tracking across an occlusion with a biologically plausible behavior.
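A minimal sketch of recursive least squares used to fit and extrapolate a target trajectory, as referenced above. The quadratic-in-time basis, forgetting factor, and synthetic data are assumptions for illustration, not details from the paper.

```python
import numpy as np

class RLSPredictor:
    """Recursive least squares fit of target position as a function of time,
    used here (illustratively) to extrapolate the trajectory during occlusion."""
    def __init__(self, n=3, lam=0.98, delta=1e3):
        self.w = np.zeros(n)           # model parameters
        self.P = np.eye(n) * delta     # inverse correlation matrix
        self.lam = lam                 # forgetting factor

    def _phi(self, t):
        return np.array([1.0, t, t * t])   # basis: offset, velocity, accel.

    def update(self, t, y):
        phi = self._phi(t)
        k = self.P @ phi / (self.lam + phi @ self.P @ phi)   # gain vector
        self.w += k * (y - phi @ self.w)                      # parameter update
        self.P = (self.P - np.outer(k, phi @ self.P)) / self.lam

    def predict(self, t):
        return self._phi(t) @ self.w

# Fit on visible samples, then predict where the target will reappear.
rls = RLSPredictor()
for t in np.arange(0.0, 1.0, 0.05):
    rls.update(t, 0.5 * t)            # synthetic target moving at 0.5 m/s
print(rls.predict(1.5))               # extrapolated position during occlusion
```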
Frontiers in Neurorobotics | 2017
Egidio Falotico; Lorenzo Vannucci; Alessandro Ambrosano; Ugo Albanese; Stefan Ulbrich; Juan Camilo Vasquez Tieck; Georg Hinkel; Jacques Kaiser; Igor Peric; Oliver Denninger; Nino Cauli; Murat Kirtay; Arne Roennau; Gudrun Klinker; Axel Von Arnim; Luc Guyot; Daniel Peppicelli; Pablo Martínez-Cañada; Eduardo Ros; Patrick Maier; Sandro Weber; Manuel J. Huber; David A. Plecher; Florian Röhrbein; Stefan Deser; Alina Roitberg; Patrick van der Smagt; Rüdiger Dillman; Paul Levi; Cecilia Laschi
Combined efforts in the fields of neuroscience, computer science, and biology have made it possible to design biologically realistic models of the brain based on spiking neural networks. For a proper validation of these models, an embodiment in a dynamic and rich sensory environment, where the model is exposed to a realistic sensory-motor task, is needed. Because these brain models, at the current stage, cannot deal with real-time constraints, it is not possible to embed them in a real-world task; the embodiment has to be simulated as well. While adequate tools exist to simulate either complex neural networks or robots and their environments, there is so far no tool that makes it easy to establish communication between brain and body models. The Neurorobotics Platform is a new web-based environment that aims to fill this gap by offering scientists and technology developers a software infrastructure that allows them to connect brain models to detailed simulations of robot bodies and environments and to use the resulting neurorobotic systems for in silico experimentation. To simplify the workflow and reduce the level of required programming skills, the platform provides editors for the specification of experimental sequences and conditions, environments, robots, and brain-body connectors. In addition, a variety of existing robots and environments are provided. This work presents the architecture of the first release of the Neurorobotics Platform, developed in subproject 10 "Neurorobotics" of the Human Brain Project (HBP). At the current state, the Neurorobotics Platform allows researchers to design and run basic experiments in neurorobotics using simulated robots and simulated environments linked to simplified versions of brain models. We illustrate the capabilities of the platform with three example experiments: a Braitenberg task implemented on a mobile robot, a sensory-motor learning task based on a robotic controller, and a visual tracking task embedding a retina model on the iCub humanoid robot. These use cases allow assessing the applicability of the Neurorobotics Platform to robotic tasks as well as to neuroscientific experiments.
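A conceptual sketch of the closed loop the platform establishes between a brain model and a simulated body: both are stepped in lockstep, with sensory data flowing to the brain and motor commands flowing back. The classes and update rules below are placeholders for illustration only, not the platform's actual API or simulators.

```python
class ToyBrain:
    """Placeholder for a spiking-network model; maps sensing to a motor drive."""
    def step(self, sensory_input, dt):
        return 0.5 * sensory_input

class ToyRobot:
    """Placeholder for a simulated robot body and environment."""
    def __init__(self):
        self.state = 0.0
    def step(self, motor_command, dt):
        self.state += motor_command * dt
        return self.state          # placeholder sensory reading

def run_experiment(steps=100, dt=0.02):
    brain, robot = ToyBrain(), ToyRobot()
    reading = 1.0
    for _ in range(steps):
        command = brain.step(reading, dt)     # brain -> body
        reading = robot.step(command, dt)     # body -> brain
    return reading

print(run_experiment())
```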
Autonomous Robots | 2017
Egidio Falotico; Nino Cauli; Przemyslaw Kryczka; Kenji Hashimoto; Alain Berthoz; Atsuo Takanishi; Paolo Dario; Cecilia Laschi
Neuroscientific studies show that humans tend to stabilize their head orientation while accomplishing a locomotor task. This is beneficial to image stabilization and, in general, to keeping a reference frame for the body. In robotics, too, head stabilization during robot walking provides advantages for robot vision and gaze-guided locomotion. To obtain the head movement behaviors found in human walking, it is necessary and sufficient to control the orientation (roll, pitch and yaw) of the head in space. Based on these principles, three controllers have been designed: two classic robotic controllers, an inverse kinematics based controller and an inverse kinematics differential controller, and a bio-inspired adaptive controller based on feedback error learning. The controllers use the inertial feedback from an IMU sensor and control the neck joints in order to align the head orientation with the global orientation reference. We present the results of two sets of experiments with the head stabilization controllers, validating the robustness of the proposed control methods. In particular, we focus our analysis on the effectiveness of the bio-inspired adaptive controller compared with the classic robotic controllers. The first set of experiments, run on a simulated robot, focused on the controllers' response to a set of disturbance frequencies and to a step function. The second set of experiments was carried out on the SABIAN robot, where these controllers were implemented in conjunction with a model of the vestibulo-ocular reflex (VOR) and opto-kinetic reflex (OKR). Such a setup allows comparing the performance of the considered head stabilization controllers in conditions that mimic the human stabilization mechanisms, composed of the joint effect of VOR, OKR and head stabilization. The results show that the bio-inspired adaptive controller is more beneficial for head stabilization in tasks involving a sinusoidal torso disturbance, and it shows performance comparable to the inverse kinematics controller in the step response and in the locomotion experiments conducted on the real robot.
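A minimal sketch of the inverse-kinematics flavor of head stabilization described above: the orientation error measured by the IMU is mapped to a neck-joint correction. The identity Jacobian and the gain are simplifying assumptions for illustration, not the kinematic model of the paper.

```python
import numpy as np

def head_stabilization_step(q_neck, head_rpy_meas, head_rpy_ref,
                            J_inv=None, gain=1.0):
    """One inverse-kinematics-style stabilization step (illustrative sketch).

    q_neck: current neck joint angles; head_rpy_meas: head orientation from
    the IMU; head_rpy_ref: desired head orientation in space. J_inv is the
    inverse of the orientation Jacobian; the identity is used here as a
    placeholder for a chain whose joints map one-to-one onto roll/pitch/yaw.
    """
    if J_inv is None:
        J_inv = np.eye(3)
    error = np.asarray(head_rpy_ref) - np.asarray(head_rpy_meas)
    return np.asarray(q_neck) + gain * (J_inv @ error)

# Keep the head level while the torso pitches forward by 0.1 rad.
q_new = head_stabilization_step(q_neck=[0.0, 0.0, 0.0],
                                head_rpy_meas=[0.0, 0.1, 0.0],
                                head_rpy_ref=[0.0, 0.0, 0.0])
print(q_new)
```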
International Conference on Informatics in Control, Automation and Robotics | 2016
Thomas George Thuruthel; Egidio Falotico; Matteo Cianchetti; Federico Renda; Cecilia Laschi
This paper presents a learning model for obtaining global inverse statics solutions for redundant soft robots. Our motivation stems from the observation that the inverse statics problem is analogous to the inverse kinematics problem in the case of soft continuum manipulators. A unique inverse statics formulation and data sampling method enables the learning system to circumvent the main roadblocks of the inversion problem. Unlike previous studies, we address static control of both position and orientation of soft robots. Preliminary tests were conducted on a simulated model of a soft manipulator. The results indicate that learning-based approaches could be an effective method for modelling and controlling complex soft robots, especially high-dimensional redundant robots.
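A minimal sketch of the general idea of learning an inverse statics map by regression on sampled data. The placeholder forward model, the linear regressor, and the motor-babbling sampling are assumptions for illustration; they are not the formulation or the learning system used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def forward_statics(u):
    """Placeholder forward statics: actuation -> (x, y, theta) at equilibrium."""
    return np.array([np.sum(u) * 0.1, (u[0] - u[1]) * 0.2, (u[1] - u[2]) * 0.5])

# Sample actuations and record the resulting static poses (motor babbling).
U = rng.uniform(0.0, 1.0, size=(500, 3))
X = np.array([forward_statics(u) for u in U])

# Fit the inverse map pose -> actuation (least squares with a bias term).
Xb = np.hstack([X, np.ones((len(X), 1))])
W, *_ = np.linalg.lstsq(Xb, U, rcond=None)

def inverse_statics(pose):
    return np.append(pose, 1.0) @ W

target = np.array([0.15, 0.02, 0.1])
u = inverse_statics(target)
print(u, forward_statics(u))    # check the achieved pose against the target
```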
Robot and Human Interactive Communication | 2012
Egidio Falotico; Nino Cauli; Kenji Hashimoto; Przemyslaw Kryczka; Atsuo Takanishi; Paolo Dario; Alain Berthoz; Cecilia Laschi
In this work we propose an adaptive model for head stabilization based on feedback error learning (FEL). The model is capable of overcoming the delays caused by the head motor system and adapts itself to the dynamics of the head motion. It has been designed to track an arbitrary reference orientation for the head in space and to reject the disturbance caused by trunk motion. For efficient error learning we use the recursive least squares (RLS) algorithm, a Newton-like method that guarantees very fast convergence. Moreover, we implement a neural network to compute the rotational part of the head inverse kinematics. Verification of the proposed control is conducted through experiments in MATLAB Simulink and on the humanoid robot SABIAN.
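A minimal sketch of the feedback error learning scheme described above: a feedforward model is trained on-line using the feedback controller's output as the error signal, and the two commands are summed. For brevity the sketch uses a linear model with an LMS-style update; the paper instead uses RLS and a neural network, and the gains and feature choice below are illustrative assumptions.

```python
import numpy as np

class FeedbackErrorLearning:
    """Feedback error learning sketch for a single head axis (illustrative)."""
    def __init__(self, n_features, kp=2.0, lr=0.05):
        self.w = np.zeros(n_features)   # feedforward model parameters
        self.kp = kp                    # feedback (P) gain
        self.lr = lr                    # learning rate

    def step(self, features, ref, meas):
        u_fb = self.kp * (ref - meas)          # feedback command
        u_ff = self.w @ features               # feedforward command
        self.w += self.lr * u_fb * features    # FEL: feedback output as error
        return u_ff + u_fb                     # total motor command

# Features could be the reference orientation, its derivative, and trunk motion.
fel = FeedbackErrorLearning(n_features=3)
u = fel.step(features=np.array([0.1, 0.0, 0.05]), ref=0.1, meas=0.08)
print(u)
```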
IEEE-RAS International Conference on Humanoid Robots | 2011
Egidio Falotico; Cecilia Laschi; Paolo Dario; Delphine Bernardin; Alain Berthoz
Head and trunk movements were measured in eight subjects performing a complex locomotor task. The purpose was to determine whether, even in a complex scenario, the head is stabilized during walking. The experimental conditions were Free Gaze (FG) and Anchored Gaze (AG). In the AG condition, subjects were asked to keep their gaze on one of two Anchor Points placed in the scenario while walking. Despite the linear and rotational motion of the trunk, the head pitch orientation shows less variation in the AG condition (9.2°) than in FG (23.4°). Moreover, the head pitch rotation relative to the trunk in the AG condition appeared to compensate for the trunk pitch rotation; an anti-phase correlation between these two signals occurred in the AG trials. Based on these observations, we designed a model in which the head is stabilized and its orientation is simulated for a locomotor task. The proposed model considers the trunk rotation as a disturbance and allows the head to follow an input reference rotation while compensating for the trunk rotation. This model, suitable for future implementation on a robotic platform, is based on an inverse kinematics controller and is tested on real human data.
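A minimal sketch of the anti-phase compensation described above, treating the trunk pitch as a disturbance to be rejected; the one-line rule and its arguments are an illustrative simplification, not the inverse kinematics controller of the paper.

```python
def head_pitch_command(head_pitch_ref, trunk_pitch):
    """Command the neck so the head follows the reference orientation in space,
    cancelling the trunk pitch disturbance (anti-phase compensation)."""
    return head_pitch_ref - trunk_pitch

# Trunk pitches forward by 5 deg; the neck pitches back to keep the head level.
print(head_pitch_command(head_pitch_ref=0.0, trunk_pitch=5.0))
```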