Eliseo Stefano Maini
Sant'Anna School of Advanced Studies
Publications
Featured research published by Eliseo Stefano Maini.
ieee-ras international conference on humanoid robots | 2006
Luigi Manfredi; Eliseo Stefano Maini; Paolo Dario; Cecilia Laschi; Benoît Girard; Nicolas Tabareau; Alain Berthoz
In this paper we investigated the relevance of a robotic implementation in the development and validation of a neurophysiological model of the generation of saccadic eye movements. To this aim, a well-characterized model of the brainstem saccadic circuitry was implemented on a humanoid robot head with 7 degrees of freedom (DOFs), which was designed to mimic the human head in terms of physical dimensions (i.e. geometry and masses), kinematics (i.e. number of DOFs and ranges of motion), dynamics (i.e. velocities and accelerations), and functionality (i.e. the ocular movements of vergence, smooth pursuit and saccades). Our implementation makes the robot head execute saccadic eye movements upon a visual stimulus appearing in the periphery of the robot visual field, by reproducing the following steps: projection of the camera images onto collicular images, according to the modeled mapping between the retina and the superior colliculus (SC); transformation of the retinotopic coordinates of the stimulus, obtained in the camera reference frame, into their corresponding projections on the SC; spatio-temporal transformation of these coordinates according to what is known to happen in the brainstem saccade burst generator of primates; and execution of the eye movement by controlling one eye motor of the robot in velocity mode. The capabilities of the robot head to execute saccadic movements have been tested against the implemented neurophysiological model, in view of using this robotic implementation to validate and tune the model itself in further focused experimental trials.
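The retina-to-SC projection described in this abstract is commonly modeled with the complex-logarithmic afferent mapping of Ottes, Van Gisbergen and Eggermont. A minimal sketch follows; the parameter values (A, Bu, Bv) are illustrative figures from the general literature, not taken from this paper:

```python
import math

def retina_to_colliculus(R, phi, A=3.0, Bu=1.4, Bv=1.8):
    """Map a retinal target (eccentricity R in deg, direction phi in rad)
    to anatomical coordinates (u, v) in mm on the superior colliculus,
    using the standard complex-logarithmic afferent mapping."""
    u = Bu * math.log(math.sqrt(R**2 + 2 * A * R * math.cos(phi) + A**2) / A)
    v = Bv * math.atan2(R * math.sin(phi), R * math.cos(phi) + A)
    return u, v
```

A foveal stimulus (R = 0) maps to the collicular origin, and the logarithmic compression means equal distances on the SC surface correspond to progressively larger retinal steps at higher eccentricities.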
Lecture Notes in Computer Science | 2005
Eliseo Stefano Maini
This paper presents a robust and non-iterative algorithm for the least-squares fitting of ellipses to scattered data. In this work, we undertake a critical analysis of a previously reported work [1] and propose a novel approach that preserves its advantages while overcoming its major limitations and drawbacks. The modest increase in computational burden introduced by this method is justified by the excellent numerical stability achieved. Furthermore, the method is simple and accurate and can be implemented with a fixed computation time. These characteristics, coupled with its robustness and specificity, make the algorithm well suited for applications requiring real-time machine vision.
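The reference [1] being improved upon is in the family of direct (non-iterative) least-squares ellipse fits under the constraint 4ac - b² = 1 (Fitzgibbon et al.). As a sketch of that class of methods, here is the numerically stable block formulation due to Halir and Flusser; this illustrates the approach, not necessarily the specific algorithm of this paper:

```python
import numpy as np

def fit_ellipse(x, y):
    """Direct least-squares fit of the conic a*x^2 + b*x*y + c*y^2 + d*x + e*y + f = 0
    under the ellipse constraint 4ac - b^2 = 1, in the numerically stable
    block formulation of Halir and Flusser."""
    D1 = np.column_stack([x * x, x * y, y * y])    # quadratic part of the design matrix
    D2 = np.column_stack([x, y, np.ones_like(x)])  # linear part
    S1, S2, S3 = D1.T @ D1, D1.T @ D2, D2.T @ D2
    T = -np.linalg.solve(S3, S2.T)                 # eliminates the linear block
    M = S1 + S2 @ T
    M = np.array([M[2] / 2.0, -M[1], M[0] / 2.0])  # apply the inverse constraint matrix
    V = np.real(np.linalg.eig(M)[1])               # guard against tiny imaginary parts
    cond = 4 * V[0] * V[2] - V[1] ** 2             # 4ac - b^2 for each eigenvector
    a1 = V[:, cond > 0][:, 0]                      # the unique ellipse solution
    return np.concatenate([a1, T @ a1])            # (a, b, c, d, e, f)
```

The reduced 3x3 eigenproblem avoids the singular 6x6 generalized eigenproblem of the original formulation, which is the main source of its numerical instability.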
international conference on robotics and automation | 2007
Domenico Campolo; Eliseo Stefano Maini; Francesco Patane; Cecilia Laschi; Paolo Dario; Flavio Keller; Eugenio Guglielmelli
Neuro-developmental engineering is a new interdisciplinary research area at the intersection of developmental neuroscience and bioengineering. Applications can be found in early detection of neuro-developmental disorders via a new generation of mechatronic toys for assessing the regular development of perceptual and motor skills in infants, in particular coordination of mobile and multiple frames of reference during manipulation. This paper focuses on the design of a novel mechatronic toy, shaped as a 5 cm (diameter) ball, i.e. small enough to be grasped with a single hand by a 1-year-old child. The sensorized ball is designed to embed a kinematics sensing unit, able to sense both the orientation in 3D space and linear accelerations, as well as a force sensing unit to detect grasping patterns during manipulation. The design also includes the dimensioning of batteries able to operate for 1 hour during experimental sessions, as well as a wireless communication unit.
Autonomous Robots | 2008
Eliseo Stefano Maini; Luigi Manfredi; Cecilia Laschi; Paolo Dario
In this paper we address the problem of executing fast gaze shifts toward a visual target with a robotic platform. The robotic platform is an anthropomorphic head with seven degrees of freedom (DOFs) that was designed to mimic the physical dimensions (i.e. geometry and masses), the performance (i.e. angles and velocities) and the functional abilities (i.e. neck movements and eye vergence) of the human head. In our approach the “gold performance” of the robotic head is represented by the accurate eye-head coordination that is observed during head-free gaze saccades in humans. To this aim, we implemented and tested on the robotic head a well-characterized, biologically inspired model of gaze control, and we investigated the effectiveness of the bioinspired paradigm in achieving an appropriate control of the multi-DOF robotic head. Moreover, in order to verify whether the proposed model can reproduce the typical patterns of actual human movements, we performed a quantitative investigation of the relation between movement amplitude, duration and peak velocity. In the latter case, we compared the actual robot performance with existing data on the human main sequence, which is known to provide a general method for quantifying the dynamics of oculomotor control. The obtained results confirmed (1) the ability of the proposed bioinspired control to achieve and maintain a stable fixation of the target, which was always well positioned within the fovea, and (2) the ability to reproduce the typical human main sequence diagrams, which had never previously been successfully implemented on a fully anthropomorphic head. Although fundamentally aimed at the experimental investigation of the underlying neurophysiological models, the present study is also intended to provide some possible relevant solutions to the development of human-like eye movements in humanoid robots.
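The main sequence mentioned in this abstract relates saccade amplitude to peak velocity (and duration); a common parameterization in the literature is a saturating exponential. A minimal sketch, with parameter values (V_max, C) chosen as illustrative human-like figures rather than taken from this paper:

```python
import math

def main_sequence_peak_velocity(amplitude_deg, v_max=500.0, c=14.0):
    """Predicted saccadic peak velocity (deg/s) as a function of movement
    amplitude (deg), using the saturating-exponential main-sequence form
    V_peak = V_max * (1 - exp(-A / C)).  Parameter values are illustrative."""
    return v_max * (1.0 - math.exp(-amplitude_deg / c))
```

Plotting measured robot saccades against this curve is one way to quantify how closely a platform reproduces human oculomotor dynamics: small saccades scale roughly linearly with amplitude, while large ones saturate near V_max.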
Advanced Robotics | 2008
Cecilia Laschi; Francesco Patane; Eliseo Stefano Maini; Luigi Manfredi; Giancarlo Teti; Loredana Zollo; Eugenio Guglielmelli; Paolo Dario
This paper presents the biomechatronic design and development of an anthropomorphic robotic head able to perform human eye movements as a tool for experimental investigation in neuroscience. This robotic head has been designed upon specifications derived from models of the human head and of human neck and eye movements, in terms of total mass, geometry, number of degrees of freedom (d.o.f.), joint velocities and accelerations. The robotic head has 7 d.o.f.: 4 d.o.f. specifically allowing the neck movements and 3 d.o.f. dedicated to the faster eye movements; it has a total mass of approximately 5.6 kg, and it is capable of reaching a maximum eye velocity and acceleration of 1000°/s and 10,000°/s², respectively, compatible with human data. The coordination of neck and eye movements allows smooth pursuit; and the independent eye yaw movements allow vergence. As an example, the use of this robotic head for validating a novel neurophysiological model of gaze control is presented. The model has been implemented on the robot, which can transform the retina images into their mapping onto the relevant brain area (superior colliculus), calculate the coordinates of a visual stimulus appearing in the periphery, generate the velocity profiles for the eye motors and execute the eye movements. This implementation shows that the robotic head can perform human-like saccades and allows experimental comparison with human data, so as to validate and revise the model itself.
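The velocity-profile generation step can be sketched with a Robinson-style local feedback loop, in which eye velocity is a saturating function of the remaining motor error. This is a generic sketch, not the paper's controller; the velocity cap matches the head's stated 1000°/s limit, while the slope constant `c` is an illustrative assumption:

```python
import math

def simulate_saccade(target_deg, dt=0.001, v_max=1000.0, c=16.0, tol=0.1):
    """Local-feedback burst-generator sketch: eye velocity is a saturating
    function of the remaining motor error, integrated with Euler steps until
    the error falls below tol.  Returns the eye-position trajectory (deg)."""
    eye, trajectory = 0.0, [0.0]
    while abs(target_deg - eye) > tol:
        error = target_deg - eye
        v = math.copysign(v_max * (1.0 - math.exp(-abs(error) / c)), error)
        eye += v * dt
        trajectory.append(eye)
    return trajectory
```

The saturating nonlinearity naturally produces the bell-shaped velocity profile and the main-sequence saturation seen in human saccades: small errors yield a roughly linear velocity command, while large errors are clipped near v_max.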
SPRINGER TRACTS IN ADVANCED ROBOTICS | 2007
Cecilia Laschi; Eliseo Stefano Maini; Francesco Patane; Luca Ascari; Gaetano Ciaravella; Ulisse Bertocchi; Cesare Stefanini; Paolo Dario; Alain Berthoz
This work addresses the problem of developing novel interfaces for robotic systems that can allow the most natural transmission of control commands and sensory information, in the two directions. A novel approach to the development of natural interfaces is based on the detection of the human’s motion intention, instead of the movement itself, as in traditional interfaces. Based on recent findings in neuroscience, the intention can be detected from anticipatory movements that naturally accompany more complex motor behaviors.
Archive | 2009
Luigi Manfredi; Eliseo Stefano Maini; Cecilia Laschi
Thanks to improvements in mechanical technology, it is currently possible to design robotic platforms that are increasingly similar to humans (Laschi et al., 2008; Kaneko, 2004; Kuffner et al., 2005). However, the increasing robot complexity (i.e. the presence of many degrees of freedom, nonlinear actuation and complex geometries) requires more sophisticated control models and a heavier computational burden. The development of humanoid robots is a very relevant issue in robotics research, especially when one considers the challenges related to the actual implementation of a humanoid robot in terms of both mechanics and control system. These research efforts are justified considering that an actual humanoid robot is regarded as a fundamental tool for neuroscience and, at the same time, neuroscience can be exploited as an alternative control solution for the design of humanoid robots (Kawato, 2000). In this chapter, the neurophysiological models for the control of gaze (i.e. the line of sight) shifts are discussed and their implementation on a robotic head platform is presented. In particular, the rapid movement of the gaze and the issues related to eye-head coordination were investigated from the neurophysiological and robotic points of view. In neurophysiology the rapid movement of the gaze is known as a saccade. These movements are also classified either as head-restrained or head-free visual orienting movements (Barnes, 1979; Bizzi et al., 1971; Bizzi, 1972; Guitton and Volle, 1987; Guitton, 1992; Goossens and Van Opstal, 1997). The neurophysiological models discussed here are the visual mapping of the superior colliculus and the independent gaze control model presented by Goossens and colleagues for eye-head coordinated motion (Goossens & Van Opstal, 1997). In the case of the collicular visual mapping, the input is a visual image that is mapped from the camera image to the superior colliculus.
Conversely, for the gaze control model, the input data are the two angular deviations (i.e. horizontal and vertical) that may be used to define the gaze shift amplitude and the movement orientation. The eye-head saccadic model
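The conversion of the two angular deviations into gaze-shift amplitude and movement orientation described above can be sketched as a simple polar decomposition (function name and degree convention are illustrative assumptions):

```python
import math

def gaze_shift_polar(h_deg, v_deg):
    """Convert the horizontal and vertical angular deviations (deg) that feed
    the gaze-control model into gaze-shift amplitude (deg) and movement
    orientation (deg, counterclockwise from the horizontal meridian)."""
    amplitude = math.hypot(h_deg, v_deg)
    direction = math.degrees(math.atan2(v_deg, h_deg))
    return amplitude, direction
```

For example, a deviation of 3° horizontal and 4° vertical corresponds to a 5° gaze shift directed about 53° above the horizontal meridian.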
Neurorehabilitation and Neural Repair | 2008
Marco Caimmi; Stefano Carda; Chiara Giovanzana; Eliseo Stefano Maini; Angelo M. Sabatini; Nicola Smania; Franco Molteni
Gait & Posture | 2007
Michele Coluccini; Eliseo Stefano Maini; Chiara Martelloni; Giuseppina Sgandurra; Giovanni Cioni
Electronics Letters | 2002
Angelo M. Sabatini; Vincenzo Genovese; Eliseo Stefano Maini