Publication


Featured research published by Agostino Gibaldi.


Neurocomputing | 2010

A cortical model for binocular vergence control without explicit calculation of disparity

Agostino Gibaldi; Manuela Chessa; Andrea Canessa; Silvio P. Sabatini; Fabio Solari

A computational model for the control of horizontal vergence, based on a population of disparity-tuned complex cells, is presented. Since the population is able to extract the disparity map only over a limited range, using that map to drive vergence control would confine the control to the same range. Instead, the model extracts the disparity-vergence response directly, by combining the outputs of the disparity detectors without explicit calculation of the disparity map. The resulting vergence control yields stable fixation and has a short response time over a wide range of disparities. Simulations with synthetic stimuli placed at different depths validate the approach.
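As a rough illustration of the idea (not the authors' implementation), the sketch below builds a toy population of phase-shift disparity-energy complex cells on a 1-D image pair and pools their responses with an odd-symmetric read-out to obtain a vergence drive, without ever computing a disparity map. The filter parameters and read-out weights are illustrative assumptions.

```python
import numpy as np

def complex_cell_responses(left, right, freq=0.25,
                           phases=np.linspace(-np.pi, np.pi, 9)):
    """Disparity-energy responses of a small population of phase-tuned
    complex cells; `left` and `right` are 1-D image rows (toy input)."""
    x = np.arange(left.size, dtype=float)
    sigma = 1.0 / freq
    env = np.exp(-0.5 * ((x - x.mean()) / sigma) ** 2)   # Gaussian envelope
    responses = []
    for dphi in phases:
        # quadrature pair on the left eye, phase-shifted pair on the right eye
        l_even = np.sum(left * env * np.cos(2 * np.pi * freq * x))
        l_odd  = np.sum(left * env * np.sin(2 * np.pi * freq * x))
        r_even = np.sum(right * env * np.cos(2 * np.pi * freq * x + dphi))
        r_odd  = np.sum(right * env * np.sin(2 * np.pi * freq * x + dphi))
        # binocular energy: squared sums of the binocular simple-cell responses
        responses.append((l_even + r_even) ** 2 + (l_odd + r_odd) ** 2)
    return np.array(responses), phases

def vergence_signal(responses, phases, freq=0.25):
    # odd-symmetric read-out (an assumption): cells preferring crossed and
    # uncrossed disparities push vergence in opposite directions, so no
    # explicit disparity map is ever computed
    preferred_disparity = phases / (2 * np.pi * freq)
    weights = np.tanh(preferred_disparity)
    return float(weights @ responses / (responses.sum() + 1e-9))

# toy usage: a shifted 1-D pattern stands in for a binocular image pair
rng = np.random.default_rng(0)
pattern = rng.standard_normal(64)
left, right = pattern, np.roll(pattern, 2)   # 2-pixel disparity
resp, ph = complex_cell_responses(left, right)
print("vergence drive:", vergence_signal(resp, ph))
```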


Behavior Research Methods | 2017

Evaluation of the Tobii EyeX Eye tracking controller and Matlab toolkit for research

Agostino Gibaldi; Mauricio Vanegas; Peter J. Bex; Guido Maiello

The Tobii EyeX Controller is a new low-cost binocular eye tracker marketed for integration in gaming and consumer applications. The manufacturers claim that the system was conceived for natural eye gaze interaction, does not require continuous recalibration, and allows moderate head movements. The Controller is provided with an SDK to foster the development of new eye tracking applications. We review the characteristics of the device for its possible use in scientific research. We develop and evaluate an open source Matlab Toolkit that can be employed to interface with the EyeX device for gaze recording in behavioral experiments. The Toolkit provides calibration procedures tailored to both binocular and monocular experiments, as well as procedures to evaluate other eye tracking devices. The observed performance of the EyeX (i.e. accuracy < 0.6°, precision < 0.25°, latency < 50 ms and sampling frequency ≈55 Hz) is sufficient for some classes of research application. The device can be successfully employed to measure fixation parameters and saccadic, smooth pursuit, and vergence eye movements. However, the relatively low sampling rate and moderate precision limit the suitability of the EyeX for monitoring micro-saccadic eye movements or for real-time gaze-contingent stimulus control. For these applications, research-grade, high-cost eye tracking technology may still be necessary. Therefore, despite its limitations with respect to high-end devices, the EyeX has the potential to further the dissemination of eye tracking technology to a broad audience, and could be a valuable asset in consumer and gaming applications as well as in a subset of basic and clinical research settings.
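The accuracy and precision figures reported above can be reproduced from raw gaze recordings with standard definitions. The sketch below is not the toolkit's API; it is a generic Python illustration, with hypothetical sample data, of how fixation accuracy (mean offset from a known target) and precision (sample-to-sample RMS) are commonly computed.

```python
import numpy as np

def gaze_accuracy_precision(gaze_deg, target_deg):
    """Accuracy and precision of a fixation, both in degrees of visual angle.

    gaze_deg   : (N, 2) array of recorded gaze positions (x, y), hypothetical data
    target_deg : (2,) known target position shown during the fixation
    """
    gaze = np.asarray(gaze_deg, dtype=float)
    target = np.asarray(target_deg, dtype=float)
    # accuracy: angular offset between the mean gaze position and the true target
    accuracy = float(np.linalg.norm(gaze.mean(axis=0) - target))
    # precision: RMS of sample-to-sample angular distances (a common definition)
    steps = np.diff(gaze, axis=0)
    precision = float(np.sqrt(np.mean(np.sum(steps ** 2, axis=1))))
    return accuracy, precision

# toy usage with simulated samples scattered around a target at (5, 0) degrees
rng = np.random.default_rng(1)
samples = np.array([5.3, 0.1]) + 0.15 * rng.standard_normal((55, 2))
print(gaze_accuracy_precision(samples, target_deg=(5.0, 0.0)))
```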


IEEE Transactions on Autonomous Mental Development | 2014

A hierarchical system for a distributed representation of the peripersonal space of a humanoid robot

Marco Antonelli; Agostino Gibaldi; Frederik Beuth; Angel Juan Duran; Andrea Canessa; Manuela Chessa; Fabio Solari; Angel P. Del Pobil; Fred H. Hamker; Eris Chinellato; Silvio P. Sabatini

Reaching a target object in an unknown and unstructured environment is easily performed by human beings. However, designing a humanoid robot that executes the same task requires the implementation of complex abilities, such as identifying the target in the visual field, estimating its spatial location, and precisely driving the motors of the arm to reach it. While research usually tackles the development of such abilities individually, in this work we integrate a number of computational models into a unified framework, and demonstrate in a humanoid torso the feasibility of an integrated working representation of its peripersonal space. To achieve this goal, we propose a cognitive architecture that connects several models inspired by neural circuits of the visual, frontal and posterior parietal cortices of the brain. The outcome of the integration process is a system that allows the robot to create its internal model and its representation of the surrounding space by interacting with the environment directly, through a mutual adaptation of perception and action. The robot is eventually capable of executing a set of tasks, such as recognizing, gazing at, and reaching target objects, which can run separately or cooperate to support more structured and effective behaviors.


International Journal of Neural Systems | 2010

Learning Eye Vergence Control from a Distributed Disparity Representation

Nikolay Chumerin; Agostino Gibaldi; Silvio P. Sabatini; Marc M. Van Hulle

We present two neural models for vergence angle control of a robotic head, a simplified and a more complex one. Both models work in a closed-loop manner and do not rely on explicitly computed disparity, but extract the desired vergence angle from the post-processed response of a population of disparity tuned complex cells, the actual gaze direction and the actual vergence angle. The first model assumes that the gaze direction of the robotic head is orthogonal to its baseline and the stimulus is a frontoparallel plane orthogonal to the gaze direction. The second model goes beyond these assumptions, and operates reliably in the general case where all restrictions on the orientation of the gaze, as well as the stimulus position, type and orientation, are dropped.
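For reference, the desired vergence angle under the first model's assumptions (gaze orthogonal to the baseline, symmetric fixation on a frontoparallel stimulus) follows from elementary binocular geometry. The sketch below illustrates that geometry with an assumed 6.5 cm baseline; it is not the paper's controller.

```python
import numpy as np

def vergence_angle(baseline_m, distance_m):
    """Vergence angle (radians) for symmetric fixation at `distance_m`
    with interocular baseline `baseline_m` (standard binocular geometry)."""
    return 2.0 * np.arctan(baseline_m / (2.0 * distance_m))

def vergence_correction(residual_disparity_rad):
    """For small angles, nullifying a residual retinal disparity requires a
    vergence change approximately equal to the disparity itself."""
    return residual_disparity_rad

b = 0.065                        # 6.5 cm baseline (typical value, an assumption)
for d in (0.3, 0.5, 1.0, 2.0):   # fixation distances in metres
    print(f"{d:4.1f} m -> vergence {np.degrees(vergence_angle(b, d)):5.2f} deg")
```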


IEEE-RAS International Conference on Humanoid Robots | 2011

A neuromorphic control module for real-time vergence eye movements on the iCub robot head

Agostino Gibaldi; Andrea Canessa; Manuela Chessa; Silvio P. Sabatini; Fabio Solari

We implemented a cortical model of vergence eye movements on a humanoid robot head (iCub). The proposed control strategy relies on a computational substrate of modeled V1 complex cells that provides a distributed representation of binocular disparity information. The model includes a normalization stage that allows for vergence control that is independent of the texture of the object and of luminance changes. The disparity information is exploited to provide a signal able to nullify the binocular disparity in a foveal region.
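A divisive normalization stage of the kind mentioned above can be sketched as follows. This is a minimal illustration of the principle (each response divided by the pooled population activity), not the iCub module itself, and the semisaturation constant is an arbitrary assumption.

```python
import numpy as np

def normalize_population(responses, sigma=1e-2):
    """Divisive normalization: each cell's response is divided by the pooled
    activity of the whole population, so the read-out becomes largely
    independent of stimulus contrast/texture energy (sketch only)."""
    responses = np.asarray(responses, dtype=float)
    return responses / (sigma + responses.sum())

# the same relative pattern at two different contrasts maps to nearly the same code
weak   = np.array([0.1, 0.4, 1.0, 0.4, 0.1])
strong = 10.0 * weak
print(normalize_population(weak))
print(normalize_population(strong))   # ~identical after normalization
```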


Sensors | 2012

Vector Disparity Sensor with Vergence Control for Active Vision Systems

Francisco Barranco; Javier Díaz; Agostino Gibaldi; Silvio P. Sabatini; Eduardo Ros

This paper presents an architecture for computing vector disparity for active vision systems, as used in robotics applications. Controlling the vergence angle of a binocular system allows us to efficiently explore dynamic environments, but requires a generalization of the disparity computation with respect to a static camera setup, where the disparity is strictly 1-D after image rectification. The interaction between vision and motor control allows us to develop an active sensor that achieves high accuracy of the disparity computation around the fixation point and a fast reaction time for the vergence control. In this contribution, we address the development of a real-time architecture for vector disparity computation using an FPGA device. We implement the disparity unit and the control module for vergence, version, and tilt to determine the fixation point. In addition, two different on-chip alternatives for the vector disparity engine are discussed, based on the luminance (gradient-based) and phase information of the binocular images. The multiscale versions of these engines are able to estimate the vector disparity at up to 32 fps on VGA-resolution images with very good accuracy, as shown using benchmark sequences with known ground truth. The performance of the presented approaches in terms of frame rate, resource utilization, and accuracy is discussed. On the basis of these results, our study indicates that the gradient-based approach offers the best trade-off for integration with the active vision system.
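The gradient-based engine follows the classical luminance-gradient (Lucas-Kanade style) formulation of 2-D disparity estimation. The sketch below is a plain software illustration of that principle on a synthetic patch pair, not the FPGA implementation; the test pattern and patch size are arbitrary assumptions.

```python
import numpy as np

def vector_disparity_lk(left, right):
    """Gradient-based (Lucas-Kanade style) estimate of a single 2-D disparity
    vector for an image patch pair."""
    left = left.astype(float)
    right = right.astype(float)
    # spatial gradients of the left patch and the left/right difference
    Iy, Ix = np.gradient(left)
    It = right - left
    A = np.stack([Ix.ravel(), Iy.ravel()], axis=1)   # (N, 2)
    b = -It.ravel()
    # least-squares solution of A @ d = b gives the (dx, dy) disparity
    d, *_ = np.linalg.lstsq(A, b, rcond=None)
    return d

# toy usage: the right patch is the left patch shifted by one pixel along x,
# so the estimate should come out close to (1, 0)
y, x = np.mgrid[0:40, 0:40].astype(float)
base = np.sin(x / 3.0) + np.cos(y / 4.0)
left_patch = base[5:35, 5:35]
right_patch = np.roll(base, shift=1, axis=1)[5:35, 5:35]
print("estimated disparity (dx, dy):", vector_disparity_lk(left_patch, right_patch))
```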


Robotics and Autonomous Systems | 2015

Autonomous learning of disparity-vergence behavior through distributed coding and population reward

Agostino Gibaldi; Andrea Canessa; Fabio Solari; Silvio P. Sabatini

A robotic system implementation that exhibits autonomous learning of effective control for vergence eye movements is presented. The system, directly relying on a distributed (i.e. neural) representation of binocular disparity, shows a large tolerance to the inaccuracies of real stereo heads and to a changing environment. The proposed approach combines early binocular vision mechanisms with basic learning processes, such as synaptic plasticity and reward modulation. The computational substrate consists of a network of modeled V1 complex cells that act as oriented binocular disparity detectors. The resulting population response, besides providing implicit binocular depth cues about the environment, also provides a global signal (i.e. the overall activity of the population itself) that describes the state of the system and thus its deviation from the desired vergence position. The proposed network, by taking into account the modification of its internal state as a consequence of the action performed, evolves following a differential Hebbian rule. The overall activity of the population is exploited to derive an intrinsic signal that drives the weight update. Exploiting this signal implies a maximization of the population activity itself, thus providing a highly effective reward for the development of a stable and accurate vergence behavior. The role of the different orientations in the learning process is evaluated separately against the whole population, showing that the interplay among the differently oriented channels allows faster learning and more accurate control. The efficacy of the proposed intrinsic reward signal is comparatively assessed against the ground-truth signal (the actual disparity), providing equivalent results and thus validating the approach. Trained in a simulated environment, the proposed network is able to cope with vergent geometry and thus to learn effective vergence movements for static and moving visual targets. Experimental tests with real robot stereo pairs demonstrate the capability of the architecture not just to learn directly from the environment, but also to adapt the control to the stimulus characteristics.

Highlights: We implemented a cortical model for vergence control based on a population of disparity detectors. The model is able to autonomously learn its behavior by means of an internal parameter. The speed of convergence and the precision of the control were evaluated on different disparity ranges and learning signals. The informative content of the different orientation channels was assessed. The learning capabilities on real robot stereo pairs demonstrate an adaptation to the stimulus characteristics.
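A reward-modulated differential Hebbian update of the kind described above can be sketched as follows. This is a hedged illustration of the idea, with the intrinsic reward taken as the change in overall population activity; the exact update rule, learning rate, and toy numbers are assumptions, not the authors' code.

```python
import numpy as np

def differential_hebbian_step(w, pop_now, pop_prev, action, lr=1e-3):
    """One reward-modulated differential Hebbian update (sketch only).

    w        : (C,) read-out weights mapping population activity to a vergence command
    pop_now  : (C,) population response after the last vergence action
    pop_prev : (C,) population response before it
    action   : the vergence command that was executed
    """
    # intrinsic reward: change of the overall population activity; moving towards
    # the correct vergence increases the summed response of the disparity detectors
    reward = pop_now.sum() - pop_prev.sum()
    # differential Hebbian term: correlate the change in each cell's activity
    # with the action that produced it, gated by the intrinsic reward
    dw = lr * reward * (pop_now - pop_prev) * action
    return w + dw

# toy usage with random population states (hypothetical numbers)
rng = np.random.default_rng(3)
w = np.zeros(16)
pop_prev = rng.random(16)
pop_now = pop_prev + 0.05          # overall activity increased after the action
w = differential_hebbian_step(w, pop_now, pop_prev, action=0.1)
print(w[:4])
```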


Procedia Computer Science | 2012

How a population-based representation of binocular visual signal can intrinsically mediate autonomous learning of vergence control

Agostino Gibaldi; Andrea Canessa; Manuela Chessa; Fabio Solari; Silvio P. Sabatini

Designing an active visual system that is able to autonomously learn its behavior requires making the learning controller independent of an external signal (e.g. the error between the actual and the desired vergence angle) and of perceptual decisions about disparity (e.g. from the response of a previously trained network). The proposed approach is based on a direct use of a computational substrate of modeled V1 complex cells that provide a distributed representation of binocular disparity information. The design strategies of the cortical-like architecture, including uniform coverage of the feature space and divisive normalization mechanisms, allow the global energy of the population to effectively mediate the learning process towards the proper motor control. Since the learning controller is based on an intrinsic representation of the visual signal, it overlaps and coincides with the system that is learning the behavior, thus closing, within an inner cycle, the perception-action loop necessary for learning. Experimental tests showed that the control architecture is able both to learn an effective vergence behavior and to exploit it to fixate static and moving visual targets.
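To see why the global population energy can mediate learning, the sketch below models a population with uniform coverage of the disparity axis (Gaussian tuning curves are a stand-in assumption for the V1 complex-cell model) and shows that its summed activity is largest when the residual disparity is zero, so maximizing it drives the system towards the correct vergence posture.

```python
import numpy as np

def population_energy(stimulus_disparity,
                      preferred=np.linspace(-2.0, 2.0, 11), sigma=0.8):
    """Summed ("global") energy of a population of disparity detectors with
    uniform coverage of the disparity axis, modelled here as Gaussian tuning
    curves (an illustrative assumption)."""
    tuning = np.exp(-0.5 * ((stimulus_disparity - preferred) / sigma) ** 2)
    return tuning.sum()

# the global energy peaks at zero residual disparity and falls off outside
# the covered range, so it can serve as an intrinsic learning signal
for d in (-3.0, -1.0, 0.0, 1.0, 3.0):
    print(f"disparity {d:+.1f} deg -> global energy {population_energy(d):.3f}")
```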


International Symposium on Neural Networks | 2013

Population coding for a reward-modulated Hebbian learning of vergence control

Agostino Gibaldi; Andrea Canessa; Manuela Chessa; Fabio Solari; Silvio P. Sabatini

We show how a cortical model of early disparity detectors is able to autonomously learn effective control signals to drive the vergence eye movements of a binocular active vision system. The proposed approach employs early binocular mechanisms of vision and basic learning processes such as synaptic plasticity and reward modulation. The computational substrate consists of a population of modeled V1 complex cells that provides a distributed representation of binocular disparity information. The population response also provides a global signal that describes the state of the system and thus its deviation from the desired vergence position. The proposed network, by taking into account the modification of its internal state as a consequence of the action performed, evolves following a differential Hebbian rule. Furthermore, the weight update is driven by an intrinsic signal derived from the overall activity of the population. Exploiting this signal implies a maximization of the population activity itself, thus providing a highly effective reward for the development of a stable and accurate vergence behavior. The efficacy of the proposed intrinsic reward signal is comparatively assessed against the ground-truth signal (the actual disparity), providing equivalent results and thus validating the approach. Experimental tests in a simulated environment demonstrate that the proposed network is able to cope with vergent geometry and thus to learn effective vergence movements for static and moving visual targets in realistic situations.


Scientific Reports | 2017

The Active Side of Stereopsis: Fixation Strategy and Adaptation to Natural Environments

Agostino Gibaldi; Andrea Canessa; Silvio P. Sabatini

Depth perception in near viewing strongly relies on the interpretation of binocular retinal disparity to obtain stereopsis. Statistical regularities of retinal disparities have been claimed to greatly impact the neural mechanisms that underlie binocular vision, both to facilitate perceptual decisions and to reduce computational load. In this paper, we designed a novel and unconventional approach to assess the role of fixation strategy in conditioning the statistics of retinal disparity. We integrated accurate, realistic three-dimensional models of natural scenes with binocular eye movement recordings to obtain accurate ground-truth statistics of the retinal disparity experienced by a subject in near viewing. Our results show how the organization of the human binocular visual system is finely adapted to the disparity statistics characterizing actual fixations, thus revealing a novel role of the active fixation strategy in binocular visual function. This suggests an ecological explanation for the intrinsic preference of stereopsis for a close central object surrounded by a far background, as an early binocular aspect of the figure-ground segregation process.
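The ground-truth retinal disparity of a scene point, given the current fixation, follows from standard small-angle binocular geometry. The sketch below illustrates how such disparity statistics can be tabulated for a set of scene depths; the baseline, depth distribution, and sign convention are assumptions for illustration, not the paper's dataset.

```python
import numpy as np

def retinal_disparity_deg(point_depth_m, fixation_depth_m, baseline_m=0.065):
    """Approximate horizontal retinal disparity (degrees) of a point at
    `point_depth_m` while fixating at `fixation_depth_m`, using small-angle
    binocular geometry; points farther than fixation come out positive here."""
    disparity_rad = baseline_m * (1.0 / fixation_depth_m - 1.0 / point_depth_m)
    return np.degrees(disparity_rad)

# tabulate disparity statistics for hypothetical scene depths around a 0.5 m fixation
rng = np.random.default_rng(4)
depths = rng.uniform(0.4, 3.0, size=10_000)          # hypothetical scene depths (m)
disparities = retinal_disparity_deg(depths, fixation_depth_m=0.5)
print("median disparity (deg):", float(np.median(disparities)))
print("fraction within ±1 deg:", float(np.mean(np.abs(disparities) < 1.0)))
```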

Collaboration


Dive into Agostino Gibaldi's collaborations.

Top Co-Authors

Nikolay Chumerin
Katholieke Universiteit Leuven

Peter J. Bex
Northeastern University