Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Manuela Chessa is active.

Publications


Featured research published by Manuela Chessa.


Neurocomputing | 2010

A cortical model for binocular vergence control without explicit calculation of disparity

Agostino Gibaldi; Manuela Chessa; Andrea Canessa; Silvio P. Sabatini; Fabio Solari

A computational model for the control of horizontal vergence, based on a population of disparity-tuned complex cells, is presented. Since the population can extract the disparity map only within a limited range, using that map to drive vergence would confine the control to the same range. Instead, the model extracts the disparity-vergence response directly, by combining the outputs of the disparity detectors without explicitly computing the disparity map. The resulting vergence control yields stable fixation and a short response time over a wide range of disparities. Experimental simulations with synthetic stimuli in depth validate the approach.
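
As a rough illustration of the idea, and not the paper's actual model, the sketch below builds a small population of disparity-tuned binocular energy units from phase-shifted 1-D Gabor filters and reads out a vergence signal as a hand-set linear combination of the normalized population response, with no intermediate disparity map. All names and parameters (filter size, frequency, read-out weights) are illustrative assumptions.

    import numpy as np

    def gabor(x, sigma=8.0, freq=0.1, phase=0.0):
        # 1-D Gabor filter: Gaussian envelope times a sinusoidal carrier.
        return np.exp(-x**2 / (2 * sigma**2)) * np.cos(2 * np.pi * freq * x + phase)

    def complex_cell(left, right, dphase, sigma=8.0, freq=0.1):
        # Binocular energy unit tuned, via the interocular phase shift dphase,
        # to a preferred disparity of about dphase / (2*pi*freq) pixels.
        x = np.arange(-24, 25)
        sL = [np.dot(gabor(x, sigma, freq, p), left) for p in (0.0, np.pi / 2)]
        sR = [np.dot(gabor(x, sigma, freq, p + dphase), right) for p in (0.0, np.pi / 2)]
        return (sL[0] + sR[0]) ** 2 + (sL[1] + sR[1]) ** 2

    def vergence_signal(left, right, freq=0.1):
        # Linear read-out over the normalized population response: the weights
        # are antisymmetric in preferred disparity, so the output's sign and
        # magnitude directly encode the corrective vergence movement.
        dphases = np.linspace(-np.pi, np.pi, 17)
        resp = np.array([complex_cell(left, right, dp, freq=freq) for dp in dphases])
        weights = dphases / (2 * np.pi * freq)   # preferred disparities (px)
        return np.dot(weights, resp) / (resp.sum() + 1e-9)

    # Toy test: two 1-D "retinal" patches of the same scene, 5 px of disparity.
    rng = np.random.default_rng(0)
    scene = rng.standard_normal(100)
    window = np.arange(-24, 25)
    left = scene[50 + window]
    right = scene[45 + window]
    print(vergence_signal(left, right))   # non-zero output drives vergence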


International Conference on Computer Vision Systems | 2009

A Fast Joint Bioinspired Algorithm for Optic Flow and Two-Dimensional Disparity Estimation

Manuela Chessa; Silvio P. Sabatini; Fabio Solari

The faithful detection of the motion and distance of objects in the visual scene is a desirable feature of any artificial vision system designed to operate in unknown environments whose conditions vary over time in an often unpredictable way. Here, we propose a distributed neuromorphic architecture that, by sharing computational resources between the stereo and motion problems, produces fast and reliable estimates of optic flow and 2D disparity. This joint design approach allows us to obtain high performance at an affordable computational cost. The approach is validated against state-of-the-art algorithms and in real-world situations.
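
A minimal sketch of the shared-front-end idea, a simplification of ours rather than the paper's architecture: the same complex-valued Gabor filtering serves both cues, with disparity read from the left/right phase difference and velocity from the frame-to-frame phase difference. The 1-D setting and all parameters are assumptions made for brevity.

    import numpy as np

    def gabor_filter(signal, sigma=8.0, freq=0.1):
        # Complex-valued 1-D Gabor filtering: the shared front-end for both cues.
        x = np.arange(-24, 25)
        kernel = np.exp(-x**2 / (2 * sigma**2)) * np.exp(1j * 2 * np.pi * freq * x)
        return np.convolve(signal, kernel, mode='same')

    def phase_shift(a, b, freq=0.1):
        # Shift (in pixels) between two filtered signals, from their local
        # phase difference, wrapped to (-pi, pi] and divided by 2*pi*freq.
        return np.angle(a * np.conj(b)) / (2 * np.pi * freq)

    rng = np.random.default_rng(1)
    scene = rng.standard_normal(400)

    # Stereo pair: the right view is the left view shifted by 3 px of disparity.
    left, right = scene[10:300], scene[7:297]
    disparity = phase_shift(gabor_filter(left), gabor_filter(right))

    # Frame pair: the same pattern translating at 2 px/frame.
    frame0, frame1 = scene[10:300], scene[8:298]
    velocity = phase_shift(gabor_filter(frame0), gabor_filter(frame1))

    print(np.median(disparity), np.median(velocity))   # close to 3 and 2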


IEEE Transactions on Autonomous Mental Development | 2014

A hierarchical system for a distributed representation of the peripersonal space of a humanoid robot

Marco Antonelli; Agostino Gibaldi; Frederik Beuth; Angel Juan Duran; Andrea Canessa; Manuela Chessa; Fabio Solari; Angel P. Del Pobil; Fred H. Hamker; Eris Chinellato; Silvio P. Sabatini

Reaching a target object in an unknown and unstructured environment is easily performed by human beings. However, designing a humanoid robot that executes the same task requires the implementation of complex abilities, such as identifying the target in the visual field, estimating its spatial location, and precisely driving the motors of the arm to reach it. While research usually tackles the development of such abilities individually, in this work we integrate a number of computational models into a unified framework and demonstrate, in a humanoid torso, the feasibility of an integrated working representation of its peripersonal space. To achieve this goal, we propose a cognitive architecture that connects several models inspired by neural circuits of the visual, frontal and posterior parietal cortices of the brain. The outcome of the integration process is a system that allows the robot to create its internal model and its representation of the surrounding space by interacting with the environment directly, through a mutual adaptation of perception and action. The robot is eventually capable of executing a set of tasks, such as recognizing, gazing at, and reaching target objects, which can operate separately or cooperate to support more structured and effective behaviors.


Journal of Vision | 2014

Simulated disparity and peripheral blur interact during binocular fusion.

Guido Maiello; Manuela Chessa; Fabio Solari; Peter J. Bex

We have developed a low-cost, practical gaze-contingent display in which natural images are presented to the observer with dioptric blur and stereoscopic disparity that depend on the three-dimensional structure of natural scenes. Our system simulates a distribution of retinal blur and depth similar to that experienced in real-world viewing conditions by emmetropic observers. We implemented the system using light-field photographs taken with a plenoptic camera, which supports digital refocusing anywhere in the images. We coupled this capability with an eye-tracking system and stereoscopic rendering. With this display, we examine how the time course of binocular fusion depends on depth cues from blur and stereoscopic disparity in naturalistic images. Our results show that disparity and peripheral blur interact to modify eye-movement behavior and facilitate binocular fusion, with the greatest benefit gained by observers who struggled most to achieve fusion. Even though plenoptic images do not replicate an individual’s aberrations, the results demonstrate that a naturalistic distribution of depth-dependent blur may improve 3-D virtual reality, and that interruptions of this pattern (e.g., with intraocular lenses) which flatten the distribution of retinal blur may adversely affect binocular fusion.
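
A minimal sketch of the gaze-contingent rendering step, under assumptions of ours: a focal stack of refocused images (as produced by digital refocusing of a light-field photograph), a per-pixel depth map, and the current fixation are given; the display then shows the slice focused at the fixated depth, so content at other depths carries the corresponding defocus blur. Names and parameter values are illustrative.

    import numpy as np

    def gaze_contingent_frame(focal_stack, focal_depths, depth_map, gaze_xy):
        # Pick the focal-stack slice whose focal plane matches the depth under
        # the current gaze position, simulating accommodation at fixation.
        #   focal_stack  : (n, H, W) images refocused at n depths
        #   focal_depths : (n,) depth of each slice's focal plane
        #   depth_map    : (H, W) scene depth per pixel
        #   gaze_xy      : (x, y) fixation in pixel coordinates
        x, y = gaze_xy
        fixated_depth = depth_map[y, x]
        slice_idx = np.argmin(np.abs(focal_depths - fixated_depth))
        return focal_stack[slice_idx]

    # Toy usage with random data standing in for plenoptic refocused images.
    rng = np.random.default_rng(2)
    stack = rng.random((12, 240, 320))       # e.g. 12 focal planes (assumed)
    depths = np.linspace(0.3, 4.0, 12)       # focal distances in metres (assumed)
    depth_map = rng.uniform(0.3, 4.0, (240, 320))
    frame = gaze_contingent_frame(stack, depths, depth_map, gaze_xy=(160, 120))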


Signal Processing: Image Communication | 2015

What can we expect from a V1-MT feedforward architecture for optical flow estimation?

Fabio Solari; Manuela Chessa; N. V. Kartheek Medathati; Pierre Kornprobst

Motion estimation has been studied extensively in neuroscience over the last two decades. Even though there has been some early interaction between the biological and computer vision communities at the modelling level, comparatively little work has been done on examining or extending the biological models in terms of their engineering efficacy on modern optical flow estimation datasets. An essential contribution of this paper is to show how a neural model can be enriched to deal with real sequences. We start from a classical V1-MT feedforward architecture: we model V1 cells by motion energy (based on spatio-temporal filtering) and MT pattern cells by pooling V1 cell responses. The efficacy of this architecture, and its inherent limitations on real videos, were not previously known. To answer this question, we propose a velocity-space sampling of MT neurons (using a decoding scheme to obtain the local velocity from their activity) coupled with a multi-scale approach, and we evaluate the model on the Middlebury dataset. To the best of our knowledge, this is the first neural model benchmarked on this dataset. The results are promising and suggest several possible improvements, in particular to better handle discontinuities. Overall, this work provides a baseline for future developments of bio-inspired scalable computer vision algorithms, and the code is publicly available to encourage research in this direction.
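
The sketch below is a toy 1-D instance of such a V1-MT feedforward chain, not the paper's model: V1 motion energy from quadrature spatio-temporal Gabor filters, an MT-like sampling of velocity space, and a simple weighted-average decoding of the local velocity (the paper couples the decoding with a multi-scale approach). Filter sizes and the velocity range are assumptions.

    import numpy as np

    def motion_energy(video, fx, ft, sigma_x=4.0, sigma_t=2.0):
        # V1-like motion energy tuned to velocity v = -ft/fx, from a quadrature
        # pair of spatio-temporal Gabor filters (video: T x X array).
        t = np.arange(-6, 7)[:, None]
        x = np.arange(-12, 13)[None, :]
        env = np.exp(-x**2 / (2 * sigma_x**2) - t**2 / (2 * sigma_t**2))
        carrier = 2 * np.pi * (fx * x + ft * t)
        even = (env * np.cos(carrier)).ravel()
        odd = (env * np.sin(carrier)).ravel()
        # Response at the central space-time point only, for brevity.
        T, X = video.shape
        patch = video[T//2 - 6:T//2 + 7, X//2 - 12:X//2 + 13].ravel()
        return np.dot(even, patch)**2 + np.dot(odd, patch)**2

    def decode_velocity(video, fx=0.125, velocities=np.linspace(-3, 3, 25)):
        # MT-like stage: sample velocity space and decode by a weighted average
        # of the normalized energies, a simple population read-out.
        energies = np.array([motion_energy(video, fx, -v * fx) for v in velocities])
        energies /= energies.sum() + 1e-12
        return np.dot(velocities, energies)

    # Toy stimulus: a 1-D grating drifting at 1.5 px/frame.
    t = np.arange(16)[:, None]
    x = np.arange(64)[None, :]
    video = np.cos(2 * np.pi * 0.125 * (x - 1.5 * t))
    print(decode_velocity(video))   # close to 1.5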


IEEE-RAS International Conference on Humanoid Robots | 2011

A neuromorphic control module for real-time vergence eye movements on the iCub robot head

Agostino Gibaldi; Andrea Canessa; Manuela Chessa; Silvio P. Sabatini; Fabio Solari

We implemented a cortical model of vergence eye movements on a humanoid robot head (iCub). The proposed control strategy relies on a computational substrate of modeled V1 complex cells that provides a distributed representation of binocular disparity information. The model includes a normalization stage that makes the vergence control independent of object texture and of luminance changes. The disparity information is exploited to generate a signal that nullifies the binocular disparity in a foveal region.
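
A minimal sketch of a divisive normalization stage of the kind the abstract describes, a simplification of ours: each disparity-tuned energy is divided by the pooled population energy, so a multiplicative change in stimulus contrast, such as one caused by a texture or luminance change, leaves the population profile, and hence the vergence signal, unchanged.

    import numpy as np

    def normalize_population(energies, eps=1e-6):
        # Divisive normalization across the disparity-tuned population: each
        # cell's energy is divided by the pooled energy, so the response
        # profile depends on disparity but not on overall stimulus strength.
        energies = np.asarray(energies, dtype=float)
        return energies / (energies.sum() + eps)

    # The same disparity pattern at two contrasts gives the same profile.
    profile = np.array([0.2, 1.0, 3.0, 1.0, 0.2])   # raw population energies
    print(normalize_population(profile))
    print(normalize_population(10.0 * profile))     # identical after normalization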


PLOS ONE | 2015

The (In)Effectiveness of Simulated Blur for Depth Perception in Naturalistic Images

Guido Maiello; Manuela Chessa; Fabio Solari; Peter J. Bex

We examine depth perception in images of real scenes with naturalistic variation in pictorial depth cues, simulated dioptric blur and binocular disparity. Light field photographs of natural scenes were taken with a Lytro plenoptic camera that simultaneously captures images at up to 12 focal planes. When accommodation at any given plane was simulated, the corresponding defocus blur at other depth planes was extracted from the stack of focal plane images. Depth information from pictorial cues, relative blur and stereoscopic disparity was separately introduced into the images. In 2AFC tasks, observers were required to indicate which of two patches extracted from these images was farther. Depth discrimination sensitivity was highest when geometric and stereoscopic disparity cues were both present. Blur cues impaired sensitivity by reducing the contrast of geometric information at high spatial frequencies. While simulated generic blur may not assist depth perception, it remains possible that dioptric blur from the optics of an observer’s own eyes may be used to recover depth information on an individual basis. The implications of our findings for virtual reality rendering technology are discussed.


Displays | 2013

Natural perception in dynamic stereoscopic augmented reality environments

Fabio Solari; Manuela Chessa; Matteo Garibotti; Silvio P. Sabatini

Notwithstanding the recent diffusion of stereoscopic 3D technologies for the development of powerful human-computer interaction systems based on augmented reality environments, with conventional approaches an observer moving freely in front of a 3D display can experience a misperception of the depth and shape of virtual objects. Such distortions can cause eye fatigue and stress in entertainment applications, and they can have serious consequences in scientific and medical fields, where a veridical perception of the scene layout is required. We propose a novel technique for building augmented reality systems capable of correctly rendering 3D virtual objects to an observer who changes his/her position in the real world and acts in the virtual scenario. By tracking the positions of the observer’s eyes, the proposed technique generates the correct virtual viewpoints through asymmetric frustums, thus obtaining the correct left and right projections on the screen. The natural perception of the scene layout is assessed through three experimental sessions with several observers.
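
The asymmetric-frustum construction is the standard off-axis perspective projection; the sketch below, with assumed screen size, head position, and interocular distance, computes glFrustum-style parameters and the projection matrix for each eye of a tracked observer relative to a fixed physical screen.

    import numpy as np

    def off_axis_frustum(eye, screen_w, screen_h, near, far):
        # Asymmetric viewing frustum for an eye at position (x, y, z), in
        # metres, relative to the centre of a screen of size screen_w x
        # screen_h lying in the z = 0 plane. Returns OpenGL-style glFrustum
        # parameters plus the 4x4 projection matrix.
        ex, ey, ez = eye          # ez > 0: eye in front of the screen
        # Screen edges relative to the eye, scaled onto the near plane.
        left = (-screen_w / 2 - ex) * near / ez
        right = (screen_w / 2 - ex) * near / ez
        bottom = (-screen_h / 2 - ey) * near / ez
        top = (screen_h / 2 - ey) * near / ez
        proj = np.array([
            [2 * near / (right - left), 0, (right + left) / (right - left), 0],
            [0, 2 * near / (top - bottom), (top + bottom) / (top - bottom), 0],
            [0, 0, -(far + near) / (far - near), -2 * far * near / (far - near)],
            [0, 0, -1, 0]])
        return (left, right, bottom, top), proj

    # One frustum per eye: offset the tracked head position by half the
    # interocular distance to obtain correct left and right projections.
    head = np.array([0.05, -0.02, 0.60])   # tracked position (assumed, metres)
    iod = 0.063                            # interocular distance (assumed)
    for eye in (head - [iod / 2, 0, 0], head + [iod / 2, 0, 0]):
        params, P = off_axis_frustum(eye, screen_w=0.52, screen_h=0.32,
                                     near=0.1, far=10.0)
        print(np.round(params, 4))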


Pattern Recognition Letters | 2012

Design strategies for direct multi-scale and multi-orientation feature extraction in the log-polar domain

Fabio Solari; Manuela Chessa; Silvio P. Sabatini

Despite the well-known advantages that a space-variant representation of the visual signal offers, the required adaptation of algorithms developed in the Cartesian domain before applying them in the log-polar space has limited the widespread use of such a representation in visual processing applications. In this paper, we present a set of original rules for designing a discrete log-polar mapping that allows algorithms based on spatial multi-scale and multi-orientation filtering, originally developed for the Cartesian domain, to be applied directly in the log-polar domain. The advantage of the approach is to gain effective space variance and data reduction without modifying the algorithms. These design strategies are based on a quantitative analysis of the relationships between the spatial filtering and the space-variant representation. We assess the devised rules by using a distributed approach based on a bank of band-pass filters to compute reliable disparity maps, providing quantitative measures of the computational load and of the accuracy of the computed visual features.
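
By way of illustration, and not the paper's specific design rules, the sketch below builds a discrete log-polar grid whose receptive-field size grows linearly with eccentricity and resamples a Cartesian image into a (rings x sectors) cortical map; the parameter names and the nearest-neighbour resampling are simplifying assumptions.

    import numpy as np

    def log_polar_grid(R, S, rho_min, rho_max):
        # Discrete log-polar sampling grid: R rings (log-spaced radii between
        # rho_min and rho_max) x S angular sectors. Returns the Cartesian
        # sample positions and the ring spacing, which grows linearly with
        # eccentricity (the source of the data reduction).
        a = (rho_max / rho_min) ** (1.0 / R)      # ring growth factor
        radii = rho_min * a ** np.arange(R)
        thetas = 2 * np.pi * np.arange(S) / S
        xs = radii[:, None] * np.cos(thetas)[None, :]
        ys = radii[:, None] * np.sin(thetas)[None, :]
        rf_size = radii * (a - 1.0)               # spacing between rings
        return xs, ys, rf_size

    def to_log_polar(image, R=64, S=128, rho_min=2.0):
        # Nearest-neighbour log-polar resampling of a square image centred on
        # the fovea; a full implementation would average over each receptive
        # field rather than take a single sample.
        H, W = image.shape
        xs, ys, _ = log_polar_grid(R, S, rho_min, min(H, W) / 2 - 1)
        cols = np.clip(np.round(xs + W // 2).astype(int), 0, W - 1)
        rows = np.clip(np.round(ys + H // 2).astype(int), 0, H - 1)
        return image[rows, cols]                  # (R, S) cortical-map image

    rng = np.random.default_rng(3)
    cortical = to_log_polar(rng.random((256, 256)))
    print(cortical.shape)   # (64, 128): 8x fewer samples than the 256x256 input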


Network: Computation in Neural Systems | 2012

Real-time simulation of large-scale neural architectures for visual features computation based on GPU

Manuela Chessa; Valentina Bianchi; Massimo Zampetti; Silvio P. Sabatini; Fabio Solari

The intrinsic parallelism of visual neural architectures based on distributed hierarchical layers is well suited to implementation on the multi-core architectures of modern graphics cards. We propose design strategies that optimally exploit this parallelism in order to efficiently map the hierarchy of layers and the canonical neural computations onto the GPU. Specifically, the advantages of a cortical map-like representation of the data are exploited. Moreover, a GPU implementation of a novel neural architecture for the computation of binocular disparity from stereo image pairs, based on populations of binocular energy neurons, is presented. The implemented neural model achieves good performance in terms of reliability of the disparity estimates and near real-time execution speed, thus demonstrating the effectiveness of the devised design strategies. The proposed approach is valid in general, since the neural building blocks we implemented are a common basis for the modeling of visual neural functionalities.
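
As a toy illustration of the data-layout idea (ours, not the paper's GPU code): when a whole layer of neural maps is expressed as one dense batched operation, the computation maps naturally onto many-core hardware. The NumPy sketch below evaluates a bank of receptive fields at every pixel with a single matrix product; the same layout is what a GPU version would parallelize.

    import numpy as np

    def population_layer(image, filters):
        # One hierarchical layer as a dense "cortical map": responses of every
        # neuron (one per filter) at every pixel come from a single batched
        # operation, the kind of computation that maps well onto a GPU.
        #   image   : (H, W) input
        #   filters : (N, k, k) bank of N receptive fields
        N, k, _ = filters.shape
        H, W = image.shape
        # im2col-style unfolding: every k x k patch becomes one matrix row.
        patches = np.lib.stride_tricks.sliding_window_view(image, (k, k))
        patches = patches.reshape(-1, k * k)             # ((H-k+1)*(W-k+1), k*k)
        responses = patches @ filters.reshape(N, -1).T   # one big matrix product
        return responses.reshape(H - k + 1, W - k + 1, N)

    rng = np.random.default_rng(4)
    maps = population_layer(rng.random((128, 128)), rng.standard_normal((16, 9, 9)))
    print(maps.shape)   # (120, 120, 16): 16 neural maps computed in one pass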

Collaboration


Dive into Manuela Chessa's collaborations.

Top Co-Authors

Peter J. Bex

Northeastern University
