Network


Latest external collaborations at the country level.

Hotspot


Research topics in which Luigi F. Cuturi is active.

Publication


Featured research published by Luigi F. Cuturi.


PLOS ONE | 2011

Reducing crowding by weakening inhibitory lateral interactions in the periphery with perceptual learning.

Marcello Maniglia; Andrea Pavan; Luigi F. Cuturi; Gianluca Campana; Giovanni Sato; Clara Casco

We investigated whether lateral masking in the near-periphery, due to inhibitory lateral interactions at an early level of central visual processing, could be weakened by perceptual learning and whether learning transferred to an untrained, higher-level lateral masking known as crowding. The trained task was contrast detection of a Gabor target presented in the near periphery (4°) in the presence of co-oriented and co-aligned high contrast Gabor flankers, which featured different target-to-flankers separations along the vertical axis that varied from 2λ to 8λ. We found both suppressive and facilitatory lateral interactions at target-to-flankers distances (2λ-4λ and 8λ, respectively) that were larger than those found in the fovea. Training reduced suppression but did not increase facilitation. Most importantly, we found that learning reduces crowding and improves contrast sensitivity, but has no effect on visual acuity (VA). These results suggest a different pattern of connectivity in the periphery with respect to the fovea as well as a different modulation of this connectivity via perceptual learning that not only reduces low-level lateral masking but also reduces crowding. These results have important implications for the rehabilitation of low-vision patients who must use peripheral vision to perform tasks, such as reading and refined figure-ground segmentation, which normally sighted subjects perform in the fovea.
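For context, the target and flankers described above are Gabor patches: sinusoidal luminance gratings windowed by a Gaussian envelope, with the target-to-flanker separation expressed in multiples of the carrier wavelength λ. The sketch below shows one common way to generate such a patch; it is not the authors' stimulus code, and every parameter value in it is an assumption for illustration only.

```python
import numpy as np

def gabor_patch(size_px, wavelength_px, sigma_px, orientation_deg=0.0,
                phase=0.0, contrast=1.0):
    """Sinusoidal grating windowed by a Gaussian envelope (a Gabor patch).

    orientation_deg = 0 modulates luminance along the horizontal axis,
    i.e. a vertically oriented grating, so flankers placed above and below
    the target along the vertical axis are collinear with it.
    """
    coords = np.arange(size_px) - size_px // 2
    x, y = np.meshgrid(coords, coords)
    theta = np.deg2rad(orientation_deg)
    xp = x * np.cos(theta) + y * np.sin(theta)   # axis of luminance modulation
    grating = np.cos(2.0 * np.pi * xp / wavelength_px + phase)
    envelope = np.exp(-(x**2 + y**2) / (2.0 * sigma_px**2))
    return contrast * grating * envelope          # values in [-contrast, contrast]

# Assumed, illustrative values (not the study's parameters):
wavelength = 20                                            # pixels per cycle (lambda)
target = gabor_patch(101, wavelength, sigma_px=wavelength / 2, contrast=0.05)
flanker = gabor_patch(101, wavelength, sigma_px=wavelength / 2, contrast=0.90)
separation_px = 4 * wavelength                             # e.g. a 4*lambda separation
```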


PLOS ONE | 2013

Systematic biases in human heading estimation.

Luigi F. Cuturi; Paul R. MacNeilage

Heading estimation is vital to everyday navigation and locomotion. Despite extensive behavioral and physiological research on both visual and vestibular heading estimation over more than two decades, the accuracy of heading estimation has not yet been systematically evaluated. Therefore, human visual and vestibular heading estimation was assessed in the horizontal plane using a motion platform and stereo visual display. Heading angle was overestimated during forward movements and underestimated during backward movements in response to both visual and vestibular stimuli, indicating an overall multimodal bias toward lateral directions. Lateral biases are consistent with the overrepresentation of lateral preferred directions observed in neural populations that carry visual and vestibular heading information, including MSTd and otolith afferent populations. Due to this overrepresentation, population vector decoding yields patterns of bias remarkably similar to those observed behaviorally. Lateral biases are inconsistent with standard Bayesian accounts, which predict that estimates should be biased toward the most common straight-forward heading direction. Nevertheless, lateral biases may be functionally relevant. They effectively constitute a perceptual scale expansion around straight ahead which could allow for more precise estimation and provide a high-gain feedback signal to facilitate maintenance of straight-forward heading during everyday navigation and locomotion.
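The population-vector argument in this abstract can be illustrated with a toy simulation. The sketch below uses a standard population-vector decoder over von Mises-tuned units; it is not the authors' model, and the tuning width and the degree of lateral overrepresentation are assumptions. When extra units prefer lateral (±90°) headings, the decoded direction for a forward heading is pulled toward the nearer lateral direction, i.e. the heading angle is overestimated, qualitatively matching the behavioral result.

```python
import numpy as np

def decode_heading(stim_deg, preferred_deg, kappa=2.0):
    """Population-vector estimate of heading from von Mises-tuned units.

    Each unit responds according to a tuning curve centred on its preferred
    direction; the decoded heading is the direction of the response-weighted
    vector sum of the preferred directions.
    """
    stim = np.deg2rad(stim_deg)
    pref = np.deg2rad(preferred_deg)
    responses = np.exp(kappa * np.cos(stim - pref))      # von Mises tuning
    x = np.sum(responses * np.cos(pref))
    y = np.sum(responses * np.sin(pref))
    return np.rad2deg(np.arctan2(y, x))

# Preferred directions overrepresented near lateral headings (+/-90 deg):
# a uniform sample plus extra units clustered around +/-90 degrees.
rng = np.random.default_rng(0)
uniform = rng.uniform(-180, 180, 200)
lateral = np.concatenate([rng.normal(90, 25, 150), rng.normal(-90, 25, 150)])
preferred = np.concatenate([uniform, lateral])

for true_heading in (10, 20, 40):    # forward headings, degrees from straight ahead
    est = decode_heading(true_heading, preferred)
    print(f"true {true_heading:5.1f} deg -> decoded {est:6.1f} deg")
# Decoded headings exceed the true values: a bias toward lateral directions.
```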


Vision Research | 2011

Implied motion from static photographs influences the perceived position of stationary objects.

Andrea Pavan; Luigi F. Cuturi; Marcello Maniglia; Clara Casco; Gianluca Campana

A growing body of evidence suggests that viewing a photograph depicting motion activates the same direction-selective neurons involved in the perception of real motion. It has been shown that prolonged exposure (adaptation) to photographs depicting directional motion can induce motion adaptation and, consequently, a motion aftereffect. The present study investigated whether adapting to photographs depicting humans, animals, and vehicles that move leftward or rightward also generates a positional aftereffect (the motion-induced position shift, MIPS), in which the perceived spatial position of a target pattern is shifted in the opposite direction to that of adaptation. Results showed that adapting to still photographs depicting objects that move in a particular direction shifts the perceived position of subsequently presented stationary objects opposite to the depicted adaptation direction and that this effect depends on the retinotopic location of the adapting stimulus. These results suggest that implied motion could activate the same direction-selective and speed-tuned mechanisms that produce positional aftereffects when viewing real motion.


Multisensory Research | 2016

Multisensory Integration in Self Motion Perception

Mark W. Greenlee; Sebastian M. Frank; Mariia Kaliuzhna; Olaf Blanke; Frank Bremmer; Jan Churan; Luigi F. Cuturi; Paul R. MacNeilage; Andrew T. Smith

Self-motion perception involves the integration of visual, vestibular, somatosensory and motor signals. This article reviews the findings from single-unit electrophysiology, functional and structural magnetic resonance imaging and psychophysics to present an update on how the human and non-human primate brain integrates multisensory information to estimate one's position and motion in space. The results indicate that there is a network of regions in the non-human primate and human brain that processes self-motion cues from the different sense modalities.


Current Biology | 2014

Optic Flow Induces Nonvisual Self-Motion Aftereffects

Luigi F. Cuturi; Paul R. MacNeilage

There is strong evidence of shared neurophysiological substrates for visual and vestibular processing that likely support our capacity for estimating our own movement through the environment. We examined behavioral consequences of these shared substrates in the form of crossmodal aftereffects. In particular, we examined whether sustained exposure to a visual self-motion stimulus (i.e., optic flow) induces a subsequent bias in nonvisual (i.e., vestibular) self-motion perception in the opposite direction in darkness. Although several previous studies have investigated self-motion aftereffects, none have demonstrated crossmodal transfer, which is the strongest proof that the adapted mechanisms are generalized for self-motion processing. The crossmodal aftereffect was quantified using a motion-nulling procedure in which observers were physically translated on a motion platform to find the movement required to cancel the visually induced aftereffect. Crossmodal transfer was elicited only with the longest-duration visual adaptor (15 s), suggesting that transfer requires sustained vection (i.e., visually induced self-motion perception). Visual-only aftereffects were also measured, but the magnitudes of visual-only and crossmodal aftereffects were not correlated, indicating distinct underlying mechanisms. We propose that crossmodal aftereffects can be understood as an example of contingent or contextual adaptation that arises in response to correlations across signals and functions to reduce these correlations in order to increase coding efficiency. According to this view, crossmodal aftereffects in general (e.g., visual-auditory or visual-tactile) can be explained as accidental manifestations of mechanisms that constantly function to calibrate sensory modalities with each other as well as with the environment.
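As a rough illustration of what a motion-nulling measurement involves, the sketch below runs a generic one-up/one-down adaptive staircase against a simulated observer and converges on the physical translation that cancels a visually induced aftereffect. This is not the procedure, step size, or units used in the paper; all values here are assumptions.

```python
import numpy as np

def simulate_nulling(aftereffect_cm=2.0, noise_cm=0.5, start_cm=0.0,
                     step_cm=1.0, n_trials=40, seed=0):
    """Toy one-up/one-down staircase for a motion-nulling measurement.

    On each trial the platform translates by `probe` cm; the simulated
    observer perceives probe + aftereffect + noise and reports its sign.
    The staircase steps the probe against the reported direction, so it
    settles near -aftereffect_cm: the physical motion that nulls the
    visually induced self-motion aftereffect.
    """
    rng = np.random.default_rng(seed)
    probe = start_cm
    history = []
    for _ in range(n_trials):
        percept = probe + aftereffect_cm + rng.normal(0.0, noise_cm)
        probe += -step_cm if percept > 0 else step_cm
        history.append(probe)
    return np.mean(history[-10:])    # average of the final trials as the estimate

null_point = simulate_nulling()
print(f"estimated nulling translation: {null_point:.2f} cm (true value -2.0 cm)")
```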


Journal of Vision | 2016

The effect of supine body position on human heading perception

Nadine Hummel; Luigi F. Cuturi; Paul R. MacNeilage; Virginia L. Flanagin

The use of virtual environments in functional imaging experiments is a promising method to investigate and understand the neural basis of human navigation and self-motion perception. However, the supine position in the fMRI scanner is unnatural for everyday motion. In particular, the head-horizontal self-motion plane is parallel rather than perpendicular to gravity. Earlier studies have shown that perception of heading from visual self-motion stimuli, such as optic flow, can be modified due to visuo-vestibular interactions. With this study, we aimed to identify the effects of the supine body position on visual heading estimation, which is a basic component of human navigation. Visual and vestibular heading judgments were measured separately in 11 healthy subjects in upright and supine body positions. We measured two planes of self-motion, the transverse and the coronal plane, and found that, although vestibular heading perception was strongly modified in a supine position, visual performance, in particular for the preferred head-horizontal (i.e., transverse) plane, did not change. This provides behavioral evidence in humans that direction estimation from self-motion consistent optic flow is not modified by supine body orientation, demonstrating that visual heading estimation is one component of human navigation that is not influenced by the supine body position required for functional brain imaging experiments.


Vision Research | 2011

The effect of spatial orientation on detecting motion trajectories in noise

Andrea Pavan; Clara Casco; George Mather; Rosilari M. Bellacosa; Luigi F. Cuturi; Gianluca Campana


PLOS ONE | 2013

Correction: Systematic Biases in Human Heading Estimation

Luigi F. Cuturi; Paul R. MacNeilage


Archive | 2014

Optic Flow Induces Nonvisual Self-Motion Aftereffects

Luigi F. Cuturi; Paul R. MacNeilage


F1000Research | 2012

Similar systematic biases in visual and vestibular heading perception

Luigi F. Cuturi; Paul R. MacNeilage

Collaboration


An overview of Luigi F. Cuturi's collaborations.

Top Co-Authors

Andrea Pavan
International School for Advanced Studies

Mariia Kaliuzhna
École Polytechnique Fédérale de Lausanne