
Publication


Featured research published by Carlo Fantoni.


Journal of Vision | 2003

Contour interpolation by vector-field combination.

Carlo Fantoni; Walter Gerbino

We model the visual interpolation of missing contours by extending contour fragments under a smoothness constraint. Interpolated trajectories result from an algorithm that computes the vector sum of two fields corresponding to different unification factors: the good continuation (GC) field and the minimal path (MP) field. As the distance from terminators increases, the GC field decreases and the MP field increases. Viewer-independent and viewer-dependent variables modulate GC-MP contrast (i.e., the relative strength of GC and MP maximum vector magnitudes). Viewer-independent variables include the local geometry as well as more global properties such as contour support ratio and shape regularity. Viewer-dependent variables include the retinal gap between contour endpoints and the retinal orientation of their stems. GC-MP contrast is the only free parameter of our field model. In the case of partially occluded angles, interpolated trajectories become flatter as GC-MP contrast decreases. Once GC-MP contrast is set to a specific value, derived from empirical measures on a given configuration, the model predicts all interpolation trajectories corresponding to different types of occlusion of the same angle. Model predictions fit psychophysical data on the effects of viewer-independent and viewer-dependent variables.
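The vector-sum scheme described above can be sketched in a few lines. This is an illustrative reconstruction from the abstract only, not the authors' published equations: the linear weighting functions and the way `gc_mp_contrast` enters are assumptions.

```python
import numpy as np

def field_vector(gc_dir, mp_dir, d, gc_mp_contrast):
    """Illustrative vector-field combination (not the authors' exact model).

    gc_dir, mp_dir : unit 2-D vectors giving the good-continuation (GC) and
                     minimal-path (MP) directions at a point.
    d              : normalized distance from the nearest terminator (0..1).
    gc_mp_contrast : relative strength of the GC field maximum vs. the MP
                     field maximum (the model's single free parameter).
    """
    w_gc = gc_mp_contrast * (1.0 - d)   # GC field decays with distance
    w_mp = d                            # MP field grows with distance
    v = w_gc * np.asarray(gc_dir) + w_mp * np.asarray(mp_dir)
    return v / np.linalg.norm(v)        # local interpolation direction
```

Consistent with the abstract, lowering `gc_mp_contrast` pulls every point toward the minimal-path direction, yielding flatter interpolated trajectories.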


Vision Research | 2005

Contour curvature polarity and surface interpolation.

Carlo Fantoni; Marco Bertamini; Walter Gerbino

Contour curvature polarity (i.e., concavity/convexity) is recognized as an important factor in shape perception. However, current interpolation models do not consider it among the factors that modulate the trajectory of amodally-completed contours. Two hypotheses generate opposite predictions about the effect of contour polarity on surface interpolation. Convexity advantage: if convexities are preferred over concavities, contours of convex portions should be more extrapolated than those of concave portions. Minimal area: if the area of amodally-completed surfaces tends to be minimized, contours of convex portions should be less extrapolated than contours of concave portions. We ran three experiments using two methods, simultaneous length comparison and probe localization, and different displays (pictures vs. random dot stereograms). Results indicate that contour polarity affects the amodally-completed angles of regular and irregular surfaces. As predicted by the minimal area hypothesis, image contours are less extrapolated when the amodal portion is convex rather than concave. The field model of interpolation [Fantoni, C., & Gerbino, W. (2003). Contour interpolation by vector-field combination. Journal of Vision, 3, 281-303. Available from http://journalofvision.org/3/4/4/] has been revised to take into account surface-level factors and to explain area minimization as an effect of surface support ratio.


The Journal of Neuroscience | 2013

Visuomotor adaptation changes stereoscopic depth perception and tactile discrimination.

Robert Volcic; Carlo Fantoni; Corrado Caudek; John A. Assad; Fulvio Domini

Perceptual judgments of relative depth from binocular disparity are systematically distorted in humans, despite in principle having access to reliable 3D information. Interestingly, these distortions vanish at a natural grasping distance, as if perceived stereo depth is contingent on a specific reference distance for depth-disparity scaling that corresponds to the length of our arm. Here we show that the brain's representation of the arm indeed powerfully modulates depth perception, and that this internal calibration can be quickly updated. We used a classic visuomotor adaptation task in which subjects execute reaching movements with the visual feedback of their reaching finger displaced farther in depth, as if they had a longer arm. After adaptation, 3D perception changed dramatically, and became accurate at the “new” natural grasping distance, the updated disparity-scaling reference distance. We further tested whether the rapid adaptive changes were restricted to the visual modality or were characteristic of sensory systems in general. Remarkably, we found an improvement in tactile discrimination consistent with a magnified internal image of the arm. This suggests that the brain integrates sensory signals with information about arm length, and quickly adapts to an artificially updated body structure. These adaptive processes are most likely a relic of the mechanisms needed to optimally correct for changes in size and shape of the body during ontogenesis.


Vision Research | 2006

Visual interpolation is not scale invariant.

Walter Gerbino; Carlo Fantoni

According to the scale-dependence hypothesis, the visual interpolation of contour fragments depends on the retinal separation of endpoints: as the retinal size of a partially occluded angle increases, the interpolated contour gradually deviates from the shortest connecting path and approaches the shape of the unoccluded angle. In the field model, as the retinal size increases the strength of good continuation increases while the strength of the minimal-path tendency decreases. To test the scale-dependence hypothesis, as well as other hypotheses connected to inclusion, support-ratio dependence, and extended relatability, we ran two experiments using the probe localization technique. Stimuli were regular polygons with rectilinear contours bounding symmetrically occluded angles. Retinal size was manipulated by changing viewing distance. Observers were asked to judge if a probe, briefly superposed on the occlusion region, was inside or outside the amodally completed angle. Retinal size strongly influenced the penetration of interpolated trajectories in the predicted direction. However, support ratio and interpolated angle size interacted with retinal size, consistent with the idea that unification factors are effective within a spatial window. We modified the field model to include the size of such a window as a new parameter and generated model-based trajectories that fitted empirical data closely.


Acta Psychologica | 2011

Integration of disparity and velocity information for haptic and perceptual judgments of object depth

Rachel Foster; Carlo Fantoni; Corrado Caudek; Fulvio Domini

Do reach-to-grasp (prehension) movements require a metric representation of three-dimensional (3D) layouts and objects? We propose a model relying only on direct sensory information to account for the planning and execution of prehension movements in the absence of haptic feedback and when the hand is not visible. In the present investigation, we isolate relative motion and binocular disparity information from other depth cues and study their efficacy for reach-to-grasp movements and visual judgments. We show that (i) the amplitude of the grasp increases when relative motion is added to binocular disparity information, even when depth from disparity is already veridical, and (ii) similar distortions of derived depth are found for haptic tasks and perceptual judgments. With a quantitative test, we demonstrate that our results are consistent with the Intrinsic Constraint model and do not require 3D metric inferences (Domini, Caudek, & Tassinari, 2006). By contrast, the linear cue integration model (Landy, Maloney, Johnston, & Young, 1995) cannot explain the present results, even if the flatness cues are taken into account.
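For contrast, the linear cue-integration model the abstract argues against combines cues by reliability weighting. Here is a minimal sketch of that standard scheme; the function name and the Gaussian-noise framing are mine, not from the paper.

```python
def mle_combined_depth(z_disparity, var_disparity, z_motion, var_motion):
    """Reliability-weighted linear cue combination (Landy et al., 1995 style).

    Each cue's depth estimate is weighted by its inverse variance, so the
    more reliable cue dominates the combined estimate.
    """
    w = (1.0 / var_disparity) / (1.0 / var_disparity + 1.0 / var_motion)
    return w * z_disparity + (1.0 - w) * z_motion
```

Under this scheme, adding a motion cue should pull the estimate toward a precision-weighted average; the abstract's finding that grasp amplitude instead *increases* when motion is added is what the linear model fails to capture.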


Journal of Vision | 2010

Systematic distortions of perceived planar surface motion in active vision

Carlo Fantoni; Corrado Caudek; Fulvio Domini

Recent studies suggest that the active observer combines optic flow information with extra-retinal signals resulting from head motion. Such a combination allows, in principle, a correct discrimination of the presence or absence of surface rotation. In Experiments 1 and 2, observers were asked to perform such a discrimination task while performing a lateral head shift. In Experiment 3, observers were shown the optic flow generated by their own movement with respect to a stationary planar slanted surface and were asked to classify perceived surface rotation as being small or large. We found that the perception of surface motion was systematically biased. We found that, in active, as well as in passive vision, perceived surface rotation was affected by the deformation component of the first-order optic flow, regardless of the actual surface rotation. We also found that the addition of a null disparity field increased the likelihood of perceiving surface rotation in active, but not in passive vision. Both results suggest that vestibular information, provided by active vision, is not sufficient for veridical 3D shape and motion recovery from the optic flow.


PLOS ONE | 2011

Bayesian Modeling of Perceived Surface Slant from Actively-Generated and Passively-Observed Optic Flow

Corrado Caudek; Carlo Fantoni; Fulvio Domini

We measured perceived depth from the optic flow (a) when showing a stationary physical or virtual object to observers who moved their head at a normal or slower speed, and (b) when simulating the same optic flow on a computer and presenting it to stationary observers. Our results show that perceived surface slant is systematically distorted, for both the active and the passive viewing of physical or virtual surfaces. These distortions are modulated by head translation speed, with perceived slant increasing directly with the local velocity gradient of the optic flow. This empirical result allows us to determine the relative merits of two alternative approaches aimed at explaining perceived surface slant in active vision: an “inverse optics” model that takes head motion information into account, and a probabilistic model that ignores extra-retinal signals. We compare these two approaches within the framework of Bayesian theory. The “inverse optics” Bayesian model produces veridical slant estimates if the optic flow and the head translation velocity are measured with no error; because of the influence of a “prior” for flatness, the slant estimates become systematically biased as the measurement errors increase. The Bayesian model, which ignores the observer's motion, always produces distorted estimates of surface slant. Interestingly, the predictions of this second model, not those of the first one, are consistent with our empirical findings. The present results suggest that (a) in active vision perceived surface slant may be the product of probabilistic processes that do not guarantee the correct solution, and (b) extra-retinal signals may be mainly used for a better measurement of retinal information.
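The flatness-prior account can be made concrete with a toy Gaussian model. Everything here (the Gaussian forms, the zero-mean prior, the `gain` parameter) is an illustrative assumption, not the authors' actual formulation.

```python
def map_slant(measured_gradient, gradient_noise_sd, prior_sd, gain=1.0):
    """Toy MAP estimate of surface slant under a Gaussian flatness prior.

    The likelihood centers the slant on gain * measured velocity gradient;
    a zero-mean "flatness" prior pulls the estimate toward zero slant.
    For a product of two Gaussians, the MAP estimate is the
    precision-weighted average of the two means.
    """
    likelihood_mean = gain * measured_gradient
    w = prior_sd**2 / (prior_sd**2 + gradient_noise_sd**2)
    return w * likelihood_mean  # prior mean is 0 (flat surface)
```

The sketch reproduces the two qualitative properties in the abstract: with zero measurement noise the estimate is veridical, and with noise the flatness prior biases it toward zero while leaving it monotone in the local velocity gradient.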


Vision Research | 2008

3D surface orientation based on a novel representation of the orientation disparity field

Carlo Fantoni

The orientation disparity field from two orthographic views of an inclined planar surface patch (covered by straight lines) is analyzed, and a new tool to extract the patch orientation is provided: the function coupling the average orientation of each pair of corresponding surface contours with their orientation disparity. This function allows identification of the surface tilt, together with two indeterminacy functions describing the set of surface inclinations (around the vertical and horizontal axes) over convergence angle values compatible with the orientation disparity field. Results of simulations show that the selection of inclination values matching the difference between the areas below the indeterminacy functions is consistent with several surface orientation effects found in psychophysical and computational experiments, such as unbiased tilt versus biased slant estimates, slant underestimation, surface orientation anisotropy, and slant/tilt covariation.


Proceedings of SPIE | 2014

A framework for the study of vision in active observers

Carlo Nicolini; Carlo Fantoni; Giovanni Mancuso; Robert Volcic; Fulvio Domini

We present a framework for the study of active vision, i.e., the functioning of the visual system during actively self-generated body movements. In laboratory settings, human vision is usually studied with a static observer looking at static or, at best, dynamic stimuli. In the real world, however, humans constantly move within dynamic environments. The resulting visual inputs are thus an intertwined mixture of self- and externally-generated movements. To fill this gap, we developed a virtual environment integrated with a head-tracking system in which the influence of self- and externally-generated movements can be manipulated independently. As a proof of principle, we studied perceptual stationarity of the visual world during lateral translation or rotation of the head. The movement of the visual stimulus was thus parametrically tethered to self-generated movements. We found that estimates of object stationarity were less biased and more precise during head rotation than translation. In both cases the visual stimulus had to partially follow the head movement to be perceived as immobile. We discuss a range of possibilities for our setup, among which is the study of shape perception in active and passive conditions, where the same optic flow is replayed to stationary observers.


PLOS ONE | 2014

Body actions change the appearance of facial expressions.

Carlo Fantoni; Walter Gerbino

Perception, cognition, and emotion do not operate along segregated pathways; rather, their adaptive interaction is supported by various sources of evidence. For instance, the aesthetic appraisal of powerful mood inducers like music can bias the facial expression of emotions towards mood congruency. In four experiments we showed similar mood-congruency effects elicited by the comfort/discomfort of body actions. Using a novel Motor Action Mood Induction Procedure, we let participants perform comfortable/uncomfortable visually-guided reaches and tested them in a facial emotion identification task. Through the putative mediation of motor-action-induced mood, action comfort enhanced the quality of the participant's global experience (a neutral face appeared happy and a slightly angry face neutral), while action discomfort made a neutral face appear angry and a slightly happy face neutral. Furthermore, uncomfortable (but not comfortable) reaching improved the sensitivity for the identification of emotional faces and reduced the identification time of facial expressions, as a possible effect of hyper-arousal from an unpleasant bodily experience.

Collaboration


Carlo Fantoni's top co-authors.

Giovanni Mancuso

Istituto Italiano di Tecnologia

Robert Volcic

Istituto Italiano di Tecnologia
