Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Francesco Pavani is active.

Publications


Featured research published by Francesco Pavani.


Psychological Science | 2000

Visual Capture of Touch: Out-of-the-Body Experiences With Rubber Gloves

Francesco Pavani; Charles Spence; Jon Driver

When the apparent visual location of a body part conflicts with its veridical location, vision can dominate proprioception and kinesthesia. In this article, we show that vision can capture tactile localization. Participants discriminated the location of vibrotactile stimuli (upper, at the index finger, vs. lower, at the thumb), while ignoring distractor lights that could independently be upper or lower. Such tactile discriminations were slowed when the distractor light was incongruent with the tactile target (e.g., an upper light during lower touch) rather than congruent, especially when the lights appeared near the stimulated hand. The hands were occluded under a table, with all distractor lights above the table. The effect of the distractor lights increased when rubber hands were placed on the table, “holding” the distractor lights, but only when the rubber hands were spatially aligned with the participants’ own hands. In this aligned situation, participants were more likely to report the illusion of feeling touch at the rubber hands. Such visual capture of touch appears cognitively impenetrable.


Journal of Experimental Psychology: Human Perception and Performance | 2000

Crossmodal links between vision and touch in covert endogenous spatial attention.

Charles Spence; Francesco Pavani; Jon Driver

The authors report a series of 6 experiments investigating crossmodal links between vision and touch in covert endogenous spatial attention. When participants were informed that visual and tactile targets were more likely on 1 side than the other, speeded discrimination responses (continuous vs. pulsed, Experiments 1 and 2; or up vs. down, Experiment 3) for targets in both modalities were significantly faster on the expected side, even though target modality was entirely unpredictable. When participants expected a target on a particular side in just one modality, corresponding shifts of covert attention also took place in the other modality, as evidenced by faster elevation judgments on that side (Experiment 4). Larger attentional effects were found when directing visual and tactile attention to the same position rather than to different positions (Experiment 5). A final study with crossed hands revealed that these visuotactile links in spatial attention apply to common positions in external space.


Cognitive, Affective, & Behavioral Neuroscience | 2004

Spatial constraints on visual-tactile cross-modal distractor congruency effects

Charles Spence; Francesco Pavani; Jon Driver

Across three experiments, participants made speeded elevation discrimination responses to vibrotactile targets presented to the thumb (held in a lower position) or the index finger (upper position) of either hand, while simultaneously trying to ignore visual distractors presented independently from either the same or a different elevation. Performance on the vibrotactile elevation discrimination task was slower and less accurate when the visual distractor was incongruent with the elevation of the vibrotactile target (e.g., a lower light during the presentation of an upper vibrotactile target to the index finger) than when they were congruent, showing that people cannot completely ignore vision when selectively attending to vibrotactile information. We investigated the attentional, temporal, and spatial modulation of these cross-modal congruency effects by manipulating the direction of endogenous tactile spatial attention, the stimulus onset asynchrony between target and distractor, and the spatial separation between the vibrotactile target, any visual distractors, and the participant’s two hands within and across hemifields. Our results provide new insights into the spatiotemporal modulation of crossmodal congruency effects and highlight the utility of this paradigm for investigating the contributions of visual, tactile, and proprioceptive inputs to the multisensory representation of peripersonal space.


Experimental Brain Research | 1999

Are perception and action affected differently by the Titchener circles illusion?

Francesco Pavani; Irina Boscagli; Francesco Benvenuti; M. Rabuffetti; Alessandro Farnè

In the present study, we investigated the effects of the Titchener circles illusion in perception and action. In this illusion, two identical discs can be perceived as being different in size when one is surrounded by an annulus of smaller circles and the other is surrounded by an annulus of larger circles. This classic size-contrast illusion, known as the Ebbinghaus or Titchener circles illusion, has a strong perceptual effect. By contrast, it has recently been demonstrated that when subjects are required to pick up one of the discs, their grip aperture during reaching is largely appropriate to the size of the target. This result has been considered as evidence of a clear dissociation between visual perception and visuomotor behaviour in the intact human brain. In this study, we suggest and investigate an alternative explanation for these results. We argue that, in a previous study, while perception was subjected to the simultaneous influence of the large and small circles displays, in the grasping task only the annulus of circles surrounding the target object was influential. We tested this hypothesis by requiring 18 subjects to perceptually estimate and grasp a disc centred in a single annulus of Titchener circles. The results showed that both the perceptual estimation and the hand shaping while grasping the disc were similarly influenced by the illusion. Moreover, the stronger the perceptual illusion, the greater the effect on the grip scaling. We discuss the results as evidence of an interaction between the functional pathways for perception and action in the intact human brain.


Psychological Science | 2010

Synchronous multisensory stimulation blurs self-other boundaries.

Maria Paola Paladino; Mara Mazzurega; Francesco Pavani; Thomas W. Schubert

In a study that builds on recent cognitive neuroscience research on body perception and social psychology research on social relations, we tested the hypothesis that synchronous multisensory stimulation leads to self-other merging. We brushed the cheek of each study participant as he or she watched a stranger’s cheek being brushed in the same way, either in synchrony or in asynchrony. We found that this multisensory procedure had an effect on participants’ body perception as well as social perception. Study participants exposed to synchronous stimulation showed more merging of self and the other than participants exposed to asynchronous stimulation. The degree of self-other merging was determined by measuring participants’ body sensations and their perception of face resemblance, as well as participants’ judgment of the inner state of the other, closeness felt toward the other, and conformity behavior. The results of this study show how multisensory integration can affect social perception and create a sense of self-other similarity.


Current Biology | 2002

A Common Cortical Substrate Activated by Horizontal and Vertical Sound Movement in the Human Brain

Francesco Pavani; Emiliano Macaluso; Jason D. Warren; Jon Driver; Timothy D. Griffiths

Perception of movement in acoustic space depends on comparison of the sound waveforms reaching the two ears (binaural cues) as well as spectrotemporal analysis of the waveform at each ear (monaural cues). The relative importance of these two cues is different for perception of vertical or horizontal motion, with spectrotemporal analysis likely to be more important for perceiving vertical shifts. In humans, functional imaging studies have shown that sound movement in the horizontal plane activates brain areas distinct from the primary auditory cortex, in parietal and frontal lobes and in the planum temporale. However, no previous work has examined activations for vertical sound movement. It is therefore difficult to generalize previous imaging studies, based on horizontal movement only, to multidimensional auditory space perception. Using externalized virtual-space sounds in a functional magnetic resonance imaging (fMRI) paradigm to investigate this, we compared vertical and horizontal shifts in sound location. A common bilateral network of brain areas was activated in response to both horizontal and vertical sound movement. This included the planum temporale, superior parietal cortex, and premotor cortex. Sounds perceived laterally in virtual space were associated with contralateral activation of the auditory cortex. These results demonstrate that sound movement in vertical and horizontal dimensions engages a common processing network in the human cerebral cortex and show that multidimensional spatial properties of sounds are processed at this level.


Perception | 2007

The Role of Hand Size in the Fake-Hand Illusion Paradigm

Francesco Pavani; Massimiliano Zampini

When a hand (either real or fake) is stimulated in synchrony with our own hand concealed from view, the felt position of our own hand can be biased toward the location of the seen hand. This intriguing phenomenon relies on the brain's ability to detect statistical correlations in the multisensory inputs (ie visual, tactile, and proprioceptive), but it is also modulated by the pre-existing representation of one's own body. Nonetheless, researchers appear to have accepted the assumption that the size of the seen hand does not matter for this illusion to occur. Here we used a real-time video image of the participant's own hand to elicit the illusion, but we varied the hand size in the video image so that the seen hand was either reduced, veridical, or enlarged in comparison to the participant's own hand. The results showed that visible-hand size modulated the illusion, which was present for veridical and enlarged images of the hand, but absent when the visible hand was reduced. These findings indicate that very specific aspects of our own body image (ie hand size) can constrain the multisensory modulation of the body schema highlighted by the fake-hand illusion paradigm. In addition, they suggest an asymmetric tendency to acknowledge enlarged (but not reduced) images of body parts within our body representation.


Journal of Cognitive Neuroscience | 2002

Acoustical Vision of Neglected Stimuli: Interaction among Spatially Converging Audiovisual Inputs in Neglect Patients

Francesca Frassinetti; Francesco Pavani; Elisabetta Làdavas

Cross-modal spatial integration between auditory and visual stimuli is a common phenomenon in space perception. The principles underlying such integration have been outlined by neurophysiological and behavioral studies in animals (Stein & Meredith, 1993), but little evidence exists proving that similar principles also occur in humans. In the present study, we explored this possibility in patients with visual neglect, namely, patients with visuospatial impairment. To test this hypothesis, neglect patients were required to detect a brief flash of light presented in one of six spatial positions, either in a unimodal condition (i.e., only visual stimuli were presented) or in a cross-modal condition (i.e., a sound was presented simultaneously with the visual target, either at the same spatial position or at one of the remaining five possible positions). The results showed an improvement of visual detection when visual and auditory stimuli originated from the same position in space or at close spatial disparity (16°). In contrast, no improvement was found when the spatial separation of visual and auditory stimuli was larger than 16°. Moreover, the improvement was larger for visual positions that were more affected by the spatial impairment, i.e., the most peripheral positions in the left visual field (LVF). In conclusion, the results of the present study considerably extend our knowledge about multisensory integration, by showing in humans the existence of an integrated visuoauditory system with functional properties similar to those found in animals.


Vision Research | 2000

Reappraising the apparent costs of attending to two separate visual objects

Greg Davis; Jon Driver; Francesco Pavani; Alex J. Shepherd

Support for object-based accounts of visual attention has been drawn from several different types of effect. One effect is found when observers try to restrict their attention to a particular region of a display. Other regions belonging to the same object are often selected as well, suggesting that attention spreads spatially over entire objects. Another effect is found when judging two visual attributes; performance is often less efficient when the attributes belong to separate objects rather than both belonging to a single object. This latter effect has been taken to imply that only one segmented object can be attended at a time. However, it may instead merely be a variant of the first effect. If, as we assume here, attention spreads to task-irrelevant regions of relevant objects, it will encompass a larger spatial region and more information when judging attributes of two objects rather than one. Here we compared judging one versus two objects, while manipulating whether the two objects occupied a wider extent than the single object condition (as in previous work), or not. Costs were found for judging two objects versus one only when together they occupied a wider spatial extent. We conclude that reported difficulties in attending two objects may be due to attention spreading across the entire spatial extent of objects when judging their parts, rather than a fixed inability to process more than one object at a time.


Neuroreport | 2009

Grasping actions remap peripersonal space

Claudio Brozzoli; Francesco Pavani; Christian Urquizar; Lucilla Cardinali; Alessandro Farnè

The portion of space that closely surrounds our body parts is termed peripersonal space, and it has been shown to be represented in the brain through multisensory processing systems. Here, we tested whether voluntary actions, such as grasping an object, may remap such multisensory spatial representation. Participants discriminated touches on the hand they used to grasp an object containing task-irrelevant visual distractors. Compared with a static condition, reach-to-grasp movements increased the interference exerted by visual distractors over tactile targets. This remapping of multisensory space was triggered by action onset and further enhanced in real time during the early action execution phase. Additional experiments showed that this phenomenon is hand-centred. These results provide the first evidence of a functional link between voluntary object-oriented actions and multisensory coding of the space around us.

Collaboration


Dive into Francesco Pavani's collaborations.

Top Co-Authors

Jon Driver

University College London
