Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Roberto Cecere is active.

Publication


Featured research published by Roberto Cecere.


Current Biology | 2015

Individual differences in alpha frequency drive crossmodal illusory perception.

Roberto Cecere; Geraint Rees; Vincenzo Romei

Perception routinely integrates inputs from different senses. Stimulus temporal proximity critically determines whether or not these inputs are bound together. Despite the temporal window of integration being a widely accepted notion, its neurophysiological substrate remains unclear. Many types of common audio-visual interactions occur within a time window of ∼100 ms [1–5]. For example, in the sound-induced double-flash illusion, when two beeps are presented within ∼100 ms together with one flash, a second illusory flash is often perceived [2]. Due to their intrinsic rhythmic nature, brain oscillations are one candidate mechanism for gating the temporal window of integration. Interestingly, occipital alpha band oscillations cycle on average every ∼100 ms, with peak frequencies ranging between 8 and 14 Hz (i.e., ∼125–71 ms cycle). Moreover, presenting a brief tone can phase-reset such oscillations in visual cortex [6, 7]. Based on these observations, we hypothesized that the duration of each alpha cycle might provide the temporal unit to bind audio-visual events. Here, we first recorded EEG while participants performed the sound-induced double-flash illusion task [4] and found a positive correlation between individual alpha frequency (IAF) peak and the size of the temporal window of the illusion. Participants then performed the same task while receiving occipital transcranial alternating current stimulation (tACS), to modulate oscillatory activity [8], either at their IAF or at off-peak alpha frequencies (IAF±2 Hz). Compared to IAF tACS, IAF−2 Hz and IAF+2 Hz tACS, respectively, enlarged and shrunk the temporal window of illusion, suggesting that alpha oscillations might represent the temporal unit of visual processing that cyclically gates perception and the neurophysiological substrate promoting audio-visual interactions.
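
One alpha cycle lasts 1000/IAF milliseconds, so a peak anywhere in the 8–14 Hz range corresponds to a cycle of roughly 125–71 ms, in line with the ∼100 ms integration window. The following is a minimal sketch of how an IAF peak could be estimated from an occipital EEG power spectrum and converted to a cycle duration; the simulated signal and the Welch-based peak picking are illustrative assumptions, not the study's analysis pipeline.

```python
# Minimal sketch (assumptions, not the paper's pipeline): estimate an
# individual alpha frequency (IAF) peak from EEG and convert it to a cycle
# duration. The "EEG" here is a simulated 10.5 Hz oscillation in noise.
import numpy as np
from scipy.signal import welch

fs = 500                                    # sampling rate in Hz (assumed)
t = np.arange(0, 60, 1 / fs)                # 60 s of simulated occipital EEG
rng = np.random.default_rng(0)
eeg = np.sin(2 * np.pi * 10.5 * t) + rng.normal(0, 2, t.size)

freqs, psd = welch(eeg, fs=fs, nperseg=4 * fs)   # 0.25 Hz spectral resolution
alpha = (freqs >= 8) & (freqs <= 14)             # alpha band of interest
iaf = freqs[alpha][np.argmax(psd[alpha])]        # IAF = spectral peak in 8-14 Hz
cycle_ms = 1000.0 / iaf                          # duration of one alpha cycle

print(f"IAF = {iaf:.2f} Hz -> one cycle = {cycle_ms:.1f} ms")
# For this simulated signal: IAF = 10.50 Hz -> one cycle of ~95 ms, i.e. a
# plausible temporal unit for binding audio-visual events.
```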


Cortex | 2013

I am blind, but I "see" fear.

Caterina Bertini; Roberto Cecere; Elisabetta Làdavas

The ability to process unseen emotional signals might offer an evolutionary advantage by enabling threat detection. In the present study, we tested patients with visual field defects who had no subjective awareness of stimuli presented in the blind field and who performed at chance level in two-alternative discrimination tasks (Experiment 1). Patients performed go/no-go tasks in which they were asked to discriminate the emotional valence (Experiment 2) or the gender (Experiment 3) of faces displayed in the intact field during the concurrent presentation of emotional faces in the blind field. The results showed a facilitative effect when fearful faces were presented in the blind field, both when the emotional content of the stimuli was relevant (Experiment 2) and when it was irrelevant (Experiment 3) to the task. These findings contrast with the performance of healthy subjects and of patients tested in classical blindsight investigations, who showed response facilitation for congruent pairs of emotional stimuli. The observed implicit visual processing of unseen fearful stimuli might represent an adaptive mechanism for the implementation of efficient defensive responses, probably mediated by a spared subcortical, short-latency pathway.


Journal of Cognitive Neuroscience | 2014

Unseen fearful faces influence face encoding: Evidence from ERPs in hemianopic patients

Roberto Cecere; Caterina Bertini; Martin E. Maier; Elisabetta Làdavas

Visual threat-related signals are not only processed via a cortical geniculo-striatal pathway to the amygdala but also via a subcortical colliculo-pulvinar-amygdala pathway, which presumably mediates implicit processing of fearful stimuli. Indeed, hemianopic patients with unilateral damage to the geniculo-striatal pathway have been shown to respond faster to seen happy faces in their intact visual field when unseen fearful faces were concurrently presented in their blind field [Bertini, C., Cecere, R., & Làdavas, E. I am blind, but I “see” fear. Cortex, 49, 985–993, 2013]. This behavioral facilitation in the presence of unseen fear might reflect enhanced processing of consciously perceived faces because of early activation of the subcortical pathway for implicit fear perception, which possibly leads to a modulation of cortical activity. To test this hypothesis, we examined ERPs elicited by fearful and happy faces presented to the intact visual field of right and left hemianopic patients, while fearful, happy, or neutral faces were concurrently presented in their blind field. Results showed that the amplitude of the N170 elicited by seen happy faces was selectively increased when an unseen fearful face was concurrently presented in the blind field of right hemianopic patients. These results suggest that when the geniculo-striate visual pathway is lesioned, the rapid and implicit processing of threat signals can enhance facial encoding. Notably, the N170 modulation was only observed in left-lesioned patients, favoring the hypothesis that implicit subcortical processing of fearful signals can influence face encoding only when the right hemisphere is intact.
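
For illustration, an N170 modulation like the one reported here is typically quantified as the mean ERP amplitude within a post-stimulus window at face-sensitive occipito-temporal electrodes, compared across blind-field conditions. The sketch below uses hypothetical epoch arrays, a single electrode, and an assumed 150–200 ms window; none of these parameters are taken from the study.

```python
# Minimal sketch (all parameters assumed): mean-amplitude measure of an N170
# modulation at one electrode, contrasting unseen fearful vs. neutral faces
# in the blind field. Placeholder random data stand in for real epochs.
import numpy as np

fs = 500
times = np.arange(-0.1, 0.5, 1 / fs)        # epoch time axis in seconds
rng = np.random.default_rng(3)
erp_fearful = rng.normal(size=(80, times.size))   # trials x samples
erp_neutral = rng.normal(size=(80, times.size))

win = (times >= 0.15) & (times <= 0.20)     # assumed N170 window (150-200 ms)

def mean_amplitude(epochs):
    """Average across trials, then across the component window."""
    return epochs.mean(axis=0)[win].mean()

# The N170 is negative-going, so an enhanced N170 shows up as a more
# negative difference (fearful minus neutral).
diff = mean_amplitude(erp_fearful) - mean_amplitude(erp_neutral)
print(f"N170 difference (fearful - neutral blind field): {diff:.3f} uV")
```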


The Journal of Neuroscience | 2013

Differential Contribution of Cortical and Subcortical Visual Pathways to the Implicit Processing of Emotional Faces: A tDCS Study

Roberto Cecere; Caterina Bertini; Elisabetta Làdavas

The visual processing of emotional faces is subserved by both a cortical and a subcortical route. To investigate the specific contribution of these two functional pathways, two groups of neurologically healthy humans were tested using transcranial direct current stimulation (tDCS). In Experiment 1, participants received sham and active cathodal-inhibitory tDCS over the left occipital cortex, while, in control Experiment 2, participants received sham and active cathodal-inhibitory tDCS over the vertex, to exclude any unspecific effect of tDCS. After tDCS, participants performed a go/no-go task, responding to happy or fearful target faces presented in the left visual field while backwardly masked faces (emotionally congruent, incongruent, or neutral) were concurrently displayed in the right visual field. After both cathodal tDCS over the vertex (Experiment 2) and sham stimulation (Experiments 1 and 2), a reduction of reaction times was found for pairs of emotionally congruent stimuli. However, after suppressing activity in the left occipital cortex, the congruency-dependent response facilitation disappeared, while a specific facilitative effect was evident when masked fearful faces were coupled with happy target faces. These results parallel the performance of hemianopic patients and suggest that when the occipital cortex is damaged or inhibited, and the visual processing of emotional faces depends mainly on the activation of the “low road” subcortical route, fearful faces are the only visually processed stimuli capable of facilitating a behavioral response. This effect might reflect an adaptive mechanism implemented by the brain to react quickly to potential threats before their conscious identification.


Neuropsychologia | 2013

Crossmodal enhancement of visual orientation discrimination by looming sounds requires functional activation of primary visual areas: A case study

Roberto Cecere; Vincenzo Romei; Caterina Bertini; Elisabetta Làdavas

Approaching or looming sounds are salient, potentially threatening stimuli with a particular impact on visual processing. The early crossmodal effects of looming sounds (Romei, Murray, Cappe, & Thut, 2009) and their selective impact on visual orientation discrimination (Leo, Romei, Freeman, Làdavas, & Driver, 2011) suggest that these multisensory interactions may take place already within low-level visual cortices. To investigate this hypothesis, we tested a patient (SDV) with a bilateral occipital lesion and spared residual portions of V1/V2. Accordingly, SDV's visual perimetry revealed blindness of the central visual field with some residual peripheral vision. In two experiments we tested the influence of looming vs. receding and stationary sounds on SDV's line orientation discrimination (orientation discrimination experiment) and visual detection abilities (detection experiment) in the preserved and blind portions of the visual field, corresponding to spared and lesioned areas of V1, respectively. In the visual orientation discrimination experiment, we found that SDV's visual orientation sensitivity significantly improved for visual targets paired with looming sounds, but only for lines presented in the partially preserved visual field. In the visual detection experiment, where SDV was required to simply detect the same stimuli presented in the orientation discrimination experiment, a generalised sound-induced visual improvement was observed in both the intact and the blind portions of the visual field. These results provide direct evidence that early visual areas are critically involved in the crossmodal modulation of visual orientation sensitivity by looming sounds. Thus, a lesion in V1 prevents the enhancement of visual orientation sensitivity. In contrast, the same lesion does not prevent the sound-induced enhancement of visual detection, probably because alternative visual pathways (e.g. retino-colliculo-extrastriate), which are usually spared in these patients, are able to mediate the crossmodal enhancement of basic visual abilities such as detection.


PLOS ONE | 2013

The duration of a co-occurring sound modulates visual detection performance in humans.

Benjamin de Haas; Roberto Cecere; Harriet Cullen; Jon Driver; Vincenzo Romei

Background

The duration of sounds can affect the perceived duration of co-occurring visual stimuli. However, it is unclear whether this is limited to amodal processes of duration perception or also affects non-temporal qualities of visual perception.

Methodology/Principal Findings

Here, we tested the hypothesis that visual sensitivity, rather than only the perceived duration of visual stimuli, can be affected by the duration of co-occurring sounds. We found that visual detection sensitivity (d′) for unimodal stimuli was higher for stimuli of longer duration. Crucially, in a cross-modal condition, we replicated previous unimodal findings, observing that visual sensitivity was shaped by the duration of co-occurring sounds. When short visual stimuli (∼24 ms) were accompanied by sounds of matching duration, visual sensitivity was decreased relative to the unimodal visual condition. However, when the same visual stimuli were accompanied by longer auditory stimuli (∼60–96 ms), visual sensitivity was increased relative to the performance for ∼24 ms auditory stimuli. Across participants, this sensitivity enhancement was observed within a critical time window of ∼60–96 ms. Moreover, the amplitude of this effect correlated across participants with the visual sensitivity enhancement found for longer-lasting visual stimuli.

Conclusions/Significance

Our findings show that the duration of co-occurring sounds affects visual perception; it changes visual sensitivity in a similar way as altering the (actual) duration of the visual stimuli does.
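
For reference, the sensitivity index d′ from signal detection theory separates perceptual sensitivity from response bias: d′ = z(hit rate) − z(false-alarm rate), where z is the inverse standard normal CDF. Below is a minimal sketch; the trial counts are invented, and the log-linear correction for extreme rates is a common convention rather than necessarily the authors' choice.

```python
# Minimal sketch: compute d' = z(H) - z(FA) from hypothetical trial counts.
from scipy.stats import norm

def d_prime(hits, misses, false_alarms, correct_rejections):
    # Log-linear correction keeps z finite when a rate would be 0 or 1.
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

# Invented counts for a ~24 ms visual target with a longer (~60-96 ms) vs. a
# duration-matched (~24 ms) co-occurring sound:
print(d_prime(70, 30, 15, 85))   # ~1.54  (longer sound, higher sensitivity)
print(d_prime(55, 45, 15, 85))   # ~1.15  (matched sound, lower sensitivity)
```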


Neuropsychologia | 2017

Unseen fearful faces facilitate visual discrimination in the intact field.

Caterina Bertini; Roberto Cecere; Elisabetta Làdavas

Implicit visual processing of emotional stimuli has been widely investigated since the classical studies on affective blindsight, in which patients with primary visual cortex lesions showed discriminatory abilities for unseen emotional stimuli in the absence of awareness. In addition, more recent evidence from hemianopic patients showed response facilitation and enhanced early visual encoding of seen faces only when fearful faces were concurrently presented in the blind field. However, it is still unclear whether unseen fearful faces specifically facilitate visual processing of facial stimuli, or whether the facilitatory effect constitutes an adaptive mechanism prioritizing the visual analysis of any stimulus. To address this question, we tested a group of hemianopic patients who performed at chance in forced-choice discrimination tasks for stimuli in the blind field. Patients performed a go/no-go task in which they were asked to discriminate simple visual stimuli (Gabor patches) presented in their intact field, while fearful, happy and neutral faces were concurrently presented in the blind field. The results showed a reduction in response times to the Gabor patches presented in the intact field when fearful faces were concurrently presented in the blind field, but only in patients with left hemispheric lesions. No facilitatory effect was observed in patients with right hemispheric lesions. These results suggest that unseen fearful faces are implicitly processed and can facilitate the visual analysis of simple visual stimuli presented in the intact field. This effect might be subserved by activity in the spared colliculo-amygdala-extrastriate pathway, which promotes efficient visual analysis of the environment and rapid execution of defensive responses. Such facilitation was observed only in patients with left lesions, favouring the hypothesis that the right hemisphere mediates implicit visual processing of fear signals.


European Journal of Neuroscience | 2016

Behavioural evidence for separate mechanisms of audiovisual temporal binding as a function of leading sensory modality

Roberto Cecere; Joachim Gross; Gregor Thut

The ability to integrate auditory and visual information is critical for effective perception and interaction with the environment, and is thought to be abnormal in some clinical populations. Several studies have investigated the time window over which audiovisual events are integrated, also called the temporal binding window, and revealed asymmetries depending on the order of audiovisual input (i.e. the leading sense). When judging audiovisual simultaneity, the binding window appears narrower and non-malleable for auditory-leading stimulus pairs, and wider and trainable for visual-leading pairs. Here we specifically examined the level of independence of binding mechanisms when auditory-before-visual vs. visual-before-auditory input is bound. Three groups of healthy participants practiced audiovisual simultaneity detection with feedback, selectively training on auditory-leading stimulus pairs (group 1), visual-leading stimulus pairs (group 2) or both (group 3). Subsequently, we tested for learning transfer (crossover) from trained stimulus pairs to non-trained pairs with the opposite order of audiovisual input. Our data confirmed the known asymmetry in size and trainability for auditory-visual vs. visual-auditory binding windows. More importantly, practicing one type of audiovisual integration (e.g. auditory-visual) did not affect the other type (e.g. visual-auditory), even if the latter was trainable by within-condition practice. Together, these results provide crucial evidence that the audiovisual temporal binding mechanisms for auditory-leading vs. visual-leading stimulus pairs are independent, possibly tapping into different circuits for audiovisual integration due to the engagement of different multisensory sampling mechanisms depending on the leading sense. Our results have implications for the study of multisensory interactions in healthy participants and in clinical populations with dysfunctional multisensory integration.
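
In simultaneity judgment tasks, the temporal binding window is commonly estimated by fitting a curve to the proportion of "simultaneous" responses across stimulus onset asynchronies (SOAs) and reading off its width. The sketch below fits a single Gaussian to invented response data and takes the full width at half maximum as the window; the model choice, SOA grid, and width criterion are illustrative assumptions, and fits with separate auditory-leading and visual-leading sides are often preferred precisely because of the asymmetry described above.

```python
# Minimal sketch (invented data, assumed Gaussian model): estimate a temporal
# binding window from simultaneity judgments. Negative SOA = auditory-leading,
# positive SOA = visual-leading.
import numpy as np
from scipy.optimize import curve_fit

def gaussian(soa, amp, mu, sigma):
    return amp * np.exp(-((soa - mu) ** 2) / (2 * sigma ** 2))

soas = np.array([-300, -200, -100, -50, 0, 50, 100, 200, 300])        # ms
p_simult = np.array([0.10, 0.30, 0.70, 0.90, 0.95, 0.92, 0.85, 0.55, 0.20])

(amp, mu, sigma), _ = curve_fit(gaussian, soas, p_simult, p0=[1, 0, 100])
fwhm = 2.355 * sigma       # full width at half maximum as the window estimate
print(f"window centre = {mu:.0f} ms, width (FWHM) = {fwhm:.0f} ms")
# An off-zero centre and the wider visual-leading flank reflect the AV-VA
# asymmetry discussed above.
```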


The Journal of Neuroscience | 2017

Being first matters: topographical representational similarity analysis of ERP signals reveals separate networks for audiovisual temporal binding depending on the leading sense

Roberto Cecere; Joachim Gross; Ashleigh Willis; Gregor Thut

In multisensory integration, processing in one sensory modality is enhanced by complementary information from other modalities. Intersensory timing is crucial in this process because only inputs reaching the brain within a restricted temporal window are perceptually bound. Previous research in the audiovisual field has investigated various features of the temporal binding window, revealing asymmetries in its size and plasticity depending on the leading input: auditory–visual (AV) or visual–auditory (VA). Here, we tested whether separate neuronal mechanisms underlie this AV–VA dichotomy in humans. We recorded high-density EEG while participants performed an audiovisual simultaneity judgment task including various AV–VA asynchronies and unisensory control conditions (visual-only, auditory-only), and tested whether AV and VA processing generate different patterns of brain activity. After isolating the multisensory components of AV–VA event-related potentials (ERPs) from the sum of their unisensory constituents, we ran a time-resolved topographical representational similarity analysis (tRSA) comparing the AV and VA ERP maps. Spatial cross-correlation matrices were built from real data to index the similarity between the AV and VA maps at each time point (500 ms window after stimulus) and then correlated with two alternative similarity model matrices: AVmaps = VAmaps versus AVmaps ≠ VAmaps. The tRSA results favored the AVmaps ≠ VAmaps model across all time points, suggesting that audiovisual temporal binding (indexed by synchrony perception) engages different neural pathways depending on the leading sense. The existence of such a dual route supports recent theoretical accounts proposing that multiple binding mechanisms are implemented in the brain to accommodate different information parsing strategies in the auditory and visual sensory systems.

SIGNIFICANCE STATEMENT

Intersensory timing is a crucial aspect of multisensory integration, determining whether and how inputs in one modality enhance stimulus processing in another modality. Our research demonstrates that evaluating the synchrony of auditory-leading (AV) versus visual-leading (VA) audiovisual stimulus pairs is characterized by two distinct patterns of brain activity. This suggests that audiovisual integration is not a unitary process and that different binding mechanisms are recruited in the brain based on the leading sense. These mechanisms may be relevant for supporting different classes of multisensory operations, for example, auditory enhancement of visual attention (AV) and visual enhancement of auditory speech (VA).
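
As a rough illustration of the core comparison, the sketch below correlates the AV and VA scalp maps across channels at each time point: under an "AVmaps = VAmaps" model this spatial correlation should be high, whereas under "AVmaps ≠ VAmaps" it should sit near zero. All shapes and names are assumptions, and this single-curve reduction stands in for the full cross-correlation and model-matrix machinery of the published tRSA.

```python
# Simplified sketch of a time-resolved topographic comparison (assumed
# shapes; placeholder random data stand in for the multisensory ERP maps).
import numpy as np

rng = np.random.default_rng(1)
n_ch, n_t = 64, 250                    # 64 channels, 500 ms at 500 Hz (assumed)
av = rng.normal(size=(n_ch, n_t))      # placeholder AV multisensory ERP maps
va = rng.normal(size=(n_ch, n_t))      # placeholder VA multisensory ERP maps

def spatial_corr(a, b):
    """Pearson correlation across channels, one value per time point."""
    az = (a - a.mean(0)) / a.std(0)
    bz = (b - b.mean(0)) / b.std(0)
    return (az * bz).mean(0)

r_t = spatial_corr(av, va)             # AV-VA map similarity over time, (n_t,)
print(f"mean spatial AV-VA map correlation: {r_t.mean():.3f}")
# Independent placeholder maps give r_t near zero, the pattern predicted by
# the AVmaps != VAmaps model; identical maps would give r_t near one.
```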


eNeuro | 2017

Prestimulus EEG power predicts conscious awareness but not objective visual performance

Christopher S.Y. Benwell; Chiara Francesca Tagliabue; Domenica Veniero; Roberto Cecere; Silvia Savazzi; Gregor Thut

Prestimulus oscillatory neural activity has been linked to perceptual outcomes during the performance of psychophysical detection and discrimination tasks. Specifically, the power and phase of low-frequency oscillations have been found to predict whether an upcoming weak visual target will be detected or not. However, the mechanisms by which baseline oscillatory activity influences perception remain unclear. Recent studies suggest that the frequently reported negative relationship between α power and stimulus detection may be explained by changes in detection criterion (i.e., more “target present” responses regardless of whether the target was present or absent) driven by the state of neural excitability, rather than by changes in visual sensitivity (i.e., more veridical percepts). Here, we recorded EEG while human participants performed a luminance discrimination task on perithreshold stimuli, in combination with single-trial ratings of perceptual awareness. Our aim was to investigate whether the power and/or phase of prestimulus oscillatory activity predict discrimination accuracy and/or perceptual awareness on a trial-by-trial basis. Prestimulus power (3–28 Hz) was inversely related to perceptual awareness ratings (i.e., higher ratings in states of low prestimulus power/high excitability) but did not predict discrimination accuracy. In contrast, prestimulus oscillatory phase did not predict awareness ratings or accuracy in any frequency band. These results provide evidence that prestimulus α power influences the level of subjective awareness of threshold visual stimuli but does not influence visual sensitivity when a decision has to be made regarding stimulus features. Hence, we find a clear dissociation between the influence of ongoing neural activity on conscious awareness and on objective performance.
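
The key predictor here, single-trial prestimulus power, can be obtained by spectrally decomposing a pre-stimulus window on each trial and averaging power within a band. The sketch below computes per-trial log power and rank-correlates it with awareness ratings; the epoch length, band, and rating scale are illustrative assumptions rather than the study's (3–28 Hz) analysis settings.

```python
# Minimal sketch (assumed parameters, placeholder data): single-trial
# prestimulus band power correlated with perceptual awareness ratings.
import numpy as np
from scipy.signal import welch
from scipy.stats import spearmanr

fs, n_trials = 500, 200
rng = np.random.default_rng(2)
epochs = rng.normal(size=(n_trials, fs))   # 1 s prestimulus EEG per trial
ratings = rng.integers(1, 5, n_trials)     # placeholder 4-point awareness scale

freqs, psd = welch(epochs, fs=fs, nperseg=fs // 2, axis=-1)
band = (freqs >= 8) & (freqs <= 14)        # one band (alpha) for brevity
power = np.log(psd[:, band].mean(axis=1))  # one log-power value per trial

rho, p = spearmanr(power, ratings)
print(f"prestimulus alpha power vs. awareness: rho = {rho:.3f}, p = {p:.3f}")
# The reported effect is an inverse relation: higher awareness ratings in
# states of low prestimulus power (high excitability).
```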

Collaboration


Dive into Roberto Cecere's collaborations.

Top Co-Authors

Jon Driver

University College London
