
Publication


Featured research published by Agnès Alsius.


Experimental Brain Research | 2007

Attention to touch weakens audiovisual speech integration

Agnès Alsius; Jordi Navarra; Salvador Soto-Faraco

One of the classic examples of multisensory integration in humans occurs when speech sounds are combined with the sight of corresponding articulatory gestures. Despite the longstanding assumption that this kind of audiovisual binding operates in an attention-free mode, recent findings (Alsius et al. in Curr Biol, 15(9):839–843, 2005) suggest that audiovisual speech integration decreases when visual or auditory attentional resources are depleted. The present study addressed the generalization of this attention constraint by testing whether a similar decrease in multisensory integration is observed when attention demands are imposed on a sensory domain that is not involved in speech perception, such as touch. We measured the McGurk illusion in a dual-task paradigm involving a difficult tactile task. The results showed that the percentage of visually influenced responses to audiovisual stimuli was reduced when attention was diverted to the tactile task. This finding is attributed to a modulation of audiovisual speech integration by supramodal attentional limitations. We suggest that the interactions between the attentional system and crossmodal binding mechanisms may be more extensive and dynamic than previous studies have suggested.
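
The central measure in this kind of dual-task study is the proportion of "visually influenced" (fusion) responses to incongruent audiovisual syllables. Below is a minimal sketch of how that proportion might be computed per condition; the response labels, fusion categories, and trial counts are hypothetical illustrations, not data from the paper.

```python
# Hypothetical illustration: proportion of visually influenced ("McGurk") responses
# to incongruent audiovisual syllables, computed separately for a single-task
# (speech only) and a dual-task (speech + tactile) condition.
from collections import Counter

def mcgurk_rate(responses, fused=frozenset({"da", "tha"})):
    """Fraction of trials on which the reported syllable was a fusion percept."""
    counts = Counter(responses)
    return sum(counts[r] for r in fused) / len(responses)

# Made-up response lists (auditory /ba/ dubbed onto visual /ga/).
single_task = ["da"] * 14 + ["ba"] * 6      # attention fully on the speech
dual_task   = ["da"] * 7  + ["ba"] * 13     # attention diverted to touch

print(f"single task: {mcgurk_rate(single_task):.0%} fused responses")
print(f"dual task:   {mcgurk_rate(dual_task):.0%} fused responses")
```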


Journal of Experimental Psychology: Human Perception and Performance | 2009

Deconstructing the McGurk-MacDonald Illusion

Salvador Soto-Faraco; Agnès Alsius

Cross-modal illusions such as the McGurk-MacDonald effect have been used to illustrate the automatic, encapsulated nature of multisensory integration. This characterization is based on the widespread assumption that the illusory percept arising from intersensory conflict reflects only the end product of the multisensory integration process, with the mismatch between the original unisensory events remaining largely hidden from awareness. Here the authors show that when presented with desynchronized audiovisual speech syllables, observers are often able to detect the temporal mismatch while experiencing the McGurk-MacDonald illusion. Thus, contrary to previous assumptions, it seems possible to gain access to information about the individual sensory components of a multisensory (integrated) percept. On the basis of this and similar findings, the authors argue that multisensory integration is a multifaceted process during which different attributes of the (multisensory) object might be bound by different mechanisms and possibly at different times. This proposal contrasts with classic conceptions of multisensory integration as a homogeneous process whereby all the attributes of a multisensory event are treated in a unified manner.


Psychological Science | 2013

Detection of Audiovisual Speech Correspondences Without Visual Awareness

Agnès Alsius; Kevin G. Munhall

Mounting physiological and behavioral evidence has shown that the detectability of a visual stimulus can be enhanced by a simultaneously presented sound. The mechanisms underlying these cross-sensory effects, however, remain largely unknown. Using continuous flash suppression (CFS), we rendered a complex, dynamic visual stimulus (i.e., a talking face) consciously invisible to participants. We presented the visual stimulus together with a suprathreshold auditory stimulus (i.e., a voice speaking a sentence) that either matched or mismatched the lip movements of the talking face. We compared how long it took for the talking face to overcome interocular suppression and become visible to participants in the matched and mismatched conditions. Our results showed that the detection of the face was facilitated by the presentation of a matching auditory sentence, in comparison with the presentation of a mismatching sentence. This finding indicates that the registration of audiovisual correspondences occurs at an early stage of processing, even when the visual information is blocked from conscious awareness.
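
The dependent measure here is the time a suppressed face takes to break through continuous flash suppression in matched versus mismatched audio conditions. As an illustration only, the sketch below runs a within-participant comparison on invented suppression-breaking times; the numbers and the choice of a paired t-test are assumptions, not details reported in the abstract.

```python
# Hypothetical per-participant suppression-breaking times (seconds) under CFS,
# with the auditory sentence either matching or mismatching the suppressed face.
import numpy as np
from scipy import stats

matched    = np.array([3.1, 2.8, 3.5, 2.9, 3.3, 3.0, 2.7, 3.2])
mismatched = np.array([3.6, 3.1, 3.9, 3.4, 3.8, 3.3, 3.1, 3.7])

t, p = stats.ttest_rel(matched, mismatched)   # within-participant comparison
print(f"mean matched {matched.mean():.2f}s vs mismatched {mismatched.mean():.2f}s")
print(f"paired t({len(matched) - 1}) = {t:.2f}, p = {p:.3f}")
```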


Experimental Brain Research | 2008

Semantic congruency and the Colavita visual dominance effect

Camille Koppen; Agnès Alsius; Charles Spence

Participants presented with auditory, visual, or bimodal audiovisual stimuli in a speeded discrimination task fail to respond to the auditory component of bimodal targets significantly more often than to the visual component, a phenomenon known as the Colavita visual dominance effect. Given that spatial and temporal factors have recently been shown to modulate the Colavita effect, the aim of the present study was to investigate whether semantic congruency also modulates the effect. In the three experiments reported here, participants were presented with a version of the Colavita task in which the stimulus congruency between the auditory and visual components of the bimodal targets was manipulated. That is, the auditory and visual stimuli could refer to the same or different object (in Experiments 1 and 2) or audiovisual speech event (Experiment 3). Surprisingly, semantic/stimulus congruency had no effect on the magnitude of the Colavita effect in any of the experiments, although it exerted a significant effect on certain other aspects of participants’ performance. This finding contrasts with the results of other recent studies showing that semantic/stimulus congruency can affect certain multisensory interactions.
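
A hedged sketch of how the Colavita effect described above is commonly quantified: on bimodal trials, the proportion of responses in which the auditory component is missed minus the proportion in which the visual component is missed. The trial data below are invented for illustration.

```python
# Illustrative Colavita index from hypothetical bimodal-trial responses.
# Each bimodal trial should receive both an auditory and a visual response;
# an error occurs when only one component is reported.
bimodal_responses = (["visual_only"] * 9 +      # auditory component missed
                     ["auditory_only"] * 2 +    # visual component missed
                     ["both"] * 49)             # correct bimodal report

n = len(bimodal_responses)
p_miss_auditory = bimodal_responses.count("visual_only") / n
p_miss_visual   = bimodal_responses.count("auditory_only") / n

colavita_index = p_miss_auditory - p_miss_visual   # > 0 indicates visual dominance
print(f"Colavita visual dominance index: {colavita_index:.2f}")
```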


Frontiers in Psychology | 2014

Effect of attentional load on audiovisual speech perception: evidence from ERPs.

Agnès Alsius; Riikka Möttönen; Mikko Sams; Salvador Soto-Faraco; Kaisa Tiippana

Seeing articulatory movements influences perception of auditory speech. This is often reflected in a shortened latency of auditory event-related potentials (ERPs) generated in the auditory cortex. The present study addressed whether this early neural correlate of audiovisual interaction is modulated by attention. We recorded ERPs in 15 subjects while they were presented with auditory, visual, and audiovisual spoken syllables. Audiovisual stimuli consisted of incongruent auditory and visual components known to elicit a McGurk effect, i.e., a visually driven alteration in the auditory speech percept. In a Dual task condition, participants were asked to identify spoken syllables whilst monitoring a rapid visual stream of pictures for targets, i.e., they had to divide their attention. In a Single task condition, participants identified the syllables without any other tasks, i.e., they were asked to ignore the pictures and focus their attention fully on the spoken syllables. The McGurk effect was weaker in the Dual task than in the Single task condition, indicating an effect of attentional load on audiovisual speech perception. Early auditory ERP components, N1 and P2, peaked earlier to audiovisual stimuli than to auditory stimuli when attention was fully focused on syllables, indicating neurophysiological audiovisual interaction. This latency decrement was reduced when attention was loaded, suggesting that attention influences early neural processing of audiovisual speech. We conclude that reduced attention weakens the interaction between vision and audition in speech.
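
The key neural measure above is a shortening of the N1 peak latency for audiovisual relative to auditory-only syllables. The sketch below shows one conventional way to extract such a peak latency from an averaged waveform; the sampling rate, search window, and simulated ERPs are assumptions for illustration only, not the study's analysis pipeline.

```python
# Hypothetical peak-latency extraction for the auditory N1 (a negative deflection
# roughly 80-150 ms after sound onset) from averaged ERPs in auditory-only (A)
# and audiovisual (AV) conditions.
import numpy as np

fs = 1000                                   # samples per second (assumed)
t = np.arange(0, 0.5, 1 / fs)               # 0-500 ms epoch, stimulus onset at 0 ms

def simulated_erp(n1_latency):
    """Toy ERP: a negative Gaussian deflection centred on the N1 latency."""
    return -3.0 * np.exp(-((t - n1_latency) ** 2) / (2 * 0.015 ** 2))

def n1_peak_latency(erp, window=(0.08, 0.15)):
    """Latency (ms) of the most negative sample within the N1 search window."""
    mask = (t >= window[0]) & (t <= window[1])
    idx = np.argmin(erp[mask])
    return t[mask][idx] * 1000

erp_a  = simulated_erp(0.115)               # auditory-only N1 (assumed latency)
erp_av = simulated_erp(0.105)               # audiovisual N1, peaking earlier

print(f"N1 latency A:  {n1_peak_latency(erp_a):.0f} ms")
print(f"N1 latency AV: {n1_peak_latency(erp_av):.0f} ms")
```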


PLOS ONE | 2011

Cross-Modal Prediction in Speech Perception

Carolina Sánchez-García; Agnès Alsius; James T. Enns; Salvador Soto-Faraco

Speech perception often benefits from vision of the speaker's lip movements when they are available. One potential mechanism underlying this gain in perception arising from audio-visual integration is on-line prediction. In this study we address whether preceding speech context in a single modality can improve audiovisual processing and whether this improvement is based on on-line information transfer across sensory modalities. In the experiments presented here, during each trial, a speech fragment (context) presented in a single sensory modality (voice or lips) was immediately continued by an audiovisual target fragment. Participants made speeded judgments about whether voice and lips were in agreement in the target fragment. The leading single-sensory context and the subsequent audiovisual target fragment could be continuous in one modality only, in both modalities (context in one modality continues into both modalities in the target fragment), or in neither modality (i.e., discontinuous). The results showed quicker audiovisual matching responses when context was continuous with the target within either the visual or auditory channel (Experiment 1). Critically, prior visual context also provided an advantage when it was cross-modally continuous (with the auditory channel in the target), whereas auditory-to-visual cross-modal continuity resulted in no advantage (Experiment 2). This suggests that visual speech information can provide an on-line benefit for processing the upcoming auditory input through the use of predictive mechanisms. We hypothesize that this benefit is expressed at an early level of speech analysis.


Psychonomic Bulletin & Review | 2005

Spatial orienting of tactile attention induced by social cues

Salvador Soto-Faraco; Scott Sinnett; Agnès Alsius; Alan Kingstone

Several studies have established that humans orient their visual attention reflexively in response to social cues such as the direction of someone else’s gaze. However, the consequences of this kind of orienting have been addressed only for the visual system. We investigated whether visual social attention cues can induce shifts in tactile attention by combining a central noninformative eye-gaze cue with tactile targets presented to participants’ fingertips. Data from speeded detection, speeded discrimination, and signal detection tasks converged on the same conclusion: Eye-gaze-based orienting facilitates the processing of tactile targets at the gazed-at body location. In addition, we examined the effects of other directional cues, such as conventional arrows, and found that they can be equally effective. This is the first demonstration that social attention cues have consequences that reach beyond their own sensory modality.
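
Since one of the converging measures was a signal detection task, here is a minimal sketch of a standard d′ (sensitivity) calculation for tactile targets at gazed-at versus non-gazed-at locations; the hit and false-alarm counts are hypothetical, and the correction used is a generic one rather than the paper's own procedure.

```python
# Illustrative d-prime (sensitivity) computation for tactile target detection,
# separately for locations cued by gaze and locations that were not.
from scipy.stats import norm

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Standard d' = z(hit rate) - z(false-alarm rate), with a log-linear
    correction to avoid infinite z-scores when a rate is 0 or 1."""
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

# Made-up counts out of 40 target-present and 40 target-absent trials per location.
print(f"gazed-at location:     d' = {d_prime(34, 6, 5, 35):.2f}")
print(f"non-gazed-at location: d' = {d_prime(28, 12, 8, 32):.2f}")
```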


Experimental Brain Research | 2011

Searching for audiovisual correspondence in multiple speaker scenarios

Agnès Alsius; Salvador Soto-Faraco

A critical question in multisensory processing is how the constant flow of information arriving at our different senses is organized into coherent representations. Some authors claim that pre-attentive detection of inter-sensory correlations supports crossmodal binding, whereas other findings indicate that attention plays a crucial role. We used visual and auditory search tasks for speaking faces to address the role of selective spatial attention in audiovisual binding. Search efficiency amongst faces for the match with a voice declined with the number of faces being monitored concurrently, consistent with an attentive search mechanism. In contrast, search amongst auditory speech streams for the match with a face was independent of the number of streams being monitored concurrently, as long as localization was not required. We suggest that fundamental differences in the way auditory and visual information is encoded play a limiting role in crossmodal binding. Based on these unisensory limitations, we provide a unified explanation for several previous, apparently contradictory findings.
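
Search efficiency of the kind described above is typically summarized as the slope of response time against the number of items monitored, where a near-zero slope indicates set-size-independent search. The set sizes and mean RTs below are invented for illustration only.

```python
# Illustrative search slopes (ms per additional item) from hypothetical mean RTs.
import numpy as np

set_sizes = np.array([1, 2, 4])                    # number of faces / voices monitored

rt_visual_search   = np.array([950, 1240, 1820])   # searching faces for the voice match
rt_auditory_search = np.array([980, 1005, 1010])   # searching voices for the face match

slope_visual, _ = np.polyfit(set_sizes, rt_visual_search, 1)
slope_auditory, _ = np.polyfit(set_sizes, rt_auditory_search, 1)

print(f"visual search slope:   {slope_visual:.0f} ms/item (inefficient, attentive search)")
print(f"auditory search slope: {slope_auditory:.0f} ms/item (roughly set-size independent)")
```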


Neuropsychologia | 2015

The contribution of dynamic visual cues to audiovisual speech perception

Philip Jaekl; Ana Pesquita; Agnès Alsius; Kevin G. Munhall; Salvador Soto-Faraco

Seeing a speaker's facial gestures can significantly improve speech comprehension, especially in noisy environments. However, the nature of the visual information from the speaker's facial movements that is relevant for this enhancement is still unclear. Like auditory speech signals, visual speech signals unfold over time and contain both dynamic configural information and luminance-defined local motion cues; two information sources that are thought to engage anatomically and functionally separate visual systems. Whereas some past studies have highlighted the importance of local, luminance-defined motion cues in audiovisual speech perception, the contribution of dynamic configural information signalling changes in form over time has not yet been assessed. We therefore attempted to single out the contribution of dynamic configural information to audiovisual speech processing. To this end, we measured word identification performance in noise using unimodal auditory stimuli and audiovisual stimuli. In the audiovisual condition, speaking faces were presented as point-light displays achieved via motion capture of the original talker. Point-light displays could be isoluminant, to minimise the contribution of luminance-defined local motion information, or presented with added luminance contrast, allowing the combined effect of dynamic configural cues and local motion cues. Audiovisual enhancement was found in both the isoluminant and contrast-based luminance conditions compared to an auditory-only condition, demonstrating for the first time the specific contribution of dynamic configural cues to audiovisual speech improvement. These findings imply that globally processed changes in a speaker's facial shape contribute significantly to the perception of articulatory gestures and the analysis of audiovisual speech.


Multisensory Research | 2018

Forty Years After Hearing Lips and Seeing Voices: The McGurk Effect Revisited

Agnès Alsius; Martin Paré; Kevin G. Munhall

Since its discovery 40 years ago, the McGurk illusion has usually been cited as a paradigmatic case of multisensory binding in humans, and has been extensively used in speech perception studies as a proxy measure for audiovisual integration mechanisms. Despite the well-established practice of using the McGurk illusion as a tool for studying the mechanisms underlying audiovisual speech integration, the magnitude of the illusion varies enormously across studies. Furthermore, the processing of McGurk stimuli differs from congruent audiovisual processing at both the phenomenological and neural levels. This calls into question the suitability of the illusion as a tool to quantify the necessary and sufficient conditions under which audiovisual integration occurs in natural conditions. In this paper, we review some of the practical and theoretical issues related to the use of the McGurk illusion as an experimental paradigm. We believe that, without a richer understanding of the mechanisms involved in the processing of the McGurk effect, experimenters should be cautious when generalizing data generated with McGurk stimuli to matching audiovisual speech events.

Collaboration


Dive into Agnès Alsius's collaborations.

Top Co-Authors

Ruth Campbell

University College London


Takashi Mitsuya

University of Western Ontario
