
Publications


Featured research published by Salvador Soto-Faraco.


Trends in Cognitive Sciences | 2010

The multifaceted interplay between attention and multisensory integration

Durk Talsma; Daniel Senkowski; Salvador Soto-Faraco; Marty G. Woldorff

Multisensory integration has often been characterized as an automatic process. Recent findings indicate that multisensory integration can occur across various stages of stimulus processing that are linked to, and can be modulated by, attention. Stimulus-driven, bottom-up mechanisms induced by crossmodal interactions can automatically capture attention towards multisensory events, particularly when competition to focus elsewhere is relatively low. Conversely, top-down attention can facilitate the integration of multisensory inputs and lead to a spread of attention across sensory modalities. These findings point to a more intimate and multifaceted interplay between attention and multisensory integration than was previously thought. We review developments in the current understanding of the interactions between attention and multisensory processing, and propose a framework that unifies previous, apparently discordant, findings.


Cognition | 2005

Speech segmentation by statistical learning depends on attention.

Juan M. Toro; Scott Sinnett; Salvador Soto-Faraco

We addressed the hypothesis that word segmentation based on statistical regularities occurs without the need for attention. Participants were presented with a stream of artificial speech in which the only cue to extract the words was the presence of statistical regularities between syllables. Half of the participants were asked to passively listen to the speech stream, while the other half were asked to perform a concurrent task. In Experiment 1, the concurrent task was performed on a separate auditory stream (noises); in Experiment 2 it was performed on a visual stream (pictures); and in Experiment 3 it was performed on pitch changes in the speech stream itself. Invariably, passive listening to the speech stream led to successful word extraction (as measured by a recognition test presented after the exposure phase), whereas diverted attention led to a dramatic impairment in word segmentation performance. These findings demonstrate that when attentional resources are depleted, word segmentation based on statistical regularities is seriously compromised.


Proceedings of the National Academy of Sciences of the United States of America | 2009

Narrowing of intersensory speech perception in infancy

Ferran Pons; David J. Lewkowicz; Salvador Soto-Faraco; Núria Sebastián-Gallés

The conventional view is that perceptual/cognitive development is an incremental process of acquisition. Several striking findings have revealed, however, that the sensitivity to non-native languages, faces, vocalizations, and music that is present early in life declines as infants acquire experience with native perceptual inputs. In the language domain, the decline in sensitivity is reflected in a process of perceptual narrowing that is thought to play a critical role during the acquisition of a native-language phonological system. Here, we provide evidence that such a decline also occurs in infant response to multisensory speech. We found that infant intersensory response to a non-native phonetic contrast narrows between 6 and 11 months of age, suggesting that the perceptual system becomes increasingly more tuned to key native-language audiovisual correspondences. Our findings lend support to the notion that perceptual narrowing is a domain-general as well as a pan-sensory developmental process.


Cognition | 1999

Online processing of native and non-native phonemic contrasts in early bilinguals

Núria Sebastián-Gallés; Salvador Soto-Faraco

There is considerable debate about whether bilinguals can distinguish L2 phonemic contrasts as efficiently as first language speakers can. To test this issue, a group of highly proficient Spanish-dominant Catalan-Spanish bilinguals (who had been exposed to Catalan between the ages of 3 and 4, but who, previous to this age, had been exposed only to Spanish) and another group of Catalan-dominant bilinguals (who had been exposed to Catalan from birth) were compared in a gating task. We developed a variation of the gating procedure that included a two-alternative forced choice test after each fragment was played. The differences between the two alternatives consisted of phonemic contrasts existing in Catalan but not in Spanish. Four contrasts were tested: two vocalic contrasts [symbols: see text] and two consonantal contrasts [symbols: see text]. The results showed that Spanish-dominant bilinguals, even the subset who were able to make correct identifications at the last gate, systematically performed worse than the group of Catalan-dominant bilinguals, needing longer portions of the signal to be able to correctly identify the stimuli. We argue that these results support the hypothesis that L1 shapes the perceptual system at early stages of development in such a way that it will determine the perception of non-native phonemic contrasts, even if there is extensive and early exposure to L2.


Current Biology | 2010

The Posterior Parietal Cortex Remaps Touch into External Space

Elena Azañón; Matthew R. Longo; Salvador Soto-Faraco; Patrick Haggard

Localizing tactile events in external space is required for essential functions such as orienting, haptic exploration, and goal-directed action in peripersonal space. In order to map somatosensory input into a spatiotopic representation, information about skin location must be integrated with proprioceptive information about body posture. We investigated the neural bases of this tactile remapping mechanism in humans by disrupting neural activity in the putative human homolog of the monkey ventral intraparietal area (hVIP), within the right posterior parietal cortex (rPPC), which is thought to house external spatial representations. Participants judged the elevation of touches on their (unseen) forearm relative to touches on their face. Arm posture was passively changed along the vertical axis, so that elevation judgments required the use of an external reference frame. Single-pulse transcranial magnetic stimulation (TMS) over the rPPC significantly impaired performance compared to a control site (vertex). Crucially, proprioceptive judgments of arm elevation or tactile localization on the skin remained unaffected by rPPC TMS. This selective disruption of tactile remapping suggests a distinct computational process dissociable from pure proprioceptive and somatosensory localization. Furthermore, this finding highlights the causal role of human PPC, putatively VIP, in remapping touch into external space.


Attention Perception & Psychophysics | 2007

Visual dominance and attention: The Colavita effect revisited

Scott Sinnett; Charles Spence; Salvador Soto-Faraco

Under many conditions, humans display a robust tendency to rely more on visual information than on other forms of sensory information. Colavita (1974) illustrated this visual dominance effect by showing that naive observers typically fail to respond to clearly suprathreshold tones if these are presented simultaneously with a visual target flash. In the present study, we demonstrate that visual dominance influences performance under more complex stimulation conditions and address the role played by attention in mediating this effect. In Experiment 1, we show the Colavita effect in the simple speeded detection of line drawings and naturalistic sounds, whereas in Experiment 2 we demonstrate visual dominance when the task targets (auditory, visual, or bimodal combinations) are embedded among continuous streams of irrelevant distractors. In Experiments 3–5, we address the consequences of varying the probability of occurrence of targets in each sensory modality. In Experiment 6, we further investigate the role played by attention on visual dominance by manipulating perceptual load in either the visual or the auditory modality. Our results demonstrate that selective attention to a particular sensory modality can modulate—although not completely reverse—visual dominance as illustrated by the Colavita effect.


Quarterly Journal of Experimental Psychology | 2002

Modality-specific auditory and visual temporal processing deficits

Salvador Soto-Faraco; Charles Spence

We studied the attentional blink (AB) and repetition blindness (RB) effects using an audio-visual presentation procedure designed to overcome several potential methodological confounds in previous cross-modal research. In Experiment 1, two target digits were embedded amongst letter distractors in two concurrent streams (one visual and the other auditory) presented from the same spatial location. Targets appeared in either modality unpredictably at different temporal lags, and the participants' task was to recall the digits at the end of the trial. We evaluated both AB and RB for pairs of targets presented in either the same or different modalities. Under these conditions both AB and RB were observed in vision, AB but not RB was observed in audition, and there was no evidence of AB or RB cross-modally from audition to vision or vice versa. In Experiment 2, we further investigated the AB by including Lag 1 items and observed Lag 1 sparing, thus ruling out the possibility that the observed effects were due to perceptual and/or conceptual masking. Our results support a distinction between modality-specific interference at the attentional selection stage and modality-independent interference at later processing stages. They also provide a new dissociation between the AB and RB.


Experimental Brain Research | 2007

Attention to touch weakens audiovisual speech integration

Agnès Alsius; Jordi Navarra; Salvador Soto-Faraco

One of the classic examples of multisensory integration in humans occurs when speech sounds are combined with the sight of corresponding articulatory gestures. Despite the longstanding assumption that this kind of audiovisual binding operates in an attention-free mode, recent findings (Alsius et al. in Curr Biol, 15(9):839–843, 2005) suggest that audiovisual speech integration decreases when visual or auditory attentional resources are depleted. The present study addressed the generalization of this attention constraint by testing whether a similar decrease in multisensory integration is observed when attention demands are imposed on a sensory domain that is not involved in speech perception, such as touch. We measured the McGurk illusion in a dual-task paradigm involving a difficult tactile task. The results showed that the percentage of visually influenced responses to audiovisual stimuli was reduced when attention was diverted to the tactile task. This finding is attributed to a modulatory effect on audiovisual integration of speech mediated by supramodal attention limitations. We suggest that the interactions between the attentional system and crossmodal binding mechanisms may be much more extensive and dynamic than suggested by previous studies.


Acta Psychologica | 2008

The co-occurrence of multisensory competition and facilitation.

Scott Sinnett; Salvador Soto-Faraco; Charles Spence

Previous studies of multisensory integration have often stressed the beneficial effects that may arise when information concerning an event arrives via different sensory modalities at the same time, as, for example, exemplified by research on the redundant target effect (RTE). By contrast, studies of the Colavita visual dominance effect (e.g., [Colavita, F. B. (1974). Human sensory dominance. Perception & Psychophysics, 16, 409-412]) highlight the inhibitory consequences of the competition between signals presented simultaneously in different sensory modalities instead. Although both the RTE and the Colavita effect are thought to occur at early sensory levels and the stimulus conditions under which they are typically observed are very similar, the interplay between these two opposing behavioural phenomena (facilitation vs. competition) has yet to be addressed empirically. We hypothesized that the dissociation may reflect two of the fundamentally different ways in which humans can perceive concurrent auditory and visual stimuli. In Experiment 1, we demonstrated both multisensory facilitation (RTE) and the Colavita visual dominance effect using exactly the same audiovisual displays, by simply changing the task from a speeded detection task to a speeded modality discrimination task. Meanwhile, in Experiment 2, the participants exhibited multisensory facilitation when responding to visual targets and multisensory inhibition when responding to auditory targets while keeping the task constant. These results therefore indicate that both multisensory facilitation and inhibition can be demonstrated in reaction to the same bimodal event.


Quarterly Journal of Experimental Psychology | 2006

Manipulating inattentional blindness within and across sensory modalities

Scott Sinnett; Albert Costa; Salvador Soto-Faraco

People often fail to consciously perceive visual events that are outside the focus of attention, a phenomenon referred to as inattentional blindness or IB (i.e., Mack & Rock, 1998). Here, we investigated IB for words within and across sensory modalities (visually and auditorily) in order to assess whether dividing attention across different senses has the same consequences as dividing attention within an individual sensory modality. Participants were asked to monitor a rapid stream of pictures or sounds presented concurrently with task-irrelevant words (spoken or written). A word recognition test was used to measure the processing for unattended words compared to word recognition levels after explicitly monitoring the word stream. We were able to produce high levels of IB for visually and auditorily presented words under unimodal conditions (Experiment 1) as well as under crossmodal conditions (Experiment 2). A further manipulation revealed, however, that IB is less prevalent when attention is divided across modalities than within the same modality (Experiment 3). These findings are explained in terms of the attentional load hypothesis and suggest that, contrary to some claims, attention resources are to a certain extent shared across sensory modalities.

Collaboration


Dive into Salvador Soto-Faraco's collaborations.

Top Co-Authors

Alan Kingstone (University of British Columbia)

Scott Sinnett (University of Hawaii at Manoa)