
Publication


Featured research published by Jordi Navarra.


Experimental Brain Research | 2007

Attention to touch weakens audiovisual speech integration

Agnès Alsius; Jordi Navarra; Salvador Soto-Faraco

One of the classic examples of multisensory integration in humans occurs when speech sounds are combined with the sight of corresponding articulatory gestures. Despite the longstanding assumption that this kind of audiovisual binding operates in an attention-free mode, recent findings (Alsius et al. in Curr Biol, 15(9):839–843, 2005) suggest that audiovisual speech integration decreases when visual or auditory attentional resources are depleted. The present study addressed the generalization of this attention constraint by testing whether a similar decrease in multisensory integration is observed when attention demands are imposed on a sensory domain that is not involved in speech perception, such as touch. We measured the McGurk illusion in a dual-task paradigm involving a difficult tactile task. The results showed that the percentage of visually influenced responses to audiovisual stimuli was reduced when attention was diverted to a tactile task. This finding is attributed to a modulatory effect on audiovisual integration of speech mediated by supramodal attention limitations. We suggest that the interactions between the attentional system and crossmodal binding mechanisms may be much more extensive and dynamic than previous studies have suggested.


Neuroscience Letters | 2007

Adaptation to audiotactile asynchrony.

Jordi Navarra; Salvador Soto-Faraco; Charles Spence

Previous research has revealed the existence of perceptual mechanisms that compensate for slight temporal asynchronies between auditory and visual signals. We investigated whether temporal recalibration would also occur between auditory and tactile stimuli. Participants were exposed to streams of brief auditory and tactile stimuli presented in synchrony, or else with the auditory stimulus leading by 75 ms. After the exposure phase, the participants made temporal order judgments regarding pairs of auditory and tactile events occurring at varying stimulus onset asynchronies. The results showed that the minimal interval necessary to correctly resolve audiotactile temporal order was larger after exposure to the desynchronized streams than after exposure to the synchronous streams. This suggests the existence of a mechanism to compensate for audiotactile asynchronies that results in a widening of the temporal window for multisensory integration.


Attention, Perception, & Psychophysics | 2007

Discriminating languages by speech-reading

Salvador Soto-Faraco; Jordi Navarra; Whitney M. Weikum; Athena Vouloumanos; Núria Sebastián-Gallés; Janet F. Werker

The goal of this study was to explore the ability to discriminate languages using the visual correlates of speech (i.e., speech-reading). Participants were presented with silent video clips of an actor pronouncing two sentences (in Catalan and/or Spanish) and were asked to judge whether the sentences were in the same language or in different languages. Our results established that Spanish-Catalan bilingual speakers could discriminate running speech from their two languages on the basis of visual cues alone (Experiment 1). However, we found that this ability was critically restricted by linguistic experience, since Italian and English speakers who were unfamiliar with the test languages could not successfully discriminate the stimuli (Experiment 2). A test of Spanish monolingual speakers revealed that knowledge of only one of the two test languages was sufficient to achieve the discrimination, although at a lower level of accuracy than that seen in bilingual speakers (Experiment 3). Finally, we evaluated the ability to identify the language by speech-reading particularly distinctive words (Experiment 4). The results obtained are in accord with recent proposals arguing that the visual speech signal is rich in informational content, above and beyond what traditional accounts based solely on visemic confusion matrices would predict.


Journal of Experimental Psychology: Human Perception and Performance | 2005

The perception of second language sounds in early bilinguals: New evidence from an implicit measure

Jordi Navarra; Núria Sebastián-Gallés; Salvador Soto-Faraco

Previous studies have suggested that nonnative (L2) linguistic sounds are accommodated to native language (L1) phonemic categories. However, this conclusion may be compromised by the use of explicit discrimination tests. The present study provides an implicit measure of L2 phoneme discrimination in early bilinguals (Catalan and Spanish). Participants classified the 1st syllable of disyllabic stimuli embedded in lists where the 2nd, task-irrelevant, syllable could contain a Catalan contrastive variation (/ɛ/-/e/) or no variation. Catalan dominants responded more slowly in lists where the 2nd syllable could vary from trial to trial, suggesting an indirect effect of the /ɛ/-/e/ discrimination. Spanish dominants did not suffer this interference, performing indistinguishably from Spanish monolinguals. The present findings provide implicit evidence that even proficient bilinguals categorize L2 sounds according to their L1 representations.


Seeing and Perceiving | 2012

Spatial recoding of sound: Pitch-varying auditory cues modulate up/down visual spatial attention

Irune Fernández-Prieto; Fátima Vera-Constán; Joel García-Morera; Jordi Navarra

Previous studies suggest the existence of facilitatory effects between pitch and vertical space; for example, participants respond faster upwards/downwards while hearing a high/low-pitched tone, respectively (e.g., Occelli et al., 2009; Rusconi et al., 2006). Neuroimaging research has started to reveal the activation of parietal areas (e.g., the intraparietal sulcus, IPS) during the performance of various pitch-based musical tasks (see Foster and Zatorre, 2010a, 2010b). Since several areas in the parietal cortex (e.g., the IPS; see Chica et al., 2011) are strongly involved in orienting visual attention towards external events, we investigated the possible effects of perceiving pitch-varying stimuli (i.e., ‘ascending’ or ‘descending’ flutter sounds) on the spatial processing of visual stimuli. In a variation of the Posner cueing paradigm (Posner, 1980), participants performed a speeded detection task of a visual target that could appear at one of four different spatial positions (two above and two below the fixation point). Irrelevant ascending (200–700 Hz) or descending (700–200 Hz) flutter sounds were randomly presented 550 ms before the onset of the visual target. According to our results, faster reaction times were observed when the visual target appeared in a position (up/down) that was compatible with the ‘pitch direction’ (ascending or descending) of the previously-presented auditory ‘cueing’ stimulus. Our findings suggest that pitch-varying sounds are recoded spatially, thus modulating visual spatial attention.


I-perception | 2017

Does Language Influence the Vertical Representation of Auditory Pitch and Loudness?

Irune Fernández-Prieto; Charles Spence; Ferran Pons; Jordi Navarra

Higher frequency and louder sounds are associated with higher positions whereas lower frequency and quieter sounds are associated with lower locations. In English, “high” and “low” are used to label pitch, loudness, and spatial verticality. By contrast, different words are preferentially used, in Catalan and Spanish, for pitch (high: “agut/agudo”; low: “greu/grave”) and for loudness/verticality (high: “alt/alto”; low: “baix/bajo”). Thus, English and Catalan/Spanish differ in the spatial connotations for pitch. To analyze the influence of language on these crossmodal associations, a task was conducted in which English and Spanish/Catalan speakers had to judge whether a tone was higher or lower (in pitch or loudness) than a reference tone. The response buttons were located at crossmodally congruent or incongruent positions with respect to the probe tone. Crossmodal correspondences were evidenced in both language groups. However, English speakers showed greater effects for pitch, suggesting an influence of linguistic background.


PLOS ONE | 2013

Realigning thunder and lightning: temporal adaptation to spatiotemporally distant events.

Jordi Navarra; Irune Fernández-Prieto; Joel García-Morera

The brain is able to realign asynchronous signals that approximately coincide in both space and time. Given that many experience-based links between visual and auditory stimuli are established in the absence of spatiotemporal proximity, we investigated whether or not temporal realignment arises in these conditions. Participants received a 3-min exposure to visual and auditory stimuli that were separated by 706 ms and appeared either from the same (Experiment 1) or from different spatial positions (Experiment 2). A simultaneity judgment (SJ) task was administered right afterwards. Temporal realignment between vision and audition was observed, in both Experiments 1 and 2, when comparing the participants’ SJs after this exposure phase with those obtained after a baseline exposure to audiovisual synchrony. However, this effect was present only when the visual stimuli preceded the auditory stimuli during the exposure to asynchrony. A similar pattern of results (temporal realignment after exposure to visual-leading asynchrony but not after exposure to auditory-leading asynchrony) was obtained using temporal order judgments (TOJs) instead of SJs (Experiment 3). Taken together, these results suggest that temporal recalibration still occurs for visual and auditory stimuli that fall clearly outside the so-called temporal window for multisensory integration and appear from different spatial positions. This temporal realignment may be modulated by long-term experience with the kind of asynchrony (vision-leading) that we most frequently encounter in the outside world (e.g., while perceiving distant events).


Psychology of Music | 2017

The higher the pitch the larger its crossmodal influence on visuospatial processing

Irune Fernández-Prieto; Jordi Navarra

High-pitched sounds generate larger neural responses than low-pitched sounds. We investigated whether this neural difference has implications, at the cognitive level, for the “vertical” representation of pitch. Participants performed a speeded detection of visual targets that could appear at one of four different spatial positions. Rising or falling frequency sweeps were randomly presented before the visual target. Faster reaction times to visual targets appearing above (but not below) a central fixation point were observed after the presentation of rising frequencies. No significant effects were found for falling frequency sweeps or for visual targets presented below the fixation point. These results suggest that the difference in the level of arousal between rising and falling frequencies influences their capacity for generating spatial representations. The fact that no difference was found, in terms of crossmodal effects, between the two upper positions may indicate that this “spatial representation of pitch” is not specific to any particular spatial location but rather has a widespread influence over stimuli appearing in the upper visual field. The present findings are relevant for the study of music performance, the design of musical instruments, and research in areas where visual and auditory stimuli with certain complexity are combined (music in advertisements, movies, etc.).


Neuropsychologia | 2018

Seeing music: The perception of melodic ‘ups and downs’ modulates the spatial processing of visual stimuli.

Carlos Romero-Rivas; Fátima Vera-Constán; Sara Rodríguez-Cuadrado; Laura Puigcerver; Irune Fernández-Prieto; Jordi Navarra

Musical melodies have “peaks” and “valleys”. Although the vertical component of pitch and music is well-known, the mechanisms underlying its mental representation still remain elusive. We show evidence regarding the importance of previous experience with melodies for crossmodal interactions to emerge. The impact of these crossmodal interactions on other perceptual and attentional processes was also studied. Melodies including two tones with different frequency (e.g., E4 and D3) were repeatedly presented during the study. These melodies could either generate strong predictions (e.g., E4-D3-E4-D3-E4-[D3]) or not (e.g., E4-D3-E4-E4-D3-[?]). After the presentation of each melody, the participants had to judge the colour of a visual stimulus that appeared in a position that was, according to the traditional vertical connotations of pitch, either congruent (e.g., high-low-high-low-[up]), incongruent (high-low-high-low-[down]) or unpredicted with respect to the melody. Behavioural and electroencephalographic responses to the visual stimuli were obtained. Congruent visual stimuli elicited faster responses at the end of the experiment than at the beginning. Additionally, incongruent visual stimuli that broke the spatial prediction generated by the melody elicited larger P3b amplitudes (reflecting ‘surprise’ responses). Our results suggest that the passive (but repeated) exposure to melodies elicits spatial predictions that modulate the processing of other sensory events.

Highlights:
Listening to melodic ‘ups and downs’ elicits spatial (vertical) representations.
These representations modulate the spatial encoding of visual stimuli.
The spatial remapping of pitch occurs even in passive listening conditions.
This remapping takes place at relatively late stages of signal processing.


Smart Biomedical and Physiological Sensor Technology XV | 2018

FocusLocus: ADHD management gaming system for educational achievement and social inclusion

Tassos Kanellos; Adam Doulgerakis; Eftichia Georgiou; Maria Bessa; Argiro Vatakis; Áine Behan; Jordi Navarra; Stelios C. A. Thomopoulos; Jon Arambarri

Attention Deficit and Hyperactivity Disorder (ADHD) is associated with symptoms of inattention, hyperactivity, and impulsivity and affects a significant part of the population. Current treatment approaches entail high costs and are commonly based on stimulant medication, which may lead to undesirable side-effects. FocusLocus is an EU H2020 Innovation Action project that proposes a highly disruptive and innovative gamified monitoring and intervention programme for assisting children to manage and overcome ADHD symptoms. FocusLocus implements game mechanics that rely on cognitive training methods for mental and motor skill acquisition and behavioural change. The proposed programme is delivered through a multifaceted and adaptive gaming experience that permeates the child’s daily life and activities, spanning across virtual and physical space and comprising two modes: (a) a mobile tablet game for home use by individual users and (b) a multisensory mixed reality game for supervised sessions at specialised facilities (e.g., clinics). FocusLocus is designed to be highly personalised and adaptive to each child’s individual condition, symptoms, age, and character. By introducing advanced remote monitoring and management features, FocusLocus actively involves all stakeholders associated with children with ADHD (parents, clinicians, and special needs educators). FocusLocus employs an unobtrusive and multimodal sensing, assessment and monitoring methodology relying on (a) mobile device embedded sensors, (b) Electroencephalography (EEG) neurofeedback mechanisms, (c) Augmented Reality (AR) user tracking, (d) RFID-based object tracking for tangible user interaction, (e) in-game cognitive skill performance measurements, (f) cloud performance analytics, and (g) web-based secure access for remote profile monitoring and management.

Collaboration


Dive into Jordi Navarra's collaborations.

Top Co-Authors

Janet F. Werker
University of British Columbia

Ferran Pons
University of Barcelona