Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Marianne Latinus is active.

Publication


Featured research published by Marianne Latinus.


NeuroImage | 2015

The human voice areas: Spatial organization and inter-individual variability in temporal and extra-temporal cortices.

Cyril Pernet; Philip McAleer; Marianne Latinus; Krzysztof J. Gorgolewski; Ian Charest; Patricia E. G. Bestelmeyer; Rebecca Watson; David Fleming; Frances Crabbe; Mitchell Valdés-Sosa; Pascal Belin

fMRI studies increasingly examine functions and properties of non-primary areas of human auditory cortex. However, there is currently no standardized localization procedure to reliably identify specific areas across individuals, such as the standard ‘localizers’ available in the visual domain. Here we present an fMRI ‘voice localizer’ scan allowing rapid and reliable localization of the voice-sensitive ‘temporal voice areas’ (TVA) of human auditory cortex. We describe results obtained using this standardized localizer scan in a large cohort of normal adult subjects. Most participants (94%) showed bilateral patches of significantly greater response to vocal than non-vocal sounds along the superior temporal sulcus/gyrus (STS/STG). Individual activation patterns, although reproducible, showed high inter-individual variability in precise anatomical location. Cluster analysis of individual peaks from the large cohort highlighted three bilateral clusters of voice-sensitivity, or “voice patches”, along posterior (TVAp), mid (TVAm) and anterior (TVAa) STS/STG, respectively. A series of extra-temporal areas including bilateral inferior prefrontal cortex and amygdalae showed small, but reliable voice-sensitivity as part of a large-scale cerebral voice network. Stimuli for the voice localizer scan and probabilistic maps in MNI space are available for download.
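
As a rough illustration of the vocal > non-vocal contrast underlying such a localizer, the sketch below runs a per-voxel paired t-test on simulated block-wise response estimates. The data, variable names, and the simple Bonferroni-style threshold are assumptions for illustration, not the pipeline used in the paper.

```python
import numpy as np
from scipy import stats

# Hypothetical inputs: per-block response estimates (e.g. GLM betas) for each
# voxel, shape (n_blocks, n_voxels), for vocal and non-vocal sound blocks.
rng = np.random.default_rng(0)
n_blocks, n_voxels = 20, 5000
beta_vocal = rng.normal(0.5, 1.0, size=(n_blocks, n_voxels))
beta_nonvocal = rng.normal(0.0, 1.0, size=(n_blocks, n_voxels))

# Vocal > non-vocal contrast: paired t-test per voxel across blocks.
t_map, p_map = stats.ttest_rel(beta_vocal, beta_nonvocal, axis=0)

# One-sided test (greater response to vocal sounds) with a Bonferroni-style
# threshold purely for illustration; real analyses use proper FWE/FDR control.
p_one_sided = p_map / 2
voice_sensitive = (t_map > 0) & (p_one_sided < 0.05 / n_voxels)
print(f"{voice_sensitive.sum()} voxels pass the vocal > non-vocal contrast")
```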


Current Biology | 2013

Norm-based coding of voice identity in human auditory cortex

Marianne Latinus; Philip McAleer; Patricia E. G. Bestelmeyer; Pascal Belin

Listeners exploit small interindividual variations around a generic acoustical structure to discriminate and identify individuals from their voice—a key requirement for social interactions. The human brain contains temporal voice areas (TVA) [1] involved in an acoustic-based representation of voice identity [2–6], but the underlying coding mechanisms remain unknown. Indirect evidence suggests that identity representation in these areas could rely on a norm-based coding mechanism [4, 7–11]. Here, we show by using fMRI that voice identity is coded in the TVA as a function of acoustical distance to two internal voice prototypes (one male, one female)—approximated here by averaging a large number of same-gender voices by using morphing [12]. Voices more distant from their prototype are perceived as more distinctive and elicit greater neuronal activity in voice-sensitive cortex than closer voices—a phenomenon not merely explained by neuronal adaptation [13, 14]. Moreover, explicit manipulations of distance-to-mean by morphing voices toward (or away from) their prototype elicit reduced (or enhanced) neuronal activity. These results indicate that voice-sensitive cortex integrates relevant acoustical features into a complex representation referenced to idealized male and female voice prototypes. More generally, they shed light on remarkable similarities in cerebral representations of facial and vocal identity.
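
The distance-to-prototype idea can be illustrated with a toy computation, assuming each voice is reduced to an acoustic feature vector and the prototype is the same-gender average. The features and the Euclidean metric below are illustrative assumptions rather than the study's actual parameterization.

```python
import numpy as np

# Hypothetical acoustic feature vectors (rows = voices, columns = features such
# as mean f0, a formant measure, harmonics-to-noise ratio) for each gender.
rng = np.random.default_rng(1)
male_voices = rng.normal(loc=[120.0, 1000.0, 12.0], scale=[15.0, 80.0, 2.0], size=(32, 3))
female_voices = rng.normal(loc=[210.0, 1100.0, 14.0], scale=[20.0, 90.0, 2.0], size=(32, 3))

# Gender-specific prototypes, approximated by averaging many same-gender voices.
male_prototype = male_voices.mean(axis=0)
female_prototype = female_voices.mean(axis=0)

# Distance-to-prototype: voices farther from their prototype are predicted to be
# perceived as more distinctive and to drive stronger responses in voice-sensitive cortex.
dist_male = np.linalg.norm(male_voices - male_prototype, axis=1)
dist_female = np.linalg.norm(female_voices - female_prototype, axis=1)
print(dist_male.round(1))
```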


Cerebral Cortex | 2011

Learning-Induced Changes in the Cerebral Processing of Voice Identity

Marianne Latinus; Frances Crabbe; Pascal Belin

Temporal voice areas showing larger activity for vocal than for non-vocal sounds have been identified along the superior temporal sulcus (STS); further voice-sensitive areas have been described in frontal and parietal lobes. Yet, the role of voice-sensitive regions in representing voice identity remains unclear. Using a functional magnetic resonance adaptation design, we aimed to disentangle acoustic-based from identity-based representations of voices. Sixteen participants were scanned while listening to pairs of voices drawn from morphed continua between 2 initially unfamiliar voices, before and after a voice learning phase. In a given pair, the first and second stimuli could be identical or acoustically different and, at the second session, perceptually similar or different. At both sessions, right mid-STS/superior temporal gyrus (STG) and superior temporal pole (sTP) showed sensitivity to acoustical changes. Critically, voice learning induced changes in the acoustical processing of voices in inferior frontal cortices (IFCs). At the second session only, right IFC and left cingulate gyrus showed sensitivity to changes in perceived identity. The processing of voice identity appears to be subserved by a large network of brain areas ranging from the sTP, involved in an acoustic-based representation of unfamiliar voices, to areas along the convexity of the IFC for identity-related processing of familiar voices.


Journal of Neuroscience Methods | 2015

Cluster-based computational methods for mass univariate analyses of event-related brain potentials/fields: A simulation study.

Cyril Pernet; Marianne Latinus; Thomas E. Nichols; Guillaume A. Rousselet

Background: In recent years, analyses of event-related potentials/fields have moved from the selection of a few components and peaks to a mass-univariate approach in which the whole data space is analyzed. Such extensive testing increases the number of false positives, and correction for multiple comparisons is needed.
Method: Here we review all cluster-based correction methods for multiple comparisons (cluster-height, cluster-size, cluster-mass, and threshold-free cluster enhancement, TFCE), in conjunction with two computational approaches (permutation and bootstrap).
Results: Data-driven Monte-Carlo simulations comparing two conditions within subjects (two-sample Student's t-test) showed that, on average, all cluster-based methods, using permutation or bootstrap alike, control the family-wise error rate (FWER) well, with a few caveats.
Conclusions: (i) A minimum of 800 iterations is necessary to obtain stable results; (ii) below 50 trials, bootstrap methods are too conservative; (iii) for low critical family-wise error rates (e.g. p = 1%), permutations can be too liberal; (iv) TFCE controls the type 1 error rate best with an attenuated extent parameter (i.e. power < 1).
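
As an illustration of one of the methods evaluated here, the sketch below implements a cluster-mass test with sign-flip permutations on simulated single-channel difference waves. The thresholds, data, and one-dimensional setting are simplifying assumptions; real ERP/ERF analyses cluster over channels as well as time.

```python
import numpy as np
from scipy import stats

def cluster_masses(t_vals, threshold):
    """Sum of |t| within each contiguous supra-threshold cluster of a 1D series."""
    masses, current = [], 0.0
    for t in t_vals:
        if abs(t) > threshold:
            current += abs(t)
        elif current:
            masses.append(current)
            current = 0.0
    if current:
        masses.append(current)
    return masses

# Simulated within-subject difference waves (condition A minus condition B),
# shape (n_subjects, n_timepoints), with an effect injected at 100-140.
rng = np.random.default_rng(2)
n_subjects, n_times = 20, 300
diff = rng.normal(0.0, 1.0, size=(n_subjects, n_times))
diff[:, 100:140] += 0.8

# Observed t-values and cluster masses above a cluster-forming threshold.
t_obs, _ = stats.ttest_1samp(diff, 0.0, axis=0)
t_crit = stats.t.ppf(0.975, df=n_subjects - 1)
obs_masses = cluster_masses(t_obs, t_crit)

# Permutation null: randomly flip the sign of each subject's difference wave
# and keep the maximum cluster mass of each permuted dataset.
n_perm = 1000
max_null = np.zeros(n_perm)
for i in range(n_perm):
    signs = rng.choice([-1.0, 1.0], size=(n_subjects, 1))
    t_perm, _ = stats.ttest_1samp(diff * signs, 0.0, axis=0)
    max_null[i] = max(cluster_masses(t_perm, t_crit), default=0.0)

# Observed clusters exceeding the 95th percentile of the max-null distribution
# are declared significant at a family-wise error rate of about 5%.
fwer_threshold = np.quantile(max_null, 0.95)
print([round(m, 1) for m in obs_masses if m > fwer_threshold])
```

Using the maximum cluster mass across the whole data space for the null distribution is what gives the family-wise control discussed in the abstract: any cluster beating that maximum would also beat every other cluster arising by chance.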


The Journal of Neuroscience | 2014

Crossmodal Adaptation in Right Posterior Superior Temporal Sulcus during Face–Voice Emotional Integration

Rebecca Watson; Marianne Latinus; Takao Noguchi; Oliver Garrod; Frances Crabbe; Pascal Belin

The integration of emotional information from the face and voice of other persons is known to be mediated by a number of “multisensory” cerebral regions, such as the right posterior superior temporal sulcus (pSTS). However, whether multimodal integration in these regions is attributable to interleaved populations of unisensory neurons responding to face or voice, or rather to multimodal neurons receiving input from the two modalities, is not fully clear. Here, we examine this question using functional magnetic resonance adaptation and dynamic audiovisual stimuli in which emotional information was manipulated parametrically and independently in the face and voice via morphing between angry and happy expressions. Healthy human adult subjects were scanned while performing a happy/angry emotion categorization task on a series of such stimuli included in a fast event-related, continuous carryover design. Subjects integrated both face and voice information when categorizing emotion—although there was a greater weighting of face information—and showed behavioral adaptation effects both within and across modality. Adaptation also occurred at the neural level: in addition to modality-specific adaptation in visual and auditory cortices, we observed for the first time a crossmodal adaptation effect. Specifically, fMRI signal in the right pSTS was reduced in response to a stimulus in which facial emotion was similar to the vocal emotion of the preceding stimulus. These results suggest that the integration of emotional information from face and voice in the pSTS involves a detectable proportion of bimodal neurons that combine inputs from visual and auditory cortices.


Cortex | 2014

People-selectivity, audiovisual integration and heteromodality in the superior temporal sulcus

Rebecca Watson; Marianne Latinus; Ian Charest; Frances Crabbe; Pascal Belin

The superior temporal sulcus (STS) has been implicated in a number of studies, including those investigating face perception, voice perception, and face–voice integration. However, the nature of the STS preference for these ‘social stimuli’ remains unclear, as does the location within the STS for specific types of information processing. The aim of this study was to directly examine properties of the STS in terms of selective response to social stimuli. We used functional magnetic resonance imaging (fMRI) to scan participants whilst they were presented with auditory, visual, or audiovisual stimuli of people or objects, with the intention of localising areas preferring both faces and voices (i.e., ‘people-selective’ regions) and audiovisual regions specifically involved in integrating person-related information. Results highlighted a ‘people-selective, heteromodal’ region in the trunk of the right STS which was activated by both faces and voices, and a restricted portion of the right posterior STS (pSTS) with an integrative preference for information from people, as compared to objects. These results point towards the dedicated role of the STS as a ‘social-information processing’ centre.
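
One simple way to formalise 'people-selective' regions is a minimum-statistic conjunction of face > object and voice > object contrasts; the sketch below assumes pre-computed group t-maps and an illustrative threshold, and is not necessarily the analysis used in the paper.

```python
import numpy as np
from scipy import stats

# Hypothetical group-level t-maps (one value per voxel) for two contrasts.
rng = np.random.default_rng(3)
n_voxels = 10000
t_face_gt_object = rng.normal(0.0, 1.5, n_voxels)
t_voice_gt_object = rng.normal(0.0, 1.5, n_voxels)

# Minimum-statistic conjunction: a voxel counts as 'people-selective' only if
# BOTH contrasts exceed the threshold (testing against the conjunction null).
df = 19
t_thresh = stats.t.ppf(1 - 0.001, df)   # illustrative uncorrected threshold
conjunction = np.minimum(t_face_gt_object, t_voice_gt_object)
people_selective = conjunction > t_thresh
print(f"{people_selective.sum()} voxels respond to both faces and voices")
```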


The Journal of Neuroscience | 2014

Adaptation to vocal expressions reveals multistep perception of auditory emotion

Patricia E. G. Bestelmeyer; Pierre Maurage; Julien Rouger; Marianne Latinus; Pascal Belin

The human voice carries speech as well as important nonlinguistic signals that influence our social interactions. Among these cues that impact our behavior and communication with other people is the perceived emotional state of the speaker. A theoretical framework for the neural processing stages of emotional prosody has suggested that auditory emotion is perceived in multiple steps (Schirmer and Kotz, 2006) involving low-level auditory analysis and integration of the acoustic information followed by higher-level cognition. Empirical evidence for this multistep processing chain, however, is still sparse. We examined this question using functional magnetic resonance imaging and a continuous carry-over design (Aguirre, 2007) to measure brain activity while volunteers listened to non-speech affective vocalizations morphed on a continuum between anger and fear. Analyses dissociated neuronal adaptation effects induced by similarity in perceived emotional content between consecutive stimuli from those induced by their acoustic similarity. We found that bilateral voice-sensitive auditory regions as well as right amygdala coded the physical difference between consecutive stimuli. In contrast, activity in bilateral anterior insulae, medial superior frontal cortex, precuneus, and subcortical regions such as bilateral hippocampi depended predominantly on the perceptual difference between morphs. Our results suggest that the processing of vocal affect recognition is a multistep process involving largely distinct neural networks. Amygdala and auditory areas predominantly code emotion-related acoustic information while more anterior insular and prefrontal regions respond to the abstract, cognitive representation of vocal affect.
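
The dissociation described above rests on modelling two trial-to-trial distances, one acoustic and one perceptual, as separate regressors in the carry-over design. A minimal sketch under assumed (hypothetical) acoustic features and behavioural ratings:

```python
import numpy as np

# Hypothetical trial sequence from a continuous carry-over design: each morph
# has an acoustic feature vector and a perceived-emotion score (0 = anger,
# 1 = fear); both are placeholders for the real measurements.
rng = np.random.default_rng(4)
n_trials = 120
acoustic = rng.normal(size=(n_trials, 5))          # e.g. pitch/spectral features
perceived = np.clip(rng.random(n_trials), 0, 1)    # e.g. behavioural morph ratings

# Adaptation regressors: distance between each stimulus and the one before it.
acoustic_dist = np.r_[0.0, np.linalg.norm(np.diff(acoustic, axis=0), axis=1)]
perceptual_dist = np.r_[0.0, np.abs(np.diff(perceived))]

# Entering both (z-scored) in the same GLM lets regions coding physical change
# be dissociated from regions coding perceived emotional change.
design = np.column_stack([
    (acoustic_dist - acoustic_dist.mean()) / acoustic_dist.std(),
    (perceptual_dist - perceptual_dist.mean()) / perceptual_dist.std(),
])
print(design.shape)
```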


Cerebral Cortex | 2013

Cerebral Processing of Voice Gender Studied Using a Continuous Carryover fMRI Design

Ian Charest; Cyril Pernet; Marianne Latinus; Frances Crabbe; Pascal Belin

Normal listeners effortlessly determine a person's gender by voice, but the cerebral mechanisms underlying this ability remain unclear. Here, we demonstrate 2 stages of cerebral processing during voice gender categorization. Using voice morphing along with an adaptation-optimized functional magnetic resonance imaging design, we found that secondary auditory cortex, including the anterior part of the temporal voice areas in the right hemisphere, responded primarily to acoustical distance from the previously heard stimulus. In contrast, a network of bilateral regions involving inferior prefrontal and anterior and posterior cingulate cortex reflected perceived stimulus ambiguity. These findings suggest that voice gender recognition involves neuronal populations along the auditory ventral stream responsible for auditory feature extraction, operating in concert with the prefrontal cortex in voice gender perception.


Frontiers in Psychology | 2011

Anti-Voice Adaptation Suggests Prototype-Based Coding of Voice Identity

Marianne Latinus; Pascal Belin

We used perceptual aftereffects induced by adaptation with anti-voice stimuli to investigate voice identity representations. Participants learned a set of voices and were then tested on a voice identification task with vowel stimuli morphed between identities, after different conditions of adaptation. In Experiment 1, participants chose the identity opposite to the adapting anti-voice significantly more often than the other two identities (e.g., after being adapted to anti-A, they identified the average voice as A). In Experiment 2, participants showed a bias for identities opposite to the adaptor specifically for anti-voice adaptors, but not for non-anti-voice adaptors. These results are strikingly similar to adaptation aftereffects observed for facial identity. They are compatible with a representation of individual voice identities in a multidimensional perceptual voice space referenced on a voice prototype.
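
The anti-voice construction can be illustrated geometrically: under the assumption of a prototype-referenced voice space, an anti-voice is the reflection of a voice identity through the prototype. The coordinates below are purely hypothetical.

```python
import numpy as np

# Hypothetical coordinates of three learned voice identities (A, B, C) in a
# low-dimensional perceptual voice space.
identities = np.array([
    [1.0, 0.4, -0.2],
    [-0.6, 1.1, 0.3],
    [0.2, -0.9, 0.8],
])

# Prototype (average voice) at the centre of the space.
prototype = identities.mean(axis=0)

# Anti-voice of identity A: reflect A through the prototype, so the prototype
# lies midway along the trajectory from anti-A to A.
anti_A = 2 * prototype - identities[0]

# Adapting to anti-A should bias perception of the average voice towards A,
# the identity at the opposite end of that trajectory.
print(prototype.round(2), anti_A.round(2))
```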


BMC Neuroscience | 2010

Top-down and bottom-up modulation in processing bimodal face/voice stimuli

Marianne Latinus; Rufin VanRullen; Margot J. Taylor

Background: Processing of multimodal information is a critical capacity of the human brain, with classic studies showing bimodal stimulation either facilitating or interfering with perceptual processing. Comparing activity to congruent and incongruent bimodal stimuli can reveal sensory dominance in particular cognitive tasks.
Results: We investigated audiovisual interactions driven by stimulus properties (bottom-up influences) or by task (top-down influences) on congruent and incongruent simultaneously presented faces and voices while ERPs were recorded. Subjects performed gender categorisation, directing attention either to faces or to voices, and also judged whether the face/voice stimuli were congruent in terms of gender. Behaviourally, the unattended modality affected processing in the attended modality: the disruption was greater for attended voices. ERPs revealed top-down modulations of early brain processing (30-100 ms) over unisensory cortices. No effects were found on the N170 or VPP, but from 180-230 ms larger right frontal activity was seen for incongruent than congruent stimuli.
Conclusions: Our data demonstrate that, in a gender categorisation task, the processing of faces dominates over the processing of voices. Brain activity showed different modulation by top-down and bottom-up information: top-down influences modulated early brain activity, whereas bottom-up interactions occurred relatively late.

Collaboration


Dive into Marianne Latinus's collaborations.

Top Co-Authors

Pascal Belin
Université de Montréal

Ian Charest
Cognition and Brain Sciences Unit

Cyril Pernet
University of Edinburgh

Sylvie Roux
François Rabelais University