Network


Latest external collaborations at the country level.

Hotspot


Research topics where Nathalie Gosselin is active.

Publication


Featured research published by Nathalie Gosselin.


Current Biology | 2009

Universal Recognition of Three Basic Emotions in Music

Thomas Fritz; Sebastian Jentschke; Nathalie Gosselin; Daniela Sammler; Isabelle Peretz; Robert Turner; Angela D. Friederici; Stefan Koelsch

It has long been debated which aspects of music perception are universal and which develop only after exposure to a specific musical culture. Here, we report a cross-cultural study with participants from a native African population (Mafa) and Western participants, with both groups being naive to the music of the other culture. Experiment 1 investigated the ability to recognize three basic emotions (happy, sad, scared/fearful) expressed in Western music. Results show that the Mafa recognized happy, sad, and scared/fearful Western music excerpts above chance, indicating that the expression of these basic emotions in Western music can be recognized universally. Experiment 2 examined how a spectral manipulation of original, naturalistic music affects the perceived pleasantness of music in Western as well as in Mafa listeners. The spectral manipulation modified, among other factors, the sensory dissonance of the music. The data show that both groups preferred original Western music and also original Mafa music over their spectrally manipulated versions. It is likely that the sensory dissonance produced by the spectral manipulation was at least partly responsible for this effect, suggesting that consonance and permanent sensory dissonance universally influence the perceived pleasantness of music.


Cognition & Emotion | 2008

Happy, sad, scary and peaceful musical excerpts for research on emotions

Sandrine Vieillard; Isabelle Peretz; Nathalie Gosselin; Stéphanie Khalfa; Lise Gagnon; Bernard Bouchard

Three experiments were conducted in order to validate 56 musical excerpts that conveyed four intended emotions (happiness, sadness, threat and peacefulness). In Experiment 1, the musical clips were rated in terms of how clearly the intended emotion was portrayed, and for valence and arousal. In Experiment 2, a gating paradigm was used to evaluate the time course of emotion recognition. In Experiment 3, a dissimilarity judgement task and multidimensional scaling analysis were used to probe emotional content with no emotional labels. The results showed that the emotions are easily recognised and discriminated on the basis of valence and arousal, with relative immediacy. Happy and sad excerpts were identified after the presentation of fewer than three musical events. With no labelling, emotion discrimination remained highly accurate and could be mapped onto energetic and tense dimensions. The present study provides suitable musical material for research on emotions.


Neuropsychologia | 2007

Amygdala damage impairs emotion recognition from music.

Nathalie Gosselin; Isabelle Peretz; Erica L. Johnsen; Ralph Adolphs

The role of the amygdala in recognition of danger is well established for visual stimuli such as faces. A similar role in another class of emotionally potent stimuli, music, has been recently suggested by the study of epileptic patients with unilateral resection of the anteromedian part of the temporal lobe [Gosselin, N., Peretz, I., Noulhiane, M., Hasboun, D., Beckett, C., & Baulac, M., et al. (2005). Impaired recognition of scary music following unilateral temporal lobe excision. Brain, 128(Pt 3), 628-640]. The goal of the present study was to assess the specific role of the amygdala in the recognition of fear from music. To this aim, we investigated a rare subject, S.M., who has complete bilateral damage relatively restricted to the amygdala and not encompassing other sectors of the temporal lobe. In Experiment 1, S.M. and four matched controls were asked to rate the intensity of fear, peacefulness, happiness, and sadness from computer-generated instrumental music purposely created to express those emotions. Subjects also rated the arousal and valence of each musical stimulus. An error detection task assessed basic auditory perceptual function. S.M. performed normally in this perceptual task, but was selectively impaired in the recognition of scary and sad music. In contrast, her recognition of happy music was normal. Furthermore, S.M. judged the scary music to be less arousing and the peaceful music less relaxing than did the controls. Overall, the pattern of impairment in S.M. is similar to that previously reported in patients with unilateral anteromedial temporal lobe damage. S.M.'s impaired emotional judgments occur in the face of otherwise intact processing of musical features that are emotionally determinant. The use of tempo and mode cues in distinguishing happy from sad music was also spared in S.M. Thus, the amygdala appears to be necessary for emotional processing of music rather than for the perceptual processing itself.


Annals of the New York Academy of Sciences | 2009

Music lexical networks: the cortical organization of music recognition.

Isabelle Peretz; Nathalie Gosselin; Pascal Belin; Robert J. Zatorre; Jane Plailly; Barbara Tillmann

Successful recognition of a familiar tune depends on a selection procedure that takes place in a memory system containing all the representations of the specific musical phrases to which one has been exposed during one's lifetime. We refer to this memory system as the musical lexicon. The goal of the study was to identify its neural correlates. The brains of students with little musical training were scanned with functional magnetic resonance imaging (fMRI) while they listened to familiar musical themes, unfamiliar music, and random tones. The familiar themes were selected from instrumental pieces well-known to the participants; the unfamiliar music was the retrograde version of the familiar themes, which the participants did not recognize; and the random sequences contained the same musical tones but in a random order. All stimuli were synthesized and played with the sound of a piano, thereby keeping low-level acoustical factors similar across conditions. The comparison of cerebral responses to familiar versus unfamiliar music reveals focal activation in the right superior temporal sulcus (STS). Re-analysis of the data obtained in a previous study by Plailly et al. also points to the STS as the critical region involved in musical memories. The neuroimaging data further suggest that these auditory memories are tightly coupled with action (singing), by showing left activation in the planum temporale, in the supplementary motor area (SMA), and in the inferior frontal gyrus. Such a cortical organization of music recognition is analogous to the dorsal stream model of speech processing proposed by Hickok and Poeppel.


Annals of the New York Academy of Sciences | 2009

Impaired Memory for Pitch in Congenital Amusia

Nathalie Gosselin; Pierre Jolicœur; Isabelle Peretz

We examined memory for pitch in congenital amusia in two tasks. In one task, we varied the pitch distance between the target and comparison tone from 4 to 9 semitones and inserted either a silence or 6 interpolated tones between the tones to be compared. In a second task, we manipulated the number of pitches to be retained in sequences of length 1, 3, or 5. Amusics' sensitivity to pitch distance was exacerbated by the presence of interpolated tones, and amusics' performance was more strongly affected than controls' by the number of pitches to maintain in memory. A pitch perception deficit could not account for the pitch memory deficit of amusics.


Annals of the New York Academy of Sciences | 2009

Emotional Recognition from Face, Voice, and Music in Dementia of the Alzheimer Type

Joanie Drapeau; Nathalie Gosselin; Lise Gagnon; Isabelle Peretz; Dominique Lorrain

Persons with dementia of the Alzheimer type (DAT) are impaired in recognizing emotions from face and voice. Yet clinical practitioners use these mediums to communicate with DAT patients. Music is also used in clinical practice, but little is known about emotional processing from music in DAT. This study aims to assess emotional recognition in mild DAT. Seven patients with DAT and 16 healthy elderly adults were given three tasks of emotional recognition for face, prosody, and music. DAT participants were only impaired in the emotional recognition from the face. These preliminary results suggest that dynamic auditory emotions are preserved in DAT.


International Journal of Psychophysiology | 2009

Modulation of the startle reflex by pleasant and unpleasant music.

Mathieu Roy; Jean-Philippe Mailhot; Nathalie Gosselin; Sébastien Paquette; Isabelle Peretz

The issue of emotional feelings to music is the object of a classic debate in music psychology. Emotivists argue that emotions are really felt in response to music, whereas cognitivists believe that music is only representative of emotions. Psychophysiological recordings of emotional feelings to music might help to resolve the debate, but past studies have failed to show clear and consistent differences between musical excerpts of different emotional valence. Here, we compared the effects of pleasant and unpleasant musical excerpts on the startle eye blink reflex and associated bodily markers (corrugator and zygomatic muscle activity, skin conductance level, and heart rate). The startle eye blink amplitude was larger and its latency was shorter during unpleasant compared with pleasant music, suggesting that the defensive emotional system was indeed modulated by music. Corrugator activity was also enhanced during unpleasant music, whereas skin conductance level was higher for pleasant excerpts. The startle reflex was the response that contributed the most to distinguishing pleasant and unpleasant music. Taken together, these results provide strong evidence that emotions were felt in response to music, supporting the emotivist stance.


Frontiers in Psychology | 2011

Congenital Amusia (or Tone-Deafness) Interferes with Pitch Processing in Tone Languages

Barbara Tillmann; Denis Burnham; Sébastien Nguyen; Nicolas Grimault; Nathalie Gosselin; Isabelle Peretz

Congenital amusia is a neurogenetic disorder that affects music processing and that is ascribed to a deficit in pitch processing. We investigated whether this deficit extended to pitch processing in speech, notably the pitch changes used to contrast lexical tones in tonal languages. Congenital amusics and matched controls, all non-tonal language speakers, were tested for lexical tone discrimination in Mandarin Chinese (Experiment 1) and in Thai (Experiment 2). Tones were presented in pairs and participants were required to make same/different judgments. Experiment 2 additionally included musical analogs of Thai tones for comparison. Performance of congenital amusics was inferior to that of controls for all materials, suggesting a domain-general pitch-processing deficit. The pitch deficit of amusia is thus not limited to music, but may compromise the ability to process and learn tonal languages. Combined with acoustic analyses of the tone material, the present findings provide new insights into the nature of the pitch-processing deficit exhibited by amusics.


Cognitive Neuropsychology | 2007

Harmonic priming in an amusic patient: The power of implicit tasks

Barbara Tillmann; Isabelle Peretz; Emmanuel Bigand; Nathalie Gosselin

Our study investigated with an implicit method (i.e., a priming paradigm) whether I.R.—a brain-damaged patient exhibiting severe amusia—implicitly processes musical structures. The task consisted in identifying one of two phonemes (Experiment 1) or timbres (Experiment 2) on the last chord of eight-chord sequences (i.e., the target). The targets were harmonically related or less related to the prior chords. I.R. displayed harmonic priming effects: phoneme and timbre identification was faster for related than for less related targets (Experiments 1 and 2). However, I.R.'s explicit judgements of completion for the same sequences did not differ between related and less related contexts (Experiment 3). Her impaired performance in explicit judgements was not due to general difficulties with task demands, since she performed like controls for completion judgements on spoken sentences (Experiment 4). The findings indicate that implicit knowledge of musical structures might remain intact and accessible, even when explicit judgements and overt recognition have been lost.


Frontiers in Psychology | 2010

Identification of Changes along a Continuum of Speech Intonation is Impaired in Congenital Amusia

Sean Hutchins; Nathalie Gosselin; Isabelle Peretz

A small number of individuals have severe musical problems that have neurogenetic underpinnings. This musical disorder is termed "congenital amusia," an umbrella term for lifelong musical disabilities that cannot be attributed to deafness, lack of exposure, or brain damage after birth. Amusics seem to lack the ability to detect fine pitch differences in tone sequences. However, differences between statements and questions, which vary in final pitch, are well perceived by most congenital amusic individuals. We hypothesized that the origin of this apparent domain-specificity of the disorder lies in the range of pitch variations, which are very coarse in speech as compared to music. Here, we tested this hypothesis by using a continuum of gradually increasing final pitch in both speech and tone sequences. To this aim, nine amusic cases and nine matched controls were presented with statements and questions that varied on a pitch continuum from falling to rising in 11 steps. The sentences were either naturally spoken or were tone sequence versions of these. The task was to categorize the sentences as statements or questions, and the tone sequences as falling or rising. In each case, the observation of an S-shaped identification function indicates that amusics can accurately identify unambiguous examples of statements and questions but have problems with fine variations between these endpoints. Thus, the results indicate that a deficient pitch perception might compromise music, not because it is specialized for that domain but because music's requirements are more fine-grained.

Collaboration


Nathalie Gosselin's collaborations and top co-authors.

Top Co-Authors
Joanie Drapeau

Université de Sherbrooke


Lise Gagnon

Université de Sherbrooke
