
Publication


Featured research published by Sébastien Paquette.


International Journal of Psychophysiology | 2009

Modulation of the startle reflex by pleasant and unpleasant music.

Mathieu Roy; Jean-Philippe Mailhot; Nathalie Gosselin; Sébastien Paquette; Isabelle Peretz

The issue of emotional feelings to music is the object of a classic debate in music psychology. Emotivists argue that emotions are really felt in response to music, whereas cognitivists believe that music is only representative of emotions. Psychophysiological recordings of emotional feelings to music might help to resolve the debate, but past studies have failed to show clear and consistent differences between musical excerpts of different emotional valence. Here, we compared the effects of pleasant and unpleasant musical excerpts on the startle eye blink reflex and associated body markers (such as the corrugator and zygomatic activity, skin conductance level and heart rate). The startle eye blink amplitude was larger and its latency was shorter during unpleasant compared with pleasant music, suggesting that the defensive emotional system was indeed modulated by music. Corrugator activity was also enhanced during unpleasant music, whereas skin conductance level was higher for pleasant excerpts. The startle reflex was the response that contributed the most in distinguishing pleasant and unpleasant music. Taken together, these results provide strong evidence that emotions were felt in response to music, supporting the emotivist stance.


Frontiers in Psychology | 2013

The "Musical Emotional Bursts": a validated set of musical affect bursts to investigate auditory affective processing.

Sébastien Paquette; Isabelle Peretz; Pascal Belin

The Musical Emotional Bursts (MEB) consist of 80 brief musical executions expressing basic emotional states (happiness, sadness and fear) and neutrality. These musical bursts were designed to be the musical analog of the Montreal Affective Voices (MAV)—a set of brief non-verbal affective vocalizations portraying different basic emotions. The MEB consist of short (mean duration: 1.6 s) improvisations on a given emotion or of imitations of a given MAV stimulus, played on a violin (10 stimuli × 4 [3 emotions + neutral]), or a clarinet (10 stimuli × 4 [3 emotions + neutral]). The MEB arguably represent a primitive form of music emotional expression, just like the MAV represent a primitive form of vocal, non-linguistic emotional expression. To create the MEB, stimuli were recorded from 10 violinists and 10 clarinetists, and then evaluated by 60 participants. Participants evaluated 240 stimuli [30 stimuli × 4 (3 emotions + neutral) × 2 instruments] by performing either a forced-choice emotion categorization task, a valence rating task or an arousal rating task (20 subjects per task); 40 MAVs were also used in the same session with similar task instructions. Recognition accuracy of emotional categories expressed by the MEB (n:80) was lower than for the MAVs but still very high with an average percent correct recognition score of 80.4%. Highest recognition accuracies were obtained for happy clarinet (92.0%) and fearful or sad violin (88.0% each) MEB stimuli. The MEB can be used to compare the cerebral processing of emotional expressions in music and vocal communication, or used for testing affective perception in patients with communication problems.


Conservation Genetics | 2007

Riverbeds demarcate distinct conservation units of the radiated tortoise (Geochelone radiata) in southern Madagascar

Sébastien Paquette; Sandra M. Behncke; Susan H. O’Brien; Rick A. Brenneman; Edward E. Louis; François-Joseph Lapointe

The radiated tortoise (Geochelone radiata) is an endangered species endemic to Madagascar. It inhabits the semiarid spiny forest of the southern part of the island, an ecosystem heavily affected by habitat destruction. Furthermore, illegal harvesting greatly threatens this species. The main objective of our study was to acquire better knowledge of its genetic structure, in order to make appropriate management decisions concerning, for instance, the reintroduction of confiscated individuals. Our hypothesis was that rivers represent effective barriers to tortoise dispersal despite the fact that they are dry most of the year. We used 13 polymorphic microsatellite markers to compare samples from six populations across the range of the species. All analyses (Fisher’s exact tests, FST values, AMOVA) indicated that the radiated tortoise exhibits moderate levels of genetic structure throughout its range. In addition, we used a multiple regression approach that revealed the importance of rivers to explain the observed structure. This analysis supported the role of the Menarandra and Manambovo Rivers as major barriers to the dispersal of most radiated tortoises, but Markov chain Monte Carlo simulations revealed that low levels of recurrent gene flow may explain why FST values were not higher. We identified three distinct conservation units with relatively high assignment rates (87%), which should be valuable for the management of the species. This is the first study to report the genetic structure of a species sampled throughout the Malagasy spiny forest.


Journal of Heredity | 2010

Microsatellite analyses provide evidence of male-biased dispersal in the radiated tortoise Astrochelys radiata (Chelonia: Testudinidae).

Sébastien Paquette; Edward E. Louis; François-Joseph Lapointe

Dispersal is a major force in shaping the genetic structure and dynamics of species; thus, its understanding is critical in formulating appropriate conservation strategies. In many species, sexes do not face the same evolutionary pressures, and consequently dispersal is often asymmetrical between males and females. This is well documented in birds and mammals but has seldom been investigated in other taxa, including reptiles and, more specifically, nonmarine chelonians. In these species, nest-site fidelity observations are frequent but still remain to be associated with natal homing. Here, we tested for sex-biased dispersal in the radiated tortoise (Astrochelys radiata) from southern Madagascar. Using data from 13 microsatellite markers, we investigated patterns of relatedness between sexes in 2 populations. All Mantel tests indicated significant isolation by distance at the individual level in females but not in males. Furthermore, spatial autocorrelation analyses and 2 analytical approaches designed to assess general trends in sex-specific dispersal also supported male-biased dispersal. On the other hand, comparisons of overall genetic structure among sampling sites did not provide conclusive support for greater philopatry in females, but these tests may have low statistical power because of methodological and biological constraints. Radiated tortoises appear to be both polyandrous and polygynous, and evolutionary processes that may lead to a sex bias in dispersal are discussed with respect to tortoise breeding biology. Female natal homing is hypothesized as a key trait explaining greater female philopatry in A. radiata. These findings highlight the necessity of additional research on natal homing in tortoises, a behavioral trait with direct implications for conservation.


Behavior Research Methods | 2018

The Montreal Protocol for Identification of Amusia

Dominique T. Vuvan; Sébastien Paquette; G. Mignault Goulet; I. Royal; M. Felezeu; Isabelle Peretz

The Montreal Battery for the Evaluation of Amusia (MBEA; Peretz, Champod, & Hyde Annals of the New York Academy of Sciences, 999, 58–75, 2003) is an empirically grounded quantitative tool that is widely used to identify individuals with congenital amusia. The use of such a standardized measure ensures that the individuals tested will conform to a specific neuropsychological profile, allowing for comparisons across studies and research groups. Recently, a number of researchers have published credible critiques of the usefulness of the MBEA as a diagnostic tool for amusia. Here we argue that the MBEA and its online counterpart, the AMUSIA tests (Peretz et al. Music Perception, 25, 331–343, 2008), should be considered steps in a screening process for amusia, rather than standalone diagnostic tools. The goal of this article is to present, in detailed and easily replicable format, the full protocol through which congenital amusics should be identified. In providing information that has often gone unreported in published articles, we aim to clarify the strengths and limitations of the MBEA and to make recommendations for its continued use by the research community as part of the Montreal Protocol for Identification of Amusia.


Scientific Reports | 2017

Voice selectivity in the temporal voice area despite matched low-level acoustic cues.

Trevor R. Agus; Sébastien Paquette; Clara Suied; Daniel Pressnitzer; Pascal Belin

In human listeners, the temporal voice areas (TVAs) are regions of the superior temporal gyrus and sulcus that respond more to vocal sounds than a range of nonvocal control sounds, including scrambled voices, environmental noises, and animal cries. One interpretation of the TVA’s selectivity is based on low-level acoustic cues: compared to control sounds, vocal sounds may have stronger harmonic content or greater spectrotemporal complexity. Here, we show that the right TVA remains selective to the human voice even when accounting for a variety of acoustical cues. Using fMRI, single vowel stimuli were contrasted with single notes of musical instruments with balanced harmonic-to-noise ratios and pitches. We also used “auditory chimeras”, which preserved subsets of acoustical features of the vocal sounds. The right TVA was preferentially activated only for the natural human voice. In particular, the TVA did not respond more to artificial chimeras preserving the exact spectral profile of voices. Additional acoustic measures, including temporal modulations and spectral complexity, could not account for the increased activation. These observations rule out simple acoustical cues as a basis for voice selectivity in the TVAs.


Frontiers in Neuroscience | 2015

Cross-domain processing of musical and vocal emotions in cochlear implant users

Alexandre Lehmann; Sébastien Paquette

Music and voice bear many similarities and share neural resources to some extent. Experience-dependent plasticity provides a window into the neural overlap between these two domains. Here, we suggest that research on auditory-deprived individuals whose hearing has been bionically restored offers a unique insight into the functional and structural overlap between music and voice. Studying how basic emotions (happiness, sadness, and fear) are perceived in auditory stimuli constitutes a favorable terrain for such an endeavor. We outline a possible neuro-behavioral approach to study the effect of plasticity on cross-domain processing of musical and vocal emotions, using cochlear implant users as a model of reversible sensory deprivation and comparing them to normal-hearing individuals. We discuss the implications of such developments on the current understanding of cross-domain neural overlap.


Annals of the New York Academy of Sciences | 2018

Cross‐classification of musical and vocal emotions in the auditory cortex

Sébastien Paquette; Sylvain Takerkart; Shinji Saget; Isabelle Peretz; Pascal Belin

Whether emotions carried by voice and music are processed by the brain using similar mechanisms has long been investigated. Yet neuroimaging studies do not provide a clear picture, mainly due to lack of control over stimuli. Here, we report a functional magnetic resonance imaging (fMRI) study using comparable stimulus material in the voice and music domains—the Montreal Affective Voices and the Musical Emotional Bursts—which include nonverbal short bursts of happiness, fear, sadness, and neutral expressions. We use a multivariate emotion‐classification fMRI analysis involving cross‐timbre classification as a means of comparing the neural mechanisms involved in processing emotional information in the two domains. We find, for affective stimuli in the violin, clarinet, or voice timbres, that local fMRI patterns in the bilateral auditory cortex and upper premotor regions support above‐chance emotion classification when training and testing sets are performed within the same timbre category. More importantly, classifier performance generalized well across timbre in cross‐classifying schemes, albeit with a slight accuracy drop when crossing the voice–music boundary, providing evidence for a shared neural code for processing musical and vocal emotions, with possibly a cost for the voice due to its evolutionary significance.


NeuroImage | 2017

The cerebellum's contribution to beat interval discrimination

Sébastien Paquette; Shinya Fujii; H.C. Li; Gottfried Schlaug

From expert percussionists to individuals who cannot dance, there are widespread differences in people's abilities to perceive and synchronize with a musical beat. The aim of our study was to identify candidate brain regions that might be associated with these abilities. For this purpose, we used Voxel-Based Morphometry to correlate inter-individual differences in performance on the Harvard Beat Assessment Tests (H-BAT) with local inter-individual variations in gray matter volumes across the entire brain space in 60 individuals. Analysis revealed significant co-variations between performances on two perceptual tasks of the Harvard Beat Assessment Tests associated with beat interval change discrimination (faster, slower) and gray matter volume variations in the cerebellum. Participant discrimination thresholds for the Beat Finding Interval Test (quarter note beat) were positively associated with gray matter volume variation in cerebellum lobule IX in the left hemisphere and crus I bilaterally. Discrimination thresholds for the Beat Interval Test (simple series of tones) revealed the tendency for a positive association with gray matter volume variations in crus I/II of the left cerebellum. Our results demonstrate the importance of the cerebellum in beat interval discrimination skills, as measured by two perceptual tasks of the Harvard Beat Assessment Tests. Current findings, in combination with evidence from patients with cerebellar degeneration and expert dancers, suggest that cerebellar gray matter and overall cerebellar integrity are important for temporal discrimination abilities.

Highlights:
- Beat interval discrimination abilities correlate with gray matter volume in the cerebellum.
- Poor beat perception might be linked to a malformation during cerebellum development.
- Cerebellar integrity is directly linked to beat discrimination.
- Beat perception and production may not draw on the same neural structures.


Journal of the Acoustical Society of America | 2018

Decoding musical and vocal emotions

Sébastien Paquette

Many studies support the idea of common neural substrates for the perception of vocal and musical emotions. It is proposed that music, in order to make us perceive emotions, recruits the emotional circuits that evolved mainly for the processing of biologically important vocalizations (e.g., cries, screams). Although some studies have found great similarities between voice and music in terms of acoustic cues (emotional expression) and neural correlates (emotional processing), some studies reported differences specific to each medium. However, it is possible that the differences described may not be specific to the medium, but may instead be specific to the stimuli used (e.g., complexity, length). To understand how these vocal and musical emotions are perceived and how they can be affected by hearing impairments, we assessed recognition of the most basic forms of auditory emotion (musical/vocal bursts) through a series of studies in normal hearing individuals and in cochlear implant users. Multi-voxel pattern analyses of fMRI images provide evidence for a shared neural code for processing musical and vocal emotions. Correlational analyses of emotional ratings helped highlight the importance of timbral acoustic cues (brightness, energy, and roughness) common to voice and music for emotion perception in cochlear implant users.

Collaboration


Dive into Sébastien Paquette's collaborations.

Top Co-Authors

Pascal Belin
Université de Montréal

I. Royal
Université de Montréal

M. Felezeu
Université de Montréal

Edward E. Louis
University of Texas at Austin