Publication


Featured research published by Alexandre Lehmann.


Frontiers in Human Neuroscience | 2014

Your body, my body, our coupling moves our bodies.

Guillaume Dumas; Julien Laroche; Alexandre Lehmann

Sensitivity to temporal contingencies appears early in life and plays a key role in the ontogeny of socio-cognitive abilities in humans (Nadel et al., 1999; Gratier and Apter-Danon, 2009). The tendency for rhythmic coordination, sometimes referred to as “entrainment,” requires sensory-motor coupling (Phillips-Silver et al., 2010). In most of the fields of cognitive science, action-perception and agent-world coupling views are replacing the classical stimulus-response dichotomy (Marsh et al., 2009; Silberstein and Chemero, 2012; Schilbach et al., 2013; Novembre and Keller, 2014). Such conceptual frameworks are well suited to study coordination phenomena as they emphasize the dynamical nature of cognition (Varela et al., 1993; Kelso, 1995; Buzsaki and Draguhn, 2004; Lehmann and Schonwiesner, 2014). Moreover, they leave room for the balance of autonomy, a central feature of complex biological systems, and interactive coupling, through which such systems relate to—and make sense of—their environment (Di Paolo, 2005; Barandiaran et al., 2009; Buhrmann et al., 2013). A naturalistic study of autonomy and coupling requires both embracing ecological situations and considering first-person perspective. Furthermore, many social coordination phenomena cannot be observed in the laboratory without the interaction of at least two subjects. We propose to consider linking first- and third-person measures, and even relate them across multiple interacting individuals. We will discuss how these concepts are intertwined in coordination phenomena, and outline existing methods to address those issues.


NeuroImage | 2016

Enhanced brainstem and cortical encoding of sound during synchronized movement

Sylvie Nozaradan; Marc Schönwiesner; Laura Caron-Desrochers; Alexandre Lehmann

Movement to a steady beat has been widely studied as a model of alignment of motor outputs with sensory inputs. However, how the encoding of sensory inputs is shaped during synchronized movements along the sensory pathway remains unknown. To investigate this, we simultaneously recorded brainstem and cortical electro-encephalographic activity while participants listened to periodic amplitude-modulated tones. Participants listened either without moving or while tapping in sync on every second beat. Cortical responses were identified at the envelope modulation rate (beat frequency), whereas brainstem responses were identified at the partial frequencies of the chord and at their modulation by the beat frequency (sidebands). During sensorimotor synchronization, cortical responses at the beat frequency were larger than during passive listening. Importantly, brainstem responses were also enhanced, with a selective amplification of the sidebands, in particular at the lower-pitched tone of the chord, and no significant correlation with electromyographic measures at the tapping frequency. These findings provide the first evidence for an online gain in the cortical and subcortical encoding of sounds during synchronized movement, selective to behavior-relevant sound features. Moreover, the frequency-tagging method used to isolate concurrent brainstem and cortical activities even during actual movements appears promising for revealing coordinated processes along the human auditory pathway.
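The frequency-tagging approach described in this abstract amounts to reading response amplitudes off the EEG spectrum at pre-defined frequencies (the beat frequency, the partials of the chord, and their sidebands). A minimal sketch of that measurement, assuming a single-channel recording and with the neighboring-bin noise correction as an illustrative convention rather than the authors' exact pipeline:

```python
import numpy as np

def frequency_tagged_amplitude(eeg, fs, target_freq):
    """Single-sided spectral amplitude at a tagged frequency, noise-corrected.

    eeg: 1-D array (one EEG channel), fs: sampling rate in Hz,
    target_freq: the tagged frequency of interest (e.g. the beat frequency).
    """
    n = len(eeg)
    spectrum = np.abs(np.fft.rfft(eeg)) / n * 2  # single-sided amplitude
    freqs = np.fft.rfftfreq(n, d=1 / fs)
    idx = np.argmin(np.abs(freqs - target_freq))  # nearest FFT bin
    # Subtract the mean of surrounding bins to correct for broadband noise,
    # a common step in frequency-tagging analyses (bin range is arbitrary here).
    neighbors = np.r_[spectrum[max(idx - 5, 0):idx - 1], spectrum[idx + 2:idx + 6]]
    return spectrum[idx] - neighbors.mean()
```

With a long enough epoch that the tagged frequency falls on an exact FFT bin, the estimate recovers the amplitude of a sinusoidal component directly.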


PLOS ONE | 2016

Individual Differences in the Frequency-Following Response: Relation to Pitch Perception

Emily B. J. Coffey; Emilia M. G. Colagrosso; Alexandre Lehmann; Marc Schönwiesner; Robert J. Zatorre

The scalp-recorded frequency-following response (FFR) is a measure of the auditory nervous system’s representation of periodic sound, and may serve as a marker of training-related enhancements, behavioural deficits, and clinical conditions. However, FFRs of healthy normal subjects show considerable variability that remains unexplained. We investigated whether the FFR representation of the frequency content of a complex tone is related to the perception of the pitch of the fundamental frequency. The strength of the fundamental frequency in the FFR of 39 people with normal hearing was assessed when they listened to complex tones that either included or lacked energy at the fundamental frequency. We found that the strength of the fundamental representation of the missing fundamental tone complex correlated significantly with people's general tendency to perceive the pitch of the tone as either matching the frequency of the spectral components that were present, or that of the missing fundamental. Although at a group level the fundamental representation in the FFR did not appear to be affected by the presence or absence of energy at the same frequency in the stimulus, the two conditions were statistically distinguishable for some subjects individually, indicating that the neural representation is not linearly dependent on the stimulus content. In a second experiment using a within-subjects paradigm, we showed that subjects can learn to reversibly select between either fundamental or spectral perception, and that this is accompanied both by changes to the fundamental representation in the FFR and to cortical-based gamma activity. These results suggest that both fundamental and spectral representations coexist, and are available for later auditory processing stages, the requirements of which may also influence their relative strength and thus modulate FFR variability.
The data also highlight voluntary mode perception as a new paradigm with which to study top-down vs bottom-up mechanisms that support the emerging view of the FFR as the outcome of integrated processing in the entire auditory system.


European Journal of Neuroscience | 2018

Neural bases of rhythmic entrainment in humans: critical transformation between cortical and lower-level representations of auditory rhythm

Sylvie Nozaradan; Marc Schönwiesner; Peter E. Keller; Tomas Lenc; Alexandre Lehmann

The spontaneous ability to entrain to meter periodicities is central to music perception and production across cultures. There is increasing evidence that this ability involves selective neural responses to meter‐related frequencies. This phenomenon has been observed in the human auditory cortex, yet it could be the product of evolutionarily older lower‐level properties of brainstem auditory neurons, as suggested by recent recordings from rodent midbrain. We addressed this question by taking advantage of a new method to simultaneously record human EEG activity originating from cortical and lower‐level sources, in the form of slow (< 20 Hz) and fast (> 150 Hz) responses to auditory rhythms. Cortical responses showed increased amplitudes at meter‐related frequencies compared to meter‐unrelated frequencies, regardless of the prominence of the meter‐related frequencies in the modulation spectrum of the rhythmic inputs. In contrast, frequency‐following responses showed increased amplitudes at meter‐related frequencies only in rhythms with prominent meter‐related frequencies in the input but not for a more complex rhythm requiring more endogenous generation of the meter. This interaction with rhythm complexity suggests that the selective enhancement of meter‐related frequencies does not fully rely on subcortical auditory properties, but is critically shaped at the cortical level, possibly through functional connections between the auditory cortex and other, movement‐related, brain structures. This process of temporal selection would thus enable endogenous and motor entrainment to emerge with substantial flexibility and invariance with respect to the rhythmic input in humans in contrast with non‐human animals.
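The method this abstract relies on separates cortical and lower-level contributions by splitting the same EEG trace into slow (< 20 Hz) and fast (> 150 Hz) components. A rough sketch of such a band split using a zero-phase FFT mask; the cutoffs come from the abstract, but the brick-wall masking approach is an assumption, not the authors' actual filtering pipeline:

```python
import numpy as np

def split_bands(eeg, fs, slow_cutoff=20.0, fast_cutoff=150.0):
    """Split an EEG trace into slow (cortical-range) and fast
    (frequency-following-range) components by zeroing FFT bins."""
    n = len(eeg)
    spec = np.fft.rfft(eeg)
    freqs = np.fft.rfftfreq(n, d=1 / fs)
    # Keep only bins below the slow cutoff / above the fast cutoff.
    slow = np.fft.irfft(np.where(freqs < slow_cutoff, spec, 0), n=n)
    fast = np.fft.irfft(np.where(freqs > fast_cutoff, spec, 0), n=n)
    return slow, fast
```

Because the mask is applied in the frequency domain, the split introduces no phase shift, which matters when comparing response amplitudes at meter-related frequencies across the two bands.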


Otolaryngology-Head and Neck Surgery | 2016

Stapedotomy vs Cochlear Implantation for Advanced Otosclerosis: Systematic Review and Meta-analysis

Yasin Abdurehim; Alexandre Lehmann; Anthony Zeitouni

Objectives To compare the hearing outcomes of stapedotomy vs cochlear implantation in patients with advanced otosclerosis. Data Sources PubMed, EMBASE, and The Cochrane Library were searched for the terms otosclerosis, stapedotomy, and cochlear implantation and their synonyms with no language restrictions up to March 10, 2015. Methods Studies comparing the hearing outcomes of stapedotomy with cochlear implantation and studies comparing the hearing outcomes of primary cochlear implantation with salvage cochlear implantation after an unsuccessful stapedotomy in patients with advanced otosclerosis were included. Postoperative speech recognition scores were compared using the weighted mean difference and a 95% confidence interval. Results Only 4 studies met our inclusion criteria. Cochlear implantation leads to significantly better speech recognition scores than stapedotomy (P < .0001). However, this appears to be due to the variability in outcomes after stapedotomy. Cochlear implantation does not lead to superior speech recognition scores compared with the subgroup of successful cases of stapedotomy plus hearing aid (P = .47). There is also no significant difference with respect to speech recognition between primary cochlear implantation and cochlear implantation secondary to a failed stapedotomy (P = .22). Conclusions Cochlear implantation leads to a statistically greater and consistent improvement in speech recognition scores. Stapedotomy is not universally effective; however, it yields good results, comparable to cochlear implantation, in at least half of patients. For cases of unsuccessful stapedotomy, the option of cochlear implantation is still open, and the results obtained through salvage cochlear implantation are as good as those of primary cochlear implantation.
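The pooling statistic used in this meta-analysis, the weighted mean difference with a 95% confidence interval, can be computed with standard fixed-effect inverse-variance weighting. A minimal sketch for illustration only; the review's software may have used a random-effects model or other defaults:

```python
import math

def weighted_mean_difference(studies):
    """Fixed-effect inverse-variance pooling of per-study mean differences.

    studies: list of (mean_difference, standard_error) tuples.
    Returns the pooled WMD and its 95% confidence interval.
    """
    weights = [1 / se ** 2 for _, se in studies]       # inverse-variance weights
    wmd = sum(w * md for (md, _), w in zip(studies, weights)) / sum(weights)
    se_pooled = math.sqrt(1 / sum(weights))            # SE of the pooled estimate
    ci = (wmd - 1.96 * se_pooled, wmd + 1.96 * se_pooled)
    return wmd, ci
```

Studies with smaller standard errors (typically larger samples) dominate the pooled estimate, which is why a handful of precise studies can drive the overall P value.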


PLOS ONE | 2015

On the Relevance of Natural Stimuli for the Study of Brainstem Correlates: The Example of Consonance Perception

Marion Cousineau; Gavin M. Bidelman; Isabelle Peretz; Alexandre Lehmann

Some combinations of musical tones sound pleasing to Western listeners, and are termed consonant, while others sound discordant, and are termed dissonant. The perceptual phenomenon of consonance has been traced to the acoustic property of harmonicity. It has been repeatedly shown that neural correlates of consonance can be found as early as the auditory brainstem as reflected in the harmonicity of the scalp-recorded frequency-following response (FFR). “Neural Pitch Salience” (NPS) measured from FFRs—essentially a time-domain equivalent of the classic pattern recognition models of pitch—has been found to correlate with behavioral judgments of consonance for synthetic stimuli. Following the idea that the auditory system has evolved to process behaviorally relevant natural sounds, and in order to test the generalizability of this finding made with synthetic tones, we recorded FFRs for consonant and dissonant intervals composed of synthetic and natural stimuli. We found that NPS correlated with behavioral judgments of consonance and dissonance for synthetic but not for naturalistic sounds. These results suggest that while some form of harmonicity can be computed from the auditory brainstem response, the general percept of consonance and dissonance is not captured by this measure. It might either be represented in the brainstem in a different code (such as place code) or arise at higher levels of the auditory pathway. Our findings further illustrate the importance of using natural sounds, as a complementary tool to fully-controlled synthetic sounds, when probing auditory perception.
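"Neural Pitch Salience" is described above as essentially a time-domain equivalent of pattern-recognition pitch models. A crude stand-in for that idea is the normalized autocorrelation of the FFR waveform at the candidate pitch period; the sketch below is an illustrative simplification, not the published metric:

```python
import numpy as np

def pitch_salience(ffr, fs, f0):
    """Normalized autocorrelation of an FFR waveform at the lag of a
    candidate pitch period (1/f0). Values near 1 indicate strong
    periodicity at f0; a rough time-domain salience estimate."""
    ffr = ffr - ffr.mean()
    acf = np.correlate(ffr, ffr, mode="full")[len(ffr) - 1:]  # lags 0..N-1
    acf = acf / acf[0]                                        # zero-lag = 1
    pitch_lag = int(round(fs / f0))                           # samples per period
    return acf[pitch_lag]
```

A harmonic complex with a missing fundamental still yields high salience at the fundamental's period, which is the property that makes autocorrelation-style measures natural candidates for consonance correlates.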


Frontiers in Neuroscience | 2015

Robust Encoding in the Human Auditory Brainstem: Use It or Lose It?

Alexandre Lehmann; Erika Skoe

The human auditory brainstem faithfully represents the acoustic structure of sounds (Galbraith et al., 1995). In musicians, presumably because music training and exposure places high demands on the auditory system, brainstem encoding of both speech and musical sounds is more robust than in non-musicians. It is believed that the corticofugal system, a vast network of efferent connections within the auditory neuroaxis, drives top-down plastic changes underlying the enhancements in brainstem encoding in musicians and other populations with fine-tuned auditory abilities (Wong et al., 2007). Could these same mechanisms lead to impoverished brainstem encoding in those individuals whose musical skill proficiency is below average? Two recent studies on brainstem encoding in amusia, a congenital music disorder, shed a complementary light on this issue (Lehmann et al., 2015; Liu et al., 2015). Together they suggest that a subcortical deficit in auditory processing, which varies as a function of the degree of musical (dis)ability, can emerge as a consequence of limited meaningful interactions with music. We discuss how this notion of auditory detuning fits with the emerging view on top-down induced subcortical plasticity in auditory processing and address its implications for auditory rehabilitation.


Frontiers in Neuroscience | 2015

Cross-domain processing of musical and vocal emotions in cochlear implant users

Alexandre Lehmann; Sébastien Paquette

Music and voice bear many similarities and share neural resources to some extent. Experience dependent plasticity provides a window into the neural overlap between these two domains. Here, we suggest that research on auditory deprived individuals whose hearing has been bionically restored offers a unique insight into the functional and structural overlap between music and voice. Studying how basic emotions (happiness, sadness, and fear) are perceived in auditory stimuli constitutes a favorable terrain for such an endeavor. We outline a possible neuro-behavioral approach to study the effect of plasticity on cross-domain processing of musical and vocal emotions, using cochlear implant users as a model of reversible sensory deprivation and comparing them to normal-hearing individuals. We discuss the implications of such developments on the current understanding of cross-domain neural overlap.


Clinical EEG and Neuroscience | 2018

Neural Processing of Musical and Vocal Emotions Through Cochlear Implants Simulation

Duha G. Ahmed; Sebastian Paquette; Anthony Zeitouni; Alexandre Lehmann

Cochlear implants (CIs) partially restore the sense of hearing in the deaf. However, the ability to recognize emotions in speech and music is reduced due to the implant’s electrical signal limitations and the patient’s altered neural pathways. Electrophysiological correlates of these limitations are not yet well established. Here we aimed to characterize the effect of CIs on auditory emotion processing and, for the first time, directly compare vocal and musical emotion processing through a CI simulator. We recorded 16 normal-hearing participants’ electroencephalographic activity while they listened to vocal and musical emotional bursts in their original form and in a degraded (CI-simulated) condition. We found prolonged P50 latency and reduced N100-P200 complex amplitude in the CI-simulated condition. This points to a limitation in encoding sound signals processed through CI simulation. When comparing the processing of vocal and musical bursts, we found a delay in latency with the musical bursts compared to the vocal bursts in both conditions (original and CI-simulated). This suggests that despite the cochlear implant’s limitations, the auditory cortex can distinguish between vocal and musical stimuli. In addition, it adds to the literature supporting the complexity of musical emotion. Replicating this study with actual CI users might lead to characterizing emotional processing in CI users and could ultimately help develop optimal rehabilitation programs or device processing strategies to improve CI users’ quality of life.


Scientific Reports | 2017

Monkeys share the neurophysiological basis for encoding sound periodicities captured by the frequency-following response with humans

Yaneri A. Ayala; Alexandre Lehmann; Hugo Merchant

The extraction and encoding of acoustical temporal regularities are fundamental for human cognitive auditory abilities such as speech or beat entrainment. Because the comparison of the neural sensitivity to temporal regularities between humans and animals is fundamental to relate non-invasive measures of auditory processing to their neuronal basis, here we compared the neural representation of auditory periodicities between humans and non-human primates by measuring the scalp-recorded frequency-following response (FFR). We found that rhesus monkeys can resolve the spectrotemporal structure of periodic stimuli to a similar extent as humans by exhibiting a homologous FFR potential to the speech syllable /da/. The FFR in both species is robust and phase-locked to the fundamental frequency of the sound, reflecting an effective neural processing of the fast-periodic information of subsyllabic cues. Our results thus reveal a conserved neural ability to track acoustical regularities within the primate order. These findings open the possibility to study the neurophysiology of complex sound temporal processing in the macaque subcortical and cortical areas, as well as the associated experience-dependent plasticity across the auditory pathway in behaving monkeys.

Collaboration


Dive into Alexandre Lehmann's collaborations.

Top Co-Authors

Erika Skoe, University of Connecticut
Sylvie Nozaradan, Université catholique de Louvain
Alexandra Ladouceur, Université du Québec à Trois-Rivières
Diana Jimena Arias, Université du Québec à Montréal