
Publication


Featured research published by Nicolas Escoffier.


Acta Psychologica | 2010

Unattended musical beats enhance visual processing

Nicolas Escoffier; Darren Yeo Jian Sheng; Annett Schirmer

The present study investigated whether and how a musical rhythm entrains a listener's visual attention. To this end, participants were presented with pictures of faces and houses and indicated whether picture orientation was upright or inverted. Participants performed this task in silence or with a musical rhythm playing in the background. In the latter condition, pictures could occur off-beat or on a rhythmically implied, silent beat. Pictures presented without the musical rhythm and off-beat were responded to more slowly than pictures presented on-beat. This effect was comparable for faces and houses. Together these results indicate that musical rhythm both synchronizes and facilitates concurrent stimulus processing.


Brain Research | 2007

Listen up! Processing of intensity change differs for vocal and nonvocal sounds

Annett Schirmer; Elizabeth A. Simpson; Nicolas Escoffier

Changes in the intensity of both vocal and nonvocal sounds can be emotionally relevant. However, as only vocal sounds directly reflect communicative intent, intensity change of vocal but not nonvocal sounds is socially relevant. Here we investigated whether a change in sound intensity is processed differently depending on its social relevance. To this end, participants listened passively to a sequence of vocal or nonvocal sounds that contained rare deviants which differed from standards in sound intensity. Concurrently recorded event-related potentials (ERPs) revealed a mismatch negativity (MMN) and P300 effect for intensity change. Direction of intensity change was of little importance for vocal stimulus sequences, which recruited enhanced sensory and attentional resources for both loud and soft deviants. In contrast, intensity change in nonvocal sequences recruited more sensory and attentional resources for loud as compared to soft deviants. This was reflected in markedly larger MMN/P300 amplitudes and shorter P300 latencies for the loud as compared to soft nonvocal deviants. Furthermore, while the processing pattern observed for nonvocal sounds was largely comparable between men and women, sex differences for vocal sounds suggest that women were more sensitive to their social relevance. These findings extend previous evidence of sex differences in vocal processing and add to reports of voice-specific processing mechanisms by demonstrating that simple acoustic change recruits more processing resources if it is socially relevant.


Human Brain Mapping | 2013

Emotional expressions in voice and music: Same code, same effect?

Nicolas Escoffier; Jidan Zhong; Annett Schirmer; Anqi Qiu

Scholars have documented similarities in the way voice and music convey emotions. By using functional magnetic resonance imaging (fMRI) we explored whether these similarities imply overlapping processing substrates. We asked participants to trace changes in either the emotion or pitch of vocalizations and music using a joystick. Compared to music, vocalizations more strongly activated superior and middle temporal cortex, cuneus, and precuneus. However, despite these differences, overlapping rather than differing regions emerged when comparing emotion with pitch tracing for music and vocalizations, respectively. Relative to pitch tracing, emotion tracing activated medial superior frontal and anterior cingulate cortex regardless of stimulus type. Additionally, we observed emotion-specific effects in primary and secondary auditory cortex as well as in medial frontal cortex that were comparable for voice and music. Together these results indicate that similar mechanisms support emotional inferences from vocalizations and music and that these mechanisms tap into a general system involved in social cognition.


Clinical Neurophysiology | 2010

Emotional MMN: Anxiety and heart rate correlate with the ERP signature for auditory change detection

Annett Schirmer; Nicolas Escoffier

OBJECTIVE: Previous work established the mismatch negativity (MMN) as a correlate of pre-attentive auditory change detection. The present study aimed at investigating the relationship between the MMN and emotional processes associated with the detection of change. METHODS: To this end, we assessed state anxiety with a questionnaire and subsequently recorded the electroencephalogram (EEG) and heart rate while participants watched a silent movie and listened to a task-irrelevant auditory oddball sequence. The oddball sequence comprised meaningless syllables, of which some were deviants spoken with an angry or neutral voice. RESULTS: The MMN to angry voice deviants was larger than that to neutral deviants and correlated positively with ensuing heart rate acceleration. Additionally, both the MMN and heart rate acceleration to angry voice deviants increased with increasing state anxiety. A similar effect for neutral voice deviants was non-significant. CONCLUSION: Taken together, these results suggest that the pre-attentive processing of threat, as reflected by the MMN, is linked to an activation of the sympathetic nervous system. Moreover, this link is more strongly activated in individuals with high state anxiety. SIGNIFICANCE: Thus, the MMN may be used as a marker for an individual's state-dependent sensitivity to unattended, emotionally relevant change.
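The MMN in studies like this one is conventionally quantified as a deviant-minus-standard difference wave. A minimal sketch of that computation on synthetic data (the epoch shapes, sampling rate, and 100-200 ms search window are illustrative assumptions, not parameters taken from the study):

```python
import numpy as np

def mmn_difference_wave(standard_epochs, deviant_epochs):
    """Average each condition across trials and subtract:
    MMN = ERP(deviant) - ERP(standard). Inputs are (trials, samples)."""
    return deviant_epochs.mean(axis=0) - standard_epochs.mean(axis=0)

def mmn_peak_amplitude(diff_wave, times, window=(0.1, 0.2)):
    """Most negative value of the difference wave inside a search window
    (100-200 ms post-onset is a common, but here assumed, choice)."""
    mask = (times >= window[0]) & (times <= window[1])
    return diff_wave[mask].min()

# Synthetic example: 500 ms epochs at 250 Hz; deviants carry an extra
# negativity (a Gaussian dip) around 150 ms.
rng = np.random.default_rng(0)
times = np.arange(0, 0.5, 1 / 250)
standard = rng.normal(0, 0.5, (100, times.size))
deviant = rng.normal(0, 0.5, (100, times.size)) - 2.0 * np.exp(
    -((times - 0.15) ** 2) / (2 * 0.02 ** 2))

diff = mmn_difference_wave(standard, deviant)
print(mmn_peak_amplitude(diff, times))  # clearly negative, near -2 microvolts
```

Averaging across trials suppresses the noise while the condition difference survives, which is why the subtraction isolates the change-detection response.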


Emotion | 2006

Task and sex modulate the brain response to emotional incongruity in Asian listeners.

Annett Schirmer; Ming Lui; Burkhard Maess; Nicolas Escoffier; Mandy Chan; Trevor B. Penney

In order to recognize banter or sarcasm in social interactions, listeners must integrate verbal and vocal emotional expressions. Here, we investigated event-related potential correlates of this integration in Asian listeners. We presented emotional words spoken with congruous or incongruous emotional prosody. When listeners classified word meaning as positive or negative and ignored prosody, incongruous trials elicited a larger late positivity than congruous trials in women but not in men. Sex differences were absent when listeners evaluated the congruence between word meaning and emotional prosody. The similarity of these results to those obtained in Western listeners suggests that sex differences in emotional speech processing depend on attentional focus and may reflect culturally independent mechanisms.


NeuroImage | 2015

Auditory rhythms entrain visual processes in the human brain: Evidence from evoked oscillations and event-related potentials

Nicolas Escoffier; Christoph Herrmann; Annett Schirmer

Temporal regularities in the environment are thought to guide the allocation of attention in time. Here, we explored whether entrainment of neuronal oscillations underpins this phenomenon. Participants viewed a regular stream of images in silence, or in-synchrony or out-of-synchrony with an unmarked beat position of a slow (1.3 Hz) auditory rhythm. Focusing on occipital recordings, we analyzed evoked oscillations shortly before and event-related potentials (ERPs) shortly after image onset. The phase of beta-band oscillations in the in-synchrony condition differed from that in the out-of-synchrony and silence conditions. Additionally, ERPs revealed rhythm effects for a stimulus onset potential (SOP) and the N1. Both were more negative for the in-synchrony as compared to the out-of-synchrony and silence conditions and their amplitudes positively correlated with the beta phase effects. Taken together, these findings indicate that rhythmic expectations are supported by a reorganization of neural oscillations that seems to benefit stimulus processing at expected time points. Importantly, this reorganization emerges from global rhythmic cues, across modalities, and for frequencies significantly higher than the external rhythm. As such, our findings support the idea that entrainment of neuronal oscillations represents a general mechanism through which the brain uses predictive elements in the environment to optimize attention and stimulus perception.


Psychoneuroendocrinology | 2008

What grabs his attention but not hers? Estrogen correlates with neurophysiological measures of vocal change detection

Annett Schirmer; Nicolas Escoffier; Qing Yang Li; Hui Li; Jennifer Strafford-Wilson; Wan-I Li

Prior research revealed sex differences in the processing of unattended changes in speaker prosody. The present study aimed at investigating the role of estrogen in mediating these effects. To this end, the electroencephalogram (EEG) was recorded while participants watched a silent movie with subtitles and passively listened to a syllable sequence that contained occasional changes in speaker prosody. In one block, these changes were neutral, whereas in another block they were emotional. Estrogen values were obtained for each participant and correlated with the mismatch negativity (MMN) amplitude elicited in the EEG. As predicted, female listeners had higher estrogen values than male listeners and showed reduced MMN amplitudes to neutral as compared to emotional change in speaker prosody. Moreover, in both male and female listeners, MMN amplitudes were negatively correlated with estrogen when the change in speaker prosody was neutral, but not when it was emotional. This suggests that estrogen is associated with reduced distractibility by neutral, but not emotional, events. Emotional events are spared from this reduction in distractibility and more likely to penetrate voluntary attention directed elsewhere. Taken together, the present findings provide evidence for a role of estrogen in human cognition and emotion.


Timing & Time Perception | 2016

Emotional Voices Distort Time: Behavioral and Neural Correlates

Annett Schirmer; Nicolas Escoffier; Tabitha Ng; Trevor B. Penney

The present study explored the effect of vocally expressed emotions on duration perception. Recordings of the syllable ‘ah’ spoken in a disgusted (negative), surprised (positive), and neutral voice were subjected to a compression/stretching algorithm producing seven durations ranging from 300 to 1200 ms. The resulting stimuli served in a duration bisection procedure in which participants indicated whether a stimulus was more similar in duration to a previously studied 300 ms (short) or 1200 ms (long) 440 Hz tone. Behavioural results indicate that disgusted expressions were perceived as shorter than surprised expressions in both men and women and this effect was related to perceived valence. Additionally, both emotional expressions were perceived as shorter than neutral expressions in women only and this effect was related to perceived arousal. Event-related potentials showed an influence of emotion and rate of acoustic change (fast for compressed/short and slow for stretched/long stimuli) on stimulus encoding in women only. Based on these findings, we suggest that emotions interfere with temporal processes and facilitate the influence of contextual information (e.g., rate of acoustic change, attention) on duration judgements. Because women are more sensitive than men to unattended vocal emotions, their temporal judgements are more strongly distorted.
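A standard summary of a duration bisection task like the one above is the proportion of "long" responses at each probe duration and the bisection point, the duration judged "short" and "long" equally often. A sketch of that summary (the linear spacing of the seven probe durations and the linear-interpolation estimate are common analysis choices, assumed here rather than taken from the study):

```python
import numpy as np

# Seven probe durations (ms) spanning the 300 ms and 1200 ms anchors used in
# the study; the linear spacing is an assumption.
durations = np.linspace(300, 1200, 7)

def bisection_point(durations, p_long):
    """Duration at which p('long') crosses 0.5, by linear interpolation
    between the two neighbouring probe durations."""
    above = np.argmax(p_long >= 0.5)  # first probe judged mostly 'long'
    d0, d1 = durations[above - 1], durations[above]
    p0, p1 = p_long[above - 1], p_long[above]
    return d0 + (0.5 - p0) * (d1 - d0) / (p1 - p0)

# Hypothetical psychometric data: p('long') rises with probe duration.
p_long = np.array([0.02, 0.10, 0.30, 0.55, 0.80, 0.95, 0.99])
print(bisection_point(durations, p_long))  # → 720.0
```

On this reading, a stimulus "perceived as shorter" shows up as a rightward shift of the whole curve, i.e., a larger bisection point for that emotion condition.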


Dysphagia | 2016

Evaluating the Training Effects of Two Swallowing Rehabilitation Therapies Using Surface Electromyography—Chin Tuck Against Resistance (CTAR) Exercise and the Shaker Exercise

Wei Ping Sze; Wai Lam Yoon; Nicolas Escoffier; Susan J. Rickard Liow

In this study, the efficacy of two dysphagia interventions, the Chin Tuck against Resistance (CTAR) and Shaker exercises, was evaluated based on two principles in exercise science: muscle specificity and training intensity. Both exercises were developed to strengthen the suprahyoid muscles, whose contractions facilitate the opening of the upper esophageal sphincter, thereby improving bolus transfer. Thirty-nine healthy adults performed two trials of both exercises in counter-balanced order. Surface electromyography (sEMG) recordings were simultaneously collected from the suprahyoid muscle group and the sternocleidomastoid muscle during the exercises. Converging results using sEMG amplitude analyses suggested that the CTAR was more specific in targeting the suprahyoid muscles than the Shaker exercise. Fatigue analyses of the sEMG signals further indicated that the suprahyoid muscle group was equally or significantly more fatigued (depending on the metric) when participants carried out the CTAR compared to the Shaker exercise. Importantly, unlike during the Shaker exercise, the sternocleidomastoid muscles were significantly less activated and fatigued during the CTAR. Lowering the chin against resistance is therefore sufficiently specific and intense to fatigue the suprahyoid muscles.
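Two widely used sEMG summary measures map onto the two analyses mentioned above: RMS amplitude indexes activation, and a downward shift in median frequency across repetitions is a common fatigue indicator. A minimal numpy-only sketch on a synthetic signal (the sampling rate and signal are illustrative; this is not the metric set used in the study):

```python
import numpy as np

def rms_amplitude(semg):
    """Root-mean-square of the sEMG signal: a standard activation measure."""
    return np.sqrt(np.mean(semg ** 2))

def median_frequency(semg, fs):
    """Frequency below which half of the signal's spectral power lies.
    A decline over repetitions is a common sign of muscle fatigue."""
    power = np.abs(np.fft.rfft(semg)) ** 2
    freqs = np.fft.rfftfreq(semg.size, 1 / fs)
    cumulative = np.cumsum(power)
    return freqs[np.searchsorted(cumulative, cumulative[-1] / 2)]

# Synthetic check: a 1000 Hz recording dominated by an 80 Hz component
# should yield a median frequency of 80 Hz.
fs = 1000
t = np.arange(0, 2, 1 / fs)
signal = np.sin(2 * np.pi * 80 * t)
print(median_frequency(signal, fs))  # → 80.0
```

Comparing these measures between the target (suprahyoid) and non-target (sternocleidomastoid) channels is one way to operationalize the specificity and intensity principles the study invokes.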


Frontiers in Psychology | 2016

Detecting Temporal Change in Dynamic Sounds: On the Role of Stimulus Duration, Speed, and Emotion

Annett Schirmer; Nicolas Escoffier; Xiaoqin Cheng; Yenju Feng; Trevor B. Penney

For dynamic sounds, such as vocal expressions, duration often varies alongside speed. Compared to longer sounds, shorter sounds unfold more quickly. Here, we asked whether listeners implicitly use this confound when representing temporal regularities in their environment. In addition, we explored the role of emotions in this process. Using a mismatch negativity (MMN) paradigm, we asked participants to watch a silent movie while passively listening to a stream of task-irrelevant sounds. In Experiment 1, one surprised and one neutral vocalization were compressed and stretched to create stimuli of 378 and 600 ms duration. Stimuli were presented in four blocks, two of which used surprised and two of which used neutral expressions. In one surprised and one neutral block, short and long stimuli served as standards and deviants, respectively. In the other two blocks, the assignment of standards and deviants was reversed. We observed a climbing MMN-like negativity shortly after deviant onset, which suggests that listeners implicitly track sound speed and detect speed changes. Additionally, this MMN-like effect emerged earlier and was larger for long than short deviants, suggesting greater sensitivity to duration increments or slowing down than to decrements or speeding up. Last, deviance detection was facilitated in surprised relative to neutral blocks, indicating that emotion enhances temporal processing. Experiment 2 was comparable to Experiment 1 with the exception that sounds were spectrally rotated to remove vocal emotional content. This abolished the emotional processing benefit, but preserved the other effects. Together, these results provide insights into listener sensitivity to sound speed and raise the possibility that speed biases duration judgements implicitly in a feed-forward manner. Moreover, this bias may be amplified for duration increments relative to decrements and within an emotional relative to a neutral stimulus context.

Collaboration


Dive into Nicolas Escoffier's collaborations.

Top Co-Authors

Annett Schirmer (National University of Singapore)
Trevor B. Penney (National University of Singapore)
April Ching (National University of Singapore)
Mandy Chan (The Chinese University of Hong Kong)
Ming Lui (Hong Kong Baptist University)
Anqi Qiu (National University of Singapore)