
Publication


Featured research published by Daniela Sammler.


Nature Neuroscience | 2004

Music, language and meaning: Brain signatures of semantic processing

Stefan Koelsch; Elisabeth Kasper; Daniela Sammler; Katrin Schulze; Thomas C. Gunter; Angela D. Friederici

Semantics is a key feature of language, but whether or not music can activate brain mechanisms related to the processing of semantic meaning is not known. We compared processing of semantic meaning in language and music, investigating the semantic priming effect as indexed by behavioral measures and by the N400 component of the event-related brain potential (ERP) measured by electroencephalography (EEG). Human subjects were presented visually with target words after hearing either a spoken sentence or a musical excerpt. Target words that were semantically unrelated to prime sentences elicited a larger N400 than did target words that were preceded by semantically related sentences. In addition, target words that were preceded by semantically unrelated musical primes showed a similar N400 effect, as compared to target words preceded by related musical primes. The N400 priming effect did not differ between language and music with respect to time course, strength or neural generators. Our results indicate that both music and language can prime the meaning of a word, and that music can, like language, determine physiological indices of semantic processing.


Current Biology | 2009

Universal Recognition of Three Basic Emotions in Music

Thomas Fritz; Sebastian Jentschke; Nathalie Gosselin; Daniela Sammler; Isabelle Peretz; Robert Turner; Angela D. Friederici; Stefan Koelsch

It has long been debated which aspects of music perception are universal and which are developed only after exposure to a specific musical culture. Here, we report a cross-cultural study with participants from a native African population (Mafa) and Western participants, with both groups being naive to the music of the other culture. Experiment 1 investigated the ability to recognize three basic emotions (happy, sad, scared/fearful) expressed in Western music. Results show that the Mafas recognized happy, sad, and scared/fearful Western music excerpts above chance, indicating that the expression of these basic emotions in Western music can be recognized universally. Experiment 2 examined how a spectral manipulation of original, naturalistic music affects the perceived pleasantness of music in Western as well as in Mafa listeners. The spectral manipulation modified, among other factors, the sensory dissonance of the music. The data show that both groups preferred original Western music and also original Mafa music over their spectrally manipulated versions. It is likely that the sensory dissonance produced by the spectral manipulation was at least partly responsible for this effect, suggesting that consonance and permanent sensory dissonance universally influence the perceived pleasantness of music.


Journal of Cognitive Neuroscience | 2005

Interaction between Syntax Processing in Language and in Music: An ERP Study

Stefan Koelsch; Thomas C. Gunter; Matthias Wittfoth; Daniela Sammler

The present study investigated simultaneous processing of language and music using visually presented sentences and auditorily presented chord sequences. Music-syntactically regular and irregular chord functions were presented synchronously with syntactically correct or incorrect words, or with words that had either a high or a low semantic cloze probability. Music-syntactically irregular chords elicited an early right anterior negativity (ERAN). Syntactically incorrect words elicited a left anterior negativity (LAN). The LAN was clearly reduced when words were presented simultaneously with music-syntactically irregular chord functions. Processing of high and low cloze-probability words as indexed by the N400 was not affected by the presentation of irregular chord functions. In a control experiment, the LAN was not affected by physically deviant tones that elicited a mismatch negativity (MMN). Results demonstrate that processing of musical syntax (as reflected in the ERAN) interacts with the processing of linguistic syntax (as reflected in the LAN), and that this interaction is not due to a general effect of deviance-related negativities that precede an LAN. Findings thus indicate a strong overlap of neural resources involved in the processing of syntax in language and music.


NeuroImage | 2014

Structural connectivity differences in left and right temporal lobe epilepsy.

Pierre Besson; Vera Dinkelacker; Romain Valabregue; Lionel Thivard; Xavier Leclerc; Michel Baulac; Daniela Sammler; Olivier Colliot; Stéphane Lehéricy; Séverine Samson; Sophie Dupont

Our knowledge of temporal lobe epilepsy (TLE) with hippocampal sclerosis has evolved towards the view that this syndrome affects widespread brain networks. Diffusion weighted imaging studies have shown alterations of large white matter tracts, most notably in left temporal lobe epilepsy, but the degree of altered connections between cortical and subcortical structures remains to be clarified. We performed a whole brain connectome analysis in 39 patients with refractory temporal lobe epilepsy and unilateral hippocampal sclerosis (20 right and 19 left) and 28 healthy subjects. We performed whole-brain probabilistic fiber tracking using MRtrix and segmented 164 cortical and subcortical structures with Freesurfer. Individual structural connectivity graphs based on these 164 nodes were computed by mapping the mean fractional anisotropy (FA) onto each tract. Connectomes were then compared using two complementary methods: permutation tests for pair-wise connections and Network Based Statistics to probe for differences in large network components. Comparison of pair-wise connections revealed a marked reduction of connectivity between left TLE patients and controls, which was strongly lateralized to the ipsilateral temporal lobe. Specifically, infero-lateral cortex and temporal pole were strongly affected, and so was the perisylvian cortex. In contrast, for right TLE, focal connectivity loss was much less pronounced and restricted to bilateral limbic structures and right temporal cortex. Analysis of large network components furthermore revealed that both left and right hippocampal sclerosis affected diffuse global and interhemispheric connectivity. Thus, left temporal lobe epilepsy was associated with a much more pronounced pattern of reduced FA that included major landmarks of perisylvian language circuitry. These distinct patterns of connectivity associated with unilateral hippocampal sclerosis show how a focal pathology influences global network architecture, and how left- or right-sided lesions may have differential and specific impacts on cerebral connectivity.
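The pair-wise comparison described in this abstract, permutation tests on FA-weighted connection strengths between two groups, can be sketched in a few lines. This is a minimal illustration only, not the study's actual pipeline (which used MRtrix, Freesurfer, and Network Based Statistics); the function name and the array shapes are hypothetical.

```python
import numpy as np

def pairwise_permutation_test(group_a, group_b, n_perm=1000, seed=0):
    """Permutation test on each pair-wise connection of FA-weighted
    connectivity matrices.

    group_a, group_b: arrays of shape (n_subjects, n_nodes, n_nodes),
    each entry the mean FA mapped onto the tract between two nodes.
    Returns an (n_nodes, n_nodes) matrix of two-sided permutation
    p-values for the group difference in mean connection strength.
    """
    rng = np.random.default_rng(seed)
    data = np.concatenate([group_a, group_b])   # pool all subjects
    n_a = len(group_a)
    observed = group_a.mean(axis=0) - group_b.mean(axis=0)
    count = np.zeros_like(observed)
    for _ in range(n_perm):
        perm = rng.permutation(len(data))       # shuffle group labels
        diff = (data[perm[:n_a]].mean(axis=0)
                - data[perm[n_a:]].mean(axis=0))
        count += np.abs(diff) >= np.abs(observed)
    return (count + 1) / (n_perm + 1)           # add-one p-value estimate
```

A per-connection test like this needs correction for the many simultaneous comparisons, which is why the study complemented it with Network Based Statistics over large network components.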


Neuroreport | 2001

Differentiating ERAN and MMN: An ERP study

Stefan Koelsch; Thomas C. Gunter; Erich Schröger; Mari Tervaniemi; Daniela Sammler; Angela D. Friederici

In the present study, the early right-anterior negativity (ERAN) elicited by harmonically inappropriate chords during listening to music was compared to the frequency mismatch negativity (MMN) and the abstract-feature MMN. Results revealed that the amplitude of the ERAN, in contrast to the MMN, is specifically dependent on the degree of harmonic appropriateness. Thus, the ERAN is correlated with the cognitive processing of complex rule-based information, i.e. with the application of music–syntactic rules. Moreover, results showed that the ERAN, compared to the abstract-feature MMN, had both a longer latency, and a larger amplitude. The combined findings indicate that ERAN and MMN reflect different mechanisms of pre-attentive irregularity detection, and that, although both components have several features in common, the ERAN does not easily fit into the classical MMN framework. The present ERPs thus provide evidence for a differentiation of cognitive processes underlying the fast and pre-attentive processing of auditory information.
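Component comparisons like the ones in this abstract (amplitude and latency of ERAN versus MMN) are typically made on a time window of the averaged ERP waveform. A minimal sketch with a hypothetical function name, assuming a negative-going component so the peak is the most negative sample in the window:

```python
import numpy as np

def erp_window_measures(waveform_uv, times_ms, window_ms):
    """Mean amplitude (µV) and peak latency (ms) of a negative ERP
    component within a given time window of an averaged waveform."""
    t = np.asarray(times_ms, dtype=float)
    v = np.asarray(waveform_uv, dtype=float)
    mask = (t >= window_ms[0]) & (t <= window_ms[1])
    mean_amp = v[mask].mean()                    # mean amplitude in window
    peak_lat = t[mask][np.argmin(v[mask])]       # most negative sample
    return mean_amp, peak_lat
```

Applied to ERAN and MMN difference waves, larger (more negative) mean amplitude and larger peak latency values would correspond to the amplitude and latency differences reported here.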


Clinical Neurophysiology | 2006

Auditory processing during deep propofol sedation and recovery from unconsciousness

Stefan Koelsch; Wolfgang Heinke; Daniela Sammler; Derk Olthoff

OBJECTIVE Using evoked potentials, this study investigated effects of deep propofol sedation, and effects of recovery from unconsciousness, on the processing of auditory information with stimuli suited to elicit a physical MMN, and a (music-syntactic) ERAN. METHODS Levels of sedation were assessed using the Bispectral Index (BIS) and the Modified Observer's Assessment of Alertness and Sedation Scale (MOAAS). EEG measurements were performed during wakefulness, deep propofol sedation (MOAAS 2-3, mean BIS=68), and a recovery period. Between deep sedation and recovery period, the infusion rate of propofol was increased to achieve unconsciousness (MOAAS 0-1, mean BIS=35); EEG measurements of the recovery period were performed after subjects regained consciousness. RESULTS During deep sedation, the physical MMN was markedly reduced, but still significant. No ERAN was observed at this level. A clear P3a was elicited during deep sedation by those deviants that were task-relevant during the awake state. As soon as subjects regained consciousness during the recovery period, a normal MMN was elicited. By contrast, the P3a was absent in the recovery period, and the P3b was markedly reduced. CONCLUSIONS Results indicate that the auditory sensory memory (as indexed by the physical MMN) is still active, although strongly reduced, during deep sedation (MOAAS 2-3). The presence of the P3a indicates that attention-related processes are still operating at this level. Processes of syntactic analysis appear to be abolished during deep sedation. After propofol-induced anesthesia, the auditory sensory memory appears to operate normally as soon as subjects regain consciousness, whereas the attention-related processes indexed by P3a and P3b are markedly impaired. SIGNIFICANCE Results inform about the effects of sedative drugs on auditory and attention-related mechanisms. The findings are important because these mechanisms are prerequisites for auditory awareness, auditory learning and memory, as well as language perception during anesthesia.


Anesthesiology | 2004

Sequential effects of increasing propofol sedation on frontal and temporal cortices as indexed by auditory event-related potentials

Wolfgang Heinke; Ramona Kenntner; Thomas C. Gunter; Daniela Sammler; Derk Olthoff; Stefan Koelsch

BACKGROUND It is an open question whether cognitive processes of auditory perception that are mediated by functionally different cortices exhibit the same sensitivity to sedation. The auditory event-related potentials P1, mismatch negativity (MMN), and early right anterior negativity (ERAN) originate from different cortical areas and reflect different stages of auditory processing. The P1 originates mainly from the primary auditory cortex. The MMN is generated in or in the close vicinity of the primary auditory cortex but is also dependent on frontal sources. The ERAN mainly originates from frontal generators. The purpose of the study was to investigate the effects of increasing propofol sedation on different stages of auditory processing as reflected in P1, MMN, and ERAN. METHODS The P1, the MMN, and the ERAN were recorded preoperatively in 18 patients during four levels of anesthesia adjusted with target-controlled infusion: awake state (target concentration of propofol 0.0 μg/ml), light sedation (0.5 μg/ml), deep sedation (1.5 μg/ml), and unconsciousness (2.5–3.0 μg/ml). Simultaneously, propofol anesthesia was assessed using the Bispectral Index. RESULTS Propofol sedation resulted in a progressive decrease in amplitudes and an increase of latencies with a similar pattern for MMN and ERAN. MMN and ERAN were elicited during sedation but were abolished during unconsciousness. In contrast, the amplitude of the P1 was unchanged by sedation but markedly decreased during unconsciousness. CONCLUSION The results indicate differential effects of propofol sedation on cognitive functions that involve mainly the auditory cortices and cognitive functions that involve the frontal cortices.


European Journal of Neuroscience | 2007

A cardiac signature of emotionality

Stefan Koelsch; Andrew Remppis; Daniela Sammler; Sebastian Jentschke; Daniel Mietchen; Thomas Fritz; Hendrik Bonnemeier; Walter A. Siebel

Human personality has brain correlates that exert manifold influences on biological processes. This study investigates relations between emotional personality and heart activity. Our data demonstrate that emotional personality is related to a specific cardiac amplitude signature in the resting electrocardiogram (ECG). Two experiments using functional magnetic resonance imaging show that this signature correlates with brain activity in the amygdala and the hippocampus during the processing of musical stimuli with emotional valence. Additionally, this cardiac signature correlates with subjective indices of emotionality (as measured by the Revised Toronto Alexithymia Scale), and with both time and frequency domain measures of heart rate variability. The results demonstrate intricate connections between emotional personality and the heart by showing that ECG amplitude patterns provide considerably more information about an individual's emotionality than previously believed. The finding of a cardiac signature of emotional personality opens new perspectives for the investigation of relations between emotional dysbalance and cardiovascular disease.
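The time-domain heart rate variability measures mentioned in this abstract can be computed directly from a series of RR (inter-beat) intervals. The abstract does not list which measures were used, so the sketch below assumes three standard ones (SDNN, RMSSD, pNN50):

```python
import numpy as np

def hrv_time_domain(rr_ms):
    """Standard time-domain HRV measures from RR intervals in milliseconds."""
    rr = np.asarray(rr_ms, dtype=float)
    diff = np.diff(rr)                            # successive-interval differences
    return {
        "SDNN": rr.std(ddof=1),                   # overall variability (ms)
        "RMSSD": np.sqrt(np.mean(diff ** 2)),     # beat-to-beat variability (ms)
        "pNN50": np.mean(np.abs(diff) > 50) * 100,  # % successive diffs > 50 ms
    }
```

Frequency-domain measures (e.g. low- and high-frequency spectral power) would additionally require resampling the RR series and estimating its power spectrum, which is omitted here.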


Brain | 2010

Prosody meets syntax: the role of the corpus callosum

Daniela Sammler; Sonja A. Kotz; Korinna Eckstein; Derek V. M. Ott; Angela D. Friederici

Contemporary neural models of auditory language comprehension propose that the two hemispheres are differently specialized in the processing of segmental and suprasegmental features of language. While segmental processing of syntactic and lexical semantic information is predominantly assigned to the left hemisphere, the right hemisphere is thought to have a primacy for the processing of suprasegmental prosodic information such as accentuation and boundary marking. A dynamic interplay between the hemispheres is assumed to allow for the timely coordination of both information types. The present event-related potential study investigated whether the anterior and/or posterior portion of the corpus callosum provide the crucial brain basis for the online interaction of syntactic and prosodic information. Patients with lesions in the anterior two-thirds of the corpus callosum connecting orbital and frontal structures, or the posterior third of the corpus callosum connecting temporal, parietal and occipital areas, as well as matched healthy controls, were tested in a paradigm that crossed syntactic and prosodic manipulations. An anterior negativity elicited by a mismatch between syntactically predicted phrase structure and prosodic intonation was analysed as a marker for syntax-prosody interaction. Healthy controls and patients with lesions in the anterior corpus callosum showed this anterior negativity, demonstrating an intact interplay between syntax and prosody. No such effect was found in patients with lesions in the posterior corpus callosum, although they exhibited intact, prosody-independent syntactic processing comparable with healthy controls and patients with lesions in the anterior corpus callosum. These data support the interplay between the speech processing streams in the left and right hemispheres via the posterior portion of the corpus callosum, forming the brain basis for the coordination and integration of local syntactic and prosodic features during auditory speech comprehension.


Frontiers in Psychology | 2012

Perception of Words and Pitch Patterns in Song and Speech

Julia Merrill; Daniela Sammler; Marc Bangert; Dirk Goldhahn; Gabriele Lohmann; Robert Turner; Angela D. Friederici

This functional magnetic resonance imaging study examines shared and distinct cortical areas involved in the auditory perception of song and speech at the level of their underlying constituents: words and pitch patterns. Univariate and multivariate analyses were performed to isolate the neural correlates of the word- and pitch-based discrimination between song and speech, corrected for rhythmic differences in both. Therefore, six conditions, arranged in a subtractive hierarchy, were created: sung sentences including words, pitch and rhythm; hummed speech prosody and song melody containing only pitch patterns and rhythm; and as a control the pure musical or speech rhythm. Systematic contrasts between these balanced conditions following their hierarchical organization showed a great overlap between song and speech at all levels in the bilateral temporal lobe, but suggested a differential role of the inferior frontal gyrus (IFG) and intraparietal sulcus (IPS) in processing song and speech. While the left IFG coded for spoken words and showed predominance over the right IFG in prosodic pitch processing, an opposite lateralization was found for pitch in song. The IPS showed sensitivity to discrete pitch relations in song as opposed to the gliding pitch in speech. Finally, the superior temporal gyrus and premotor cortex coded for general differences between words and pitch patterns, irrespective of whether they were sung or spoken. Thus, song and speech share many features, which are reflected in a fundamental similarity of brain areas involved in their perception. However, fine-grained acoustic differences on word and pitch level are reflected in the IPS and the lateralized activity of the IFG.

Collaboration


Dive into Daniela Sammler's collaborations.

Top Co-Authors

Pascal Belin

Université de Montréal
