Stefanie Schelinski
Max Planck Society
Publications
Featured research published by Stefanie Schelinski.
PLOS ONE | 2008
Stefan Koelsch; Simone Kilches; Nikolaus Steinbeis; Stefanie Schelinski
Background: There is a lack of neuroscientific studies investigating music processing with naturalistic stimuli, and brain responses to real music are, thus, largely unknown.
Methodology/Principal Findings: This study investigates event-related brain potentials (ERPs), skin conductance responses (SCRs) and heart rate (HR) elicited by unexpected chords of piano sonatas as they were originally arranged by composers, and as they were played by professional pianists. From the musical excerpts played by the pianists (with emotional expression), we also created versions without variations in tempo and loudness (without musical expression) to investigate effects of musical expression on ERPs and SCRs. Compared to expected chords, unexpected chords elicited an early right anterior negativity (ERAN, reflecting music-syntactic processing) and an N5 (reflecting processing of meaning information) in the ERPs, as well as clear changes in the SCRs (reflecting that unexpected chords also elicited emotional responses). The ERAN was not influenced by emotional expression, whereas N5 potentials elicited by chords in general (regardless of their chord function) differed between the expressive and the non-expressive condition.
Conclusions/Significance: These results show that the neural mechanisms of music-syntactic processing operate independently of the emotional qualities of a stimulus, justifying the use of stimuli without emotional expression to investigate the cognitive processing of musical structure. Moreover, the data indicate that musical expression affects the neural mechanisms underlying the processing of musical meaning. Our data are the first to reveal influences of musical performance on ERPs and SCRs, and to show physiological responses to unexpected chords in naturalistic music.
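The core ERP logic of this design, comparing responses to unexpected versus expected chords, reduces to trial averaging followed by a difference wave. Below is a minimal numpy sketch of that computation; the array shapes, sampling rate, and time window are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Illustrative shapes: trials x channels x time samples (invented, not from the paper).
# Assume EEG epochs, baseline-corrected and time-locked to chord onset.
rng = np.random.default_rng(0)
expected_epochs = rng.normal(size=(120, 64, 500))    # epochs for expected chords
unexpected_epochs = rng.normal(size=(120, 64, 500))  # epochs for unexpected chords

# Average across trials to obtain the ERP for each condition.
erp_expected = expected_epochs.mean(axis=0)
erp_unexpected = unexpected_epochs.mean(axis=0)

# The difference wave isolates the response to the harmonic expectancy violation;
# the ERAN appears as a negativity over right-anterior channels roughly 150-250 ms.
difference_wave = erp_unexpected - erp_expected

# Quantify the effect as the mean amplitude in a time window of interest.
sfreq = 500.0                                    # sampling rate in Hz (assumed)
window = slice(int(0.15 * sfreq), int(0.25 * sfreq))
eran_amplitude = difference_wave[:, window].mean(axis=1)  # one value per channel
print(eran_amplitude.shape)
```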
NeuroImage | 2013
Laura C. Anderson; Danielle Z. Bolling; Stefanie Schelinski; Marika C. Coffman; Kevin A. Pelphrey; Martha D. Kaiser
Disorders related to social functioning, including autism and schizophrenia, differ drastically in incidence and severity between males and females. Little is known about the neural systems underlying these sex-linked differences in risk and resiliency. Using functional magnetic resonance imaging and a task involving the visual perception of point-light displays of coherent and scrambled biological motion, we discovered sex differences in the development of neural systems for basic social perception. In adults, we identified enhanced activity during coherent biological motion perception in females relative to males in a network of brain regions previously implicated in social perception, including the amygdala, medial temporal gyrus, and temporal pole. These sex differences were less pronounced in our sample of school-age youth. We hypothesize that the robust neural circuitry supporting social perception in females, which diverges from males beginning in childhood, may underlie sex differences in disorders related to social processing.
Current Biology | 2014
Claudia Roswandowitz; Samuel R. Mathias; Florian Hintz; Stefanie Schelinski; Katharina von Kriegstein
Recognizing other individuals is an essential skill in humans and in other species. Over the last decade, it has become increasingly clear that person-identity recognition abilities are highly variable. Roughly 2% of the population has developmental prosopagnosia, a congenital deficit in recognizing others by their faces. It is currently unclear whether developmental phonagnosia, a deficit in recognizing others by their voices, is equally prevalent, or even whether it actually exists. Here, we aimed to identify cases of developmental phonagnosia. We collected more than 1,000 data sets from self-selected German individuals by using a web-based screening test that was designed to assess their voice-recognition abilities. We then examined potentially phonagnosic individuals by using a comprehensive laboratory test battery. We found two novel cases of phonagnosia: AS, a 32-year-old female, and SP, a 32-year-old male; both are otherwise healthy academics, have normal hearing, and show no pathological abnormalities in brain structure. The two cases have comparable patterns of impairments: both performed at least 2 SDs below the level of matched controls on tests that required learning new voices, judging the familiarity of famous voices, and discriminating pitch differences between voices. In both cases, only voice-identity processing per se was affected: face recognition, speech intelligibility, emotion recognition, and musical ability were all comparable to controls. The findings confirm the existence of developmental phonagnosia as a modality-specific impairment and allow a first rough prevalence estimate.
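The case-definition criterion used here, performance at least 2 SDs below matched controls, corresponds to a simple z-score cutoff. A sketch with invented accuracy scores, assuming an accuracy-based voice-learning test:

```python
import numpy as np

# Hypothetical accuracy scores on a voice-learning test (invented, not real data).
control_scores = np.array([0.91, 0.88, 0.93, 0.85, 0.90, 0.87, 0.92, 0.89])
case_score = 0.61  # a potentially phonagnosic individual

# Deficit criterion: performance at least 2 SDs below the control mean.
z = (case_score - control_scores.mean()) / control_scores.std(ddof=1)
is_impaired = z <= -2.0
print(f"z = {z:.2f}, impaired: {is_impaired}")
```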
Neuropsychologia | 2014
Stefanie Schelinski; Philipp Riedel; Katharina von Kriegstein
In auditory-only conditions, for example when we listen to someone on the phone, it is essential to recognize quickly and accurately what is said (speech recognition). Previous studies have shown that speech recognition performance in auditory-only conditions is better if the speaker is known not only by voice, but also by face. Here, we tested the hypothesis that such an improvement in auditory-only speech recognition depends on the ability to lip-read. To test this, we recruited a group of adults with autism spectrum disorder (ASD), a condition associated with difficulties in lip-reading, and typically developed controls. All participants were trained to identify six speakers by name and voice. Three speakers were learned by a video showing their face, and three others were learned in a matched control condition without a face. After training, participants performed an auditory-only speech recognition test that consisted of sentences spoken by the trained speakers. As a control condition, the test also included speaker identity recognition on the same auditory material. The results showed that, in the control group, performance in speech recognition was improved for speakers known by face in comparison to speakers learned in the matched control condition without a face. The ASD group lacked such a performance benefit. For the ASD group, auditory-only speech recognition was even worse for speakers known by face compared to speakers not known by face. In speaker identity recognition, the ASD group performed worse than the control group, independent of whether the speakers were learned with or without a face. Two additional visual experiments showed that the ASD group performed worse in lip-reading, whereas face identity recognition was within the normal range. The findings support the view that auditory-only communication involves specific visual mechanisms. Further, they indicate that in ASD, speaker-specific dynamic visual information is not available to optimize auditory-only speech recognition.
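The "face benefit" described here is a within-participant difference score: auditory-only speech recognition for speakers learned with a face minus speakers learned without. A hedged sketch of that comparison with invented numbers; scipy's paired t-test stands in for whatever statistics the authors actually used.

```python
import numpy as np
from scipy import stats

# Hypothetical per-participant speech recognition accuracy (invented numbers).
acc_face = np.array([0.82, 0.79, 0.85, 0.80, 0.83, 0.78])     # speakers learned by face
acc_no_face = np.array([0.75, 0.74, 0.80, 0.73, 0.79, 0.72])  # matched control condition

# The face benefit is the within-participant difference score.
face_benefit = acc_face - acc_no_face

# A paired t-test asks whether the benefit differs from zero within a group.
t, p = stats.ttest_rel(acc_face, acc_no_face)
print(f"mean benefit = {face_benefit.mean():.3f}, t = {t:.2f}, p = {p:.3f}")
```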
Social Cognitive and Affective Neuroscience | 2016
Stefanie Schelinski; Kamila Borowiak; Katharina von Kriegstein
The ability to recognise the identity of others is a key requirement for successful communication. Brain regions that respond selectively to voices exist in humans from early infancy on. Currently, it is unclear whether dysfunction of these voice-sensitive regions can explain voice identity recognition impairments. Here, we used two independent functional magnetic resonance imaging studies to investigate voice processing in a population that has been reported to have no voice-sensitive regions: autism spectrum disorder (ASD). Our results refute the earlier report that individuals with ASD have no responses in voice-sensitive regions: Passive listening to vocal, compared to non-vocal, sounds elicited typical responses in voice-sensitive regions in the high-functioning ASD group and controls. In contrast, the ASD group had a dysfunction in voice-sensitive regions during voice identity but not speech recognition in the right posterior superior temporal sulcus/gyrus (STS/STG)—a region implicated in processing complex spectrotemporal voice features and unfamiliar voices. The right anterior STS/STG correlated with voice identity recognition performance in controls but not in the ASD group. The findings suggest that right STS/STG dysfunction is critical for explaining voice recognition impairments in high-functioning ASD and show that ASD is not characterised by a general lack of voice-sensitive responses.
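The brain-behaviour result here (right anterior STS/STG response tracking voice identity recognition in controls) is, computationally, a per-participant correlation between an ROI contrast estimate and a behavioural score. A sketch with hypothetical values, not data from the study:

```python
import numpy as np
from scipy import stats

# Hypothetical per-participant values (illustrative only): the mean contrast
# estimate in a right anterior STS/STG region of interest, and voice identity
# recognition accuracy for the same participants.
roi_response = np.array([1.2, 0.8, 1.5, 0.9, 1.1, 1.4, 0.7, 1.3])
recognition_acc = np.array([0.80, 0.72, 0.88, 0.70, 0.78, 0.86, 0.66, 0.84])

# Brain-behaviour association: does a stronger voice-region response
# go with better voice identity recognition?
r, p = stats.pearsonr(roi_response, recognition_acc)
print(f"r = {r:.2f}, p = {p:.3f}")
```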
Autism Research | 2017
Stefanie Schelinski; Claudia Roswandowitz; Katharina von Kriegstein
People with autism spectrum disorder (ASD) have difficulties in identifying another person by face and voice. This might contribute considerably to the development of difficulties in social cognition and interaction. The characteristics of the voice recognition deficit in ASD are unknown. Here, we used a comprehensive behavioral test battery to systematically investigate voice processing in high-functioning ASD (n = 16) and typically developed pair-wise matched controls (n = 16). The ASD group had particular difficulties with discriminating, learning, and recognizing unfamiliar voices, while recognizing famous voices was relatively intact. Tests on acoustic processing abilities showed that the ASD group had a specific deficit in vocal pitch perception that was dissociable from otherwise intact acoustic processing (i.e., musical pitch, musical timbre, and vocal timbre perception). Our results allow a characterization of the voice recognition deficit in ASD: the findings indicate that in high-functioning ASD, the difficulty in recognizing voices is particularly pronounced for learning novel voices and recognizing unfamiliar people's voices. This pattern might be indicative of difficulties with integrating the acoustic characteristics of the voice into a coherent percept—a function that has been previously associated with voice-selective regions in the posterior superior temporal sulcus/gyrus of the human brain. Autism Res 2017, 10: 155–168.
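Pitch discrimination tests of the kind this battery includes typically present two sounds differing in fundamental frequency (F0) and ask which is higher. As a rough illustration of the stimulus logic only (the study's actual materials were voice and music recordings), here is a sketch that synthesizes two harmonic complex tones differing by half a semitone:

```python
import numpy as np

def harmonic_tone(f0, duration=0.5, sr=44100, n_harmonics=8):
    """Synthesize a harmonic complex tone with fundamental frequency f0 (Hz)."""
    t = np.arange(int(duration * sr)) / sr
    tone = sum(np.sin(2 * np.pi * f0 * k * t) / k for k in range(1, n_harmonics + 1))
    return tone / np.abs(tone).max()  # normalize to avoid clipping

# A two-interval pitch discrimination trial: standard versus comparison tone.
# These f0 values and the synthetic tones are illustrative assumptions.
standard = harmonic_tone(220.0)                       # A3
comparison = harmonic_tone(220.0 * 2 ** (0.5 / 12))   # half a semitone higher

# The listener judges which interval was higher in pitch; the smallest
# reliably detected f0 difference estimates the discrimination threshold.
```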
NeuroImage | 2017
Claudia Roswandowitz; Stefanie Schelinski; Katharina von Kriegstein
Human voice recognition is critical for many aspects of social communication. Recently, a rare disorder, developmental phonagnosia, which describes the inability to recognise a speaker's voice, has been discovered. The underlying neural mechanisms are unknown. Here, we used two functional magnetic resonance imaging experiments to investigate brain function in two behaviourally well characterised phonagnosia cases, both 32 years old: AS has apperceptive and SP associative phonagnosia. We found distinct dysfunctional brain mechanisms in AS and SP matching their behavioural profiles. In apperceptive phonagnosia, right-hemispheric auditory voice-sensitive regions (i.e., Heschl's gyrus, planum temporale, superior temporal gyrus) showed lower responses than in matched controls (n = 16) for vocal versus non-vocal sounds and for speaker versus speech recognition. In associative phonagnosia, the connectivity between voice-sensitive (i.e., right posterior middle/inferior temporal gyrus) and supramodal (i.e., amygdala) regions was reduced in comparison to matched controls (n = 16) during speaker versus speech recognition. Additionally, both cases recruited distinct potential compensatory mechanisms. Our results support a central assumption of current two-system models of voice-identity processing: they provide the first evidence that dysfunction of voice-sensitive regions and impaired connectivity between voice-sensitive and supramodal person recognition regions can selectively contribute to deficits in person recognition by voice.
Research Seminars in Psychology and Cognitive Neuroscience (Colloquium) | 2018
Stefanie Schelinski
The correct perception of information carried by the voice is a key requirement for successful human communication. Hearing another person's voice provides information about who is speaking (voice identity), what is said (vocal speech) and the emotional state of a person (vocal emotion). Autism spectrum disorder (ASD) is associated with impaired voice identity and vocal emotion perception, while the perception of vocal speech is relatively intact. However, the underlying mechanisms of these voice perception impairments are unknown. For example, it is unclear at which processing stage voice perception difficulties occur, i.e. whether they are of an apperceptive or associative nature. It is further unclear whether impairments in voice identity processing in ASD are associated with dysfunction of voice-sensitive brain regions, whether they are dissociable from more general impairments in acoustic sound processing, such as music, or whether they are associated with deficits in recognising other person-related information, such as faces. Within the scope of my dissertation, we addressed these open research questions by systematically investigating voice perception in adults with high-functioning ASD and typically developed matched controls (matched pairwise on age, gender, and intellectual abilities). Our results allow, for the first time, a characterisation of the voice perception deficits in ASD. Results from a comprehensive behavioural test battery and two standard functional magnetic resonance imaging (fMRI) experiments that we used to localise voice-sensitive brain regions suggest that in high-functioning ASD: (i) difficulties in voice identity and vocal emotion recognition are of a perceptual nature, i.e. they occur due to difficulties with processing voice acoustic features, such as vocal pitch; (ii) difficulties in vocal pitch perception are dissociable from otherwise intact acoustic processing (i.e. hearing ability, musical pitch, musical timbre, and vocal timbre perception); difficulties in voice identity recognition are (iii) dissociable from intact speech recognition; (iv) associated with impaired face identity recognition; (v) on a neuronal level, associated with reduced functioning of brain regions associated with voice identity processing; and (vi) dissociable from a more general neuronal processing deficit of vocal sounds. Our results inform models of human communication and advance our understanding of basic mechanisms that might contribute to core symptoms in ASD, such as difficulties in communication.
NeuroImage: Clinical | 2018
Kamila Borowiak; Stefanie Schelinski; Katharina von Kriegstein
Speech information inherent in face movements is important for understanding what is said in face-to-face communication. Individuals with autism spectrum disorders (ASD) have difficulties in extracting speech information from face movements, a process called visual-speech recognition. Currently, it is unknown what dysfunctional brain regions or networks underlie the visual-speech recognition deficit in ASD. We conducted a functional magnetic resonance imaging (fMRI) study with concurrent eye tracking to investigate visual-speech recognition in adults diagnosed with high-functioning autism and pairwise matched typically developed controls. Compared to the control group (n = 17), the ASD group (n = 17) showed decreased Blood Oxygenation Level Dependent (BOLD) response during visual-speech recognition in the right visual area 5 (V5/MT) and left temporal visual speech area (TVSA) – brain regions implicated in visual-movement perception. The right V5/MT showed positive correlation with visual-speech task performance in the ASD group, but not in the control group. Psychophysiological interaction (PPI) analysis revealed that functional connectivity between the left TVSA and the bilateral V5/MT, and between the right V5/MT and the left inferior frontal gyrus (IFG), was lower in the ASD than in the control group. In contrast, responses in other speech-motor regions and their connectivity were on the neurotypical level. Reduced responses and network connectivity of the visual-movement regions, in conjunction with intact speech-related mechanisms, indicate that perceptual mechanisms might be at the core of the visual-speech recognition deficit in ASD. Communication deficits in ASD might at least partly stem from atypical sensory processing rather than from higher-order cognitive processing of socially relevant information.
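Psychophysiological interaction (PPI) analysis, named above, tests whether task context modulates the coupling between a seed region and a target region. A minimal numpy sketch of the regressor construction and GLM, using invented time series and omitting the deconvolution-to-neural-level step of a full PPI:

```python
import numpy as np

# Illustrative time series (one value per fMRI volume; invented data).
rng = np.random.default_rng(1)
n_vols = 200
seed_ts = rng.normal(size=n_vols)           # physiological: seed region (e.g. left TVSA)
task = np.repeat([1.0, -1.0], n_vols // 2)  # psychological: task vs control blocks
target_ts = rng.normal(size=n_vols)         # candidate connected region (e.g. right V5/MT)

# PPI regressor: the element-wise product of the mean-centred seed time
# course and the task regressor. (A full PPI first deconvolves the seed
# signal to the neural level; that step is omitted in this sketch.)
ppi = (seed_ts - seed_ts.mean()) * task

# GLM with intercept, task, seed, and interaction terms; the PPI beta
# indexes how task context modulates seed-target coupling.
X = np.column_stack([np.ones(n_vols), task, seed_ts, ppi])
betas, *_ = np.linalg.lstsq(X, target_ts, rcond=None)
print(f"PPI beta: {betas[3]:.3f}")
```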
Journal of Autism and Developmental Disorders | 2018
Stefanie Schelinski; Katharina von Kriegstein
We tested the relation between vocal emotion and vocal pitch perception abilities in adults with high-functioning autism spectrum disorder (ASD) and pairwise matched adults with typical development. The ASD group had impaired vocal pitch perception, but typical non-vocal pitch and vocal timbre perception abilities. The ASD group showed less accurate vocal emotion perception than the comparison group, and vocal emotion perception abilities were correlated with traits and symptoms associated with ASD. Vocal pitch and vocal emotion perception abilities were significantly correlated in the comparison group only. Our results suggest that vocal emotion recognition difficulties in ASD might not only be based on difficulties with complex social tasks, but also on difficulties with the processing of basic sensory features, such as vocal pitch.
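The group dissociation reported here (a pitch-emotion correlation present in the comparison group but not in ASD) can be tested formally by comparing the two correlations with a Fisher z-test. A sketch with hypothetical coefficients, not the values from the paper:

```python
import numpy as np

# Hypothetical group correlations between vocal pitch and vocal emotion
# perception scores (invented values for illustration).
r_control, n_control = 0.62, 16
r_asd, n_asd = 0.10, 16

# Fisher z-test for the difference between two independent correlations.
z1, z2 = np.arctanh(r_control), np.arctanh(r_asd)
se = np.sqrt(1 / (n_control - 3) + 1 / (n_asd - 3))
z = (z1 - z2) / se
print(f"z = {z:.2f}")  # |z| > 1.96 suggests the correlations differ at p < .05
```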