Publications

Featured research published by Katharina von Kriegstein.


Cognitive Brain Research | 2003

Modulation of neural responses to speech by directing attention to voices or verbal content

Katharina von Kriegstein; Evelyn Eger; Andreas Kleinschmidt; Anne-Lise Giraud

Using functional neuroimaging, we studied the cortical response to auditory sentences, comparing two recognition tasks that targeted either the speaker's voice or the verbal content. The right anterior superior temporal sulcus responded during the voice task but not during the verbal content task. This response was therefore specifically related to the analysis of nonverbal features of speech. However, the dissociation between verbal and nonverbal analysis was only partial. Left middle temporal regions previously implicated in semantic processing responded in both tasks. This indicates that implicit semantic processing occurred even when the task directed attention to nonverbal input analysis. The verbal task yielded greater bilateral activation in the fusiform/lingual region, presumably reflecting an implicit translation of auditory sentences into visual representations. This result confirms the participation of visual cortical regions in the verbal analysis of speech.


PLOS Biology | 2006

Implicit multisensory associations influence voice recognition

Katharina von Kriegstein; Anne-Lise Giraud

Natural objects provide partially redundant information to the brain through different sensory modalities. For example, voices and faces both give information about the speech content, age, and gender of a person. Thanks to this redundancy, multimodal recognition is fast, robust, and automatic. In unimodal perception, however, only part of the information about an object is available. Here, we addressed whether, even under conditions of unimodal sensory input, crossmodal neural circuits that have been shaped by previous associative learning become activated and underpin a performance benefit. We measured brain activity with functional magnetic resonance imaging before, while, and after participants learned to associate either sensory redundant stimuli, i.e., voices and faces, or arbitrary multimodal combinations, i.e., voices and written names, ring tones, and cell phones or brand names of these cell phones. After learning, participants were better at recognizing unimodal auditory voices that had been paired with faces than those paired with written names, and association of voices with faces resulted in an increased functional coupling between voice and face areas. No such effects were observed for ring tones that had been paired with cell phones or names. These findings demonstrate that brief exposure to ecologically valid and sensory redundant stimulus pairs, such as voices and faces, induces specific multisensory associations. Consistent with predictive coding theories, associative representations thereafter become available for unimodal perception and facilitate object recognition. These data suggest that for natural objects effective predictive signals can be generated across sensory systems and proceed by optimization of functional connectivity between specialized cortical sensory modules.


The Journal of Neuroscience | 2011

Direct structural connections between voice- and face-recognition areas

Helen Blank; Katharina von Kriegstein

Currently, there are two opposing models for how voice and face information is integrated in the human brain to recognize person identity. The conventional model assumes that voice and face information is only combined at a supramodal stage (Bruce and Young, 1986; Burton et al., 1990; Ellis et al., 1997). An alternative model posits that areas encoding voice and face information also interact directly and that this direct interaction is behaviorally relevant for optimizing person recognition (von Kriegstein et al., 2005; von Kriegstein and Giraud, 2006). To disambiguate between these two models, we tested for evidence of direct structural connections between voice- and face-processing cortical areas by combining functional and diffusion magnetic resonance imaging. We localized, at the individual subject level, three voice-sensitive areas in anterior, middle, and posterior superior temporal sulcus (STS) and face-sensitive areas in the fusiform gyrus [fusiform face area (FFA)]. Using probabilistic tractography, we show evidence that the FFA is structurally connected with voice-sensitive areas in STS. In particular, our results suggest that the FFA is more strongly connected to middle and anterior than to posterior areas of the voice-sensitive STS. This specific structural connectivity pattern indicates that direct links between face- and voice-recognition areas could be used to optimize human person recognition.


Proceedings of the National Academy of Sciences of the United States of America | 2008

Simulation of talking faces in the human brain improves auditory speech recognition

Katharina von Kriegstein; Özgür Dogan; Martina Grüter; Anne-Lise Giraud; Christian A. Kell; Thomas Grüter; Andreas Kleinschmidt; Stefan J. Kiebel

Human face-to-face communication is essentially audiovisual. Typically, people talk to us face-to-face, providing concurrent auditory and visual input. Understanding someone is easier when there is visual input, because visual cues like mouth and tongue movements provide complementary information about speech content. Here, we hypothesized that, even in the absence of visual input, the brain optimizes both auditory-only speech and speaker recognition by harvesting speaker-specific predictions and constraints from distinct visual face-processing areas. To test this hypothesis, we performed behavioral and neuroimaging experiments in two groups: subjects with a face recognition deficit (prosopagnosia) and matched controls. The results show that observing a specific person talking for 2 min improves subsequent auditory-only speech and speaker recognition for this person. In both prosopagnosics and controls, behavioral improvement in auditory-only speech recognition was based on an area typically involved in face-movement processing. Improvement in speaker recognition was only present in controls and was based on an area involved in face-identity processing. These findings challenge current unisensory models of speech processing, because they show that, in auditory-only speech, the brain exploits previously encoded audiovisual correlations to optimize communication. We suggest that this optimization is based on speaker-specific audiovisual internal models, which are used to simulate a talking face.


PLOS Computational Biology | 2009

Recognizing Sequences of Sequences

Stefan J. Kiebel; Katharina von Kriegstein; Jean Daunizeau; K. J. Friston

The brain's decoding of fast sensory streams is currently impossible to emulate, even approximately, with artificial agents. For example, robust speech recognition is relatively easy for humans but exceptionally difficult for artificial speech-recognition systems. In this paper, we propose that recognition can be simplified with an internal model of how sensory input is generated, when formulated in a Bayesian framework. We show that a plausible candidate for an internal or generative model is a hierarchy of 'stable heteroclinic channels'. This model describes continuous dynamics in the environment as a hierarchy of sequences, where slower sequences cause faster sequences. Under this model, online recognition corresponds to the dynamic decoding of causal sequences, giving a representation of the environment with predictive power on several timescales. We illustrate the ensuing decoding or recognition scheme using synthetic sequences of syllables, where syllables are sequences of phonemes and phonemes are sequences of sound-wave modulations. By presenting anomalous stimuli, we find that the resulting recognition dynamics disclose inference at multiple timescales and are reminiscent of neuronal dynamics seen in the real brain.
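The generative model named here, a hierarchy of 'stable heteroclinic channels', is built from generalized Lotka-Volterra dynamics in which saddle states are visited in sequence (so-called winnerless competition). The sketch below simulates a single such channel in Python; the parameter values are illustrative assumptions of our own, not the paper's, and the paper additionally stacks these systems so that a slow sequence controls a faster one.

    import numpy as np

    # Minimal sketch of one stable heteroclinic channel: a generalized
    # Lotka-Volterra system whose asymmetric inhibition makes activity
    # pass cyclically from state to state (winnerless competition).
    # All parameter values here are illustrative assumptions.
    n = 3                                 # number of states, e.g. three phonemes
    sigma = np.ones(n)                    # growth rates
    rho = np.ones((n, n))                 # inhibition matrix, diagonal = 1
    for i in range(n):
        rho[(i + 1) % n, i] = 0.5         # the next state escapes, weakly inhibited
        rho[i, (i + 1) % n] = 1.8         # ...while strongly suppressing its predecessor

    dt, steps = 0.05, 4000
    x = np.full(n, 0.3) + 0.01 * np.random.rand(n)
    winners = []
    for t in range(steps):
        # small noise keeps passage times finite, turning the fragile
        # heteroclinic cycle into a robust channel
        x += dt * x * (sigma - rho @ x) + 1e-5 * np.random.randn(n)
        x = np.clip(x, 1e-9, None)        # keep the state positive
        winners.append(int(np.argmax(x)))

    print(winners[::400])                 # the dominant state cycles 0 -> 1 -> 2 -> 0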


The Journal of Neuroscience | 2005

The sensory cortical representation of the human penis: revisiting somatotopy in the male homunculus

Christian A. Kell; Katharina von Kriegstein; Alexander Rösler; Andreas Kleinschmidt; Helmut Laufs

Pioneering mapping studies of the human cortex have established the notion of somatotopy in sensory representation, culminating in Penfield and Rasmussen's famous sensory homunculus diagram. However, regarding the primary cortical representation of the genitals, classical and modern findings appear to be at odds with the principle of somatotopy, often assigning it to the cortex on the mesial wall. Using functional neuroimaging, we established a mediolateral sequence of somatosensory foot, penis, and lower abdominal wall representations on the contralateral postcentral gyrus in primary sensory cortex, and a bilateral secondary somatosensory representation in the parietal operculum.


The Journal of Neuroscience | 2012

Features versus Feelings: Dissociable Representations of the Acoustic Features and Valence of Aversive Sounds

Sukhbinder Kumar; Katharina von Kriegstein; K. J. Friston; Timothy D. Griffiths

This study addresses the neuronal representation of aversive sounds that are perceived as unpleasant. Functional magnetic resonance imaging in humans demonstrated responses in the amygdala and auditory cortex to aversive sounds. We show that the amygdala encodes both the acoustic features of a stimulus and its valence (perceived unpleasantness). Dynamic causal modeling of this system revealed that evoked responses to sounds are relayed to the amygdala via the auditory cortex. While acoustic features modulate the effective connectivity from the auditory cortex to the amygdala, valence modulates the effective connectivity from the amygdala to the auditory cortex. These results support a complex (recurrent) interaction between the auditory cortex and the amygdala, based on object-level analysis in the auditory cortex that portends the assignment of emotional valence in the amygdala, which in turn influences the representation of salient information in the auditory cortex.


The Journal of Neuroscience | 2008

Encoding of Spectral Correlation over Time in Auditory Cortex

Tobias Overath; Sukhbinder Kumar; Katharina von Kriegstein; Timothy D. Griffiths

Natural sounds contain multiple spectral components that vary over time. The degree of variation can be characterized in terms of correlation between successive time frames of the spectrum, or as a time window within which any two frames show a minimum degree of correlation: the greater the correlation of the spectrum between successive time frames, the longer the time window. Recent studies suggest differences in the encoding of shorter and longer time windows in left and right auditory cortex, respectively. The present functional magnetic resonance imaging study assessed brain activation in response to the systematic variation of the time window in complex spectra that are more similar to natural sounds than those used in previous studies. The data show bilateral activity in the planum temporale and anterior superior temporal gyrus as a function of increasing time windows, as well as activity in the superior temporal sulcus that was significantly lateralized to the right. The results suggest a coexistence of hierarchical and lateralization schemes for representing increasing time windows in auditory association cortex.
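The key statistic here, the correlation of the spectrum between successive time frames, is easy to make concrete. The following sketch is our own illustration (not the study's stimulus-generation code, and the 20 ms frame length is an assumption): it computes the mean frame-to-frame spectral correlation of a sound, where white noise scores near 0 (a short time window) and a steady tone near 1 (a long time window).

    import numpy as np

    # Mean correlation of the magnitude spectrum between successive time frames.
    def mean_frame_correlation(sound, fs, frame_len=0.02):
        hop = int(frame_len * fs)
        frames = [sound[i:i + hop] for i in range(0, len(sound) - hop, hop)]
        spectra = np.array([np.abs(np.fft.rfft(f)) for f in frames])
        r = [np.corrcoef(spectra[i], spectra[i + 1])[0, 1]
             for i in range(len(spectra) - 1)]
        return float(np.mean(r))

    fs = 16000
    noise = np.random.randn(fs)                  # 1 s of white noise
    t = np.arange(fs) / fs
    tone = np.sin(2 * np.pi * 440 * t)           # 1 s steady 440 Hz tone
    print(mean_frame_correlation(noise, fs))     # near 0: short time window
    print(mean_frame_correlation(tone, fs))      # near 1: long time window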


Nature Neuroscience | 2006

Representation of interaural time delay in the human auditory midbrain

Sarah Thompson; Katharina von Kriegstein; Adenike Deane-Pratt; Torsten Marquardt; Ralf Deichmann; Timothy D. Griffiths; David McAlpine

Interaural time difference (ITD) is a critical cue to sound-source localization. Traditional models assume that sounds leading at one ear, and perceived on that side, are processed in the opposite midbrain. Using functional magnetic resonance imaging, we demonstrate that as the ITDs of sounds increase, midbrain activity can switch sides, even though perceived location remains on the same side. The data require a new model for human ITD processing.
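The ITD cue itself is a sub-millisecond difference in arrival time between the two ears. As a hedged illustration of the cue (not of the paper's midbrain model), the following sketch imposes an ITD on a noise burst and recovers it by cross-correlating the two ear signals:

    import numpy as np

    fs = 48000
    itd = 300e-6                          # impose a 300-microsecond ITD
    shift = int(round(itd * fs))          # delay in samples (about 14)
    src = np.random.randn(fs // 10)       # 100 ms noise burst as the source
    right = src                           # right ear leads...
    left = np.concatenate([np.zeros(shift), src])[:len(src)]  # ...left ear lags

    # the lag maximizing the interaural cross-correlation estimates the ITD
    xcorr = np.correlate(left, right, mode='full')
    lag = int(np.argmax(xcorr)) - (len(right) - 1)
    print(lag / fs * 1e6, "microseconds")  # close to +300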


PLOS Biology | 2007

An information theoretic characterisation of auditory encoding

Tobias Overath; Rhodri Cusack; Sukhbinder Kumar; Katharina von Kriegstein; Jason D. Warren; Manon Grube; Robert P. Carlyon; Timothy D. Griffiths

The entropy metric derived from information theory provides a means to quantify the amount of information transmitted in acoustic streams such as speech or music. By systematically varying the entropy of pitch sequences, we sought brain areas where neural activity and energetic demands increase as a function of entropy. Such a relationship is predicted to occur in an efficient encoding mechanism that uses less computational resource when less information is present in the signal: we specifically tested the hypothesis that such a relationship is present in the planum temporale (PT). In two convergent functional MRI studies, we demonstrated this relationship in PT for encoding, and further showed that a distributed fronto-parietal network for the retrieval of acoustic information is independent of entropy. The results establish PT as an efficient neural engine that demands less computational resource to encode redundant signals than those with high information content.
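The entropy metric in question is Shannon's H = -sum(p * log2 p). As a minimal worked example, the sketch below applies a zeroth-order version of our own devising to two pitch sequences (the study manipulated its sequence statistics more systematically): a repetitive sequence carries little information per note, a sequence of distinct pitches carries much more.

    import numpy as np

    # Shannon entropy, in bits per note, of the pitch distribution of a sequence.
    def entropy_bits(pitches):
        _, counts = np.unique(pitches, return_counts=True)
        p = counts / counts.sum()
        return float(-(p * np.log2(p)).sum())

    low = [440, 440, 440, 440, 494, 440, 440, 440]     # mostly one pitch
    high = [440, 494, 523, 587, 659, 698, 784, 880]    # eight distinct pitches
    print(entropy_bits(low))    # ~0.54 bits per note: highly redundant
    print(entropy_bits(high))   # 3.0 bits per note: maximal for 8 pitches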

Collaboration


Dive into Katharina von Kriegstein's collaborations.

Top Co-Authors

Stefan J. Kiebel

Dresden University of Technology
