
Publication


Featured research published by Jonas Obleser.


The Journal of Neuroscience | 2007

Functional integration across brain regions improves speech perception under adverse listening conditions

Jonas Obleser; Richard Wise; M. Alex Dresner; Sophie K. Scott

Speech perception is supported by both acoustic signal decomposition and semantic context. This study, using event-related functional magnetic resonance imaging, investigated the neural basis of this interaction with two speech manipulations, one acoustic (spectral degradation) and the other cognitive (semantic predictability). High compared with low predictability resulted in the greatest improvement in comprehension at an intermediate level of degradation, and this was associated with increased activity in the left angular gyrus, the medial and left lateral prefrontal cortices, and the posterior cingulate gyrus. Functional connectivity between these regions was also increased, particularly with respect to the left angular gyrus. In contrast, activity in both superior temporal sulci and the left inferior frontal gyrus correlated with the amount of spectral detail in the speech signal, regardless of predictability. These results demonstrate that increasing functional connectivity between high-order cortical areas, remote from the auditory cortex, facilitates speech comprehension when the clarity of speech is reduced.


Human Brain Mapping | 2006

Vowel sound extraction in anterior superior temporal cortex

Jonas Obleser; Henning Boecker; Alexander Drzezga; Bernhard Haslinger; Andreas Hennenlotter; Michael Roettinger; Carsten Eulitz; Josef P. Rauschecker

We investigated the functional neuroanatomy of vowel processing. We compared attentive auditory perception of natural German vowels to perception of nonspeech band-passed noise stimuli using functional magnetic resonance imaging (fMRI). More specifically, the mapping in auditory cortex of first and second formants was considered, which spectrally characterize vowels and are linked closely to phonological features. Multiple exemplars of natural German vowels were presented in sequences alternating either mainly along the first formant (e.g., [u]-[o], [i]-[e]) or along the second formant (e.g., [u]-[i], [o]-[e]). In fixed-effects and random-effects analyses, vowel sequences elicited more activation than did nonspeech noise in the anterior superior temporal cortex (aST) bilaterally. Partial segregation of different vowel categories was observed within the activated regions, suggestive of a speech sound mapping across the cortical surface. Our results add to the growing evidence that speech sounds, as one of the behaviorally most relevant classes of auditory objects, are analyzed and categorized in aST. These findings also support the notion of an auditory “what” stream, with highly object-specialized areas anterior to primary auditory cortex.


Proceedings of the National Academy of Sciences of the United States of America | 2012

Frequency modulation entrains slow neural oscillations and optimizes human listening behavior

Molly J. Henry; Jonas Obleser

The human ability to continuously track dynamic environmental stimuli, in particular speech, is proposed to profit from “entrainment” of endogenous neural oscillations, which involves phase reorganization such that “optimal” phase comes into line with temporally expected critical events, resulting in improved processing. The current experiment goes beyond previous work in this domain by addressing two thus far unanswered questions. First, how general is neural entrainment to environmental rhythms: Can neural oscillations be entrained by temporal dynamics of ongoing rhythmic stimuli without abrupt onsets? Second, does neural entrainment optimize performance of the perceptual system: Does human auditory perception benefit from neural phase reorganization? In a human electroencephalography study, listeners detected short gaps distributed uniformly with respect to the phase angle of a 3-Hz frequency-modulated stimulus. Listeners’ ability to detect gaps in the frequency-modulated sound was not uniformly distributed in time, but clustered in certain preferred phases of the modulation. Moreover, the optimal stimulus phase was individually determined by the neural delta oscillation entrained by the stimulus. Finally, delta phase predicted behavior better than stimulus phase or the event-related potential after the gap. This study demonstrates behavioral benefits of phase realignment in response to frequency-modulated auditory stimuli, overall suggesting that frequency fluctuations in natural environmental input provide a pacing signal for endogenous neural oscillations, thereby influencing perceptual processing.
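The core analysis described above — relating gap-detection performance to the phase of an entrained slow oscillation — can be sketched in a few lines. The following is an illustrative reconstruction, not the authors' pipeline: it uses synthetic data, hypothetical gap times and hit/miss outcomes, and a standard Hilbert-transform phase estimate.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

# Illustrative sketch: estimate the phase of a 3-Hz ("delta-band")
# oscillation via band-pass filtering and the Hilbert transform, then
# bin detection hits by the phase at which each gap occurred.
fs = 500.0                                  # sampling rate in Hz (assumed)
t = np.arange(0, 10, 1 / fs)
trace = np.sin(2 * np.pi * 3 * t)           # stand-in for an entrained EEG trace

# Band-pass around 3 Hz (2-4 Hz) and extract instantaneous phase.
b, a = butter(2, [2 / (fs / 2), 4 / (fs / 2)], btype="band")
phase = np.angle(hilbert(filtfilt(b, a, trace)))

# Hypothetical gap times and hit/miss outcomes; in the study these
# would come from the behavioral record.
gap_times = np.array([1.0, 2.1, 3.3, 4.6, 5.2, 6.8, 7.9, 9.1])
hits = np.array([1, 0, 1, 1, 0, 1, 0, 1])

# Bin hit rate by the oscillatory phase at each gap onset.
gap_phase = phase[(gap_times * fs).astype(int)]
bins = np.linspace(-np.pi, np.pi, 7)
idx = np.digitize(gap_phase, bins) - 1
hit_rate = [hits[idx == k].mean() if np.any(idx == k) else np.nan
            for k in range(len(bins) - 1)]
```

A non-uniform hit-rate profile across phase bins would correspond to the clustering of detection performance in preferred phases reported in the abstract.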


The Journal of Neuroscience | 2008

Bilateral Speech Comprehension Reflects Differential Sensitivity to Spectral and Temporal Features

Jonas Obleser; Frank Eisner; Sonja A. Kotz

Speech comprehension has been shown to be a strikingly bilateral process, but the differential contributions of the subfields of left and right auditory cortices have remained elusive. The hypothesis that left auditory areas engage predominantly in decoding fast temporal perturbations of a signal whereas the right areas are relatively more driven by changes of the frequency spectrum has not been directly tested in speech or music. This brain-imaging study independently manipulated the speech signal itself along the spectral and the temporal domain using noise-band vocoding. In a parametric design with five temporal and five spectral degradation levels in word comprehension, a functional distinction of the left and right auditory association cortices emerged: increases in the temporal detail of the signal were most effective in driving brain activation of the left anterolateral superior temporal sulcus (STS), whereas the right homolog areas exhibited stronger sensitivity to the variations in spectral detail. In accordance with behavioral measures of speech comprehension acquired in parallel, change of spectral detail exhibited a stronger coupling with the STS BOLD signal. The relative pattern of lateralization (quantified using lateralization quotients) proved reliable in a jack-knifed iterative reanalysis of the group functional magnetic resonance imaging model. This study supplies direct evidence to the often implied functional distinction of the two cerebral hemispheres in speech processing. Applying direct manipulations to the speech signal rather than to low-level surrogates, the results lend plausibility to the notion of complementary roles for the left and right superior temporal sulci in comprehending the speech signal.
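Noise-band vocoding, the degradation method named above, lets spectral and temporal detail be manipulated independently: the number of frequency bands controls spectral resolution, and the cutoff of the envelope low-pass filter controls temporal resolution. A minimal sketch, assuming arbitrary band edges and filter orders (not the study's actual parameters):

```python
import numpy as np
from scipy.signal import butter, filtfilt

def vocode(x, fs, n_bands=5, env_cutoff=16.0):
    """Minimal noise-band vocoder sketch (illustrative only).

    Spectral detail is set by n_bands, temporal detail by env_cutoff:
    the signal is split into frequency bands, each band's amplitude
    envelope is low-pass filtered and used to modulate band-limited noise.
    """
    edges = np.geomspace(100.0, 8000.0, n_bands + 1)   # assumed analysis range
    lp_b, lp_a = butter(2, env_cutoff / (fs / 2))
    out = np.zeros_like(x)
    rng = np.random.default_rng(0)
    for lo, hi in zip(edges[:-1], edges[1:]):
        b, a = butter(2, [lo / (fs / 2), hi / (fs / 2)], btype="band")
        band = filtfilt(b, a, x)                        # analysis band
        env = filtfilt(lp_b, lp_a, np.abs(band))        # smoothed envelope
        carrier = filtfilt(b, a, rng.standard_normal(len(x)))
        out += np.clip(env, 0, None) * carrier          # envelope-modulated noise
    return out
```

Sweeping `n_bands` and `env_cutoff` over five levels each would yield the 5 x 5 parametric spectral/temporal degradation design the abstract describes.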


Frontiers in Psychology | 2011

Alpha rhythms in audition: Cognitive and clinical perspectives

Thomas Hartmann; Nadia Müller; Isabel Lorenz; Jonas Obleser

Like the visual and the sensorimotor systems, the auditory system exhibits pronounced alpha-like resting oscillatory activity. Due to the relatively small spatial extent of auditory cortical areas, this rhythmic activity is less obvious and frequently masked by non-auditory alpha-generators when recording non-invasively using magnetoencephalography (MEG) or electroencephalography (EEG). Following stimulation with sounds, marked desynchronizations can be observed between 6 and 12 Hz, which can be localized to the auditory cortex. However, knowledge about the functional relevance of the auditory alpha rhythm has remained scarce. Results from the visual and sensorimotor system have fuelled the hypothesis of alpha activity reflecting a state of functional inhibition. The current article pursues several intentions: (1) First, we review and present our own evidence (MEG, EEG, sEEG) for the existence of an auditory alpha-like rhythm independent of visual or motor generators, something that is occasionally met with skepticism. (2) In a second part we discuss tinnitus and how this audiological symptom may relate to reduced background alpha. The clinical part gives an introduction to a method that aims to modulate the neurophysiological activity hypothesized to underlie this distressing disorder. Using neurofeedback, one is able to directly target relevant oscillatory activity. Preliminary data point to a high potential of this approach for treating tinnitus. (3) Finally, in a cognitive neuroscientific part we show that auditory alpha is modulated by anticipation/expectations with and without auditory stimulation. We also introduce ideas and initial evidence that alpha oscillations are involved in the most complex capability of the auditory system, namely speech perception. The evidence presented in this article corroborates findings from other modalities, indicating that alpha-like activity has a universal inhibitory role across sensory modalities.


Journal of Cognitive Neuroscience | 2004

Magnetic Brain Response Mirrors Extraction of Phonological Features from Spoken Vowels

Jonas Obleser; Aditi Lahiri; Carsten Eulitz

This study further elucidates determinants of vowel perception in the human auditory cortex. The vowel inventory of a given language can be classified on the basis of phonological features which are closely linked to acoustic properties. A cortical representation of speech sounds based on these phonological features might explain the surprisingly inverse correlation between immense variance in the acoustic signal and high accuracy of speech recognition. We investigated timing and mapping of the N100m elicited by 42 tokens of seven natural German vowels varying along the phonological features tongue height (corresponding to the frequency of the first formant) and place of articulation (corresponding to the frequency of the second and third formants). Auditory-evoked fields were recorded using a 148-channel whole-head magnetometer while subjects performed target vowel detection tasks. Source location differences appeared to be driven by place of articulation: Vowels with mutually exclusive place of articulation features, namely, coronal and dorsal, elicited separate centers of activation along the posterior-anterior axis. Additionally, the time course of activation as reflected in the N100m peak latency distinguished between vowel categories, especially when the spatial distinctiveness of cortical activation was low. In sum, results suggest that both N100m latency and source location as well as their interaction reflect properties of speech stimuli that correspond to abstract phonological features.


Trends in Cognitive Sciences | 2009

Pre-lexical abstraction of speech in the auditory cortex

Jonas Obleser; Frank Eisner

Speech perception requires the decoding of complex acoustic patterns. According to most cognitive models of spoken word recognition, this complexity is dealt with before lexical access via a process of abstraction from the acoustic signal to pre-lexical categories. It is currently unclear how these categories are implemented in the auditory cortex. Recent advances in animal neurophysiology and human functional imaging have made it possible to investigate the processing of speech in terms of probabilistic cortical maps rather than simple cognitive subtraction, which will enable us to relate neurometric data more directly to behavioural studies. We suggest that integration of insights from cognitive science, neurophysiology and functional imaging is necessary for furthering our understanding of pre-lexical abstraction in the cortex.


Human Brain Mapping | 2009

Disentangling syntax and intelligibility in auditory language comprehension

Angela D. Friederici; Sonja A. Kotz; Sophie K. Scott; Jonas Obleser

Studies of the neural basis of spoken language comprehension typically focus on aspects of auditory processing by varying signal intelligibility, or on higher-level aspects of language processing such as syntax. Most studies in either of these threads of language research report brain activation including peaks in the superior temporal gyrus (STG) and/or the superior temporal sulcus (STS), but it is not clear why these areas are recruited in functionally different studies. The current fMRI study aims to disentangle the functional neuroanatomy of intelligibility and syntax in an orthogonal design. The data substantiate functional dissociations between STS and STG in the left and right hemispheres: first, manipulations of speech intelligibility yield bilateral mid-anterior STS peak activation, whereas syntactic phrase structure violations elicit strongly left-lateralized mid STG and posterior STS activation. Second, ROI analyses indicate all interactions of speech intelligibility and syntactic correctness to be located in the left frontal and temporal cortex, while the observed right-hemispheric activations reflect less specific responses to intelligibility and syntax. Our data demonstrate that the mid-to-anterior STS activation is associated with increasing speech intelligibility, while the mid-to-posterior STG/STS is more sensitive to syntactic information within the speech.


NeuroImage | 2010

Integration of iconic gestures and speech in left superior temporal areas boosts speech comprehension under adverse listening conditions

Henning Holle; Jonas Obleser; Shirley-Ann Rueschemeyer; Thomas C. Gunter

Iconic gestures are spontaneous hand movements that illustrate certain contents of speech and, as such, are an important part of face-to-face communication. This experiment targets the brain bases of how iconic gestures and speech are integrated during comprehension. Areas of integration were identified on the basis of two classic properties of multimodal integration, bimodal enhancement and inverse effectiveness (i.e., greater enhancement for unimodally least effective stimuli). Participants underwent fMRI while being presented with videos of gesture-supported sentences as well as their unimodal components, which allowed us to identify areas showing bimodal enhancement. Additionally, we manipulated the signal-to-noise ratio of speech (either moderate or good) to probe for integration areas exhibiting the inverse effectiveness property. Bimodal enhancement was found at the posterior end of the superior temporal sulcus and adjacent superior temporal gyrus (pSTS/STG) in both hemispheres, indicating that the integration of iconic gestures and speech takes place in these areas. Furthermore, we found that the left pSTS/STG specifically showed a pattern of inverse effectiveness, i.e., the neural enhancement for bimodal stimulation was greater under adverse listening conditions. This indicates that activity in this area is boosted when an iconic gesture accompanies an utterance that is otherwise difficult to comprehend. The neural response paralleled the behavioral data observed. The present data extend results from previous gesture-speech integration studies in showing that pSTS/STG plays a key role in the facilitation of speech comprehension through simultaneous gestural input.


The Journal of Neuroscience | 2012

Adverse Listening Conditions and Memory Load Drive a Common Alpha Oscillatory Network

Jonas Obleser; Malte Wöstmann; Nele Hellbernd; Anna Wilsch; Burkhard Maess

How does acoustic degradation affect the neural mechanisms of working memory? Enhanced alpha oscillations (8–13 Hz) during retention of items in working memory are often interpreted to reflect increased demands on storage and inhibition. We hypothesized that auditory signal degradation poses an additional challenge to human listeners partly because it draws on the same neural mechanisms. In an adapted Sternberg paradigm, auditory memory load and acoustic degradation were parametrically varied and the magnetoencephalographic response was analyzed in the time–frequency domain. Notably, during the stimulus-free delay interval, alpha power monotonically increased at central–parietal sensors as functions of memory load (higher alpha power with more memory load) and of acoustic degradation (also higher alpha power with more severe acoustic degradation). This alpha effect was superadditive when highest load was combined with most severe degradation. Moreover, alpha oscillatory dynamics during stimulus-free delay were predictive of response times to the probe item. Source localization of alpha power during stimulus-free delay indicated that alpha generators in right parietal, cingulate, supramarginal, and superior temporal cortex were sensitive to combined memory load and acoustic degradation. In summary, both challenges of memory load and acoustic degradation increase activity in a common alpha-frequency network. The results set the stage for future studies on how chronic or acute degradations of sensory input affect mechanisms of executive control.
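The alpha-power measure central to this study — oscillatory power in the 8-13 Hz band during the retention interval — is conventionally obtained by time-frequency decomposition. A minimal sketch using a complex Morlet wavelet, assuming arbitrary parameters rather than the study's actual MEG analysis settings:

```python
import numpy as np

def alpha_power(signal, fs, freq=10.0, n_cycles=7):
    """Sketch of single-frequency Morlet-wavelet power (illustrative only):
    convolve the trace with a complex Morlet wavelet at `freq` Hz and take
    the squared magnitude, giving instantaneous alpha-band power."""
    sigma = n_cycles / (2 * np.pi * freq)              # wavelet width in seconds
    t = np.arange(-4 * sigma, 4 * sigma, 1 / fs)
    wavelet = np.exp(2j * np.pi * freq * t) * np.exp(-t**2 / (2 * sigma**2))
    wavelet /= np.sqrt(np.sum(np.abs(wavelet) ** 2))   # unit-energy normalization
    analytic = np.convolve(signal, wavelet, mode="same")
    return np.abs(analytic) ** 2
```

Averaging this power trace over the stimulus-free delay interval, per trial and condition, would yield the load- and degradation-dependent alpha measure the abstract reports.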

Collaboration


Dive into Jonas Obleser's collaboration.

Top Co-Authors
Björn Herrmann

University of Western Ontario
