Mathias Scharinger
Max Planck Society
Publications
Featured research published by Mathias Scharinger.
Journal of Cognitive Neuroscience | 2011
Mathias Scharinger; William J. Idsardi; Samantha Poe
Mammalian cortex is known to contain various kinds of spatial encoding schemes for sensory information including retinotopic, somatosensory, and tonotopic maps. Tonotopic maps are especially interesting for human speech sound processing because they encode linguistically salient acoustic properties. In this study, we mapped the entire vowel space of a language (Turkish) onto cortical locations by using the magnetic N1 (M100), an auditory-evoked component that peaks approximately 100 msec after auditory stimulus onset. We found that dipole locations could be structured into two distinct maps, one for vowels produced with the tongue positioned toward the front of the mouth (front vowels) and one for vowels produced in the back of the mouth (back vowels). Furthermore, we found spatial gradients in lateral–medial, anterior–posterior, and inferior–superior dimensions that encoded the phonetic, categorical distinctions between all the vowels of Turkish. Statistical model comparisons of the dipole locations suggest that the spatial encoding scheme is not entirely based on acoustic bottom–up information but crucially involves featural–phonetic top–down modulation. Thus, multiple areas of excitation along the unidimensional basilar membrane are mapped into higher dimensional representations in auditory cortex.
The Journal of Neuroscience | 2015
Antje Strauss; Molly J. Henry; Mathias Scharinger; Jonas Obleser
Psychophysical target detection has been shown to be modulated by slow oscillatory brain phase. However, thus far, only low-level sensory stimuli have been used as targets. The current human electroencephalography (EEG) study examined the influence of neural oscillatory phase on a lexical-decision task performed for stimuli embedded in noise. Neural phase angles were compared for correct versus incorrect lexical decisions using a phase bifurcation index (BI), which quantifies differences in mean phase angles and phase concentrations between correct and incorrect trials. Neural phase angles in the alpha frequency range (8–12 Hz) over right anterior sensors were approximately antiphase in a prestimulus time window, and thus successfully distinguished between correct and incorrect lexical decisions. Moreover, alpha-band oscillations were again approximately antiphase across participants for correct versus incorrect trials during a later peristimulus time window (∼500 ms) at left-central electrodes. Strikingly, lexical decision accuracy was not predicted by either event-related potentials (ERPs) or oscillatory power measures. We suggest that correct lexical decisions depend both on successful sensory processing, which is made possible by the alignment of stimulus onset with an optimal alpha phase, as well as integration and weighting of decisional information, which is coupled to alpha phase immediately following the critical manipulation that differentiated words from pseudowords. The current study constitutes a first step toward characterizing the role of dynamic oscillatory brain states for higher cognitive functions, such as spoken word recognition.
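The phase bifurcation index at the heart of this analysis can be illustrated compactly. The sketch below is a minimal stdlib-Python illustration of the idea (following the formulation the BI is usually attributed to, Busch et al., 2009), not the authors' analysis code, and the phase angles are invented for illustration: the BI multiplies the deviations of each trial group's inter-trial phase coherence (ITC) from the ITC of the pooled trials, so it becomes positive when correct and incorrect trials lock to roughly opposite phases.

```python
import cmath
import math

def itc(phases):
    """Inter-trial phase coherence: length of the mean resultant vector."""
    return abs(sum(cmath.exp(1j * p) for p in phases) / len(phases))

def bifurcation_index(phases_correct, phases_incorrect):
    """Phase bifurcation index: positive when the two trial groups
    lock to roughly opposite phases (pooled ITC collapses while the
    per-group ITCs stay high)."""
    itc_all = itc(phases_correct + phases_incorrect)
    return (itc(phases_correct) - itc_all) * (itc(phases_incorrect) - itc_all)

# Invented antiphase example: correct trials near 0 rad, incorrect near pi.
correct = [0.0, 0.1, -0.1, 0.05]
incorrect = [math.pi, math.pi - 0.1, math.pi + 0.1, math.pi + 0.05]
print(bifurcation_index(correct, incorrect))  # antiphase groups -> large positive BI
```

When both groups cluster at the same phase, the pooled ITC is as high as the per-group ITCs and the index collapses toward zero, which is what makes it selective for the antiphase pattern the study reports.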
PLOS ONE | 2012
Mathias Scharinger; Alexandra Bendixen; Nelson J. Trujillo-Barreto; Jonas Obleser
The precise neural mechanisms underlying speech sound representations are still a matter of debate. Proponents of ‘sparse representations’ assume that on the level of speech sounds, only contrastive or otherwise not predictable information is stored in long-term memory. Here, in a passive oddball paradigm, we challenge the neural foundations of such a ‘sparse’ representation; we use words that differ only in their penultimate consonant (“coronal” [t] vs. “dorsal” [k] place of articulation) and for example distinguish between the German nouns Latz ([lats]; bib) and Lachs ([laks]; salmon). Changes from standard [t] to deviant [k] and vice versa elicited a discernible Mismatch Negativity (MMN) response. Crucially, however, the MMN for the deviant [lats] was stronger than the MMN for the deviant [laks]. Source localization showed this difference to be due to enhanced brain activity in right superior temporal cortex. These findings reflect a difference in phonological ‘sparsity’: Coronal [t] segments, but not dorsal [k] segments, are based on more sparse representations and elicit less specific neural predictions; sensory deviations from this prediction are more readily ‘tolerated’ and accordingly trigger weaker MMNs. The results foster the neurocomputational reality of ‘representationally sparse’ models of speech perception that are compatible with more general predictive mechanisms in auditory perception.
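The MMN contrast described above rests on a conventional deviant-minus-standard subtraction of averaged ERPs. The following is a toy sketch of that subtraction with invented amplitude values (not the study's data), showing how one deviant can come out with a stronger, i.e. more negative, MMN than the other:

```python
def mmn_difference_wave(deviant_erp, standard_erp):
    """Mismatch negativity as the deviant-minus-standard difference wave,
    computed sample by sample over the averaged epochs."""
    return [d - s for d, s in zip(deviant_erp, standard_erp)]

def peak_amplitude(wave, t_start_ms, t_end_ms, sample_ms=1):
    """Most negative value in a latency window (the MMN typically
    peaks around 100-250 ms after the deviance point)."""
    return min(wave[t_start_ms // sample_ms : t_end_ms // sample_ms])

# Toy averaged ERPs (one sample per ms); amplitudes are illustrative only.
standard = [0.0] * 300
deviant_lats = [0.0] * 150 + [-2.0] * 50 + [0.0] * 100  # stronger MMN
deviant_laks = [0.0] * 150 + [-1.2] * 50 + [0.0] * 100  # weaker MMN

mmn_lats = mmn_difference_wave(deviant_lats, standard)
mmn_laks = mmn_difference_wave(deviant_laks, standard)
# More negative peak for the deviant that violates the more specific prediction:
print(peak_amplitude(mmn_lats, 100, 250) < peak_amplitude(mmn_laks, 100, 250))  # -> True
```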
Speech Communication | 2011
Frank Zimmerer; Mathias Scharinger; Henning Reetz
The deletion and reduction of alveolar /t/ is a phenomenon that has been given considerable attention in the research on speech production and perception. Data have mainly been drawn from spoken language corpora, where a tight control over contributing factors of /t/-deletion is hardly possible. Here, we present a new way of creating a spoken language corpus adhering to some crucial factors we wanted to hold constant for the investigation of word-final /t/-deletion in German. German is especially interesting with regard to /t/-deletion due to its rich suffixal morphology, attributing morphological status to word-final /t/ in many paradigms. We focused on verb inflection and employed a verb form production task for creating a concise corpus of naturally spoken language in which we could control for factors previously established to affect /t/-deletion. We then determined the best estimators for /t/-productions (i.e. canonical, deleted, or reduced) in our corpus. The influence of extra-linguistic factors was comparable to previous studies. We suggest that our method of constructing a natural language corpus with carefully selected characteristics is a viable way for the examination of deletions and reductions during speech production. Furthermore, we found that the best predictor for non-canonical productions and deletions was the following phonological context.
Brain and Language | 2011
Mathias Scharinger; Jennifer Merickel; Joshua Riley; William J. Idsardi
Speech sounds can be classified on the basis of their underlying articulators or on the basis of the acoustic characteristics resulting from particular articulatory positions. Research in speech perception suggests that distinctive features are based on both articulatory and acoustic information. In recent years, neuroelectric and neuromagnetic investigations provided evidence for the brain's early sensitivity to distinctive features and their acoustic consequences, particularly for place of articulation distinctions. Here, we compare English consonants in a Mismatch Field design across two broad and distinct places of articulation - labial and coronal - and provide further evidence that early evoked auditory responses are sensitive to these features. We further add to the findings of asymmetric consonant processing, although we do not find support for coronal underspecification. Labial glides (Experiment 1) and fricatives (Experiment 2) elicited larger Mismatch responses than their coronal counterparts. Interestingly, their M100 dipoles differed along the anterior/posterior dimension in the auditory cortex that has previously been found to spatially reflect place of articulation differences. Our results are discussed with respect to acoustic and articulatory bases of featural speech sound classifications and with respect to a model that maps distinctive phonetic features onto long-term representations of speech sounds.
Hearing Research | 2013
Björn Herrmann; Molly J. Henry; Mathias Scharinger; Jonas Obleser
Spectral analysis of acoustic stimuli occurs in the auditory periphery (termed frequency selectivity) as well as at the level of auditory cortex (termed frequency specificity). Frequency selectivity is commonly investigated using an auditory filter model, while frequency specificity is often investigated as neural adaptation of the N1 response in electroencephalography (EEG). However, the effects of aging on frequency-specific adaptation, and the link between peripheral frequency selectivity and neural frequency specificity have not received much attention. Here, normal hearing younger (20-31 years) and older participants (49-63 years) underwent a psychophysical notched noise experiment to estimate individual auditory filters, and an EEG experiment to investigate frequency-specific adaptation in auditory cortex. The shape of auditory filters was comparable between age groups, and thus shows intact frequency selectivity in normal aging. In auditory cortex, both groups showed N1 frequency-specific neural adaptation effects that similarly varied with the spectral variance in the stimulation, while N1 responses were overall larger for older than younger participants. Importantly, the overall N1 amplitude, but not frequency-specific neural adaptation was correlated with the pass-band of the auditory filter. Thus, the current findings show a dissociation of peripheral frequency selectivity and neural frequency specificity, but suggest that widened auditory filters are compensated for by a response gain in frequency-specific areas of auditory cortex.
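The auditory filter estimated from notched-noise data is conventionally modeled with the rounded-exponential (roex) filter shape, whose sharpness parameter p determines the equivalent rectangular bandwidth (ERB). A minimal sketch of those standard formulas follows; the specific center frequency and p values are illustrative, not the study's estimates:

```python
import math

def roex_weight(f, fc, p):
    """Symmetric rounded-exponential (roex(p)) filter weight:
    W(g) = (1 + p*g) * exp(-p*g), with g = |f - fc| / fc the
    normalized deviation from the center frequency."""
    g = abs(f - fc) / fc
    return (1 + p * g) * math.exp(-p * g)

def erb_from_p(fc, p):
    """Equivalent rectangular bandwidth of a symmetric roex(p) filter:
    ERB = 4 * fc / p, so a smaller p means a wider (less selective) filter."""
    return 4 * fc / p

# Illustrative values: a 1-kHz filter with a typical normal-hearing p.
fc, p = 1000.0, 25.0
print(erb_from_p(fc, p))        # -> 160.0 (Hz)
print(roex_weight(fc, fc, p))   # -> 1.0 at the center frequency
```

Under this parameterization, the pass-band widening the study relates to larger N1 amplitudes corresponds simply to a drop in p.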
Journal of Neurolinguistics | 2010
Mathias Scharinger; Aditi Lahiri; Carsten Eulitz
In some languages, morphologically complex word forms may involve vowel alternations between front and back phonemes, as illustrated in the German noun Stock (Stock∼Stöcke ‘stick∼sticks’) versus the non-alternating Stoff (Stoff∼Stoffe ‘cloth∼cloths’). This study aimed to investigate the consequences of the presence or absence of these alternations for the fine structure of lexical representations. Previous research has shown that certain vowel oppositions are processed in an asymmetric way, as studied by means of brain electric activity (e.g. Eulitz & Lahiri, 2004). Here, we contrasted base form and diminutive stems of German nouns in a Mismatch Negativity (MMN) study. We compared the alternating noun Stock with non-alternating Stoff and obtained a consistently stronger MMN if Stoff was preceded by a fronted stem. This was an initial stem fragment in Experiment 1 and its diminutive form in Experiment 2. The results of our experiments indicate asymmetries in processing at the phonetics/morpho-phonology interface and are discussed against the background of several models of lexical representation and morphological processing. We conjecture that our findings are best explained by differences in abstract phonological representations.
Journal of Phonetics | 2014
Frank Zimmerer; Mathias Scharinger; Henning Reetz
In running speech, deviations from canonical pronunciations are omnipresent. In extreme cases, segments such as /t/ are deleted altogether. On the other hand, /t/ may have morphological meaning, for instance, as marker of past tense in deal-t. Is it thus less likely that /t/ is deleted in dealt than in monomorphemic words, such as paint? Previous research suggests that morphological constraints on /t/-deletions indeed exist in English. However, in languages like German with richer morphology than English, the probability that /t/ with morphological information is deleted seems to be higher, particularly in contexts where /t/-deletion can allow for cluster simplification. Would such phonological effects override morphological constraints on /t/-deletion? To this end, a novel inflectional spoken verb form corpus was constructed in order to analyze the role of phonological and morphological influences on /t/-deletions. Final /t/ was part of suffixes in 2nd and 3rd person singular present tense verb forms (e.g., mach-st; mach-t; ‘make’). Statistical analyses on /t/-deletions revealed that phonological context was highly predictive of /t/-deletions, particularly in cases where cluster simplifications were possible. This was true even in the 3rd person verb forms, where /t/ is morphologically more meaningful than in the 2nd person verb forms, and despite the fact that overall, /t/ was deleted less often in the 3rd than in the 2nd person. Altogether, this suggests that both phonology and morphology may constrain (or predict) /t/-deletions in German, but phonology can override morphological constraints in certain situations.
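Analyses of this kind are typically run as logistic regressions with deletion as the binary outcome and phonological and morphological factors as predictors. The sketch below is a self-contained toy version of that setup, not the study's statistical model: the data are synthetic, the two binary predictors (following consonant, 3rd-person form) and their effect sizes are invented to mirror the reported pattern (context dominates, 3rd person deletes somewhat less), and the fit is a plain gradient-descent logistic regression rather than the mixed models usually used for corpus data.

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fit_logistic(xs, ys, lr=0.5, epochs=200):
    """Per-sample gradient descent on the logistic loss:
    models P(deletion) from binary predictors; returns [bias, w1, w2, ...]."""
    w = [0.0] * (len(xs[0]) + 1)
    for _ in range(epochs):
        for x, y in zip(xs, ys):
            z = w[0] + sum(wi * xi for wi, xi in zip(w[1:], x))
            err = sigmoid(z) - y
            w[0] -= lr * err
            for i, xi in enumerate(x):
                w[i + 1] -= lr * err * xi
    return w

# Synthetic corpus: predictors are [following_consonant, third_person], 1/0 coded.
# Deletion is made much more likely before a consonant (cluster simplification)
# and somewhat less likely in 3rd-person forms -- invented numbers, not real data.
random.seed(1)
data = []
for _ in range(400):
    cons, third = random.randint(0, 1), random.randint(0, 1)
    p_del = 0.7 * cons - 0.15 * third + 0.15
    data.append(([cons, third], 1 if random.random() < p_del else 0))

w = fit_logistic([x for x, _ in data], [y for _, y in data])
print(w[1] > 0 > w[2])  # phonological context outweighs the morphological factor
```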
Memory & Cognition | 2013
Mathias Scharinger; Molly J. Henry; Jonas Obleser
Complex sounds vary along a number of acoustic dimensions. These dimensions may exhibit correlations that are familiar to listeners due to their frequent occurrence in natural sounds—namely, speech. However, the precise mechanisms that enable the integration of these dimensions are not well understood. In this study, we examined the categorization of novel auditory stimuli that differed in the correlations of their acoustic dimensions, using decision bound theory. Decision bound theory assumes that stimuli are categorized on the basis of either a single dimension (rule based) or the combination of more than one dimension (information integration) and provides tools for assessing successful integration across multiple acoustic dimensions. In two experiments, we manipulated the stimulus distributions such that in Experiment 1, optimal categorization could be accomplished by either a rule-based or an information integration strategy, while in Experiment 2, optimal categorization was possible only by using an information integration strategy. In both experiments, the pattern of results demonstrated that unidimensional strategies were strongly preferred. Listeners focused on the acoustic dimension most closely related to pitch, suggesting that pitch-based categorization was given preference over timbre-based categorization. Importantly, in Experiment 2, listeners also relied on a two-dimensional information integration strategy, if there was immediate feedback. Furthermore, this strategy was used more often for distributions defined by a negative spectral correlation between stimulus dimensions, as compared with distributions with a positive correlation. These results suggest that prior experience with such correlations might shape short-term auditory category learning.
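The rule-based versus information-integration contrast from decision bound theory can be made concrete with two tiny classifiers. The sketch below uses invented two-dimensional stimulus distributions (loosely labeled pitch-like and timbre-like), not the study's stimuli or fitted models, and shows the Experiment-2-style situation in which a linear bound combining both dimensions outperforms any single-dimension criterion:

```python
import random

def rule_based(stim, criterion, dim=0):
    """Unidimensional rule: the category decision uses one dimension only."""
    return int(stim[dim] > criterion)

def information_integration(stim, w1, w2, c):
    """Linear decision bound combining both dimensions pre-decisionally."""
    return int(w1 * stim[0] + w2 * stim[1] > c)

# Toy categories separated along the diagonal (negative spectral correlation),
# so no single dimension is sufficient for optimal accuracy.
random.seed(0)
def sample(cat):
    base = 1.0 if cat else -1.0
    x = base + random.gauss(0, 0.8)   # pitch-like dimension
    y = -base + random.gauss(0, 0.8)  # timbre-like dimension
    return (x, y), cat

trials = [sample(random.randint(0, 1)) for _ in range(2000)]
acc_rule = sum(rule_based(s, 0.0) == c for s, c in trials) / len(trials)
acc_ii = sum(information_integration(s, 1.0, -1.0, 0.0) == c for s, c in trials) / len(trials)
print(acc_ii > acc_rule)  # integration beats the best unidimensional rule here
```

In decision bound modeling, each listener's responses are fit with both bound types and the better-fitting model indicates which strategy was used; the preference for the unidimensional rule reported above means the rule-based model won for most listeners despite its accuracy cost.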
Frontiers in Neuroscience | 2014
Mathias Scharinger; Björn Herrmann; Till Nierhaus; Jonas Obleser
Optimal utilization of acoustic cues during auditory categorization is a vital skill, particularly when informative cues become occluded or degraded. Consequently, the acoustic environment requires flexible choosing and switching amongst available cues. The present study targets the brain functions underlying such changes in cue utilization. Participants performed a categorization task with immediate feedback on acoustic stimuli from two categories that varied in duration and spectral properties, while we simultaneously recorded Blood Oxygenation Level Dependent (BOLD) responses in fMRI and electroencephalograms (EEGs). In the first half of the experiment, categories could be best discriminated by spectral properties. Halfway through the experiment, spectral degradation rendered the stimulus duration the more informative cue. Behaviorally, degradation decreased the likelihood of utilizing spectral cues. Spectrally degrading the acoustic signal led to increased alpha power compared to nondegraded stimuli. The EEG-informed fMRI analyses revealed that alpha power correlated with BOLD changes in inferior parietal cortex and right posterior superior temporal gyrus (including planum temporale). In both areas, spectral degradation led to a weaker coupling of BOLD response to behavioral utilization of the spectral cue. These data provide converging evidence from behavioral modeling, electrophysiology, and hemodynamics that (a) increased alpha power mediates the inhibition of uninformative (here spectral) stimulus features, and that (b) the parietal attention network supports optimal cue utilization in auditory categorization. The results highlight the complex cortical processing of auditory categorization under realistic listening challenges.
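The alpha-power measure driving the EEG-informed fMRI analysis is, at its core, band-limited spectral power per epoch. A minimal stdlib sketch of that quantity follows; it uses a plain DFT on invented test signals and stands in for, rather than reproduces, the time-frequency pipeline actually used:

```python
import cmath
import math

def band_power(signal, srate, f_lo=8.0, f_hi=12.0):
    """Alpha-band (8-12 Hz by default) power of one epoch, summed over
    the DFT bins whose frequencies fall inside the band."""
    n = len(signal)
    power = 0.0
    for k in range(1, n // 2):
        f = k * srate / n
        if f_lo <= f <= f_hi:
            coef = sum(s * cmath.exp(-2j * math.pi * k * t / n)
                       for t, s in enumerate(signal))
            power += abs(coef) ** 2 / n
    return power

# Sanity check on synthetic data: a 10-Hz sinusoid carries far more
# alpha power than a 25-Hz one.
srate, n = 250, 250  # one 1-s epoch at 250 Hz
alpha = [math.sin(2 * math.pi * 10 * t / srate) for t in range(n)]
beta = [math.sin(2 * math.pi * 25 * t / srate) for t in range(n)]
print(band_power(alpha, srate) > band_power(beta, srate))  # -> True
```

In an EEG-informed fMRI design, trial-wise values like these are entered as a parametric regressor in the BOLD model, which is what yields the alpha-BOLD coupling in parietal and superior temporal regions reported above.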