
Publication


Featured research published by Holger Mitterer.


Journal of Phonetics | 2014

Phonetic category recalibration: What are the categories?

Eva Reinisch; David R. Wozny; Holger Mitterer; Lori L. Holt

Listeners use lexical or visual context information to recalibrate auditory speech perception. After hearing an ambiguous auditory stimulus between /aba/ and /ada/ coupled with a clear visual stimulus (e.g., lip closure in /aba/), an ambiguous auditory-only stimulus is perceived in line with the previously seen visual stimulus. What remains unclear, however, is what exactly listeners are recalibrating: phonemes, phone sequences, or acoustic cues. To address this question we tested generalization of visually-guided auditory recalibration to 1) the same phoneme contrast cued differently (i.e., /aba/-/ada/ vs. /ibi/-/idi/, where the main cues are formant transitions in the vowels vs. burst and frication of the obstruent), 2) a different phoneme contrast cued identically (/aba/-/ada/ vs. /ama/-/ana/, both cued by formant transitions in the vowels), and 3) the same phoneme contrast with the same cues in a different acoustic context (/aba/-/ada/ vs. /ubu/-/udu/). Whereas recalibration was robust for all recalibration control trials, no generalization was found in any of the experiments. This suggests that perceptual recalibration may be more specific than previously thought, as it appears to be restricted to the phoneme category experienced during exposure as well as to the specific manipulated acoustic cues. We suggest that recalibration affects context-dependent sub-lexical units.


Journal of Phonetics | 2016

How does prosody influence speech categorization?

Holger Mitterer; Taehong Cho; Sahyang Kim

We thank our graduate student assistants, Daejin Kim, Miroo Lee and Yuna Baek for assisting us with data acquisition. This work was supported by the research fund of Hanyang University (HY-2013) to the corresponding author (T. Cho).


Journal of Phonetics | 2016

Exposure modality, input variability and the categories of perceptual recalibration

Eva Reinisch; Holger Mitterer

Recent evidence shows that studies on perceptual recalibration and its generalization can inform us about the presence and nature of prelexical units used for speech perception. Listeners recalibrate perception when hearing an ambiguous auditory stimulus between, for example, /p/ and /t/ in unambiguous lexical context (kee[p/t]->/p/, mee[p/t]->/t/) or visual context (presence vs. absence of lip closure). A later encountered ambiguous auditory-only stimulus is then perceived in line with the previously experienced context. Unlike studies using lexical context to guide learning, experiments with the visual paradigm suggested that prelexical units are rather specific and context-dependent. However, these experiments raised doubts whether lexically-guided and visually-guided recalibration are targeting the same type of units, or whether learning in the visually-guided paradigm—with limited variability during exposure—is task-specific. The present study shows successful visually-guided learning following exposure to a variety of different learning trials. We also show that patterns of generalization found with the visually-guided paradigm can be replicated with a lexically-guided paradigm: listeners do not generalize a recalibrated stop contrast across manner of articulation. This supports suggestions that the units of perception depend on the distribution of relevant cues in the speech signal.


Journal of Phonetics | 2016

What are the letters of speech? Testing the role of phonological specification and phonetic similarity in perceptual learning

Holger Mitterer; Taehong Cho; Sahyang Kim

Recent studies on perceptual learning have indicated that listeners use some form of pre-lexical abstraction (an intermediate unit) between the acoustic input and lexical representations of words. Patterns of generalization of learning that can be observed with the perceptual learning paradigm have also been effectively examined for exploring the nature of these intermediate pre-lexical units. We here test whether perceptual learning generalizes to other sounds that share an underlying or a phonetic representation with the sounds on which learning has taken place. This was achieved by exposing listeners to phonologically altered (tensified) plain (lax) stops in Korean (i.e., underlyingly plain stops are produced as tense due to a phonological process in Korean), with which listeners learned to recalibrate place of articulation in tensified plain stops. After the recalibration with tensified plain stops, Korean listeners generalized perceptual learning (1) to phonetically similar but underlyingly (phonemically) different stops (i.e., from tensified plain stops to underlyingly tense stops) and (2) to phonetically dissimilar but underlyingly (phonemically) same stops (i.e., from tensified plain stops to non-tensified ones), while generalization failed for phonetically dissimilar and underlyingly different consonants (aspirated stops and nasals) even though they share the same [place] feature. The results imply that pre-lexical units can be better understood in terms of phonetically-definable segments of granular size rather than phonological features, although perceptual learning appears to make some reference to the underlying (phonemic) representation of the speech sounds on which learning takes place.


Attention, Perception, & Psychophysics | 2017

How does cognitive load influence speech perception? An encoding hypothesis

Holger Mitterer; Sven L. Mattys

Two experiments investigated the conditions under which cognitive load exerts an effect on the acuity of speech perception. These experiments extend earlier research by using a different speech perception task (four-interval oddity task) and by implementing cognitive load through a task often thought to be modular, namely, face processing. In the cognitive-load conditions, participants were required to remember two faces presented before the speech stimuli. In Experiment 1, performance in the speech-perception task under cognitive load was not impaired in comparison to a no-load baseline condition. In Experiment 2, we modified the load condition minimally such that it required encoding of the two faces simultaneously with the speech stimuli. As a reference condition, we also used a visual search task that in earlier experiments had led to poorer speech perception. Both concurrent tasks led to decrements in the speech task. The results suggest that speech perception is affected even by loads thought to be processed modularly, and that, critically, encoding in working memory might be the locus of interference.


Bilingualism: Language and Cognition | 2016

Variability in L2 phonemic learning originates from speech-specific capabilities: An MMN study on late bilinguals

Begoña Díaz; Holger Mitterer; Mirjam Broersma; Carles Escera; Núria Sebastián-Gallés

People differ in their ability to perceive second language (L2) sounds. In early bilinguals the variability in learning L2 phonemes stems from speech-specific capabilities (Díaz, Baus, Escera, Costa & Sebastián-Gallés, 2008). The present study addresses whether speech-specific capabilities similarly explain variability in late bilinguals. Event-related potentials were recorded (using a design similar to Díaz et al., 2008) in two groups of late Dutch–English bilinguals who were good or poor in overtly discriminating the L2 English vowels /e-ae/. The mismatch negativity, an index of discrimination sensitivity, was similar between the groups in conditions involving pure tones (of different length, frequency, and presentation order) but was attenuated in poor L2 perceivers for native, unknown, and L2 phonemes. These results suggest that variability in L2 phonemic learning originates from speech-specific capabilities and imply a continuity of L2 phonemic learning mechanisms throughout the lifespan.


Journal of Phonetics | 2018

Not all geminates are created equal: Evidence from Maltese glottal consonants

Holger Mitterer

Many languages distinguish short and long consonants, or singletons and geminates. At a phonetic level, research has established that duration is the main cue to such distinctions but that other, sometimes language-specific, cues contribute to the distinction as well. Different proposals for representing geminates share one assumption: The difference between a singleton and a geminate is relatively uniform for all consonants in a given language. In this paper, Maltese glottal consonants are shown to challenge this view. In production, secondary cues, such as the amount of voicing during closure and the spectral properties of frication noises, are stronger for glottal consonants than for oral ones, and, in perception, the role of secondary cues and duration also varies across consonants. Contrary to the assumption that gemination is a uniform process in a given language, the results show that the relative role of secondary cues and duration may differ across consonants and that gemination may involve language-specific phonetic knowledge that is specific to each consonant. These results question the idea that lexical access in speech processing can be achieved through features.


Attention, Perception, & Psychophysics | 2017

Visual speech influences speech perception immediately but not automatically

Holger Mitterer; Eva Reinisch

Two experiments examined the time course of the use of auditory and visual speech cues to spoken word recognition using an eye-tracking paradigm. Results of the first experiment showed that the use of visual speech cues from lipreading is reduced if concurrently presented pictures require a division of attentional resources. This reduction was evident even when listeners’ eye gaze was on the speaker rather than the (static) pictures. Experiment 2 used a deictic hand gesture to foster attention to the speaker. At the same time, the visual processing load was reduced by keeping the visual display constant over a fixed number of successive trials. Under these conditions, the visual speech cues from lipreading were used. Moreover, the eye-tracking data indicated that visual information was used immediately and even earlier than auditory information. In combination, these data indicate that visual speech cues are not used automatically, but if they are used, they are used immediately.


Journal of Memory and Language | 2018

Allophones, not phonemes in spoken-word recognition

Holger Mitterer; Eva Reinisch; James M. McQueen


ICPhS | 2015

Perceptual learning in speech is phonetic, not phonological: Evidence from final consonant devoicing

Eva Reinisch; Holger Mitterer

Collaboration


Dive into Holger Mitterer's collaborations.

Top Co-Authors

Mirjam Broersma

Radboud University Nijmegen
