Anita Wagner
University of Groningen
Publications
Featured research published by Anita Wagner.
Journal of the Acoustical Society of America | 2006
Anita Wagner; Mirjam Ernestus; Anne Cutler
The distribution of energy across the noise spectrum provides the primary cues for the identification of a fricative. Formant transitions have been reported to play a role in identification of some fricatives, but the combined results so far are conflicting. We report five experiments testing the hypothesis that listeners differ in their use of formant transitions as a function of the presence of spectrally similar fricatives in their native language. Dutch, English, German, Polish, and Spanish native listeners performed phoneme monitoring experiments with pseudowords containing either coherent or misleading formant transitions for the fricatives /s/ and /f/. Listeners of German and Dutch, both languages without spectrally similar fricatives, were not affected by the misleading formant transitions. Listeners of the remaining languages were misled by incorrect formant transitions. In an untimed labeling experiment both Dutch and Spanish listeners provided goodness ratings that revealed sensitivity to the acoustic manipulation. We conclude that all listeners may be sensitive to mismatching information at a low auditory level, but that they do not necessarily take full advantage of all available systematic acoustic variation when identifying phonemes. Formant transitions may be most useful for listeners of languages with spectrally similar fricatives.
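The spectral cue described in this abstract can be sketched computationally: a frequency-weighted mean of the noise spectrum (the spectral centroid) separates high-frequency /s/-like frication from more diffuse /f/-like frication. This is a toy illustration, not the authors' analysis; the band edges and the centroid measure are assumptions chosen only for demonstration.

```python
import numpy as np

def spectral_centroid(signal, sample_rate):
    """Frequency-weighted mean of the magnitude spectrum, in Hz."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    return float(np.sum(freqs * spectrum) / np.sum(spectrum))

def band_noise(rng, n, sample_rate, lo, hi):
    """White noise restricted to the band [lo, hi] Hz, built in the frequency domain."""
    freqs = np.fft.rfftfreq(n, d=1.0 / sample_rate)
    spec = rng.standard_normal(len(freqs)) + 1j * rng.standard_normal(len(freqs))
    spec[(freqs < lo) | (freqs > hi)] = 0.0
    return np.fft.irfft(spec, n)

rng = np.random.default_rng(0)
n, fs = 1600, 16000  # 100 ms of "frication noise" at 16 kHz

# Illustrative stand-ins: /s/ concentrates energy high in the spectrum,
# /f/ spreads it more diffusely (band edges are invented for the demo).
s_like = band_noise(rng, n, fs, 4000, 8000)
f_like = band_noise(rng, n, fs, 1000, 7000)

assert spectral_centroid(s_like, fs) > spectral_centroid(f_like, fs)
```

The centroid is only one of several spectral moments used in fricative research; it is shown here because it makes the /s/-/f/ energy difference easy to see in a few lines.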
Frontiers in Psychology | 2016
Anita Wagner; Paolo Toffanin; Deniz Başkent
Understanding speech is effortless in ideal situations, and although adverse conditions, such as those caused by hearing impairment, often render it an effortful task, they do not necessarily suspend speech comprehension. A prime example of this is speech perception by cochlear implant users, whose hearing prostheses transmit speech as a significantly degraded signal. It is not yet known how mechanisms of speech processing deal with such degraded signals, and whether they are affected by effortful processing of speech. This paper compares the automatic process of lexical competition between natural and degraded speech, and combines gaze fixations, which capture the course of lexical disambiguation, with pupillometry, which quantifies the mental effort involved in processing speech. Listeners’ ocular responses were recorded during disambiguation of lexical embeddings with matching and mismatching durational cues. Durational cues were selected because of their substantial role in letting listeners quickly limit the number of lexical candidates for lexical access in natural speech. Results showed that lexical competition increased mental effort in processing natural stimuli, in particular in the presence of mismatching cues. Signal degradation reduced listeners’ ability to quickly integrate durational cues in lexical selection, and delayed and prolonged lexical competition. The effort of processing degraded speech was increased overall, and because it had its sources at the pre-lexical level, this effect can be attributed to listening to degraded speech rather than to lexical disambiguation. In sum, the course of lexical competition was largely comparable for natural and degraded speech, but showed crucial shifts in timing, and different sources of increased mental effort.
We argue that well-timed progress of information from sensory to pre-lexical and lexical stages of processing, which is the result of perceptual adaptation during speech development, is the reason why, in ideal situations, perceiving speech is an undemanding task. Degradation of the signal or of the receiver channel can quickly bring this well-adjusted timing out of balance and lead to an increase in mental effort. Incomplete and effortful processing at the early pre-lexical stages has consequences for lexical processing, as it adds uncertainty to the forming and revising of lexical hypotheses.
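Pupillometric effort measures of the kind this study relies on are typically quantified by subtracting a pre-stimulus baseline from each trial's pupil trace and then taking a summary index such as peak evoked dilation. The sketch below illustrates that general two-step logic only; the trial values, window sizes, and the peak index are hypothetical and do not reproduce the authors' pipeline.

```python
import numpy as np

def baseline_correct(pupil_trials, baseline_samples):
    """Subtract each trial's pre-stimulus mean so traces reflect
    task-evoked dilation rather than baseline pupil size."""
    baseline = pupil_trials[:, :baseline_samples].mean(axis=1, keepdims=True)
    return pupil_trials - baseline

def peak_dilation(corrected):
    """Peak evoked dilation per trial, a common index of listening effort."""
    return corrected.max(axis=1)

# Hypothetical data: 3 trials x 10 samples of pupil diameter (mm),
# with the first 2 samples forming the pre-stimulus baseline window.
trials = np.array([
    [4.0, 4.0, 4.1, 4.3, 4.5, 4.6, 4.5, 4.4, 4.2, 4.1],
    [3.8, 3.8, 3.9, 4.0, 4.2, 4.3, 4.3, 4.1, 4.0, 3.9],
    [4.2, 4.2, 4.2, 4.4, 4.7, 4.9, 4.8, 4.6, 4.4, 4.3],
])
corrected = baseline_correct(trials, baseline_samples=2)
```

Baseline correction matters because absolute pupil size varies across trials and listeners; only the change relative to baseline can be read as evoked effort.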
Journal of the Acoustical Society of America | 2016
Paul Iverson; Anita Wagner; Stuart Rosen
Cross-language differences in speech perception have traditionally been linked to phonological categories, but it has become increasingly clear that language experience has effects beginning at early stages of perception, which blurs the accepted distinctions between general and speech-specific processing. The present experiments explored this distinction by playing stimuli to English and Japanese speakers that manipulated the acoustic form of English /r/ and /l/, in order to determine how acoustically natural and phonologically identifiable a stimulus must be for cross-language discrimination differences to emerge. Discrimination differences were found for stimuli that did not sound subjectively like speech or /r/ and /l/, but overall they were strongly linked to phonological categorization. The results thus support the view that phonological categories are an important source of cross-language differences, but also show that these differences can extend to stimuli that do not clearly sound like speech.
Journal of the Acoustical Society of America | 2011
Paul Iverson; Anita Wagner; Melanie Pinet; Stuart Rosen
This study examined the perceptual specialization for native-language speech sounds, by comparing native Hindi and English speakers in their perception of a graded set of English /w/-/v/ stimuli that varied in similarity to natural speech. The results demonstrated that language experience does not affect general auditory processes for these types of sounds; there were strong cross-language differences for speech stimuli, and none for stimuli that were nonspeech. However, the cross-language differences extended into a gray area of speech-like stimuli that were difficult to classify, suggesting that the specialization occurred in phonetic processing prior to categorization.
Phonetica | 2008
Anita Wagner; Mirjam Ernestus
This study reports general and language-specific patterns in phoneme identification. In a series of phoneme monitoring experiments, Castilian Spanish, Catalan, Dutch, English, and Polish listeners identified vowel, fricative, and stop consonant targets that are phonemic in all these languages, embedded in nonsense words. Fricatives were generally identified more slowly than vowels, while the speed of identification for stop consonants was highly dependent on the onset of the measurements. Moreover, listeners’ response latencies and accuracy in detecting a phoneme correlated with the number of categories within that phoneme’s class in the listener’s native phoneme repertoire: more native categories slowed listeners down and decreased their accuracy. We excluded the possibility that this effect stems from differences in the frequencies of occurrence of the phonemes in the different languages. Rather, the effect of the number of categories can be explained by general properties of the perception system, which cause language-specific patterns in speech processing.
Journal of the Acoustical Society of America | 2013
Anita Wagner
Cross-language differences in the use of coarticulatory cues for the identification of fricatives have been demonstrated in a phoneme detection task: Listeners with perceptually similar fricative pairs in their native phoneme inventories (English, Polish, Spanish) relied more on cues from vowels than listeners with perceptually more distinct fricative contrasts (Dutch and German). The present gating study further investigated these cross-language differences and addressed three questions. (1) Are there cross-language differences in informativeness of parts of the speech signal regarding place of articulation for fricative identification? (2) Are such cross-language differences fricative-specific, or do they extend to the perception of place of articulation for plosives? (3) Is such language-specific uptake of information based on cues preceding or following the consonantal constriction? Dutch, Italian, Polish, and Spanish listeners identified fricatives and plosives in gated CV and VC syllables. The results showed cross-language differences in the informativeness of coarticulatory cues for fricative identification: Spanish and Polish listeners extracted place of articulation information from shorter portions of VC syllables. No language-specific differences were found for plosives, suggesting that greater reliance on coarticulatory cues did not generalize to other phoneme types. The language-specific differences for fricatives were based on coarticulatory cues into the consonant.
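A gating paradigm like the one used here presents listeners with successively longer onset portions of a syllable. As a rough sketch (the gate and ramp durations are assumptions, not the study's actual values), gated stimuli can be generated by truncating a signal at fixed increments and applying a short cosine offset ramp to avoid an audible click:

```python
import numpy as np

def make_gates(signal, sample_rate, gate_ms=20, ramp_ms=5):
    """Successively longer truncations of `signal`, each ending in a
    short raised-cosine offset ramp."""
    gate_len = int(sample_rate * gate_ms / 1000)
    ramp_len = int(sample_rate * ramp_ms / 1000)
    ramp = 0.5 * (1 + np.cos(np.linspace(0, np.pi, ramp_len)))
    gates = []
    for end in range(gate_len, len(signal) + 1, gate_len):
        gate = signal[:end].copy()
        gate[-ramp_len:] *= ramp  # fade the cut edge to zero
        gates.append(gate)
    return gates
```

For a 100 ms syllable at 16 kHz these settings yield five gates of 20, 40, 60, 80, and 100 ms; identification accuracy as a function of gate length then indicates where in the signal the place-of-articulation information becomes available.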
IEEE Transactions on Neural Systems and Rehabilitation Engineering | 2018
Luise Wagner; Natasha Maurits; Bert Maat; Deniz Başkent; Anita Wagner
Electroencephalographic (EEG) recordings provide objective estimates of listeners’ cortical processing of sounds and of the status of their speech perception system. For profoundly deaf listeners with cochlear implants (CIs), the applications of EEG are limited because the device adds electric artifacts to the recordings. This restricts the possibilities for neural-based metrics of speech processing by CI users, for instance to gauge cortical reorganization due to an individual’s hearing-loss history. This paper describes the characteristics of the CI artifact as recorded with an artificial head substitute, and reports how the artifact is affected by the properties of the acoustic input signal versus the settings of the device. Methods: We created a brain substitute using agar, which simulates the brain’s conductivity, placed it in a human skull, and performed EEG recordings with CIs from three different manufacturers. As stimuli, we used simple and complex non-speech stimuli, as well as naturally produced continuous speech. We examined the effect of manipulating device settings in both controlled experimental CI configurations and real clinical maps. Results: An increase in the magnitude of the stimulation current through the device settings also increases the magnitude of the artifact. The artifact recorded in response to speech is smaller in magnitude than that for non-speech stimuli, due to the amplitude modulations inherent in the speech signal. Conclusion: The CI EEG artifact for speech appears more difficult to detect than that for simple stimuli. Since the artifact differs across CI users, due to their individual clinical maps, the method presented enables insight into the individual manifestations of the artifact.
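The finding that the speech-evoked artifact is smaller than the artifact for steady stimuli follows from a general property: amplitude modulation lowers a signal's RMS level. A toy numerical illustration (the carrier and envelope below are arbitrary stand-ins, not modeled on any CI's actual stimulation pattern):

```python
import numpy as np

def rms(x):
    """Root-mean-square amplitude, a simple artifact-magnitude index."""
    return float(np.sqrt(np.mean(np.square(x))))

fs = 8000
t = np.arange(fs) / fs

# A steady carrier stands in for the artifact evoked by a simple,
# constant-amplitude stimulus; the same carrier with a slow, speech-like
# amplitude envelope stands in for the artifact evoked by running speech.
carrier = np.sin(2 * np.pi * 900 * t)
envelope = 0.5 * (1 + np.sin(2 * np.pi * 4 * t))  # 4 Hz, speech-envelope-like
speech_like = envelope * carrier

assert rms(speech_like) < rms(carrier)
```

Because the envelope spends much of its time below full amplitude, the modulated "artifact" carries less total energy, which is consistent with the speech artifact being harder to detect than the artifact for steady stimuli.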
Trends in hearing | 2016
Deniz Başkent; Jeanne Clarke; Carina Pals; Michel Ruben Benard; Pranesh Bhargava; Jefta D. Saija; Anastasios Sarampalis; Anita Wagner; Etienne Gaudrain
External degradations in incoming speech reduce understanding, and hearing impairment further compounds the problem. While cognitive mechanisms alleviate some of the difficulties, their effectiveness may change with age. In our research, reviewed here, we investigated cognitive compensation with hearing impairment, cochlear implants, and aging, via (a) phonemic restoration as a measure of top-down filling of missing speech, (b) listening effort and response times as a measure of increased cognitive processing, and (c) the visual world paradigm and eye gaze as a measure of the use of context and its time course. Our results indicate that between speech degradations and their cognitive compensation, there is a fine balance that seems to vary greatly across individuals. Hearing impairment or inadequate hearing-device settings may limit compensation benefits. Cochlear implants seem to allow the effective use of sentential context, but likely at the cost of delayed processing. Linguistic and lexical knowledge, which play an important role in compensation, may be successfully employed in advanced age, as some compensatory mechanisms seem to be preserved. These findings indicate that cognitive compensation in hearing impairment can be highly complicated: not always absent, but also not easily predicted by speech intelligibility tests alone.
Advances in Experimental Medicine and Biology | 2016
Pim van Dijk; Deniz Başkent; Etienne Gaudrain; Emile de Kleine; Anita Wagner; Cornelis Lanting
The International Symposium on Hearing is a prestigious, triennial gathering where world-class scientists present and discuss the most recent advances in the field of human and animal hearing research. The 2015 edition will particularly focus on integrative approaches linking physiological, psychophysical and cognitive aspects of normal and impaired hearing. Like previous editions, the proceedings will contain about 50 chapters ranging from basic to applied research, and of interest to neuroscientists, psychologists, audiologists, engineers, otolaryngologists, and artificial intelligence researchers.
Journal of the Acoustical Society of America | 2004
Anita Wagner; Mirjam Ernestus
Although the consonant inventories of Dutch, German, English, and Spanish are similar in size, the fricative inventories differ: English and Spanish distinguish labio‐dental versus dental fricatives, whereas Dutch and German do not. Three phoneme‐monitoring experiments investigated whether the relevance of formant transitions varies across languages, and whether it depends on the types of fricatives in a language. Native Dutch, German, English, and Spanish listeners detected a target fricative, /s/ or /f/, in nonwords. Half of the nonwords were cross spliced to produce misleading formant transitions: an /s/ replaced an /f/, or vice versa. Dutch and German listeners were unaffected by the misleading formant transitions, whereas Spanish listeners missed significantly more fricatives surrounded by misleading formant transitions; these results were obtained whether stimuli were originally spoken by a Dutch or Spanish speaker. English listeners showed the same sensitivity to formant transitions as the Spanish. Despite previous reports that formant transition cues are of negligible significance for fricative identification (Klaassen‐Don, 1983), the present findings show that formant transitions are indeed relevant for listeners whose native language distinguishes labio‐dental versus dental fricatives. Listeners relied on the transitions even though no dental fricatives occurred in these stimuli, and independently of the speaker’s native language.
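Cross-splicing of the kind described above amounts to replacing the fricative-noise samples of one token with those of another, while keeping the surrounding vowels, and thus their formant transitions, intact. The sketch below assumes the two tokens are time-aligned and equal in length, and the boundary indices are hypothetical hand-marked values rather than the study's materials:

```python
import numpy as np

def cross_splice(token_a, token_b, fricative_start, fricative_end):
    """Return a copy of token_a whose fricative noise (the samples between
    fricative_start and fricative_end) is taken from token_b, so token_a's
    vowel portions and formant transitions are preserved."""
    spliced = token_a.copy()
    spliced[fricative_start:fricative_end] = token_b[fricative_start:fricative_end]
    return spliced
```

If the vowels around the splice point carried transitions appropriate for /f/ while the inserted noise is /s/, the result is a stimulus with misleading formant transitions of the kind these experiments used.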