Dianne J. Van Tasell
University of Minnesota
Publications
Featured research published by Dianne J. Van Tasell.
Ear and Hearing | 1997
Wayne O. Olsen; Dianne J. Van Tasell; Charles Speaks
Objective: To evaluate relations among scores for phonemes, words in isolation, and words in sentences for listeners with normal hearing and for listeners with sensorineural hearing loss. Design: Ten‐word lists of consonant‐vowel‐consonant monosyllables with each list utilizing the same 10 vowels and 20 consonants (Boothroyd, 1968) were devised and recorded. These words also were incorporated into contextually correct sentences and recorded by the same talker. The materials were presented in quiet to 36 listeners with normal hearing and to 876 listeners (1260 ears) with sensorineural hearing loss. Formulae derived by Boothroyd and Nittrouer (1988) to relate scores for phonemes, words, and sentences were applied to the data. Results: Phoneme scoring yielded scores that were on the order of 20% higher than scores for whole words heard in isolation, and scores for words in sentences were about 20% higher than when the same words were heard singly. Relations among scores of phonemes, words in isolation, and words in sentences were very similar to those observed by Boothroyd and Nittrouer (1988). The constants derived from application of their formulae to our data were very similar to the constants Boothroyd and Nittrouer obtained for a different set of materials presented against a noise background to listeners with normal hearing. Further, the constants were similar for our group of listeners with normal hearing and our large sample of listeners with sensorineural hearing loss. Conclusions: 1) These findings support Bilger's (1984) unifying assumptions that speech recognition is a single construct; therefore, scores on all speech recognition tests must be related and scores on one speech recognition test should be predictive of scores on other tests. 2) Advantages of phoneme scoring include: A) It increases the sample size of scored items for a given list of words, thereby reducing variability in test results. B) Statistical equivalence of phoneme scores for the same 30 phonemes in each of two isophonemic word lists can be evaluated quickly and easily by applying the binomial distribution model to the scores (Thornton & Raffin, 1978). C) Phoneme scores are reasonably accurate predictors of recognition of words in the contextually correct but generally low probability sentences used in this study.
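The Boothroyd and Nittrouer (1988) relations applied above have a compact form: if p is the proportion of phonemes recognized, the whole-word score w follows w = p^j, and the score s for words in sentences relates to the score i for the same words in isolation by 1 - s = (1 - i)^k, where j and k are constants fitted to the data. A minimal Python sketch for recovering j and k from observed proportions (the scores below are invented for illustration, not taken from the study):

    import math

    def j_factor(phoneme_score, word_score):
        """Estimate j in word_score = phoneme_score ** j (Boothroyd & Nittrouer, 1988)."""
        return math.log(word_score) / math.log(phoneme_score)

    def k_factor(isolated_word_score, sentence_word_score):
        """Estimate k in (1 - sentence_score) = (1 - isolated_score) ** k."""
        return math.log(1.0 - sentence_word_score) / math.log(1.0 - isolated_word_score)

    # Illustrative proportions: phonemes 0.80, whole words 0.58, words in sentences 0.78.
    print(f"j = {j_factor(0.80, 0.58):.2f}, k = {k_factor(0.58, 0.78):.2f}")

The larger item count behind advantage 2A also falls out directly: a 10-word isophonemic list yields 30 scored phonemes rather than 10 scored words.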
Journal of the Acoustical Society of America | 1986
Dianne J. Van Tasell; David A. Fabry; Linda M. Thibodeau
Confusion matrices for seven synthetic steady‐state vowels were obtained from ten normal and three hearing‐impaired subjects. The vowels were identified at greater than 96% accuracy by the normals, and less accurately by the impaired subjects. Shortened versions of selected vowels then were used as maskers, and vowel masking patterns (VMPs) consisting of forward‐masked threshold for sinusoidal probes at all vowel masker harmonics were obtained from the impaired subjects and from one normal subject. Vowel‐masked probe thresholds were transformed using growth‐of‐masking functions obtained with flat‐spectrum noise. VMPs of the impaired subjects, relative to those of the normal, were characterized by smaller dynamic range, poorer peak resolution, and poorer preservation of the vowel formant structure. These VMP characteristics, however, did not necessarily coincide with inaccurate vowel recognition. Vowel identification appeared to be related primarily to VMP peak frequencies rather than to the levels at the ...
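As a hedged illustration of the threshold transformation described above (the interpolation step and all numbers here are assumptions, not the original analysis), each vowel-masked probe threshold can be mapped through a subject's growth-of-masking function for flat-spectrum noise to an equivalent masker level:

    import numpy as np

    # Growth-of-masking function for one subject, measured with flat-spectrum noise:
    # probe threshold (dB SPL) as a function of noise masker level (dB SPL).
    noise_levels_db = np.array([40.0, 50.0, 60.0, 70.0, 80.0])
    probe_thresholds_db = np.array([22.0, 35.0, 49.0, 64.0, 78.0])

    def equivalent_masker_level(vowel_masked_threshold_db):
        """Invert the growth-of-masking function: find the flat-noise level that
        would produce the same probe threshold as the vowel masker did."""
        return np.interp(vowel_masked_threshold_db, probe_thresholds_db, noise_levels_db)

    # Hypothetical vowel masking pattern: thresholds at the vowel's harmonic frequencies.
    vmp_thresholds_db = np.array([30.0, 55.0, 70.0, 45.0, 28.0])
    print(equivalent_masker_level(vmp_thresholds_db))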
Journal of the Acoustical Society of America | 1987
Linda M. Thibodeau; Dianne J. Van Tasell
Frequency resolution was evaluated for two normal-hearing and seven hearing-impaired subjects with moderate, flat sensorineural hearing loss by measuring percent correct detection of a 2000-Hz tone as the width of a notch in band-reject noise increased. The level of the tone was fixed for each subject at a criterion performance level in broadband noise. Discrimination of synthetic speech syllables that differed in spectral content in the 2000-Hz region was evaluated as a function of the notch width in the same band-reject noise. Recognition of natural speech consonant/vowel syllables in quiet was also tested; results were analyzed for percent correct performance and relative information transmitted for voicing and place features. In the hearing-impaired subjects, frequency resolution at 2000 Hz was significantly correlated with the discrimination of synthetic speech information in the 2000-Hz region and was not related to the recognition of natural speech nonsense syllables unless (a) the speech stimuli contained the vowel /i/ rather than /a/, and (b) the score reflected information transmitted for place of articulation rather than percent correct.
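The "relative information transmitted" measure used here is the standard confusion-matrix calculation in the style of Miller and Nicely: mutual information between stimulus and response categories, collapsed onto a feature such as voicing or place, and normalized by the stimulus entropy. A minimal sketch with an invented 2x2 voicing matrix:

    import numpy as np

    def relative_information_transmitted(confusion):
        """Mutual information between stimulus (rows) and response (columns),
        divided by the stimulus entropy."""
        p = np.asarray(confusion, dtype=float)
        p /= p.sum()                          # joint probabilities
        px = p.sum(axis=1, keepdims=True)     # stimulus marginals
        py = p.sum(axis=0, keepdims=True)     # response marginals
        nz = p > 0
        mutual = np.sum(p[nz] * np.log2(p[nz] / (px @ py)[nz]))
        stimulus_entropy = -np.sum(px[px > 0] * np.log2(px[px > 0]))
        return mutual / stimulus_entropy

    # Hypothetical confusion matrix collapsed onto voicing: rows/columns = voiced, voiceless.
    voicing = [[45, 5],
               [10, 40]]
    print(f"relative information transmitted: {relative_information_transmitted(voicing):.2f}")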
The Hearing journal | 2002
Timothy D. Trine; Dianne J. Van Tasell
It is often said and written these days that digital technology will make possible what was not possible in analog hearing aids. This is true. It is also true that we are—as we always will be—at a point where what we wish digital hearing aids could do far exceeds what they actually can do. This has created some confusion among hearing professionals and consumers alike, and has given rise to some very legitimate questions: What can digital aids really do? What can’t they do? How do we know? In this article, we take a hard look at what digital hearing aids can do, and what is, at the moment, only wishful thinking. First, some definitions:
Language Speech and Hearing Services in Schools | 1986
Dianne J. Van Tasell; Christi Ann Mallinger; Elizabeth S. Crump
Functional gain and word recognition were assessed for nine hearing-impaired school children under two conditions of FM amplification: (a) FM auditory trainer with insert earphone, and (b) personal FM system with miniloop. On the average, the insert-earphone auditory trainer system provided slightly greater functional gain than did the miniloop system; differences were most consistent at frequencies below 1,000 Hz. For eight of the nine subjects, word recognition scores did not differ across the two amplification conditions. When they are adjusted properly, personal miniloop systems apparently may provide sufficient gain for speech recognition to some hearing-impaired children, with the exception of those children whose residual hearing is limited to low frequencies.
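Functional gain, as used above, is simply the difference between unaided and aided sound-field thresholds at each test frequency. A trivial sketch (the threshold values are invented for illustration):

    # Sound-field thresholds in dB HL at audiometric frequencies (invented values).
    frequencies_hz = [250, 500, 1000, 2000, 4000]
    unaided_db_hl = [70, 75, 80, 85, 90]
    aided_db_hl = [40, 42, 50, 60, 75]

    for f, u, a in zip(frequencies_hz, unaided_db_hl, aided_db_hl):
        print(f"{f} Hz: {u - a} dB functional gain")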
Journal of the Acoustical Society of America | 1981
Dianne J. Van Tasell; Elizabeth S. A. Crump
Normal‐hearing subjects identified tokens from two synthetic syllable continua in terms of stop consonant voicing and place of articulation. Each continuum was presented at both 60 and 100 dB SPL. Different effects of stimulus level were observed for the two sets of stimuli. For the voicing continuum, the phonetic boundary shifted to indicate more "voiceless" responses at the higher levels. For the place series, all syllables were identified less consistently at the higher level. Implications are discussed for comparing the speech‐cue identification performance of normal versus hearing‐impaired subjects who must listen at high levels.
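The boundary shift reported above is conventionally quantified as the 50% crossover of the identification function along the continuum. A hedged sketch (not the original analysis) that fits a logistic to the proportion of "voiceless" responses versus voice onset time:

    import numpy as np
    from scipy.optimize import curve_fit

    def logistic(x, boundary, slope):
        """Proportion of 'voiceless' responses as a function of the continuum step."""
        return 1.0 / (1.0 + np.exp(-slope * (x - boundary)))

    # Hypothetical identification data: VOT steps (ms) and proportion labeled 'voiceless'.
    vot_ms = np.array([0, 10, 20, 30, 40, 50, 60], dtype=float)
    p_voiceless = np.array([0.02, 0.05, 0.15, 0.55, 0.90, 0.97, 1.00])

    (boundary, slope), _ = curve_fit(logistic, vot_ms, p_voiceless, p0=[30.0, 0.2])
    print(f"category boundary ~ {boundary:.1f} ms VOT")

Comparing the fitted boundary at the two presentation levels quantifies the shift described above.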
Journal of the Acoustical Society of America | 2004
Elizabeth A. Strickland; Neal F. Viemeister; Dianne J. Van Tasell; Jill E. Preminger
In a recent paper, Horwitz et al. [J. Acoust. Soc. Am. 111, 409–416 (2002)] concluded that listeners with high-frequency hearing impairment show a decrement in the perception of low-frequency speech sounds that is due to loss of information normally carried by auditory-nerve fibers with high characteristic frequencies (CFs). However, in their own study and in other studies, highpass-filtered noise did not degrade the perception of lowpass-filtered speech in listeners with normal hearing. An alternate conclusion proposed by Strickland et al. [J. Acoust. Soc. Am. 95, 497–501 (1994)] is that information conveyed by high-CF fibers is not necessary for speech perception. To reconcile these opposite conclusions, we suggest that the hearing-impaired listeners tested by Horwitz et al. may not have had normal hearing even in the low frequencies, and that the conclusion from Strickland et al. remains correct: high-CF fibers are not necessary for normal speech perception.
Journal of the Acoustical Society of America | 1985
Sigfrid D. Soli; Virginia M. Kirby; Dianne J. Van Tasell; Gregory P. Widin
A primary source of perceptual information for the cochlear implant user is the time‐intensity envelope of the speech waveform. The purpose of this study was to estimate the amount and type of information for consonant recognition potentially available in the time‐intensity envelope of speech. The experimental stimuli were generated from the time‐intensity envelopes of 19 /aCa/ utterances (C = /p, t, k, b, d, g, f, θ, s, ʃ, v, ð, z, ʒ, m, n, r, l, j/), spoken by a male talker. The envelopes were obtained by full‐wave rectifying and low‐pass filtering the speech waveforms at 2000, 200, and 20 Hz. These envelopes were used to multiply noise with a 3‐kHz bandwidth, producing three sets of envelope stimuli with identical, flat spectra that differed in the amount of time‐varying information in their amplitude envelopes. The unprocessed speech waveforms and the three sets of envelope stimuli were presented to 12 normal‐hearing subjects in blocked, closed‐set consonant recognition tests. Individual and group confusion matri...
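The envelope processing described above (full-wave rectification, low-pass filtering at 2000, 200, or 20 Hz, then multiplying a 3-kHz-wide noise by the result) can be sketched as follows; the sampling rate, filter orders, and the synthetic test signal are assumptions, not the original settings:

    import numpy as np
    from scipy.signal import butter, filtfilt

    fs = 16000  # sampling rate in Hz (assumed)
    rng = np.random.default_rng(0)

    def envelope(speech, cutoff_hz):
        """Full-wave rectify the waveform, then low-pass filter at cutoff_hz."""
        b, a = butter(2, cutoff_hz / (fs / 2), btype="low")
        return filtfilt(b, a, np.abs(speech))

    def envelope_stimulus(speech, cutoff_hz):
        """Multiply 3-kHz-wide noise by the speech envelope: flat spectrum,
        original time-intensity envelope."""
        b, a = butter(4, 3000 / (fs / 2), btype="low")
        noise = filtfilt(b, a, rng.standard_normal(len(speech)))
        return envelope(speech, cutoff_hz) * noise

    # Amplitude-modulated tone standing in for an /aCa/ utterance.
    t = np.arange(fs) / fs
    speech_like = (0.5 + 0.5 * np.sin(2 * np.pi * 4 * t)) * np.sin(2 * np.pi * 500 * t)
    for cutoff in (2000, 200, 20):
        stim = envelope_stimulus(speech_like, cutoff)
        print(cutoff, "Hz cutoff -> stimulus RMS", round(float(np.std(stim)), 3))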
Journal of the Acoustical Society of America | 1996
Dianne J. Van Tasell; Bart R. Clement; Anna C. Schroder; David A. Nelson
The relationship between auditory frequency resolution and speech perception was investigated in 11 hearing‐impaired listeners who differed widely in terms of hearing‐loss severity, hearing‐loss configuration, and frequency resolution abilities. First, the function relating articulation index (AI) to percent correct phoneme recognition was determined for 19 normal‐hearing listeners at a variety of speech levels and S/N ratios. Eleven hearing‐impaired subjects were then tested with the same speech materials throughout a wide range of AI conditions. Simultaneous‐masked psychophysical tuning curves were also obtained at probe frequencies of 0.5, 1, 2, and 4 kHz from all hearing‐impaired subjects. When phoneme recognition scores were weighted by AI, the performance of 10 of the 11 hearing‐impaired subjects was within normal limits. A modest correlation was observed between high‐frequency tuning curve slope and the error in the AI prediction relative to the normal AI data. Results indicate that although listen...
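The articulation index (AI) above is, in its simplest band-audibility form, a weighted sum over frequency bands of the audible fraction of a roughly 30-dB speech dynamic range; a listener's phoneme score can then be compared with the normal-hearing score-versus-AI function at the same AI. A heavily simplified sketch (band importances and the +15 dB/30 dB audibility rule here are illustrative, not the procedure used in the study):

    def articulation_index(band_snrs_db, band_importances):
        """Simplified AI: each band contributes its importance weight times the
        audible fraction of a 30-dB speech dynamic range in that band."""
        ai = 0.0
        for snr_db, weight in zip(band_snrs_db, band_importances):
            audibility = min(max((snr_db + 15.0) / 30.0, 0.0), 1.0)
            ai += weight * audibility
        return ai

    # Hypothetical per-band speech-to-noise ratios (dB) and importance weights summing to 1.
    snrs_db = [20, 10, 0, -5, -20]
    weights = [0.10, 0.20, 0.30, 0.25, 0.15]
    print(f"AI = {articulation_index(snrs_db, weights):.2f}")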
Journal of the Acoustical Society of America | 1984
David A. Fabry; Dianne J. Van Tasell
Hearing threshold configuration of the impaired ear was simulated, via two different methods, in the normal ear of each of six subjects with unilateral sensorineural hearing loss. The simulation methods were (1) frequency‐specific attenuation (filtering), and (2) masking by shaped masking noise. Identification responses to 20 consonant‐vowel nonsense syllables were obtained from the normal ear of each subject in both hearing loss simulation conditions, as well as from the impaired ear. Data were scored both for percent correct syllable recognition and for errors on a set of eight consonant features. Overall performance level and consonant feature error patterns appeared to be relatively independent measures of speech recognition. For three subjects, both masking and filtering successfully simulated the effects of sensorineural hearing loss on consonant feature error patterns. For the remaining subjects, one or both of the simulation schemes was inadequate. [Supported by NINCDS Grant NS‐12125.]
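Of the two simulation methods above, frequency-specific attenuation can be sketched as a filter whose attenuation at each frequency matches the hearing loss of the impaired ear; the FFT implementation, interpolation, and example audiogram below are assumptions for illustration. (The masking alternative instead raises thresholds by adding noise shaped to the audiogram.)

    import numpy as np

    fs = 16000  # sampling rate in Hz (assumed)

    # Example audiogram: hearing loss in dB at audiometric frequencies (invented values).
    audiogram_freqs_hz = np.array([250, 500, 1000, 2000, 4000, 8000], dtype=float)
    hearing_loss_db = np.array([10, 15, 30, 50, 65, 70], dtype=float)

    def simulate_loss_by_filtering(signal):
        """Attenuate each frequency component by the hearing loss at that frequency."""
        spectrum = np.fft.rfft(signal)
        freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
        loss_db = np.interp(freqs, audiogram_freqs_hz, hearing_loss_db)
        return np.fft.irfft(spectrum * 10.0 ** (-loss_db / 20.0), n=len(signal))

    # Apply the simulated loss to one second of white noise.
    rng = np.random.default_rng(0)
    noise = rng.standard_normal(fs)
    attenuated = simulate_loss_by_filtering(noise)
    print("RMS before:", round(float(np.std(noise)), 3), "after:", round(float(np.std(attenuated)), 3))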