
Publications

Featured research published by Heather A. Kreft.


Journal of the Acoustical Society of America | 2005

Place-pitch discrimination of single- versus dual-electrode stimuli by cochlear implant users (L)

Gail S. Donaldson; Heather A. Kreft; Leonid M. Litvak

Simultaneous or near-simultaneous activation of adjacent cochlear implant electrodes can produce pitch percepts intermediate to those produced by each electrode separately, thereby increasing the number of place-pitch steps available to cochlear implant listeners. To estimate how many distinct pitches could be generated with simultaneous dual-electrode stimulation, the present study measured place-pitch discrimination thresholds for single- versus dual-electrode stimuli in users of the Clarion CII device. Discrimination thresholds were expressed as the proportion of current directed to the secondary electrode of the dual-electrode pair. For 16 of 17 electrode pairs tested in six subjects, thresholds ranged from 0.11 to 0.64, suggesting that dual-electrode stimuli can produce 2-9 discriminable pitches between the pitches of single electrodes. Some subjects demonstrated a level effect, with better place-pitch discrimination at higher stimulus levels. Equal loudness was achieved with dual-electrode stimuli at net current levels that were similar to or slightly higher than those for single-electrode stimuli.
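A back-of-the-envelope reading of these thresholds can be sketched in a few lines. This is an illustration I am adding, not the paper's analysis: if the threshold is the smallest discriminable change in the proportion of current steered to the secondary electrode, then the number of discriminable pitch steps between two adjacent electrodes is roughly the reciprocal of that threshold, which reproduces the 2-9 range quoted above.

```python
def discriminable_steps(threshold: float) -> int:
    """Approximate number of distinct pitches between two adjacent
    electrodes, given a place-pitch discrimination threshold expressed
    as the proportion of current steered to the secondary electrode.
    Assumes equal-sized steps across the full steering range."""
    return max(1, round(1.0 / threshold))

# Thresholds of 0.11 to 0.64 correspond to roughly 9 down to 2 steps.
print(discriminable_steps(0.11))  # 9
print(discriminable_steps(0.64))  # 2
```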


Journal of the Acoustical Society of America | 2008

Forward-masked spatial tuning curves in cochlear implant users

David A. Nelson; Gail S. Donaldson; Heather A. Kreft

Forward-masked psychophysical spatial tuning curves (fmSTCs) were measured in twelve cochlear-implant subjects, six using bipolar stimulation (Nucleus devices) and six using monopolar stimulation (Clarion devices). fmSTCs were measured at several probe levels on a middle electrode using a fixed-level probe stimulus and variable-level maskers. The average fmSTC slopes obtained in subjects using bipolar stimulation (3.7 dB/mm) were approximately three times steeper than the average slopes obtained in subjects using monopolar stimulation (1.2 dB/mm). Average spatial bandwidths were about half as wide for subjects with bipolar stimulation (2.6 mm) as for subjects with monopolar stimulation (4.6 mm). None of the tuning curve characteristics changed significantly with probe level. fmSTCs replotted in terms of acoustic frequency, using Greenwood's [J. Acoust. Soc. Am. 33, 1344-1356 (1961)] frequency-to-place equation, were compared with forward-masked psychophysical tuning curves obtained previously from normal-hearing and hearing-impaired acoustic listeners. The average tuning characteristics of fmSTCs in electric hearing were similar to the broad tuning observed in normal-hearing and hearing-impaired acoustic listeners at high stimulus levels. This suggests that spatial tuning is not the primary factor limiting speech perception in many cochlear implant users.
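The frequency-to-place conversion used here can be sketched with Greenwood's cochlear map, F = A(10^(ax) - k). The abstract cites Greenwood (1961); the specific constants below (A = 165.4, a = 0.06, k = 0.88, with distance in mm from the apex) are the commonly cited human-cochlea parameters from Greenwood's later work, not values taken from this paper.

```python
def greenwood_frequency(x_mm: float) -> float:
    """Characteristic frequency (Hz) at a cochlear place x_mm millimeters
    from the apex: F = A * (10**(a * x) - k)."""
    A, a, k = 165.4, 0.06, 0.88  # commonly cited human parameters
    return A * (10 ** (a * x_mm) - k)

# Apical places map to low frequencies, basal places to high frequencies.
print(greenwood_frequency(5))   # a few hundred Hz, near the apex
print(greenwood_frequency(30))  # around 10 kHz, toward the base
```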


Journal of the Acoustical Society of America | 2011

Comparing spatial tuning curves, spectral ripple resolution, and speech perception in cochlear implant users.

Elizabeth S. Anderson; David A. Nelson; Heather A. Kreft; Peggy B. Nelson; Andrew J. Oxenham

Spectral ripple discrimination thresholds were measured in 15 cochlear-implant users with broadband (350-5600 Hz) and octave-band noise stimuli. The results were compared with spatial tuning curve (STC) bandwidths previously obtained from the same subjects. Spatial tuning curve bandwidths did not correlate significantly with broadband spectral ripple discrimination thresholds but did correlate significantly with ripple discrimination thresholds when the rippled noise was confined to an octave-wide passband centered on the frequency allocation of the STC probe electrode. Ripple discrimination thresholds were also measured for octave-band stimuli in four contiguous octaves, with center frequencies from 500 to 4000 Hz. Substantial variations in thresholds with center frequency were found in individuals, but no general trend of increasing or decreasing resolution from apex to base was observed in the pooled data. Neither ripple nor STC measures correlated consistently with speech measures in noise and quiet in this sample of subjects. Overall, the results suggest that spectral ripple discrimination provides a reasonable measure of spectral resolution that correlates well with more direct, but more time-consuming, measures of spectral resolution, but that such measures do not always provide a clear and robust predictor of performance in speech perception tasks.


Journal of the Acoustical Society of America | 2004

Effects of pulse rate on threshold and dynamic range in Clarion cochlear-implant users (L)

Heather A. Kreft; Gail S. Donaldson; David A. Nelson

The effects of pulse rate on absolute threshold (THS), maximum acceptable loudness (MAL), and dynamic range (DR) were evaluated in 15 Clarion cochlear implant users. A wider range of pulse rates was assessed than in previous studies, and subjects with both standard and perimodiolar electrode arrays were tested. THS and MAL decreased, and DR increased, with increasing pulse rate for rates between 200 and 6500 pulses per second (pps). However, the slopes of the THS-vs-pulse-rate and MAL-vs-pulse-rate functions became shallower above 3250 pps. Subjects with standard electrode arrays had THSs similar to those of subjects with perimodiolar electrode arrays at all pulse rates. In contrast, subjects with standard arrays had significantly higher MALs and larger DRs than subjects with perimodiolar arrays, and these differences became larger with increasing pulse rate.


Trends in Hearing | 2014

Speech perception in tones and noise via cochlear implants reveals influence of spectral resolution on temporal processing.

Andrew J. Oxenham; Heather A. Kreft

Under normal conditions, human speech is remarkably robust to degradation by noise and other distortions. However, people with hearing loss, including those with cochlear implants, often experience great difficulty in understanding speech in noisy environments. Recent work with normal-hearing listeners has shown that the amplitude fluctuations inherent in noise contribute strongly to the masking of speech. In contrast, this study shows that speech perception via a cochlear implant is unaffected by the inherent temporal fluctuations of noise. This qualitative difference between acoustic and electric auditory perception does not seem to be due to differences in underlying temporal acuity but can instead be explained by the poorer spectral resolution of cochlear implants, relative to the normally functioning ear, which leads to an effective smoothing of the inherent temporal-envelope fluctuations of noise. The outcome suggests an unexpected trade-off between the detrimental effects of poorer spectral resolution and the beneficial effects of a smoother noise temporal envelope. This trade-off provides an explanation for the long-standing puzzle of why strong correlations between speech understanding and spectral resolution have remained elusive. The results also provide a potential explanation for why cochlear-implant users and hearing-impaired listeners exhibit reduced or absent masking release when large and relatively slow temporal fluctuations are introduced in noise maskers. The multitone maskers used here may provide an effective new diagnostic tool for assessing functional hearing loss and reduced spectral resolution.


Ear and Hearing | 2006

Effects of Vowel Context on the Recognition of Initial and Medial Consonants by Cochlear Implant Users

Gail S. Donaldson; Heather A. Kreft

Objective: Scores on consonant-recognition tests are widely used as an index of speech-perception ability in cochlear implant (CI) users. The consonant stimuli in these tests are typically presented in the /ɑ/ vowel context, even though consonants in conversational speech occur in many other contexts. For this reason, it would be useful to know whether vowel context has any systematic effect on consonant recognition in this population. The purpose of the present study was to compare consonant recognition across the /ɑ/, /i/, and /u/ vowel contexts for consonants presented in both initial (Cv) and medial (vCv) positions. Design: Twenty adult CI users with one of three different implanted devices underwent consonant-confusion testing. Twelve stimulus conditions that differed according to vowel context (/ɑ, i, u/), consonant position (Cv, vCv), and talker gender (male, female) were assessed in each subject. Results: Mean percent-correct consonant-recognition scores were slightly (5 to 8%) higher for the /ɑ/ and /u/ vowel contexts than for the /i/ vowel context for both initial and medial consonants. This general pattern was observed for both male and female talkers, for subjects with better and poorer average consonant-recognition performance, and for subjects using low, medium, and high stimulation rates in their speech processors. In contrast to the mean data, many individual subjects demonstrated large effects of vowel context. For 10 of 20 subjects, consonant-recognition scores varied by 15% or more across vowel contexts in one or more stimulus conditions. Similar to the mean data, these differences generally reflected better performance for the /ɑ/ and /u/ vowel contexts than for the /i/ vowel context. An analysis of consonant features showed that overall performance was best for the voicing feature, followed by the manner and place features, and that the place feature showed the strongest effect of vowel context.
Vowel-context effects were strongest for the six consonants /d, j, n, k, m/, and /l/. For three of these consonants (/j, n, k/), the back vowels /ɑ/ and /u/ produced substantially (30 to 35%) higher mean scores than the front vowel /i/. For each of the remaining three consonants, a unique pattern was observed in which a different single vowel produced substantially higher scores than the others. Several additional consonants (/s, g, w, b/, and /ð/) showed strong context effects in either the initial or medial consonant position. Overall, voiceless stop, nasal, and glide-liquid consonants showed the strongest effects of vowel context, whereas the voiceless fricative and voiceless affricate consonants were least affected. Consistent with the feature analysis, a qualitative assessment of phoneme errors for the six key consonants indicated that vowel-context effects stem primarily from changes in the number of place-of-articulation errors made in each context. Conclusions: Vowel context has small but significant effects on consonant-recognition scores for the “average” CI listener, with the back vowels /ɑ/ and /u/ producing better performance than the front vowel /i/. In contrast to the average results, however, the effects of vowel context are sizable in some individual subjects. This suggests that it may be beneficial to assess consonant recognition using two vowels, such as /ɑ/ and /i/, which produce better and poorer performance, respectively. The present results underscore previous findings that poor transmission of spectral speech cues limits consonant-recognition performance in CI users. Spectral cue transmission may be hindered not only by poor spectral resolution in these listeners but also by the brief duration and dynamic nature of consonant place-of-articulation cues.


Journal of the Acoustical Society of America | 2013

Temporal coherence versus harmonicity in auditory stream formation

Christophe Micheyl; Heather A. Kreft; Shihab Shamma; Andrew J. Oxenham

This study sought to investigate the influence of temporal incoherence and inharmonicity on concurrent stream segregation, using performance-based measures. Subjects discriminated frequency shifts in a temporally regular sequence of target pure tones, embedded in a constant or randomly varying multi-tone background. Depending on the condition tested, the target tones were either temporally coherent or incoherent with, and either harmonically or inharmonically related to, the background tones. The results provide further evidence that temporal incoherence facilitates stream segregation and they suggest that deviations from harmonicity can cause similar facilitation effects, even when the targets and the maskers are temporally coherent.


Journal of the Acoustical Society of America | 2004

Effects of pulse rate and electrode array design on intensity discrimination in cochlear implant users

Heather A. Kreft; Gail S. Donaldson; David A. Nelson

The effects of pulse rate on intensity discrimination were evaluated in 14 subjects with Clarion C-I cochlear implants. Subjects had either a standard electrode array [Clarion spiral array (SPRL group)] or a perimodiolar electrode array [Clarion HiFocus array with electrode positioning system (HF+EPS group)]. Weber fractions for intensity discrimination [Wf(dB) = 10 log(ΔI/I)] were evaluated at five levels over the dynamic range at each of three pulse rates (200, 1625, and 6500 pps) using monopolar stimulation. Weber fractions were smaller for 200-pps stimuli than for 1625- or 6500-pps stimuli in both groups. Weber fractions were significantly smaller for SPRL subjects [mean Wf(dB) = -9.1 dB] than for HF+EPS subjects [mean Wf(dB) = -6.7 dB]. Intensity difference limens (DLs) expressed as a percentage of dynamic range (DR) [ΔI(%DR) = ΔI/DR(dB) × 100] did not vary systematically with pulse rate in either group. Larger intensity DLs combined with smaller dynamic ranges led to fewer intensity steps over the dynamic range for HF+EPS subjects (average 9 steps) compared to SPRL subjects (average 23 steps). The observed effects of pulse rate and electrode array design may stem primarily from an inverse relationship between absolute current amplitude and the size of intensity DLs. The combination of smaller dynamic ranges and larger Weber fractions in HF+EPS subjects could be the result of increased variability of neural outputs in these subjects.
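The two measures defined in this abstract can be written out directly. This is an illustrative sketch of the stated formulas, not the paper's analysis code, and the constant-DL assumption in intensity_steps is mine:

```python
import math

def weber_fraction_db(delta_i: float, i: float) -> float:
    """Weber fraction in dB: Wf(dB) = 10 * log10(delta_I / I)."""
    return 10.0 * math.log10(delta_i / i)

def dl_percent_dr(delta_i_db: float, dynamic_range_db: float) -> float:
    """Intensity DL as a percentage of dynamic range:
    delta_I(%DR) = (delta_I / DR) * 100, both in dB."""
    return delta_i_db / dynamic_range_db * 100.0

def intensity_steps(dl_db: float, dynamic_range_db: float) -> float:
    """Discriminable intensity steps across the dynamic range, assuming
    a constant DL (in dB) at every level -- a simplification."""
    return dynamic_range_db / dl_db

# A Weber fraction of -10 dB means the just-detectable increment is
# one tenth of the baseline intensity.
print(weber_fraction_db(1.0, 10.0))  # -10.0
```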


Journal of the Acoustical Society of America | 2011

Spatial tuning curves from apical, middle, and basal electrodes in cochlear implant users

David A. Nelson; Heather A. Kreft; Elizabeth S. Anderson; Gail S. Donaldson

Forward-masked psychophysical spatial tuning curves (fmSTCs) were measured in 15 cochlear-implant subjects, 10 using monopolar stimulation and 5 using bipolar stimulation. In each subject, fmSTCs were measured at several probe levels on an apical, middle, and basal electrode using a fixed-level probe stimulus and variable-level maskers. Tuning curve slopes and bandwidths did not change significantly with probe level for electrodes located in the apical, middle, or basal region, although a few subjects exhibited dramatic changes in tuning at the extremes of the probe level range. Average tuning curve slopes and bandwidths did not vary significantly across electrode regions. Spatial tuning curves were symmetrical and similar in width across the three electrode regions. However, several subjects demonstrated large changes in slope and/or bandwidth across the three electrode regions, indicating poorer tuning in localized regions of the array. Cochlear-implant users exhibited bandwidths that were approximately five times wider than those of normal-hearing acoustic listeners but were in the same range as those of acoustic listeners with moderate cochlear hearing loss. No significant correlations were found between spatial tuning parameters and speech recognition, although a weak relation was seen between middle electrode tuning and transmitted information for vowel second formant frequency.


Journal of the Acoustical Society of America | 2012

Vowel enhancement effects in cochlear-implant users

Ningyuan Wang; Heather A. Kreft; Andrew J. Oxenham

Auditory enhancement of certain frequencies can occur through prior stimulation of surrounding frequency regions. The underlying neural mechanisms are unknown, but may involve stimulus-driven changes in cochlear gain via the medial olivocochlear complex (MOC) efferents. Cochlear implants (CIs) bypass the cochlea and stimulate the auditory nerve directly. If the MOC plays a critical role in enhancement, then CI users should not exhibit this effect. Results using vowel stimuli, with and without preceding sounds designed to enhance formants, provided evidence of auditory enhancement in both normal-hearing listeners and CI users, suggesting that vowel enhancement is not mediated solely by cochlear effects.

Collaboration


Dive into Heather A. Kreft's collaborations.

Top Co-Authors

Christopher A. Shera

University of Southern California
