Publication


Featured research published by John J. Galvin.


JARO: Journal of the Association for Research in Otolaryngology | 2004

The Role of Spectral and Temporal Cues in Voice Gender Discrimination by Normal-Hearing Listeners and Cochlear Implant Users

Qian-Jie Fu; Sherol Chinchilla; John J. Galvin

The present study investigated the relative importance of temporal and spectral cues in voice gender discrimination and vowel recognition by normal-hearing subjects listening to an acoustic simulation of cochlear implant speech processing and by cochlear implant users. In the simulation, the number of speech processing channels ranged from 4 to 32, thereby varying the spectral resolution; the cutoff frequencies of the channels’ envelope filters ranged from 20 to 320 Hz, thereby manipulating the available temporal cues. For normal-hearing subjects, results showed that both voice gender discrimination and vowel recognition scores improved as the number of spectral channels was increased. When only 4 spectral channels were available, voice gender discrimination significantly improved as the envelope filter cutoff frequency was increased from 20 to 320 Hz. For all spectral conditions, increasing the amount of temporal information had no significant effect on vowel recognition. Both voice gender discrimination and vowel recognition scores were highly variable among implant users. The performance of cochlear implant listeners was similar to that of normal-hearing subjects listening to comparable speech processing (4–8 spectral channels). The results suggest that both spectral and temporal cues contribute to voice gender discrimination and that temporal cues are especially important for cochlear implant users to identify the voice gender when there is reduced spectral resolution.
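
The "acoustic simulation of cochlear implant speech processing" referred to here is a noise vocoder: the signal is split into a small number of spectral channels, each channel's temporal envelope is extracted with a lowpass filter, and the envelopes modulate band-limited noise. A minimal sketch is below, assuming numpy/scipy; the log-spaced 200-7000 Hz analysis range and 4th-order Butterworth filters are illustrative assumptions, while n_channels and env_cutoff correspond to the two parameters varied in the study (4-32 channels, 20-320 Hz envelope cutoffs).

    # Minimal noise-vocoder sketch of an acoustic CI simulation (assumed band
    # edges and filter orders; not the study's exact processing parameters).
    import numpy as np
    from scipy.signal import butter, sosfiltfilt

    def vocode(signal, fs, n_channels=4, env_cutoff=160.0, f_lo=200.0, f_hi=7000.0):
        """Replace each band's fine structure with envelope-modulated noise."""
        edges = np.geomspace(f_lo, f_hi, n_channels + 1)       # log-spaced band edges
        env_sos = butter(4, env_cutoff, 'low', fs=fs, output='sos')
        out = np.zeros(len(signal))
        for lo, hi in zip(edges[:-1], edges[1:]):
            band_sos = butter(4, [lo, hi], 'bandpass', fs=fs, output='sos')
            band = sosfiltfilt(band_sos, signal)                # analysis band
            env = np.clip(sosfiltfilt(env_sos, np.abs(band)), 0.0, None)  # smoothed envelope
            carrier = np.random.randn(len(signal))              # noise carrier for this band
            out += sosfiltfilt(band_sos, env * carrier)         # re-band the modulated noise
        return out

Raising n_channels restores spectral detail, while raising env_cutoff passes faster envelope fluctuations (including periodicity cues relevant to voice pitch); that trade-off is what the experiment probes.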


Trends in Amplification | 2007

Perceptual Learning and Auditory Training in Cochlear Implant Recipients

Qian-Jie Fu; John J. Galvin

Learning electrically stimulated speech patterns can be a new and difficult experience for cochlear implant (CI) recipients. Recent studies have shown that most implant recipients at least partially adapt to these new patterns via passive, daily-listening experiences. Gradually introducing a speech processor parameter (e.g., the degree of spectral mismatch) may provide for more complete and less stressful adaptation. Although the implant device restores hearing sensation and the continued use of the implant provides some degree of adaptation, active auditory rehabilitation may be necessary to maximize the benefit of implantation for CI recipients. Currently, there are scant resources for auditory rehabilitation for adult, postlingually deafened CI recipients. We recently developed a computer-assisted speech-training program to provide the means to conduct auditory rehabilitation at home. The training software targets important acoustic contrasts among speech stimuli, provides auditory and visual feedback, and incorporates progressive training techniques, thereby maintaining recipients' interest during the auditory training exercises. Our recent studies demonstrate the effectiveness of targeted auditory training in improving CI recipients' speech and music perception. Provided with an inexpensive and effective auditory training program, CI recipients may find the motivation and momentum to get the most from the implant device.
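
As a rough illustration of the "progressive training with auditory and visual feedback" idea, the sketch below shows one hypothetical training block that advances to harder phoneme contrasts only after a criterion score is reached. All names here (CONTRAST_SETS, play, get_response) and the word lists are placeholders, not the actual software's content or API.

    import random

    CONTRAST_SETS = {                       # easier -> harder vowel contrasts (illustrative only)
        1: [("heed", "who'd")],
        2: [("heed", "hid"), ("hod", "hud")],
        3: [("hid", "head"), ("hood", "hud"), ("had", "hod")],
    }

    def run_block(level, n_trials, play, get_response):
        """One training block; returns the difficulty level for the next block."""
        correct = 0
        for _ in range(n_trials):
            pair = random.choice(CONTRAST_SETS[level])
            target = random.choice(pair)
            play(target)                            # auditory presentation
            answer = get_response(options=pair)     # listener chooses one of the two words
            if answer == target:
                correct += 1
            play(target)                            # immediate auditory feedback
            print(f"target was '{target}': {'correct' if answer == target else 'try again'}")  # visual feedback
        # progressive training: move up only after reaching criterion at this level
        if correct / n_trials >= 0.8 and level < max(CONTRAST_SETS):
            return level + 1
        return level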


JARO: Journal of the Association for Research in Otolaryngology | 2005

Auditory Training with Spectrally Shifted Speech: Implications for Cochlear Implant Patient Auditory Rehabilitation

Qian-Jie Fu; Geraldine Nogaki; John J. Galvin

After implantation, postlingually deafened cochlear implant (CI) patients must adapt to both spectrally reduced and spectrally shifted speech, due to the limited number of electrodes and the limited length of the electrode array. This adaptation generally occurs during the first three to six months of implant use and may continue for many years. To see whether moderate speech training can accelerate this learning process, 16 naïve, normal-hearing listeners were trained with spectrally shifted speech via an eight-channel acoustic simulation of CI speech processing. Baseline vowel and consonant recognition was measured for both spectrally shifted and unshifted speech. Short daily training sessions were conducted over five consecutive days, using four different protocols. For the test-only protocol, no improvement was seen over the five-day period. Similarly, sentence training provided little benefit for vowel recognition. However, after five days of targeted phoneme training, recognition of spectrally shifted vowels improved significantly in most subjects. This improvement did not generalize to the spectrally unshifted vowel and consonant tokens, suggesting that subjects adapted to the specific spectral shift, rather than to the eight-channel processing in general. Interestingly, significant improvement was also observed for the recognition of spectrally shifted consonants. The largest improvement was observed with targeted vowel contrast training, which did not include any explicit consonant training. These results suggest that targeted phoneme training can accelerate adaptation to spectrally shifted speech. Given these results with normal-hearing listeners, auditory rehabilitation tools that provide targeted phoneme training may be effective in improving the speech recognition performance of adult CI users.
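
The "spectrally shifted speech" in this simulation reflects a frequency-to-place mismatch: the analysis bands cover the normal speech range, but the noise carriers are synthesized in bands shifted toward the base of the cochlea, as with a shallow electrode insertion. Below is a sketch of how such a shift can be computed with the standard Greenwood frequency-place map; the band edges and the 3 mm shift are my assumptions, not the study's exact values.

    import numpy as np

    def shifted_vocoder_bands(n_channels=8, f_lo=200.0, f_hi=7000.0, shift_mm=3.0):
        """Return (analysis, carrier) band edges, with carrier bands shifted toward the base."""
        analysis = np.geomspace(f_lo, f_hi, n_channels + 1)
        # Greenwood map, human constants: F = 165.4 * (10**(0.06*x) - 0.88), x in mm from apex
        place_mm = np.log10(analysis / 165.4 + 0.88) / 0.06
        carrier = 165.4 * (10 ** (0.06 * (place_mm + shift_mm)) - 0.88)
        return analysis, carrier

The carrier bands would then replace the analysis bands on the synthesis side of a vocoder like the one sketched earlier, so that envelopes extracted from one frequency region are presented at a higher (more basal) region.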


Hearing Research | 2008

Maximizing Cochlear Implant Patients' Performance with Advanced Speech Training Procedures

Qian-Jie Fu; John J. Galvin

Advances in implant technology and speech processing have provided great benefit to many cochlear implant patients. However, some patients receive little benefit from the latest technology, even after many years' experience with the device. Moreover, even the best cochlear implant performers have great difficulty understanding speech in background noise, and music perception and appreciation remain major challenges. Recent studies have shown that targeted auditory training can significantly improve cochlear implant patients' speech recognition performance. Such benefits are not only observed in poorly performing patients, but also in good performers under difficult listening conditions (e.g., speech in noise, telephone speech, music, etc.). Targeted auditory training has also been shown to enhance performance gains provided by new implant devices and/or speech processing strategies. These studies suggest that cochlear implantation alone may not fully meet the needs of many patients, and that additional auditory rehabilitation may be needed to maximize the benefits of the implant device. Continuing research will aid in the development of efficient and effective training protocols and materials, thereby minimizing the costs (in terms of time, effort and resources) associated with auditory rehabilitation while maximizing the benefits of cochlear implantation for all recipients.


Trends in Amplification | 2007

Vocal emotion recognition by normal-hearing listeners and cochlear implant users.

Xin Luo; Qian-Jie Fu; John J. Galvin

The present study investigated the ability of normal-hearing listeners and cochlear implant users to recognize vocal emotions. Sentences were produced by 1 male and 1 female talker according to 5 target emotions: angry, anxious, happy, sad, and neutral. Overall amplitude differences between the stimuli were either preserved or normalized. In experiment 1, vocal emotion recognition was measured in normal-hearing and cochlear implant listeners; cochlear implant subjects were tested using their clinically assigned processors. When overall amplitude cues were preserved, normal-hearing listeners achieved near-perfect performance, whereas cochlear implant listeners recognized less than half of the target emotions. Removing the overall amplitude cues significantly worsened mean normal-hearing and cochlear implant performance. In experiment 2, vocal emotion recognition was measured in cochlear implant listeners as a function of the number of channels (from 1 to 8) and envelope filter cutoff frequency (50 vs. 400 Hz) in experimental speech processors. In experiment 3, vocal emotion recognition was measured in normal-hearing listeners as a function of the number of channels (from 1 to 16) and envelope filter cutoff frequency (50 vs. 500 Hz) in acoustic cochlear implant simulations. Results from experiments 2 and 3 showed that both cochlear implant and normal-hearing performance significantly improved as the number of channels or the envelope filter cutoff frequency was increased. The results suggest that spectral, temporal, and overall amplitude cues each contribute to vocal emotion recognition. The poorer cochlear implant performance is most likely attributable to the lack of salient pitch cues and the limited functional spectral resolution.
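
The "normalized" amplitude condition removes overall level differences between emotion tokens. A minimal sketch of one way to do this, equating root-mean-square level across stimuli, is below; the study's exact normalization procedure is not stated here, so treat this as an assumption.

    import numpy as np

    def normalize_rms(tokens, target_rms=0.05):
        """Scale every stimulus waveform to the same RMS level, removing overall amplitude cues."""
        return [x * (target_rms / np.sqrt(np.mean(x ** 2))) for x in tokens]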


JARO: Journal of the Association for Research in Otolaryngology | 2005

Effects of Stimulation Rate, Mode and Level on Modulation Detection by Cochlear Implant Users

John J. Galvin; Qian-Jie Fu

In cochlear implant (CI) patients, temporal processing is often poorest at low listening levels, making it difficult to perceive the low-amplitude temporal cues that are important for consonant recognition and/or speech perception in noise. It remains unclear how speech processor parameters such as stimulation rate and stimulation mode may affect temporal processing, especially at low listening levels. The present study investigated the effects of these parameters on modulation detection by six CI users. Modulation detection thresholds (MDTs) were measured as functions of stimulation rate, mode, and level. Results showed that for all stimulation rate and mode conditions, modulation sensitivity was poorest at quiet listening levels, consistent with results from previous studies. MDTs were better with the lower stimulation rate, especially for quiet-to-medium listening levels. Stimulation mode had no significant effect on MDTs. These results suggest that, although high stimulation rates may better encode temporal information and widen the electrode dynamic range, CI patients may not be able to access these enhanced temporal cues, especially at the lower portions of the dynamic range. Lower stimulation rates may provide better recognition of weak acoustic envelope information.
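
Modulation detection thresholds are typically measured with sinusoidally amplitude-modulated pulse trains: the per-pulse amplitude follows A * (1 + m * sin(2*pi*fm*t)), and the MDT is the smallest modulation depth m that can be distinguished from an unmodulated train. The sketch below only generates the per-pulse amplitudes; the rate, modulation frequency, and level-domain details are illustrative assumptions rather than the study's parameters.

    import numpy as np

    def sam_pulse_amplitudes(carrier_rate_pps=250, duration_s=0.3, fm_hz=10.0, m=0.1, level=1.0):
        """Per-pulse amplitudes of a sinusoidally amplitude-modulated pulse train (m=0 is unmodulated)."""
        t = np.arange(0.0, duration_s, 1.0 / carrier_rate_pps)   # one time point per pulse
        return level * (1.0 + m * np.sin(2 * np.pi * fm_hz * t))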


JARO: Journal of the Association for Research in Otolaryngology | 2002

Holes in hearing.

Robert V. Shannon; John J. Galvin; Deniz Başkent

Previous experiments have demonstrated that the correct tonotopic representation of spectral information is important for speech recognition. However, in prosthetic devices, such as hearing aids and cochlear implants, there may be a frequency/place mismatch due in part to the signal processing of the device and in part to the pathology that caused the hearing loss. Local regions of damaged neurons may create a hole in the tonotopic representation of spectral information, further distorting the frequency-to-place mapping. The present experiment was performed to quantitatively assess the impact of spectral holes on speech recognition. Speech was processed by a 20-band processor: SPEAK for cochlear implant (CI) listeners, and a 20-band noise processor for normal-hearing (NH) listeners. Holes in the tonotopic representation (from 1.5 to 6 mm in extent) were created by eliminating electrodes or noise carrier bands in the basal, middle, or apical regions of the cochlea. Vowel, consonant, and sentence recognition were measured as a function of the location and size of the hole. In addition, the spectral information that would normally be represented in the hole region was either: (1) dropped, (2) assigned to the apical side of the hole, (3) assigned to the basal side of the hole, or (4) split evenly to both sides of the hole. In general, speech features that are highly dependent on spectral cues (consonant place, vowel identity) were more affected by the presence of tonotopic holes than temporal features (consonant voicing and manner). Holes in the apical region were more damaging than holes in the basal or middle regions. A similar pattern of performance was observed for NH and CI listeners, suggesting that the loss of spectral information was the primary cause of the effects. The Speech Intelligibility Index was able to account for both NH and CI listeners' results. No significant differences were observed among the four conditions that redistributed the spectral information around the hole, suggesting that rerouting spectral information around a hole was no better than simply dropping it.
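
The four ways of handling the information that falls inside a hole can be expressed as a channel-to-electrode remapping. The sketch below is only a schematic of that bookkeeping, with 0 as the most apical index; the indexing convention and function shape are illustrative, not the processors' actual implementation.

    def remap_channels(n_channels, hole, mode="drop"):
        """Map analysis channels to electrodes around a dead region (hole = iterable of dead indices)."""
        apical_edge, basal_edge = min(hole) - 1, max(hole) + 1
        mapping = {}
        for ch in range(n_channels):
            if ch not in hole:
                mapping[ch] = ch                      # unaffected channel
            elif mode == "drop":
                mapping[ch] = None                    # information discarded
            elif mode == "apical":
                mapping[ch] = apical_edge             # everything routed to the apical side
            elif mode == "basal":
                mapping[ch] = basal_edge              # everything routed to the basal side
            elif mode == "split":
                mid = (min(hole) + max(hole)) / 2
                mapping[ch] = apical_edge if ch <= mid else basal_edge
        return mapping

    # Example: a 20-channel processor with a mid-array hole split to both sides
    # remap_channels(20, range(8, 12), mode="split")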


Annals of the New York Academy of Sciences | 2009

Melodic Contour Identification and Music Perception by Cochlear Implant Users

John J. Galvin; Qian-Jie Fu; Robert V. Shannon

Research and outcomes with cochlear implants (CIs) have revealed a dichotomy in the cues necessary for speech and music recognition. CI devices typically transmit 16–22 spectral channels, each modulated slowly in time. This coarse representation provides enough information to support speech understanding in quiet and rhythmic perception in music, but not enough to support speech understanding in noise or melody recognition. Melody recognition requires some capacity for complex pitch perception, which in turn depends strongly on access to spectral fine structure cues. Thus, temporal envelope cues are adequate for speech perception under optimal listening conditions, while spectral fine structure cues are needed for music perception. In this paper, we present recent experiments that directly measure CI users’ melodic pitch perception using a melodic contour identification (MCI) task. While normal‐hearing (NH) listeners’ performance was consistently high across experiments, MCI performance was highly variable across CI users. CI users’ MCI performance was significantly affected by instrument timbre, as well as by the presence of a competing instrument. In general, CI users had great difficulty extracting melodic pitch from complex stimuli. However, musically experienced CI users often performed as well as NH listeners, and MCI training in less‐experienced subjects greatly improved performance. With fixed constraints on spectral resolution, such as occurs with hearing loss or an auditory prosthesis, training and experience can provide considerable improvements in music perception and appreciation.
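
In an MCI-style task the listener identifies which of several five-note pitch patterns was played. The sketch below generates candidate contours on an equal-tempered scale; the nine shapes follow the general MCI paradigm, but the specific step patterns, base frequency, and semitone spacing here are assumptions for illustration.

    CONTOURS = {                                 # step pattern of each five-note contour
        "rising":         [0, 1, 2, 3, 4],
        "falling":        [4, 3, 2, 1, 0],
        "flat":           [0, 0, 0, 0, 0],
        "flat-rising":    [0, 0, 0, 1, 2],
        "flat-falling":   [2, 2, 2, 1, 0],
        "rising-flat":    [0, 1, 2, 2, 2],
        "falling-flat":   [2, 1, 0, 0, 0],
        "rising-falling": [0, 1, 2, 1, 0],
        "falling-rising": [2, 1, 0, 1, 2],
    }

    def contour_frequencies(name, base_hz=220.0, semitones_per_step=2):
        """Fundamental frequency of each note in the named contour (equal temperament)."""
        return [base_hz * 2 ** (semitones_per_step * s / 12.0) for s in CONTOURS[name]]

Shrinking semitones_per_step compresses the pitch range of each contour, making the shapes harder to distinguish without access to fine spectral (pitch) cues.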


Acoustics Research Letters Online (ARLO) | 2005

Moderate auditory training can improve speech performance of adult cochlear implant patients

Qian-Jie Fu; John J. Galvin; Xiaosong Wang; Geraldine Nogaki

Learning electrically stimulated speech patterns can be a new and difficult experience for many cochlear implant users. In the present study, ten cochlear implant patients participated in an auditory training program using speech stimuli. Training was conducted at home using a personal computer for 1 hour per day, 5 days per week, for a period of 1 month or longer. Results showed a significant improvement in all patients’ speech perception performance. These results suggest that moderate auditory training using a computer-based auditory rehabilitation tool can be an effective approach for improving the speech perception performance of cochlear implant patients.


JARO: Journal of the Association for Research in Otolaryngology | 2006

Effects of stimulation mode, level and location on forward-masked excitation patterns in cochlear implant patients.

Monita Chatterjee; John J. Galvin; Qian-Jie Fu; Robert V. Shannon

In multi-channel cochlear implants, electrical current is delivered to appropriate electrodes in the cochlea to approximate the spatial representation of speech. Theoretically, electrode configurations that restrict the current spread within the cochlea (e.g., bi- or tri-polar stimulation) may provide better spatial selectivity, and in turn, better speech recognition than configurations that produce a broader current spread (e.g., monopolar stimulation). However, the effects of electrode configuration on supra-threshold excitation patterns have not been systematically studied in cochlear implant patients. In the present study, forward-masked excitation patterns were measured in cochlear implant patients as functions of stimulation mode, level and location within the cochlea. All stimuli were 500 pulses-per-second biphasic pulse trains (200 μs/phase, 20 μs inter-phase gap). Masker stimuli were 200 ms in duration; the bi-polar configuration was varied from narrow (BP + 1) to wide (BP + 17), depending on the test condition. Probe stimuli were 20 ms in duration and the masker-probe delay was 5 ms; the probe configuration was fixed at BP + 1. The results indicated that as the distance between the active and return electrodes in a bi-polar pair was increased, the excitation pattern broadened within the cochlea. When the distance between active and return electrodes was sufficiently wide, two peaks were often observed in the excitation pattern, comparable to non-overlapping electric fields produced by widely separated dipoles. Analyses of the normalized data showed little effect of stimulation level on the shape of the excitation pattern.
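
The trial structure described above (a long masker, a short gap, then a brief probe) can be laid out as simple timing bookkeeping: both masker and probe are biphasic pulse trains at 500 pulses per second, with the probe starting 5 ms after masker offset. The sketch below only computes pulse onset times; current levels, the biphasic pulse shape (200 μs/phase, 20 μs gap), and the threshold-tracking procedure are omitted.

    def pulse_onsets(duration_s, rate_pps=500, start_s=0.0):
        """Onset time (in seconds) of each pulse in a constant-rate pulse train."""
        n_pulses = int(round(duration_s * rate_pps))
        return [start_s + i / rate_pps for i in range(n_pulses)]

    masker_onsets = pulse_onsets(0.200)                            # 200 ms masker
    probe_onsets = pulse_onsets(0.020, start_s=0.200 + 0.005)      # 20 ms probe after a 5 ms gap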

Collaboration


Dive into John J. Galvin's collaborations.

Top Co-Authors

Qian-Jie Fu (University of California)
Deniz Başkent (University Medical Center Groningen)
Christina Fuller (University Medical Center Groningen)
Rolien Free (University Medical Center Groningen)