Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Ying-Yee Kong is active.

Publication


Featured research published by Ying-Yee Kong.


Ear and Hearing | 2004

Music perception with temporal cues in acoustic and electric hearing.

Ying-Yee Kong; Rachel Cruz; J. Ackland Jones; Fan-Gang Zeng

Objective: The first specific aim of the present study is to compare the ability of normal-hearing and cochlear implant listeners to use temporal cues in three music perception tasks: tempo discrimination, rhythmic pattern identification, and melody identification. The second aim is to identify the relative contribution of temporal and spectral cues to melody recognition in acoustic and electric hearing.

Design: Both normal-hearing and cochlear implant listeners participated in the experiments. Tempo discrimination was measured in a two-interval forced-choice procedure in which subjects were asked to choose the faster tempo at four standard tempo conditions (60, 80, 100, and 120 beats per minute). For rhythmic pattern identification, seven different rhythmic patterns were created and subjects were asked to read and choose the musical notation displayed on the screen that corresponded to the rhythmic pattern presented. Melody identification was evaluated with two sets of 12 familiar melodies. One set contained both rhythm and melody information (rhythm condition), whereas the other set contained only melody information (no-rhythm condition). Melody stimuli were also processed to extract the slowly varying temporal envelope from 1, 2, 4, 8, 16, 32, and 64 frequency bands to create cochlear implant simulations. Subjects listened to a melody and had to respond by choosing one of the 12 names corresponding to the melodies displayed on a computer screen.

Results: In tempo discrimination, the cochlear implant listeners performed similarly to the normal-hearing listeners, with rate discrimination difference limens obtained at 4–6 beats per minute. In rhythmic pattern identification, the cochlear implant listeners performed 5–25 percentage points poorer than the normal-hearing listeners. The normal-hearing listeners achieved perfect scores in melody identification with and without the rhythmic cues. However, the cochlear implant listeners performed significantly poorer than the normal-hearing listeners in both rhythm and no-rhythm conditions. The simulation results from normal-hearing listeners showed a relatively high level of performance for all numbers of frequency bands in the rhythm condition but required as many as 32 bands in the no-rhythm condition.

Conclusions: Cochlear implant listeners performed normally in tempo discrimination but significantly poorer than normal-hearing listeners in rhythmic pattern identification and melody recognition. While both temporal (rhythmic) and spectral (pitch) cues contribute to melody recognition, cochlear implant listeners mostly relied on the rhythmic cues for melody recognition. Without the rhythmic cues, high spectral resolution with as many as 32 bands was needed for melody recognition by normal-hearing listeners. This result indicates that the present cochlear implants provide sufficient spectral cues to support speech recognition in quiet, but they are not adequate to support music perception. Increasing the number of functional channels and improving the encoding of fine structure information are necessary to improve music perception for cochlear implant listeners.
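The cochlear implant simulations described above (extracting the slowly varying temporal envelope in 1 to 64 frequency bands) follow the general logic of a noise-excited vocoder. The sketch below illustrates that logic under assumed parameters; the logarithmic band spacing, filter orders, and 160 Hz envelope cutoff are illustrative defaults, not the study's exact settings.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def noise_vocoder(x, fs, n_bands, f_lo=80.0, f_hi=6000.0, env_cutoff=160.0):
    """Noise-excited vocoder: keep per-band temporal envelopes, discard fine structure.

    Band edges are spaced logarithmically between f_lo and f_hi; all filter
    settings here are illustrative defaults, not the study's exact parameters.
    """
    edges = np.logspace(np.log10(f_lo), np.log10(f_hi), n_bands + 1)
    sos_env = butter(2, env_cutoff, btype="low", fs=fs, output="sos")
    out = np.zeros_like(x, dtype=float)
    for lo, hi in zip(edges[:-1], edges[1:]):
        sos_band = butter(4, [lo, hi], btype="band", fs=fs, output="sos")
        band = sosfiltfilt(sos_band, x)                              # analysis band
        env = np.maximum(sosfiltfilt(sos_env, np.abs(band)), 0.0)    # slowly varying envelope
        carrier = sosfiltfilt(sos_band, np.random.randn(len(x)))     # band-limited noise carrier
        out += env * carrier                                         # re-impose envelope on the noise
    return out * (np.max(np.abs(x)) / (np.max(np.abs(out)) + 1e-12))
```

For example, noise_vocoder(melody, fs, n_bands=4) would approximate a 4-band condition of the kind used for the melody stimuli.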


Journal of the Acoustical Society of America | 2005

Speech and melody recognition in binaurally combined acoustic and electric hearing

Ying-Yee Kong; Ginger S. Stickney; Fan-Gang Zeng

Speech recognition in noise and music perception are especially challenging for current cochlear implant users. The present study utilizes the residual acoustic hearing in the nonimplanted ear of five cochlear implant users to elucidate the role of temporal fine structure at low frequencies in auditory perception and to test the hypothesis that combined acoustic and electric hearing produces better performance than either mode alone. The first experiment measured speech recognition in the presence of competing noise. It was found that, although the residual low-frequency (<1000 Hz) acoustic hearing produced essentially no recognition of speech in noise, it significantly enhanced performance when combined with the electric hearing. The second experiment measured melody recognition in the same group of subjects and found that, contrary to the speech recognition result, the low-frequency acoustic hearing produced significantly better performance than the electric hearing. It is hypothesized that listeners with combined acoustic and electric hearing might use the correlation between the salient pitch in low-frequency acoustic hearing and the weak pitch in the envelope to enhance segregation between signal and noise. The present study suggests the importance and urgency of accurately encoding the fine-structure cue in cochlear implants.
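For illustration, this combined listening mode can be approximated in a simulation by summing a low-pass "acoustic" portion with a separately degraded "electric" portion. The sketch below is an assumed construction of that kind (the study itself tested implant users with genuine residual hearing, not a simulation); the function name is hypothetical, and the 1000 Hz cutoff simply echoes the residual-hearing range mentioned above.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def combine_acoustic_and_electric(x, electric_sim, fs, acoustic_cutoff=1000.0):
    """Sum a low-pass 'residual acoustic' portion with a degraded 'electric' portion.

    electric_sim is any cochlear implant simulation of x (e.g., a noise vocoder,
    as sketched earlier). This mirrors simulation studies of combined hearing,
    not the testing of actual implant users described in the abstract.
    """
    b, a = butter(4, acoustic_cutoff / (fs / 2), btype="low")
    acoustic = filtfilt(b, a, x)                 # low-frequency residual acoustic hearing
    n = min(len(acoustic), len(electric_sim))
    return acoustic[:n] + electric_sim[:n]
```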


Journal of the Acoustical Society of America | 2004

On the dichotomy in auditory perception between temporal envelope and fine structure cues (L)

Fan-Gang Zeng; Kaibao Nie; Sheng Liu; Ginger S. Stickney; Elsa Del Rio; Ying-Yee Kong; Hongbin Chen

It is important to know what cues the sensory system extracts from natural stimuli and how the brain uses them to form perception. To explore this issue, Smith, Delgutte, and Oxenham [Nature (London) 416, 87–90 (2002)] mixed one sound’s temporal envelope with another sound’s fine temporal structure to produce auditory chimaeras and found that “the perceptual importance of the envelope increases with the number of frequency bands, while that of the fine structure diminishes.” This study addressed two technical issues related to natural cochlear filtering and artificial filter ringing in the chimaerizing algorithm. In addition, this study found that the dichotomy in auditory perception revealed by auditory chimaeras is an epiphenomenon of the classic dichotomy between low- and high-frequency processing. Finally, this study found that the temporal envelope determines sound location as long as the interaural level difference cue is present. The present result reinforces the original hypothesis that the tempor...
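The chimaerizing algorithm referenced here pairs one sound's envelope with another sound's fine structure within each analysis band. A minimal single-band sketch using the Hilbert transform is shown below; a full chimaerizer would first filter both sounds into matched frequency bands and sum the per-band outputs. The function name is illustrative, not taken from the study.

```python
import numpy as np
from scipy.signal import hilbert

def single_band_chimaera(x_env_source, x_fts_source):
    """Combine the Hilbert envelope of one signal with the fine structure of another.

    Both inputs are assumed to be band-limited to the same analysis band; a
    multi-band chimaerizer repeats this per band and sums the results.
    """
    n = min(len(x_env_source), len(x_fts_source))
    env = np.abs(hilbert(x_env_source[:n]))     # temporal envelope of sound A
    analytic = hilbert(x_fts_source[:n])
    fine = np.cos(np.angle(analytic))           # unit-amplitude fine structure of sound B
    return env * fine
```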


Clinical Neurophysiology | 2005

Auditory temporal processes in normal-hearing individuals and in patients with auditory neuropathy

Henry J. Michalewski; Arnold Starr; Tin Toan Nguyen; Ying-Yee Kong; Fan-Gang Zeng

OBJECTIVE: To study objectively auditory temporal processing in a group of normal-hearing subjects and in a group of hearing-impaired individuals with auditory neuropathy (AN) using electrophysiological and psychoacoustic methods.

METHODS: Scalp-recorded evoked potentials were measured to brief silent intervals (gaps) varying between 2 and 50 ms embedded in continuous noise. Latencies and amplitudes of N100 and P200 were measured and analyzed in two conditions: (1) active, when using a button in response to gaps; (2) passive, listening but not responding.

RESULTS: In normal subjects, evoked potentials (N100/P200 components) were recorded in response to gaps as short as 5 ms in both active and passive conditions. Gap evoked potentials in AN subjects appeared only with prolonged gap durations (10-50 ms). There was a close association between gap detection thresholds measured psychoacoustically and electrophysiologically in both normal and AN subjects.

CONCLUSIONS: Auditory cortical potentials can provide objective measures of auditory temporal processes.

SIGNIFICANCE: The combination of electrophysiological and psychoacoustic methods converged to provide useful objective measures for studying auditory cortical temporal processing in normal and hearing-impaired individuals. The procedure used may also provide objective measures of temporal processing for evaluating special populations, such as children who may not be able to provide subjective responses.
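The gap stimuli (silent intervals of 2 to 50 ms embedded in continuous noise) are straightforward to generate. The sketch below is a hedged illustration; the overall duration, gap onset time, and ramp length are assumptions, not the paper's exact stimulus parameters.

```python
import numpy as np

def noise_with_gap(fs, dur_s=1.0, gap_ms=5.0, gap_onset_s=0.5, ramp_ms=1.0):
    """White noise with a silent gap; short cosine ramps reduce spectral splatter.

    Only the 2-50 ms gap range comes from the abstract; the other durations
    are illustrative defaults.
    """
    noise = np.random.randn(int(dur_s * fs))
    g0 = int(gap_onset_s * fs)
    g1 = g0 + int(gap_ms / 1000.0 * fs)
    nr = int(ramp_ms / 1000.0 * fs)
    ramp = 0.5 * (1 + np.cos(np.linspace(0, np.pi, nr)))   # 1 -> 0
    noise[g0 - nr:g0] *= ramp          # fade out into the gap
    noise[g0:g1] = 0.0                 # silent interval (the gap)
    noise[g1:g1 + nr] *= ramp[::-1]    # fade back in after the gap
    return noise
```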


Journal of the Acoustical Society of America | 2004

Temporal and spectral cues in Mandarin tone recognition

Ying-Yee Kong; Fan-Gang Zeng

This study evaluates the relative contributions of envelope and fine structure cues in both temporal and spectral domains to Mandarin tone recognition in quiet and in noise. Four sets of stimuli were created. Noise-excited vocoder speech was used to evaluate the temporal envelope. Frequency modulation was then added to evaluate the temporal fine structure. Whispered speech was used to evaluate the spectral envelope. Finally, equal-amplitude harmonics were used to evaluate the spectral fine structure. Results showed that normal-hearing listeners achieved nearly perfect tone recognition with either spectral or temporal fine structure in quiet, but only 70%-80% correct with the envelope cues. With the temporal envelope, 32 spectral bands were needed to achieve performance similar to that obtained with the original stimuli, but only four bands were necessary with the additional temporal fine structure. Envelope cues were more susceptible to noise than fine structure cues, with the envelope cues producing significantly lower performance in noise. These findings suggest that tonal pattern recognition is a robust process that can make use of both spectral and temporal cues. Unlike speech recognition, the fine structure is more important than the envelope for tone recognition in both temporal and spectral domains, particularly in noise.
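Of the four stimulus sets, the equal-amplitude harmonic complex (spectral fine structure with a flat spectral envelope) is the easiest to illustrate. The sketch below is an assumed construction; the number of harmonics and the example F0 contour are placeholders for the Mandarin tone contours used in the study.

```python
import numpy as np

def equal_amplitude_harmonics(f0_contour, fs, n_harmonics=20):
    """Harmonic complex with a flat spectral envelope, following a time-varying F0.

    f0_contour: instantaneous F0 in Hz, one value per output sample. The phase
    of each harmonic is accumulated so the pitch contour is preserved;
    n_harmonics is an illustrative choice.
    """
    phase = 2 * np.pi * np.cumsum(f0_contour) / fs          # running F0 phase
    out = np.zeros(len(f0_contour))
    for k in range(1, n_harmonics + 1):
        out += np.sin(k * phase)                             # k-th harmonic, equal amplitude
    return out / n_harmonics
```

For instance, passing f0 = np.linspace(120, 220, fs) at a 16 kHz sampling rate gives a one-second rising contour roughly resembling Mandarin tone 2.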


Ear and Hearing | 2009

Cochlear implant melody recognition as a function of melody frequency range, harmonicity, and number of electrodes.

Sonya Singh; Ying-Yee Kong; Fan-Gang Zeng

Objective: The primary goal of the present study was to determine how cochlear implant melody recognition was affected by the frequency range of the melodies, the harmonicity of these melodies, and the number of activated electrodes. The secondary goal was to investigate whether melody recognition and speech recognition were differentially affected by the limitations imposed by cochlear implant processing.

Design: Four experiments were conducted. In the first experiment, 11 cochlear implant users used their clinical processors to recognize melodies of complex harmonic tones with their fundamental frequencies being in the low (104–262 Hz), middle (207–523 Hz), and high (414–1046 Hz) ranges. In the second experiment, melody recognition with pure tones was compared to melody recognition with complex harmonic tones in four subjects. In the third experiment, melody recognition was measured as a function of the number of electrodes in five subjects. In the fourth experiment, vowel and consonant recognition were measured as a function of the number of electrodes in the same five subjects who participated in the third experiment.

Results: Frequency range significantly affected cochlear implant melody recognition, with higher frequency ranges producing better performance. Pure tones produced significantly better performance than complex harmonic tones. Increasing the number of activated electrodes did not affect performance with low- and middle-frequency melodies but produced better performance with high-frequency melodies. Large individual variability was observed for melody recognition, but its source seemed to be different from the source of the large variability observed in speech recognition.

Conclusion: Contemporary cochlear implants do not adequately encode either temporal pitch or place pitch cues. Melody recognition and speech recognition require different signal processing strategies in future cochlear implants.


Cochlear Implants International | 2004

Mandarin tone recognition in acoustic and electric hearing

Ying-Yee Kong; Fan-Gang Zeng

Mandarin tone recognition can be achieved using either spectral cues or temporal cues. Although the pitch contour is the most salient cue for voice pitch perception (Liang, 1963; Abramson, 1978), temporal envelope cues such as amplitude contour, periodicity, and duration also contribute significantly to Mandarin tone recognition (Liang, 1963; Fu et al., 1998; Fu and Zeng, 2000). Fu and colleagues (1998, 2000) showed that normal-hearing listeners achieved approximately 80% correct tone identification with only temporal envelope cues. However, no studies have examined whether reliable Mandarin tone recognition can be achieved in the presence of noise. The purpose of this study was to investigate the relative contribution of envelope and fine-structure cues to Mandarin tone recognition in quiet and in noise for normal-hearing and cochlear-implanted listeners. We hypothesized that while the temporal envelope can support tone recognition in quiet, it is not adequate in noisy conditions. We further hypothesized that the temporal fine structure is required for tone recognition in noise.


Proceedings of the National Academy of Sciences of the United States of America | 2005

Speech recognition with amplitude and frequency modulations.

Fan-Gang Zeng; Kaibao Nie; Ginger S. Stickney; Ying-Yee Kong; Michael Vongphoe; Ashish Bhargave; Wei Cg; Cao Kl


Journal of Neurophysiology | 2005

Perceptual consequences of disrupted auditory nerve activity.

Fan-Gang Zeng; Ying-Yee Kong; Henry J. Michalewski; Arnold Starr


Archive | 2005

Auditory perception with slowly-varying amplitude and frequency modulations

Fan-Gang Zeng; Kaibao Nie; Ginger S. Stickney; Ying-Yee Kong

Collaboration


Dive into Ying-Yee Kong's collaborations.

Top Co-Authors

Fan-Gang Zeng, University of California
Kaibao Nie, University of Washington
Arnold Starr, University of California
Elsa Del Rio, University of California
Hongbin Chen, University of California
Sheng Liu, University of California
Cao Kl, Peking Union Medical College Hospital