
Publications


Featured research published by Chin-Tuan Tan.


IEEE Transactions on Speech and Audio Processing | 1998

A parametric formulation of the generalized spectral subtraction method

Boh Lim Sim; Y. C. Tong; Joseph Sylvester Chang; Chin-Tuan Tan

In this paper, two short-time spectral amplitude estimators of the speech signal are derived based on a parametric formulation of the original generalized spectral subtraction method. The objective is to improve the noise suppression performance of the original method while maintaining its computational simplicity. The proposed parametric formulation describes the original method and several of its modifications. Based on the formulation, the speech spectral amplitude estimator is derived and optimized by minimizing the mean-square error (MSE) of the speech spectrum. With a constraint imposed on the parameters inherent in the formulation, a second estimator is also derived and optimized. The two estimators are different from those derived in most modified spectral subtraction methods, which are predominantly nonstatistical. When tested under stationary white Gaussian noise and semistationary Jeep noise, they showed improved noise suppression results.
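The family of methods this formulation generalizes can be sketched as a parametric subtraction rule on short-time spectral magnitudes. This is a minimal illustration only: the exponent `a` and over-subtraction factor `b` are generic parameters, not the paper's MSE-optimized estimators, and the spectral floor is an assumed safeguard.

```python
import numpy as np

def spectral_subtract(noisy_mag, noise_mag, a=2.0, b=1.0, floor=0.01):
    """Generalized spectral subtraction on magnitude spectra.

    a: spectral exponent (a=2 -> power subtraction, a=1 -> magnitude)
    b: over-subtraction factor
    floor: spectral floor, as a fraction of the noisy magnitude,
           preventing negative power estimates
    """
    sub = noisy_mag**a - b * noise_mag**a
    sub = np.maximum(sub, (floor * noisy_mag)**a)  # clamp to the floor
    return sub**(1.0 / a)

# Toy example: one frame of magnitudes with a flat noise estimate
noisy = np.array([1.0, 2.0, 0.5, 3.0])
noise = np.array([0.4, 0.4, 0.4, 0.4])
clean_est = spectral_subtract(noisy, noise, a=2.0, b=1.0)
```

Choosing (a, b) recovers the well-known special cases: a=2, b=1 is power subtraction; a=1 is magnitude subtraction; b>1 over-subtracts to suppress residual musical noise.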


International Journal of Audiology | 2008

Perception of nonlinear distortion by hearing-impaired people

Chin-Tuan Tan; Brian C. J. Moore

All hearing aids and communication devices introduce nonlinear distortion. The perception of distortion by hearing-impaired subjects was studied using artificial controlled distortions of various amounts and types. Subjects were asked to rate the perceived quality of distorted speech and music. Stimuli were subjected to frequency-dependent amplification as prescribed by the ‘Cambridge formula’ before presentation via Sennheiser HD580 earphones. The pattern of the ratings was reasonably consistent across subjects, but two of the eight subjects showed inconsistent results for the speech stimuli. Center clipping and soft clipping had only small effects on the ratings, while hard clipping and ‘full-range’ distortion had large effects. The results indicate that most hearing-impaired subjects are able to make orderly and consistent ratings of degradations in sound quality introduced by nonlinear distortion. The pattern of results could be predicted reasonably well using a model developed to account for the perception of distortion by normally hearing subjects.
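For readers unfamiliar with the distortion types named above, the following are minimal illustrative definitions; the exact parameterizations and thresholds used in the study are not reproduced here, and the tanh form of soft clipping is an assumption.

```python
import numpy as np

def hard_clip(x, t):
    """Clamp every sample beyond +/- t (abrupt saturation)."""
    return np.clip(x, -t, t)

def soft_clip(x, t):
    """Smooth saturation: roughly linear well below t, then
    gradually flattening (illustrative tanh-shaped form)."""
    return t * np.tanh(x / t)

def center_clip(x, t):
    """Zero out low-level samples and shift the rest toward zero,
    distorting the waveform around the zero crossings."""
    return np.where(np.abs(x) <= t, 0.0, x - np.sign(x) * t)

x = np.linspace(-1.0, 1.0, 5)   # [-1, -0.5, 0, 0.5, 1]
hard = hard_clip(x, 0.5)
soft = soft_clip(x, 0.5)
center = center_clip(x, 0.5)
```

Hard clipping adds strong high-order harmonics, which is consistent with its large effect on the quality ratings; center clipping mainly removes low-level detail, consistent with its small effect.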


Ear and Hearing | 2013

Feasibility of Real-Time Selection of Frequency Tables in an Acoustic Simulation of a Cochlear Implant

Matthew B. Fitzgerald; Elad Sagi; Tasnim A. Morbiwala; Chin-Tuan Tan; Mario A. Svirsky

Objectives: Perception of spectrally degraded speech is particularly difficult when the signal is also distorted along the frequency axis. This might be particularly important for post-lingually deafened recipients of cochlear implants (CIs), who must adapt to a signal where there may be a mismatch between the frequencies of an input signal and the characteristic frequencies of the neurons stimulated by the CI. However, there is a lack of tools that can be used to identify whether an individual has adapted fully to a mismatch in the frequency-to-place relationship and if so, to find a frequency table that ameliorates any negative effects of an unadapted mismatch. The goal of the proposed investigation is to test whether real-time selection of frequency tables can feasibly be used to identify cases in which listeners have not fully adapted to a frequency mismatch. The assumption underlying this approach is that listeners who have not adapted to a frequency mismatch will select a frequency table that minimizes any such mismatches, even at the expense of reducing the information provided by this frequency table. Design: Thirty-four normal-hearing adults listened to a noise-vocoded acoustic simulation of a CI and adjusted the frequency table in real time until they obtained a frequency table that sounded “most intelligible” to them. The use of an acoustic simulation was essential to this study because it allowed the authors to explicitly control the degree of frequency mismatch present in the simulation. None of the listeners had any previous experience with vocoded speech, in order to test the hypothesis that the real-time selection procedure could be used to identify cases in which a listener has not adapted to a frequency mismatch.
After obtaining a self-selected table, the authors measured consonant nucleus consonant word-recognition scores with that self-selected table and two other frequency tables: a “frequency-matched” table that matched the analysis filters with the noisebands of the noise-vocoder simulation, and a “right information” table that is similar to that used in most CI speech processors, but in this simulation results in a frequency shift equivalent to 6.5 mm of cochlear space. Results: Listeners tended to select a table that was very close to, but shifted slightly lower in frequency from the frequency-matched table. The real-time selection process took on average 2 to 3 min for each trial, and the between-trial variability was comparable with that previously observed with closely related procedures. The word-recognition scores with the self-selected table were clearly higher than with the right-information table and slightly higher than with the frequency-matched table. Conclusions: Real-time self-selection of frequency tables may be a viable tool for identifying listeners who have not adapted to a mismatch in the frequency-to-place relationship, and to find a frequency table that is more appropriate for them. Moreover, the small but significant improvements in word-recognition ability observed with the self-selected table suggest that these listeners based their selections on intelligibility rather than some other factor. The within-subject variability in the real-time selection procedure was comparable with that of a genetic algorithm, and the speed of the real-time procedure appeared to be faster than either a genetic algorithm or a simplex procedure.
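The listener-adjustable frequency table can be sketched simply. The study defines its tables along the cochlear place axis (the "right information" table corresponds to a 6.5 mm place shift), but a log-frequency table with an octave-shift knob conveys the idea; `make_table`, its band edges, and channel count below are illustrative, not the study's implementation.

```python
import numpy as np

def make_table(lo, hi, n_ch, shift_oct=0.0):
    """Log-spaced analysis band edges for an n_ch-channel vocoder.
    shift_oct moves the whole table up/down in octaves, mimicking
    the single parameter the listener adjusts in real time."""
    edges = np.geomspace(lo, hi, n_ch + 1)
    return edges * 2.0**shift_oct

# "Frequency-matched" table: analysis bands align with the noisebands
matched = make_table(200.0, 7000.0, 8)

# A slightly lower table, illustrating the kind of small downward
# shift listeners tended to self-select in this study
selected = make_table(200.0, 7000.0, 8, shift_oct=-0.25)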


Laryngoscope | 2013

Real-time measurement of electrode impedance during intracochlear electrode insertion.

Chin-Tuan Tan; Mario A. Svirsky; Abbas Anwar; Shaun Ashwin Kumar; Bernie Caessens; Paul Carter; Claudiu Treaba; J. Thomas Roland

This pilot study describes a software tool that performs continuous impedance measurement during electrode insertion, with the eventual potential to assess and optimize electrode position and reduce insertional trauma.


IEICE Technical Report | 2007

Estimates of Tuning of Auditory Filter Using Simultaneous and Forward Notched-noise Masking

Masashi Unoki; Ryota Miyauchi; Chin-Tuan Tan

The frequency selectivity of an auditory filter system is often conceptualized as a bank of bandpass auditory filters. Over the past 30 years, many simultaneous masking experiments using notched-noise maskers have been done to define the shape of the auditory filters (e.g., Glasberg and Moore 1990; Patterson and Nimmo-Smith 1980; Rosen and Baker 1994). The studies of Glasberg and Moore (2000) and Baker and Rosen (2006) are notable inasmuch as they measured the human auditory filter shape over most of the range of frequencies and levels encountered in everyday hearing. The advantage of using notched-noise masking is that one can avoid off-frequency listening and investigate filter asymmetry. However, the derived filter shapes are also affected by the effects of suppression. The tunings of auditory filters derived from data collected in forward masking experiments were apparently sharper than those derived from simultaneous masking experiments, especially when the signal levels are low. The tuning of a filter is commonly believed to be affected by cochlear nonlinearities such as suppression. In past studies, the tunings of auditory filters derived from simultaneous masking data were wider than those of filters derived from nonsimultaneous (forward) masking data (Moore and Glasberg 1978; Glasberg and Moore 1982; Oxenham and Shera 2003). Heinz et al. (2002) showed that tuning is generally sharpest when stimuli are at low levels and that suppression may affect tuning estimates more at high characteristic frequencies (CFs) than at low CFs. If the suggestion of Heinz et al. (2002) holds, i.e., if the effect of suppression on tuning varies with frequency, comparing the filter bandwidths derived from simultaneous and forward masking experiments should reveal this.
In this study we attempt to estimate filter tunings using both simultaneous and forward masking experiments with a notched-noise masker to investigate how the effects of suppression affect estimates of frequency selectivity across signal frequencies, signal levels, notch conditions (symmetric and asymmetric), and signal delays. This study extends the study of Unoki and Tan (2005).
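The analysis these experiments rest on can be sketched with a rounded-exponential (roex) filter weighting: widening the masker notch reduces the masker energy passing through the filter, and the rate of that reduction constrains the filter's slope parameter p. The function below is a minimal illustration with assumed parameter values and a simple Riemann-sum integral, not the fitting procedure used in the study.

```python
import numpy as np

def roex(f, fc, p):
    """Rounded-exponential (roex) auditory filter weighting
    (Patterson & Nimmo-Smith form); larger p -> sharper tuning."""
    g = np.abs(f - fc) / fc
    return (1.0 + p * g) * np.exp(-p * g)

def notched_noise_power(fc, notch_width, p, band=0.4, n=2000):
    """Relative masker power passing the filter for a symmetric notch
    of half-width `notch_width` (as a proportion of fc)."""
    f = np.linspace(fc * (1 - band), fc * (1 + band), n)
    w = roex(f, fc, p)
    w[np.abs(f - fc) / fc < notch_width] = 0.0  # no masker energy in notch
    return np.sum(w) * (f[1] - f[0])            # simple Riemann sum

# Widening the notch lets less masker energy through the filter,
# so the signal threshold drops -- the basis for deriving p.
p0 = notched_noise_power(1000.0, 0.0, p=25.0)
p1 = notched_noise_power(1000.0, 0.2, p=25.0)
```

In practice p is fitted separately for the lower and upper filter skirts, which is why asymmetric notch conditions are included in the design.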


Journal of The American Academy of Audiology | 2012

Current and Planned Cochlear Implant Research at New York University Laboratory for Translational Auditory Research

Mario A. Svirsky; Matthew B. Fitzgerald; Arlene C. Neuman; Elad Sagi; Chin-Tuan Tan; Darlene R. Ketten; Brett A. Martin

The Laboratory of Translational Auditory Research (LTAR/NYUSM) is part of the Department of Otolaryngology at the New York University School of Medicine and has close ties to the New York University Cochlear Implant Center. LTAR investigators have expertise in multiple related disciplines including speech and hearing science, audiology, engineering, and physiology. The lines of research in the laboratory deal mostly with speech perception by hearing-impaired listeners, and particularly those who use cochlear implants (CIs) or hearing aids (HAs). Although the laboratory's research interests are diverse, there are common threads that permeate and tie all of its work. In particular, a strong interest in translational research underlies even the most basic studies carried out in the laboratory. Another important element is the development of engineering and computational tools, which range from mathematical models of speech perception to software and hardware that bypass clinical speech processors and stimulate cochlear implants directly, to novel ways of analyzing clinical outcomes data. If the appropriate tool to conduct an important experiment does not exist, we may work to develop it, either in-house or in collaboration with academic or industrial partners. Another notable characteristic of the laboratory is its interdisciplinary nature where, for example, an audiologist and an engineer might work closely to develop an approach that would not have been feasible if each had worked separately on the project. Similarly, investigators with expertise in hearing aids and cochlear implants might join forces to study how human listeners integrate information provided by a CI and a HA. The following pages provide a flavor of the diversity and the commonalities of our research interests.


International Conference on Acoustics, Speech, and Signal Processing | 2013

Validation of acoustic models of auditory neural prostheses

Mario A. Svirsky; Nai Ding; Elad Sagi; Chin-Tuan Tan; Matthew B. Fitzgerald; E. Katelyn Glassman; Keena Seward; Arlene C. Neuman

Acoustic models have been used in numerous studies over the past thirty years to simulate the percepts elicited by auditory neural prostheses. In these acoustic models, incoming signals are processed the same way as in a cochlear implant speech processor. The percepts that would be caused by electrical stimulation in a real cochlear implant are simulated by modulating the amplitude of either noise bands or sinusoids. Despite their practical usefulness, these acoustic models have never been convincingly validated. This study presents a tool to conduct such validation using subjects who have a cochlear implant in one ear and have near perfect hearing in the other ear, allowing for the first time a direct perceptual comparison of the output of acoustic models to the stimulation provided by a cochlear implant.
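A minimal sketch of such an acoustic model, using sinusoidal carriers: band-limit the input, extract a crude rectify-and-smooth envelope per band, and remodulate a tone at each band's geometric center frequency. Real CI processors and the published models differ in filter design, envelope extraction, and compression; everything below is an illustrative assumption.

```python
import numpy as np

def sine_vocoder(x, fs, edges):
    """Toy channel vocoder: FFT band-limiting, rectified + smoothed
    envelope, and a sinusoidal carrier per band. Illustrative only."""
    t = np.arange(len(x)) / fs
    out = np.zeros_like(x)
    spec = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), 1.0 / fs)
    for lo, hi in zip(edges[:-1], edges[1:]):
        band_spec = np.where((freqs >= lo) & (freqs < hi), spec, 0)
        band = np.fft.irfft(band_spec, len(x))
        env = np.abs(band)                       # crude rectification
        k = max(1, int(fs / 400))                # ~400 Hz envelope smoothing
        env = np.convolve(env, np.ones(k) / k, mode="same")
        fc = np.sqrt(lo * hi)                    # geometric center frequency
        out += env * np.sin(2 * np.pi * fc * t)
    return out

fs = 16000
x = np.sin(2 * np.pi * 500.0 * np.arange(fs) / fs)   # 1 s, 500 Hz tone
y = sine_vocoder(x, fs, edges=np.array([100.0, 400.0, 1600.0, 6400.0]))
```

Replacing the sinusoidal carriers with band-limited noise gives the noise-vocoder variant; the single-sided comparison subjects in this study make it possible to ask which variant sounds more like the implant.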


Journal of The American Academy of Audiology | 2017

Pitch Matching between Electrical Stimulation of a Cochlear Implant and Acoustic Stimuli Presented to a Contralateral Ear with Residual Hearing

Chin-Tuan Tan; Brett A. Martin; Mario A. Svirsky

Background: Cochlear implants (CIs) successfully restore hearing in postlingually deaf adults, but in doing so impose a frequency‐position function in the cochlea that may differ from the physiological one. Purpose: The CI‐imposed frequency‐position function is determined by the frequency allocation table programmed into the listener's speech processor and by the location of the electrode array along the cochlea. To what extent can postlingually deaf CI users successfully adapt to the difference between physiological and CI‐imposed frequency‐position functions? Research Design: We attempt to answer the question by combining behavioral measures of electroacoustic pitch matching (PM) and measures of electrode location within the cochlea. Study Sample: The participants in this study were 16 adult CI users with residual hearing who could match the pitch of acoustic pure tones presented to their unimplanted ears to the pitch resulting from stimulation of different CI electrodes. Data Collection and Analysis: We obtained data for four to eight apical electrodes from 16 participants with CIs (most of whom were long‐term users), and estimated electrode insertion angle for 12 of these participants. PM functions in this group were compared with the two frequency‐position functions discussed above. Results: Taken together, the findings were consistent with the possibility that adaptation to the frequency‐position function imposed by CIs does happen, but it is not always complete. Conclusions: Some electrodes continue to be perceived as higher pitched than the acoustic frequencies with which they are associated despite years of listening experience after cochlear implantation.
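The physiological frequency-position function referenced above is commonly approximated by Greenwood's (1990) fit; the constants below are the standard human values, used here as an assumption to show how a place along the cochlea maps to a characteristic frequency.

```python
import numpy as np

def greenwood(x_mm, length_mm=35.0, A=165.4, a=2.1, k=0.88):
    """Greenwood (1990) human frequency-position function:
    characteristic frequency (Hz) at distance x_mm from the apex
    of a cochlea of total length length_mm."""
    return A * (10.0 ** (a * x_mm / length_mm) - k)

apex_cf = greenwood(0.0)     # lowest CFs near the apex
mid_cf = greenwood(17.5)     # mid-cochlea
base_cf = greenwood(35.0)    # highest CFs near the base
```

Comparing this curve against the frequency allocation table at each electrode's estimated insertion angle is, in outline, how the physiological and CI-imposed functions are contrasted with the pitch-matching data.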


Journal of the Acoustical Society of America | 2012

Behavioral and physiological measures of pitch matching between electrical and acoustical stimulation in cochlear implant patients.

Chin-Tuan Tan; Benjamin Guo; Brett A. Martin; Mario A. Svirsky

This study examines behavioral and physiological measures of pitch matching in cochlear implant (CI) users who have residual hearing in the contralateral ear. Subjects adjusted the frequency of an acoustic tone to match the pitch percept elicited by electrical stimulation in the other ear, when stimulation was alternating across both ears. In general, the selected acoustic frequencies did not line up perfectly with the center frequencies of the analysis bands corresponding to each stimulation electrode. Similar alternating stimuli were used to record auditory evoked potentials from 8 normal-hearing (NH) subjects and 3 CI patients. NH subjects were presented with a fixed tone in one ear, while tones in the other ear varied within a few octaves of the fixed tone. CI patients were stimulated in the acoustic ear with six different audible tones including their pitch-matched tones, while receiving electrical stimulation in an electrode in the other ear. N1 latency for NH subjects was minimized when the same frequency was present...


International Conference on Acoustics, Speech, and Signal Processing | 2014

Perceived quality of resonance based decomposed speech components under diotic and dichotic listening

Chin-Tuan Tan; Ivan W. Selesnick; Kemal Avci

This study investigates the feasibility of using binaural dichotic presentation of speech components decomposed using a recently proposed resonance-based decomposition method to release listeners from intra-speech masking and yield better perceived sound quality. Resonance-based decomposition is a nonlinear signal analysis method based not on frequency or scale but on resonance. We decomposed different categories of speech stimuli (vowels, consonants, and sentences) into low- and high-resonance components using various combinations of low- and high-Q-factors {Q1, Q2}. Ten normal-hearing listeners were asked to rate the perceived quality of each individual decomposed component presented diotically, and of the components in pairs presented dichotically. We found that the perceived quality rating of these resonance components when presented in pairs was higher than the mean of the perceived quality ratings of the same components when presented individually. Our results suggest that listeners were able to fuse the binaural dichotic presentation of high- and low-resonance components and perceived better sound quality.

Collaboration


Chin-Tuan Tan's top co-authors:

Brett A. Martin (City University of New York)
Y. C. Tong (University of Melbourne)
Joseph Sylvester Chang (Nanyang Technological University)