Yiteng Huang
Georgia Institute of Technology
Publications
Featured research published by Yiteng Huang.
International Conference on Acoustics, Speech, and Signal Processing (ICASSP) | 2000
Yiteng Huang; Jacob Benesty; Gary W. Elko
A multi-input one-step least-squares (OSLS) algorithm for passive source localization is proposed. It is shown that the OSLS algorithm is mathematically equivalent to the so-called spherical interpolation (SI) method, but at lower computational cost. The OSLS/SI method uses spherical equations (instead of hyperbolic equations) and solves them in a least-squares sense. Based on the adaptive eigenvalue decomposition time delay estimation method previously proposed by the same authors and the OSLS source localization algorithm, a real-time passive source localization system for video camera steering is presented. The system demonstrates desirable accuracy, portability, and robustness.
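The spherical-equation least-squares idea can be sketched as follows. This is an illustrative reconstruction, not the paper's implementation: the array geometry, source position, and noiseless range differences are toy assumptions, and `np.linalg.lstsq` stands in for whatever efficient solver the OSLS formulation derives. The key point is that the spherical equations are linear in the source position and source range, so they can be solved in one least-squares step.

```python
import numpy as np

# Toy geometry (assumed): reference microphone at the origin, four others.
mics = np.array([[0.0, 0.0, 0.0],
                 [1.0, 0.0, 0.0],
                 [0.0, 1.0, 0.0],
                 [0.0, 0.0, 1.0],
                 [1.0, 1.0, 0.0]])
source = np.array([2.0, 3.0, 1.0])

# Range differences relative to the reference mic (c * TDOA in practice).
d = (np.linalg.norm(mics - source, axis=1) - np.linalg.norm(source))[1:]
r2 = np.sum(mics[1:] ** 2, axis=1)

# Spherical equations are linear in (source position x_s, source range r_s):
#   2 x_i . x_s + 2 d_i r_s = ||x_i||^2 - d_i^2
A = np.hstack([2 * mics[1:], 2 * d[:, None]])
b = r2 - d ** 2
theta, *_ = np.linalg.lstsq(A, b, rcond=None)
est = theta[:3]                    # estimated source position
```

With exact range differences the system is consistent and the least-squares solution recovers the source position; with noisy TDOAs the same one-step solve gives the SI estimate.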
Journal of the Acoustical Society of America | 2005
Jacob Benesty; Gary W. Elko; Yiteng Huang
A real-time passive acoustic source localization system for video camera steering determines the relative delay between the direct paths of two estimated channel impulse responses. The system employs an approach referred to as the "adaptive eigenvalue decomposition algorithm" (AEDA) to make this determination, and then applies a "one-step least-squares" (OSLS) algorithm for acoustic source localization, providing robustness, portability, and accuracy in a reverberant environment. The AEDA technique directly estimates the (direct-path) impulse response from the sound source to each of a pair of microphones, and then uses these estimated impulse responses to determine the time delay of arrival (TDOA) between the two microphones by measuring the distance between their first peaks (i.e., the first significant taps of the corresponding transfer functions). In one embodiment, the system minimizes an error signal computed with two adaptive filters, each applied to one of the two signals received from the microphone pair. The filtered signals are subtracted from one another to produce the error signal, which is minimized by a conventional adaptive filtering algorithm such as the least-mean-square (LMS) technique. The TDOA is then estimated by measuring the "distance" (i.e., the time) between the first significant taps of the two resulting adaptive filter transfer functions.
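The eigenvalue-decomposition view behind AEDA can be sketched with a toy example. This is a batch eigendecomposition stand-in for the adaptive LMS-style update described above, under strong simplifying assumptions (noiseless single-path channels, one a pure delay of the other); the delay, filter length, and signal are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
D, L = 3, 8                        # true delay in samples, filter length (toy values)
s = rng.standard_normal(4000)
x1 = s                                       # channel to mic 1: unit impulse
x2 = np.concatenate([np.zeros(D), s[:-D]])   # channel to mic 2: pure delay by D

# Stack length-L tap vectors of both channels and form the joint covariance.
X = np.array([np.concatenate([x1[n - L + 1:n + 1][::-1],
                              x2[n - L + 1:n + 1][::-1]])
              for n in range(L - 1, len(s))])
R = X.T @ X / len(X)

# Any eigenvector u = [u1, u2] with (near-)zero eigenvalue satisfies
# u1 * g1 = -u2 * g2 (convolution with the two channel responses), so its
# two halves are shifted copies of each other; the shift is the TDOA.
eigvals, eigvecs = np.linalg.eigh(R)
u = eigvecs[:, 0]
g2, g1 = u[:L], -u[L:]

# TDOA = spacing between the first significant taps of the two halves.
tdoa = int(np.argmax(np.abs(g2)) - np.argmax(np.abs(g1)))
```

The adaptive algorithm tracks this minimum eigenvector online with a normalized, norm-constrained LMS update instead of an explicit eigendecomposition; the batch version above only shows why the minimum eigenvector encodes the delay.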
International Conference on Acoustics, Speech, and Signal Processing (ICASSP) | 1999
Yiteng Huang; Jacob Benesty; Gary W. Elko
To locate an acoustic source in a room, the relative delay between microphone pairs must be determined efficiently and accurately. However, most traditional time delay estimation (TDE) algorithms fail in reverberant environments. A new approach is proposed that takes the reverberation of the room into account. A real-time PC-based TDE system running under Microsoft™ Windows was developed with three TDE techniques: classical cross-correlation, the phase transform, and the new algorithm proposed in this paper. The system provides an interactive platform that allows users to compare the performance of these algorithms.
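Of the three baseline techniques, the phase transform is the generalized cross-correlation (GCC-PHAT) weighting that whitens the cross-spectrum so that only phase (i.e., delay) information remains. A minimal sketch, with the signal, delay, and search range as toy assumptions:

```python
import numpy as np

def gcc_phat_delay(x1, x2, max_lag):
    """Estimate the delay of x2 relative to x1 via GCC with PHAT weighting."""
    n = len(x1) + len(x2)                      # zero-pad to avoid circular wrap
    X1, X2 = np.fft.rfft(x1, n), np.fft.rfft(x2, n)
    cross = X2 * np.conj(X1)
    cross /= np.abs(cross) + 1e-12             # PHAT: keep phase, drop magnitude
    cc = np.fft.irfft(cross, n)
    lags = np.concatenate([cc[-max_lag:], cc[:max_lag + 1]])
    return int(np.argmax(lags)) - max_lag

rng = np.random.default_rng(0)
s = rng.standard_normal(2048)
x2 = np.concatenate([np.zeros(5), s[:-5]])     # x2 lags x1 by 5 samples
delay = gcc_phat_delay(s, x2, max_lag=20)      # -> 5
```

Classical cross-correlation is the same computation without the whitening step; PHAT trades noise robustness for sharper peaks, which is why it degrades differently under reverberation.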
Acoustic signal processing for telecommunication | 2000
Yiteng Huang; Jacob Benesty; Gary W. Elko
In this chapter, we consider the problem of passively estimating the acoustic source location by using microphone arrays for video camera steering in real reverberant environments. Within a two-stage framework for this problem, different algorithms for time delay estimation and source localization are developed. Their performance as well as computational complexity are analyzed and discussed. A successful real-time system is also presented.
International Conference on Acoustics, Speech, and Signal Processing (ICASSP) | 2008
Jacob Benesty; Jingdong Chen; Yiteng Huang
Noise reduction using multiple microphones remains a challenging and crucial research problem. This paper presents a new multichannel noise-reduction algorithm based on spatio-temporal prediction. Unlike many multichannel techniques that attempt to achieve both speech dereverberation and noise reduction at the same time, this new approach puts aside speech dereverberation and formulates the problem as one of estimating the speech component received at one microphone using the observations from all the available microphones. In comparison with the existing techniques such as beamforming, this new multichannel approach has many appealing properties: it does not require the knowledge of the source location or the channel impulse responses; the multiple microphones do not have to be arranged into a specific array geometry; it works the same for both the far-field and near-field cases; and most importantly, it can produce very good noise reduction with minimum speech distortion in real acoustic environments.
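The flavor of "estimate the speech component at one microphone from all microphones" can be illustrated with the closely related multichannel Wiener filter, not the paper's spatio-temporal prediction derivation itself. The sketch below makes strong simplifying assumptions (simulated low-pass "speech", known noise-only statistics, arbitrary delays and filter lengths), but it shares the listed properties: no source location, no channel impulse responses, no fixed array geometry.

```python
import numpy as np

rng = np.random.default_rng(1)
N, M, L = 20000, 3, 8              # samples, microphones, taps per mic (toy values)
s = np.convolve(rng.standard_normal(N), np.ones(8) / 8, mode="same")  # correlated "speech"
clean = np.stack([np.concatenate([np.zeros(d), s[:N - d]]) for d in (0, 2, 5)])
noise = 0.5 * rng.standard_normal((M, N))     # independent sensor noise
y = clean + noise

def taps(sig, L):
    # One row per time step: [sig(n-1), ..., sig(n-L)]
    return np.stack([sig[n - L:n][::-1] for n in range(L, len(sig))])

Y = np.hstack([taps(y[m], L) for m in range(M)])      # stacked noisy observations
V = np.hstack([taps(noise[m], L) for m in range(M)])  # noise-only frames (assumed known)
Ry, Rv = Y.T @ Y / len(Y), V.T @ V / len(V)

# Wiener filter for the speech component at microphone 0:
#   h = Ry^{-1} (Ry - Rv) e1, applied to the stacked observation vector.
e1 = np.zeros(M * L)
e1[0] = 1.0
h = np.linalg.solve(Ry, (Ry - Rv) @ e1)
est = Y @ h                        # estimate of clean[0], frame-aligned
```

Because the target is the speech as received at microphone 0 (reverberation included), no dereverberation is attempted, which is exactly the trade-off the abstract describes; in practice the noise statistics would be estimated from speech-free segments rather than given.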
Archive | 2006
Yiteng Huang; Jacob Benesty; Jingdong Chen
Archive | 2011
Jacob Benesty; Yiteng Huang
Archive | 2009
Jacob Benesty; Jingdong Chen; Yiteng Huang; Israel Cohen