Gary W. Elko
Alcatel-Lucent
Publications
Featured research published by Gary W. Elko.
IEEE Transactions on Speech and Audio Processing | 2001
Yiteng Huang; Jacob Benesty; Gary W. Elko; Russell M. Mersereau
A linear-correction least-squares estimation procedure is proposed for the source localization problem under an additive measurement error model. The method, which can be easily implemented in a real-time system with moderate computational complexity, yields an efficient source location estimator without assuming a priori knowledge of the noise distribution. Alternative existing estimators, including likelihood-based, spherical-intersection, spherical-interpolation, and quadratic-correction least-squares estimators, are reviewed, and their complexity, estimation consistency, and efficiency against the Cramér-Rao lower bound are compared. Numerical studies demonstrate that the proposed estimator performs better in many practical situations.
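The core of such TDOA-based localization can be illustrated with a short sketch. The code below builds the spherical least-squares equations from range differences (measured delays times the speed of sound) and solves them in closed form; it is only the unconstrained first step, not the paper's linear-correction estimator, and the microphone layout, function name, and toy source position are invented for the example.

```python
import numpy as np

def tdoa_ls_localize(mic_pos, range_diffs):
    """Unconstrained spherical least-squares source localization.

    mic_pos     : (N, 3) microphone coordinates; microphone 0 is the
                  reference and is assumed to sit at the origin.
    range_diffs : (N-1,) range differences c*tau_i between microphone i >= 1
                  and the reference microphone.

    Returns an estimated source position (3,).  The paper's correction stage,
    which enforces consistency between the position and range unknowns, is
    not reproduced here.
    """
    r = mic_pos[1:]                          # (N-1, 3)
    d = np.asarray(range_diffs, float)       # (N-1,)
    A = np.hstack([r, d[:, None]])           # unknowns are [source xyz, source range]
    b = 0.5 * (np.sum(r**2, axis=1) - d**2)
    theta, *_ = np.linalg.lstsq(A, b, rcond=None)
    return theta[:3]                         # discard the auxiliary range variable

# toy check with five microphones and noiseless range differences
mics = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1], [1, 1, 1]], float)
src = np.array([2.0, 1.5, 0.5])
dist = np.linalg.norm(mics - src, axis=1)
print(tdoa_ls_localize(mics, dist[1:] - dist[0]))    # ~ [2.0, 1.5, 0.5]
```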
international conference on acoustics, speech, and signal processing | 2002
Jens Meyer; Gary W. Elko
This paper describes a beamforming microphone array consisting of pressure microphones mounted on the surface of a rigid sphere. The beamformer is based on a spherical harmonic decomposition of the sound field. We show that this allows a simple, computationally efficient, yet flexible beamformer structure. The look direction can be steered to any direction in 3-D space without changing the beampattern. In general, the number of sensors and their locations are quite arbitrary as long as they satisfy a certain orthogonality constraint that we derive. For a practical example we chose a spherical array with 32 elements. The microphones are located at the centers of the faces of a truncated icosahedron. The radius of the sphere is 5 cm. With this setup we can achieve a directivity index of 12 dB and higher. The operating frequency range is from 100 Hz to 5 kHz.
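The decompose-and-steer structure can be sketched in a few lines. The example below estimates spherical-harmonic coefficients from sampled pressures and re-expands them in the look direction; the rigid-sphere modal radial filters and the 32-element truncated-icosahedron layout used in the paper are omitted, and the sampling directions, order, and function names are illustrative assumptions only.

```python
import numpy as np
from scipy.special import sph_harm

def sh_coefficients(p, az, pol, order):
    """Estimate spherical-harmonic coefficients of the sound field from
    pressures p sampled at microphone directions (az = azimuth, pol = polar
    angle), assuming the layout satisfies the discrete orthogonality
    condition discussed in the paper."""
    S = len(p)
    coeffs = {}
    for n in range(order + 1):
        for m in range(-n, n + 1):
            Y = sph_harm(m, n, az, pol)                 # Y_n^m at each microphone
            coeffs[(n, m)] = (4 * np.pi / S) * np.sum(p * np.conj(Y))
    return coeffs

def steer(coeffs, az0, pol0):
    """Re-expand the modal coefficients in the look direction (az0, pol0).
    A rigid-sphere design would first equalize each order by its modal
    strength b_n(ka); that frequency-dependent step is left out here."""
    return sum(c * sph_harm(m, n, az0, pol0) for (n, m), c in coeffs.items())

# toy usage: a uniform (zeroth-order) field sampled at 32 pseudo-random
# directions standing in for the truncated-icosahedron layout
rng = np.random.default_rng(0)
az = rng.uniform(0.0, 2.0 * np.pi, 32)
pol = np.arccos(rng.uniform(-1.0, 1.0, 32))
print(steer(sh_coefficients(np.ones(32), az, pol, order=3), 1.0, 1.2))
```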
international conference on computer graphics and interactive techniques | 1998
Thomas A. Funkhouser; Ingrid Carlbom; Gary W. Elko; Gopal Pingali; Mohan Sondhi; James E. West
Virtual environment research has focused on interactive image generation and has largely ignored acoustic modeling for spatialization of sound. Yet, realistic auditory cues can complement and enhance visual cues to aid navigation, comprehension, and sense of presence in virtual environments. A primary challenge in acoustic modeling is computation of reverberation paths from sound sources fast enough for real-time auralization. We have developed a system that uses precomputed spatial subdivision and “beam tree” data structures to enable real-time acoustic modeling and auralization in interactive virtual environments. The spatial subdivision is a partition of 3D space into convex polyhedral regions (cells) represented as a cell adjacency graph. A beam tracing algorithm recursively traces pyramidal beams through the spatial subdivision to construct a beam tree data structure representing the regions of space reachable by each potential sequence of transmission and specular reflection events at cell boundaries. From these precomputed data structures, we can generate high-order specular reflection and transmission paths at interactive rates to spatialize fixed sound sources in real time as the user moves through a virtual environment. Unlike previous acoustic modeling work, our beam tracing method: 1) supports evaluation of reverberation paths at interactive rates, 2) scales to compute high-order reflections in large environments, and 3) extends naturally to compute paths of diffraction and diffuse reflection efficiently. We are using this system to develop interactive applications in which a user experiences a virtual environment immersively via simultaneous auralization and visualization.
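A purely topological sketch of the beam-tree recursion over the cell adjacency graph is given below. The geometric heart of the method, clipping pyramidal beams against cell boundaries and pruning beams that become empty, is omitted, and the class and field names are invented for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class Cell:
    name: str
    portals: list = field(default_factory=list)   # (neighbor Cell, boundary id): transparent boundaries
    walls: list = field(default_factory=list)     # boundary ids of reflecting surfaces

@dataclass
class BeamNode:
    cell: Cell
    event: str                                    # "src", "T:<id>" (transmission), "R:<id>" (reflection)
    children: list = field(default_factory=list)

def build_beam_tree(cell, depth, event="src"):
    """Recursively enumerate transmission/reflection event sequences through
    the cell adjacency graph, up to a fixed depth.  A real implementation
    clips the beam's solid angle at every boundary and stops when it vanishes."""
    node = BeamNode(cell, event)
    if depth == 0:
        return node
    for neighbor, b in cell.portals:              # transmission into an adjacent cell
        node.children.append(build_beam_tree(neighbor, depth - 1, f"T:{b}"))
    for b in cell.walls:                          # specular reflection back into the same cell
        node.children.append(build_beam_tree(cell, depth - 1, f"R:{b}"))
    return node

# two convex cells joined by one portal, each with one reflecting wall
a, b = Cell("A", walls=["a1"]), Cell("B", walls=["b1"])
a.portals.append((b, "p"))
b.portals.append((a, "p"))
tree = build_beam_tree(a, depth=3)                # beam tree rooted at the source cell
```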
Acoustic signal processing for telecommunication | 2000
Gary W. Elko
Noise and reverberation can seriously degrade both the microphone reception and the loudspeaker transmission of speech signals in hands-free telecommunication. Directional loudspeakers and microphone arrays can be effective in combating these problems. This chapter covers the design and implementation of differential arrays that are small compared to the acoustic wavelength. Differential arrays are therefore also superdirectional arrays since their directivity is higher than that of a uniformly summed array with the same geometry. Aside from the small size, another beneficial feature of these differential arrays is that their directivity is independent of frequency. Derivations are included for several optimal differential arrays that may be useful for teleconferencing and speech pickup in noisy and reverberant environments. Novel expressions and design details covering multiple-order hypercardioid and supercardioid-type differential arrays are given. Also, the design of Dolph-Chebyshev equi-sidelobe differential arrays is covered for the general multiple-order case. The results shown here should be useful in designing and selecting directional microphones for a variety of applications.
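As a small numerical companion, the sketch below evaluates the directivity index of the standard first-order pattern E(θ) = α + (1 − α)cos θ for the classic designs mentioned in the chapter. The α values are common textbook choices (the supercardioid value is approximate), so the printed numbers are illustrative rather than a reproduction of the chapter's derivations.

```python
import numpy as np

def directivity_index(alpha, n=200001):
    """Directivity index (dB) of the first-order pattern
    E(theta) = alpha + (1 - alpha) * cos(theta), computed by numerically
    averaging |E|^2 over the sphere (u = cos(theta) uniform in [-1, 1])."""
    u = np.linspace(-1.0, 1.0, n)
    E = alpha + (1.0 - alpha) * u
    on_axis = alpha + (1.0 - alpha)            # pattern value at theta = 0
    return 10.0 * np.log10(on_axis**2 / np.mean(E**2))

for name, a in [("dipole", 0.0), ("cardioid", 0.5),
                ("hypercardioid", 0.25), ("supercardioid", 0.366)]:
    print(f"{name:14s} DI ~ {directivity_index(a):.2f} dB")
```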
Speech Communication | 1996
Gary W. Elko
Microphone array systems can be effective in combating the detrimental effects of acoustic noise and reverberation in hands-free telecommunication. This paper discusses classical delay-sum beamformers as well as the more general filter-sum beamformers. Filter-sum beamformers add the ability to control the array beampattern as a function of frequency, and a new design method for a constant-beamwidth filter-sum beamformer is presented. The delay-sum and filter-sum beamformers require array sizes that are comparable to the acoustic wavelength, which can result in physically large arrays. For applications that are space constrained, differential microphone array systems are presented. Finally, two types of adaptive beamformers are presented: a broadside array and a two-element differential microphone.
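The frequency-dependent beamwidth that motivates the constant-beamwidth design can be seen with a generic narrowband delay-sum sketch (not taken from the paper); the array geometry and test frequencies below are arbitrary choices for the example.

```python
import numpy as np

def delay_sum_pattern(n_mics, spacing, freq, look_deg, c=343.0):
    """Narrowband beampattern of a uniformly weighted delay-sum line array,
    using the far-field plane-wave model."""
    angles = np.linspace(-90.0, 90.0, 721)
    k = 2.0 * np.pi * freq / c
    x = np.arange(n_mics) * spacing                                  # sensor positions along the line
    w = np.exp(-1j * k * x * np.sin(np.radians(look_deg))) / n_mics  # steering weights
    v = np.exp(-1j * k * np.outer(np.sin(np.radians(angles)), x))    # incoming plane waves
    return angles, np.abs(v @ np.conj(w))

# the -3 dB beamwidth of the same 8-element, 5 cm array shrinks with frequency,
# which is the motivation for frequency-dependent (filter-sum) weighting
for f in (500.0, 4000.0):
    ang, r = delay_sum_pattern(8, 0.05, f, look_deg=0.0)
    main = ang[r >= r.max() / np.sqrt(2.0)]
    print(f"{f:6.0f} Hz: ~{main.max() - main.min():.1f} deg")
```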
workshop on applications of signal processing to audio and acoustics | 1995
Gary W. Elko; Anh-Tho Nguyen Pong
As communication devices become more portable and are used in increasingly varied environments, acoustic pickup by electroacoustic transducers will require the combination of small, compact transducers and signal processing to allow high-quality communication. This paper covers the design and implementation of a novel adaptive first-order differential microphone. The self-optimization is based on minimizing the microphone output under the constraint that the solitary null of a first-order system is located in the rear half-plane. The constraint is simply realized by the judicious subtraction of time-delayed outputs from two closely spaced omnidirectional microphones. Although the solution presented does not maximize the signal-to-noise ratio, it can significantly improve the signal-to-noise ratio in certain acoustic fields.
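The back-to-back cardioid arrangement sketched below is one common realization of what the abstract describes: delayed omni outputs are subtracted to form forward- and rear-facing cardioids, and a single coefficient beta is adapted and clamped to [0, 1] so the null stays in the rear half-plane. The element spacing, integer-sample delay, and NLMS step size are assumptions for the sketch, not the authors' exact implementation.

```python
import numpy as np

def adaptive_diff_mic(x1, x2, fs, d=0.015, c=343.0, mu=0.1):
    """Two-element adaptive first-order differential microphone (sketch).
    x1, x2 : float signals from two closely spaced omni mics (front, rear).
    The spacing d is assumed such that the acoustic delay d/c rounds to a
    whole number of samples; a real design would use a fractional delay."""
    delay = int(round(fs * d / c))                   # inter-element delay in samples
    x1d = np.concatenate([np.zeros(delay), x1[:len(x1) - delay]])
    x2d = np.concatenate([np.zeros(delay), x2[:len(x2) - delay]])
    cf = x1 - x2d                                    # forward-facing cardioid
    cb = x2 - x1d                                    # backward-facing cardioid
    beta, y = 0.0, np.zeros(len(x1))
    for n in range(len(x1)):
        y[n] = cf[n] - beta * cb[n]
        # NLMS update of the single coefficient beta, then clamp it to [0, 1]
        # so the pattern's solitary null stays in the rear half-plane
        beta += mu * y[n] * cb[n] / (cb[n] * cb[n] + 1e-12)
        beta = min(max(beta, 0.0), 1.0)
    return y
```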
international conference on acoustics, speech, and signal processing | 2000
Yiteng Huang; Jacob Benesty; Gary W. Elko
A multi-input one-step least-squares (OSLS) algorithm for passive source localization is proposed. It is shown that the OSLS algorithm is mathematically equivalent to the so-called spherical interpolation (SI) method but has lower computational complexity. The OSLS/SI method uses spherical equations (instead of hyperbolic equations) and solves them in a least-squares sense. Based on the adaptive eigenvalue decomposition time delay estimation method previously proposed by the same authors and the OSLS source localization algorithm, a real-time passive source localization system for video camera steering is presented. The system demonstrates many desirable features such as accuracy, portability, and robustness.
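Once a position estimate is available, for instance from a least-squares solver like the one sketched under the 2001 paper above, camera steering reduces to converting that point into pan and tilt angles. The helper below is hypothetical and assumes a particular axis convention (x forward, y left, z up), which the abstract does not specify.

```python
import numpy as np

def pan_tilt(source_xyz, camera_xyz=(0.0, 0.0, 0.0)):
    """Convert an estimated source position into pan/tilt angles (degrees)
    for a camera at camera_xyz.  Hypothetical helper, not part of the paper."""
    dx, dy, dz = np.asarray(source_xyz, float) - np.asarray(camera_xyz, float)
    pan = np.degrees(np.arctan2(dy, dx))                 # rotation about the vertical axis
    tilt = np.degrees(np.arctan2(dz, np.hypot(dx, dy)))  # elevation above the horizontal plane
    return pan, tilt

print(pan_tilt([2.0, 1.5, 0.5]))   # e.g. feed the OSLS/SI position estimate here
```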
international conference on acoustics, speech, and signal processing | 1993
Michael M. Goodwin; Gary W. Elko
The beamwidth of a linear array decreases as frequency increases. For wideband beamformers such as microphone arrays intended for teleconferencing, this frequency dependence implies that signals incident on the outer portions of the main beam are subject to the undesirable effects of lowpass filtering. The authors discuss several ways of attaining beamwidth constancy and present a method based on superimposing several marginally steered beams to form a constant-beamwidth multi-beam. This method provides an analytically tractable framework for designing constant-beamwidth beamformers.
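A crude version of the multi-beam idea can be demonstrated with the same far-field plane-wave model as the delay-sum sketch shown earlier: superimposing a few beams steered a few degrees apart widens the high-frequency beam back toward the low-frequency beamwidth. The beam offsets and uniform per-beam weights below are arbitrary; the paper designs these quantities analytically to obtain true beamwidth constancy.

```python
import numpy as np

def multibeam_pattern(n_mics, spacing, freq, offsets_deg, c=343.0):
    """Superimpose several marginally steered, uniformly weighted delay-sum
    beams (far-field plane-wave model) and return the combined pattern."""
    angles = np.linspace(-90.0, 90.0, 721)
    k = 2.0 * np.pi * freq / c
    x = np.arange(n_mics) * spacing
    v = np.exp(-1j * k * np.outer(np.sin(np.radians(angles)), x))   # incoming plane waves
    total = np.zeros(len(angles), dtype=complex)
    for off in offsets_deg:
        w = np.exp(-1j * k * x * np.sin(np.radians(off))) / n_mics  # one steered beam
        total += v @ np.conj(w)
    return angles, np.abs(total) / len(offsets_deg)

# a single 4 kHz beam vs. four beams steered a few degrees apart
ang, single = multibeam_pattern(8, 0.05, 4000.0, [0.0])
ang, multi = multibeam_pattern(8, 0.05, 4000.0, [-9.0, -3.0, 3.0, 9.0])
for r, label in [(single, "single beam"), (multi, "multi-beam ")]:
    main = ang[r >= r.max() / np.sqrt(2.0)]
    print(f"{label}: ~{main.max() - main.min():.1f} deg at 4 kHz")
```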
Journal of the Acoustical Society of America | 2005
Jacob Benesty; Gary W. Elko; Yiteng Huang
A real-time passive acoustic source localization system for video camera steering advantageously determines the relative delay between the direct paths of two estimated channel impulse responses. The illustrative system employs an approach referred to herein as the “adaptive eigenvalue decomposition algorithm” (AEDA) to make such a determination, and then advantageously employs a “one-step least-squares algorithm” (OSLS) for purposes of acoustic source localization, providing the desired features of robustness, portability, and accuracy in a reverberant environment. The AEDA technique directly estimates the (direct path) impulse response from the sound source to each of a pair of microphones, and then uses these estimated impulse responses to determine the time delay of arrival (TDOA) between the two microphones by measuring the distance between the first peaks thereof (i.e., the first significant taps of the corresponding transfer functions). In one embodiment, the system minimizes an error function (i.e., a difference) which is computed with the use of two adaptive filters, each such filter being applied to a corresponding one of the two signals received from the given pair of microphones. The filtered signals are then subtracted from one another to produce the error signal, which is minimized by a conventional adaptive filtering algorithm such as, for example, an LMS (Least Mean Squared) technique. Then, the TDOA is estimated by measuring the “distance” (i.e., the time) between the first significant taps of the two resultant adaptive filter transfer functions.
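A stripped-down version of that adaptive scheme fits in a few lines: stack the two filters into one weight vector, drive the difference of the two filtered microphone signals toward zero with an LMS update, renormalize to unit norm so the solution cannot collapse to zero, and read the relative delay off the dominant taps. The filter length, step size, and sign convention below are assumptions for the sketch, not values from the patent.

```python
import numpy as np

def aeda_tdoa(x1, x2, L=64, mu=0.05):
    """Sketch of the adaptive eigenvalue-decomposition idea: adapt two length-L
    filters u and v so that (u filtering x1) - (v filtering x2) is driven toward
    zero under a unit-norm constraint, then read the relative delay off the
    positions of the dominant taps.  L and mu are illustrative values."""
    w = np.zeros(2 * L)
    w[0] = 1.0                                    # non-trivial starting point
    for n in range(L, len(x1)):
        frame = np.concatenate([x1[n - L:n][::-1], -x2[n - L:n][::-1]])
        e = np.dot(w, frame)                      # e = u^T x1 - v^T x2
        w -= mu * e * frame / (np.dot(frame, frame) + 1e-12)   # normalized LMS step
        w /= np.linalg.norm(w)                    # unit-norm constraint
    u, v = w[:L], w[L:]
    # u tends toward the channel seen by x2 and v toward the channel seen by x1,
    # so the tap spacing approximates the delay of x2 relative to x1
    return np.argmax(np.abs(u)) - np.argmax(np.abs(v))
```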
international conference on acoustics, speech, and signal processing | 1999
Yiteng Huang; Jacob Benesty; Gary W. Elko
To locate an acoustic source in a room, the relative delay between microphone pairs must be determined efficiently and accurately. However, most traditional time delay estimation (TDE) algorithms fail in reverberant environments. A new approach is proposed that takes the reverberation of the room into account. A real-time PC-based TDE system running under Microsoft Windows was developed with three TDE techniques: classical cross-correlation, the phase transform, and the new algorithm proposed in this paper. The system provides an interactive platform that allows users to compare the performance of these algorithms.
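For the two classical techniques in that comparison, a compact generalized cross-correlation sketch is shown below: with phat=False it is the classical cross-correlation, and with phat=True it applies the phase-transform weighting. The FFT length, regularization constant, and sign convention (a positive result means x2 lags x1) are choices made for this example; the paper's new reverberation-robust algorithm is not reproduced.

```python
import numpy as np

def gcc_tdoa(x1, x2, fs, phat=True):
    """Generalized cross-correlation time-delay estimate between x1 and x2.
    phat=False gives classical cross-correlation; phat=True applies the phase
    transform, which whitens the spectrum and helps in reverberation."""
    n = 2 * max(len(x1), len(x2))                         # zero-padding avoids wrap-around
    G = np.fft.rfft(x2, n) * np.conj(np.fft.rfft(x1, n))  # cross-spectrum
    if phat:
        G /= np.abs(G) + 1e-12                            # phase-transform weighting
    cc = np.fft.irfft(G, n)
    cc = np.concatenate([cc[-(n // 2):], cc[:n // 2]])    # reorder to lags -n/2 .. n/2-1
    lags = np.arange(-n // 2, n // 2)
    return lags[np.argmax(cc)] / fs                       # positive: x2 is the delayed channel

# synthetic check: delay a white-noise burst by 23 samples
fs, d = 16000, 23
rng = np.random.default_rng(1)
s = rng.standard_normal(fs)
x1, x2 = s, np.concatenate([np.zeros(d), s[:-d]])
print(round(gcc_tdoa(x1, x2, fs) * fs))                   # ~ 23
```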