Ngai-Man Cheung
Texas Instruments
Publication
Featured research published by Ngai-Man Cheung.
international conference on acoustics, speech, and signal processing | 2001
Ngai-Man Cheung; Yuji Itoh
The variable length code (VLC) tables in MPEG-1/2/4 and H.263 are fixed and optimized for a limited range of bit-rates, and they cannot handle a variety of applications. The universal variable length code (UVLC) is a new scheme to encode syntax elements and has some configurable capabilities. It is also being considered in ITU-T H.26L. However, the configurable feature of the UVLC has not been well explored. We propose configuring the UVLC with the additional code configuration (ACC). The ACC is used to adapt the UVLC to different symbol distributions by adjusting the partitioning of the symbols into different categories, and the code size assignment to different categories. Experimental results show that the UVLC with ACC outperforms the scheme currently proposed in H.26L and the VLC tables of existing standards, while drastically simplifying the encoding and decoding process, and is applicable to a variety of applications.
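As background, the UVLC studied in H.26L is structurally similar to an exponential-Golomb code. The following is a minimal Python sketch of zeroth-order exp-Golomb encoding and decoding, not the paper's ACC-configured scheme; function names are illustrative:

```python
def exp_golomb_encode(n: int) -> str:
    """Zeroth-order exp-Golomb code for a non-negative integer:
    M leading zero bits, then the (M+1)-bit binary of n+1."""
    assert n >= 0
    code = bin(n + 1)[2:]              # binary of n+1, leading '1' included
    return "0" * (len(code) - 1) + code

def exp_golomb_decode(bits: str) -> int:
    """Inverse: count M leading zeros, then read the next M+1 bits."""
    m = 0
    while bits[m] == "0":
        m += 1
    return int(bits[m:2 * m + 1], 2) - 1
```

For example, `exp_golomb_encode(3)` yields `"00100"`; small symbol values get short codewords, which is what makes the code "universal" across a range of distributions.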
international conference on image processing | 2002
Ngai-Man Cheung; Yuji Itoh
The variable length code (VLC) tables in MPEG-1/2/4 and H.26X are fixed and optimized for a limited range of bit-rates, and they cannot handle a variety of applications. The universal variable length code (UVLC) is a new scheme to encode syntax elements and has some configurable capabilities. It is being considered in the ITU-T H.26L. However, the configurable feature of UVLC has not been well explored. In this paper we describe a VLC scheme which uses configurations as parameters to adapt UVLC to different symbol distributions of different applications. We also propose a method to automatically determine the configuration parameters based on a genetic algorithm (GA). Experimental results show that our method can achieve very good coding efficiency while drastically simplifying the encoding and decoding process, and is applicable to a variety of applications.
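To illustrate the idea of searching code-configuration parameters with a GA, the toy sketch below evolves the parameter k of a Golomb-Rice code to fit a given symbol distribution. The fitness function (total encoded bits), the Rice code itself, and all names are stand-ins, not the paper's UVLC configurations:

```python
import random

def rice_length(n: int, k: int) -> int:
    """Bits used by a Golomb-Rice codeword with parameter k:
    unary quotient (n >> k, plus a stop bit) and a k-bit remainder."""
    return (n >> k) + 1 + k

def total_bits(symbols, k):
    """Fitness proxy: total bits to encode all symbols with parameter k."""
    return sum(rice_length(s, k) for s in symbols)

def ga_search(symbols, pop_size=20, gens=30, k_max=15, seed=0):
    """Minimal GA: keep the fitter half, refill with mutated copies."""
    rng = random.Random(seed)
    pop = [rng.randint(0, k_max) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=lambda k: total_bits(symbols, k))
        survivors = pop[: pop_size // 2]
        children = []
        while len(survivors) + len(children) < pop_size:
            k = rng.choice(survivors)
            k = min(k_max, max(0, k + rng.choice([-1, 0, 1])))  # mutate +/-1
            children.append(k)
        pop = survivors + children
    return min(pop, key=lambda k: total_bits(symbols, k))
```

In the paper the search space is the UVLC configuration (symbol partitioning and code-size assignment) rather than a single integer, but the selection/mutation loop has the same shape.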
international conference on acoustics, speech, and signal processing | 1998
Ngai-Man Cheung; Steven Trautmann; Andrew Horner
Head-related transfer functions (HRTFs) describe the spectral filtering that occurs between a source sound and the listener's eardrum. Since HRTFs vary as a function of the relative source location and subject, practical implementation of 3D audio must take into account a large set of HRTFs for different azimuths and elevations. Previous work has proposed several HRTF models for data reduction. This paper describes our work in applying genetic algorithms to find a set of HRTF basis spectra, and the normal equation method to compute the optimal combination of linear weights to represent the individual HRTFs at different azimuths and elevations. The genetic algorithm selects the basis spectra from the set of original HRTF amplitude responses, using an average relative spectral error as the fitness function. Encouraging results from the experiments suggest that genetic algorithms provide an effective approach to this data reduction problem.
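The weight-computation step can be sketched via the normal equations: for a basis matrix B and a target HRTF h, the least-squares weights w solve (BᵀB)w = Bᵀh. Below is a minimal pure-Python example for two hypothetical basis spectra (all names illustrative, not the paper's implementation):

```python
def solve_weights_2basis(b1, b2, h):
    """Least-squares weights (w1, w2) so that w1*b1 + w2*b2 best fits h,
    by solving the 2x2 normal equations (B^T B) w = B^T h in closed form."""
    dot = lambda u, v: sum(x * y for x, y in zip(u, v))
    a11, a12, a22 = dot(b1, b1), dot(b1, b2), dot(b2, b2)   # B^T B entries
    c1, c2 = dot(b1, h), dot(b2, h)                          # B^T h entries
    det = a11 * a22 - a12 * a12
    w1 = (c1 * a22 - c2 * a12) / det
    w2 = (a11 * c2 - a12 * c1) / det
    return w1, w2
```

If h happens to lie exactly in the span of the basis, e.g. h = 2*b1 + 3*b2, the solver recovers the weights (2, 3); for real HRTF magnitude responses it returns the projection minimizing squared spectral error.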
multimedia signal processing | 1999
Ngai-Man Cheung; Tsugio Kawashima; Yoshihide Iwata; Steven Trautmann; Jeff Tsay; Basavaraj I. Pawate
This paper describes our effort in implementing a software MPEG-2 video decoder using the TI C6201 DSP and Basava Technology. The hardware device, the DSP Enhanced Memory Module (DSP-MM), leverages the high computation performance of the C6201 DSP as well as the high-bandwidth memory access and efficient memory usage of Basava Technology. A prototype with a C6201 DSP and 32 MB of SDRAM shared memory is functioning in a PC environment running the Windows 95 operating system. We describe the video decoder in detail.
multimedia signal processing | 1997
Ngai-Man Cheung; S. Trautman
Head-related transfer functions, or HRTFs, refer to the spectral filtering from sound sources to the listener's eardrums. They are an important cue to spatial hearing. Since HRTFs vary as a function of relative source location and subject, practical implementation of 3D audio always faces a large set of HRTFs for different azimuths and elevations (even if non-individualized HRTFs are used). Previous works have proposed several models to represent HRTFs in order to achieve data reduction. In this paper, we describe our work in applying a genetic algorithm (GA) to HRTF modeling. Based on a linear combination model, our system uses a GA to find the basis spectra and the normal equation method to compute the weight matrix. Individual HRTFs at different azimuths and elevations are represented as a weighted combination of the basis spectra. A set of HRTFs from the MIT Media Lab's KEMAR measurements was used as the source data.
Journal of The Audio Engineering Society | 1996
Ngai-Man Cheung; Andrew Horner
Archive | 2003
Yuji Itoh; Ngai-Man Cheung
Archive | 2003
Ngai-Man Cheung; Yuji Itoh
Archive | 2003
Yuji Itoh; Ho-Cheon Wey; Ngai-Man Cheung
international computer music conference | 1995
Andrew Horner; Ngai-Man Cheung; James W. Beauchamp