Publication


Featured research published by Chin-Hui Lee.


IEEE Transactions on Acoustics, Speech, and Signal Processing | 1990

Automatic recognition of keywords in unconstrained speech using hidden Markov models

Jay G. Wilpon; Lawrence R. Rabiner; Chin-Hui Lee; E. R. Goldman

Modifications to a connected-word speech recognition algorithm based on hidden Markov models (HMMs) are described that allow it to recognize words from a predefined vocabulary list spoken in an unconstrained fashion. The novelty of this approach is that statistical models of both the actual vocabulary word and the extraneous speech and background are created. An HMM-based connected word recognition system is then used to find the best sequence of background, extraneous speech, and vocabulary word models for matching the actual input. With the proposed recognition algorithm, word recognition accuracies of 99.3% on purely isolated speech (i.e., only vocabulary items and background noise were present) and 95.1% when the vocabulary word was embedded in unconstrained extraneous speech were obtained for the five-word vocabulary.
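
The decoding idea in this abstract, finding the best sequence of background, extraneous-speech (filler), and keyword models, can be sketched as a Viterbi search over a small composite state graph. The sketch below is only an illustration of that idea, not the paper's system; the state counts, the self-loop/advance penalties, and the per-frame emission log-likelihoods in `loglik` (which would come from trained models) are all assumptions.

```python
import numpy as np

def build_composite(n_filler, n_keyword, self_loop=-0.5, advance=-1.0):
    """Transition log-probabilities for a composite HMM: an arbitrary-length
    background/filler loop that may pass through one left-to-right keyword
    model.  State layout: [filler 0..n_filler-1 | keyword 0..n_keyword-1].
    The penalties are illustrative constants, not trained values."""
    n = n_filler + n_keyword
    A = np.full((n, n), -np.inf)
    for s in range(n):
        A[s, s] = self_loop                        # self-loops everywhere
    for s in range(n - 1):
        if s != n_filler - 1:                      # left-to-right within each model
            A[s, s + 1] = advance
    A[n_filler - 1, 0] = advance                   # filler may repeat itself
    A[n_filler - 1, n_filler] = advance            # filler end -> keyword start
    A[n - 1, 0] = advance                          # keyword end -> back to filler
    return A

def viterbi_path(loglik, A):
    """Best state sequence through the composite graph.
    loglik[s, t] is the emission log-likelihood of state s at frame t."""
    n, T = loglik.shape
    delta = np.full(n, -np.inf)
    delta[0] = loglik[0, 0]                        # start in the first filler state
    back = np.zeros((T, n), dtype=int)
    for t in range(1, T):
        cand = delta[:, None] + A                  # cand[i, j]: from state i to j
        back[t] = cand.argmax(axis=0)
        delta = cand.max(axis=0) + loglik[:, t]
    path = [int(np.argmax(delta))]
    for t in range(T - 1, 0, -1):                  # backtrace
        path.append(int(back[t, path[-1]]))
    return path[::-1]

# The keyword is "spotted" if the best path ever enters a keyword state:
# path = viterbi_path(loglik, build_composite(n_filler, n_keyword))
# detected = any(s >= n_filler for s in path)
```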


IEEE Transactions on Acoustics, Speech, and Signal Processing | 1989

A frame-synchronous network search algorithm for connected word recognition

Chin-Hui Lee; Lawrence R. Rabiner

A description is given of an implementation of a novel frame-synchronous network search algorithm for recognizing continuous speech as a connected sequence of words according to a specified grammar. The algorithm, which has all the features of earlier methods, is inherently based on hidden Markov model (HMM) representations and is described in an easily understood, easily programmable manner. The new features of the algorithm include the capability of recording and determining (unique) word sequences corresponding to the several best paths to each grammar node, and the capability of efficiently incorporating a range of word and state duration scoring techniques directly into the forward search of the algorithm, thereby eliminating the need for a postprocessor as in previous implementations. It is also simple and straightforward to incorporate deterministic word transition rules and statistical constraints (probabilities) from a language model into the forward search of the algorithm.
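
A toy version of a frame-synchronous search with word-duration scoring folded into the forward pass (rather than applied by a post-processor) is sketched below. It is a simplified token-passing illustration, not the algorithm of the paper; the grammar representation, the callables `frame_logprob` and `duration_logprob`, and the beam width are hypothetical stand-ins for trained word HMMs and duration models.

```python
def frame_synchronous_search(frames, grammar, start_node, final_node,
                             frame_logprob, duration_logprob, beam=20.0):
    """Toy frame-synchronous word-network search.

    grammar[node] -> list of (word, next_node) arcs leaving that node.
    frame_logprob(word, j, frame) -> log-likelihood of the j-th frame of `word`
        (stands in for a per-state word-HMM score).
    duration_logprob(word, d) -> word-duration log-probability, applied at the
        word exit, directly inside the forward search.
    A hypothesis is keyed by (node, word, frames_in_word)."""
    hyps = {}
    for word, nxt in grammar[start_node]:
        hyps[(nxt, word, 1)] = (frame_logprob(word, 0, frames[0]), (word,))

    for frame in frames[1:]:
        new = {}
        best = max(s for s, _ in hyps.values())
        for (node, word, d), (score, seq) in hyps.items():
            if score < best - beam:                       # beam pruning
                continue
            # 1) stay inside the current word for one more frame
            key = (node, word, d + 1)
            s = score + frame_logprob(word, d, frame)
            if key not in new or s > new[key][0]:
                new[key] = (s, seq)
            # 2) exit the word (apply its duration score) and enter a new word
            exit_score = score + duration_logprob(word, d)
            for nword, nnode in grammar.get(node, []):
                key = (nnode, nword, 1)
                s = exit_score + frame_logprob(nword, 0, frame)
                if key not in new or s > new[key][0]:
                    new[key] = (s, seq + (nword,))
        hyps = new

    # keep only hypotheses whose last word ends at the final grammar node
    finals = [(score + duration_logprob(word, d), seq)
              for (node, word, d), (score, seq) in hyps.items()
              if node == final_node]
    return max(finals, default=(float("-inf"), ()))
```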


IEEE Transactions on Acoustics, Speech, and Signal Processing | 1988

On robust linear prediction of speech

Chin-Hui Lee

A robust linear prediction (LP) algorithm is proposed that minimizes the sum of appropriately weighted residuals. The weight is a function of the prediction residual, and the cost function is selected to give more weight to the bulk of small residuals while deemphasizing the small portion of large residuals. In contrast, the conventional LP procedure weights all prediction residuals equally. The robust algorithm takes into account the non-Gaussian nature of the excitation for voiced speech and gives a more efficient (lower variance) and less biased estimate of the prediction coefficients than conventional methods. The algorithm can be used in the front-end feature extractor of a speech recognition system and as an analyzer for a speech coding system. Testing on synthetic vowel data demonstrates that the robust LP procedure is able to reduce the formant and bandwidth error rates by more than an order of magnitude compared to conventional LP procedures and is relatively insensitive to the placement of the LPC (LP coding) analysis window and to the value of the pitch period, for a given section of speech signal.
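
The weighted-residual idea can be illustrated with iteratively reweighted least squares: the bulk of small residuals keeps full weight while the few large residuals (e.g., near glottal excitation instants) are down-weighted. The sketch below uses Huber-style weights and a median-absolute-deviation scale estimate; these specific choices and constants are assumptions for illustration, not the weight function of the paper.

```python
import numpy as np

def robust_lp(x, order=10, n_iter=10, c=1.5):
    """Robust LP coefficients via iteratively reweighted least squares.

    Conventional (covariance-method) LP weights all prediction residuals
    equally; here residuals larger than c times a robust scale estimate are
    shrunk, so the bulk of small residuals dominates the fit."""
    x = np.asarray(x, dtype=float)
    N = len(x)
    # Predict x[n] from x[n-1], ..., x[n-order] for n = order, ..., N-1
    X = np.column_stack([x[order - k - 1:N - k - 1] for k in range(order)])
    y = x[order:]
    w = np.ones_like(y)
    a = np.zeros(order)
    for _ in range(n_iter):
        sw = np.sqrt(w)
        a, *_ = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)  # weighted LS
        r = y - X @ a                                   # prediction residuals
        scale = 1.4826 * np.median(np.abs(r)) + 1e-12   # robust scale (MAD)
        u = np.abs(r) / (c * scale)
        w = np.where(u <= 1.0, 1.0, 1.0 / u)            # Huber weights: shrink outliers
    return a                                            # a[k] multiplies x[n-k-1]
```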


Human Language Technology | 1990

Improved acoustic modeling for continuous speech recognition

Chin-Hui Lee; Egidio P. Giachin; Lawrence R. Rabiner; Roberto Pieraccini; Aaron E. Rosenberg

We report on some recent improvements to an HMM-based, continuous speech recognition system which is being developed at AT&T Bell Laboratories. These advances, which include the incorporation of inter-word, context-dependent units and an improved feature analysis, lead to a recognition system which achieves better than 95% word accuracy for speaker-independent recognition of the 1000-word DARPA resource management task using the standard word-pair grammar (with a perplexity of about 60). It will be shown that the incorporation of inter-word units into training results in better acoustic models of word-juncture coarticulation and gives a 20% reduction in error rate. The effect of an improved set of spectral and log energy features is to further reduce the word error rate by about 30%. We also found that spectral vectors corresponding to the same speech unit behave differently statistically, depending on whether they are at word boundaries or within a word. The results suggest that intra-word and inter-word units should be modeled independently, even when they appear in the same context. Using a set of sub-word units which included variants for intra-word and inter-word, context-dependent phones, an additional decrease of about 10% in word error rate resulted.
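
The distinction between intra-word and inter-word context-dependent units can be illustrated with a small expansion routine that takes cross-word phone context at word junctures and keeps boundary units in a separate inventory. The lexicon, the unit-naming convention, and the 'inter'/'intra' tags below are illustrative assumptions, not the unit set used in the paper.

```python
def expand_to_triphones(words, lexicon):
    """Expand a word sequence into context-dependent phone units, keeping
    word-boundary (inter-word) units distinct from word-internal ones.
    lexicon maps each word to its phone list (hypothetical pronunciations)."""
    phones = []
    for w in words:
        for i, p in enumerate(lexicon[w]):
            at_boundary = (i == 0) or (i == len(lexicon[w]) - 1)
            phones.append((p, at_boundary))
    units = []
    for i, (p, boundary) in enumerate(phones):
        left = phones[i - 1][0] if i > 0 else "sil"           # cross-word left context
        right = phones[i + 1][0] if i < len(phones) - 1 else "sil"
        pos = "inter" if boundary else "intra"                # separate unit inventories
        units.append(f"{left}-{p}+{right}/{pos}")
    return units

# Example with made-up pronunciations:
# expand_to_triphones(["set", "two"], {"set": ["s", "eh", "t"], "two": ["t", "uw"]})
# -> ['sil-s+eh/inter', 's-eh+t/intra', 'eh-t+t/inter', 't-t+uw/inter', 't-uw+sil/inter']
```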


Human Language Technology | 1989

Acoustic modeling of subword units for large vocabulary speaker independent speech recognition

Chin-Hui Lee; Lawrence R. Rabiner; Roberto Pieraccini; Jay G. Wilpon

The field of large vocabulary, continuous speech recognition has advanced to the point where there are several systems capable of attaining between 90 and 95% word accuracy for speaker independent recognition of a 1000 word vocabulary, spoken fluently for a task with a perplexity (average word branching factor) of about 60. There are several factors which account for the high performance achieved by these systems, including the use of hidden Markov models (HMM) for acoustic modeling, the use of context dependent sub-word units, the representation of between-word phonemic variation, and the use of corrective training techniques to emphasize differences between acoustically similar words in the vocabulary. In this paper we describe one of the large vocabulary speech recognition systems which is being developed at AT&T Bell Laboratories, and discuss the methods used to provide high word recognition accuracy. In particular, we focus on the techniques used to obtain acoustic models of the sub-word units (both context independent and context dependent units), and discuss the resulting system performance as a function of the type of acoustic modeling used.


Journal of the Acoustical Society of America | 1997

Speech recognition employing key word modeling and non-key word modeling

Chin-Hui Lee; Lawrence R. Rabiner; Jay Gordon Wilpon


Journal of the Acoustical Society of America | 1998

Method of and apparatus for signal recognition that compensates for mismatching

Chin-Hui Lee; Ananth Sankar


Archive | 2009

Cross-Pollination in Signal Processing Technical Areas

Alex Acero; John G. Apostolopoulos; Brendan J. Frey; Sadaoki Furui; Alex B. Gershman; Mazin Gilbert; Yingbo Hua; Chin-Hui Lee; Bede Liu; Soo-Chang Pei; Michael Picheny; Roberto Pieraccini; Fernando Pereira; C. Principe; Phillip A. Regalia; Hideaki Sakai; Murat Tekalp; Anthony Vetro; Xiaodong Wang; Umit Batur; Andrea Cavallaro; Berna Erol; Rodrigo Capobianco Guido; Konstantinos Konstantinides; Andres Kwasinski; Besser Associates; Aleksandra Mojsilovic; George S. Moschytz; Methodist Hospital-Cornell; Dong Yu


Archive | 1995

Method and apparatus for signal recognition with compensation for mismatches

Chin-Hui Lee; Ananth Sankar


Archive | 2011

New Honor, New Initiatives, and New Impact to Come

Holger Boche; Yen-Kuang Cheng; Liang-Gee Chen; Brendan J. Frey; Alex B. Gershman; Mazin Gilbert; Jenq-Neng Hwang; Michael I. Jordan; Vikram Krishnamurthy; Chin-Hui Lee; Hongwei Liu; Ray Liu; Tom Luo; Nelson Morgan; Fernando Pereira; Roberto Pieraccini; Anthony Vetro; Patrick J. Wolfe; Andrea Cavallaro; Rodrigo Capobianco Guido; Andres Kwasinski; Rick Lyons; Aleksandra Mojsilovic; Marcelo G. S. Bruno; Gwenael Doerr; Yan Lindsay Sun

Collaboration


Dive into Chin-Hui Lee's collaboration.

Top Co-Authors

Biing-Hwang Juang
Georgia Institute of Technology

Andrea Cavallaro
Queen Mary University of London

Fernando Pereira
Instituto Superior Técnico