
Publication


Featured research published by Kyungmin Na.


International Symposium on Circuits and Systems | 1994

A discriminative training algorithm for predictive neural network models

Kyungmin Na; JaeYeol Rheem; Souguil Ann

Predictive neural network models are powerful speech recognition models based on nonlinear pattern prediction. These models, however, suffer from poor discrimination between acoustically similar speech signals. In this paper, we propose a new discriminative training algorithm for predictive neural network models based on the generalized probabilistic descent (GPD) algorithm and the minimum classification error formulation. The proposed algorithm allows direct minimization of the recognition error rate. Evaluation of our training algorithm on Korean digits demonstrates its effectiveness, yielding a 30% reduction in recognition error.
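The minimum classification error formulation trained by GPD can be sketched as follows: a misclassification measure compares the correct class's discriminant score against a soft maximum over competitors, and a sigmoid smooths the 0/1 error count into a differentiable loss. All names, parameter values, and the per-class score interface here are illustrative assumptions, not details from the paper.

```python
# Minimal sketch of minimum classification error (MCE) training with
# generalized probabilistic descent (GPD), assuming per-class
# discriminant scores g_k (e.g. negated prediction residuals of a
# predictive neural network model). Illustrative only.
import numpy as np

def mce_loss_and_grad(scores, correct, eta=2.0, alpha=1.0):
    """Smoothed classification-error loss for one training token.

    scores  -- 1-D array of discriminant scores g_k, higher is better
    correct -- index of the true class
    eta     -- sharpness of the competitor soft-max
    alpha   -- slope of the sigmoid smoothing the 0/1 error count
    """
    g_c = scores[correct]
    others = np.delete(scores, correct)
    # Soft maximum over competing classes (log-sum-exp form).
    g_comp = np.log(np.mean(np.exp(eta * others))) / eta
    # Misclassification measure: positive means a recognition error.
    d = -g_c + g_comp
    # Sigmoid turns the error count into a differentiable loss.
    loss = 1.0 / (1.0 + np.exp(-alpha * d))
    dloss_dd = alpha * loss * (1.0 - loss)
    # Gradient of the loss w.r.t. every class score.
    grad = np.zeros_like(scores)
    w = np.exp(eta * others)
    w /= w.sum()
    grad[np.arange(len(scores)) != correct] = dloss_dd * w
    grad[correct] = -dloss_dd
    return loss, grad

scores = np.array([2.0, 1.5, 0.2])
loss, grad = mce_loss_and_grad(scores, correct=0)
# Descending this gradient raises the correct class's score and
# lowers the competitors' scores, directly reducing the smoothed
# recognition error rate.
```

GPD then applies this gradient per token (sequential, probabilistic descent) rather than batching it, which is what allows the error rate itself to be minimized directly.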


International Symposium on Neural Networks | 1995

Corrective training of hidden control neural network

Kyungmin Na; Soo-Ik Chae; Souguil Ann

A corrective training algorithm for the hidden control neural network (HCNN) is proposed in this paper, with application to isolated spoken Korean digit recognition. The proposed algorithm heuristically minimizes the number of recognition errors, which improves the discriminative power of conventional HCNN-based speech recognizers. Experimental results showed a 25% reduction in the number of recognition errors for the closed test and a 10% reduction for the open test.


International Symposium on Neural Networks | 1998

An HMM/MLP hybrid approach for improving discrimination in speech recognition

Kyungmin Na; Soo-Ik Chae

We propose an HMM/MLP hybrid scheme for achieving high discrimination in speech recognition. In conventional hybrid approaches, an MLP is trained as a distribution estimator or as a VQ labeler, and the HMMs perform recognition using the output of the MLP. In the proposed method, by contrast, the HMMs generate a new feature vector of fixed dimension by concatenating their state log-likelihoods, and an MLP discriminator performs recognition using this new feature vector as its input. The proposed method was tested on the nine American E-set letters from the ISOLET database of OGI. For comparison, a weighted HMM (WHMM) algorithm and a GPD-based WHMM algorithm, which use an adaptively trained linear discriminator, were also tested. In most cases, the recognition rates of the proposed method on the closed-test and open-test sets were higher than those of the conventional methods.
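The feature construction described above, where the HMMs produce a fixed-dimension vector of state log-likelihoods for the MLP, can be sketched as follows. The toy Gaussian state models, hard nearest-state alignment, and all dimensions are illustrative assumptions standing in for the paper's trained word HMMs.

```python
# Sketch of the hybrid feature construction: each word HMM contributes
# its per-state log-likelihoods, and the concatenation forms a
# fixed-dimension input for an MLP discriminator. Illustrative only.
import numpy as np

rng = np.random.default_rng(0)

def state_log_likelihoods(frames, means):
    """Total log-likelihood of the frames under each state's
    unit-variance Gaussian, after a hard nearest-state alignment."""
    # dists[t, s] = squared distance of frame t to state mean s
    dists = ((frames[:, None, :] - means[None, :, :]) ** 2).sum(-1)
    align = dists.argmin(axis=1)        # Viterbi-like hard alignment
    ll = np.zeros(len(means))
    for s in range(len(means)):
        ll[s] = -0.5 * dists[align == s, s].sum()
    return ll

# Two toy 3-state word models over 4-dimensional features.
hmm_means = [rng.normal(size=(3, 4)) for _ in range(2)]
utterance = rng.normal(size=(20, 4))   # 20 frames of features

# Fixed-dimension feature: concatenated state log-likelihoods of all
# HMMs, regardless of the utterance length.
feature = np.concatenate([state_log_likelihoods(utterance, m)
                          for m in hmm_means])
# feature has 2 models x 3 states = 6 components, ready for the MLP.
```

Because the vector's dimension depends only on the number of models and states, utterances of any length map to the same input size, which is what makes a fixed-topology MLP discriminator applicable.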


International Symposium on Neural Networks | 1997

Single-sensor active noise cancellation using recurrent neural network predictors

Kyungmin Na; Soo-Ik Chae

In this paper, we propose a recurrent neural network (RNN) predictor with an application to a single-sensor active noise cancellation (ANC) system. The proposed RNN predictor has one hidden layer whose neurons fall into two categories: recurrent hidden neurons and non-recurrent hidden neurons. Because of the RNN's ability to model time-varying signals such as acoustic noise, the proposed RNN may be more suitable than LMS-type digital filters and multilayer perceptrons (MLPs). Moreover, the number of non-recurrent hidden neurons can be increased arbitrarily according to the complexity of a given problem, with relatively little additional computation during training. In a simulation on noise data from a moisture-removing machine, the proposed approach achieved about 22.35 dB of attenuation, compared with 20.83 dB for the MLP-based approach and 14.35 dB for a filtered-x LMS algorithm.
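The split hidden layer described above can be sketched as a forward pass in which only the recurrent subset of hidden units feeds back to the next time step, so enlarging the non-recurrent subset leaves the recurrent state small. Dimensions, initialization, and the linear output are illustrative assumptions.

```python
# One-step forward pass of a predictor with a single hidden layer
# split into recurrent and non-recurrent units, as described above.
# All sizes and weights are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
n_in, n_rec, n_ff, n_out = 8, 4, 16, 1   # n_ff can grow cheaply

W_in = rng.normal(scale=0.1, size=(n_rec + n_ff, n_in))
W_rec = rng.normal(scale=0.1, size=(n_rec + n_ff, n_rec))
W_out = rng.normal(scale=0.1, size=(n_out, n_rec + n_ff))

def step(x, h_rec):
    """Predict the next noise sample from the input window x and the
    previous recurrent state h_rec."""
    h = np.tanh(W_in @ x + W_rec @ h_rec)
    y = W_out @ h                  # linear output: predicted sample
    return y, h[:n_rec]            # only the first n_rec units recur

h = np.zeros(n_rec)
noise = rng.normal(size=(100, n_in))   # stand-in for sensed noise frames
preds = []
for x in noise:
    y, h = step(x, h)
    preds.append(y[0])
# In an ANC loop, the anti-noise signal would be derived from preds.
```

Note that the feedback matrix `W_rec` has only `n_rec` columns, so increasing `n_ff` grows the feedforward capacity without enlarging the recurrent state that must be propagated and differentiated through time.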


International Conference on Acoustics, Speech and Signal Processing | 1996

GPD training of the state weighting functions in hidden control neural network

Kyungmin Na; Soo-Ik Chae

This paper proposes a weighted hidden control neural network (WHCNN), which incorporates a state weighting function into the conventional HCNN. The state weighting function is trained by the generalized probabilistic descent (GPD) method, which allows direct minimization of the number of recognition errors and thus approximation of the minimum-error-rate recognizer. The results also show that the prediction residuals from each state contain rich discriminative information. Experimental results on Korean digit recognition showed 25% and about 16.7% reductions in the number of recognition errors for the closed and open tests, respectively. The derived algorithm can easily be applied to other predictive neural networks.
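The effect of the state weighting can be sketched as follows: if a word is scored by a weighted sum of its per-state prediction residuals, then weights that emphasize the discriminative states widen the margin between acoustically similar word models. All numbers and the specific weight values are illustrative assumptions, not results from the paper.

```python
# Sketch of state weighting in a WHCNN-style recognizer, assuming a
# word is scored by a weighted sum of per-state prediction residuals
# (smaller is better). Illustrative numbers only.
import numpy as np

def word_score(residuals, weights):
    # Weighted accumulated prediction residual for one word model.
    return float(weights @ residuals)

# Two acoustically similar word models; only state 1 discriminates.
res_a = np.array([0.50, 0.10, 0.40])   # residuals of utterance vs model A
res_b = np.array([0.48, 0.30, 0.41])   # residuals of utterance vs model B

uniform = np.ones(3) / 3               # conventional HCNN: equal weights
trained = np.array([0.1, 0.8, 0.1])    # GPD-trained weights stress state 1

# With uniform weights the decision margin is small; trained weights
# widen it by emphasizing the state whose residuals differ.
margin_uniform = word_score(res_b, uniform) - word_score(res_a, uniform)
margin_trained = word_score(res_b, trained) - word_score(res_a, trained)
```

In the paper the weights themselves are obtained by GPD descent on a smoothed error count; the sketch only shows why unequal state weights can improve discrimination.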


Conference of the International Speech Communication Association | 1995

Discriminative training of hidden Markov models using overall risk criterion and reduced gradient method.

Kyungmin Na; Bumki Jeon; Dong-Il Chang; Soo-Ik Chae; Souguil Ann


Conference of the International Speech Communication Association | 1995

Recurrent neural prediction models for speech recognition.

Kyungmin Na; Jekwan Ryu; Dong-Il Chang; Soo-Ik Chae; Souguil Ann


Archive | 1999

Frequency-domain implementation of block adaptive filters for ICA-based multichannel blind deconvolution

Kyungmin Na; Sangcheol Kang; Kyung Jin Lee; Soo-Ik Chae


Electronics Letters | 1995

Modified delta coding algorithm for real parameter optimisation

Kyungmin Na; Soo-Ik Chae; Souguil Ann


European Signal Processing Conference | 2000

Frequency-domain separation of convolved non-stationary signals with adaptive non-causal FIR filters

Kyungmin Na; Kyung Jin Lee; Soo-Ik Chae

Collaboration


Dive into Kyungmin Na's collaboration.

Top Co-Authors

Soo-Ik Chae, Seoul National University
Souguil Ann, Seoul National University
JaeYeol Rheem, Seoul National University
Dong-Il Chang, Seoul National University