Jong Hwan Lee
KAIST
Publication
Featured research published by Jong Hwan Lee.
International Conference on Acoustics, Speech, and Signal Processing | 2000
Jong Hwan Lee; Ho Young Jung; Te-Won Lee; Soo-Young Lee
In this paper, we propose new speech features obtained by applying independent component analysis to human speech. When independent component analysis is applied to speech signals for efficient encoding, the adapted basis functions resemble Gabor-like features. The trained basis functions contain some redundancy, so we select a subset of them using a reordering method. The basis functions are then approximately ordered from low-frequency to high-frequency basis vectors, which is consistent with the fact that human speech carries much more information in the low-frequency range. These features can be used in automatic speech recognition systems, and the proposed method gives much better recognition rates than conventional mel-frequency cepstral features.
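The reordering idea can be sketched as follows. This is a minimal illustration, not the paper's method: it assumes basis vectors are sorted by their dominant FFT frequency bin, and uses synthetic sinusoids in place of ICA-adapted basis functions trained on speech.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for ICA-adapted basis functions: each row is one basis
# vector (e.g. a 64-sample speech segment). In the paper these would
# come from training ICA on speech frames; here they are synthetic
# sinusoids at random frequencies plus noise, for illustration only.
n_basis, n_samples = 16, 64
freqs = rng.integers(1, n_samples // 2, size=n_basis)
t = np.arange(n_samples)
basis = np.sin(2 * np.pi * freqs[:, None] * t / n_samples)
basis += 0.1 * rng.standard_normal(basis.shape)

# Reordering: find each basis vector's dominant frequency bin from the
# magnitude of its real FFT, then sort from low to high frequency.
dominant_bin = np.abs(np.fft.rfft(basis, axis=1)).argmax(axis=1)
order = np.argsort(dominant_bin)
reordered = basis[order]

print(dominant_bin[order])  # non-decreasing after reordering
```

After this step the low-frequency vectors, which carry most of the information in speech, come first, so truncating the list keeps the most useful features.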
Neural Processing Letters | 2002
Jong Hwan Lee; Te-Won Lee; Ho Young Jung; Soo-Young Lee
A new efficient code for speech signals is proposed. To represent speech signals with minimum redundancy, we use independent component analysis to adapt features (basis vectors) that efficiently encode the speech signals. The learned basis vectors are sparsely distributed and localized in both time and frequency. Time-frequency analysis of the basis vectors reveals a property similar to the critical bandwidth of the human auditory system. Our results suggest that the obtained codes of speech signals are sparse and biologically plausible.
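Sparseness of a learned code is commonly quantified by excess kurtosis: a sparse, heavy-tailed coefficient distribution has large positive kurtosis, while a Gaussian has zero. A minimal numpy check of that property, using Laplacian samples as a stand-in for sparse code coefficients (an assumption, not the paper's data):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

def excess_kurtosis(x):
    """Fourth standardized moment minus 3 (zero for a Gaussian)."""
    z = (x - x.mean()) / x.std()
    return np.mean(z**4) - 3.0

# Laplacian samples stand in for sparse code coefficients of speech;
# Gaussian samples are the non-sparse reference.
sparse_coeffs = rng.laplace(size=n)
gaussian = rng.standard_normal(n)

print(excess_kurtosis(sparse_coeffs))  # clearly positive: heavy-tailed, sparse
print(excess_kurtosis(gaussian))      # near zero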
Neurocomputing | 2008
Jong Hwan Lee; Sang Hoon Oh; Soo-Young Lee
To overcome the limited performance of a conventional monaural model, this letter proposes a binaural blind dereverberation model. Its learning rule is derived from a blind least-squares measure by exploiting higher-order characteristics of the output components. To prevent unwanted whitening of the speech signal, we adopt a semi-blind approach that employs a pre-determined whitening filter. The proposed model is evaluated under several simulated conditions, and the results show better speech quality than that of the monaural model. The applicability of the model to real environments is also demonstrated on real recorded data. In particular, in real speech recognition experiments the proposed model improves word error rates from 13.9±5.7% to 4.1±3.5% across 13 test speakers.
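The pre-whitening idea can be sketched in a few lines. This is a toy illustration, assuming a first-order filter matched to an AR(1) "speech-like" signal; the paper's actual whitening filter is not specified here. Applying a fixed whitening filter up front means the later blind adaptation does not have to flatten the speech spectrum itself, which is the unwanted effect the semi-blind approach avoids.

```python
import numpy as np

rng = np.random.default_rng(0)

def lag1_autocorr(x):
    """Normalized autocorrelation at lag 1."""
    x = x - x.mean()
    return np.dot(x[:-1], x[1:]) / np.dot(x, x)

# Colored "speech-like" signal: an AR(1) process with coefficient 0.9.
n, a = 100_000, 0.9
e = rng.standard_normal(n)
x = np.empty(n)
x[0] = e[0]
for i in range(1, n):
    x[i] = a * x[i - 1] + e[i]

# Pre-determined whitening filter: y[n] = x[n] - a * x[n-1].
y = x.copy()
y[1:] -= a * x[:-1]

print(lag1_autocorr(x))  # strongly correlated (about 0.9)
print(lag1_autocorr(y))  # near zero: whitened
```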
International Symposium on Neural Networks | 2003
Jong Hwan Lee; Soo-Young Lee
In a real room environment, a sound source is distorted by delayed versions of itself reflected from walls. This room reverberation severely degrades the intelligibility of speech and the performance of automatic speech recognition systems. Blind deconvolution seeks the inverse of the reverberation channel when only convolved versions of the source are available at the receiver. However, existing blind deconvolution algorithms assume that the source signal has an independent, identically distributed (IID) non-Gaussian probability density function (PDF). In this research, colored, nonstationary, non-IID speech signals were transformed into signals as close to IID as possible by an ICA-based independence transform, and the resulting signals were processed with the infomax blind deconvolution algorithm on simulated minimum-phase finite impulse response (FIR) channels. Compared with Torkkola's pre-whitening method, the proposed method achieved much better performance, with a signal-to-reverberant-components ratio (SRR) of about 30 dB.
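The SRR figure quoted above is an energy ratio between the clean source and the residual reverberant components left in the output. A minimal sketch of the metric on toy data (the paper's exact alignment and scaling conventions are assumptions here):

```python
import numpy as np

rng = np.random.default_rng(0)

def srr_db(clean, processed):
    """Signal-to-reverberant-components ratio in dB: energy of the
    clean source over the energy of whatever else remains."""
    residual = processed - clean
    return 10.0 * np.log10(np.sum(clean**2) / np.sum(residual**2))

# Toy example: a "deconvolved" output that still carries a small
# residual reverberant component.
s = rng.standard_normal(10_000)             # clean source
y = s + 0.03 * rng.standard_normal(10_000)  # output with residual

print(srr_db(s, y))  # roughly 30 dB for this residual level
```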
International Conference on Independent Component Analysis and Signal Separation | 2006
Hyung Min Park; Jong Hwan Lee; Sang Hoon Oh; Soo-Young Lee
In performing blind deconvolution to remove reverberation from speech signals, most acoustic deconvolution filters need a great many taps, and acoustic environments are often time-varying. Therefore, the deconvolution filter coefficients should reach their desired values with limited data, but conventional methods need a lot of data for the coefficients to converge. In this paper, we place sparse priors on the acoustic deconvolution filters to speed up convergence and obtain better performance. To derive a learning algorithm that incorporates priors on the deconvolution filters, we show that a deconvolution algorithm can be obtained from the joint probability density of the observed signals, and that prior information enters the algorithm through the posterior probability density. Simulation results show that sparseness of the acoustic deconvolution filters can be successfully exploited in adapting the filters, improving both convergence and performance.
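Under a Laplacian (sparse) prior, the log-posterior adds an L1 penalty to the data term, which can be handled with a soft-threshold step after each gradient update. A minimal sketch of that mechanism on a toy problem: the paper's deconvolution likelihood is replaced here, purely for illustration, by a least-squares fit to a sparse 20-tap filter.

```python
import numpy as np

rng = np.random.default_rng(0)

def soft_threshold(w, t):
    """Proximal operator of the L1 norm (Laplacian prior on taps)."""
    return np.sign(w) * np.maximum(np.abs(w) - t, 0.0)

# Toy problem: recover a sparse 20-tap filter from noisy linear
# measurements X @ w_true + noise (stand-in for the data likelihood).
n, p = 200, 20
w_true = np.zeros(p)
w_true[[2, 7]] = [1.0, -0.5]
X = rng.standard_normal((n, p))
y = X @ w_true + 0.05 * rng.standard_normal(n)

# Proximal gradient (ISTA): gradient step on the data term, then a
# soft-threshold step that encodes the sparse prior.
w = np.zeros(p)
lr, lam = 0.5, 0.05
for _ in range(300):
    grad = X.T @ (X @ w - y) / n
    w = soft_threshold(w - lr * grad, lr * lam)

print(np.count_nonzero(np.abs(w) > 0.02))  # only a few active taps survive
```

The soft-threshold step is what drives most taps exactly to zero, which is how the sparse prior speeds up adaptation when data is limited.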
International Conference on Neural Information Processing | 2004
Jong Hwan Lee; Sang Hoon Oh; Soo-Young Lee
In this paper, an adaptive blind dereverberation method based on a speech generative model is presented. Our ICA-based speech generative model decomposes speech into independent sources. Experimental results show that the proposed blind dereverberation model performs successfully even for non-minimum-phase channels.
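A channel is non-minimum phase when its transfer function has a zero outside the unit circle, in which case the causal inverse filter is unstable; that is why plain inverse filtering fails on such channels and a model-based approach is needed. A quick numpy check on two toy 2-tap channels (hypothetical coefficients, for illustration only):

```python
import numpy as np

# Toy FIR channels h(z) = h0 + h1 * z^-1 = (h0*z + h1) / z,
# so np.roots([h0, h1]) gives the zero at z = -h1/h0.
min_phase = np.array([1.0, 0.5])      # zero at -0.5, inside unit circle
non_min_phase = np.array([1.0, 2.0])  # zero at -2.0, outside unit circle

z_min = np.roots(min_phase)
z_non = np.roots(non_min_phase)

print(np.abs(z_min))  # < 1: a stable causal inverse exists
print(np.abs(z_non))  # > 1: the causal inverse 1/h(z) is unstable
```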
ICA | 2000
Jong Hwan Lee; Ho Young Jung; Te-Won Lee; Soo-Young Lee
Electronics Letters | 2000
Jong Hwan Lee; Ho Young Jung; Te-Won Lee; Soo-Young Lee
Archive | 2017
Seung-Schik Yoo; Jong Hwan Lee; Yongzhi Zhang; Wonhye Lee; Krisztina Fischer; Alexandra J. Golby; Nathan McDannold; Ferenc A. Jolesz
2nd TERMIS World Congress | 2009
Wonhye Lee; Seung-Schik Yoo; Vivian K. Lee; Jong Hwan Lee; Krisztina Fischer; Je-Kyun Park