Publication


Featured research published by Gil-Jin Jang.


IEEE Signal Processing Letters | 2003

Single-channel signal separation using time-domain basis functions

Gil-Jin Jang; Te-Won Lee; Yung-Hwan Oh

We present a new technique for achieving blind source separation when given only a single-channel recording. The main idea is based on exploiting the inherent time structure of sound sources by learning a priori sets of time-domain basis functions that encode the sources in a statistically efficient manner. We derive a learning algorithm using a maximum likelihood approach given the observed single-channel data and sets of basis functions. For each time point, we infer the source parameters and their contribution factors using a flexible but simple density model. We show separation results for mixtures of two music signals as well as two voice signals.
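As a rough illustration of the idea, the sketch below (Python with NumPy and scikit-learn) learns a time-domain basis per source and recovers per-frame source contributions from a single-channel mixture. It is not the authors' code: FastICA stands in for the paper's ICA learning rule, plain least squares stands in for its maximum-likelihood inference under sparse priors, and the frame length and component counts are assumed values.

```python
# Illustrative sketch only, not the authors' code. FastICA replaces the paper's
# ICA learning rule; least squares replaces its ML inference under sparse priors.
import numpy as np
from sklearn.decomposition import FastICA

FRAME = 128  # samples per analysis frame (assumed)

def frame_signal(x, frame=FRAME):
    """Cut a 1-D signal into non-overlapping frames (one frame per row)."""
    n = (len(x) // frame) * frame
    return x[:n].reshape(-1, frame)

def learn_basis(train_signal, n_components=64):
    """Learn time-domain basis functions for one source with ICA."""
    ica = FastICA(n_components=n_components, random_state=0)
    ica.fit(frame_signal(train_signal))
    return ica.mixing_  # columns are time-domain basis functions (FRAME x n_components)

def separate_frame(mix_frame, basis_a, basis_b):
    """Recover per-source contributions for one mixture frame by projecting it
    onto the concatenated bases (least-squares surrogate for ML inference)."""
    B = np.hstack([basis_a, basis_b])
    coeff, *_ = np.linalg.lstsq(B, mix_frame, rcond=None)
    k = basis_a.shape[1]
    return basis_a @ coeff[:k], basis_b @ coeff[k:]
```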


International Conference on Acoustics, Speech, and Signal Processing | 2001

Learning statistically efficient features for speaker recognition

Gil-Jin Jang; Te-Won Lee; Yung-Hwan Oh

We apply independent component analysis to the problem of finding efficient features for a speaker by extracting an optimal basis. The basis functions learned by the algorithm are oriented and localized in both space and frequency, bearing a resemblance to Gabor functions. Speech segments are assumed to be generated by a linear combination of the basis functions, so the distribution of a speaker's speech segments is modeled by a basis whose components are calculated to be as independent of one another as possible on the given training data. To assess the efficiency of the basis functions, we performed speaker classification experiments and compared our results with conventional Fourier-based features. Our results show that the proposed method is more efficient than the conventional Fourier-based features, in that it achieves a higher classification rate.
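A minimal sketch of the feature-extraction idea follows, under assumptions of mine rather than the paper's pipeline: one shared ICA basis is fitted to framed speech from all speakers, and an off-the-shelf SVM replaces the paper's per-speaker distribution modeling; the component count and integer speaker labels are hypothetical.

```python
# Minimal sketch under assumed parameters; the SVM back end replaces the
# paper's per-speaker basis/likelihood modeling.
import numpy as np
from sklearn.decomposition import FastICA
from sklearn.svm import SVC

def train_speaker_id(frames, labels, n_components=40):
    """Fit one ICA basis on training frames (rows), then a classifier on the
    resulting coefficients. labels are non-negative integer speaker IDs."""
    ica = FastICA(n_components=n_components, random_state=0).fit(frames)
    clf = SVC().fit(ica.transform(frames), labels)
    return ica, clf

def identify(frames, ica, clf):
    """Predict a speaker ID per frame and return the majority vote for the utterance."""
    votes = clf.predict(ica.transform(frames))
    return int(np.bincount(votes).argmax())
```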


International Conference on Acoustics, Speech, and Signal Processing | 2003

A subspace approach to single channel signal separation using maximum likelihood weighting filters

Gil-Jin Jang; Te-Won Lee; Yung-Hwan Oh

Our goal is to extract multiple source signals when only a single observation channel is available. We propose a new signal separation algorithm based on a subspace decomposition. The observation is transformed into subspaces of interest with different sets of basis functions. A flexible model for density estimation allows an accurate modeling of the distributions of the source signals in the subspaces, and we develop a filtering technique using a maximum likelihood (ML) approach to match the observed single channel data with the decomposition. Our experimental results show good separation performance on simulated mixtures of two music signals as well as two voice signals.
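A much-simplified sketch of the weighting-filter idea is given below: an STFT stands in for the paper's learned subspace bases, and the per-source weights reduce to Wiener-style variance ratios; the variance inputs, window length, and sampling rate are assumptions for illustration, not the paper's ML filters.

```python
# Simplified sketch: STFT subspace instead of learned bases; Wiener-style
# variance-ratio weights instead of the paper's ML weighting filters.
import numpy as np
from scipy.signal import stft, istft

def weighting_filters(var_a, var_b, eps=1e-12):
    """Per-coefficient weights: each source receives its share of the total variance."""
    total = var_a + var_b + eps
    return var_a / total, var_b / total

def separate(mix, var_a, var_b, fs=16000, nperseg=512):
    """Weight the mixture STFT per source and resynthesize both estimates.
    var_a and var_b must broadcast against the STFT array, e.g. shape (nperseg//2 + 1, 1)."""
    _, _, X = stft(mix, fs=fs, nperseg=nperseg)
    wa, wb = weighting_filters(var_a, var_b)
    _, src_a = istft(wa * X, fs=fs, nperseg=nperseg)
    _, src_b = istft(wb * X, fs=fs, nperseg=nperseg)
    return src_a, src_b
```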


The Journal of the Acoustical Society of Korea | 2012

Noise Spectrum Estimation Using Line Spectral Frequencies for Robust Speech Recognition

Gil-Jin Jang; Jeong-Sik Park; Sanghun Kim

This paper presents a novel method for estimating a reliable noise spectral magnitude for acoustic background noise suppression when only a single microphone recording is available. The proposed method derives noise estimates from spectral magnitudes measured at line spectral frequencies (LSFs), based on the observation that closely spaced LSFs lie near the peak frequencies of LPC spectra while isolated LSFs lie close to the relatively flat valleys. The parameters used in the proposed method are the LPC coefficients, their corresponding LSFs, and the gain of the LPC residual signal, so the method is well suited to LPC-based speech coders.
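The sketch below illustrates the LSF-based estimate under simplifying assumptions of mine: LPC via the autocorrelation method, LSFs from the roots of the symmetric and antisymmetric polynomials P(z) and Q(z), and a crude "isolated LSF" rule (nearest-neighbour gap above the median) standing in for the paper's selection logic; the helper names and thresholds are hypothetical.

```python
# Illustrative sketch, not the paper's implementation.
import numpy as np

def lpc(frame, order=12):
    """LPC coefficients a = [1, a1, ..., ap] via the autocorrelation method."""
    r = np.correlate(frame, frame, mode="full")[len(frame) - 1:len(frame) + order]
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    return np.concatenate(([1.0], np.linalg.solve(R, -r[1:order + 1])))

def lsf(a):
    """Line spectral frequencies (radians in (0, pi)) from the LPC polynomial A(z)."""
    p = np.concatenate((a, [0])) + np.concatenate(([0], a[::-1]))  # P(z)
    q = np.concatenate((a, [0])) - np.concatenate(([0], a[::-1]))  # Q(z)
    ang = np.angle(np.concatenate((np.roots(p), np.roots(q))))
    return np.sort(ang[(ang > 0) & (ang < np.pi)])

def lpc_magnitude(a, gain, freqs):
    """LPC spectral magnitude |G / A(e^{jw})| at the given frequencies (radians)."""
    A = np.exp(-1j * np.outer(freqs, np.arange(len(a)))) @ a
    return gain / np.abs(A)

def noise_estimate(frame, order=12, gain=1.0):
    """Read the LPC magnitude at LSFs far from their neighbours (spectral valleys);
    the median-gap threshold is an assumed stand-in for the paper's rule."""
    a = lpc(frame, order)
    w = lsf(a)
    gaps = np.minimum(np.diff(w, prepend=0.0), np.diff(w, append=np.pi))
    return lpc_magnitude(a, gain, w[gaps > np.median(gaps)])
```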


Optical Science and Technology, SPIE's 48th Annual Meeting | 2003

Sparse representation in speech signal processing

Te-Won Lee; Gil-Jin Jang; Oh-Wook Kwon

We review the sparse representation principle for processing speech signals. A transformation for encoding the speech signals is learned such that the resulting coefficients are as independent as possible. We use independent component analysis with an exponential prior to learn a statistical representation for speech signals. This representation leads to extremely sparse priors that can be used for encoding speech signals for a variety of purposes. We review applications of this method for speech feature extraction, automatic speech recognition and speaker identification. The method is also suited to the difficult problem of separating two sounds given only a single microphone.
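As a small illustration of how sparse such codes are in practice (assumed frame and component counts; not the review's own code), the sketch below learns an ICA transform on framed speech and reports the average excess kurtosis of the coefficients, which is large for heavy-tailed, exponential-like priors.

```python
# Illustrative sketch with assumed parameters (not from the paper).
import numpy as np
from scipy.stats import kurtosis
from sklearn.decomposition import FastICA

def sparse_code(frames, n_components=64):
    """Encode speech frames (rows) with an ICA basis; return coefficients and the model."""
    ica = FastICA(n_components=n_components, random_state=0).fit(frames)
    return ica.transform(frames), ica

def mean_sparsity(coeffs):
    """Average excess kurtosis across components; large values indicate sparse,
    heavy-tailed coefficient distributions."""
    return float(np.mean(kurtosis(coeffs, axis=0)))
```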


Journal of Machine Learning Research | 2003

A maximum likelihood approach to single-channel source separation

Gil-Jin Jang; Te-Won Lee


Neural Information Processing Systems | 2002

A Probabilistic Approach to Single Channel Blind Signal Separation

Gil-Jin Jang; Te-Won Lee


Conference of the International Speech Communication Association | 1999

Feature vector transformation using independent component analysis and its application to speaker identification.

Gil-Jin Jang; Seong-Jin Yun; Yung-Hwan Oh


Electronics Letters | 2003

Single channel signal separation using MAP-based subspace decomposition

Gil-Jin Jang; Te-Won Lee; Yung-Hwan Oh


Conference of the International Speech Communication Association | 1998

Candidate selection based on significance testing and its use in normalisation and scoring.

Ji-Hwan Kim; Gil-Jin Jang; Seong-Jin Yun; Yung-Hwan Oh

Collaboration


Top Co-Authors

Oh-Wook Kwon

University of California
