Taejeong Kim
Seoul National University
Publication
Featured research published by Taejeong Kim.
Pattern Recognition Letters | 2011
Wonseok Song; Taejeong Kim; Hee Chan Kim; Joon Hwan Choi; Hyoun-Joong Kong; Seung-Rae Lee
The finger-vein pattern is one of the human biometric signatures that can be used for personal verification. The first task of a verification process using finger-vein patterns is extracting the pattern from an infrared finger image. As a robust extraction method, we propose the mean curvature method, which views the vein image as a geometric shape and finds the valley-like structures with negative mean curvatures. When the matched pixel ratio is used in matching vein patterns, experimental results show that, while maintaining low complexity, the proposed method achieves a 0.25% equal error rate, which is significantly lower than what existing methods can achieve.
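The curvature idea can be sketched as follows (a rough illustration only, not the paper's exact pipeline; the finite-difference scheme and the sign convention are assumptions chosen so that valleys come out negative, as in the abstract):

```python
import numpy as np

def mean_curvature(img):
    """Mean curvature of the intensity surface z = img(row, col).

    Sign chosen so that dark valley-like structures (veins) are
    negative, matching the convention in the abstract.
    """
    f = img.astype(float)
    fy, fx = np.gradient(f)      # derivatives along rows, columns
    fxy, fxx = np.gradient(fx)
    fyy, _ = np.gradient(fy)
    num = (1 + fx**2) * fyy - 2 * fx * fy * fxy + (1 + fy**2) * fxx
    den = 2 * (1 + fx**2 + fy**2) ** 1.5
    return -num / den            # valleys -> negative

# synthetic valley running down column 10 of a 21x21 image
cols = np.arange(21, dtype=float)
img = np.tile((cols - 10) ** 2, (21, 1))
H = mean_curvature(img)
# on each interior row, the most negative curvature sits on the valley
```

Pixels at the negative extremes of H would then be kept as vein candidates; the paper's thresholding and matching steps are not reproduced here.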
IEEE Transactions on Biomedical Engineering | 2006
Joon Hwan Choi; Hae Kyung Jung; Taejeong Kim
This paper considers neural signal processing applied to extracellular recordings, in particular, unsupervised action potential detection at a low signal-to-noise ratio. It adopts the basic framework of the multiresolution Teager energy operator (MTEO) detector, but presents important new results including a significantly improved MTEO detector with some mathematical analyses, a new alignment technique with its effects on the whole spike sorting system, and a variety of experimental results. Specifically, the new MTEO detector employs smoothing windows normalized by noise power derived from mathematical analyses and reduces complexity by exploiting the sampling rate. Experimental results show that this detector achieves higher detection ratios at a fixed false alarm ratio than the TEO detector and the discrete wavelet transform detector. We also propose a method that improves the action potential alignment performance. Observing that the extreme points of the MTEO output are more robust to the background noise than those of the action potentials, we use the MTEO output for action potential alignment. This brings not only noticeable improvement in alignment performance but also quite favorable influence over the classification performance. Accordingly, the proposed detector improves the performance of the whole spike sorting system. We verified the improvement using various modeled neural signals and some real neural recordings.
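A minimal MTEO sketch, assuming plain Hamming smoothing (the paper normalizes each smoothing window by the noise power, which is omitted here, as are the alignment and sorting stages):

```python
import numpy as np

def k_teo(x, k):
    """k-Teager energy: psi_k(n) = x(n)^2 - x(n-k) * x(n+k)."""
    y = np.zeros_like(x, dtype=float)
    y[k:-k] = x[k:-k] ** 2 - x[:-2 * k] * x[2 * k:]
    return y

def mteo(x, ks=(1, 2, 3), win=9):
    """Multiresolution TEO: smooth each k-TEO output and take the
    pointwise maximum across resolutions."""
    h = np.hamming(win)
    h /= h.sum()
    outs = [np.convolve(k_teo(x, k), h, mode="same") for k in ks]
    return np.max(outs, axis=0)

# a spike-like transient buried near sample 100 in a quiet trace
rng = np.random.default_rng(0)
x = 0.05 * rng.standard_normal(300)
x[95:105] += np.sin(np.linspace(0, np.pi, 10))
score = mteo(x)
# the MTEO score peaks at the transient, well above the noise floor
```

A detector would threshold `score`; the multiresolution maximum is what lets one detector respond to action potentials of different widths.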
IEEE Signal Processing Letters | 2004
Tae Young Kim; Hyuk Jin Choi; Kiryung Lee; Taejeong Kim
In this letter, we propose a new asymmetric watermarking system which can accommodate many embedding watermarks but needs only one reference watermark for detection. Such a system is useful in averting attacks that seek to estimate the embedding watermark. In the proposed system, the phase of the reference watermark is shifted randomly (clockwise or counterclockwise) in the discrete Fourier transform domain to make embedding watermarks. They are correlated with one another and have the same correlation with the reference one. We also address how to select the design parameters.
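A toy version of the phase-shift construction (lengths, the angle theta, and the use of the real FFT are illustrative assumptions; DC and Nyquist bins are left unshifted so the result stays real):

```python
import numpy as np

rng = np.random.default_rng(1)

def make_embedding(ref, theta):
    """Derive an embedding watermark by rotating the phase of each DFT
    bin of the reference randomly by +theta or -theta."""
    R = np.fft.rfft(ref)
    signs = rng.choice([-1.0, 1.0], size=R.shape)
    signs[0] = 0.0    # DC must stay real
    signs[-1] = 0.0   # Nyquist must stay real
    return np.fft.irfft(R * np.exp(1j * signs * theta), n=len(ref))

def correlate(sig, ref):
    return float(np.dot(sig, ref)) / len(ref)

ref = rng.standard_normal(4096)       # the single reference watermark
w = make_embedding(ref, np.pi / 4)    # one of many embedding watermarks
ratio = correlate(w, ref) / correlate(ref, ref)
# ratio is close to cos(pi/4): every embedding watermark keeps the same
# known correlation with the reference, while differing from it
```

This is the asymmetry in the abstract: the detector needs only the reference, and estimating any one embedding watermark does not reveal the others.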
International Workshop on Digital Watermarking | 2003
Kiryung Lee; Dong Sik Kim; Taejeong Kim; Kyung Ae Moon
A blind watermarking scheme extracts the embedded message without access to the host signal. Recently, efficient blind watermarking schemes that exploit knowledge of the host signal at the encoder have been proposed [1,2,3]. Scalar quantizers are employed for practical implementation. Although a scalar quantizer permits simple encoding and decoding, if the watermarked signal is scaled, the quantizer step size at the decoder must be scaled accordingly for reliable decoding. In this paper, we propose a preprocessed decoding scheme that uses an estimated scale factor. The received signal density is approximated by a Gaussian mixture model, and the scale factor is then estimated with the expectation-maximization algorithm [6]. In the proposed scheme, the scale factor is estimated from the received signal itself without any additional pilot signal. Numerical results show that the proposed scheme provides reliable decoding of the scaled signal.
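A minimal 1-D sketch of the EM idea, under simplifying assumptions (known unscaled centroids, fixed mixture variance): the received samples are modeled as a Gaussian mixture centered on the scaled centroids, and EM alternates soft assignment with a weighted least-squares update of the scale.

```python
import numpy as np

def estimate_scale(y, centroids, sigma=0.1, iters=20):
    """Estimate the scale factor g from received samples y, modeled as
    a Gaussian mixture with means g * centroids and std sigma."""
    c = np.asarray(centroids, float)
    g = 1.0                                   # initial guess
    for _ in range(iters):
        # E-step: responsibility of each scaled centroid per sample
        d = y[:, None] - g * c[None, :]
        r = np.exp(-d ** 2 / (2 * sigma ** 2))
        r /= r.sum(axis=1, keepdims=True)
        # M-step: weighted least-squares update of the scale
        g = (r * y[:, None] * c[None, :]).sum() / (r * c[None, :] ** 2).sum()
    return g

# synthetic test: quantized data scaled by 0.9 plus small noise
rng = np.random.default_rng(2)
cents = np.arange(-3, 4, dtype=float)
y = 0.9 * rng.choice(cents, 2000) + 0.05 * rng.standard_normal(2000)
g_hat = estimate_scale(y, cents)
# g_hat converges near the true scale 0.9, with no pilot signal
```

As in the abstract, no pilot is needed: the decoder would rescale its quantizer step size by the estimated factor before decoding.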
IEEE Transactions on Circuits and Systems for Video Technology | 2000
Jeonghun Yang; Hyuk Jin Choi; Taejeong Kim
This paper proposes a postprocessing algorithm that can reduce the blocking artifacts in discrete cosine transform (DCT) coded images. To analyze blocking artifacts as noise components residing across two neighboring blocks, we use 1-D pixel vectors made of pixel rows or columns across two neighboring blocks. We model the blocky noise in each pixel vector as a shape vector weighted by the boundary discontinuity. The boundary discontinuity of each vector is estimated from the difference between the pixel gradient across the block boundary and that of the internal pixels. We make minimum mean squared error (MMSE) estimates of the shape vectors, indexed by the local image activity, based on the noise statistics prior to postprocessing. Once the estimated shape vectors are stored in the decoder, the proposed algorithm eliminates the noise components by simply subtracting from each pixel vector an appropriate shape vector multiplied by the boundary discontinuity. The experimental results show that the proposed algorithm is highly effective in reducing blocking artifacts from both subjective and objective viewpoints, at a low computational cost.
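The subtraction step can be sketched with a hand-picked step-shaped vector (a placeholder assumption; the paper instead stores MMSE-estimated shape vectors indexed by local image activity):

```python
import numpy as np

# hypothetical shape vector for an 8-pixel vector spanning two
# 4-pixel blocks: a unit step centered on the block boundary
SHAPE = np.array([-0.5, -0.5, -0.5, -0.5, 0.5, 0.5, 0.5, 0.5])

def boundary_discontinuity(v):
    """Boundary gradient minus the mean internal gradient."""
    n = len(v) // 2
    boundary = v[n] - v[n - 1]
    internal = np.concatenate([np.diff(v[:n]), np.diff(v[n:])])
    return boundary - internal.mean()

def deblock(v, shape=SHAPE):
    """Subtract the shape vector weighted by the discontinuity."""
    return v - boundary_discontinuity(v) * shape

v = np.full(8, 10.0)
v[4:] += 3.0          # pure blocky step of height 3 at the boundary
clean = deblock(v)
# the step is removed: every pixel ends up at 11.5
```

Because the discontinuity estimate subtracts the internal gradient, a genuine image edge that continues inside the blocks is not mistaken for blocky noise.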
IEEE Transactions on Image Processing | 2005
Kiryung Lee; Dong Sik Kim; Taejeong Kim
In order to reduce the blocking artifact in the Joint Photographic Experts Group (JPEG)-compressed images, a new noniterative postprocessing algorithm is proposed. The algorithm consists of a two-step operation: low-pass filtering and then predicting. Predicting the original image from the low-pass filtered image is performed by using the predictors, which are constructed based on a broken line regression model. The constructed predictor is a generalized version of the projector onto the quantization constraint set or the narrow quantization constraint set. We employed different predictors depending on the frequency components in the discrete cosine transform (DCT) domain since each component has different statistical properties. Further, by using a simple classifier, we adaptively applied the predictors depending on the local variance of the DCT block. This adaptation enables an appropriate blurring depending on the smooth or detail region, and shows improved performance in terms of the average distortion and the perceptual quality. For the major-edge DCT blocks, which usually suffer from the ringing artifact, the quality of fit to the regression model is usually not good. By making a modification of the regression model for such DCT blocks, we can also obtain a good perceptual quality. The proposed algorithm does not employ any sophisticated edge-oriented classifiers and nonlinear filters. Compared to the previously proposed algorithms, the proposed algorithm provides comparable or better results with less computational complexity.
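A sketch of one such per-coefficient predictor, where the slope alpha and the narrowed interval half-width are illustrative assumptions rather than the paper's fitted regression parameters:

```python
import numpy as np

def predict_coeff(z, q, step, alpha=0.8, half=0.4):
    """Predict an original DCT coefficient from its low-pass filtered
    value z and its dequantized value q (quantizer step `step`).

    Broken-line behaviour: move linearly from q toward z with slope
    alpha, then clip to a narrowed quantization interval of half-width
    half * step.  Setting alpha=1 and half=0.5 reduces to projection
    onto the ordinary quantization constraint set.
    """
    y = q + alpha * (z - q)
    return float(np.clip(y, q - half * step, q + half * step))
```

For example, with q = 4.0 and step = 2.0, a filtered value of 4.5 is pulled to 4.4, while a far-off filtered value of 10.0 is clipped to the interval edge 4.8, keeping the prediction consistent with what the quantizer could have produced.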
Signal Processing | 2010
Dooseop Choi; Hoseok Do; Hyuk Jin Choi; Taejeong Kim
Based on the observation that low-frequency DCT coefficients of an image are less affected by geometric processing, we propose a new blind MPEG-2 video watermarking algorithm robust to camcorder recording. The mean of the low-frequency DCT coefficients of the video is temporally modulated according to the information bits. To prevent the watermark from drifting into other frames, we embed watermarks only in the B-frames of MPEG-2 videos, which also permits minimal partial decoding and improves efficiency. Experimental results show that the proposed scheme achieves high video quality and robustness to camcorder recording and other attacks.
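The temporal modulation can be illustrated with a toy sketch (delta, the repetition factor, and the global-mean detector are assumptions; in particular, comparing each group against the global mean presumes a roughly static scene and is far simpler than the paper's actual blind detector):

```python
import numpy as np

def embed(lf_means, bits, delta=1.0, rep=4):
    """Shift the per-frame mean of the low-frequency DCT coefficients
    up or down by delta, holding each bit for `rep` frames."""
    out = np.asarray(lf_means, float).copy()
    for i, b in enumerate(bits):
        out[i * rep:(i + 1) * rep] += delta if b else -delta
    return out

def extract(lf_means, nbits, rep=4):
    """Toy blind detector: compare each group's mean against the
    global mean of the watermarked sequence."""
    x = np.asarray(lf_means, float)
    base = x[:nbits * rep].mean()
    return [int(x[i * rep:(i + 1) * rep].mean() > base)
            for i in range(nbits)]

host = np.full(16, 50.0)           # per-frame low-frequency DCT means
marked = embed(host, [1, 0, 0, 1])
```

Because only the temporal pattern of a spatial mean carries the bits, the mark survives the geometric distortions a camcorder introduces, which is the abstract's central point.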
Signal Processing | 2010
Hyun Wook Kim; Dooseop Choi; Hyuk Jin Choi; Taejeong Kim
As an improvement on additive spread spectrum (ASS) watermarking, this paper proposes the selective correlation detector (SCD), which performs correlation detection on a portion rather than the whole block of the signal. The portion is determined based on the watermark-to-cover signal power ratio (WCR) and the estimated variance of the correlation. A similar improvement has been obtained by improved spread spectrum (ISS) watermarking [6] (Malvar and Florencio, 2003), but we show that a better performance can be achieved by combining both algorithms. Experiments with or without a psychoacoustic model are conducted on several audio signals, and confirm the improvements by the SCD and by the combined ISS and SCD.
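A toy version of selective correlation, simplifying the paper's selection rule (which uses the WCR and the estimated correlation variance) to keeping the segments with the lowest sample variance:

```python
import numpy as np

def selective_corr(y, w, nseg=16, keep=0.5):
    """Correlate only over the segments whose sample variance is
    lowest: there the host interferes least, so the correlation
    estimate has the smallest variance."""
    segs = np.array_split(np.arange(len(y)), nseg)
    order = np.argsort([y[s].var() for s in segs])
    idx = np.concatenate([segs[i] for i in order[:int(nseg * keep)]])
    return float(np.dot(y[idx], w[idx])) / len(idx)

rng = np.random.default_rng(3)
n = 4096
w = 0.3 * rng.choice([-1.0, 1.0], n)                # spread-spectrum mark
sigma = np.repeat(np.tile([0.5, 5.0], 8), n // 16)  # uneven host power
host = sigma * rng.standard_normal(n)
stat_present = selective_corr(host + w, w)          # clearly positive
stat_absent = selective_corr(host, w)               # near zero
```

Restricting the correlation to the quiet portion raises the detection statistic's signal-to-noise ratio relative to whole-block correlation, which is the improvement the abstract reports over plain ASS detection.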
Machine Vision and Applications | 2009
Joon Hwan Choi; Wonseok Song; Taejeong Kim; Seung-Rae Lee; Hee Chan Kim
Finger vein authentication is a personal identification technology using finger vein images acquired by infrared imaging. It is one of the newest technologies in biometrics. Its main advantage over other biometrics is the low risk of forgery or theft, due to the fact that finger veins are not normally visible to others. Extracting finger vein patterns from infrared images is the most difficult part in finger vein authentication. Uneven illumination, varying tissues and bones, and changes in the physical conditions and the blood flow make the thickness and brightness of the same vein different in each acquisition. Accordingly, extracting finger veins at their accurate positions regardless of their thickness and brightness is necessary for accurate personal identification. For this purpose, we propose a new finger vein extraction method which is composed of gradient normalization, principal curvature calculation, and binarization. As local brightness variation has little effect on the curvature and as gradient normalization makes the curvature fairly uniform at vein pixels, our method effectively extracts finger vein patterns regardless of the vein thickness or brightness. In our experiment, the proposed method showed notable improvement as compared with the existing methods.
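The normalize-then-curvature idea can be sketched as follows (a simplified illustration; the derivative scheme, symmetrization of the Hessian, and the absence of the binarization step are assumptions):

```python
import numpy as np

def principal_curvature(img):
    """Largest eigenvalue of the Hessian of the normalized gradient
    field.  Because the gradient is normalized first, the response at
    vein pixels is fairly uniform regardless of vein depth or local
    brightness."""
    f = img.astype(float)
    gy, gx = np.gradient(f)
    mag = np.hypot(gx, gy) + 1e-6
    nx, ny = gx / mag, gy / mag        # gradient normalization
    nxy, nxx = np.gradient(nx)
    nyy, nyx = np.gradient(ny)
    hxy = 0.5 * (nxy + nyx)            # symmetrized mixed term
    half_tr = 0.5 * (nxx + nyy)
    root = np.sqrt((0.5 * (nxx - nyy)) ** 2 + hxy ** 2)
    return half_tr + root              # larger eigenvalue

# a dark vein running down column 10 of a bright background
cols = np.arange(21, dtype=float)
img = np.tile(100.0 - 50.0 * np.exp(-(cols - 10) ** 2 / 4.0), (21, 1))
pc = principal_curvature(img)
# the curvature response peaks at the vein center, column 10
```

Thresholding `pc` would then give the binary vein pattern; the key property is that the peak sits at the vein's center line even when its depth or brightness varies.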
Visual Communications and Image Processing | 1995
Ki-Won Kang; Sang Hoon Lee; Taejeong Kim
In this paper, we propose a method to recover good-quality pictures from channel errors in the transmission of coded video sequences. This work is basically an extension of previously presented work to video sequence coding. The transmitted information in most video coding standards mainly comprises motion vectors (MV) and motion-compensated prediction errors (MCPE). The compressed data are generally transmitted in binary form through a noisy channel, and channel errors in this bit stream cause objectionable degradations in consecutive reconstructed frames. There have been many studies on concealing the effects of channel errors on the reconstructed images, but they did not consider recovering the actual binary data; instead they used replacement and/or interpolation techniques to make errors less visible to an observer. A simple and powerful method to recover the video sequences from errors, separately in the MV and the MCPE, must take full advantage of both the source and channel characteristics. The proposed method exploits single-bit-error dominance in a received bit string by using a parity bit for error detection. It also exploits the high pixel correlation in typical images by using the side-match criterion to select the best fit among candidate replacements.
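The parity-plus-side-match idea can be sketched with a toy codeword of one 8-bit pixel and one even-parity bit (the actual MV/MCPE codeword structure and side-match neighborhood in the paper are richer):

```python
def side_match_recover(codeword, left, right):
    """Recover an 8-bit pixel from a 9-bit codeword (8 data bits plus
    one even-parity bit) that may contain a single bit error.

    If parity fails, every single-bit flip that restores even parity
    is a candidate; the side-match criterion picks the candidate value
    closest to the neighbouring pixels."""
    bits = list(codeword)
    value = int("".join(map(str, bits[:8])), 2)
    if sum(bits) % 2 == 0:
        return value                     # parity OK: accept as-is
    best, best_cost = None, float("inf")
    for i in range(9):                   # flip each bit in turn
        if i == 8:
            cand = value                 # the parity bit itself was hit
        else:
            cand = value ^ (1 << (7 - i))
        cost = abs(cand - left) + abs(cand - right)   # side match
        if cost < best_cost:
            best, best_cost = cand, cost
    return best

# pixel 128 -> data bits 10000000, even-parity bit 1
sent = [1, 0, 0, 0, 0, 0, 0, 0, 1]
hit = sent.copy()
hit[7] ^= 1                              # channel flips one data bit
recovered = side_match_recover(hit, left=128, right=128)
# parity flags the error, and side match selects 128 among candidates
```

This shows both ingredients from the abstract: the parity bit exploits single-bit-error dominance to detect corruption, and pixel correlation (via the neighbors) selects the best replacement among the candidate corrections.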