Changick Kim
KAIST
Publications
Featured research published by Changick Kim.
IEEE Transactions on Circuits and Systems for Video Technology | 2002
Changick Kim; Jenq-Neng Hwang
The new video-coding standard MPEG-4 enables content-based functionality, as well as high coding efficiency, by taking into account the shape information of moving objects. A novel algorithm for the segmentation of moving objects in video sequences and the extraction of video object planes (VOPs) is proposed. For the case of multiple video objects in a scene, the extraction of a specific single video object (VO), based on connected-components analysis and the smoothness of VO displacement in successive frames, is also discussed. Our algorithm begins with a robust double-edge map derived from the difference between two successive frames. After removing edge points that belong to the previous frame, the remaining edge map, the moving edge (ME), is used to extract the VOP. The proposed algorithm is evaluated on an indoor sequence captured by a low-end camera as well as on MPEG-4 test sequences, and it produces promising results.
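The moving-edge idea can be sketched roughly as below. This is a minimal illustration, not the paper's algorithm: `edges` is a hypothetical stand-in that uses a crude gradient threshold in place of a proper edge detector, and frames are plain lists of grayscale rows.

```python
def edges(frame, thresh=30):
    """Crude gradient-magnitude edge map (hypothetical stand-in for a
    real edge detector), returned as a set of (y, x) edge points."""
    h, w = len(frame), len(frame[0])
    out = set()
    for y in range(h):
        for x in range(w):
            gx = frame[y][min(x + 1, w - 1)] - frame[y][x]
            gy = frame[min(y + 1, h - 1)][x] - frame[y][x]
            if abs(gx) + abs(gy) >= thresh:
                out.add((y, x))
    return out

def moving_edges(prev, curr, thresh=30):
    """Edge map of the frame difference, minus edge points that already
    belong to the previous frame: the remaining points are the ME set."""
    diff = [[abs(c - p) for c, p in zip(rc, rp)] for rc, rp in zip(curr, prev)]
    return edges(diff, thresh) - edges(prev, thresh)
```

On a static background the previous-frame edges cancel out, so only edges introduced by motion survive into the ME set.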
IEEE Transactions on Circuits and Systems for Video Technology | 2005
Changick Kim; Bhaskaran Vasudev
This paper proposes a novel sequence-matching technique to detect copies of a video clip. To be effective, a video copy detection technique needs to be robust to the many digitization and encoding processes that give rise to distortions such as changes in brightness, color, and frame format, as well as various blocky artifacts. Most video copy detection algorithms proposed so far focus on coping with signal distortions introduced by different encoding parameters; however, they do not cope well with display-format conversions. We propose a copy-detection scheme that is robust to the above-mentioned distortions as well as to display-format conversions. To this end, each image frame is partitioned into 2 × 2 blocks by intensity averaging, and the partitioned values are stored for indexing and matching. Our spatiotemporal approach combines spatial matching of ordinal signatures obtained from the partitions of each frame and temporal matching of temporal signatures from the temporal trails of the partitions. The proposed method has been extensively tested, and the results show that it is effective in detecting copies that have been subjected to a wide range of modifications.
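The per-frame ordinal signature can be sketched as follows; this is a minimal illustration of the 2 × 2 partitioning and rank ordering only (the temporal trail matching is omitted), with the frame assumed to be a grayscale matrix:

```python
def ordinal_signature(frame):
    """Partition a grayscale frame into 2x2 blocks by intensity averaging
    and return the rank ordering of the four block means (0 = darkest)."""
    h, w = len(frame), len(frame[0])
    means = []
    for by in (0, 1):
        for bx in (0, 1):
            vals = [frame[y][x]
                    for y in range(by * h // 2, (by + 1) * h // 2)
                    for x in range(bx * w // 2, (bx + 1) * w // 2)]
            means.append(sum(vals) / len(vals))
    order = sorted(range(4), key=lambda i: means[i])
    rank = [0] * 4
    for r, i in enumerate(order):
        rank[i] = r
    return rank

def signature_distance(sig_a, sig_b):
    """L1 distance between two rank permutations (0 = identical ordering)."""
    return sum(abs(a - b) for a, b in zip(sig_a, sig_b))
```

Because only the relative ordering of block means is kept, a global brightness shift leaves the signature unchanged, which is what makes the ordinal representation robust to the brightness distortions mentioned above.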
Signal Processing: Image Communication | 2003
Changick Kim
This paper proposes a novel method to detect copied versions of digital images. The proposed copy detection scheme can be used either as an alternative or as a complement to watermarking. A test image is reduced to an 8 × 8 sub-image by intensity averaging, and the AC coefficients of its discrete cosine transform (DCT) are used to compute the distance from those generated from the query image, of which a user wants to find copies. A challenge is that the replicated image may have been processed to elude copy detection or to enhance image quality. We show that an ordinal measure of DCT coefficients, which is based on the relative ordering of AC magnitude values and uses distance metrics between two rank permutations, is robust to various modifications of the original image. An optimal threshold selection scheme using the maximum a posteriori criterion is also described. The efficacy of the proposed method is extensively tested with both cluster-free and cluster-based detection schemes.
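The DCT-based ordinal measure can be sketched as follows. This is a self-contained illustration, not the paper's tuned implementation: the DCT is a naive O(N^4) transform (fine for an 8 × 8 sub-image), magnitudes are rounded to suppress floating-point noise so that ties rank consistently, and the rank distance is a simple L1 metric between permutations.

```python
import math

def dct2(block):
    """Naive 2-D DCT-II of an NxN block (adequate for N = 8)."""
    n = len(block)
    out = [[0.0] * n for _ in range(n)]
    for u in range(n):
        for v in range(n):
            s = sum(block[y][x]
                    * math.cos((2 * y + 1) * u * math.pi / (2 * n))
                    * math.cos((2 * x + 1) * v * math.pi / (2 * n))
                    for y in range(n) for x in range(n))
            cu = math.sqrt(1 / n) if u == 0 else math.sqrt(2 / n)
            cv = math.sqrt(1 / n) if v == 0 else math.sqrt(2 / n)
            out[u][v] = cu * cv * s
    return out

def ac_ordinal_measure(block):
    """Rank permutation of AC-coefficient magnitudes (DC term excluded)."""
    coeffs = dct2(block)
    # Row-major flatten; drop the first entry, which is the DC coefficient.
    mags = [round(abs(c), 6) for row in coeffs for c in row][1:]
    order = sorted(range(len(mags)), key=lambda i: mags[i])
    rank = [0] * len(mags)
    for r, i in enumerate(order):
        rank[i] = r
    return rank

def rank_distance(r1, r2):
    """L1 distance between two rank permutations."""
    return sum(abs(a - b) for a, b in zip(r1, r2))
```

A constant brightness shift changes only the DC coefficient, so the AC rank permutation, and hence the distance between copies, is unaffected.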
IEEE Transactions on Biomedical Engineering | 2010
Chanho Jung; Changick Kim
In this letter, we present a novel watershed-based method for the segmentation of cervical and breast cell images. We formulate the segmentation of clustered nuclei as an optimization problem. A hypothesis concerning the nuclei, which incorporates a priori knowledge about nuclear shape, is tested to solve the optimization problem. We first apply the distance transform to the clustered nuclei. A marker extraction scheme based on the H-minima transform is introduced to obtain the optimal segmentation result from the distance map. To estimate the optimal h-value, a size-invariant segmentation distortion evaluation function is defined based on the fitting residuals between the segmented region boundaries and fitted models. Ellipsoidal modeling of contours is introduced to adjust nuclei contours for more effective analysis. Experiments on a variety of real microscopic cell images show that the proposed method yields more accurate segmentation results than state-of-the-art watershed-based methods.
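The first step, the distance transform on a binary nucleus mask, can be sketched with a multi-source BFS. This is a discrete 4-connected stand-in for the exact Euclidean transform, assuming the mask contains at least one background pixel; the H-minima marker extraction and h-value selection are not shown.

```python
from collections import deque

def distance_transform(mask):
    """4-connected BFS distance from each foreground pixel (mask == 1)
    to the nearest background pixel (mask == 0)."""
    h, w = len(mask), len(mask[0])
    dist = [[0 if mask[y][x] == 0 else None for x in range(w)]
            for y in range(h)]
    # Seed the queue with every background pixel (distance 0).
    q = deque((y, x) for y in range(h) for x in range(w) if mask[y][x] == 0)
    while q:
        y, x = q.popleft()
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and dist[ny][nx] is None:
                dist[ny][nx] = dist[y][x] + 1
                q.append((ny, nx))
    return dist
```

The resulting map peaks at nucleus centers, which is why marker extraction on this surface can separate touching nuclei.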
IEEE Transactions on Image Processing | 2009
Wonjun Kim; Changick Kim
Overlay text provides important semantic clues for video content analysis tasks such as video information retrieval and summarization, since the content of the scene or the editor's intention can be well represented by the inserted text. Most previous approaches to extracting overlay text from videos are based on low-level features such as edge, color, and texture information. However, existing methods have difficulty handling text with varying contrast or text inserted into a complex background. In this paper, we propose a novel framework to detect and extract overlay text from the video scene. Based on our observation that transient colors exist between inserted text and its adjacent background, a transition map is first generated. Candidate regions are then extracted by a reshaping method, and the overlay text regions are determined based on the occurrence of overlay text in each candidate. The detected overlay text regions are localized accurately using the projection of overlay text pixels in the transition map, and the text extraction is finally conducted. The proposed method is robust to differences in character size, position, contrast, and color, and it is also language independent. Overlay text region update between frames is employed to reduce the processing time. Experiments on diverse videos confirm the efficiency of the proposed method.
ACM Multimedia | 2000
Changick Kim; Jenq-Neng Hwang
In this paper, we present a novel scheme for object-based key-frame extraction facilitated by an efficient video object segmentation system. Key-frames are the subset of still images that best represent the content of a video sequence in an abstracted manner; key-frame-based video abstraction thus transforms an entire video clip into a small number of representative images. The challenge is that key-frame extraction needs to be automated and context dependent, so that the key-frames preserve the important content of the video while removing all redundancy. Among the various semantic primitives of video, objects of interest, along with their actions and the events they generate, can play an important role in applications such as object-based video surveillance. Furthermore, on-line processing combined with fast and robust video object segmentation is crucial for real-time applications that must report unwanted actions or events as soon as they happen. Experimental results on the proposed scheme for object-based video abstraction are presented.
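One way to sketch object-based key-frame selection is to trigger a key-frame whenever the segmented object changes significantly since the last key-frame. The relative-area criterion below is a hypothetical simplification for illustration; the paper's actual criterion is based on the objects' actions and generated events.

```python
def select_key_frames(object_areas, rel_change=0.3):
    """Pick frame indices whose segmented-object area differs from the
    last key-frame's area by more than `rel_change` (hypothetical
    change criterion; frame 0 is always a key-frame)."""
    if not object_areas:
        return []
    keys = [0]
    last = object_areas[0]
    for i, area in enumerate(object_areas[1:], start=1):
        if abs(area - last) / max(last, 1e-9) > rel_change:
            keys.append(i)
            last = area  # re-anchor on the new key-frame
    return keys
```

Because the comparison is always against the last selected key-frame rather than the previous frame, slow drifts do not flood the abstract with near-duplicate frames.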
IEEE Transactions on Pattern Analysis and Machine Intelligence | 1992
Hong Jeong; Changick Kim
The authors suggest a regularization method for adaptively determining scales for edge detection at each site in the image plane. Specifically, they extend the optimal filter concept of T. Poggio et al. (1984) and the scale-space concept of A. Witkin (1983) to an adaptive scale parameter. To avoid an ill-posed feature-synthesis problem, the scheme automatically finds optimal scales adaptively for each pixel before detecting final edge maps. The authors introduce an energy function defined as a functional over continuous scale space. Natural constraints for edge detection are incorporated into the energy function. To obtain a set of optimal scales that minimizes the energy function, a parallel relaxation algorithm is introduced. Experiments on synthetic and natural scenes show the advantages of the algorithm. In particular, it is shown that this system can detect both step and diffuse edges while drastically filtering out random noise.
IEEE Transactions on Circuits and Systems for Video Technology | 2011
Wonjun Kim; Chanho Jung; Changick Kim
This paper presents a novel method for detecting salient regions in both images and videos, based on a discriminant center-surround hypothesis that the salient region stands out from its surroundings. To this end, our spatiotemporal approach combines spatial saliency, computed from distances between ordinal signatures of edge and color orientations obtained from the center and surrounding regions, with temporal saliency, computed simply as the sum of absolute differences between temporal gradients of the center and surrounding regions. The proposed method is computationally efficient, reliable, and simple to implement, so it can easily be extended to various applications such as image retargeting and moving object extraction. It has been extensively tested, and the results show that the proposed scheme is effective in detecting saliency compared with various state-of-the-art methods.
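The temporal term can be sketched as below. This is a minimal illustration under simplifying assumptions: the temporal gradient is a plain two-frame absolute difference, the center/surround comparison is reduced to the difference of patch means, and the spatial term with ordinal edge/color signatures is omitted.

```python
def temporal_saliency(prev, curr, cy, cx, r=1, margin=1):
    """Contrast between the temporal gradient |curr - prev| of a center
    patch (half-width r) and its surrounding ring (width margin),
    measured as the absolute difference of patch means."""
    grad = [[abs(c - p) for c, p in zip(rc, rp)]
            for rc, rp in zip(curr, prev)]
    h, w = len(grad), len(grad[0])
    center, surround = [], []
    for y in range(max(0, cy - r - margin), min(h, cy + r + margin + 1)):
        for x in range(max(0, cx - r - margin), min(w, cx + r + margin + 1)):
            if abs(y - cy) <= r and abs(x - cx) <= r:
                center.append(grad[y][x])
            else:
                surround.append(grad[y][x])
    if not surround:
        return 0.0
    return abs(sum(center) / len(center) - sum(surround) / len(surround))
```

A region moving against a static background produces a large gradient in the center patch and none in the ring, so the contrast, and hence the saliency, is high exactly where motion stands out from its surroundings.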
IEEE Transactions on Biomedical Engineering | 2010
Chanho Jung; Changick Kim; Seoung Wan Chae; Sukjoong Oh
In a fully automatic cell extraction process, one of the main issues to overcome is separating overlapped nuclei, since such nuclei often affect the quantitative analysis of cell images. In this paper, we present an unsupervised Bayesian classification scheme for separating overlapped nuclei. The proposed approach first applies the distance transform to the overlapped nuclei. The topographic surface generated by the distance transform is viewed as a mixture of Gaussians. To learn the distribution of the topographic surface, the parametric expectation-maximization (EM) algorithm is employed, and cluster validation is performed to determine how many nuclei are overlapped. Our segmentation approach incorporates a priori knowledge about the regular shape of clumped nuclei to yield more accurate segmentation results. Experimental results show that the proposed method yields superior segmentation performance compared with conventional schemes.
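The parametric EM step can be sketched in one dimension as below. This is a textbook EM fit of a k-component Gaussian mixture, not the paper's full pipeline: the data here would be samples of the distance-transform surface, and the cluster-validation step that chooses k is not shown.

```python
import math

def em_gmm_1d(data, k, iters=50):
    """Fit a k-component 1-D Gaussian mixture with plain EM.
    Returns (means, std devs, mixing weights)."""
    n = len(data)
    srt = sorted(data)
    # Spread the initial means across the sorted data range.
    mus = [srt[i * (n - 1) // max(k - 1, 1)] for i in range(k)]
    sigmas = [1.0] * k
    pis = [1.0 / k] * k
    for _ in range(iters):
        # E-step: responsibility of each component for each point.
        resp = []
        for x in data:
            ps = [pis[j] / (sigmas[j] * math.sqrt(2 * math.pi))
                  * math.exp(-((x - mus[j]) ** 2) / (2 * sigmas[j] ** 2))
                  for j in range(k)]
            s = sum(ps) or 1e-12
            resp.append([p / s for p in ps])
        # M-step: re-estimate weights, means, and variances.
        for j in range(k):
            nj = sum(r[j] for r in resp) or 1e-12
            pis[j] = nj / n
            mus[j] = sum(r[j] * x for r, x in zip(resp, data)) / nj
            var = sum(r[j] * (x - mus[j]) ** 2
                      for r, x in zip(resp, data)) / nj
            sigmas[j] = math.sqrt(max(var, 1e-6))
    return mus, sigmas, pis
```

Fitting this mixture for several candidate k values and scoring each fit is one way cluster validation can decide how many nuclei an overlapped clump contains.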
IEEE Transactions on Image Processing | 2005
Changick Kim
We propose a novel algorithm to partition an image with low depth-of-field (DOF) into a focused object-of-interest (OOI) and a defocused background. The proposed algorithm unfolds in three steps. In the first step, we transform the low-DOF image into an appropriate feature space in which the spatial distribution of the high-frequency components is represented; this is done by computing higher order statistics (HOS) for all pixels in the low-DOF image. Next, the obtained feature space, called the HOS map in this paper, is simplified by removing small dark holes and bright patches using a morphological filter by reconstruction. Finally, the OOI is extracted by applying region merging to the simplified image and by thresholding. Unlike previous methods that rely only on the sharp details of the OOI, the proposed algorithm complements their limitations by using morphological filters, which also allows perfect preservation of the contour information. Compared with previous methods, the proposed method yields more accurate segmentation results while supporting faster processing.

In photography, low depth of field (DOF) is an important technique for emphasizing the object of interest (OOI) within an image, and low-DOF images are widely used in macro, portrait, and sports photography. When viewing a low-DOF image, the viewer implicitly concentrates on the sharper regions of the image and thus segments it into regions of interest and non-regions of interest, which has a major impact on the perception of the image. A robust algorithm for the fully automatic detection of the OOI in low-DOF images therefore provides valuable information for subsequent image processing and image retrieval. In this paper, we propose a robust and parameterless algorithm for the fully automatic segmentation of low-DOF images. We compare our method with three similar methods and show superior robustness, even though our algorithm does not require any parameters to be set by hand. The experiments are conducted on a real-world data set with both high- and low-DOF images.
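The first step, the HOS map, can be sketched as below. This is a minimal illustration, assuming the fourth-order central moment in a small sliding window as the local statistic; the morphological simplification and region-merging steps are not shown.

```python
def hos_map(img, r=1):
    """Fourth-order central moment of intensities in a (2r+1)x(2r+1)
    window around each pixel: near zero in smooth (defocused) regions,
    large where high-frequency detail (focus) is present."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [img[yy][xx]
                    for yy in range(max(0, y - r), min(h, y + r + 1))
                    for xx in range(max(0, x - r), min(w, x + r + 1))]
            m = sum(vals) / len(vals)
            out[y][x] = sum((v - m) ** 4 for v in vals) / len(vals)
    return out
```

Thresholding this map already separates sharp from blurred areas coarsely; the morphological filtering described above then cleans up the small holes and patches before the OOI is extracted.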