Ju Yong Chang
Seoul National University
Publications
Featured research published by Ju Yong Chang.
computer vision and pattern recognition | 2012
Heesoo Myeong; Ju Yong Chang; Kyoung Mu Lee
In this paper, we propose a novel framework for modeling image-dependent contextual relationships using a graph-based context model. This approach enables us to selectively utilize the contextual relationships suitable for an input query image. We introduce a context link view of contextual knowledge, where the relationship between a pair of annotated regions is represented as a context link on a similarity graph of regions. Link analysis techniques are used to estimate the pairwise context scores of all pairs of unlabeled regions in the input image. Our system integrates the learned context scores into a Markov Random Field (MRF) framework in the form of a pairwise cost and infers the semantic segmentation result by MRF optimization. Experimental results on object class segmentation show that the proposed graph-based context model outperforms current state-of-the-art methods.
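The MRF formulation above pairs a data term with a context-driven pairwise cost. Purely as a hedged sketch (not the authors' code), the toy energy below shows how learned pairwise context scores could enter such an objective; the names `unary`, `context_score`, and `pairs` are illustrative placeholders, and minimization would be handed to any standard MRF solver such as graph cuts.

```python
import numpy as np

def mrf_energy(labels, unary, context_score, pairs, lam=1.0):
    """Toy MRF energy with a context-driven pairwise cost.

    labels        : (R,) integer label per region
    unary         : (R, L) per-region data costs
    context_score : (L, L) learned compatibility between label pairs
                    (a stand-in for the paper's pairwise context scores)
    pairs         : iterable of (i, j) indices of linked regions
    """
    data_term = sum(unary[r, labels[r]] for r in range(len(labels)))
    # Higher learned compatibility => lower pairwise cost for that label pair.
    smooth_term = sum(1.0 - context_score[labels[i], labels[j]] for i, j in pairs)
    return data_term + lam * smooth_term
```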
computer vision and pattern recognition | 2007
Ju Yong Chang; Kyoung Mu Lee; Sang Uk Lee
In this paper, we propose a new method to integrate multiview normal fields using level sets. In contrast with conventional normal integration algorithms used in shape from shading and photometric stereo, which reconstruct a 2.5D surface from a single-view normal field, our algorithm can combine multiview normal fields simultaneously and recover the full 3D shape of a target object. We formulate this multiview normal integration problem in an energy minimization framework and find an optimal solution in a least-squares sense using a variational technique. A level set method is applied to solve the resulting geometric PDE that minimizes the proposed error functional. It is shown that the resulting flow is composed of the well-known mean curvature and flux maximizing flows. In particular, we apply the proposed algorithm to the problem of 3D shape modelling in a multiview photometric stereo setting. Experimental results on various synthetic data show the validity of our approach.
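The abstract does not state the error functional explicitly; the following is only a plausible least-squares reconstruction, with S the unknown surface, n_S its unit normal, and v_i the normal field observed from view i.

```latex
E(S) \;=\; \sum_i \int_S \big\lVert \mathbf{n}_S(\mathbf{x}) - \mathbf{v}_i(\mathbf{x}) \big\rVert^2 \, dA
```

Expanding the square splits E into a weighted area term and a flux term proportional to $-\int_S \mathbf{n}_S \cdot \mathbf{v}\, dA$ with $\mathbf{v} = \sum_i \mathbf{v}_i$; up to the exact weighting, gradient descent on the former gives a (weighted) mean curvature motion and on the latter a flux-maximizing motion driven by div v, which is consistent with the flow structure reported above.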
Computer Vision and Image Understanding | 2011
Ju Yong Chang; Haesol Park; In Kyu Park; Kyoung Mu Lee; Sang Uk Lee
In this paper, we present a new surfel (surface element) based multi-view stereo algorithm that runs entirely on the GPU. We utilize the flexibility of surfel-based 3D shape representation and global optimization by graph cuts in the same framework. Unlike previous works, the algorithm is optimized for massively parallel processing on the GPU. First, we construct surfel candidates by local stereo matching and voting. After refining the position and orientation of the surfel candidates, we extract the optimal surfels by employing graph cuts under photo-consistency and surfel orientation constraints. In contrast to conventional voxel-based methods, the proposed algorithm uses a more accurate photo-consistency measure and reconstructs the 3D shape to sub-voxel accuracy. The orientation of the constructed surfel candidates imposes an effective constraint that reduces the effect of the minimal surface bias. The entire processing pipeline is implemented on the latest GPU to significantly speed up processing. The experimental results show that the proposed approach reconstructs the 3D shape of an object accurately and efficiently, running more than 100 times faster than on a CPU.
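Photo-consistency of a surfel can be scored, for example, by averaging normalized cross-correlation over the image patches it projects to; the snippet below is an illustrative CPU sketch of such a measure, not the paper's GPU implementation, and `patches` is a hypothetical input.

```python
import numpy as np

def ncc(a, b, eps=1e-8):
    """Normalized cross-correlation between two equally sized grayscale patches."""
    a = (a - a.mean()) / (a.std() + eps)
    b = (b - b.mean()) / (b.std() + eps)
    return float(np.mean(a * b))

def surfel_photo_consistency(patches):
    """Average pairwise NCC over the patches a surfel projects to in each view."""
    scores = [ncc(patches[i], patches[j])
              for i in range(len(patches)) for j in range(i + 1, len(patches))]
    return float(np.mean(scores)) if scores else 0.0
```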
international conference on image processing | 2003
Ju Yong Chang; Kyoung Mu Lee; Sang Uk Lee
This paper describes a new semiglobal method for shape-from-shading (SFS) using graph cuts. The new algorithm combines the local method proposed by Lee and Rosenfeld (1985) with a global energy minimization technique. By employing a new global energy minimization formulation, the convex/concave ambiguity of the Lee and Rosenfeld method can be resolved efficiently. The graph cuts method, a combinatorial optimization technique, is used to minimize the proposed energy functional. Experimental results on a variety of synthetic and real-world images show that the proposed algorithm reconstructs the 3D shape of objects very efficiently.
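Resolving the convex/concave ambiguity can be cast as a binary labeling problem: each pixel chooses between two locally estimated normals, and a smoothness term discourages neighbors from switching arbitrarily. The sketch below writes down such a toy objective only to illustrate the idea; the inputs are hypothetical, and the actual graph construction and minimization are described in the paper.

```python
import numpy as np

def sfs_binary_energy(choice, normals0, normals1, err0, err1, lam=1.0):
    """Toy binary-labeling energy for the convex/concave ambiguity.

    choice   : (H, W) 0/1 map selecting one of two candidate normals per pixel
    normalsX : (H, W, 3) candidate normal fields from a local (Lee-Rosenfeld-style) estimate
    errX     : (H, W) shading error of each candidate
    """
    n = np.where(choice[..., None] == 0, normals0, normals1)
    data = np.where(choice == 0, err0, err1).sum()
    # Smoothness: penalize disagreement of the chosen normals between 4-neighbors.
    smooth = np.linalg.norm(n[1:, :] - n[:-1, :], axis=-1).sum() \
           + np.linalg.norm(n[:, 1:] - n[:, :-1], axis=-1).sum()
    return data + lam * smooth
```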
asian conference on computer vision | 2006
Ju Yong Chang; Kyoung Mu Lee; Sang Uk Lee
In this paper, we propose a new stereo matching algorithm using iterated graph cuts and a mean shift filtering technique. Our algorithm consists of the following two steps. In the first step, given an estimated sparse RDM (reliable disparity map), we obtain an updated dense disparity map through a new constrained energy minimization framework that can cope with occlusion. The graph cuts technique is employed to solve the proposed stereo model. In the second step, we re-estimate the RDM from the disparity map obtained in the first step. To obtain accurate and reliable disparities, a cross-checking technique followed by mean shift filtering in the color-disparity space is introduced. The proposed algorithm expands the RDM repeatedly through the above two steps until it converges. Experimental results on the standard data set demonstrate that the proposed algorithm achieves performance comparable to the state of the art and gives good results especially in areas such as disparity discontinuity boundaries and occluded regions, where conventional methods usually suffer.
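The cross-checking step keeps only disparities that agree between the left and right views. A minimal sketch, assuming rectified images and dense left/right disparity maps (parameter names are illustrative, not the paper's code):

```python
import numpy as np

def cross_check(disp_left, disp_right, tol=1.0):
    """Left-right consistency check; inconsistent pixels are set to NaN."""
    h, w = disp_left.shape
    xs = np.tile(np.arange(w), (h, 1))
    ys = np.repeat(np.arange(h)[:, None], w, axis=1)
    # Pixel x in the left image should land near x - d in the right image.
    xr = np.clip(np.rint(xs - disp_left).astype(int), 0, w - 1)
    consistent = np.abs(disp_left - disp_right[ys, xr]) <= tol
    out = disp_left.astype(float).copy()
    out[~consistent] = np.nan
    return out
```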
Pattern Recognition | 2007
Ju Yong Chang; Kyoung Mu Lee; Sang Uk Lee
In this paper, we propose a new stereo matching algorithm using iterated graph cuts and a mean shift filtering technique. Our algorithm estimates the disparity map progressively through the following two steps. In the first step, with a previously estimated RDM (reliable disparity map) that consists of sparse ground control points, an updated dense disparity map is constructed through an RDM-constrained energy minimization framework that can cope with occlusion. The graph cuts technique is employed to solve the proposed energy model. In the second step, a more accurate and denser RDM is estimated through a disparity cross-checking technique and mean shift filtering in the CSD (color-spatial-disparity) space. The proposed algorithm expands the reliable disparities in the RDM repeatedly through the above two steps until it converges. Experimental results on the standard data set demonstrate that the proposed algorithm achieves performance comparable to the state of the art and gives excellent results especially in areas such as disparity discontinuity boundaries and occluded regions, where conventional methods usually suffer.
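The second step's filtering operates in a joint color-spatial-disparity feature space. Below is a deliberately simple flat-kernel sketch of mean shift mode seeking in such a space, with a single bandwidth and a fixed iteration count chosen only for illustration; it is not the filtering used in the paper.

```python
import numpy as np

def mean_shift_csd(features, bandwidth=1.0, iters=20):
    """Flat-kernel mean shift in a joint color-spatial-disparity (CSD) space.

    features : (N, D) rows such as [r, g, b, x, y, d], pre-scaled so that a
               single bandwidth applies to every dimension.
    Returns the converged mode for each point.
    """
    modes = features.astype(float).copy()
    for _ in range(iters):
        for i in range(len(modes)):
            dist = np.linalg.norm(features - modes[i], axis=1)
            neighbors = features[dist <= bandwidth]
            if len(neighbors):
                modes[i] = neighbors.mean(axis=0)
    return modes
```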
Computer Vision and Image Understanding | 2015
Ju Yong Chang; Kyoung Mu Lee
Highlights: a novel large-margin formulation for semantic similarity learning; an efficient optimization algorithm to solve the proposed semidefinite program (SDP); a thorough experimental study comparing several algorithms for hierarchical image classification; state-of-the-art classification performance under the hierarchical-loss criterion. In the present paper, a novel image classification method that uses the hierarchical structure of categories to produce more semantic predictions is presented. This means that our algorithm may not always yield the correct prediction, but the result is likely to be semantically close to the right category; the proposed method is therefore able to provide a more informative classification result. The main idea of our method is twofold. First, it uses a semantic representation, instead of low-level image features, enabling the construction of high-level constraints that exploit the relationships among semantic concepts in the category hierarchy. Second, from such constraints, an optimization problem is formulated to learn a semantic similarity function in a large-margin framework. This similarity function is then used to classify test images. Experimental results demonstrate that our method provides effective classification results on various real-image datasets.
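Once a similarity function has been learned, classification reduces to picking the category with the highest similarity score, and the hierarchical-loss criterion measures how far a prediction is from the truth in the taxonomy. The sketch below assumes a bilinear similarity parameterized by a learned PSD matrix `M` and a precomputed tree-distance table; both names are illustrative assumptions, not the paper's API.

```python
import numpy as np

def predict_with_similarity(x_feat, class_embeds, M):
    """Classify by maximizing a learned bilinear semantic similarity.

    x_feat       : (D,) semantic representation of the image
    class_embeds : (C, D) semantic embeddings of the categories
    M            : (D, D) learned PSD matrix (the variable of the large-margin SDP)
    """
    scores = class_embeds @ M @ x_feat
    return int(np.argmax(scores))

def hierarchical_loss(pred, true, tree_dist):
    """Hierarchy-aware loss: distance between the two categories in the taxonomy tree."""
    return tree_dist[pred, true]
```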
Computer Vision and Image Understanding | 2018
Ju Yong Chang; Kyoung Mu Lee
This study considers the problem of 3D human pose estimation from a single RGB image by proposing a conditional random field (CRF) model over 2D poses, in which the 3D pose is obtained as a byproduct of the inference process. The unary term of the proposed CRF model is defined based on a powerful heat-map regression network originally proposed for 2D human pose estimation. This study also presents a regression network for lifting the 2D pose to a 3D pose and proposes a prior term based on the consistency between the estimated 3D pose and the 2D pose. To obtain an approximate solution of the proposed CRF model, the N-best strategy is adopted. The proposed inference algorithm can be viewed as a sequential process of bottom-up generation of 2D and 3D pose proposals from the input image by deep networks and top-down verification of these proposals by checking their consistency. To evaluate the proposed method, we use two large-scale datasets: Human3.6M and HumanEva. Experimental results show that the proposed method achieves state-of-the-art 3D human pose estimation performance.
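The N-best inference described above can be summarized as rescoring each 2D pose proposal by its heat-map (unary) score plus how well its lifted 3D pose projects back onto it. A minimal sketch, in which `lift_fn` and `project_fn` stand in for the lifting network and camera projection and are assumptions rather than the paper's interfaces:

```python
import numpy as np

def rescore_proposals(pose2d_list, unary_scores, lift_fn, project_fn, lam=1.0):
    """Pick the best of N 2D pose proposals using a 2D/3D consistency prior.

    pose2d_list  : list of (J, 2) 2D pose proposals
    unary_scores : per-proposal heat-map scores
    lift_fn      : maps a (J, 2) 2D pose to a (J, 3) 3D pose
    project_fn   : projects a (J, 3) 3D pose back to (J, 2)
    """
    best, best_score = None, -np.inf
    for p2d, u in zip(pose2d_list, unary_scores):
        p3d = lift_fn(p2d)
        consistency = -np.mean(np.linalg.norm(project_fn(p3d) - p2d, axis=1))
        score = u + lam * consistency
        if score > best_score:
            best, best_score = (p2d, p3d), score
    return best
```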
computer vision and pattern recognition | 2018
Shanxin Yuan; Guillermo Garcia-Hernando; Björn Stenger; Gyeongsik Moon; Ju Yong Chang; Kyoung Mu Lee; Pavlo Molchanov; Jan Kautz; Sina Honari; Liuhao Ge; Junsong Yuan; Xinghao Chen; Guijin Wang; Fan Yang; Kai Akiyama; Yang Wu; Qingfu Wan; Meysam Madadi; Sergio Escalera; Shile Li; Dongheui Lee; Iason Oikonomidis; Antonis A. Argyros; Tae-Kyun Kim
computer vision and pattern recognition | 2018
Gyeongsik Moon; Ju Yong Chang; Kyoung Mu Lee