Network


Latest external collaboration at the country level.

Hotspot


Dive into the research topics where Changjae Oh is active.

Publication


Featured research published by Changjae Oh.


IEEE Transactions on Image Processing | 2014

Probability-Based Rendering for View Synthesis

Bumsub Ham; Dongbo Min; Changjae Oh; Minh N. Do; Kwanghoon Sohn

In this paper, a probability-based rendering (PBR) method is described for reconstructing an intermediate view with a steady-state matching probability (SSMP) density function. Conventionally, given multiple reference images, the intermediate view is synthesized via the depth image-based rendering technique in which geometric information (e.g., depth) is explicitly leveraged, which leads to serious rendering artifacts in the synthesized view even with small depth errors. We address this problem by formulating the rendering process as an image fusion in which the textures of all probable matching points are adaptively blended with the SSMP representing the likelihood that points among the input reference images are matched. The PBR is hence more robust against depth estimation errors than existing view synthesis approaches. The matching probability (MP) in the steady state, the SSMP, is inferred for each pixel via the random walk with restart (RWR). The RWR always guarantees a visually consistent MP, as opposed to conventional optimization schemes (e.g., diffusion or filtering-based approaches), whose accuracy depends heavily on the parameters used. Experimental results demonstrate the superiority of the PBR over existing view synthesis approaches both qualitatively and quantitatively. In particular, the PBR is effective in suppressing flicker artifacts in virtual video rendering even though no temporal aspect is considered. Moreover, the depth map computed by our RWR-based method (by simply choosing the most probable matching point) is shown to be comparable with those of state-of-the-art local stereo matching methods.
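As an illustration of the core recursion, the following is a minimal sketch of a random walk with restart iterated to its steady state; the transition matrix `W`, the initial probabilities `p0`, and the restart probability are toy placeholders rather than the paper's actual graph construction.

```python
# Minimal sketch of a random walk with restart (RWR) iterated to its
# steady state.  W, p0, and the restart probability are toy placeholders,
# not the paper's actual graph construction.
import numpy as np

def rwr_steady_state(W, p0, restart_prob=0.2, n_iter=200, tol=1e-9):
    """Iterate p <- (1 - c) * W.T @ p + c * p0 until convergence.

    W  : (n, n) row-stochastic transition matrix (W[i, j]: probability of
         stepping from node i to node j)
    p0 : (n,) initial matching probability (e.g., derived from a cost)
    """
    p = p0.copy()
    for _ in range(n_iter):
        p_next = (1.0 - restart_prob) * W.T @ p + restart_prob * p0
        if np.abs(p_next - p).sum() < tol:
            break
        p = p_next
    return p_next

# Toy usage: 4 candidate matching points on a small chain graph.
W = np.array([[0.0, 1.0, 0.0, 0.0],
              [0.5, 0.0, 0.5, 0.0],
              [0.0, 0.5, 0.0, 0.5],
              [0.0, 0.0, 1.0, 0.0]])
p0 = np.array([0.7, 0.1, 0.1, 0.1])
print(rwr_steady_state(W, p0))   # steady-state matching probability (SSMP)
```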


IEEE Transactions on Broadcasting | 2015

Visual Fatigue Relaxation for Stereoscopic Video via Nonlinear Disparity Remapping

Changjae Oh; Bumsub Ham; Sunghwan Choi; Kwanghoon Sohn

A nonlinear disparity remapping scheme is presented to enhance the visual comfort of stereoscopic videos. The stereoscopic video is analyzed to predict a degree of fatigue from the viewpoint of three factors: 1) spatial frequency; 2) disparity magnitude; and 3) disparity motion. The degree of fatigue is then estimated locally. It can be visualized as an index map, the so-called “visual fatigue map,” and an overall fatigue score is obtained by pooling the visual fatigue map. Based on this information, a nonlinear remapping operator is generated in two phases: 1) disparity range adaptation and 2) operator nonlinearization. First, a disparity range is automatically adjusted according to the determined overall fatigue score. Second, rather than linearly adjusting the disparity range of the original video to the determined disparity range, a nonlinear remapping operator is constructed such that the disparity range of problematic regions is compressed while that of comfortable regions is stretched. The proposed scheme is verified via subjective evaluations in which visual fatigue and depth sensation are compared among original videos, linearly remapped videos, and nonlinearly remapped videos. Experimental results show that the nonlinearly remapped videos provide more comfort than the linearly remapped videos without losing depth sensation.
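The following is a hedged sketch of what a nonlinear remapping operator of this kind can look like: disparity bins judged fatiguing receive a small gain (compression) and comfortable bins a large gain (stretching), with the cumulative gains rescaled to a target range. The comfort weighting and ranges below are illustrative assumptions, not the paper's visual-fatigue model.

```python
# Hedged sketch of a nonlinear disparity remapping operator: fatiguing
# disparity bins get a small gain (compressed), comfortable bins a large
# gain (stretched), and the cumulative gains are rescaled to the target
# range.  The comfort weighting is a made-up placeholder.
import numpy as np

def build_remapping_operator(d_min, d_max, comfort_weight, target_range, n_bins=256):
    """comfort_weight(d) > 0: large for comfortable disparities, small for
    fatiguing ones.  Returns a function mapping disparity -> remapped
    disparity via a monotone nonlinear curve."""
    d = np.linspace(d_min, d_max, n_bins)
    gains = comfort_weight(d)                        # per-bin slope
    curve = np.concatenate(([0.0], np.cumsum(gains)[:-1]))
    curve = curve / curve[-1] * (target_range[1] - target_range[0]) + target_range[0]
    return lambda disparity: np.interp(disparity, d, curve)

# Toy usage: compress large disparities, keep small ones mostly intact.
remap = build_remapping_operator(
    d_min=-60, d_max=60,
    comfort_weight=lambda d: np.exp(-(np.abs(d) / 40.0) ** 2) + 0.1,
    target_range=(-30, 30))
print(remap(np.array([-60.0, 0.0, 60.0])))           # endpoints map to +/-30
```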


Expert Systems With Applications | 2017

Robust interactive image segmentation using structure-aware labeling

Changjae Oh; Bumsub Ham; Kwanghoon Sohn

Highlights: an interactive segmentation method robust to inaccurate initial labels; computing the reliability of initial labels for accurate segmentation; demonstrating performance in extensive experiments with synthetic and manual labels.

Interactive image segmentation has remained an active research topic in image processing and graphics, since user intention can be incorporated to enhance performance. It can be employed on mobile devices, which now allow user interaction as an input, enabling various applications. Most interactive segmentation methods assume that the initial labels are correctly and carefully assigned to some parts of the regions to segment. Inaccurate labels, such as foreground labels in background regions, lead to incorrect segments even when only a few labels are inaccurate, which is not acceptable for practical use such as mobile applications. In this paper, we present an interactive segmentation method that is robust to inaccurate initial labels (scribbles). To address this problem, we propose a structure-aware labeling method using the occurrence and co-occurrence probability (OCP) of color values for each initial label in a unified framework. The occurrence probability captures the global distribution of all color values within each label, while the co-occurrence probability encodes the local distribution of color values around the label. We show that nonlocal regularization together with the OCP enables image segmentation that is robust to inaccurately assigned labels and alleviates the small-cut problem. We analyze the theoretical relations of our approach to other segmentation methods. Extensive experiments with synthetic and manual labels show that our approach outperforms the state of the art.
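A much-simplified sketch of the occurrence/co-occurrence idea is shown below: the occurrence term uses the global intensity histogram of pixels under a scribble, the co-occurrence term only a local window around the scribble, and the two are blended to score each pixel. Grayscale histograms and the 50/50 blend stand in for the paper's color model and unified framework.

```python
# Much-simplified sketch of the occurrence / co-occurrence idea for a
# single scribbled label.  Grayscale histograms and the 50/50 blend are
# stand-ins for the paper's color model and unified framework.
import numpy as np

def label_probability(image, scribble_mask, n_bins=32, win=7):
    """image: (H, W) float in [0, 1]; scribble_mask: (H, W) bool.
    Returns a per-pixel probability of belonging to the scribbled label."""
    bins = np.clip((image * n_bins).astype(int), 0, n_bins - 1)

    # Occurrence: global histogram of intensities under the scribble.
    occ = np.bincount(bins[scribble_mask], minlength=n_bins) + 1.0
    occ /= occ.sum()

    # Co-occurrence: histogram restricted to a neighborhood of the scribble.
    local = np.zeros_like(scribble_mask)
    for y, x in zip(*np.where(scribble_mask)):
        local[max(0, y - win):y + win + 1, max(0, x - win):x + win + 1] = True
    co = np.bincount(bins[local], minlength=n_bins) + 1.0
    co /= co.sum()

    # Blend the two distributions and look up every pixel's bin.
    blended = 0.5 * occ + 0.5 * co
    return blended[bins]
```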


British Machine Vision Conference | 2012

Probabilistic Correspondence Matching using Random Walk with Restart.

Changjae Oh; Bumsub Ham; Kwanghoon Sohn

This paper presents a probabilistic method for correspondence matching within the framework of the random walk with restart (RWR). The matching cost is reformulated as a matching probability, which enables the RWR to be utilized for matching correspondences. Our method has two main advantages. First, the proposed method guarantees a non-trivial steady-state solution for a given initial matching probability owing to the restart term in the RWR. This means that the number of iterations, a crucial parameter that influences the performance of conventional algorithms, need not be specified, giving consistent results regardless of the evolution time. Second, only an adjacent neighborhood is considered when the matching probabilities are inferred, which lowers the computational complexity without sacrificing performance. Experimental results show that the performance of the proposed method is competitive with that of state-of-the-art methods both qualitatively and quantitatively.
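As an illustration of the reformulation step, the sketch below converts a matching cost volume into per-pixel matching probabilities; the Gaussian-of-cost form and the bandwidth `sigma` are illustrative assumptions, not the paper's exact formula.

```python
# Hedged sketch of turning a matching cost volume into per-pixel matching
# probabilities before a probabilistic inference such as the RWR is run.
# The Gaussian-of-cost form and sigma are illustrative choices.
import numpy as np

def cost_to_probability(cost_volume, sigma=1.0):
    """cost_volume: (H, W, D) matching costs over D disparity candidates.
    Returns probabilities summing to 1 over the candidates at each pixel."""
    prob = np.exp(-cost_volume / sigma)
    return prob / prob.sum(axis=2, keepdims=True)

# Toy usage: 2x2 image with 3 disparity candidates per pixel.
p0 = cost_to_probability(np.random.rand(2, 2, 3))
print(p0.sum(axis=2))            # all ones: valid initial probabilities
```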


International Conference on Consumer Electronics | 2016

Non-parametric human segmentation using support vector machine

Kyuwon Kim; Changjae Oh; Kwanghoon Sohn

Human segmentation is an important task for digital cameras. In this study, we present a framework for non-parametric human segmentation based on a support vector machine (SVM). By exploiting spatial and color features of training images, the framework achieves noticeably better human segmentation results than GrabCut in terms of the overlap ratio with the ground truth.
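A hedged sketch of the general idea follows: per-pixel spatial and color features are extracted from exemplar images, an SVM is trained on them, and a new image is segmented pixel by pixel. The feature layout and the RBF kernel are assumptions, not the paper's exact setup.

```python
# Hedged sketch: train an SVM on per-pixel spatial + color features from
# exemplar images, then predict a person / background label per pixel.
import numpy as np
from sklearn.svm import SVC

def pixel_features(image):
    """image: (H, W, 3) float RGB.  Returns (H*W, 5) rows [x, y, r, g, b]."""
    h, w, _ = image.shape
    ys, xs = np.mgrid[0:h, 0:w]
    coords = np.stack([xs / w, ys / h], axis=-1)
    return np.concatenate([coords, image], axis=-1).reshape(-1, 5)

def train_human_svm(train_images, train_masks):
    """train_masks: binary (H, W) person masks matching train_images."""
    X = np.concatenate([pixel_features(im) for im in train_images])
    y = np.concatenate([m.reshape(-1) for m in train_masks])
    return SVC(kernel="rbf", gamma="scale").fit(X, y)

def segment(model, image):
    h, w, _ = image.shape
    return model.predict(pixel_features(image)).reshape(h, w)
```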


IEEE Transactions on Image Processing | 2015

Depth Analogy: Data-Driven Approach for Single Image Depth Estimation Using Gradient Samples

Sunghwan Choi; Dongbo Min; Bumsub Ham; Youngjung Kim; Changjae Oh; Kwanghoon Sohn

Inferring scene depth from a single monocular image is a highly ill-posed problem in computer vision. This paper presents a new gradient-domain approach, called depth analogy, that makes use of analogy as a means for synthesizing a target depth field when a collection of RGB-D image pairs is given as training data. Specifically, the proposed method employs a non-parametric learning process that creates an analogous depth field by sampling reliable depth gradients using visual correspondence established on training image pairs. Unlike existing data-driven approaches that directly select depth values from training data, our framework transfers depth gradients as reconstruction cues, which are then integrated by Poisson reconstruction. The performance of most conventional approaches relies heavily on the training RGB-D data used in the process, and such a dependency severely degrades the quality of reconstructed depth maps when the desired depth distribution of an input image is quite different from that of the training data, e.g., outdoor versus indoor scenes. Our key observation is that using depth gradients in the reconstruction is less sensitive to scene characteristics, providing better cues for depth recovery. Thus, our gradient-domain approach can support a great variety of training range datasets that involve substantial appearance and geometric variations. The experimental results demonstrate that our (depth) gradient-domain approach outperforms existing data-driven approaches working directly in the depth domain, even when only uncorrelated training datasets are available.
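The gradient-integration step can be sketched as a least-squares Poisson-style reconstruction, as below; the sparse `lsqr` solver and the forward-difference layout are illustrative choices, not the paper's implementation.

```python
# Sketch of the gradient-domain step: given a transferred depth-gradient
# field, recover the depth map by least-squares (Poisson-style)
# integration.  Solver and difference layout are illustrative choices.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import lsqr

def integrate_gradients(gx, gy, anchor_value=0.0):
    """gx: (H, W-1) horizontal and gy: (H-1, W) vertical forward
    differences of the target depth.  Returns the (H, W) depth map whose
    gradients best match them in the least-squares sense."""
    h, w = gy.shape[0] + 1, gx.shape[1] + 1

    def diff_op(n):                       # (n-1, n) forward-difference matrix
        return sp.diags([-1.0, 1.0], [0, 1], shape=(n - 1, n))

    Dx = sp.kron(sp.identity(h), diff_op(w))      # horizontal differences
    Dy = sp.kron(diff_op(h), sp.identity(w))      # vertical differences
    anchor = sp.coo_matrix(([1.0], ([0], [0])), shape=(1, h * w))

    A = sp.vstack([Dx, Dy, anchor]).tocsr()
    b = np.concatenate([gx.ravel(), gy.ravel(), [anchor_value]])
    return lsqr(A, b)[0].reshape(h, w)    # depth, anchored at pixel (0, 0)
```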


Conference on Industrial Electronics and Applications | 2013

Disparity search range estimation based on dense stereo matching

Rueihung Li; Bumsub Ham; Changjae Oh; Kwanghoon Sohn

This paper presents a scheme for estimating the disparity search range based on hierarchical stereo matching. Specifying a proper search range is important, since it prevents the solution from being trapped in local minima and greatly reduces the time needed to estimate disparity maps. Conventionally, the search range has been estimated by finding a sparse set of correspondences via feature matching techniques. Instead, we address this problem by considering how the dense correspondences, for which the search range is ultimately needed, can be estimated without a predefined search range. First, we estimate the dense correspondences by adapting a simple local stereo matching technique with an arbitrary search range. A hierarchical scheme is leveraged to reduce computational cost and memory usage. Then, reliability checks are performed to eliminate unreliable correspondences. Finally, the search range is estimated by observing the distribution of the reliable correspondences. For quantitative evaluation, a new error metric, the biased root mean squared error (B-RMSE), is proposed, which distinguishes whether the estimated search range is narrower or wider than the true disparity range. The experimental results show that the proposed method gives a more accurate search range than the conventional method.
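A hedged sketch of the final step follows: dense correspondences are filtered by a reliability check (a left-right consistency test here) and the search range is read off the distribution of the surviving disparities. The consistency threshold and percentile margins are illustrative assumptions.

```python
# Hedged sketch: filter dense correspondences with a left-right
# consistency check, then take the search range from the distribution of
# the surviving disparities.  Threshold and percentiles are illustrative.
import numpy as np

def estimate_search_range(disp_left, disp_right, lr_thresh=1.0, pct=(1, 99)):
    """disp_left / disp_right: (H, W) disparity maps of the two views.
    Returns (d_min, d_max) covering the reliable correspondences."""
    h, w = disp_left.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # Left-right consistency: a pixel is reliable if its disparity agrees
    # with the disparity of the pixel it maps to in the right view.
    x_right = np.clip(np.round(xs - disp_left).astype(int), 0, w - 1)
    reliable = disp_left[np.abs(disp_left - disp_right[ys, x_right]) < lr_thresh]
    lo, hi = np.percentile(reliable, pct)
    return int(np.floor(lo)), int(np.ceil(hi))
```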


International Symposium on Broadband Multimedia Systems and Broadcasting | 2012

Hybrid approach for accurate depth acquisition with structured light and stereo camera

Sunghwan Choi; Bumsub Ham; Changjae Oh; Hyon-Gon Choo; Jin Woong Kim; Kwanghoon Sohn

In this paper, we propose a hybrid approach for accurate depth acquisition that combines a structured light-based method with stereo matching. By projecting additional light patterns onto a scene, a structured light-based method works well on textureless regions where stereo matching performs poorly. In contrast, the patterns projected onto richly textured regions hinder reliable depth estimation in the structured light-based method, whereas stereo matching excels there. We exploit these complementary characteristics by combining the results from both methods, which outperforms either method alone. In our fusion framework, a hybrid stereo matching is introduced in which the disparity search range for each pixel is limited based on the initial depth map obtained by the structured light-based method. In addition, we introduce a confidence-based fusion method which combines the depth maps while incorporating the advantages of each method. The experimental results show that the proposed method estimates accurate depth information where other methods fail.
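The confidence-based fusion can be sketched as a per-pixel convex combination of the two depth maps, as below; the gradient-magnitude texture measure is a simplified stand-in for whatever confidences the two methods actually provide.

```python
# Hedged sketch of a confidence-based fusion: each pixel takes a convex
# combination of the structured-light depth and the stereo depth,
# weighted by per-pixel confidences.
import numpy as np

def texture_confidence(image):
    """Simple stereo confidence: local intensity gradient magnitude,
    high where the scene is textured (stereo reliable), low otherwise."""
    gy, gx = np.gradient(image)
    return np.hypot(gx, gy)

def fuse_depth(depth_sl, conf_sl, depth_stereo, conf_stereo, eps=1e-6):
    """All inputs are (H, W); confidences are non-negative maps."""
    w_sl = conf_sl / (conf_sl + conf_stereo + eps)
    return w_sl * depth_sl + (1.0 - w_sl) * depth_stereo
```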


IEEE Transactions on Image Processing | 2016

Structure Selective Depth Superresolution for RGB-D Cameras

Youngjung Kim; Bumsub Ham; Changjae Oh; Kwanghoon Sohn

This paper describes a method for high-quality depth superresolution. The standard formulations of image-guided depth upsampling, using simple joint filtering or quadratic optimization, lead to texture copying and depth bleeding artifacts. These artifacts are caused by the inherent discrepancy between structures in data from different sensors. Although there exists some correlation between depth and intensity discontinuities, they differ in distribution and formation. To tackle this problem, we formulate an optimization model using a nonconvex regularizer. A nonlocal affinity established in a high-dimensional feature space is used to offer precisely localized depth boundaries. We show that the proposed method iteratively handles differences in structure between depth and intensity images. This property significantly reduces texture copying and depth bleeding artifacts on a variety of range datasets. We also propose a fast alternating direction method of multipliers algorithm to solve our optimization problem. Our solver shows a noticeable speed-up compared with the conventional majorize-minimize algorithm. Extensive experiments with synthetic and real-world datasets demonstrate that the proposed method is superior to existing methods.
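As a rough illustration of the nonlocal affinity, the sketch below computes weights over a search window from distances in a feature space combining spatial position and guidance intensity; the bilateral-style kernel and bandwidths are assumptions, not the paper's formulation.

```python
# Rough illustration of a nonlocal affinity: weights over a search window
# computed from distances in a feature space of spatial position plus
# guidance intensity.  The kernel and bandwidths are assumptions.
import numpy as np

def nonlocal_affinity(guide, y, x, radius=5, sigma_s=3.0, sigma_r=0.1):
    """guide: (H, W) intensity image used as guidance.  Returns the
    normalized affinity of pixel (y, x) to its (2r+1)^2 window."""
    h, w = guide.shape
    ys = np.clip(np.arange(y - radius, y + radius + 1), 0, h - 1)
    xs = np.clip(np.arange(x - radius, x + radius + 1), 0, w - 1)
    yy, xx = np.meshgrid(ys, xs, indexing="ij")
    spatial = ((yy - y) ** 2 + (xx - x) ** 2) / (2.0 * sigma_s ** 2)
    photometric = (guide[yy, xx] - guide[y, x]) ** 2 / (2.0 * sigma_r ** 2)
    weights = np.exp(-(spatial + photometric))
    return weights / weights.sum()
```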


International Conference on Connected Vehicles and Expo | 2015

Segment-based free space estimation using plane normal vector in disparity space

Jeonghyun Seo; Changjae Oh; Kwanghoon Sohn

This paper proposes a framework for segment-based free space estimation using plane normal vectors with stereo vision. An image is divided into compact superpixels, each of which is viewed as a plane characterized by its normal vector in disparity space. To deal with variations in illumination and shading in real traffic scenes, we estimate depth information for the segmented stereo pair. The representative normal vector is then computed at the superpixel level, which simultaneously alleviates the problems of conventional color-based and depth-based approaches. Based on the assumption that the central-bottom region of the input image is navigable, the free space is then determined by clustering the plane normal vectors with the K-means algorithm. In experiments, the proposed approach is evaluated on the KITTI dataset, for which we provide ground-truth labels for the free space region. The experimental results demonstrate that the proposed framework effectively estimates the free space under various real traffic scenes and outperforms current state-of-the-art methods both qualitatively and quantitatively.
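A hedged sketch of the two main steps follows: a plane is fitted to each superpixel in disparity space, its unit normal serves as the superpixel descriptor, and the normals are clustered with K-means, keeping the cluster of a central-bottom seed superpixel as free space. Superpixel extraction and the number of clusters are assumed.

```python
# Hedged sketch: fit a plane d = a*x + b*y + c per superpixel in
# disparity space, use its unit normal as the descriptor, cluster the
# normals with K-means, and keep the seed superpixel's cluster.
import numpy as np
from sklearn.cluster import KMeans

def superpixel_normals(disparity, labels):
    """disparity: (H, W); labels: (H, W) superpixel ids.  Returns an
    (n_superpixels, 3) array of unit plane normals in disparity space."""
    normals = []
    for sp_id in np.unique(labels):
        ys, xs = np.where(labels == sp_id)
        A = np.stack([xs, ys, np.ones_like(xs)], axis=1).astype(float)
        coef, *_ = np.linalg.lstsq(A, disparity[ys, xs], rcond=None)
        n = np.array([coef[0], coef[1], -1.0])      # normal of d = a*x + b*y + c
        normals.append(n / np.linalg.norm(n))
    return np.array(normals)

def free_space_mask(normals, seed_superpixel, n_clusters=3):
    """Superpixels sharing the seed's normal cluster are labeled free space."""
    clusters = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(normals)
    return clusters == clusters[seed_superpixel]
```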

Collaboration


Dive into Changjae Oh's collaborations.

Top Co-Authors

Dongbo Min

Chungnam National University


Hyon-Gon Choo

Electronics and Telecommunications Research Institute
