Publication


Featured research published by Ouk Choi.


IEEE Transactions on Image Processing | 2014

A consensus-driven approach for structure and texture aware depth map upsampling.

Ouk Choi; Seung-Won Jung

This paper presents a method for increasing spatial resolution of a depth map using its corresponding high-resolution (HR) color image as a guide. Most of the previous methods rely on the assumption that depth discontinuities are highly correlated with color boundaries, leading to artifacts in the regions where the assumption is broken. To prevent scene texture from being erroneously transferred to reconstructed scene surfaces, we propose a framework for dividing the color image into different regions and applying different methods tailored to each region type. For the region classification, we first segment the low-resolution (LR) depth map into regions of smooth surfaces, and then use them to guide the segmentation of the color image. Using the consensus of multiple image segmentations obtained by different super-pixel generation methods, the color image is divided into continuous and discontinuous regions: in the continuous regions, their HR depth values are interpolated from LR depth samples without exploiting the color information. In the discontinuous regions, their HR depth values are estimated by sequentially applying more complicated depth-histogram-based methods. Through experiments, we show that each step of our method improves depth map upsampling both quantitatively and qualitatively. We also show that our method can be extended to handle real data with occluded regions caused by the displacement between color and depth sensors.
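The consensus step described above can be illustrated with a toy sketch: a pixel is treated as discontinuous only when a majority of independently generated segmentations place a boundary there. The majority rule and the binary boundary maps are illustrative assumptions, not the paper's exact scheme.

```python
import numpy as np

def consensus_boundaries(boundary_maps, min_votes=None):
    """boundary_maps: list of equally shaped 0/1 arrays, one per
    super-pixel method. Returns a boolean map of agreed boundaries."""
    votes = np.sum(np.stack(boundary_maps), axis=0)
    if min_votes is None:
        min_votes = len(boundary_maps) // 2 + 1  # simple majority
    return votes >= min_votes

# Three segmentation methods agree only on the first pixel.
maps = [np.array([[1, 0, 1]]), np.array([[1, 1, 0]]), np.array([[1, 0, 0]])]
print(consensus_boundaries(maps))  # [[ True False False]]
```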


International Conference on Image Processing | 2010

Range unfolding for Time-of-Flight depth cameras

Ouk Choi; Hwasup Lim; Byongmin Kang; Yong Sun Kim; Keechang Lee; James D. K. Kim; Chang-Yeong Kim

Time-of-Flight depth cameras provide a direct way to acquire range images, using the phase delay of the incoming reflected signal with respect to the emitted signal. These cameras, however, suffer from a challenging problem called range folding, which occurs due to the modular error in the phase delay: measured ranges are the actual ranges modulo the maximum range. To the best of our knowledge, ours is the first approach to estimate the number of mods at each pixel from only a single range image. The estimation is recast as an optimization problem in the Markov random field framework, where the number of mods is treated as a label. The actual range is then recovered using the optimal number of mods at each pixel, a process we call range unfolding. As demonstrated in experiments with various range images of real scenes, the proposed method accurately determines the number of mods. As a result, the maximum range is practically extended to at least twice that specified by the modulation frequency.
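The per-pixel "unfolding" arithmetic can be sketched as follows: once an optimization has assigned a wrap count k to a pixel, the actual range is recovered from the folded measurement. The function names and the 20 MHz example frequency are illustrative assumptions.

```python
def max_range(mod_freq_hz, c=299792458.0):
    """Non-ambiguous range of a ToF camera at a given modulation frequency."""
    return c / (2.0 * mod_freq_hz)

def unfold(measured_m, k, r_max):
    """Recover the actual range from a folded measurement and wrap count k."""
    return measured_m + k * r_max

r_max = max_range(20e6)        # ~7.49 m at 20 MHz
print(unfold(2.5, 1, r_max))   # ≈ 9.99 m: a point one wrap beyond r_max
```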


Proceedings of SPIE | 2012

Parametric model-based noise reduction for ToF depth sensors

Yong Sun Kim; Byongmin Kang; Hwasup Lim; Ouk Choi; Keechang Lee; James D. K. Kim; Chang-Yeong Kim

This paper presents a novel Time-of-Flight (ToF) depth denoising algorithm based on parametric noise modeling. A ToF depth image contains space-varying noise that is related to the IR intensity value at each pixel. Assuming that ToF depth noise is additive white Gaussian noise, it can be modeled as a power function of IR intensity. The nonlocal means filter is widely used as an edge-preserving method for removing additive Gaussian noise. To remove space-varying depth noise, we propose an adaptive nonlocal means filter. According to the estimated noise, the search window and weighting coefficient are adaptively determined at each pixel, so that pixels with large noise variance are strongly filtered and pixels with small noise variance are weakly filtered. Experimental results demonstrate that the proposed algorithm provides good denoising performance while preserving details and edges, compared to typical nonlocal means filtering.
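A minimal sketch of the parametric model described above: depth-noise standard deviation as a power function of IR intensity, with the filter adapting its search window to the estimated noise. The coefficients a and b and the window mapping are illustrative assumptions; in practice they would be fitted to measurements from the actual sensor.

```python
import numpy as np

def noise_std(intensity, a=50.0, b=-0.5):
    """Estimated depth-noise std at a pixel with the given IR intensity."""
    return a * np.power(intensity, b)

def search_radius(intensity, base=2, scale=1.0):
    """Adaptive NLM search window: noisier (darker) pixels search wider."""
    return base + round(scale * noise_std(intensity))

print(noise_std(100.0))     # ≈ 5.0  (bright pixel: low noise)
print(search_radius(25.0))  # 12    (dark pixel: wider search window)
```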


International Conference on Image Processing | 2012

Wide range stereo time-of-flight camera

Ouk Choi; Seungkyu Lee

Time-of-Flight cameras provide real-time depth information at each pixel. These cameras, however, suffer from modular errors, recording shortened distance measurements when objects are farther than a certain maximum range. Previous error-correction methods use either a single depth map or successively acquired multi-frequency depth maps. In general, they are prone to incorrect range extension when the scene is discontinuous or when the camera or objects are moving. Our method, based on a stereo Time-of-Flight camera, deals with moving cameras and objects by acquiring a pair of depth maps simultaneously. Challenges caused by noise and differing viewpoints are resolved by an iterative Markov random field optimization technique that delivers consistent recovery of actual depth values beyond the maximum range. The experiments show that the proposed method extends the usable range of Time-of-Flight cameras to more than twice the nominal maximum.


International Conference on Image Processing | 2011

Bi-layer inpainting for novel view synthesis

Hwasup Lim; Yong Sun Kim; Seungkyu Lee; Ouk Choi; James D. K. Kim; Changyeong Kim

We present a bi-layer inpainting method for synthesizing novel views from a single color image and its corresponding depth map under the exemplar-based inpainting framework. Unlike conventional image inpainting, the choices of inpainting direction and sample regions are important for inpainting disoccluded regions, which disclose hidden background regions in the new viewpoint. The proposed algorithm first labels the boundaries along the disoccluded regions as belonging to either foreground or background objects, and then separates their surrounding regions into foreground and background regions using the graph cut algorithm. The disoccluded regions are then filled from the background boundary with best-match patches taken from the background regions. As demonstrated in the experimental results, the proposed method recovers the disoccluded regions with visually plausible quality.


International Conference on Image Processing | 2011

Separable bilateral nonlocal means

Yong Sun Kim; Hwasup Lim; Ouk Choi; Keechang Lee; James D. K. Kim; Chang-Yeong Kim

Nonlocal means filtering is an edge-preserving denoising method whose filter weights are determined by Gaussian weighted patch similarities. The nonlocal means filter shows superior performance in removing additive Gaussian noise at the expense of high computational complexity. In this paper, we propose an efficient and effective denoising method by introducing a separable implementation of the nonlocal means filter and adopting a bilateral kernel for computing patch similarities. Experimental results demonstrate that the proposed method provides comparable performance to the original nonlocal means, with lower computational complexity.
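The separable idea above can be sketched by running a one-dimensional nonlocal means pass along rows, then along columns, instead of a single two-dimensional pass. The patch and window sizes are illustrative assumptions, and plain Gaussian patch weighting is used here rather than the bilateral kernel.

```python
import numpy as np

def nlm_1d(signal, patch=1, search=3, h=10.0):
    """1-D nonlocal means: each sample becomes a patch-similarity
    weighted average of samples in its search window."""
    n = len(signal)
    out = np.zeros(n)
    for i in range(n):
        w_sum, acc = 0.0, 0.0
        for j in range(max(0, i - search), min(n, i + search + 1)):
            d = 0.0  # squared patch distance between neighborhoods of i, j
            for t in range(-patch, patch + 1):
                pi = min(max(i + t, 0), n - 1)
                pj = min(max(j + t, 0), n - 1)
                d += (signal[pi] - signal[pj]) ** 2
            w = np.exp(-d / (h * h))
            w_sum += w
            acc += w * signal[j]
        out[i] = acc / w_sum
    return out

def separable_nlm(img, **kw):
    """Row pass followed by a column pass."""
    rows = np.apply_along_axis(nlm_1d, 1, img, **kw)
    return np.apply_along_axis(nlm_1d, 0, rows, **kw)

print(np.allclose(separable_nlm(np.full((4, 4), 5.0)), 5.0))  # True
```

The two 1-D passes cost O(search) weights per pixel each, versus O(search^2) for the full 2-D window, which is the source of the complexity reduction.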


Optical Engineering | 2013

Interframe consistent multifrequency phase unwrapping for time-of-flight cameras

Ouk Choi; Seungkyu Lee; Hwasup Lim

Commercially available time-of-flight cameras illuminate the scene with amplitude-modulated infrared light signals and detect their reflections to provide per-pixel depth maps in real time. These cameras, however, suffer from an inherent problem called phase wrapping, which occurs due to the modular ambiguity in the phase delay measurement. As a result, the measured distance to a scene point becomes much shorter than its actual distance if the point is farther than a certain maximum range. Multifrequency phase unwrapping methods recover the actual distance values by exploiting the consistency of the disambiguated depth values across depth maps of the same scene acquired at different modulation frequencies. For robust and accurate estimation against noise, we build a cost function that evolves over time to enforce both interframe depth consistency and intraframe depth continuity. As demonstrated in experiments with real scenes, the proposed method correctly disambiguates the depth measurements, extending the maximum range restricted by the modulation frequency.
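The core consistency idea can be sketched per pixel: each modulation frequency yields a set of candidate depths (measurement plus integer multiples of its non-ambiguous range), and the wrap counts that make the candidates agree are chosen. The exhaustive search and averaging below are illustrative simplifications of the cost-function optimization.

```python
def unwrap_two_freq(d1, d2, r1, r2, k_max=3):
    """Return the unwrapped depth whose candidates from two
    modulation frequencies agree most closely.
    d1, d2: wrapped measurements; r1, r2: non-ambiguous ranges."""
    best = None
    for k1 in range(k_max + 1):
        for k2 in range(k_max + 1):
            c1, c2 = d1 + k1 * r1, d2 + k2 * r2
            err = abs(c1 - c2)
            if best is None or err < best[0]:
                best = (err, 0.5 * (c1 + c2))
    return best[1]

# A point at 9.0 m seen with r1 = 7.5 m and r2 = 5.0 m wraps to 1.5 and 4.0 m.
print(unwrap_two_freq(1.5, 4.0, 7.5, 5.0))  # 9.0
```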


Asian Conference on Computer Vision | 2012

Fusion of time-of-flight and stereo for disambiguation of depth measurements

Ouk Choi; Seungkyu Lee

The complementary nature of time-of-flight and stereo sensing has led to fusion systems that provide high-quality depth maps, robust against the depth bias and random noise of the time-of-flight camera as well as the lack of scene texture. This paper shows that such a fusion system is also effective for disambiguating time-of-flight depth measurements corrupted by phase wrapping, which records depth values much smaller than the actual values when scene points are farther than a certain maximum range. To recover the unwrapped depth map, we build a Markov random field based on the constraint that an accurately unwrapped depth value should minimize the dissimilarity between its projections on the stereo images. The unwrapped depth map is then adapted to stereo matching, reducing the matching ambiguity and enhancing the depth quality in textureless regions. Through experiments we show that the proposed method extends the usable range of the time-of-flight camera, delivering unambiguous depth maps of real scenes.
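The disambiguation constraint can be sketched per pixel: among the candidate depths (wrapped measurement plus integer multiples of the maximum range), keep the one whose stereo projections look most alike. Here `dissimilarity` stands in for a photometric patch cost over the stereo pair; the toy cost below is an assumption for demonstration only.

```python
def disambiguate(measured, r_max, dissimilarity, k_max=3):
    """Pick the candidate depth with the lowest stereo matching cost."""
    candidates = [measured + k * r_max for k in range(k_max + 1)]
    return min(candidates, key=dissimilarity)

# Illustrative cost: pretend the photometrically best depth is 11.0 m.
cost = lambda d: abs(d - 11.0)
print(disambiguate(3.5, 7.5, cost))  # 11.0 (the k = 1 candidate)
```

In the paper this per-pixel choice is regularized jointly over the image via the Markov random field rather than decided independently as here.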


Proceedings of SPIE | 2012

Discrete and continuous optimizations for depth image super-resolution

Ouk Choi; Hwasup Lim; Byongmin Kang; Yong Sun Kim; Keechang Lee; James D. K. Kim; Chang-Yeong Kim

Recently, a Time-of-Flight 2D/3D image sensor has been developed that is able to capture a perfectly aligned pair of color and depth images. To increase the sensitivity to infrared light, the sensor electrically combines multiple adjacent pixels into a single depth pixel at the expense of depth image resolution. To restore the resolution, we propose a depth image super-resolution method that uses a high-resolution color image aligned with the input depth image. In the first part of our method, the input depth image is interpolated to the scale of the color image, and our discrete optimization converts the interpolated depth image into a high-resolution disparity image whose discontinuities precisely coincide with object boundaries. Subsequently, a discontinuity-preserving filter is applied to the interpolated depth image, where the discontinuities are cloned from the high-resolution disparity image. Meanwhile, our unique way of enforcing the depth reconstruction constraint gives a high-resolution depth image that is perfectly consistent with its original input depth image. We show the effectiveness of the proposed method both quantitatively and qualitatively, comparing it with two existing methods. The experimental results demonstrate that the proposed method gives sharp high-resolution depth images with less error than the two methods for scale factors of 2, 4, and 8.
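The reconstruction constraint mentioned above can be sketched as: re-averaging each s x s block of the high-resolution depth should reproduce the original low-resolution depth exactly. Correcting each block's mean, as below, is one simple way to enforce this; it is an assumption for illustration, not the paper's exact scheme.

```python
import numpy as np

def enforce_reconstruction(hr, lr, s):
    """Shift each s x s block of hr so its mean equals the
    corresponding lr sample (hr is s times larger than lr)."""
    hr = hr.astype(np.float64).copy()
    for i in range(lr.shape[0]):
        for j in range(lr.shape[1]):
            block = hr[i * s:(i + 1) * s, j * s:(j + 1) * s]
            block += lr[i, j] - block.mean()  # in-place mean correction
    return hr

lr = np.array([[1.0, 2.0], [3.0, 4.0]])
hr = np.arange(16, dtype=float).reshape(4, 4)
out = enforce_reconstruction(hr, lr, 2)
print(out[:2, :2].mean(), out[2:, 2:].mean())  # 1.0 4.0
```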


2016 International Conference on Electronics, Information, and Communications (ICEIC) | 2016

Robust binarization of gray-coded pattern images for smart projectors

Ouk Choi; Hwasup Lim; Sang Chul Ahn

A smart projector, equipped with a camera, automatically adjusts its keystone and color transformation according to the shape, position, orientation, and reflectance of the projection surfaces. To realize this automatic adjustment, smart projectors build a mapping between pixel locations in the projector image and their corresponding locations in the camera image of the projected surface. Complementary gray-coded patterns play an important role in building such correspondences: corresponding pixels share the same code, so correspondence search becomes as simple as reading out the codes. If the camera views a wider area than the projection region, a significant number of camera pixels capture surfaces outside the projection region, and detecting and rejecting them is indispensable for building correct correspondences. In this paper, we propose a robust method for detecting and rejecting those pixels, so that the camera images of the patterns are correctly binarized. Experimental results show that the proposed method effectively rejects outliers while preserving accurately binarized pixels.
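The binarization of complementary patterns can be sketched as follows: each pattern is projected both normally and inverted, a pixel's code bit is decided by which capture is brighter, and pixels where the two captures differ too little (likely outside the projection region) are rejected. The fixed threshold is an assumption; the paper's rejection criterion is more robust than this sketch.

```python
import numpy as np

def binarize(img_pattern, img_inverse, thresh=10):
    """Decode one complementary gray-code pattern pair.
    Returns (bits, valid): the decoded bit per pixel, and a mask of
    pixels whose contrast between the two captures is trustworthy."""
    diff = img_pattern.astype(np.int32) - img_inverse.astype(np.int32)
    bits = diff > 0
    valid = np.abs(diff) >= thresh
    return bits, valid

pat = np.array([[200, 50, 120]], dtype=np.uint8)
inv = np.array([[60, 180, 118]], dtype=np.uint8)
bits, valid = binarize(pat, inv)
print(bits[0].tolist(), valid[0].tolist())  # [True, False, True] [True, True, False]
```

The last pixel illustrates the rejection case: nearly equal brightness under the pattern and its inverse means the pixel is not reliably lit by the projector.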
