Keechang Lee
Samsung
Publications
Featured research published by Keechang Lee.
european conference on computer vision | 2008
Olga Barinova; Vadim Konushin; Anton Yakubenko; Keechang Lee; Hwasup Lim; Anton Konushin
We consider the problem of estimating 3-d structure from a single still image of an outdoor urban scene. Our goal is to efficiently create 3-d models which are visually pleasing. We choose an appropriate 3-d model structure and formulate the task of 3-d reconstruction as a model-fitting problem. Our 3-d models are composed of a number of vertical walls and a ground plane, where the ground-vertical boundary is a continuous polyline. We achieve computational efficiency by special preprocessing together with a stepwise search of 3-d model parameters, dividing the problem into two smaller sub-problems on chain graphs. The use of Conditional Random Field models for both sub-problems allows us to combine various cues. We infer the orientation of the vertical walls of the 3-d model from vanishing points.
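The chain-graph decomposition is what makes this tractable: a chain-structured CRF admits exact MAP inference by dynamic programming. A minimal Viterbi-style sketch, where the unary and pairwise costs are hypothetical stand-ins for the paper's potentials:

```python
def chain_map(unary, pairwise):
    """Exact MAP labeling on a chain: unary[t][l] is the cost of label l at
    node t; pairwise(k, l) is the cost of adjacent labels (k, l)."""
    n, L = len(unary), len(unary[0])
    cost = unary[0][:]
    back = []
    for t in range(1, n):
        new_cost, ptr = [], []
        for l in range(L):
            best = min(range(L), key=lambda k: cost[k] + pairwise(k, l))
            ptr.append(best)
            new_cost.append(cost[best] + pairwise(best, l) + unary[t][l])
        cost, back = new_cost, back + [ptr]
    # Backtrack the optimal label sequence from the best final label.
    l = min(range(L), key=lambda k: cost[k])
    path = [l]
    for ptr in reversed(back):
        l = ptr[l]
        path.append(l)
    return path[::-1]
```

With strong unary evidence and a mild smoothness penalty, the labeling follows the evidence; raising the pairwise cost trades data fidelity for smoothness.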
IEEE Electron Device Letters | 2010
Seong-Jin Kim; Sang-Wook Han; Byongmin Kang; Keechang Lee; James D. K. Kim; Chang-Yeong Kim
A pixel architecture that provides not only normal 2-D images but also depth information using a conventional pinned photodiode is presented. This architecture allows the sensor to generate a real-time 3-D image of an arbitrary object. The pixel operates on the time-of-flight principle, detecting the time delay between the emitted and reflected infrared light pulses in depth-image mode. The pixel contains five transistors. Compared to the conventional 4-T CMOS image sensor, the new pixel includes an extra optimized transfer gate for high-speed charge transfer. A fill factor of more than 60% is achieved with a 12 × 12 μm² pixel size to increase sensitivity. A fabricated prototype sensor successfully captures 64 × 16 depth images between 1 and 4 m at a 5-MHz modulation frequency. The depth inaccuracy is measured to be under 2% at 1 m and 4% at 4 m, and is verified by noise analysis.
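The time-of-flight principle relates the round-trip delay to distance by d = c·t/2, and the modulation frequency fixes the unambiguous range. A small sketch (function names are illustrative, not from the paper):

```python
C = 299_792_458.0  # speed of light, m/s

def tof_depth_from_delay(delay_s: float) -> float:
    """Round-trip time delay -> one-way distance: d = c * t / 2."""
    return C * delay_s / 2.0

def unambiguous_range(mod_freq_hz: float) -> float:
    """Maximum distance measurable without phase wrapping: c / (2 f)."""
    return C / (2.0 * mod_freq_hz)
```

At the 5-MHz modulation frequency reported for the prototype, the unambiguous range works out to about 30 m, comfortably covering the 1–4 m operating range.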
international conference on image processing | 2010
Ouk Choi; Hwasup Lim; Byongmin Kang; Yong Sun Kim; Keechang Lee; James D. K. Kim; Chang-Yeong Kim
Time-of-Flight depth cameras provide a direct way to acquire range images, using the phase delay of the incoming reflected signal with respect to the emitted signal. These cameras, however, suffer from a challenging problem called range folding, which occurs due to the modular error in phase delay: measured ranges are modulo the maximum range. To the best of our knowledge, ours is the first approach to estimate the number of mods at each pixel from only a single range image. The estimation is recast as an optimization problem in the Markov random field framework, where the number of mods is treated as a label. The actual range is then recovered using the optimal number of mods at each pixel, a process we call range unfolding. As demonstrated in experiments with various range images of real scenes, the proposed method accurately determines the number of mods. As a result, the maximum range is practically extended to at least twice that specified by the modulation frequency.
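The unfolding idea can be illustrated in one dimension: each candidate range is the measured value plus an integer number of maximum-range wraps, and the wrap count is chosen to keep neighboring pixels consistent. A greedy toy scan along one scanline (the paper performs MRF optimization, not this greedy pass):

```python
def unfold(measured, r_max, max_wraps=3):
    """Toy 1D range unfolding: pick, per pixel, the wrap count k that makes
    measured + k * r_max closest to the already-unfolded left neighbor."""
    unfolded = [measured[0]]  # assume the first pixel is not wrapped
    for m in measured[1:]:
        candidates = [m + k * r_max for k in range(max_wraps + 1)]
        unfolded.append(min(candidates, key=lambda r: abs(r - unfolded[-1])))
    return unfolded
```

For example, with a 7.5 m unambiguous range, a measured sequence 7.0, 0.2, 0.5 unfolds to 7.0, 7.7, 8.0: the folded values near zero are recognized as one wrap beyond the maximum range.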
Optics Express | 2014
Mohammad Mohammadimasoudi; Jeroen Beeckman; Jungsoon Shin; Keechang Lee; Kristiaan Neyts
A wavelength shift of the photonic band gap of 141 nm is obtained by electric switching of a partly polymerized chiral liquid crystal. The devices feature high reflectivity in the photonic band gap without any noticeable degradation or disruption and have response times of 50 µs and 20 µs for switching on and off. The device consists of a mixture of photo-polymerizable liquid crystal, non-reactive nematic liquid crystal and a chiral dopant that has been polymerized with UV light. We investigate the influence of the amplitude of the applied voltage on the width and the depth of the reflection band.
Proceedings of SPIE | 2012
Yong Sun Kim; Byongmin Kang; Hwasup Lim; Ouk Choi; Keechang Lee; James D. K. Kim; Chang-Yeong Kim
This paper presents a novel Time-of-Flight (ToF) depth-denoising algorithm based on parametric noise modeling. ToF depth images contain spatially varying noise that is related to the IR intensity value at each pixel. By assuming ToF depth noise to be additive white Gaussian noise, it can be modeled as a power function of IR intensity. Meanwhile, the nonlocal means filter is widely used as an edge-preserving method for removing additive Gaussian noise. To remove spatially varying depth noise, we propose an adaptive nonlocal means filter. According to the estimated noise, the search window and weighting coefficient are adaptively determined at each pixel, so that pixels with large noise variance are filtered strongly and pixels with small noise variance are filtered weakly. Experimental results demonstrate that the proposed algorithm provides good denoising performance while preserving details and edges, compared to typical nonlocal means filtering.
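The adaptation can be sketched in one dimension: an assumed power-law noise model sigma(I) = a·I^b sets a per-pixel smoothing strength for a nonlocal means pass. The coefficients, window sizes, and the exact adaptation rule are illustrative placeholders, not the paper's calibrated values:

```python
import math

def noise_std(ir, a=2.0, b=-0.5):
    """Assumed power-law noise model: sigma(I) = a * I**b."""
    return a * ir ** b

def adaptive_nlm_1d(depth, ir, patch=1, search=3, k=1.0):
    """Toy 1D adaptive NLM: smoothing strength h tracks the local noise std,
    so noisier (dim-IR) pixels are averaged more aggressively."""
    n = len(depth)
    out = []
    for i in range(n):
        h = k * noise_std(ir[i])
        num = den = 0.0
        for j in range(max(0, i - search), min(n, i + search + 1)):
            # Sum of squared differences between patches around i and j.
            d2 = sum((depth[min(max(i + t, 0), n - 1)]
                      - depth[min(max(j + t, 0), n - 1)]) ** 2
                     for t in range(-patch, patch + 1))
            w = math.exp(-d2 / (h * h + 1e-12))
            num += w * depth[j]
            den += w
        out.append(num / den)
    return out
```

On a constant signal the filter is an identity up to floating-point error, since every patch weight is 1 and the average reproduces the input value.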
international conference on image processing | 2011
Yong Sun Kim; Hwasup Lim; Ouk Choi; Keechang Lee; James D. K. Kim; Chang-Yeong Kim
Nonlocal means filtering is an edge-preserving denoising method whose filter weights are determined by Gaussian weighted patch similarities. The nonlocal means filter shows superior performance in removing additive Gaussian noise at the expense of high computational complexity. In this paper, we propose an efficient and effective denoising method by introducing a separable implementation of the nonlocal means filter and adopting a bilateral kernel for computing patch similarities. Experimental results demonstrate that the proposed method provides comparable performance to the original nonlocal means, with lower computational complexity.
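The separability idea can be sketched by running a 1D NLM pass along rows and then along columns, reducing per-pixel work from a 2D search window to two 1D windows. A toy sketch with plain Gaussian patch weights (the paper additionally substitutes a bilateral kernel for the patch similarity; that refinement is omitted here):

```python
import math

def nlm_1d(sig, h=1.0, patch=1, search=3):
    """One 1D nonlocal means pass with Gaussian patch-similarity weights."""
    n = len(sig)
    out = []
    for i in range(n):
        num = den = 0.0
        for j in range(max(0, i - search), min(n, i + search + 1)):
            d2 = sum((sig[min(max(i + t, 0), n - 1)]
                      - sig[min(max(j + t, 0), n - 1)]) ** 2
                     for t in range(-patch, patch + 1))
            w = math.exp(-d2 / (h * h))
            num += w * sig[j]
            den += w
        out.append(num / den)
    return out

def separable_nlm(img, h=1.0):
    """Approximate 2D NLM as a horizontal pass followed by a vertical pass."""
    rows = [nlm_1d(r, h) for r in img]                # filter each row
    cols = [nlm_1d(list(c), h) for c in zip(*rows)]   # filter each column
    return [list(r) for r in zip(*cols)]              # back to row-major
```

For a search radius r, the 2D filter touches on the order of r² neighbors per pixel, while the two 1D passes touch about 2r, which is the source of the complexity savings.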
Proceedings of SPIE | 2012
Ouk Choi; Hwasup Lim; Byongmin Kang; Yong Sun Kim; Keechang Lee; James D. K. Kim; Chang-Yeong Kim
Recently a Time-of-Flight 2D/3D image sensor has been developed, which is able to capture a perfectly aligned pair of a color and a depth image. To increase the sensitivity to infrared light, the sensor electrically combines multiple adjacent pixels into a depth pixel at the expense of depth image resolution. To restore the resolution we propose a depth image super-resolution method that uses a high-resolution color image aligned with an input depth image. In the first part of our method, the input depth image is interpolated into the scale of the color image, and our discrete optimization converts the interpolated depth image into a high-resolution disparity image, whose discontinuities precisely coincide with object boundaries. Subsequently, a discontinuity-preserving filter is applied to the interpolated depth image, where the discontinuities are cloned from the high-resolution disparity image. Meanwhile, our unique way of enforcing the depth reconstruction constraint gives a high-resolution depth image that is perfectly consistent with its original input depth image. We show the effectiveness of the proposed method both quantitatively and qualitatively, comparing the proposed method with two existing methods. The experimental results demonstrate that the proposed method gives sharp high-resolution depth images with less error than the two methods for scale factors of 2, 4, and 8.
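One simple way to state a depth reconstruction constraint is to require that downsampling the high-resolution result reproduce the input. Assuming the low-resolution sensor averages s-by-s pixel blocks (an assumption here, not a detail from the abstract), each upsampled block can be shifted by a constant so its mean matches the input value:

```python
def enforce_reconstruction(hi, lo, s):
    """Shift each s-by-s block of the high-res depth `hi` by a constant so
    its mean equals the corresponding low-res input value in `lo`."""
    H = [row[:] for row in hi]
    for bi, lo_row in enumerate(lo):
        for bj, target in enumerate(lo_row):
            block = [(bi * s + di, bj * s + dj)
                     for di in range(s) for dj in range(s)]
            mean = sum(H[r][c] for r, c in block) / (s * s)
            for r, c in block:
                H[r][c] += target - mean
    return H
```

Because only a per-block constant is added, relative structure inside each block (including cloned discontinuities) is preserved while consistency with the input depth is enforced exactly.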
international solid-state circuits conference | 2012
Seong-Jin Kim; Byongmin Kang; James D. K. Kim; Keechang Lee; Chang-Yeong Kim; Kinam Kim
In this paper, we present a 2nd-generation 2D/3D imager based on the pinned-photodiode pixel structure. The time-division readout architecture for both image types (color and depth) is maintained. A complete redesign of the imager makes pixels smaller and more sensitive than before. To obtain reliable depth information using a pinned photodiode, a depth pixel is split into eight small pieces for high-speed charge transfer, and demodulated electrons are merged into one large storage node, enabling phase delay measurement with 52.8% demodulation contrast at 20 MHz frequency. Furthermore, each split pixel generates its own color information, offering a 2D image with full-HD resolution (1920×1080).
Proceedings of SPIE | 2013
Jungsoon Shin; Byongmin Kang; Keechang Lee; James D. K. Kim
We present a 3D ToF (Time-of-Flight) image sensor with an adaptive charge-subtraction scheme for background light suppression. The proposed sensor can alternately capture a high-resolution color image and a high-quality depth map in each frame. In depth mode, the sensor requires a sufficiently long integration time for accurate depth acquisition, but saturation will occur under strong background illumination. We propose to adaptively divide the integration time into N sub-integration times. In each sub-integration time, the sensor captures an image without saturation and subtracts the background charge to prevent the pixel from saturating. The subtraction results are then accumulated over the N sub-integrations, yielding a final image free of background illumination at the full integration time. Experimental results with our own ToF sensor show strong background-suppression performance. We also propose in-pixel storage and a column-level subtraction circuit for chip-level implementation of the proposed method. We believe the proposed scheme will enable 3D sensors to be used in outdoor environments.
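The sub-integration scheme can be illustrated with a toy charge model: each of the N sub-integrations stays below the full-well capacity, the estimated background charge is subtracted, and the differences accumulate to the full-integration signal. All quantities and names below are illustrative, not sensor parameters:

```python
def integrate_with_subtraction(signal_rate, background_rate, total_time,
                               n_sub, full_well):
    """Accumulate (signal + background) charge over n_sub sub-integrations,
    subtracting the background contribution after each one."""
    dt = total_time / n_sub
    acc = 0.0
    for _ in range(n_sub):
        charge = (signal_rate + background_rate) * dt
        if charge > full_well:
            raise OverflowError("sub-integration saturated; increase n_sub")
        acc += charge - background_rate * dt  # remove estimated background
    return acc
```

With a single full-length integration the same scene would overflow the well; splitting it into enough sub-integrations keeps each capture unsaturated while the accumulated difference still equals the full-time signal charge.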
international conference on image processing | 2010
Yong Sun Kim; Hwasup Lim; Byongmin Kang; Ouk Choi; Keechang Lee; James D. K. Kim; Chang-Yeong Kim
This paper presents a novel algorithm for realistic 3D face modeling from a single pair of color and time-of-flight (TOF) depth images using a generic, deformable model. Most previous approaches have emphasized either facial features or global shape consistency, resulting in models that are either visually plausible or geometrically accurate, but not both. In this paper, we introduce feature-preserving surface registration to achieve both realism and accuracy. The proposed algorithm takes advantage of both facial features from the color image and geometric information from the depth image in an iterative closest point (ICP) framework, where non-rigid registration is achieved by iteratively minimizing an energy function consisting of a feature-distance term, a surface-distance term, and a linear-elasticity term. As demonstrated in the experimental results, the proposed algorithm builds a personalized 3D face model within a few seconds with high visual quality.
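The energy named above can be sketched as a weighted sum of its three terms. The pairings, weights, and the use of scalar coordinates below are placeholders for illustration; the paper's actual correspondences and elasticity operator are not reproduced here:

```python
def energy(feat_pairs, surf_pairs, displacements, neighbors,
           w_f=1.0, w_s=1.0, w_e=0.1):
    """Weighted registration energy: feature distance + surface distance
    + a linear-elasticity (smooth deformation) penalty between neighbors."""
    e_feat = sum((p - q) ** 2 for p, q in feat_pairs)    # feature matches
    e_surf = sum((p - q) ** 2 for p, q in surf_pairs)    # closest-point pairs
    e_elast = sum((displacements[i] - displacements[j]) ** 2
                  for i, j in neighbors)                  # deformation smoothness
    return w_f * e_feat + w_s * e_surf + w_e * e_elast
```

In an ICP-style loop, the surface pairs would be re-estimated after each minimization step, while the elasticity term keeps neighboring vertices from deforming independently.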