Byongmin Kang
Samsung
Publication
Featured research published by Byongmin Kang.
IEEE Journal of Solid-state Circuits | 2012
Seong-Jin Kim; James D. K. Kim; Byongmin Kang; Keechang Lee
We propose a CMOS image sensor with a time-division multiplexing pixel architecture using a standard pinned photodiode for capturing a 2-D color image as well as extracting 3-D depth information of a target object. The proposed pixel can alternately provide both color and depth images in each frame. Two split photodiodes and four transfer gates in each pixel improve the transfer speed of generated electrons, making the pixel capable of demodulating a high-frequency time-of-flight signal. In addition, the four-shared pixel architecture acquires a color image with high spatial resolution and generates a reliable depth map by inherent binning operation in the charge domain. A 712 × 496 pixel array has been fabricated using a 0.11-μm standard CMOS imaging process and fully characterized. A 6-μm pixel with a 34.5% aperture ratio can be operated at a 10-MHz modulation frequency with 70% demodulation contrast. We have successfully captured both images of exactly the same scene from the fabricated test chip. It shows a depth uncertainty of less than 60 mm and a linearity error of about 2% between 1 and 3 m distance with a 50-ms integration time. Moreover, high-gain readout operation further improves the performance, achieving about 43-mm depth uncertainty at 3 m.
IEEE Electron Device Letters | 2010
Seong-Jin Kim; Sang-Wook Han; Byongmin Kang; Keechang Lee; James D. K. Kim; Chang-Yeong Kim
A pixel architecture for providing not only normal 2-D images but also depth information by using a conventional pinned photodiode is presented. This pixel architecture allows the sensor to generate a real-time 3-D image of an arbitrary object. The operation of the pixel is based on the time-of-flight principle, detecting the time delay between the emitted and reflected infrared light pulses in a depth image mode. The pixel contains five transistors. Compared to the conventional 4-T CMOS image sensor, the new pixel includes an extra optimized transfer gate for high-speed charge transfer. A fill factor of more than 60% is achieved with a 12 × 12 μm² pixel size to increase the sensitivity. A fabricated prototype sensor successfully captures 64 × 16 depth images between 1 and 4 m at a 5-MHz modulation frequency. The depth inaccuracy is measured to be under 2% at 1 m and 4% at 4 m and is verified by noise analysis.
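The time-of-flight principle used in this abstract can be written down directly. A minimal sketch (the constants and function names below are mine, not the paper's):

```python
# Minimal sketch of the time-of-flight range equations assumed by pulsed /
# modulated ToF sensors: depth = c * t_delay / 2 (light covers the distance
# twice), and the unambiguous range at modulation frequency f is c / (2 * f).
C = 299_792_458.0  # speed of light, m/s

def tof_depth(t_delay_s: float) -> float:
    """Distance in meters from a measured round-trip delay in seconds."""
    return C * t_delay_s / 2.0

def unambiguous_range(f_mod_hz: float) -> float:
    """Maximum range measurable without ambiguity: the round trip must
    fit within one modulation period."""
    return C / (2.0 * f_mod_hz)
```

At the 5-MHz modulation frequency reported above, the unambiguous range is roughly 30 m, comfortably covering the 1-4 m operating range.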
international conference on image processing | 2010
Ouk Choi; Hwasup Lim; Byongmin Kang; Yong Sun Kim; Keechang Lee; James D. K. Kim; Chang-Yeong Kim
Time-of-Flight depth cameras provide a direct way to acquire range images, using the phase delay of the incoming reflected signal with respect to the emitted signal. These cameras, however, suffer from a challenging problem called range folding, which occurs due to the modular error in phase delay: measured ranges are modulo the maximum range. To the best of our knowledge, ours is the first approach to estimate the number of mods at each pixel from only a single range image. The estimation is recast as an optimization problem in the Markov random field framework, where the number of mods is treated as a label. The actual range is then recovered using the optimal number of mods at each pixel, a process we call range unfolding. As demonstrated in experiments with various range images of real scenes, the proposed method accurately determines the number of mods. As a result, the maximum range is practically extended to at least twice that specified by the modulation frequency.
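The unfolding step itself is simple once the per-pixel number of mods is known; a sketch, assuming ranges wrap at the unambiguous range c / (2f):

```python
C = 299_792_458.0  # speed of light, m/s

def unfold_range(measured_m: float, n_mods: int, f_mod_hz: float) -> float:
    """Recover the actual range from a folded measurement: each 'mod' adds
    one full unambiguous range c / (2 * f_mod) to the measured value."""
    max_range = C / (2.0 * f_mod_hz)
    return measured_m + n_mods * max_range
```

The hard part, which the paper addresses with MRF optimization, is estimating the per-pixel number of mods from a single range image; this sketch only applies the result of that estimation.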
Proceedings of SPIE | 2012
Yong Sun Kim; Byongmin Kang; Hwasup Lim; Ouk Choi; Keechang Lee; James D. K. Kim; Chang-Yeong Kim
This paper presents a novel Time-of-Flight (ToF) depth denoising algorithm based on parametric noise modeling. A ToF depth image contains spatially varying noise whose level is related to the IR intensity value at each pixel. By treating ToF depth noise as additive white Gaussian noise, it can be modeled as a power function of the IR intensity. Meanwhile, the nonlocal means filter is widely used as an edge-preserving method for removing additive Gaussian noise. To remove spatially varying depth noise, we propose adaptive nonlocal means filtering. Based on the estimated noise, the search window and weighting coefficient are adaptively determined at each pixel, so that pixels with large noise variance are strongly filtered and pixels with small noise variance are weakly filtered. Experimental results demonstrate that the proposed algorithm provides good denoising performance while preserving details and edges better than typical nonlocal means filtering.
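A toy 1-D version of the idea, with a hypothetical power-law noise model (the parameters a and b are illustrative placeholders, not the paper's fitted values):

```python
import numpy as np

def noise_sigma(ir, a=30.0, b=-0.5):
    """Assumed power-function noise model: sigma(I) = a * I**b."""
    return a * np.power(np.maximum(np.asarray(ir, float), 1e-6), b)

def adaptive_nlm_1d(depth, ir, half_patch=1):
    """Toy 1-D adaptive non-local means: noisier pixels (per the model
    above) get a wider search window and a larger smoothing parameter h."""
    d = np.asarray(depth, float)
    sig = noise_sigma(ir)
    pad = half_patch + 8                    # covers the widest search window
    dp = np.pad(d, pad, mode="edge")
    out = np.empty_like(d)
    for i in range(d.size):
        c = i + pad
        half_search = 2 + min(int(round(sig[i])), 6)  # noise-adaptive window
        h = max(sig[i], 1e-3)                         # noise-adaptive strength
        ref = dp[c - half_patch:c + half_patch + 1]
        w_sum = v_sum = 0.0
        for j in range(c - half_search, c + half_search + 1):
            cand = dp[j - half_patch:j + half_patch + 1]
            w = np.exp(-np.sum((ref - cand) ** 2) / (h ** 2))
            w_sum += w
            v_sum += w * dp[j]
        out[i] = v_sum / w_sum
    return out
```

A constant-depth input passes through unchanged, and pixels with low IR intensity (hence large estimated sigma) receive much stronger averaging than bright ones, which is the behavior the abstract describes.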
Proceedings of SPIE | 2011
Byongmin Kang; Seong-Jin Kim; Seungkyu Lee; Keechang Lee; James D. K. Kim; Changyeong Kim
A Time-of-Flight (ToF) camera uses near-infrared (NIR) light to obtain the distance from the camera to an object. The distance is calculated from the time shift between the emitted and reflected NIR signals. ToF cameras generally modulate the NIR with a square wave rather than a sinusoidal wave because the latter is difficult to implement in hardware. The previous method, based on a simple trigonometric function, estimates the time shift from the difference of electrons generated by the reflected square wave; the estimated time shift therefore includes harmonic distortion caused by the nonlinearity of the trigonometric function. In this paper, we propose a new linear estimation method to reduce this harmonic distortion. For quantitative evaluation, the proposed method is compared to the previous method using our prototype ToF depth camera. Experimental results show that the distance obtained by the proposed method is more accurate than that obtained by the previous method.
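For reference, the "previous method using a simple trigonometric function" in such ToF systems is usually the four-phase arctangent estimator, sketched below as an assumed baseline (the paper's linear estimator is not reproduced here):

```python
import math

C = 299_792_458.0  # speed of light, m/s

def phase_arctan(q0: float, q1: float, q2: float, q3: float) -> float:
    """Conventional four-phase estimator. It is exact for sinusoidal
    modulation; with square-wave modulation its nonlinearity introduces
    the harmonic distortion discussed in the paper."""
    return math.atan2(q3 - q1, q0 - q2) % (2.0 * math.pi)

def phase_to_depth(phase_rad: float, f_mod_hz: float) -> float:
    """Convert the demodulated phase delay to a distance in meters."""
    return C * phase_rad / (4.0 * math.pi * f_mod_hz)
```

Here q0..q3 are the charges integrated in the four phase windows; the half-cycle phase (pi radians) at a 20-MHz modulation frequency maps to roughly 3.75 m.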
Proceedings of SPIE | 2012
Seungkyu Lee; Byongmin Kang; James D. K. Kim; Chang Yeong Kim
A time-of-flight depth sensor provides a faster and easier way to capture and reconstruct 3D scenes. The depth sensor, however, suffers from motion blur caused by any movement of the camera or objects. In this manuscript, we propose a novel method for detecting and eliminating motion-blurred depth pixels that can be implemented on any ToF depth sensor with light memory and computation requirements. The blur detection uses the relations among the electric charge amounts: at each depth-value calculation step, a blurred pixel is detected simply by checking the four electric charge values produced by the four internal control signals. Once blurred pixels are detected, their depth values are replaced by the nearest normal pixel values. With this method, we eliminate motion blur before the depth image is built, with only a few additional calculations and a small amount of extra memory.
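The charge-consistency idea can be sketched as follows. The balance relation q0 + q2 = q1 + q3 (each pair of complementary phase windows collects the full returned light in a static scene) is my reading of "the relations of electric charge amount", so treat the exact test as an assumption:

```python
def is_blur_pixel(q0: float, q1: float, q2: float, q3: float,
                  tol: float = 0.05) -> bool:
    """Flag a pixel as motion-blurred when the two complementary charge
    sums disagree by more than `tol` of the total collected charge.
    For a static scene both sums capture the same returned light, so the
    imbalance should be near zero. (Assumed relation, not the paper's
    exact criterion.)"""
    total = q0 + q1 + q2 + q3
    if total <= 0.0:
        return False  # no signal: nothing to judge
    return abs((q0 + q2) - (q1 + q3)) / total > tol
```

A check like this costs one addition-and-compare per pixel per frame, which matches the abstract's claim of light memory and computation requirements.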
Proceedings of SPIE | 2012
Ouk Choi; Hwasup Lim; Byongmin Kang; Yong Sun Kim; Keechang Lee; James D. K. Kim; Chang-Yeong Kim
Recently a Time-of-Flight 2D/3D image sensor has been developed that is able to capture a perfectly aligned pair of color and depth images. To increase the sensitivity to infrared light, the sensor electrically combines multiple adjacent pixels into one depth pixel at the expense of depth image resolution. To restore the resolution, we propose a depth image super-resolution method that uses a high-resolution color image aligned with the input depth image. In the first part of our method, the input depth image is interpolated to the scale of the color image, and a discrete optimization converts the interpolated depth image into a high-resolution disparity image whose discontinuities precisely coincide with object boundaries. Subsequently, a discontinuity-preserving filter is applied to the interpolated depth image, where the discontinuities are cloned from the high-resolution disparity image. Meanwhile, our unique way of enforcing the depth reconstruction constraint gives a high-resolution depth image that is perfectly consistent with the original input depth image. We show the effectiveness of the proposed method both quantitatively and qualitatively, comparing it with two existing methods. The experimental results demonstrate that the proposed method gives sharp high-resolution depth images with less error than the two methods for scale factors of 2, 4, and 8.
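As a much simpler stand-in for the optimization-based pipeline above, color-guided depth upsampling can be sketched with a joint bilateral filter; all parameter values are illustrative, and the guide is assumed to be a single-channel image:

```python
import numpy as np

def joint_bilateral_upsample(depth_lr, color_hr, scale,
                             sigma_s=1.0, sigma_r=10.0):
    """Toy color-guided depth upsampling (not the paper's method): each
    high-res depth value is a weighted average of nearby low-res depth
    samples, weighted by spatial distance and by color similarity in the
    aligned high-res guide image, so depth edges snap to color edges."""
    H, W = color_hr.shape[:2]
    h_lr, w_lr = depth_lr.shape
    out = np.zeros((H, W))
    rad = 1  # low-res neighborhood radius
    for y in range(H):
        for x in range(W):
            yl, xl = y / scale, x / scale        # position in low-res grid
            y0, x0 = int(yl), int(xl)
            w_sum = v_sum = 0.0
            for dy in range(-rad, rad + 1):
                for dx in range(-rad, rad + 1):
                    ys, xs = y0 + dy, x0 + dx
                    if not (0 <= ys < h_lr and 0 <= xs < w_lr):
                        continue
                    # spatial weight, measured in low-res coordinates
                    w_s = np.exp(-((ys - yl) ** 2 + (xs - xl) ** 2)
                                 / (2 * sigma_s ** 2))
                    # range weight from the high-res color guide
                    yc = min(int(ys * scale), H - 1)
                    xc = min(int(xs * scale), W - 1)
                    dc = float(color_hr[y, x]) - float(color_hr[yc, xc])
                    w_r = np.exp(-dc ** 2 / (2 * sigma_r ** 2))
                    w = w_s * w_r
                    w_sum += w
                    v_sum += w * depth_lr[ys, xs]
            out[y, x] = v_sum / max(w_sum, 1e-12)
    return out
```

Unlike the paper's approach, this sketch enforces no depth reconstruction constraint, so the downsampled output is not guaranteed to reproduce the input exactly; it only illustrates the color-guidance idea.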
IEEE Transactions on Circuits and Systems | 2015
Jaehyuk Choi; Jungsoon Shin; Byongmin Kang
We present a CMOS image sensor with an integrated background-suppression scheme for detecting small signals buried in unwanted background signals. For background suppression, differential signals with suppressed common-mode background are sampled within a short sub-sensing time to avoid saturation from strong background signals. The analog differential signals are digitally accumulated multiple times within one integration time for high SNR. The column-parallel background-suppression circuits are pipelined to achieve a short sub-sensing time. Moreover, the additional operations for noise cancelling are merged with the background suppression, so no extra timing for noise cancelling is required during the sub-sensing time. To suppress stronger background signals, the sensitivity can be decreased using in-pixel capacitors when strong background signals are present. The prototype image sensor with a 1328 × 1008 pixel array has been fabricated in a 0.11-μm 1P4M CIS process. We have successfully captured images from the fabricated sensor chip with a strong background signal at over 10 klx scene illuminance without optical filters. The background-to-signal ratio is 32.1 dB.
international solid-state circuits conference | 2012
Seong-Jin Kim; Byongmin Kang; James D. K. Kim; Keechang Lee; Chang-Yeong Kim; Kinam Kim
In this paper, we present a 2nd-generation 2D/3D imager based on the pinned-photodiode pixel structure. The time-division readout architecture for both image types (color and depth) is maintained. A complete redesign of the imager makes the pixels smaller and more sensitive than before. To obtain reliable depth information using a pinned photodiode, a depth pixel is split into eight small pieces for high-speed charge transfer, and the demodulated electrons are merged into one large storage node, enabling phase-delay measurement with 52.8% demodulation contrast at a 20-MHz frequency. Furthermore, each split pixel generates its own color information, offering a 2D image with full-HD resolution (1920 × 1080).
Optics Letters | 2014
Sun Kwon Kim; Byongmin Kang; Jingu Heo; Seung-Won Jung; Ouk Choi
We present a method to enhance the depth quality of a time-of-flight (ToF) camera without additional devices or hardware modifications. By controlling the turn-off patterns of the camera's LEDs, we obtain depth and normal maps simultaneously. Sixteen subphase images are acquired by varying the gate-pulse timing and light-emission pattern of the camera. The subphase images allow us to obtain a normal map, which is combined with the depth maps for improved depth details. These details typically cannot be captured by conventional ToF cameras. With the proposed method, the average absolute difference between the measured and laser-scanned depth maps decreased from 4.57 to 3.77 mm.