Publication


Featured research published by Kwanghyuk Bae.


IEEE Global Conference on Consumer Electronics | 2013

Background subtraction based object extraction for Time-of-Flight sensor

Shung Han Cho; Kwanghyuk Bae; Kyu-Min Kyung; Seongyeong Jeong; Tae-Chan Kim

This paper presents a moving-object extraction method using background subtraction techniques for a Time-of-Flight sensor. A Time-of-Flight sensor obtains two different types of data at the same time: depth data representing the distance to objects, and intensity data representing the confidence level of the depth data. After a reference background model is constructed for each of the depth and intensity data, a foreground object map is obtained for each by comparison against its reference background model. The final foreground objects are extracted by combining the two foreground object maps to remove noise. Simulation results with the MESA SR4000 show that the proposed method extracts moving objects accurately.
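A minimal sketch of this two-channel scheme in Python, assuming static per-pixel background models and simple absolute-difference thresholds; the threshold values and the AND-combination rule are illustrative assumptions, not details from the paper:

```python
import numpy as np

def extract_foreground(depth, intensity, bg_depth, bg_intensity,
                       depth_thresh=100.0, intensity_thresh=20.0):
    """Combine depth- and intensity-based foreground maps (thresholds assumed)."""
    # Foreground candidates from depth: pixels far from the depth background model
    fg_depth = np.abs(depth - bg_depth) > depth_thresh
    # Foreground candidates from intensity: pixels deviating from the intensity model
    fg_intensity = np.abs(intensity - bg_intensity) > intensity_thresh
    # Requiring agreement between the two maps suppresses single-channel noise
    return fg_depth & fg_intensity
```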


International Conference on Consumer Electronics | 2011

New biometrics-acquisition method using time-of-flight depth camera

Tae-Chan Kim; Kyu-Min Kyung; Kwanghyuk Bae

This paper introduces the basic concept of a biometrics-acquisition method using a time-of-flight (ToF) depth camera, taking advantage of its ability to obtain 3D data and a near-infrared (NIR) image simultaneously. The concept can be applied to various biometrics, such as touchless or multimodal biometrics.


IEEE Global Conference on Consumer Electronics | 2012

Fast and efficient method to suppress depth ambiguity for Time-of-Flight sensors

Shung Han Cho; Kwanghyuk Bae; Kyu-Min Kyung; Tae-Chan Kim

This paper presents a fast and efficient method to suppress (i.e., not to unwrap but only to remove) depth ambiguity for Time-of-Flight sensors. Depth and amplitude data for each pixel are obtained by correlating the emitted and reflected signals. Each depth value is classified according to predetermined depth levels, and the amplitude averaged over neighboring pixels is used to detect ambiguous depths beyond the maximum measurable range. Different amplitude thresholds are applied per depth class for accurate suppression. Unlike object-segmentation-based methods, the proposed method requires no complex computation for edge detection or object segmentation. Comparative simulation results demonstrate that the proposed method suppresses depth ambiguity effectively.
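The core idea can be sketched as follows; the depth-level edges, per-level amplitude thresholds, and window size below are illustrative assumptions:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def suppress_ambiguous_depth(depth, amplitude, level_edges, amp_thresholds, win=5):
    """Remove (not unwrap) ambiguous ToF depth pixels.
    level_edges: increasing depth-level boundaries, length K
    amp_thresholds: one amplitude threshold per level, length K + 1 (assumed values)."""
    # Average amplitude over a local neighborhood of each pixel
    amp_avg = uniform_filter(amplitude.astype(float), size=win)
    # Classify each pixel into one of the predetermined depth levels
    level = np.digitize(depth, level_edges)  # values in 0..K
    # A reflection from beyond the unambiguous range returns weaker than its
    # wrapped depth suggests, so low neighborhood amplitude for the pixel's
    # depth class marks that pixel as ambiguous
    thresh = np.asarray(amp_thresholds)[level]
    return np.where(amp_avg >= thresh, depth, np.nan)  # NaN = suppressed
```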


Proceedings of SPIE | 2012

Adaptive switching filter for noise removal in highly corrupted depth maps from Time-of-Flight image sensors

Seung-Hee Lee; Kwanghyuk Bae; Kyu-Min Kyung; Tae-Chan Kim

In this work, we present an adaptive switching filter for noise reduction and sharpness preservation in depth maps produced by Time-of-Flight (ToF) image sensors. The median filter and the bilateral filter are commonly used in cost-sensitive applications where low computational complexity is needed. However, the median filter blurs fine details and edges in the depth map, while the bilateral filter works poorly when impulse noise is present in the image. Since the variance of the depth is inversely proportional to the amplitude, we suggest an adaptive filter that switches between the median filter and the bilateral filter based on the amplitude level. If a region of interest has low amplitude, indicating a low confidence level for the measured depth, the median filter is applied at that position, while regions with a high amplitude level are processed with a bilateral filter using a Gaussian kernel with adaptive weights. Results show that the suggested algorithm matches the surface smoothing of the median filter and the detail preservation of the bilateral filter. The suggested algorithm yields a significant gain in the visual quality of the depth maps while maintaining low computational cost.
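A compact sketch of the switching rule using OpenCV; the amplitude threshold and filter parameters are assumed, and the per-pixel selection is shown with precomputed full-frame filters rather than the adaptive Gaussian weights described in the paper:

```python
import cv2
import numpy as np

def adaptive_switching_filter(depth, amplitude, amp_thresh=50.0):
    """Per-pixel switch between median and bilateral filtering (threshold assumed)."""
    depth32 = depth.astype(np.float32)
    # Median filter: robust to impulse noise but blurs edges and fine detail
    med = cv2.medianBlur(depth32, 5)
    # Bilateral filter: edge-preserving but weak against impulse noise
    bil = cv2.bilateralFilter(depth32, d=5, sigmaColor=30.0, sigmaSpace=3.0)
    # Low amplitude implies noisy, low-confidence depth -> use the median result;
    # high amplitude implies reliable depth -> keep edges with the bilateral result
    return np.where(amplitude < amp_thresh, med, bil)
```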


Proceedings of SPIE | 2011

Depth upsampling method using the confidence map for a fusion of a high resolution color sensor and low resolution time-of-flight depth sensor

Kwanghyuk Bae; Kyu-Min Kyung; Tae-Chan Kim

This paper proposes a depth-upsampling method using a confidence map for the fusion of a high-resolution color sensor and a low-resolution time-of-flight depth sensor. The confidence map represents the accuracy of the depth, which depends on the reflectance of the measured object, and is estimated from the amplitude, offset, and reconstruction error of the received signal. The proposed method suppresses depth artifacts caused by the difference between low- and high-reflectance materials on an object at a given distance: although the surface of an object may lie at a single distance, the reflectance of small regions within the surface depends on their constituent materials. A weighting filter generated from the confidence map is added to the modified noise-aware filter for depth upsampling proposed by Chan et al., and its coefficients are selected adaptively. The proposed method consists of four steps: normalization, reconstruction, confidence-map estimation, and modified noise-aware filtering. In the normalization step, the amplitude and offset of each received signal are calculated, the signal is normalized by them, and the phase shift between the transmitted and received signals is measured. In the reconstruction step, the received signals are reconstructed using only the phase-shift values, and the reconstruction errors are computed. The confidence map is then estimated from the amplitudes, offsets, and reconstruction errors, and the coefficients of the modified noise-aware filter are selected adaptively by referring to it. Experiments show that the proposed method removes depth artifacts effectively.
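The confidence-map estimation can be sketched from the standard 4-phase ToF demodulation formulas; the way the three terms are combined into a single score below is an assumption, not the paper's formula:

```python
import numpy as np

def confidence_map(s0, s1, s2, s3, w_a=1.0, w_b=0.5, w_e=1.0):
    """Per-pixel confidence from four phase samples s0..s3 of the received signal.
    Demodulation follows the standard 4-phase equations; the weighted
    combination (w_a, w_b, w_e) is an assumed, illustrative choice."""
    amplitude = 0.5 * np.hypot(s1 - s3, s0 - s2)
    offset = 0.25 * (s0 + s1 + s2 + s3)
    phase = np.arctan2(s1 - s3, s0 - s2)
    # Reconstruct the normalized samples from the phase alone and measure
    # how far the measured, normalized samples deviate from them
    a = np.maximum(amplitude, 1e-6)
    norm = [(s - offset) / a for s in (s0, s1, s2, s3)]
    recon = [np.cos(phase - k * np.pi / 2) for k in range(4)]
    err = sum((n - r) ** 2 for n, r in zip(norm, recon))
    # High amplitude, low ambient offset, and low reconstruction error are
    # taken to mean high confidence (sign conventions assumed)
    return w_a * amplitude - w_b * offset - w_e * err
```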


International Conference on Consumer Electronics | 2014

Background elimination method in the event based vision sensor for dynamic environment

Kyu-Min Kyung; Kwanghyuk Bae; Shung Han Cho; Seongyeong Jeong; Tae-Chan Kim

This paper presents a background elimination method for event-based vision sensors in dynamic environments. Event-based vision sensors quickly output digital data, two-dimensional coordinates and time-stamps, in the form of events. The proposed method classifies events into groups according to their position vectors, which contain direction and velocity information, so that each classified group has its own motion feature. Unintended events are eliminated based on an analysis of the motion feature of each group. This makes the event-based sensor robust in dynamic environments and guarantees accurate motion detection.
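One way to realize this grouping, assuming per-event motion vectors have already been estimated (e.g., by event-based optical flow) and that the largest motion group corresponds to unintended background events; both assumptions go beyond what the abstract states:

```python
import numpy as np
from sklearn.cluster import KMeans

def eliminate_background_events(events, velocities, n_groups=3):
    """events: (N, 3) array of (x, y, t); velocities: (N, 2) per-event
    direction/velocity vectors (assumed precomputed).
    K-means grouping and the largest-group-is-background rule are assumptions."""
    labels = KMeans(n_clusters=n_groups, n_init=10).fit_predict(velocities)
    # Treat the most populous motion group as unintended background events
    background = np.bincount(labels).argmax()
    keep = labels != background
    return events[keep], velocities[keep]
```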


International Conference on Consumer Electronics | 2013

Interpolation method for ToF depth sensor with pseudo 4-tap pixel architecture

Tae-Chan Kim; Kwanghyuk Bae; Kyu-Min Kyung; Shung Han Cho

This paper presents an interpolation method for a ToF depth sensor with a pseudo 4-tap pixel architecture, which uses different modulation signals for different pixel rows. While the faster data acquisition reduces motion artifacts, calculating depth from two vertically adjacent pixels lowers the depth resolution. The proposed method uses the offset values as similarity weights to improve depth resolution. Experimental results show that edge artifacts in the vertical direction are reduced.
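A sketch of offset-weighted vertical interpolation, assuming depth is valid only on even rows and that similarity is a Gaussian over the offset difference; both the row layout and the weighting function are assumptions:

```python
import numpy as np

def offset_weighted_interpolation(depth, offset, sigma=10.0):
    """Fill odd rows of a vertically subsampled depth map using the offset
    channel as a similarity cue (layout, Gaussian weights, and sigma assumed)."""
    out = depth.copy().astype(float)
    for y in range(1, depth.shape[0] - 1, 2):  # odd rows to interpolate
        # Weight each vertical neighbor by how similar its offset is to the
        # target pixel's offset: a similar offset suggests the same surface
        w_up = np.exp(-(offset[y] - offset[y - 1]) ** 2 / (2 * sigma ** 2))
        w_dn = np.exp(-(offset[y] - offset[y + 1]) ** 2 / (2 * sigma ** 2))
        out[y] = (w_up * depth[y - 1] + w_dn * depth[y + 1]) / (w_up + w_dn + 1e-9)
    return out
```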


International SoC Design Conference | 2012

Perspectives on 3D ToF sensor SoC integration for user interface application

Tae-Yon Lee; Jung-kyu Jung; Dong-Ki Min; Yoon-dong Park; Kwanghyuk Bae; Tae-Chan Kim

System-on-a-chip integration of a three-dimensional time-of-flight image sensor raises several technical issues that differ from those of a conventional two-dimensional CMOS image sensor. In this paper, we review these issues through several case examples and suggest solutions, especially from the viewpoint of pixel design.


International Conference on Consumer Electronics - Berlin | 2012

Gesture-dependent depth data extraction for low resolution Time-of-Flight camera

Kyu-Min Kyung; Kwanghyuk Bae; Shung Han Cho; Tae-Chan Kim

This paper presents a method to extract gesture-dependent depth data from the low-resolution depth image of a ToF sensor. The depth image is divided into sub-regions for fast movement detection, and the sub-regions are classified into foreground and background using the intensity data. Gesture-dependent depth data are found by measuring the degree of movement from the difference between the previous and current depth images. Finally, the ToF sensor zooms in on the center of the sub-regions containing gesture-dependent depth data. Experimental results show that the proposed method detects minute gestures effectively.
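A minimal sketch of this sub-region pipeline; the grid size and both thresholds are assumed values, not taken from the paper:

```python
import numpy as np

def gesture_subregions(prev_depth, cur_depth, intensity, grid=(8, 8),
                       intensity_thresh=30.0, motion_thresh=5.0):
    """Return centers of sub-regions showing gesture motion (parameters assumed)."""
    h, w = cur_depth.shape
    gh, gw = h // grid[0], w // grid[1]
    centers = []
    for i in range(grid[0]):
        for j in range(grid[1]):
            ys, xs = slice(i * gh, (i + 1) * gh), slice(j * gw, (j + 1) * gw)
            # Foreground test: keep only sub-regions with high mean intensity
            if intensity[ys, xs].mean() < intensity_thresh:
                continue
            # Degree of movement: mean absolute depth difference between frames
            motion = np.abs(cur_depth[ys, xs].astype(float)
                            - prev_depth[ys, xs]).mean()
            if motion > motion_thresh:
                centers.append((i * gh + gh // 2, j * gw + gw // 2))
    return centers  # candidate centers for the sensor to zoom in on
```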


Archive | 2012

Three-dimensional image sensors, cameras, and imaging systems

Tae-Yon Lee; Joon-Ho Lee; Yoon-dong Park; Kyoung-ho Ha; Yong-jei Lee; Kwanghyuk Bae; Kyu-Min Kyung; Tae-Chan Kim
