
Publication


Featured research published by Yunsu Bok.


Computer Vision and Pattern Recognition | 2015

Accurate depth map estimation from a lenslet light field camera

Hae-Gon Jeon; Jaesik Park; Gyeongmin Choe; Jinsun Park; Yunsu Bok; Yu-Wing Tai; In So Kweon

This paper introduces an algorithm that accurately estimates depth maps using a lenslet light field camera. The proposed algorithm estimates multi-view stereo correspondences with sub-pixel accuracy using a cost volume. The foundation for constructing accurate costs is threefold. First, the sub-aperture images are displaced using the phase shift theorem. Second, the gradient costs are adaptively aggregated using the angular coordinates of the light field. Third, the feature correspondences between the sub-aperture images are used as additional constraints. With the cost volume, the multi-label optimization propagates and corrects the depth map in weak-texture regions. Finally, the local depth map is iteratively refined by fitting a local quadratic function to estimate a non-discrete depth map. Because micro-lens images contain unexpected distortions, a method that corrects this error is also proposed. The effectiveness of the proposed algorithm is demonstrated on challenging real-world examples, including comparisons with advanced depth estimation algorithms.
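
The phase-shift step the abstract invokes can be illustrated in a few lines of NumPy. This is a generic sketch of the Fourier phase shift theorem, not the authors' implementation; the function name and interface are assumptions.

```python
import numpy as np

def subpixel_shift(img, dx, dy):
    """Shift a 2D image by a sub-pixel amount (dx, dy) using the Fourier
    phase shift theorem: a spatial shift corresponds to multiplying the
    spectrum by a linear phase ramp."""
    h, w = img.shape
    u = np.fft.fftfreq(w)              # horizontal frequencies (cycles/pixel)
    v = np.fft.fftfreq(h)              # vertical frequencies
    U, V = np.meshgrid(u, v)
    ramp = np.exp(-2j * np.pi * (U * dx + V * dy))
    shifted = np.fft.ifft2(np.fft.fft2(img) * ramp)
    return np.real(shifted)            # discard tiny imaginary residue
```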


International Conference on Computer Vision | 2009

Capturing village-level heritages with a hand-held camera-laser fusion sensor

Yunsu Bok; Donggul Choi; Yekeun Jeong; In So Kweon

Preserving a heritage site as a digital archive is as important as preserving its physical structure. Digital preservation is essential for massive heritage sites, which are often defenseless against various types of destruction and require frequent restoration. However, capturing such sites becomes increasingly difficult as their scale grows. In this paper, we present a novel approach to reconstructing a massive-scale structure using a hand-held fusion sensor system. The approach includes new methods for calibration, motion estimation, and accumulated error reduction. The proposed sensor system consists of four cameras and two 2D laser scanners to obtain a wide field of view. A new calibration method achieves a much lower reprojection error than the previous method. A motion estimation method provides accurate and robust relative poses by fully utilizing the abundant observations. At the last stage, the accumulated error reduction removes the drift that occurs over tens of thousands of frames by adopting a weak GPS prior and loop closing. The system is therefore able to capture and geo-register large heritage architecture spanning square kilometers. Furthermore, because no assumption or restriction is made, the user can freely move the system and can control the level of detail of the digital heritage without any effort. To demonstrate the performance, we have captured several important Korean heritage sites, including Gyeongbok-Gung, the royal palace of Korea. The experimental results show that the estimated route fits Google's satellite imagery and DGPS data, while the detailed appearances of representative structures are captured and preserved well.
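
As a toy illustration of the accumulated-error-reduction idea (not the paper's actual method, which combines a weak GPS prior with loop closing in a full optimization), the sketch below spreads a loop-closure end-point error linearly along an estimated trajectory:

```python
import numpy as np

def distribute_loop_error(positions, closure_error):
    """Naive drift correction: when a loop closure reveals an end-point
    position error, spread the correction linearly along the trajectory.
    positions: (N, 3) estimated path; closure_error: (3,) end-point error."""
    n = len(positions)
    weights = np.linspace(0.0, 1.0, n)[:, None]  # 0 at start, 1 at loop end
    return positions - weights * closure_error[None, :]
```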


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2017

Geometric Calibration of Micro-Lens-Based Light Field Cameras Using Line Features

Yunsu Bok; Hae-Gon Jeon; In So Kweon

We present a novel method for the geometric calibration of micro-lens-based light field cameras. Accurate geometric calibration is the basis of various applications. Instead of using sub-aperture images, we directly utilize raw images for calibration. We select appropriate regions in raw images and extract line features from micro-lens images in those regions. For the entire process, we formulate a new projection model of a micro-lens-based light field camera, which contains a smaller number of parameters than previous models. The model is transformed into a linear form using line features. We compute the initial solution of both the intrinsic and the extrinsic parameters by a linear computation and refine them via non-linear optimization. Experimental results demonstrate the accuracy of the correspondences between rays and pixels in raw images, as estimated by the proposed method.
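
The linear-initialization-plus-nonlinear-refinement pattern described above can be sketched generically with SciPy; a placeholder residual stands in for the paper's projection model, which is not reproduced here, and all names are illustrative.

```python
import numpy as np
from scipy.optimize import least_squares

def refine_parameters(params0, residual_fn):
    """Generic refinement step: starting from a linear initialization
    params0, minimize the stacked residuals with a robust loss.
    residual_fn(params) must return a 1-D residual vector."""
    result = least_squares(residual_fn, params0, loss="huber", method="trf")
    return result.x

if __name__ == "__main__":
    # Placeholder residual: converges to `target` from the zero initialization.
    target = np.array([1.0, 2.0, 3.0])
    print(refine_parameters(np.zeros(3), lambda p: p - target))
```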


International Conference on Robotics and Automation | 2007

Accurate Motion Estimation and High-Precision 3D Reconstruction by Sensor Fusion

Yunsu Bok; Youngbae Hwang; In So Kweon

The CCD camera and the 2D laser range finder are widely used for motion estimation and 3D reconstruction. Each sensor has its own strengths and weaknesses, and low-level fusion of the two allows them to complement each other. We combine these two sensors to perform motion estimation and 3D reconstruction simultaneously and precisely, and we develop a motion estimation scheme appropriate for this sensor system. In the proposed method, the motion between two frames is estimated using three points among the scan data and refined by nonlinear optimization. We validate the accuracy of the proposed method using real images. The results show that the proposed system is a practical solution for motion estimation as well as for 3D reconstruction.
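
The three-point step has a standard closed form; below is a minimal sketch under the assumption that the three scan points have known correspondences (names are illustrative, not the authors' code). The nonlinear refinement the abstract mentions would then polish this estimate.

```python
import numpy as np

def rigid_from_three_points(src, dst):
    """Closed-form rigid motion (R, t) from three non-collinear 3D point
    correspondences via the Kabsch/SVD method, so that R @ src[i] + t
    approximates dst[i]. src, dst: (3, 3) arrays, one point per row."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)        # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = mu_d - R @ mu_s
    return R, t
```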


Pattern Recognition Letters | 2016

Automated checkerboard detection and indexing using circular boundaries

Yunsu Bok; Hyowon Ha; In So Kweon

Highlights: low probability of missing true corners due to user-defined parameters of feature extraction; discarding outliers using characteristics of checkerboard corners; index extension using characteristics of neighboring checkerboard corners; performance of the proposed method in terms of success ratio; robustness of the proposed method against partial views, lens distortion, and image noise.

This paper presents a new algorithm for automated checkerboard detection and indexing. Automated checkerboard detection is essential for reducing user input in any camera calibration process. We adopt an iterative refinement algorithm to extract corner candidates. To utilize the characteristics of checkerboard corners, we extract a circular boundary around each candidate and find its sign-changing indices. We initialize an arbitrary point and its two neighboring points as seeds and assign world coordinates to the other points. The largest set of world-coordinate-assigned points is selected as the detected checkerboard. The performance of the proposed algorithm is evaluated using images of various sizes and conditions.
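
The circular-boundary test can be sketched as follows: sample intensities on a circle around a candidate and count sign changes against the mean. A true checkerboard corner alternates dark and light quadrants and yields four changes. The sampling radius and count are hypothetical parameters, not the paper's values.

```python
import numpy as np

def sign_changes_on_circle(img, cx, cy, radius, n_samples=64):
    """Count sign changes of intensity (relative to the mean) sampled on a
    circle of the given radius around a corner candidate (cx, cy)."""
    angles = np.linspace(0.0, 2.0 * np.pi, n_samples, endpoint=False)
    xs = np.clip(np.round(cx + radius * np.cos(angles)).astype(int),
                 0, img.shape[1] - 1)
    ys = np.clip(np.round(cy + radius * np.sin(angles)).astype(int),
                 0, img.shape[0] - 1)
    vals = img[ys, xs].astype(float)
    signs = np.sign(vals - vals.mean())
    return int(np.sum(signs != np.roll(signs, 1)))  # transitions, incl. wrap
```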


Journal of Field Robotics | 2017

Robot System of DRC-HUBO+ and Control Strategy of Team KAIST in DARPA Robotics Challenge Finals

Jeongsoo Lim; In-Ho Lee; Inwook Shim; Hyobin Jung; Hyun Min Joe; Hyoin Bae; Okkee Sim; Jaesung Oh; Taejin Jung; Seunghak Shin; Kyungdon Joo; Mingeuk Kim; Kangkyu Lee; Yunsu Bok; Dong-Geol Choi; Buyoun Cho; Sungwoo Kim; Jung-Woo Heo; Inhyeok Kim; Jungho Lee; In So Kweon; Jun-Ho Oh

This paper summarizes how Team KAIST prepared for the DARPA Robotics Challenge (DRC) Finals, especially in terms of the robot system and control strategy. To imitate the situation of the Fukushima nuclear disaster, the DRC comprised a total of eight tasks performed under degraded communication conditions. The competition demanded various robotic technologies, such as manipulation, mobility, telemetry, autonomy, and localization. Their systematic integration and overall system robustness were also key to completing the challenge. Accordingly, this paper presents the hardware and software system of the DRC-HUBO+, the humanoid robot used for the DRC, along with the control methods implemented to accomplish the tasks, such as inverse kinematics, compliance control, a walking algorithm, and a vision algorithm. The strategies and operations for each task are briefly explained together with the vision algorithms. The paper also summarizes the lessons learned from the DRC. Twenty-five international teams participated in the competition with their various robot platforms; we competed using the DRC-HUBO+ and won first place.
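
Among the control methods named, inverse kinematics is easy to illustrate generically. Below is a damped-least-squares IK update, a common textbook formulation rather than Team KAIST's controller; all names and the damping value are illustrative.

```python
import numpy as np

def dls_ik_step(J, err, damping=0.05):
    """One damped-least-squares IK update:
    dq = J^T (J J^T + lambda^2 I)^(-1) err,
    a standard way to invert a manipulator Jacobian that stays
    well-behaved near kinematic singularities."""
    JJt = J @ J.T
    lam2 = (damping ** 2) * np.eye(JJt.shape[0])
    return J.T @ np.linalg.solve(JJt + lam2, err)
```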


Intelligent Robots and Systems | 2014

Extrinsic calibration of non-overlapping camera-laser system using structured environment

Yunsu Bok; Dong-Geol Choi; Pascal Vasseur; In So Kweon

This paper presents simple and practical solutions to the extrinsic calibration of a camera and a 2D laser sensor without overlap. Previous methods utilized a plane, or the intersecting line of two planes, as a geometric constraint and required a sufficient common field of view; additional sensors were needed to calibrate non-overlapping systems. We present two methods for solving the problem: one utilizes a plane; the other utilizes the intersecting line of two planes. For each method, an initial solution for the relative pose of the non-overlapping camera and laser sensor is computed by adopting a reasonable assumption about the geometric structure, and it is then refined via non-linear optimization, which remains effective even if the assumption is not perfectly satisfied. Both simulation results and experiments with real data show that the proposed methods provide reliable results compared to ground truth, and similar or better results than those of a conventional method.
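
For the plane-based variant, the refinement can be posed as minimizing point-to-plane residuals. The sketch below is a standard formulation under the assumption that the camera observes the plane (n, d) while the laser measures points lying on it; it is not the paper's exact parameterization, and all names are illustrative.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def point_to_plane_residuals(params, laser_pts, plane_n, plane_d):
    """Residuals for plane-based extrinsic calibration: laser points,
    mapped into the camera frame by the unknown extrinsics, should lie
    on the plane n.x + d = 0 observed by the camera.
    params = [rx, ry, rz, tx, ty, tz] (rotation vector + translation)."""
    R = Rotation.from_rotvec(params[:3]).as_matrix()
    t = params[3:]
    pts_cam = laser_pts @ R.T + t
    return pts_cam @ plane_n + plane_d

# Refinement from an initial guess x0 (the assumption-based linear solution):
# sol = least_squares(point_to_plane_residuals, x0,
#                     args=(laser_pts, plane_n, plane_d))
```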


International Conference on Robotics and Automation | 2011

Complementation of cameras and lasers for accurate 6D SLAM: From correspondences to bundle adjustment

Yekeun Jeong; Yunsu Bok; Jun-Sik Kim; In So Kweon

In this paper, we present an accurate and robust 6D SLAM method that uses multiple 2D sensors, i.e., perspective cameras and planar laser scanners. We investigated the strengths and weaknesses of these two sensors for 6D SLAM through specifically designed experiments and found that they can complement each other. In order to take full advantage of each sensor, we fuse the correspondences of the two sensors rather than individually estimated motions. Correspondences obtained by the two sensors have different characteristics but can be expressed in a common 2D-3D relational form. We use the correspondences in a single structure-from-motion framework. In the initial motion estimation step, we propose a RANSAC-based method that generates and tests multiple motion hypotheses using multiple pools of correspondences, aiming to avoid the potential bias of each sensor's data. In the later motion refinement step, we introduce a variant of bundle adjustment that considers the different types of constraints from the two sensors. The performance of the proposed method is demonstrated both quantitatively, by experiments on closed-loop sequences, and qualitatively, by large-scale experiments with a DGPS trajectory. The proposed method successfully closes a loop of 320 meters over twenty thousand frames by incremental processing alone.
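
The initial motion estimation step operates on 2D-3D correspondences with RANSAC; a single-pool approximation using OpenCV's PnP solver is sketched below. The paper draws hypotheses from multiple per-sensor correspondence pools, which this sketch omits, and the threshold values are assumptions.

```python
import numpy as np
import cv2

def estimate_motion(pts3d, pts2d, K):
    """RANSAC pose estimation from 2D-3D correspondences via OpenCV's PnP.
    pts3d: (N, 3) world points; pts2d: (N, 2) image points; K: 3x3 intrinsics.
    Returns (rvec, tvec, inliers) or None on failure."""
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        pts3d.astype(np.float32), pts2d.astype(np.float32),
        K.astype(np.float32), None,
        reprojectionError=2.0, iterationsCount=500)
    return (rvec, tvec, inliers) if ok else None
```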


International Conference on Image Processing | 2011

Two-phase approach for multi-view object extraction

Sungheum Kim; Yu-Wing Tai; Yunsu Bok; Hyeongwoo Kim; In So Kweon

In this paper, we propose an automatic method to extract a foreground object captured from multiple viewpoints. We assume the foreground object lies within the visual hull formed by the cameras' fields of view. By exploiting the multi-view geometric relationships and color measurements of the input images, we estimate the foreground segmentations as well as their fractional boundaries. To achieve efficient computation and high-quality mattes, we adopt a two-phase approach. The first phase provides a quick, rough binary segmentation of the foreground object using graph-cut; the second phase refines the segmentation boundaries using matting. The result is high-quality alpha mattes of the foreground object that are consistent across all viewpoints. We demonstrate the effectiveness of our method on challenging examples.
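
The first (graph-cut) phase can be approximated in a single view with OpenCV's GrabCut. The paper instead initializes from the multi-view visual hull and follows with a matting phase, so the sketch below only mirrors the rough-segmentation step; the rectangle initialization is an assumption.

```python
import numpy as np
import cv2

def rough_foreground_mask(img, rect):
    """Quick binary foreground segmentation with graph-cut (GrabCut),
    initialized from a bounding rectangle rect = (x, y, w, h)."""
    mask = np.zeros(img.shape[:2], np.uint8)
    bgd = np.zeros((1, 65), np.float64)   # background color model buffer
    fgd = np.zeros((1, 65), np.float64)   # foreground color model buffer
    cv2.grabCut(img, mask, rect, bgd, fgd, 5, cv2.GC_INIT_WITH_RECT)
    fg = (mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD)
    return fg.astype(np.uint8)
```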


Intelligent Robots and Systems | 2011

Capturing city-level scenes with a synchronized camera-laser fusion sensor

Yunsu Bok; Dong-Geol Choi; Yekeun Jeong; In So Kweon

In this paper, we present a sensor fusion system of cameras and 2D laser sensors for 3D reconstruction. The proposed system is designed to capture data on a fast-moving ground vehicle. The system consists of six cameras and one 2D laser sensor. In order to capture data at high speed, we synchronize all sensors by detecting the laser ray at a specific angle and generating a trigger signal for the cameras. Reconstruction of 3D structures is done by estimating frame-by-frame motion and accumulating vertical laser scans. The difference between the proposed system and previous works using two 2D laser sensors is that we do not assume 2D motion. The motion of the system in 3D space (including absolute scale) is estimated accurately by data-level fusion of images and range data. The problem of error accumulation is solved by loop closing, not by GPS. Moving objects are detected by utilizing the depth information provided by the laser sensor. The experimental results show that the estimated path is successfully overlaid on satellite images.
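
The synchronization idea (firing the camera trigger when the laser ray passes a chosen reference angle) reduces to a simple check. The sketch below is purely illustrative, with a hypothetical angle tolerance, and stands in for what is hardware trigger logic in the actual system.

```python
def angle_triggers(scan_angles, trigger_angle, tol=0.25):
    """Return the scan indices at which to pulse the camera trigger:
    whenever the laser's scan angle passes within `tol` degrees of the
    chosen reference angle. All values are illustrative."""
    return [i for i, a in enumerate(scan_angles)
            if abs(a - trigger_angle) < tol]
```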
