Dong-Geol Choi
KAIST
Publications
Featured research published by Dong-Geol Choi.
IEEE Transactions on Intelligent Transportation Systems | 2015
Inwook Shim; Jongwon Choi; Seunghak Shin; Tae-Hyun Oh; Unghui Lee; Byung-Tae Ahn; Dong-Geol Choi; David Hyunchul Shim; In So Kweon
Recently, there have been significant advances in self-driving cars, which will play key roles in future intelligent transportation systems. In order for these cars to be successfully deployed on real roads, they must be able to autonomously drive along collision-free paths while obeying traffic laws. In contrast to many existing approaches that use prebuilt maps of roads and traffic signals, we propose algorithms and systems that use a Unified Map, built from various onboard sensors, to detect obstacles, other cars, traffic signs, and pedestrians. The proposed map contains not only information on nearby real obstacles but also traffic signs and pedestrians as virtual obstacles. Using this map, the path planner can efficiently find collision-free paths while obeying traffic laws. The proposed algorithms were implemented on a commercial vehicle and successfully validated in various environments, including the 2012 Hyundai Autonomous Ground Vehicle Competition.
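As an illustration of the idea, the Unified Map can be sketched as an occupancy grid in which sensed obstacles and traffic-rule entities (signs, pedestrians) are both marked as blocked cells; the function names and grid encoding below are hypothetical, not from the paper.

```python
import numpy as np

def build_unified_map(shape, real_obstacles, virtual_obstacles):
    """Build a single grid marking real obstacles (from sensors) and
    virtual obstacles (e.g. a stop line for a red light, a pedestrian
    zone). Encoding: 0 = free, 1 = real obstacle, 2 = virtual obstacle."""
    grid = np.zeros(shape, dtype=np.uint8)
    for r, c in real_obstacles:
        grid[r, c] = 1
    for r, c in virtual_obstacles:
        if grid[r, c] == 0:
            grid[r, c] = 2
    return grid

def path_is_free(grid, path):
    """The planner treats real and virtual obstacles alike: blocked."""
    return all(grid[r, c] == 0 for r, c in path)
```

Because virtual obstacles live in the same grid as real ones, any off-the-shelf grid planner checks traffic rules and collisions in a single pass.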
Journal of Field Robotics | 2017
Jeongsoo Lim; In-Ho Lee; Inwook Shim; Hyobin Jung; Hyun Min Joe; Hyoin Bae; Okkee Sim; Jaesung Oh; Taejin Jung; Seunghak Shin; Kyungdon Joo; Mingeuk Kim; Kangkyu Lee; Yunsu Bok; Dong-Geol Choi; Buyoun Cho; Sungwoo Kim; Jung-Woo Heo; Inhyeok Kim; Jungho Lee; In So Kweon; Jun-Ho Oh
This paper summarizes how Team KAIST prepared for the DARPA Robotics Challenge (DRC) Finals, especially in terms of the robot system and control strategy. To mimic the Fukushima nuclear disaster scenario, the DRC comprised a total of eight tasks performed under degraded communication conditions. The competition demanded various robotic technologies, such as manipulation, mobility, telemetry, autonomy, and localization. Their systematic integration and the overall system robustness were also important issues in completing the challenge. In this sense, this paper presents the hardware and software system of the DRC-HUBO+, a humanoid robot that was used for the DRC; it also presents control methods such as inverse kinematics, compliance control, a walking algorithm, and a vision algorithm, all of which were implemented to accomplish the tasks. The strategies and operations for each task are briefly explained together with the vision algorithms. Before concluding, the paper summarizes the lessons we learned from the DRC. Twenty-five international teams participated in the competition with various robot platforms; competing with the DRC-HUBO+, we won first place.
Intelligent Robots and Systems | 2014
Yunsu Bok; Dong-Geol Choi; Pascal Vasseur; In So Kweon
This paper presents simple and practical solutions for extrinsic calibration between a camera and a 2D laser sensor without overlap. Previous methods used a plane or the intersection line of two planes as a geometric constraint, which requires a sufficient common field of view; calibrating non-overlapping systems therefore required additional sensors. In this paper, we present two methods for solving the problem: one utilizes a plane; the other utilizes the intersection line of two planes. For each method, an initial solution for the relative pose of a non-overlapping camera and laser sensor is computed by adopting a reasonable assumption about the geometric structures, and is then refined via non-linear optimization, even if the assumption is not perfectly satisfied. Both simulation results and experiments using real data show that the proposed methods provide reliable results compared to the ground truth, and similar or better results than those provided by a conventional method.
Intelligent Robots and Systems | 2011
Yunsu Bok; Dong-Geol Choi; Yekeun Jeong; In So Kweon
In this paper, we present a sensor fusion system of cameras and 2D laser sensors for 3D reconstruction. The proposed system is designed to capture data on a fast-moving ground vehicle. The system consists of six cameras and one 2D laser sensor. In order to capture data at high speed, we synchronized all sensors by detecting the laser ray at a specific angle and generating a trigger signal for the cameras. Reconstruction of 3D structures is done by estimating frame-by-frame motion and accumulating vertical laser scans. The difference between the proposed system and previous works using two 2D laser sensors is that we do not assume 2D motion. The motion of the system in 3D space (including absolute scale) is estimated accurately by data-level fusion of images and range data. The problem of error accumulation is solved by loop closing, not by GPS. Moving objects are detected by utilizing the depth information provided by the laser sensor. The experimental results show that the estimated path is successfully overlaid on the satellite images.
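The accumulation step described above can be sketched as follows: each vertical laser scan is mapped into the world frame by the estimated per-frame pose and stacked into one point cloud. This is a minimal sketch; the function name and the Nx3 point layout are assumptions, not the authors' implementation.

```python
import numpy as np

def accumulate_scans(scans, poses):
    """Map each vertical 2D scan (given as Nx3 points in the sensor
    frame) into the world frame using the per-frame pose (R, t)
    estimated from image/range fusion, then stack into one cloud."""
    cloud = []
    for pts, (R, t) in zip(scans, poses):
        cloud.append(pts @ R.T + t)  # rigid transform of Nx3 points
    return np.vstack(cloud)
```

Because the poses come from full 3D motion estimation (with absolute scale), no 2D-motion assumption is needed when stacking the scans.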
Sensors | 2014
Yunsu Bok; Dong-Geol Choi; In So Kweon
This paper presents a sensor fusion system of cameras and a 2D laser sensor for large-scale 3D reconstruction. The proposed system is designed to capture data on a fast-moving ground vehicle. The system consists of six cameras and one 2D laser sensor, and they are synchronized by a hardware trigger. Reconstruction of 3D structures is done by estimating frame-by-frame motion and accumulating vertical laser scans, as in previous works. However, our approach does not assume near-2D motion, but estimates free motion (including absolute scale) in 3D space using both laser data and image features. In order to avoid the degeneration associated with typical three-point algorithms, we present a new algorithm that selects 3D points from two frames captured by multiple cameras. The problem of error accumulation is solved by loop closing, not by GPS. The experimental results show that the estimated path is successfully overlaid on the satellite images and that the reconstruction is highly accurate.
Robotics and Autonomous Systems | 2016
Yunsu Bok; Dong-Geol Choi; In So Kweon
This paper presents a practical means of extrinsic calibration between a camera and a 2D laser sensor, without overlap. In previous calibration methods, the sensors must be able to see a common geometric structure such as a plane or a line. In order to calibrate a non-overlapping camera–laser system, it is necessary to attach an extra sensor, such as a camera or a 3D laser sensor, whose relative poses from both the camera and the 2D laser sensor can be calculated. In this paper, we propose two means of calibrating a non-overlapping camera–laser system directly without an extra sensor. For each method, the initial solution of the relative pose between the camera and the 2D laser sensor is computed by adopting a reasonable assumption about geometric structures. This is then refined via non-linear optimization, even if the assumption is not met perfectly. Both simulation results and experiments using actual data show that the proposed methods provide reliable results compared to the ground truth, as well as similar or better results than those provided by conventional methods.
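For the plane-based variant, the non-linear refinement can be illustrated with a point-on-plane residual: laser points transformed by candidate extrinsics (R, t) should lie on the plane (n, d) estimated in the camera frame. This is a hedged sketch under that formulation; the parameterization and function name are illustrative, not the paper's code.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def residuals(params, laser_pts, planes):
    """Point-on-plane residuals for camera-laser extrinsics.
    params = [rx, ry, rz, tx, ty, tz]: rotation vector + translation.
    laser_pts: list of Nx3 arrays (laser frame); planes: list of (n, d)
    plane estimates in the camera frame, one per observation."""
    R = Rotation.from_rotvec(params[:3]).as_matrix()
    t = params[3:]
    res = []
    for pts, (n, d) in zip(laser_pts, planes):
        x = pts @ R.T + t      # laser points in the camera frame
        res.extend(x @ n - d)  # signed distance to the plane
    return np.asarray(res)
```

The refinement is then a call like `least_squares(residuals, x0, args=(laser_pts, planes))`, seeded with the initial solution from the geometric assumption.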
IEEE Transactions on Robotics | 2016
Dong-Geol Choi; Yunsu Bok; Jun-Sik Kim; In So Kweon
This paper describes a new methodology for estimating the relative pose between two 2-D lidars. Scanned points of 2-D lidars do not have enough feature information for correspondence matching. For this reason, additional image sensors or artificial landmarks at known locations have been used to find the relative pose. We propose a novel method of estimating the relative pose between 2-D lidars without any additional sensors or artificial landmarks. By scanning two orthogonal planes, we utilize the coplanarity of the scan points on each plane and the orthogonality of the plane normals. Even if we capture planes which are not exactly orthogonal, the method provides good results using nonlinear optimization. Experiments with both synthetic and real data show the validity of the proposed method. We also derive two degenerate cases: one related to plane poses, and the other caused by the relative pose. To the best of our knowledge, this study provides the first solution for the problem.
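The coplanarity constraint can be sketched as follows: points that lidar 2 observes on each plane, mapped by a candidate pose (R, t) into lidar 1's frame, must lie on lidar 1's estimate (n, d) of the same plane; scanning two roughly orthogonal planes is what makes the pose observable, up to the degenerate cases derived in the paper. The code is an illustrative sketch with the plane fitting assumed done beforehand, not the authors' implementation.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def calib_residuals(params, planes1, points2):
    """Coplanarity residuals for the lidar-2 -> lidar-1 pose.
    params = [rx, ry, rz, tx, ty, tz]; planes1: [(n, d), ...] plane
    estimates in lidar 1's frame; points2: matching Nx3 scan points
    from lidar 2, one array per plane."""
    R = Rotation.from_rotvec(params[:3]).as_matrix()
    t = params[3:]
    res = []
    for (n, d), pts in zip(planes1, points2):
        res.extend((pts @ R.T + t) @ n - d)  # signed plane distances
    return np.asarray(res)
```

In the full method the plane parameters are refined jointly with the pose, with the orthogonality of the two normals entering as an additional constraint; here they are held fixed for brevity.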
International Conference on Robotics and Automation | 2014
Dong-Geol Choi; Yunsu Bok; Jun-Sik Kim; In So Kweon
This paper describes a new methodology for estimating the relative pose between two 2D laser sensors. Two-dimensional laser scan points do not have enough feature information for motion tracking. For this reason, additional image sensors or artificial landmarks have been used to find the relative pose. We propose a method to estimate the relative pose of 2D laser sensors without any additional sensor or artificial landmark. By scanning two orthogonal planes, we utilize only the coplanarity of the scan points on each plane and the orthogonality of the plane normals. Experiments with both synthetic and real data show the validity of the proposed method. To the best of our knowledge, this work provides the first solution for the problem.
International Conference on Ubiquitous Robots and Ambient Intelligence | 2012
Inwook Shim; Dong-Geol Choi; Seunghak Shin; In So Kweon
In recent years, much progress has been made in outdoor obstacle detection. However, for fast-moving robotic platforms, high-speed obstacle detection remains a daunting challenge. This paper describes a laser-based system for fast obstacle detection. We first introduce how to configure laser range finders for an outdoor robotic platform using a plane ruler. For high-speed obstacle detection, we use the gradient of the scan points. We evaluate the processing time and accuracy of our system on a real driving track, including an off-road course.
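The gradient-of-points idea can be sketched as follows: a scan point is flagged as an obstacle when the slope (rise over run) to its neighbor along the scan exceeds a threshold. The threshold value and function name below are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

def detect_obstacles(points, slope_thresh=0.5):
    """Flag laser points as obstacles by the gradient between
    consecutive scan points. points: Nx3 array ordered along the scan.
    Returns indices of points whose slope to the previous point
    exceeds slope_thresh."""
    d = np.diff(points, axis=0)
    run = np.linalg.norm(d[:, :2], axis=1)  # horizontal distance
    rise = np.abs(d[:, 2])                  # height change
    slope = rise / np.maximum(run, 1e-9)    # avoid division by zero
    return np.flatnonzero(slope > slope_thresh) + 1
```

A single pass of differences per scan is what keeps the detection fast enough for a high-speed platform.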
international conference on robotics and automation | 2016
Inwook Shim; Seunghak Shin; Yunsu Bok; Kyungdon Joo; Dong-Geol Choi; Joon-Young Lee; Jaesik Park; Jun-Ho Oh; In So Kweon
This paper presents the vision system and depth processing algorithm of DRC-HUBO+, the winner of the 2015 DRC Finals. Our system is designed to reliably capture 3D information of a scene and objects and to be robust to challenging environmental conditions. We also propose a depth-map upsampling method that produces an outlier-free depth map by explicitly handling depth outliers. Our system is suitable for robotic applications in which a robot interacts with the real world, requiring accurate object detection and pose estimation. We evaluate our depth processing algorithm against state-of-the-art algorithms on several synthetic and real-world datasets.
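Explicit outlier handling of this kind can be sketched as a local median test: a depth pixel that deviates too far from its neighborhood median is invalidated before upsampling, so later interpolation skips it. This is a minimal illustration under that assumption, not the paper's actual upsampling algorithm; the window size and threshold are placeholders.

```python
import numpy as np

def remove_depth_outliers(depth, window=3, thresh=0.3):
    """Invalidate (set to NaN) depth pixels that deviate from the
    median of their local window by more than thresh, so that a
    subsequent upsampling/interpolation step ignores them."""
    h, w = depth.shape
    out = depth.astype(float).copy()
    r = window // 2
    for i in range(h):
        for j in range(w):
            patch = depth[max(i - r, 0):i + r + 1, max(j - r, 0):j + r + 1]
            if abs(depth[i, j] - np.median(patch)) > thresh:
                out[i, j] = np.nan
    return out
```

Masking outliers before (rather than after) upsampling prevents a single bad range reading from being smeared across many high-resolution pixels.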