
Publication


Featured research published by Seunghak Shin.


IEEE Transactions on Intelligent Transportation Systems | 2015

An Autonomous Driving System for Unknown Environments Using a Unified Map

Inwook Shim; Jongwon Choi; Seunghak Shin; Tae-Hyun Oh; Unghui Lee; Byung-Tae Ahn; Dong-Geol Choi; David Hyunchul Shim; In So Kweon

Recently, there have been significant advances in self-driving cars, which will play key roles in future intelligent transportation systems. In order for these cars to be successfully deployed on real roads, they must be able to autonomously drive along collision-free paths while obeying traffic laws. In contrast to many existing approaches that use prebuilt maps of roads and traffic signals, we propose algorithms and systems using a Unified Map built with various onboard sensors to detect obstacles, other cars, traffic signs, and pedestrians. The proposed map contains not only the information on real obstacles nearby but also traffic signs and pedestrians as virtual obstacles. Using this map, the path planner can efficiently find paths free from collisions while obeying traffic laws. The proposed algorithms were implemented on a commercial vehicle and successfully validated in various environments, including the 2012 Hyundai Autonomous Ground Vehicle Competition.
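The core idea of storing traffic rules as "virtual obstacles" alongside real ones can be sketched in a few lines. This is an illustrative toy, not the paper's implementation: grid size, cell values, and the stop-line example are assumptions.

```python
import numpy as np

GRID = 20  # 20 x 20 occupancy grid (illustrative size)

def build_unified_map(real_obstacles, virtual_obstacles):
    """Unified map: 0 = free, 1 = occupied, whether the blockage is a
    physical obstacle or a traffic rule encoded as a virtual obstacle."""
    grid = np.zeros((GRID, GRID), dtype=np.uint8)
    for r, c in real_obstacles + virtual_obstacles:
        grid[r, c] = 1
    return grid

def path_is_free(grid, path):
    """The planner needs only one collision test for both obstacle types."""
    return all(grid[r, c] == 0 for r, c in path)

unified = build_unified_map(
    real_obstacles=[(5, 5), (5, 6)],                    # e.g., a parked car
    virtual_obstacles=[(10, c) for c in range(GRID)],   # red-light stop line
)

straight = [(r, 3) for r in range(GRID)]   # drives through the stop line
stopped = [(r, 3) for r in range(10)]      # halts before it
```

Because a red light is just another occupied row, a standard collision check makes the planner "obey" the traffic law with no special-case logic.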


Journal of Field Robotics | 2017

Robot System of DRC‐HUBO+ and Control Strategy of Team KAIST in DARPA Robotics Challenge Finals

Jeongsoo Lim; In-Ho Lee; Inwook Shim; Hyobin Jung; Hyun Min Joe; Hyoin Bae; Okkee Sim; Jaesung Oh; Taejin Jung; Seunghak Shin; Kyungdon Joo; Mingeuk Kim; Kangkyu Lee; Yunsu Bok; Dong-Geol Choi; Buyoun Cho; Sungwoo Kim; Jung-Woo Heo; Inhyeok Kim; Jungho Lee; In So Kweon; Jun-Ho Oh

This paper summarizes how Team KAIST prepared for the DARPA Robotics Challenge (DRC) Finals, particularly in terms of the robot system and control strategy. To imitate the Fukushima nuclear disaster situation, the DRC comprised a total of eight tasks performed under degraded communication conditions. The competition demanded various robotic technologies such as manipulation, mobility, telemetry, autonomy, and localization. Their systematic integration and the overall robustness of the system were also important issues in completing the challenge. In this sense, this paper presents the hardware and software system of the DRC-HUBO+, the humanoid robot used for the DRC, as well as control methods such as inverse kinematics, compliance control, a walking algorithm, and a vision algorithm, all of which were implemented to accomplish the tasks. The strategies and operations for each task are briefly explained along with the vision algorithms. Before concluding, the paper summarizes what we learned from the DRC. Twenty-five international teams participated in the competition with their various robot platforms; we competed using the DRC-HUBO+ and won first place.
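Of the control methods listed, inverse kinematics is the easiest to illustrate in isolation. The sketch below is the textbook closed-form solution for a planar two-link arm, a building block of (not a substitute for) the full-body IK used on DRC-HUBO+; the link lengths are hypothetical.

```python
import math

L1, L2 = 0.5, 0.4  # link lengths in metres (assumed values)

def two_link_ik(x, y):
    """Return (shoulder, elbow) joint angles that place the end effector
    at (x, y), choosing the elbow-down solution via the law of cosines."""
    d2 = x * x + y * y
    cos_elbow = (d2 - L1 * L1 - L2 * L2) / (2 * L1 * L2)
    if not -1.0 <= cos_elbow <= 1.0:
        raise ValueError("target out of reach")
    theta2 = math.acos(cos_elbow)
    theta1 = math.atan2(y, x) - math.atan2(
        L2 * math.sin(theta2), L1 + L2 * math.cos(theta2))
    return theta1, theta2

def forward(theta1, theta2):
    """Forward kinematics, used to verify the IK solution."""
    x = L1 * math.cos(theta1) + L2 * math.cos(theta1 + theta2)
    y = L1 * math.sin(theta1) + L2 * math.sin(theta1 + theta2)
    return x, y
```

Running the target back through forward kinematics confirms the solution; a humanoid extends the same idea to many joints, typically solved numerically rather than in closed form.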


Intelligent Robots and Systems | 2016

EureCar turbo: A self-driving car that can handle adverse weather conditions

Unghui Lee; Ji-Won Jung; Seunghak Shin; Yongseop Jeong; Kibaek Park; David Hyunchul Shim; In So Kweon

Autonomous driving technology has made significant advances in recent years. For self-driving cars to become practical, they must operate safely and reliably even under adverse driving conditions. However, most current autonomous cars have only been shown to operate under benign weather conditions, i.e., on sunny days on dry roads. To enable autonomous cars to handle adverse conditions such as rain and wet roads, the algorithm must be able to detect roads within a tolerable margin of error using sensors such as cameras and laser scanners. In this paper, we propose a sensor fusion algorithm that operates under a variety of weather conditions, including rain. Our algorithm was validated when a strong shower occurred during the 2014 Hyundai Motor Company Autonomous Car Competition. We present the competition results collected on the same course on both sunny and rainy days, and based on the comparison, we propose future directions for improving autonomous driving capability under adverse environmental conditions.
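One common form of camera/laser fusion is a weighted vote per road cell, with the camera down-weighted when rain degrades it. This is a generic sketch of that idea, not the authors' algorithm; all weights and thresholds are illustrative.

```python
import numpy as np

def fuse_road_confidence(cam_conf, lidar_conf, raining):
    """Weighted fusion of per-cell road confidences from two sensors.
    In rain the camera is unreliable (glare, wet reflections), so its
    weight drops and the laser scanner dominates the decision."""
    w_cam = 0.2 if raining else 0.6  # assumed weights, for illustration
    w_lidar = 1.0 - w_cam
    return w_cam * cam_conf + w_lidar * lidar_conf

def is_road(cam_conf, lidar_conf, raining, threshold=0.5):
    fused = fuse_road_confidence(np.asarray(cam_conf, dtype=float),
                                 np.asarray(lidar_conf, dtype=float),
                                 raining)
    return fused >= threshold

# Wet-road scenario: camera glare causes a false negative on cell 0,
# while the laser scanner still sees drivable ground on both cells.
cam = np.array([0.2, 0.9])
lidar = np.array([0.8, 0.8])
```

With the rain weighting, the laser evidence rescues the glare-corrupted cell; with fair-weather weights the same camera reading would veto it.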


International Conference on Ubiquitous Robots and Ambient Intelligence | 2012

Multi-lidar system for fast obstacle detection

Inwook Shim; Dong-Geol Choi; Seunghak Shin; In So Kweon

In recent years, much progress has been made in outdoor obstacle detection. However, for fast-moving robotic platforms, high-speed obstacle detection is still a daunting challenge. This paper describes a laser-based system for fast obstacle detection. To this end, we introduce how to configure laser range finders using a plane ruler for an outdoor robotic platform. For high-speed obstacle detection, we use the gradient of points. We evaluate the processing time and accuracy of our system by testing it on a real drive track including an off-road course.
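The "gradient of points" idea is cheap because it needs only one pass over a scan line: consecutive laser returns whose height changes faster than a slope threshold are flagged. A minimal sketch, assuming a simplified 2-D scan and an illustrative 20° drivability limit (not the paper's exact formulation):

```python
import math

SLOPE_THRESHOLD = math.tan(math.radians(20))  # assumed max drivable slope

def detect_obstacles(points):
    """points: list of (forward_distance, height) laser returns, ordered
    by distance. Flags the index of any point reached by a slope steeper
    than the threshold, i.e., a step the platform cannot drive over."""
    flagged = []
    for i in range(1, len(points)):
        dx = points[i][0] - points[i - 1][0]
        dz = points[i][1] - points[i - 1][1]
        if dx > 0 and abs(dz) / dx > SLOPE_THRESHOLD:
            flagged.append(i)
    return flagged

# Flat ground, then a 0.55 m step over 0.5 m of travel (e.g., a rock).
scan = [(1.0, 0.00), (2.0, 0.02), (3.0, 0.05), (3.5, 0.60), (4.5, 0.62)]
```

Because each point is compared only with its neighbour, the cost is linear in the number of returns, which is what makes the method fast enough for a moving platform.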


International Conference on Robotics and Automation | 2016

Vision system and depth processing for DRC-HUBO+

Inwook Shim; Seunghak Shin; Yunsu Bok; Kyungdon Joo; Dong-Geol Choi; Joon-Young Lee; Jaesik Park; Jun-Ho Oh; In So Kweon

This paper presents a vision system and a depth processing algorithm for DRC-HUBO+, the winner of the DRC Finals 2015. Our system is designed to capture 3D information of a scene and objects reliably and to be robust to challenging environmental conditions. We also propose a depth-map upsampling method that produces an outlier-free depth map by explicitly handling depth outliers. Our system is suitable for robotic applications in which a robot interacts with the real world, requiring accurate object detection and pose estimation. We evaluate our depth processing algorithm against state-of-the-art algorithms on several synthetic and real-world datasets.
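"Explicitly handling depth outliers" can be illustrated with the simplest possible variant: compare each depth sample with its local median and replace it if it deviates too far, so later upsampling never propagates a flying point. This is a generic sketch under assumed parameters, not the paper's upsampling method.

```python
import numpy as np

def reject_depth_outliers(depth, max_dev=0.5):
    """Replace any depth sample deviating more than max_dev metres
    from the median of its 3x3 neighbourhood (threshold is assumed)."""
    clean = depth.copy()
    h, w = depth.shape
    for y in range(h):
        for x in range(w):
            y0, y1 = max(0, y - 1), min(h, y + 2)
            x0, x1 = max(0, x - 1), min(w, x + 2)
            med = np.median(depth[y0:y1, x0:x1])
            if abs(depth[y, x] - med) > max_dev:
                clean[y, x] = med  # outlier: fall back to the local median
    return clean

# A flat wall at 2 m with one "flying point" artifact in the middle.
noisy = np.full((5, 5), 2.0)
noisy[2, 2] = 9.0
```

The median is robust here because a single bad sample cannot shift it, whereas a mean filter would smear the outlier into its neighbours.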


International Conference on Ubiquitous Robots and Ambient Intelligence | 2015

Combinatorial approach for lane detection using image and LIDAR reflectance

Seunghak Shin; Inwook Shim; In So Kweon

Recently, lane detection algorithms have played a significant role in the field of vehicle technology. While many well-performing algorithms have been developed, they are difficult to use in complex urban environments. In this paper, we propose an efficient approach for detecting lane markings using image information and LIDAR reflectance. The proposed algorithm has three phases: ground extraction, lane detection, and combining lane information. It was implemented on a real vehicle and validated in various traffic environments, including the 2014 Hyundai Autonomous Vehicle Competition (AVC).
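The three phases map naturally onto three small functions: mask out non-ground cells, threshold lane paint independently in the camera image and in LIDAR reflectance (retro-reflective paint is bright in both), then merge the evidence. A simplified 1-D sketch with illustrative thresholds, not the paper's algorithm:

```python
import numpy as np

def extract_ground(height_map, max_height=0.2):
    """Phase 1: only cells near road height can carry lane markings."""
    return height_map < max_height

def detect_lane(intensity, ground_mask, threshold=0.5):
    """Phase 2: threshold bright paint, per sensor, within the ground mask."""
    return (intensity > threshold) & ground_mask

def combine(lane_cam, lane_lidar):
    """Phase 3: either sensor's detection counts as lane evidence."""
    return lane_cam | lane_lidar

height = np.array([0.0, 0.0, 0.0, 1.5])  # last cell is a guard rail
cam_i = np.array([0.9, 0.1, 0.8, 0.9])   # image brightness
lidar_i = np.array([0.2, 0.1, 0.9, 0.9]) # LIDAR reflectance

ground = extract_ground(height)
lanes = combine(detect_lane(cam_i, ground), detect_lane(lidar_i, ground))
```

Note how the guard-rail cell is bright in both sensors yet rejected, because ground extraction runs first; that ordering is what lets the combination stay a simple union.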


International Conference on Robotics and Automation | 2012

Efficient Data-Driven MCMC sampling for vision-based 6D SLAM

Jihong Min; Jungho Kim; Seunghak Shin; In So Kweon

In this paper, we propose a Markov Chain Monte Carlo (MCMC) sampling method with a data-driven proposal distribution for six-degree-of-freedom (6-DoF) SLAM. Recently, visual odometry priors have been widely used as the process model in the SLAM formulation to improve SLAM performance. However, modeling the uncertainties of incremental motions estimated by visual odometry is especially difficult under challenging conditions such as erratic motion. A particle-based representation can capture the uncertainty of the camera motion under erratic motion better than a constant-velocity or Gaussian noise model, but how the proposal distribution is represented and how the particles are sampled is extremely important, since only a limited number of particles can be maintained in the high-dimensional state space. Hence, we propose an effective sampling approach that exploits MCMC sampling and a data-driven proposal distribution to propagate the particles. We demonstrate the performance of the proposed approach for 6-DoF SLAM on both synthetic and real datasets and compare it with other sampling methods.
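The mechanics of MCMC with a data-driven proposal can be shown on a 1-D toy (not the 6-DoF formulation): a Metropolis-Hastings chain whose proposal is centred partly on a measurement-derived estimate (standing in for visual odometry) rather than purely on the current state. All distributions and constants below are illustrative assumptions.

```python
import math
import random

random.seed(0)
VO_ESTIMATE = 2.0  # hypothetical data-driven hint of the true motion

def log_target(x):
    """Unnormalised log posterior; here the true motion ~ N(2, 1)."""
    return -0.5 * (x - 2.0) ** 2

def mh_sample(n_samples, step=0.8, mix=0.5):
    """Metropolis-Hastings with an asymmetric, data-driven proposal:
    each jump is drawn around a blend of the current state and the
    odometry estimate, so particles concentrate where the data says."""
    def log_q(to, frm):  # proposal density (log, up to a constant)
        centre = mix * VO_ESTIMATE + (1 - mix) * frm
        return -0.5 * ((to - centre) / step) ** 2

    x, samples = 0.0, []
    for _ in range(n_samples):
        centre = mix * VO_ESTIMATE + (1 - mix) * x
        x_new = random.gauss(centre, step)
        # Asymmetric proposal => full MH correction, not plain Metropolis.
        log_a = (log_target(x_new) - log_target(x)
                 + log_q(x, x_new) - log_q(x_new, x))
        if math.log(random.random()) < log_a:
            x = x_new
        samples.append(x)
    return samples

samples = mh_sample(5000)
```

The MH correction term is what keeps the chain unbiased even though the proposal is pulled toward the odometry estimate; dropping it would make the samples follow the proposal, not the posterior.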


international conference on ubiquitous robots and ambient intelligence | 2014

Motion deblurring using coded exposure for a wheeled mobile robot

Kibaek Park; Seunghak Shin; Hae-Gon Jeon; Joon-Young Lee; In So Kweon

We present a motion deblurring framework for a wheeled mobile robot. Motion blur is an inevitable problem on a mobile robot; side-view cameras in particular suffer severely from motion blur when the robot moves forward. To handle motion blur on a robot, we develop a fast motion deblurring framework based on the concept of coded exposure. We estimate the blur kernel by simple template matching between adjacent frames with a motion prior, and exploit a blind deconvolution algorithm with a Gaussian prior for fast deblurring. Our system is implemented with an off-the-shelf machine vision camera and achieves high-quality deblurring results with little computation time. We demonstrate the effectiveness of our system in handling motion blur and validate that it is useful for many robot applications such as text recognition and visual structure from motion.
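Why does coded exposure help? A conventional exposure produces a box-shaped blur kernel whose frequency spectrum has near-zero entries, so deconvolution must divide by (almost) zero and information is lost; fluttering the shutter on and off keeps every frequency away from zero. The sketch below demonstrates this with an arbitrary example on/off pattern, not the code from the paper.

```python
import numpy as np

n = 32  # kernel support length (illustrative)

# Conventional exposure: shutter open for 8 consecutive slots -> box blur.
box = np.zeros(n)
box[:8] = 1.0

# Coded exposure: shutter fluttered open in a broken pattern (example only).
coded = np.zeros(n)
coded[[0, 1, 3, 6, 7]] = 1.0

def min_spectrum(kernel):
    """Smallest frequency magnitude of the blur kernel. A value near
    zero means some frequencies are destroyed and cannot be recovered
    by deconvolution; a bounded-away-from-zero minimum means the blur
    is invertible."""
    return np.abs(np.fft.fft(kernel)).min()
```

Comparing the two minima shows the box kernel annihilates certain frequencies while the coded kernel preserves all of them, which is exactly what makes the subsequent (Gaussian-prior) deconvolution fast and stable.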


IEEE Signal Processing Letters | 2017

Geometry Guided Three-Dimensional Propagation for Depth From Small Motion

Seunghak Shin; Sunghoon Im; Inwook Shim; Hae-Gon Jeon; In So Kweon

In this letter, we present an accurate Depth from Small Motion approach, which reconstructs three-dimensional (3-D) depth from image sequences with extremely narrow baselines. We start with estimating sparse 3-D points and camera poses via the structure from motion method. For dense depth reconstruction, we propose a novel depth propagation using a geometric guidance term that considers not only the geometric constraint from the surface normal, but also color consistency. In addition, we propose an accurate surface normal estimation method with a multiple range search so that the normal vector can guide the direction of the depth propagation precisely. The major benefit of our depth propagation method is that it obtains detailed structures of a scene without fronto-parallel bias. We validate our method using various indoor and outdoor datasets, and both qualitative and quantitative experimental results show that our new algorithm consistently generates better 3-D depth information than the results of existing state-of-the-art methods.
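The difference between fronto-parallel copying and geometry-guided propagation fits in a 1-D toy: when filling a missing depth, extrapolate along the local surface slope (implied by the estimated normal) instead of copying the neighbour's depth, and let colour consistency gate the update. All values below are illustrative, and the per-pixel slope is assumed known rather than estimated as in the letter.

```python
import numpy as np

def propagate(depth, slope, color, color_tol=0.1):
    """Fill NaN depths left-to-right. `slope` is the depth change per
    pixel implied by the surface normal; the colour check stops the
    propagation from leaking across object boundaries."""
    out = depth.copy()
    for i in range(1, len(out)):
        if np.isnan(out[i]) and abs(color[i] - color[i - 1]) < color_tol:
            out[i] = out[i - 1] + slope  # extrapolate along the surface,
                                         # not a flat (fronto-parallel) copy
    return out

# A slanted wall whose true depth rises 0.1 m per pixel; only the first
# pixel is measured. The last pixel belongs to a different object.
color = np.array([0.5, 0.5, 0.5, 0.5, 0.9])
depth = np.array([1.0, np.nan, np.nan, np.nan, np.nan])
filled = propagate(depth, slope=0.1, color=color)
```

A fronto-parallel fill would give every pixel depth 1.0, flattening the slanted wall; the guided version recovers the slope, while the colour gate leaves the foreign pixel untouched.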


Intelligent Robots and Systems | 2016

Object proposal using 3D point cloud for DRC-HUBO+

Seunghak Shin; Inwook Shim; Jiyung Jung; Yunsu Bok; Jun-Ho Oh; In So Kweon

We present an object proposal method that utilizes 3D data obtained from a depth sensor as well as the color information of images. Our method is designed to improve the performance of object detection for a mobile robot equipped with a camera and a laser scanner. Compared to traditional object proposal methods using only 2D images, the proposed method produces far fewer candidate windows for object detection: fewer than 100 proposal windows per image achieve high recall on a public dataset. Our method presents object proposals in 3D space as well as in the 2D image, so it can further support subsequent tasks for mobile robots, such as 3D localization and pose estimation of the target object after successful detection. We validate our method on a real-world object detection dataset for outdoor mobile robots captured during the DRC Finals 2015 and on the public dataset for comparison with previous methods.
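The reason 3D data yields so few windows is that real objects form spatially compact point clusters, so one cluster can become one proposal instead of thousands of sliding windows. A simplified sketch (greedy single-link clustering and assumed pinhole intrinsics, not the paper's method):

```python
import numpy as np

def cluster_points(points, max_gap=0.5):
    """Greedy Euclidean clustering: a point joins the first cluster
    containing a point within max_gap metres, else starts a new one."""
    clusters = []
    for p in points:
        for c in clusters:
            if min(np.linalg.norm(p - q) for q in c) < max_gap:
                c.append(p)
                break
        else:
            clusters.append([p])
    return clusters

def project_to_window(cluster, focal=500.0, cx=320.0, cy=240.0):
    """Pinhole-project a cluster's extent into a 2D bounding box
    (intrinsics are hypothetical). Each cluster -> one proposal."""
    us = [focal * p[0] / p[2] + cx for p in cluster]
    vs = [focal * p[1] / p[2] + cy for p in cluster]
    return (min(us), min(vs), max(us), max(vs))

# Two compact objects 3 m apart, both 4 m in front of the camera.
pts = np.array([[0.0, 0.0, 4.0], [0.1, 0.0, 4.0],
                [3.0, 0.0, 4.0], [3.1, 0.1, 4.0]])
clusters = cluster_points(pts)
windows = [project_to_window(c) for c in clusters]
```

The clusters also remain available as 3D regions, which is what allows the follow-up localization and pose estimation mentioned above to reuse the same proposals.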
