Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Kiho Kwak is active.

Publication


Featured research published by Kiho Kwak.


Intelligent Robots and Systems | 2011

Extrinsic calibration of a single line scanning lidar and a camera

Kiho Kwak; Daniel Huber; Hernán Badino; Takeo Kanade

Lidar and visual imagery have been broadly utilized in computer vision and mobile robotics applications because these sensors provide complementary information. However, in order to convert data between the sensors' local coordinate systems, we must estimate the rigid body transformation between the sensors. In this paper, we propose a robust-weighted extrinsic calibration algorithm that is easy to implement and has small calibration error. The extrinsic calibration parameters are estimated by minimizing the distance between corresponding features projected onto the image plane. The features are edge and centerline features on a v-shaped calibration target. The proposed algorithm improves calibration accuracy in two ways. First, we weight the distance between a point and a line feature according to the correspondence accuracy of the features. Second, we apply a penalizing function to exclude the influence of outliers in the calibration data sets. We conduct several experiments to evaluate the performance of our extrinsic calibration algorithm, including comparison of the RMS distance between the ground truth and the projected points, the effect of the number of lidar scans and images, and the effect of the pose and range of the calibration target. In these experiments, we show that our extrinsic calibration algorithm achieves calibration accuracy over 50% better than an existing state-of-the-art approach. To evaluate the generality of our algorithm, we also colorize point clouds with different pairs of lidars and cameras calibrated by our algorithm.
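
To make the objective concrete, here is a minimal sketch of the weighted, robustified point-to-line minimization the abstract describes, using SciPy. The parameterization (axis-angle plus translation), the per-feature weights w, and the soft_l1 loss standing in for the penalizing function are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def project(K, rvec, t, pts_lidar):
    """Project 3D lidar points into the image through intrinsics K."""
    pts_cam = Rotation.from_rotvec(rvec).apply(pts_lidar) + t
    uvw = pts_cam @ K.T
    return uvw[:, :2] / uvw[:, 2:3]

def residuals(params, K, pts_lidar, lines, w):
    """Weighted signed distances from projected points to their image lines.

    lines holds one normalized image line (a, b, c) per point, with
    a*u + b*v + c = 0 and a^2 + b^2 = 1, so the expression below is the
    point-to-line distance in pixels.
    """
    rvec, t = params[:3], params[3:]
    uv = project(K, rvec, t, pts_lidar)
    d = lines[:, 0] * uv[:, 0] + lines[:, 1] * uv[:, 1] + lines[:, 2]
    return np.sqrt(w) * d

def calibrate(K, pts_lidar, lines, w, x0=np.zeros(6)):
    # soft_l1 stands in for the paper's penalizing function: it caps the
    # influence of outlier correspondences on the fit.
    sol = least_squares(residuals, x0, args=(K, pts_lidar, lines, w),
                        loss="soft_l1", f_scale=2.0)
    return sol.x[:3], sol.x[3:]  # axis-angle rotation, translation
```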


International Conference on Robotics and Automation | 2010

Boundary detection based on supervised learning

Kiho Kwak; Daniel Huber; Jeongsook Chae; Takeo Kanade

Detecting the boundaries of objects is a key step in separating foreground objects from the background, which is useful for robotics and computer vision applications such as object detection, recognition, and tracking. We propose a new method for detecting object boundaries using planar laser scanners (LIDARs) and, optionally, co-registered imagery. We formulate boundary detection as a classification problem in which we estimate whether a boundary exists in the gap between two consecutive range measurements. Features derived from the LIDAR and imagery are used to train a support vector machine (SVM) classifier to label pairs of range measurements as boundary or non-boundary. We compare this approach to an existing boundary detection algorithm that uses dynamically adjusted thresholds. Experiments show that the new method performs better even when only LIDAR features are used, and improves further when image-based features are included. The new algorithm also performs better on difficult boundary cases, such as obliquely viewed objects.
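
As a toy illustration of the classification framing, the sketch below trains an SVM on two simple features of the gap between consecutive range measurements. The features (absolute jump and range ratio) are invented stand-ins; the paper's actual LIDAR and image features are richer.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

def gap_features(ranges):
    """Illustrative features for each pair of consecutive range measurements."""
    r0, r1 = ranges[:-1], ranges[1:]
    gap = np.abs(r1 - r0)                            # jump between neighbors
    ratio = np.minimum(r0, r1) / np.maximum(r0, r1)  # relative depth change
    return np.column_stack([gap, ratio])

def train_boundary_classifier(ranges_train, labels):
    """labels has one entry per gap (len(ranges_train) - 1):
    1 = boundary in the gap, 0 = no boundary."""
    X = gap_features(ranges_train)
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
    clf.fit(X, labels)
    return clf

# At run time, classify every gap in a new scan:
#   pred = clf.predict(gap_features(ranges_new))
```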


Sensors | 2016

Indirect Correspondence-Based Robust Extrinsic Calibration of LiDAR and Camera

Sungdae Sim; Juil Sock; Kiho Kwak

LiDAR and cameras have been broadly utilized in computer vision and autonomous vehicle applications. However, in order to convert data between the sensors' local coordinate systems, we must estimate the rigid body transformation between the sensors. In this paper, we propose a robust extrinsic calibration algorithm that can be implemented easily and has small calibration error. The extrinsic calibration parameters are estimated by minimizing the distance between corresponding features projected onto the image plane. The features are edge and centerline features on a v-shaped calibration target. The proposed algorithm improves the calibration accuracy in two ways. First, we weight the distance between a point and a line feature according to the correspondence accuracy of the features. Second, we apply a penalizing function to exclude the influence of outliers in the calibration datasets. Additionally, building on our robust calibration approach for a single LiDAR-camera pair, we introduce a joint calibration that estimates the extrinsic parameters of multiple sensors at once by minimizing a single objective function with loop-closing constraints. We conduct several experiments to evaluate the performance of our extrinsic calibration algorithm. The experimental results show that our calibration method performs better than the other approaches.
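
The loop-closing constraint can be pictured with transform composition: chaining the estimated extrinsics around a sensor loop should return the identity, and the deviation enters the joint objective as an extra residual. The 4x4 homogeneous-matrix notation below (T_a_b maps points from frame b to frame a) is an illustrative convention, not the paper's.

```python
import numpy as np
from scipy.spatial.transform import Rotation

def loop_residual(T_cam1_lidar, T_cam2_cam1, T_lidar_cam2):
    """How far the chained extrinsics are from closing the loop.

    Each argument is a 4x4 homogeneous transform; perfect calibration gives
    T_lidar_cam2 @ T_cam2_cam1 @ T_cam1_lidar == identity
    (lidar -> cam1 -> cam2 -> lidar).
    """
    T_loop = T_lidar_cam2 @ T_cam2_cam1 @ T_cam1_lidar
    rot_err = Rotation.from_matrix(T_loop[:3, :3]).magnitude()  # residual angle (rad)
    trans_err = np.linalg.norm(T_loop[:3, 3])                   # residual offset
    return np.array([rot_err, trans_err])

# In the joint calibration, a weighted version of this residual would be
# appended to the per-pair reprojection residuals and minimized over all
# extrinsic parameters at once.
```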


International Conference on Robotics and Automation | 2014

Hybrid vision-based SLAM coupled with moving object tracking

Jihong Min; Jungho Kim; Hyeongwoo Kim; Kiho Kwak; In So Kweon

In this paper we propose a hybrid vision-based SLAM and moving object tracking (vSLAMMOT) approach. This approach tightly combines two key methods: superpixel-based segmentation to detect moving objects and a Rao-Blackwellized particle filter to estimate a stereo-vision-based SLAM posterior. Most successful methods perform vision-based SLAM (vSLAM) and track moving objects independently. In contrast, we pose vSLAM and moving object tracking as a single correlated problem so that each improves the performance of the other. Our approach estimates the relative camera motion using the previous tracking result, and then recursively detects moving objects from the estimated camera motion. Moving superpixels are detected by a Markov random field (MRF) model that uses spatial and temporal information about the moving objects. We demonstrate the performance of the proposed approach for vSLAMMOT using both synthetic and real datasets, and compare it with other methods.
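
A minimal flavor of MRF labeling over superpixels, using iterated conditional modes (ICM) on a unary motion cost plus a spatial Potts smoothness term. The paper's energy also uses temporal terms and its own inference method, so treat this purely as a sketch.

```python
import numpy as np

def icm_moving_labels(unary, adjacency, smooth=0.5, iters=10):
    """Label superpixels moving (1) / static (0) with a simple MRF via ICM.

    unary:     (N, 2) array, unary[i, l] = cost of giving superpixel i label l
               (e.g. derived from disagreement with the estimated camera motion).
    adjacency: list of neighbor index lists, adjacency[i] = neighbors of i.
    smooth:    Potts penalty for neighboring superpixels taking different labels.
    """
    labels = unary.argmin(axis=1)  # initialize from the unary term alone
    for _ in range(iters):
        changed = False
        for i in range(len(labels)):
            costs = unary[i].astype(float)
            for j in adjacency[i]:
                costs += smooth * (np.array([0, 1]) != labels[j])
            new = int(costs.argmin())
            if new != labels[i]:
                labels[i], changed = new, True
        if not changed:  # converged: no label flipped this sweep
            break
    return labels
```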


Multimedia Tools and Applications | 2018

Convergent application for trace elimination of dynamic objects from accumulated lidar point clouds

Phuong Minh Chu; Seoungjae Cho; Sungdae Sim; Kiho Kwak; Kyungeun Cho

In this paper, a convergent multimedia application for filtering traces of dynamic objects from accumulated point cloud data is presented. First, a fast ground segmentation algorithm is designed by dividing each frame's data into small groups. Each group is a vertical line bounded by two points: the first is the orthogonal projection of the sensor's position onto the ground, and the second is a point on the outermost data circle. Two voxel maps are employed to store information from the previous and current frames. The position and occupancy status of each voxel are examined to detect the voxels containing stale data from moving objects. To increase detection accuracy, trace data are sought only in the nonground group. Naively verifying the intersection between a line segment and a voxel must be repeated numerous times, which is time-consuming. To increase the speed, a method is proposed that relies on the three-dimensional Bresenham's line algorithm. Experiments were conducted, and the results showed the effectiveness of the proposed filtering system. With both static and moving sensors, the system immediately eliminated trace data while preserving other static data, and operated three times faster than the sensor rate.
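
The speed-up hinges on visiting only the voxels that a line segment (from the sensor to each measured point) actually crosses. Below is a compact integer line walk in the spirit of the three-dimensional Bresenham's algorithm; the exact variant used in the paper may differ.

```python
def bresenham_3d(p0, p1):
    """Integer voxel coordinates visited along the segment p0 -> p1 (inclusive)."""
    p = list(p0)
    d = [abs(b - a) for a, b in zip(p0, p1)]
    s = [1 if b > a else -1 for a, b in zip(p0, p1)]
    axis = d.index(max(d))                    # driving axis: largest extent
    others = [i for i in range(3) if i != axis]
    err = [2 * d[i] - d[axis] for i in others]
    voxels = [tuple(p)]
    for _ in range(d[axis]):
        p[axis] += s[axis]                    # always step along the driving axis
        for k, i in enumerate(others):
            if err[k] > 0:                    # error term says: step this axis too
                p[i] += s[i]
                err[k] -= 2 * d[axis]
            err[k] += 2 * d[i]
        voxels.append(tuple(p))
    return voxels

# Example: voxels crossed between a sensor at (0, 0, 0) and a return at (4, 2, 0):
#   bresenham_3d((0, 0, 0), (4, 2, 0))
#   -> [(0, 0, 0), (1, 0, 0), (2, 1, 0), (3, 1, 0), (4, 2, 0)]
```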


Autonomous Robots | 2017

An incremental nonparametric Bayesian clustering-based traversable region detection method

Honggu Lee; Kiho Kwak; Sungho Jo

Navigation capability in complex and unknown outdoor environments is one of the major requirements for autonomous vehicles and robots that perform tasks such as military missions or planetary exploration. Robust traversability estimation in unknown environments allows the vehicle or robot to devise control and planning strategies that maximize its effectiveness. In this study, we present a self-supervised online learning architecture to estimate traversability in complex and unknown outdoor environments. The proposed approach builds a model by clustering appearance data using a newly proposed incremental nonparametric Bayesian clustering algorithm. The clusters are then classified as either traversable or non-traversable. Because our approach effectively groups unknown regions with similar properties while the vehicle is in motion, without human intervention, the vehicle can be deployed to new environments and automatically adapt to changing environmental conditions. We demonstrate the performance of the proposed clustering algorithm through extensive experiments using synthetic and real data, and evaluate the viability of the traversability estimation using real data sets collected in outdoor environments.
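
For a rough feel of incremental nonparametric clustering, here is a DP-means-style update that spawns a new cluster whenever a sample lies farther than a threshold lam from every existing cluster mean. DP-means is a generic stand-in here, not the authors' algorithm.

```python
import numpy as np

class IncrementalClusterer:
    """DP-means-style incremental clustering: new clusters appear as needed."""

    def __init__(self, lam):
        self.lam = lam      # distance threshold that triggers a new cluster
        self.means = []     # cluster means
        self.counts = []    # samples assigned to each cluster

    def add(self, x):
        """Assign sample x to the nearest cluster, or start a new one."""
        x = np.asarray(x, dtype=float)
        if self.means:
            d = [np.linalg.norm(x - m) for m in self.means]
            k = int(np.argmin(d))
            if d[k] <= self.lam:
                # running-mean update of the winning cluster
                self.counts[k] += 1
                self.means[k] += (x - self.means[k]) / self.counts[k]
                return k
        self.means.append(x.copy())
        self.counts.append(1)
        return len(self.means) - 1
```

Each resulting appearance cluster could then be labeled traversable or non-traversable from the vehicle's own driving experience, which is the self-supervised part of the pipeline.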


International Conference on Robotics and Automation | 2014

Online approximate model representation of unknown objects

Kiho Kwak; Jun-Sik Kim; Daniel Huber; Takeo Kanade

Object representation is useful for many computer vision tasks, such as object detection, recognition, and tracking. These tasks must handle situations where unknown objects appear, and must detect and track objects that are not in the trained database. In such cases, the system must learn, or otherwise derive, descriptions of new objects. In this paper, we investigate creating a representation of previously unknown objects that newly appear in the scene. The representation is a viewpoint-invariant, scale-normalized model that approximately describes an unknown object using multimodal sensors. These properties facilitate 3D tracking of the object using 2D-to-2D image matching. The representation has the benefits of both an implicit model (referred to as a view-based model) and an explicit model (referred to as a shape-based model). Experimental results demonstrate the viability of the proposed representation, which outperforms existing approaches for 3D pose estimation.
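
The 3D tracking step leans on 2D-to-2D image matching against stored model views. A generic version of that matching step, using OpenCV's ORB features and a ratio test (an assumption; the abstract does not name a specific descriptor), might look like this:

```python
import cv2

def match_to_model_view(model_view, frame, min_matches=10):
    """Match a stored model view image against the current frame (ORB + ratio test)."""
    orb = cv2.ORB_create(nfeatures=1000)
    kp1, des1 = orb.detectAndCompute(model_view, None)
    kp2, des2 = orb.detectAndCompute(frame, None)
    if des1 is None or des2 is None:
        return []
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
    pairs = matcher.knnMatch(des1, des2, k=2)
    # Lowe's ratio test keeps only distinctive correspondences.
    good = [p[0] for p in pairs
            if len(p) == 2 and p[0].distance < 0.75 * p[1].distance]
    if len(good) < min_matches:
        return []  # too few matches to trust a pose estimate
    # Matched pixel pairs, usable e.g. for estimating the object's relative pose.
    return [(kp1[m.queryIdx].pt, kp2[m.trainIdx].pt) for m in good]
```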


International Conference on Control, Automation and Systems | 2014

Probabilistic traversability map building for autonomous navigation

Juil Sock; Kiho Kwak; Jihong Min; Yong-Woon Park

For the successful navigation of an unmanned vehicle, it is important to determine the traversable area reliably. Traditional occupancy grid mapping calculates the presence of obstacles probabilistically; however, it cannot provide meaningful information on the traversability of a given area. In this paper, we address the problem of building a traversability map for an unmanned ground vehicle. Map building consists of two main parts: the fusion of a terrain recognition map and a terrain model map, and the sequential update of the traversability map. Maps are built from each sensor separately and then fused, and the resulting map is merged into the existing map as an update. The algorithm is implemented on an unmanned ground vehicle for field tests, and experiments are conducted under different conditions to evaluate its robustness.
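
One common way to realize the sequential update the abstract mentions is a per-cell log-odds filter, sketched below. Treating each fused observation as an independent per-cell traversability probability is a simplifying assumption, not the paper's exact fusion rule.

```python
import numpy as np

def logodds(p):
    return np.log(p / (1.0 - p))

class TraversabilityGrid:
    """Per-cell log-odds accumulation of traversability estimates (a sketch)."""

    def __init__(self, shape, p_prior=0.5):
        self.l0 = logodds(p_prior)
        self.L = np.full(shape, self.l0)

    def update(self, p_obs, mask):
        """Fuse an observed traversability-probability map into the cells in mask."""
        p = np.clip(p_obs, 1e-6, 1.0 - 1e-6)   # avoid infinite log-odds
        self.L[mask] += logodds(p)[mask] - self.l0

    def probability(self):
        """Current per-cell probability that the cell is traversable."""
        return 1.0 / (1.0 + np.exp(-self.L))
```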


International Conference on Control, Automation and Systems | 2014

Closed Loop-based extrinsic calibration of multi-modal sensors

Sungdae Sim; Kiho Kwak; Jun Kim; Sang Hyun Joo

As the demand for reliable and accurate sensor information increases, the integration of multiple sensors has gained attention. In particular, the fusion of a LIDAR (Light Detection and Ranging) sensor and a camera is one of the most broadly used sensor combinations because it provides complementary and redundant information. Many existing calibration approaches consider the problem of estimating the relative pose between a single sensor pair, such as one LIDAR and one camera. However, these approaches do not provide accurate solutions for multi-sensor configurations, such as one LIDAR and several cameras, or several LIDARs and cameras. In this paper, we propose a new extrinsic calibration algorithm that uses closed-loop constraints for multi-modal sensor configurations. The extrinsic calibration parameters are estimated by minimizing the distance between corresponding features projected onto the image plane. We conduct several experiments to evaluate the performance of our approach, including comparison of the RMS distance between the ground truth and the projected points, and comparison between independent sensor-pair calibration and our approach.
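
The evaluation described here (RMS distance between ground-truth pixels and projected points) reduces to a few lines; the pinhole projection and the lidar-to-camera naming below are generic assumptions.

```python
import numpy as np

def rms_reprojection_error(K, R, t, pts_lidar, uv_gt):
    """RMS pixel distance between projected lidar points and ground-truth pixels.

    K: 3x3 camera intrinsics; R, t: estimated extrinsics (lidar -> camera);
    pts_lidar: Nx3 points; uv_gt: Nx2 ground-truth pixel locations.
    """
    pts_cam = pts_lidar @ R.T + t          # transform into the camera frame
    uvw = pts_cam @ K.T                    # pinhole projection
    uv = uvw[:, :2] / uvw[:, 2:3]
    return float(np.sqrt(np.mean(np.sum((uv - uv_gt) ** 2, axis=1))))
```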


Symmetry | 2018

Multimedia System for Real-Time Photorealistic Nonground Modeling of 3D Dynamic Environment for Remote Control System

Phuong Minh Chu; Seoungjae Cho; Sungdae Sim; Kiho Kwak; Kyungeun Cho

Nowadays, unmanned ground vehicles (UGVs) are widely used for many applications. UGVs carry sensors including multi-channel laser sensors, two-dimensional (2D) cameras, and Global Positioning System receivers coupled with inertial measurement units (GPS-IMU). The multi-channel laser sensors and 2D cameras collect information about the environment surrounding the vehicle, while the GPS-IMU system determines the position, acceleration, and velocity of the vehicle. This paper proposes a fast and effective method for modeling nonground scenes using multiple types of sensor data captured through a remote-controlled robot. The multi-channel laser sensor returns a point cloud in each frame. We separate the point clouds into ground and nonground areas before modeling the three-dimensional (3D) scenes. The ground part is used to create a dynamic triangular mesh based on the height map and vehicle position. Modeling the nonground parts of dynamic environments, including moving objects, is more challenging than modeling the ground. In the first step, we apply our object segmentation algorithm to divide the nonground points into separate objects. Next, an object tracking algorithm detects dynamic objects. Subsequently, nonground objects other than large dynamic ones, such as cars, are separated into two groups: surface objects and non-surface objects. We employ colored particles to model the non-surface objects. To model the surface objects and the large dynamic objects, we use two dynamic projection panels to generate 3D meshes. In addition, we apply two processes to optimize the modeling result. First, we remove any trace of the moving objects, collect the points on the dynamic objects from previous frames, and merge them with the nonground points of the current frame. We also apply sliding-window and near-point projection techniques to fill the holes in the meshes. Finally, we apply texture mapping using 2D images captured by three cameras installed on the front of the robot. The experimental results show that our nonground modeling method can model photorealistic, real-time 3D scenes around a remote-controlled robot.
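
As a flavor of the ground/nonground split that precedes the modeling, here is a simple polar-grid height check: points within a small band above the lowest point of their cell count as ground. The cell sizes and threshold are invented for illustration; the paper's segmentation is more elaborate.

```python
import numpy as np

def split_ground(points, cell_deg=1.0, cell_m=0.5, height_thresh=0.2):
    """Split an Nx3 point cloud (sensor at origin, z up) into ground / nonground."""
    angles = np.degrees(np.arctan2(points[:, 1], points[:, 0]))
    ranges = np.hypot(points[:, 0], points[:, 1])
    # Bin each point into an (angle, range) cell of the polar grid.
    keys = np.stack([(angles // cell_deg).astype(int),
                     (ranges // cell_m).astype(int)], axis=1)
    ground = np.zeros(len(points), dtype=bool)
    for key in np.unique(keys, axis=0):
        idx = np.where((keys == key).all(axis=1))[0]
        z_min = points[idx, 2].min()
        # Points close to the lowest point of their cell are treated as ground.
        ground[idx] = points[idx, 2] < z_min + height_thresh
    return points[ground], points[~ground]
```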

Collaboration


Dive into Kiho Kwak's collaborations.

Top Co-Authors

Sungdae Sim, Agency for Defense Development
Jihong Min, Agency for Defense Development
Jun Kim, Agency for Defense Development
Daniel Huber, Carnegie Mellon University
Takeo Kanade, Carnegie Mellon University
Juil Sock, Agency for Defense Development
Jun-Sik Kim, Korea Institute of Science and Technology