
Publication


Featured research published by Sungdae Sim.


Sensors | 2014

Calibration between Color Camera and 3D LIDAR Instruments with a Polygonal Planar Board

Yoonsu Park; Seok Min Yun; Chee Sun Won; Kyungeun Cho; Kyhyun Um; Sungdae Sim

Calibration between a color camera and 3D Light Detection And Ranging (LIDAR) equipment is an essential process for data fusion. The goal of this paper is to improve the calibration accuracy between a camera and a 3D LIDAR. In particular, we are interested in calibrating a low-resolution 3D LIDAR with a relatively small number of vertical sensors. We achieve this goal with a new calibration-board methodology that exploits 2D-3D correspondences. The 3D corresponding points are estimated from the laser points scanned on a polygonal planar board with adjacent sides of known length. Since the side lengths are known, each vertex of the board can be estimated as the meeting point of its two projected sides. The vertices estimated from the range data and those detected in the color image serve as the corresponding points for the calibration. Experiments using a low-resolution LIDAR with 32 sensors show robust results.
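The vertex-estimation step can be sketched in a few lines of Python: fit a line to the scan points on each of two adjacent board sides, then intersect the fitted lines. This is an illustrative 2D sketch under invented point sets and board geometry, not the authors' implementation.

```python
import numpy as np

def fit_line(points):
    """Least-squares 2D line fit; returns (point_on_line, unit_direction)."""
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    # Principal direction of the centered points gives the line direction.
    _, _, vt = np.linalg.svd(pts - centroid)
    return centroid, vt[0]

def intersect_lines(p1, d1, p2, d2):
    """Intersection of two 2D lines given in point-direction form."""
    # Solve p1 + t*d1 = p2 + s*d2 for t.
    A = np.column_stack([d1, -d2])
    t, _ = np.linalg.solve(A, p2 - p1)
    return p1 + t * d1

# Scan points along two adjacent board sides that meet at (1, 1).
side_a = [(0.0, 1.0), (0.5, 1.0), (0.9, 1.0)]   # one edge of the board
side_b = [(1.0, 0.0), (1.0, 0.4), (1.0, 0.9)]   # the adjacent edge
pa, da = fit_line(side_a)
pb, db = fit_line(side_b)
vertex = intersect_lines(pa, da, pb, db)
print(vertex)  # ≈ [1. 1.]
```

The same idea extends to 3D by fitting the board plane first and working in its local 2D frame.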


Sensors | 2012

Intuitive Terrain Reconstruction Using Height Observation-Based Ground Segmentation and 3D Object Boundary Estimation

Wei Song; Kyungeun Cho; Kyhyun Um; Chee Sun Won; Sungdae Sim

Mobile robot operators must make rapid decisions based on information about the robot's surrounding environment, so terrain modeling and photorealistic visualization are required for the remote operation of mobile robots. We have produced a voxel map and textured mesh from the 2D and 3D datasets collected by a robot's array of sensors, but some upper parts of objects lie beyond the sensors' measurement range, so these parts are missing from the reconstruction, leaving an incomplete terrain model. To solve this problem, we present a new ground segmentation method to detect non-ground data in the reconstructed voxel map. Our method uses height histograms to estimate the ground height range, and a Gibbs-Markov random field model to refine the segmentation results. To reconstruct a complete terrain model of the 3D environment, we develop a 3D boundary estimation method for non-ground objects: we apply a boundary detection technique to the 2D image, then estimate and refine the actual height values of the non-ground vertices in the reconstructed textured mesh. Our proposed methods were tested in an outdoor environment in which trees and buildings were not completely sensed. The results show that ground segmentation runs faster than data sensing, as required for a real-time approach, and that the unsensed parts of objects are accurately recovered to restore their real-world appearance.
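The height-histogram idea can be illustrated with a minimal Python sketch: take the most populated height bin as the ground level and label points near it as ground. The bin size, tolerance, and test data are our own choices, and the paper's Gibbs-Markov random field refinement is omitted.

```python
import numpy as np

def segment_ground_by_height(points, bin_size=0.2, tolerance=0.3):
    """Label points as ground/non-ground from a height histogram.
    The densest height bin is taken as the ground level; points within
    `tolerance` of it are labeled ground."""
    z = np.asarray(points)[:, 2]
    bins = np.arange(z.min(), z.max() + bin_size, bin_size)
    hist, edges = np.histogram(z, bins=bins)
    peak = edges[np.argmax(hist)] + bin_size / 2  # center of densest bin
    return np.abs(z - peak) <= tolerance

# Synthetic scene: flat ground near z = 0 plus one tall object.
rng = np.random.default_rng(0)
ground = np.column_stack([rng.uniform(0, 10, 200),
                          rng.uniform(0, 10, 200),
                          rng.normal(0.0, 0.05, 200)])
obstacle = np.column_stack([rng.uniform(4, 5, 30),
                            rng.uniform(4, 5, 30),
                            rng.uniform(1.0, 2.0, 30)])
labels = segment_ground_by_height(np.vstack([ground, obstacle]))
print(labels[:200].mean(), labels[200:].mean())  # ground mostly True, object False
```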


Sensors | 2016

Indirect Correspondence-Based Robust Extrinsic Calibration of LiDAR and Camera

Sungdae Sim; Juil Sock; Kiho Kwak

LiDAR and cameras have been broadly utilized in computer vision and autonomous vehicle applications. However, in order to convert data between the sensors' local coordinate systems, we must estimate the rigid body transformation between them. In this paper, we propose a robust extrinsic calibration algorithm that can be implemented easily and achieves small calibration error. The extrinsic calibration parameters are estimated by minimizing the distance between corresponding features projected onto the image plane; the features are edge and centerline features on a v-shaped calibration target. The proposed algorithm improves calibration accuracy in two ways. First, we weight each point-to-line distance according to the correspondence accuracy of the features. Second, we apply a penalizing function to exclude the influence of outliers in the calibration datasets. Additionally, based on our robust calibration approach for a single LiDAR-camera pair, we introduce a joint calibration that estimates the extrinsic parameters of multiple sensors at once by minimizing one objective function with loop-closing constraints. We conduct several experiments to evaluate the performance of our extrinsic calibration algorithm. The experimental results show that our calibration method outperforms the other approaches.
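A weighted, outlier-damped objective of this kind can be sketched as follows in Python. The Huber-style cap stands in for the paper's penalizing function, whose exact form we do not reproduce; the weights and features are invented.

```python
import numpy as np

def point_line_distance(p, a, b):
    """Perpendicular distance from 2D point p to the line through a and b."""
    d = (b - a) / np.linalg.norm(b - a)
    v = p - a
    return abs(v[0] * d[1] - v[1] * d[0])  # 2D cross product magnitude

def weighted_cost(points, lines, weights, delta=1.0):
    """Weighted sum of point-to-line residuals with a Huber-style cap that
    damps the influence of outliers (illustrative stand-in only)."""
    cost = 0.0
    for p, (a, b), w in zip(points, lines, weights):
        r = point_line_distance(np.array(p), np.array(a), np.array(b))
        # Huber loss: quadratic near zero, linear for large residuals.
        cost += w * (0.5 * r**2 if r <= delta else delta * (r - 0.5 * delta))
    return cost

# One point 1 m off a horizontal line, weight 2: cost 2 * 0.5 * 1^2 = 1.0
print(weighted_cost([(0.0, 1.0)], [((0.0, 0.0), (1.0, 0.0))], [2.0]))  # 1.0
```

In the paper's setting this cost would be minimized over the extrinsic parameters, with the LiDAR features re-projected onto the image plane at each iteration.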


The Scientific World Journal | 2014

Sloped Terrain Segmentation for Autonomous Drive Using Sparse 3D Point Cloud

Seoungjae Cho; Jonghyun Kim; Warda Ikram; Kyungeun Cho; Young-Sik Jeong; Kyhyun Um; Sungdae Sim

A ubiquitous environment for road travel that uses wireless networks requires the minimization of data exchange between vehicles. An algorithm that can segment the ground in real time is necessary to obtain location data between vehicles simultaneously executing autonomous drive. This paper proposes a framework for segmenting the ground in real time using a sparse three-dimensional (3D) point cloud acquired from undulating terrain. A sparse 3D point cloud can be acquired by scanning the geography using light detection and ranging (LiDAR) sensors. For efficient ground segmentation, 3D point clouds are quantized in units of volume pixels (voxels) and overlapping data is eliminated. We reduce nonoverlapping voxels to two dimensions by implementing a lowermost heightmap. The ground area is determined on the basis of the number of voxels in each voxel group. We execute ground segmentation in real time by proposing an approach to minimize the comparison between neighboring voxels. Furthermore, we experimentally verify that ground segmentation can be executed at about 19.31 ms per frame.
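The voxel quantization and "lowermost heightmap" reduction can be sketched as follows. This is a pure-Python illustration; the 0.5 m voxel size and the sample points are assumptions, not values from the paper.

```python
import numpy as np

def lowermost_heightmap(points, voxel=0.5):
    """Quantize points into voxels and keep, for each (x, y) column, only
    the lowest occupied voxel height -- a 2D 'lowermost heightmap'."""
    idx = np.floor(np.asarray(points) / voxel).astype(int)
    heightmap = {}
    for ix, iy, iz in idx:
        key = (int(ix), int(iy))
        # Overlapping voxels in the same column collapse to the lowest one.
        if key not in heightmap or int(iz) < heightmap[key]:
            heightmap[key] = int(iz)
    return heightmap

pts = [(0.1, 0.1, 0.2),   # column (0, 0), low voxel
       (0.2, 0.3, 1.7),   # same column, higher voxel -> discarded
       (1.2, 0.1, 0.4)]   # column (2, 0)
hm = lowermost_heightmap(pts)
print(hm)  # {(0, 0): 0, (2, 0): 0}
```

Reducing each column to a single height is what makes the subsequent neighbor comparisons cheap enough for real-time segmentation.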


International Journal of Distributed Sensor Networks | 2014

Traversable Ground Surface Segmentation and Modeling for Real-Time Mobile Mapping

Wei Song; Seoungjae Cho; Kyungeun Cho; Kyhyun Um; Chee Sun Won; Sungdae Sim

A remote vehicle operator must quickly decide on the motion and path, so rapid and intuitive feedback about the real environment is vital for effective control. This paper presents a real-time traversable ground surface segmentation and intuitive representation system for remote operation of a mobile robot. Firstly, a terrain model using a voxel-based flag map is proposed for incrementally registering large-scale point clouds in real time. Subsequently, a ground segmentation method with a Gibbs-Markov random field (Gibbs-MRF) model is applied to detect ground data in the reconstructed terrain. Finally, we generate a textured mesh for ground surface representation by mapping the triangles in the terrain mesh onto the captured video images. To speed up the computation, we program a graphics processing unit (GPU) to process large-scale datasets in parallel. Our proposed methods were tested in an outdoor environment. The results show that ground data is segmented effectively and the ground surface is represented intuitively.
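The incremental voxel flag-map registration can be sketched in Python: each point hashes to a voxel key, and a point is kept only if its voxel was not seen in any earlier frame. The voxel size and frames below are invented, and the real system also stores per-voxel color/texture, which this sketch omits.

```python
import math

def register_frame(flag_map, points, voxel=0.5):
    """Insert one frame into the voxel flag map; return only points whose
    voxel was not already occupied (deduplicates overlap across frames)."""
    fresh = []
    for x, y, z in points:
        key = (math.floor(x / voxel), math.floor(y / voxel), math.floor(z / voxel))
        if key not in flag_map:
            flag_map.add(key)        # flag the voxel as occupied
            fresh.append((x, y, z))
    return fresh

flags = set()
frame1 = [(0.1, 0.1, 0.0), (2.3, 0.2, 0.1)]
frame2 = [(0.2, 0.2, 0.1),   # falls in an already-flagged voxel -> dropped
          (4.0, 1.0, 0.0)]   # new voxel -> kept
print(register_frame(flags, frame1))  # both kept
print(register_frame(flags, frame2))  # only the new-voxel point kept
```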


Multimedia Tools and Applications | 2018

Convergent application for trace elimination of dynamic objects from accumulated lidar point clouds

Phuong Minh Chu; Seoungjae Cho; Sungdae Sim; Kiho Kwak; Kyungeun Cho

In this paper, a convergent multimedia application for filtering traces of dynamic objects from accumulated point cloud data is presented. First, a fast ground segmentation algorithm is designed by dividing each frame data item into small groups. Each group is a vertical line limited by two points. The first point is orthogonally projected from a sensor’s position to the ground. The second one is a point in the outermost data circle. Two voxel maps are employed to save information on the previous and current frames. The position and occupancy status of each voxel are considered for detecting the voxels containing past data of moving objects. To increase detection accuracy, the trace data are sought in only the nonground group. Typically, verifying the intersection between the line segment and voxel is repeated numerous times, which is time-consuming. To increase the speed, a method is proposed that relies on the three-dimensional Bresenham’s line algorithm. Experiments were conducted, and the results showed the effectiveness of the proposed filtering system. With both static and moving sensors, the system immediately eliminated trace data and maintained other static data, while operating three times faster than the sensor rate.
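The 3D Bresenham traversal used to enumerate the voxels a sensor ray crosses can be written as below. This is the standard integer 3D Bresenham line; the voxel indices in the example are illustrative.

```python
def bresenham_3d(p0, p1):
    """Integer voxels visited on the line from p0 to p1 (3D Bresenham).
    The driving axis is the one with the largest absolute delta."""
    x, y, z = p0
    x1, y1, z1 = p1
    dx, dy, dz = abs(x1 - x), abs(y1 - y), abs(z1 - z)
    sx = 1 if x1 > x else -1
    sy = 1 if y1 > y else -1
    sz = 1 if z1 > z else -1
    voxels = [(x, y, z)]
    if dx >= dy and dx >= dz:        # x is the driving axis
        e_y, e_z = 2 * dy - dx, 2 * dz - dx
        while x != x1:
            x += sx
            if e_y >= 0: y += sy; e_y -= 2 * dx
            if e_z >= 0: z += sz; e_z -= 2 * dx
            e_y += 2 * dy; e_z += 2 * dz
            voxels.append((x, y, z))
    elif dy >= dx and dy >= dz:      # y is the driving axis
        e_x, e_z = 2 * dx - dy, 2 * dz - dy
        while y != y1:
            y += sy
            if e_x >= 0: x += sx; e_x -= 2 * dy
            if e_z >= 0: z += sz; e_z -= 2 * dy
            e_x += 2 * dx; e_z += 2 * dz
            voxels.append((x, y, z))
    else:                            # z is the driving axis
        e_x, e_y = 2 * dx - dz, 2 * dy - dz
        while z != z1:
            z += sz
            if e_x >= 0: x += sx; e_x -= 2 * dz
            if e_y >= 0: y += sy; e_y -= 2 * dz
            e_x += 2 * dx; e_y += 2 * dy
            voxels.append((x, y, z))
    return voxels

# Ray from the sensor voxel to a measured point's voxel.
print(bresenham_3d((0, 0, 0), (4, 2, 0)))
```

In the filtering system, any occupied voxel lying strictly between the sensor and the current return must hold stale data from a moving object, since the ray passed through it unobstructed.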


Archive | 2015

LIDAR Simulation Method for Low-Cost Repetitive Validation

Seongjo Lee; Dahyeon Kang; Seoungjae Cho; Sungdae Sim; Yong Woon Park; Kyhyun Um; Kyungeun Cho

Developments in light detection and ranging (LIDAR) have enabled its application in unmanned automotive technology, and various methods using LIDAR are now being proposed. However, it is more difficult to obtain a ground truth dataset to evaluate the performance of algorithms that use a quantity of three-dimensional (3D) points as compared to those that require only 2D images. This paper describes an approach to creating a ground truth dataset for verifying a variety of algorithms by recording the data on detected objects through simulation in virtual space. This approach is able to verify the performance of algorithms in a variety of environments with less cost than the use of actual LIDAR.


Symmetry | 2018

Multimedia System for Real-Time Photorealistic Nonground Modeling of 3D Dynamic Environment for Remote Control System

Phuong Minh Chu; Seoungjae Cho; Sungdae Sim; Kiho Kwak; Kyungeun Cho

Nowadays, unmanned ground vehicles (UGVs) are widely used for many applications. UGVs have sensors including multi-channel laser sensors, two-dimensional (2D) cameras, Global Positioning System receivers, and inertial measurement units (GPS–IMU). Multi-channel laser sensors and 2D cameras are installed to collect information regarding the environment surrounding the vehicle. Moreover, the GPS–IMU system is used to determine the position, acceleration, and velocity of the vehicle. This paper proposes a fast and effective method for modeling nonground scenes using multiple types of sensor data captured through a remote-controlled robot. The multi-channel laser sensor returns a point cloud in each frame. We separated the point clouds into ground and nonground areas before modeling the three-dimensional (3D) scenes. The ground part was used to create a dynamic triangular mesh based on the height map and vehicle position. The modeling of nonground parts in dynamic environments including moving objects is more challenging than modeling of ground parts. In the first step, we applied our object segmentation algorithm to divide nonground points into separate objects. Next, an object tracking algorithm was implemented to detect dynamic objects. Subsequently, nonground objects other than large dynamic ones, such as cars, were separated into two groups: surface objects and non-surface objects. We employed colored particles to model the non-surface objects. To model the surface and large dynamic objects, we used two dynamic projection panels to generate 3D meshes. In addition, we applied two processes to optimize the modeling result. First, we removed any trace of the moving objects, and collected the points on the dynamic objects in previous frames. Next, these points were merged with the nonground points in the current frame. We also applied slide window and near point projection techniques to fill the holes in the meshes. Finally, we applied texture mapping using 2D images captured by three cameras installed in the front of the robot. The results of the experiments prove that our nonground modeling method can be used to model photorealistic and real-time 3D scenes around a remote-controlled robot.


international conference on multisensor fusion and integration for intelligent systems | 2017

Real-time 3D scene modeling using dynamic billboard for remote robot control systems

Phuong Minh Chu; Seoungjae Cho; Hieu Trong Nguyen; Sungdae Sim; Kiho Kwak; Kyungeun Cho

In this paper, a method for modeling three-dimensional scenes from a Lidar point cloud and a billboard calibration approach for remote mobile robot control applications are presented as a combined two-step approach. First, by projecting a local three-dimensional point cloud onto a two-dimensional coordinate system, we obtain a list of colored points. Based on this list, we apply a proposed ground segmentation algorithm to separate ground and non-ground areas. For the ground part, a dynamic triangular mesh is created by means of a height map and the vehicle position. The non-ground part is divided into small groups, and a local voxel map is applied for modeling each group; as a result, all the inner surfaces are eliminated. Second, for billboard calibration, we implement three stages in each frame. In the first stage, an average ground point is estimated at the billboard location. In the second stage, the distortion angle is calculated. In the final stage, the billboard is updated for each frame so that it corresponds to the terrain gradient.


international conference on control automation and systems | 2016

Removing past data of dynamic objects using static Velodyne LiDAR sensor

Phuong Minh Chu; Seoungjae Cho; Sungdae Sim; Kiho Kwak; Yong Woon Park; Kyungeun Cho

This paper presents a method of removing past data of dynamic objects by employing the Velodyne LiDAR sensor to accumulate points. In the first step, a fixed voxel map is created with the sensor position as the center. In the next step, we employ Bresenham's line algorithm to create three-dimensional line segments from the sensor position to all points in the current frame. Each element in a line segment is a voxel, so each line segment is a list of voxels. Finally, past data of moving objects are removed by deleting all points obtained in the previous frames along each line segment.

Collaboration


Dive into Sungdae Sim's collaboration.

Top Co-Authors

Kiho Kwak
Agency for Defense Development

Yong Woon Park
Agency for Defense Development

Wei Song
North China University of Technology