Sung-In Choi
Kyungpook National University
Publication
Featured research published by Sung-In Choi.
machine vision applications | 2011
Soon-Yong Park; Sung-In Choi; Jun Kim; Jeong Sook Chae
3D registration is a computer vision technique for aligning multi-view range images with respect to a reference coordinate system. Aligning range images is an important but time-consuming task in complete 3D reconstruction. In this paper, we propose a real-time 3D registration technique that exploits the computing power of the graphics processing unit (GPU). A point-to-plane 3D registration technique is implemented entirely in CUDA, a modern GPU programming framework. Using a hand-held stereo-vision sensor, we apply the proposed technique to real-time 3D scanning of real objects. Registration of a pair of range images, whose resolution is 320 × 240, takes about 60 ms. 3D scanning results and a processing-time analysis are presented in the experiments. To compare the proposed GPU-based 3D registration with other CPU-based techniques, 3D models of a reference object are reconstructed, and the reconstruction results of three different techniques at eight different scanning speeds are evaluated.
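The point-to-plane update that the paper accelerates on the GPU can be sketched on the CPU as follows. This is a minimal NumPy illustration of one linearized ICP step under a small-angle assumption; the paper's implementation is in CUDA, and this is not that code.

```python
import numpy as np

def point_to_plane_step(src, dst, normals):
    """One linearized point-to-plane ICP update (small-angle approximation).

    src, dst: (N, 3) corresponding points; normals: (N, 3) unit normals at dst.
    Returns a 4x4 rigid transform moving src toward dst.
    """
    # Jacobian rows per correspondence: [cross(p, n), n] for the
    # parameter vector [rx, ry, rz, tx, ty, tz].
    A = np.hstack([np.cross(src, normals), normals])       # (N, 6)
    b = np.einsum("ij,ij->i", dst - src, normals)          # signed point-to-plane distances
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    rx, ry, rz, tx, ty, tz = x
    # Compose a rotation from the small Euler angles.
    Rx = np.array([[1, 0, 0], [0, np.cos(rx), -np.sin(rx)], [0, np.sin(rx), np.cos(rx)]])
    Ry = np.array([[np.cos(ry), 0, np.sin(ry)], [0, 1, 0], [-np.sin(ry), 0, np.cos(ry)]])
    Rz = np.array([[np.cos(rz), -np.sin(rz), 0], [np.sin(rz), np.cos(rz), 0], [0, 0, 1]])
    T = np.eye(4)
    T[:3, :3] = Rz @ Ry @ Rx
    T[:3, 3] = [tx, ty, tz]
    return T
```

In practice this step is iterated with a nearest-neighbor correspondence search in between; the GPU version parallelizes both the search and the per-point terms of the linear system.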
international conference on 3d imaging, modeling, processing, visualization & transmission | 2011
Lei Zhang; Sung-In Choi; Soon-Yong Park
In this paper, a novel variant of the ICP algorithm is proposed for the registration of partially overlapping range images. Biunique correspondence is introduced to enhance the performance of ICP by searching for multiple closest points. A new kind of outlier, called a No-Correspondence (NC) outlier, is defined as a point that is not assigned a biunique correspondence. To maintain efficiency, a coarse-to-fine approach is adopted. Experiments show that the proposed algorithm finds the correct rigid transformation even with large non-overlapping areas and poor initial alignment. The proposed algorithm is also applied to SLAM with the use of odometry information.
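The biunique-correspondence idea can be illustrated with a greedy matching sketch: each destination point may be claimed by at most one source point, and a source point whose candidate list is exhausted becomes an NC outlier. This is an illustrative toy version, not the authors' exact algorithm.

```python
import numpy as np

def biunique_correspondences(src, dst, k=3):
    """Greedy biunique matching sketch: each dst point is assigned to at most
    one src point; a src point whose k nearest dst candidates are all taken
    becomes a No-Correspondence (NC) outlier."""
    d = np.linalg.norm(src[:, None, :] - dst[None, :, :], axis=2)  # (Ns, Nd) distances
    candidates = np.argsort(d, axis=1)[:, :k]    # k closest dst points per src point
    taken = set()
    pairs, nc_outliers = [], []
    # Process src points in order of best-match distance so closer pairs
    # claim dst points first.
    for i in np.argsort(d.min(axis=1)):
        for j in candidates[i]:
            if int(j) not in taken:
                taken.add(int(j))
                pairs.append((int(i), int(j)))
                break
        else:
            nc_outliers.append(int(i))           # every candidate already taken
    return pairs, nc_outliers
```

Points in non-overlapping regions naturally fall out as NC outliers and are excluded from the transformation estimate.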
workshop on applications of computer vision | 2009
Soon-Yong Park; Sung-In Choi; Jaekyoung Moon; Joon Kim; Yong Woon Park
Localization of an unmanned ground vehicle (UGV) is a very important task for autonomous vehicle navigation. In this paper, we propose a computer vision technique to identify the location of an outdoor UGV. The proposed technique is based on 3D registration of 360-degree laser range data to a digital surface model (DSM). A long sequence of range frames is obtained from a rotating range sensor mounted on top of the vehicle. Two novel approaches are proposed for accurate 3D registration of the range data and the DSM. First, registration is done between range frames in a pairwise manner, followed by refinement against the DSM. Second, we divide the DSM into several layers and find correspondences near the current vehicle elevation. This reduces the number of outliers and enables fast localization. Experimental results show that the proposed approaches yield better 3D localization performance than conventional 3D registration techniques. An error analysis on four outdoor paths is presented with respect to ground truth.
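The layer-division step can be sketched as a simple elevation filter on the DSM points; the layer thickness here is an assumed parameter, not a value from the paper.

```python
import numpy as np

def dsm_layer(dsm_points, vehicle_z, band=5.0):
    """Keep only DSM points whose elevation lies within `band` metres of the
    current vehicle elevation, restricting correspondence search to the
    relevant layer (illustrative sketch)."""
    mask = np.abs(dsm_points[:, 2] - vehicle_z) <= band
    return dsm_points[mask]
```

Restricting matching to this band discards points far above or below the sensor, which is where most outlier correspondences would otherwise come from.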
canadian conference on computer and robot vision | 2012
Udaya Wijenayake; Sung-In Choi; Soon-Yong Park
Much research has been conducted to find an ideal structured-light coding system. Among existing approaches, spatial-neighborhood techniques that use a single pattern have become popular because they can capture dynamic scenes. However, decoding the pattern when a few pattern symbols are lost remains a problem. As a solution, we introduce a new strategy that encodes two patterns into a single pattern image. Our experiments show that the proposed decoding method can decode the pattern even when some symbols are lost.
Advanced Robotics | 2015
Sung-In Choi; Soon-Yong Park
Several pose estimation algorithms, such as n-point and perspective n-point (PnP), have been introduced over the last few decades to solve the relative and absolute pose estimation problems in robotics research. Since n-point algorithms cannot recover the real scale of robot motion, PnP algorithms are often used to find the absolute scale of motion. This paper introduces a new PnP algorithm that uses only two 3D–2D correspondences by assuming planar motion. Experimental results show that the proposed algorithm recovers the absolute motion at real scale with high accuracy and less computational time than previous algorithms.
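The planar-motion constraint reduces the unknowns to a heading angle and a 2-D translation. Below is a minimal sketch of recovering that planar rigid motion from two ground-plane point correspondences; it illustrates only the constraint, not the paper's full 3D–2D PnP formulation.

```python
import numpy as np

def planar_motion_2pt(p, q):
    """Recover the planar rigid motion (theta, t) mapping points p to q.

    p, q: (2, 2) arrays holding two 2-D correspondences in the ground plane.
    Two points over-determine the three planar unknowns (theta, tx, ty),
    which is why two correspondences suffice under planar motion.
    """
    # Heading change: rotation of the segment joining the two points.
    dp, dq = p[1] - p[0], q[1] - q[0]
    theta = np.arctan2(dq[1], dq[0]) - np.arctan2(dp[1], dp[0])
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s], [s, c]])
    t = q[0] - R @ p[0]                  # translation from the first point
    return theta, t
```

The same counting argument explains the claimed efficiency: a 2-point minimal solver needs far fewer RANSAC iterations than general 3-point or 4-point PnP solvers.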
workshop on applications of computer vision | 2009
Soon-Yong Park; Sung-In Choi; Jaekyoung Moon; Joon Kim; Yong Woon Park
3D registration is a computer vision technique for aligning multi-view range images with respect to a reference coordinate system. Aligning range images is an important but time-consuming step in complete 3D reconstruction. In this paper, we propose a real-time 3D registration technique that exploits the accelerated computing power of the GPU (graphics processing unit). In the proposed technique, all steps of a point-to-plane 3D registration pipeline are implemented using CUDA, a modern GPU programming framework. Using a hand-held stereo-vision sensor, we apply the proposed technique to scanning real objects. Registration of a pair of range images of size 320×240 takes about 60 milliseconds on average. 3D model reconstruction results and a processing-time analysis are presented in the experiments.
KIPS Transactions on Software and Data Engineering | 2014
Soon-Yong Park; Sung-In Choi
To generate a complete 3D model from the depth images of multiple RGB-D cameras, it is necessary to find the 3D transformations between the cameras. This paper proposes a convenient view calibration technique using a spherical object. Conventional view calibration methods use either planar checkerboards or 3D objects with coded patterns; in these methods, detecting and matching pattern features and codes takes significant time. We instead propose a convenient view calibration method that uses the 3D depth and 2D texture images of a spherical object simultaneously. First, while the spherical object is moved freely through the modeling space, depth and texture images of the object are acquired from all RGB-D cameras simultaneously. Then, the external parameters of each RGB-D camera are calibrated so that the coordinates of the sphere center coincide in the world coordinate system.
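Locating the sphere center in each camera's depth data is the key measurement. A standard algebraic least-squares sphere fit can serve as an illustrative building block here; it is not necessarily the authors' fitting method.

```python
import numpy as np

def fit_sphere(points):
    """Linear least-squares sphere fit to 3-D points.

    Uses the identity |x|^2 = 2 c.x + (r^2 - |c|^2), which is linear in the
    center c and the scalar d = r^2 - |c|^2. Returns (center, radius).
    """
    A = np.hstack([2 * points, np.ones((len(points), 1))])  # columns: 2x, 2y, 2z, 1
    b = np.sum(points ** 2, axis=1)
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    center = x[:3]
    radius = np.sqrt(x[3] + center @ center)
    return center, radius
```

With a sphere center recovered per camera per frame, the extrinsics follow by aligning each camera's center trajectory to a common world frame.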
advanced concepts for intelligent vision systems | 2013
Chang-Won Choi; Sung-In Choi; Soon-Yong Park
Road signs provide drivers with important information about the road and traffic for safe driving. These signs include not only common traffic signs but also information about unexpected obstacles and road construction. Accurate detection and identification of road signs is an active research topic in vehicle vision. In this paper, we propose a stereo vision technique to automatically detect and track road signs in a video sequence acquired from a stereo camera mounted on a vehicle. First, color information is used to detect initial road-sign candidates. Second, a Support Vector Machine (SVM) is used to select true signs from the candidates. Once a road sign is detected in a video frame, it is tracked in subsequent frames until it disappears. The 2-D position of the detected sign in the next frame is predicted from the motion of the vehicle, i.e., the 3-D Euclidean motion estimated using a stereo matching method. Finally, the predicted 2-D position of the sign is corrected by template matching with a scaled sign template in a region around the predicted position. Experimental results show that the proposed method detects and tracks road signs successfully. Error comparisons with two other detection and tracking methods are presented.
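The motion-based prediction step can be sketched as applying the estimated ego-motion to the sign's 3-D point and reprojecting it; the variable names and intrinsics below are illustrative.

```python
import numpy as np

def predict_sign_position(X, T, K):
    """Predict a sign's 2-D image position in the next frame.

    X: 3-D sign position in the current camera frame.
    T: 4x4 rigid ego-motion taking current-frame coordinates to the next frame.
    K: 3x3 camera intrinsics.
    """
    Xh = T @ np.append(X, 1.0)           # move the point by the ego-motion
    x = K @ Xh[:3]                       # project with the pinhole model
    return x[:2] / x[2]                  # perspective division
```

Template matching then only needs to search a small window around this prediction, which is what makes the tracking robust and fast.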
Journal of Institute of Control, Robotics and Systems | 2009
Soon-Yong Park; Sung-In Choi; Jae-Seok Jang; Soon-Ki Jung; Jun Kim; Jeong-Sook Chae
A computer vision technique for estimating the location of an unmanned ground vehicle is proposed. Identifying the location of the unmanned vehicle is a very important task for autonomous navigation, and conventional positioning sensors may fail to work properly in real situations due to internal and external interference. Given a DSM (Digital Surface Map), the location of the vehicle can be estimated by registering the DSM to multi-view range images obtained at the vehicle. Registration of the DSM and range images yields the 3D transformation from the coordinates of the range sensor to the reference coordinates of the DSM. To estimate the vehicle position, we first register a range image to the DSM coarsely and then refine the result. For coarse registration, we employ a fast random-sample matching method. After the initial position is estimated and refined, all subsequent range images are registered by applying a pairwise registration technique between range images. To reduce the accumulated error of pairwise registration, we periodically refine the registration between the range images and the DSM. A virtual environment is established to perform several experiments with a virtual vehicle: range images are created from the DSM by modeling a real 3D sensor, and the vehicle moves along three different paths while acquiring range images. Experimental results show that the registration error is under 1.3 m on average.
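The fast random-sample coarse matching can be illustrated with a RANSAC-style sketch over tentative correspondences; the parameters and the paired-correspondence assumption here are illustrative, not the paper's.

```python
import numpy as np

def rigid_from_pairs(src, dst):
    """Least-squares rigid transform (Kabsch), mapping src points to dst."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)                 # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
    R = Vt.T @ D @ U.T
    return R, cd - R @ cs

def ransac_coarse_register(src, dst, trials=200, thresh=0.5, rng=None):
    """Random-sample coarse registration sketch: hypothesize a rigid transform
    from 3 random correspondences, keep the hypothesis with the most inliers.
    Assumes src[i] tentatively corresponds to dst[i]."""
    rng = np.random.default_rng(rng)
    best, best_inliers = (np.eye(3), np.zeros(3)), -1
    for _ in range(trials):
        idx = rng.choice(len(src), 3, replace=False)
        R, t = rigid_from_pairs(src[idx], dst[idx])
        err = np.linalg.norm((src @ R.T + t) - dst, axis=1)
        inliers = int((err < thresh).sum())
        if inliers > best_inliers:
            best, best_inliers = (R, t), inliers
    return best
```

The winning coarse hypothesis is then refined with ICP-style registration, as in the paper's coarse-then-refine pipeline.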
Journal of Institute of Control, Robotics and Systems | 2016
Udaya Wijenayake; Sung-In Choi; Soon-Yong Park
In computer vision and robotics, bin picking is an important application area that requires object pose estimation. Different approaches, such as 2D feature tracking and 3D surface reconstruction, have been introduced to estimate object pose accurately. We propose a new approach that uses both 2D image features and 3D surface information to identify the target object and estimate its pose accurately. First, we introduce a label detection technique using Maximally Stable Extremal Regions (MSERs), whose results are used to identify the target objects individually. Then, the 2D image features in the detected label areas are used to generate 3D surface information. Finally, we calculate the 3D position and orientation of the target objects from the 3D surface information.
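For a roughly planar label, the final pose step can be sketched as: position is the centroid of the reconstructed surface points, and orientation is the normal of their best-fit plane. This is an illustrative simplification, not the authors' exact method.

```python
import numpy as np

def label_pose(points3d):
    """Estimate a planar label's 3-D position and orientation.

    points3d: (N, 3) reconstructed surface points on the label.
    Returns (centroid, unit plane normal). The normal is the direction of
    least variance, i.e. the last right-singular vector of the centered data.
    """
    centroid = points3d.mean(axis=0)
    _, _, Vt = np.linalg.svd(points3d - centroid)
    normal = Vt[-1]                      # smallest-variance direction
    if normal[2] < 0:                    # resolve the sign ambiguity (convention assumed)
        normal = -normal
    return centroid, normal
```

The centroid and normal together give the translation and (up to in-plane rotation) the orientation a gripper needs to approach the object.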