Publication


Featured research published by Changhyun Choi.


IEEE International Conference on Robotics and Automation (ICRA) | 2012

Voting-based pose estimation for robotic assembly using a 3D sensor

Changhyun Choi; Yuichi Taguchi; Oncel Tuzel; Ming-Yu Liu; Srikumar Ramalingam

We propose a voting-based pose estimation algorithm applicable to 3D sensors, which are rapidly replacing their 2D counterparts in many robotics, computer vision, and gaming applications. It was recently shown that a pair of oriented 3D points, which are points on the object surface with normals, in a voting framework enables fast and robust pose estimation. Although oriented surface points are discriminative for objects with sufficient curvature changes, they are not compact and discriminative enough for many industrial and real-world objects that are mostly planar. Just as edges play a key role in 2D registration, depth discontinuities are crucial in 3D. In this paper, we investigate and develop a family of pose estimation algorithms that better exploit this boundary information. In addition to oriented surface points, we use two other primitives: boundary points with directions and boundary line segments. Our experiments show that these carefully chosen primitives encode more information compactly and thereby provide higher accuracy for a wide class of industrial parts and enable faster computation. We demonstrate a practical robotic bin-picking system using the proposed algorithm and a 3D sensor.
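
As a rough illustration of the voting machinery this line of work builds on, here is a minimal Python sketch of the oriented point pair feature and the hash table used for voting. The paper's boundary-point and line-segment primitives swap in for the surface points but keep the same structure; the quantization step sizes below are illustrative, not the paper's values.

```python
import numpy as np

def point_pair_feature(p1, n1, p2, n2):
    """Four-dimensional feature for an oriented point pair:
    distance, two normal-to-segment angles, and the normal-normal angle."""
    d = p2 - p1
    dist = np.linalg.norm(d)
    d_unit = d / (dist + 1e-12)
    # Angles lie in [0, pi]; clip guards against numerical drift outside [-1, 1].
    a1 = np.arccos(np.clip(np.dot(n1, d_unit), -1.0, 1.0))
    a2 = np.arccos(np.clip(np.dot(n2, d_unit), -1.0, 1.0))
    a3 = np.arccos(np.clip(np.dot(n1, n2), -1.0, 1.0))
    return np.array([dist, a1, a2, a3])

def quantize(feature, dist_step=0.01, angle_step=np.deg2rad(12)):
    """Discretize the feature so similar pairs hash to the same key for voting."""
    steps = np.array([dist_step, angle_step, angle_step, angle_step])
    return tuple((feature // steps).astype(int))

# Offline: hash all model pair features. Online: scene pairs look up matching
# model pairs and vote in a (model point, rotation angle) accumulator, whose
# peaks yield pose hypotheses.
model_table = {}
def add_model_pair(i, j, points, normals):
    key = quantize(point_pair_feature(points[i], normals[i], points[j], normals[j]))
    model_table.setdefault(key, []).append((i, j))
```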


IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) | 2013

RGB-D object tracking: A particle filter approach on GPU

Changhyun Choi; Henrik I. Christensen

This paper presents a particle filtering approach for 6-DOF object pose tracking using an RGB-D camera. Our particle filter is massively parallelized on a modern GPU so that it exhibits real-time performance even with several thousand particles. Given an a priori 3D mesh model, the proposed approach renders the object model onto texture buffers in the GPU, and the rendered results are directly used by our parallelized likelihood evaluation. Both photometric (colors) and geometric (3D points and surface normals) features are employed to determine the likelihood of each particle with respect to a given RGB-D scene. Our approach is compared with a tracker in the Point Cloud Library (PCL) both quantitatively and qualitatively in synthetic and real RGB-D sequences, respectively.
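
A minimal sketch of the per-particle weighting and resampling step, assuming per-particle photometric and geometric errors have already been computed from the rendered model (in the paper this evaluation runs in parallel on the GPU; NumPy stands in here, and the sharpness gains are illustrative):

```python
import numpy as np

def weights_from_likelihood(photo_err, geom_err, lam_photo=10.0, lam_geom=10.0):
    """Turn per-particle photometric and geometric errors (e.g. color and
    point/normal residuals between the rendered model and the RGB-D scene)
    into normalized importance weights."""
    log_w = -(lam_photo * photo_err + lam_geom * geom_err)
    log_w -= log_w.max()                 # stabilize before exponentiation
    w = np.exp(log_w)
    return w / w.sum()

def systematic_resample(weights, rng=np.random.default_rng()):
    """Standard O(N) systematic resampling; returns indices of surviving particles."""
    n = len(weights)
    positions = (rng.random() + np.arange(n)) / n
    return np.searchsorted(np.cumsum(weights), positions)
```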


The International Journal of Robotics Research | 2012

Robust 3D visual tracking using particle filtering on the special Euclidean group: A combined approach of keypoint and edge features

Changhyun Choi; Henrik I. Christensen

We present a 3D model-based visual tracking approach using edge and keypoint features in a particle filtering framework. Recently, particle-filtering-based approaches have been proposed to integrate multiple pose hypotheses and have shown good performance, but most of the work has made an assumption that an initial pose is given. To ameliorate this limitation, we employ keypoint features for initialization of the filter. Given 2D–3D keypoint correspondences, we randomly choose a set of minimum correspondences to calculate a set of possible pose hypotheses. Based on the inlier ratio of correspondences, the set of poses is drawn to initialize particles. After the initialization, edge points are employed to estimate inter-frame motions. While we follow standard edge-based tracking, we perform a refinement process to improve the edge correspondences between sampled model edge points and image edge points. For better tracking performance, we employ first-order autoregressive state dynamics, which propagates particles more effectively than Gaussian random walk models. The proposed system re-initializes particles by itself when the tracked object goes out of the field of view or is occluded. The robustness and accuracy of our approach are demonstrated in comparative experiments on synthetic and real image sequences.
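
The first-order autoregressive dynamics can be sketched on SE(3) as follows. This is an illustrative NumPy/SciPy version: the mixing coefficient and noise scale are assumed parameters, not the paper's.

```python
import numpy as np
from scipy.linalg import expm, logm

def ar_propagate_se3(X_prev, X_curr, a=0.9, noise=0.01, rng=np.random.default_rng()):
    """First-order autoregressive propagation on SE(3): reuse a fraction of the
    last inter-frame twist instead of a zero-mean Gaussian random walk.
    X_prev, X_curr are 4x4 homogeneous transforms."""
    A = logm(np.linalg.inv(X_prev) @ X_curr).real   # last inter-frame motion in se(3)
    w = rng.normal(scale=noise, size=3)
    v = rng.normal(scale=noise, size=3)
    W = np.zeros((4, 4))
    W[:3, :3] = np.array([[0, -w[2], w[1]],
                          [w[2], 0, -w[0]],
                          [-w[1], w[0], 0]])        # skew-symmetric rotational noise
    W[:3, 3] = v                                    # translational noise
    return X_curr @ expm(a * A + W)
```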


IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) | 2012

3D pose estimation of daily objects using an RGB-D camera

Changhyun Choi; Henrik I. Christensen

In this paper, we present an object pose estimation algorithm exploiting both depth and color information. While many approaches assume that a target region is cleanly segmented from the background, our approach does not rely on that assumption, and thus it can estimate the pose of a target object in heavy clutter. Recently, an oriented point pair feature was introduced as a low-dimensional description of object surfaces. The feature has been employed in a voting scheme to find a set of possible 3D rigid transformations between object model and test scene features. While several approaches using the pair features require an accurate 3D CAD model as training data, our approach relies only on several scanned views of a target object, and hence it is straightforward to learn new objects. In addition, we argue that exploiting color information significantly enhances the performance of the voting process in terms of both time and accuracy. To exploit the color information, we define a color point pair feature, which is employed in a voting scheme for more effective pose estimation. We show extensive quantitative results of comparative experiments between our approach and a state-of-the-art method.
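
A hedged sketch of how a color point pair feature could extend the geometric one, reusing the point_pair_feature helper from the ICRA 2012 sketch above; the HSV representation and bin sizes are assumptions for illustration, not the paper's parameters:

```python
import numpy as np

def color_point_pair_feature(p1, n1, c1, p2, n2, c2):
    """Augment the geometric point pair feature with the two points' colors so
    that voting also requires color agreement. c1, c2 are HSV colors in [0,1]^3."""
    geo = point_pair_feature(p1, n1, p2, n2)   # (dist, angle, angle, angle)
    return np.concatenate([geo, c1, c2])       # 4 + 3 + 3 = 10-D feature

def quantize_color_ppf(feature, dist_step=0.01,
                       angle_step=np.deg2rad(12), color_step=0.2):
    """Coarser color bins keep the hash table tolerant to illumination changes."""
    steps = np.concatenate([[dist_step], np.full(3, angle_step),
                            np.full(6, color_step)])
    return tuple((feature // steps).astype(int))
```

Because color mismatches prune pair candidates before they ever vote, the accumulator receives far fewer spurious votes, which is one plausible reading of the claimed gains in both speed and accuracy.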


IEEE International Conference on Robotics and Automation (ICRA) | 2010

Real-time 3D model-based tracking using edge and keypoint features for robotic manipulation

Changhyun Choi; Henrik I. Christensen

We propose a combined approach for 3D real-time object recognition and tracking that is directly applicable to robotic manipulation. We use keypoint features for the initial pose estimation. This pose estimate serves as an initial estimate for edge-based tracking. The combination of these two complementary methods provides an efficient and robust tracking solution. The main contributions of this paper include: 1) While most RAPiD-style tracking methods have used simplified CAD models, or at least manually well-designed models, our system can handle any form of polygon mesh model. To achieve this generality of object shapes, salient edges are automatically identified during an offline stage. Dull edges, which are usually invisible in images, are maintained as well for the cases in which they constitute the object boundaries. 2) Our system provides a fully automatic recognition and tracking solution, unlike most previous edge-based trackers, which require a manual pose initialization scheme. Since edge-based tracking sometimes drifts because of edge ambiguity, the proposed system monitors the tracking results and occasionally re-initializes when the results are inconsistent. Experimental results demonstrate our system's efficiency as well as its robustness.
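
The offline salient-edge identification can be illustrated with a dihedral-angle test over the mesh; the sketch below is one plausible reading of that step, with an assumed 30-degree threshold:

```python
import numpy as np

def classify_mesh_edges(vertices, faces, sharp_deg=30.0):
    """Offline edge selection: an edge shared by two faces is 'sharp' (salient)
    when the dihedral angle between the face normals exceeds a threshold;
    the remaining 'dull' edges are kept for when they form the object boundary."""
    def face_normal(f):
        a, b, c = vertices[f[0]], vertices[f[1]], vertices[f[2]]
        n = np.cross(b - a, c - a)
        return n / (np.linalg.norm(n) + 1e-12)

    edge_faces = {}
    for fi, f in enumerate(faces):
        for e in [(f[0], f[1]), (f[1], f[2]), (f[2], f[0])]:
            edge_faces.setdefault(tuple(sorted(e)), []).append(fi)

    sharp, dull = [], []
    for edge, fs in edge_faces.items():
        if len(fs) != 2:
            sharp.append(edge)          # border edges are always salient
            continue
        cos_a = np.dot(face_normal(faces[fs[0]]), face_normal(faces[fs[1]]))
        angle = np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0)))
        (sharp if angle > sharp_deg else dull).append(edge)
    return sharp, dull
```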


IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) | 2013

RGB-D edge detection and edge-based registration

Changhyun Choi; Alexander J. B. Trevor; Henrik I. Christensen

We present a 3D edge detection approach for RGB-D point clouds and its application in point cloud registration. Our approach detects several types of edges, and makes use of both 3D shape information and photometric texture information. Edges are categorized as occluding edges, occluded edges, boundary edges, high-curvature edges, and RGB edges. We exploit the organized structure of the RGB-D image to efficiently detect edges, enabling near real-time performance. We present two applications of these edge features: edge-based pair-wise registration and a pose-graph SLAM approach based on this registration, which we compare to state-of-the-art methods. Experimental results demonstrate the performance of edge detection and edge-based registration both quantitatively and qualitatively.
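
A minimal sketch of detecting occluding and occluded edges from depth discontinuities in an organized depth image, in the spirit described above; the 2 cm jump threshold is illustrative, and the other edge categories (boundary, high-curvature, RGB) are omitted:

```python
import numpy as np

def depth_discontinuity_edges(depth, jump=0.02):
    """Mark occluding (near side) and occluded (far side) pixels at depth
    discontinuities along the rows and columns of an organized depth image."""
    occluding = np.zeros(depth.shape, bool)
    occluded = np.zeros(depth.shape, bool)
    for axis in (0, 1):
        d = np.diff(depth, axis=axis)     # depth difference to the next pixel
        step = np.abs(d) > jump
        near_first = step & (d > 0)       # current pixel is nearer
        near_second = step & (d < 0)      # next pixel is nearer
        sl_a = [slice(None)] * 2
        sl_b = [slice(None)] * 2
        sl_a[axis] = slice(None, -1)      # selects the "current" pixel
        sl_b[axis] = slice(1, None)       # selects the "next" pixel
        occluding[tuple(sl_a)] |= near_first
        occluded[tuple(sl_b)] |= near_first
        occluding[tuple(sl_b)] |= near_second
        occluded[tuple(sl_a)] |= near_second
    return occluding, occluded
```

Operating directly on the organized image, rather than a k-d tree over an unordered cloud, is what makes near real-time performance plausible: every neighbor lookup is an array offset.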


IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) | 2012

3D textureless object detection and tracking: An edge-based approach

Changhyun Choi; Henrik I. Christensen

This paper presents an approach to textureless object detection and tracking of the 3D pose. Our detection and tracking schemes are coherently integrated in a particle filtering framework on the special Euclidean group, SE(3), in which the visual tracking problem is tackled by maintaining multiple hypotheses of the object pose. For textureless object detection, efficient chamfer matching is employed so that a set of coarse pose hypotheses is estimated from the matching between 2D edge templates of an object and a query image. Particles are then initialized from the coarse pose hypotheses by random drawing based on the matching costs. To ensure that the initialized particles are at or close to the global optimum, an annealing process is performed after the initialization. While standard edge-based tracking is used after the annealed initialization, we perform a refinement process to establish improved correspondences between projected edge points from the object model and edge points from an input image. Comparative results for several cluttered image sequences are shown to validate the effectiveness of our approach.
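
Chamfer matching scores a template against the distance transform of the query edge map. A minimal sketch, assuming boolean edge masks of the same size and leaving out the search over template translations and scales:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def chamfer_cost(template_edges, image_edges):
    """Average distance from each template edge pixel to the nearest image
    edge pixel, read off a distance transform of the query edge map.
    Lower cost means a better match; costs over all template placements
    would be ranked to produce the coarse pose hypotheses."""
    # Distance from every pixel to the nearest edge pixel in the query image.
    dt = distance_transform_edt(~image_edges)
    return dt[template_edges].mean()
```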


IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) | 2008

Real-time 3D object pose estimation and tracking for natural landmark based visual servo

Changhyun Choi; Seungmin Baek; Sukhan Lee

A real-time solution for estimating and tracking the 3D pose of a rigid object is presented for image-based visual servo with natural landmarks. The many state-of-the-art technologies available for recognizing the 3D pose of an object in a natural setting are not suitable for real-time servo due to their time lags. This paper demonstrates that a real-time solution for 3D pose estimation becomes feasible by combining a fast tracker such as KLT [7] [8] with a method of determining the 3D coordinates of the tracking points on an object at the time of SIFT-based tracking point initiation, assuming that a 3D geometric model of the object with a SIFT description is known a priori. By keeping track of the tracking points with KLT, removing tracking point outliers automatically, and re-initiating the tracking points using SIFT once they deteriorate, the 3D pose of an object can be estimated and tracked in real time. This method can be applied to both mono and stereo camera based 3D pose estimation and tracking: the former guarantees higher frame rates with about 1 ms of local pose estimation, while the latter yields more precise pose results with about 16 ms of local pose estimation. The experimental investigations have shown the effectiveness of the proposed approach with real-time performance.
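
A hypothetical wiring of this KLT/SIFT combination using OpenCV; the helper names, point shapes, and brute-force matching strategy are assumptions for illustration, not the paper's implementation. model_pts_3d and model_desc would come from the a priori 3D model with SIFT descriptors, and K is the camera intrinsic matrix.

```python
import cv2
import numpy as np

def reinitialize(gray, model_pts_3d, model_desc, K):
    """SIFT matching against the model yields fresh 2D-3D correspondences,
    with RANSAC-based PnP rejecting outliers."""
    sift = cv2.SIFT_create()
    kps, desc = sift.detectAndCompute(gray, None)
    matches = cv2.BFMatcher(cv2.NORM_L2).match(desc, model_desc)
    img_pts = np.float32([kps[m.queryIdx].pt for m in matches])
    obj_pts = np.float32([model_pts_3d[m.trainIdx] for m in matches])
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(obj_pts, img_pts, K, None)
    return img_pts[inliers[:, 0]], obj_pts[inliers[:, 0]], rvec, tvec

def track_frame(prev_gray, gray, pts_2d, pts_3d, K):
    """KLT tracks the 2D points between frames; lost points are dropped and
    the pose is re-estimated from the surviving 2D-3D pairs."""
    pts = pts_2d.reshape(-1, 1, 2).astype(np.float32)
    nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None)
    good = status.ravel() == 1
    ok, rvec, tvec = cv2.solvePnP(pts_3d[good], nxt[good], K, None)
    return nxt[good].reshape(-1, 2), pts_3d[good], rvec, tvec
```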


IEEE International Conference on Robotics and Automation (ICRA) | 2011

Robust 3D visual tracking using particle filtering on the SE(3) group

Changhyun Choi; Henrik I. Christensen

In this paper, we present a 3D model-based object tracking approach using edge and keypoint features in a particle filtering framework. Edge points provide 1D information for pose estimation, and it is natural to consider multiple hypotheses. Recently, particle filtering based approaches have been proposed to integrate multiple hypotheses and have shown good performance, but most of the work has made an assumption that an initial pose is given. To remove this assumption, we employ keypoint features for initialization of the filter. Given 2D-3D keypoint correspondences, we choose a set of minimum correspondences to calculate a set of possible pose hypotheses. Based on the inlier ratio of correspondences, the set of poses is drawn to initialize particles. For better performance, we employ autoregressive state dynamics and apply it to a coordinate-invariant particle filter on the SE(3) group. Based on the number of effective particles calculated during tracking, the proposed system re-initializes particles when the tracked object goes out of sight or is occluded. The robustness and accuracy of our approach are demonstrated via comparative experiments.
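
The effective-particle-count test used to trigger re-initialization is standard and easy to sketch; the 50% threshold below is an assumption:

```python
import numpy as np

def effective_sample_size(weights):
    """N_eff = 1 / sum(w_i^2) for normalized weights; it collapses toward 1
    when a few particles dominate, signaling likely tracking failure."""
    return 1.0 / np.sum(np.square(weights))

def needs_reinit(weights, frac=0.5):
    """Re-initialize from keypoint correspondences when N_eff drops below a
    fraction of the particle count."""
    return effective_sample_size(weights) < frac * len(weights)
```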


International Symposium on Experimental Robotics (ISER) | 2016

Towards coordinated precision assembly with robot teams

Mehmet Remzi Dogar; Ross A. Knepper; Andrew Spielberg; Changhyun Choi; Henrik I. Christensen; Daniela Rus

We present a system in which a flexible team of robots coordinates to assemble large, complex, and diverse structures autonomously. Our system operates across a wide range of spatial scales and tolerances, using a hierarchical perception architecture. For the successful execution of very precise assembly operations under initial uncertainty, our system starts with wide field-of-view but low-accuracy sensors and gradually moves to narrow field-of-view but high-accuracy sensors. Our system also uses a failure detection and recovery scheme integrated with this hierarchical perception architecture: upon losing track of a feature, it falls back to the wide field-of-view sensors to re-localize. Additionally, we contribute manipulation skills and tools necessary to assemble large structures with high precision. First, the team of robots coordinates to transport large assembly parts that are too heavy for a single robot to carry. Second, we develop a new tool that co-localizes holes and fasteners for robust insertion and fastening. We present real-robot experiments in which we measure the contribution of the hierarchical perception and failure recovery approach to the robustness of our system, as well as an extensive set of experiments in which our robots successfully complete all 80 attempted fastener insertion operations.
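
The coarse-to-fine sensor handoff with failure recovery can be caricatured as a small ladder; the sensor names and numbers below are purely illustrative assumptions, not the paper's hardware:

```python
from dataclasses import dataclass

@dataclass
class Sensor:
    name: str
    field_of_view_deg: float   # wide = coarse search, narrow = precise servoing
    accuracy_mm: float

# Hypothetical coarse-to-fine ladder: start wide and coarse, hand off to
# narrow and accurate sensors, and climb back to the top on tracking failure.
LADDER = [Sensor("overhead camera", 120.0, 20.0),
          Sensor("wrist camera", 60.0, 2.0),
          Sensor("fastener-tool sensor", 10.0, 0.1)]

def next_level(level, feature_tracked):
    """Descend one rung after a successful localization; retract to the widest
    sensor to re-localize when the tracked feature is lost."""
    return min(level + 1, len(LADDER) - 1) if feature_tracked else 0
```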

Collaboration


Dive into Changhyun Choi's collaborations.

Top Co-Authors

Yong-Joo Kim (Chungnam National University)
Henrik I. Christensen (Georgia Institute of Technology)
Sun-Ok Chung (Chungnam National University)
Dae-Hyun Lee (Chungnam National University)
Daniela Rus (Massachusetts Institute of Technology)
Oncel Tuzel (Mitsubishi Electric Research Laboratories)
Yuichi Taguchi (Mitsubishi Electric Research Laboratories)
Ming-Yu Liu (Mitsubishi Electric Research Laboratories)
Kyeong-Hwan Lee (Chonnam National University)