
Publication


Featured research published by Sukjune Yoon.


Intelligent Robots and Systems | 2012

On-board odometry estimation for 3D vision-based SLAM of humanoid robot

Sunghwan Ahn; Sukjune Yoon; Seungyong Hyung; Nosan Kwak; Kyung Shik Roh

This paper addresses a vision-based 3D motion estimation framework for humanoid robots that copes with a human-like walking pattern. A humanoid robot, called Roboray, is designed for dynamic walking control with human-like heel-toe motion. Despite the stability and energy efficiency of dynamic walking, it produces larger swaying motion and more uncertainty in camera movement than conventional ZMP (Zero Moment Point)-based walking. The framework effectively uses on-board odometry information from the robot to improve the performance of the vision-based motion estimation. To accomplish this, we propose an on-board odometry filter that fuses kinematic odometry, visual odometry, and raw IMU data. The odometry filter is then combined with vision-based SLAM to provide an accurate motion model, enhancing the SLAM estimates. Experimental results in an indoor environment verify that the framework can successfully estimate the pose of Roboray in real time.
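
The odometry filter fuses several motion estimates into one. As a hedged illustration of the underlying idea only (not the paper's actual filter design), the simplest building block of such fusion is inverse-variance weighting of two estimates of the same displacement; all names and noise values below are invented:

```python
# Minimal sketch of fusing two odometry estimates of the same displacement
# by inverse-variance weighting, the simplest form of the kind of sensor
# fusion an on-board odometry filter performs. Values are illustrative.

def fuse(est_a, var_a, est_b, var_b):
    """Fuse two scalar estimates with known variances."""
    w_a = 1.0 / var_a
    w_b = 1.0 / var_b
    fused = (w_a * est_a + w_b * est_b) / (w_a + w_b)
    fused_var = 1.0 / (w_a + w_b)           # fused estimate is less uncertain
    return fused, fused_var

# Kinematic odometry says the robot moved 0.50 m (noisy during swaying);
# visual odometry says 0.46 m (more precise here).
fused, fused_var = fuse(0.50, 0.04, 0.46, 0.01)
print(round(fused, 3), round(fused_var, 3))  # 0.468 0.008
```

The fused estimate is pulled toward the lower-variance source, which is why combining odometry sources helps most when their error characteristics differ, as they do between kinematic and visual odometry on a swaying biped.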


Intelligent Robots and Systems | 2012

Robust descriptors for 3D point clouds using Geometric and Photometric Local Feature

Hyoseok Hwang; Seungyong Hyung; Sukjune Yoon; Kyung Shik Roh

Robust perception is essential for robots to handle various objects skillfully. In this paper, we propose a novel approach to recognize objects and estimate their 6-DOF pose using a 3D feature descriptor called the Geometric and Photometric Local Feature (GPLF). The proposed descriptor uses both the geometric and photometric information of 3D point clouds from an RGB-D camera and integrates this information into an efficient descriptor. GPLF shows robust discriminative performance regardless of object characteristics such as shape or appearance in cluttered scenes. The experimental results show how well the proposed approach classifies and identifies objects, and the pose estimation is robust and stable enough for the robot to manipulate objects. We also compare the proposed approach with previous approaches that use partial information of objects on a representative large-scale RGB-D object dataset.
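
As a schematic, heavily simplified illustration of the general idea of combining geometric and photometric cues in a single descriptor (not the actual GPLF construction, which is more elaborate), consider distances to neighboring points concatenated with color differences:

```python
# Toy illustration only: one descriptor vector per 3D point, built from a
# geometric part (distances to neighbors) and a photometric part (mean
# per-channel color difference to neighbors). Not the paper's GPLF.
import math

def toy_descriptor(point, color, neighbors):
    """neighbors: list of ((x, y, z), (r, g, b)) tuples."""
    geo = []
    photo = []
    for npt, ncol in neighbors:
        geo.append(math.dist(point, npt))                            # geometric cue
        photo.append(sum(abs(a - b) for a, b in zip(color, ncol)) / 3.0)  # photometric cue
    return geo + photo                                               # one joint vector

desc = toy_descriptor((0.0, 0.0, 0.0), (100, 100, 100),
                      [((1.0, 0.0, 0.0), (100, 130, 100)),
                       ((0.0, 2.0, 0.0), (40, 100, 100))])
print(desc)  # [1.0, 2.0, 10.0, 20.0]
```

The point of such a joint vector is that two objects with similar shape but different appearance (or vice versa) still get distinguishable descriptors, which is the property the abstract claims for GPLF in cluttered scenes.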


Advanced Robotics | 2013

Real-time 3D simultaneous localization and map-building for a dynamic walking humanoid robot

Sukjune Yoon; Seungyong Hyung; Minhyung Lee; Kyung Shik Roh; Sunghwan Ahn; Andrew P. Gee; Pished Bunnun; Andrew D Calway; Walterio W. Mayol-Cuevas

In this paper, we develop an on-board real-time 3D visual simultaneous localization and mapping (SLAM) system for a dynamic walking humanoid robot. Under processing and real-time constraints, the system uses a lightweight localization and mapping approach based on the well-known extended Kalman filter, featuring a robust real-time relocalization capability that allows loop closing and robust localization in 6D. The robot is controlled by torque references at the joints using its dynamic properties. This results in more energy-efficient motion but also in larger movement than that of a conventional ZMP-based humanoid, which carefully maintains the position of the center of mass on the plane. These more agile motions pose challenges for a visual mapping system that must operate in real time. The developed system features a combination of a stereo camera, robust visual descriptors, and motion model switching to compensate for the larger motion and uncertainty. We provide practical implementation details of the system and methods, test on the real humanoid robot, and compare our results with motion obtained from a motion capture system.


Advanced Robotics | 2008

Vision-Based Obstacle Detection and Avoidance: Application to Robust Indoor Navigation of Mobile Robots

Sukjune Yoon; Kyung Shik Roh; Youngbo Shim

We propose a practical and efficient method for obstacle detection and avoidance. In this paper, a robot detects obstacles based on the projective invariants of stereo cameras, fuses this information with two-dimensional scanning sensor data, and finally builds a more informative and conservative occupancy map. Although this approach does not recover the exact shape of obstacles, this shortcoming is offset in practice by its fast computation time and robustness to illumination conditions. To avoid detected obstacles, a new reactive obstacle avoidance strategy is also presented. To evaluate the proposed method, we applied it to the mobile robot iMARO-III. In this test, iMARO-III operated continuously for 7 days in a real office environment without any engineer intervention and without any collisions.
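
As a toy illustration of what a conservative fusion of the two sensors can mean (not the paper's actual map-update rule), a cell may be marked occupied when either sensor flags it:

```python
# Toy sketch: conservative occupancy fusion as a cell-wise OR of a
# vision-derived obstacle grid and a 2D-scanner grid. Grid contents
# are invented for illustration.

def fuse_occupancy(vision_grid, scan_grid):
    """Cell-wise OR of two boolean occupancy grids (conservative fusion)."""
    rows, cols = len(vision_grid), len(vision_grid[0])
    return [[vision_grid[r][c] or scan_grid[r][c] for c in range(cols)]
            for r in range(rows)]

# 3x3 grids: vision sees an obstacle at (0, 1); the scanner sees one at (2, 2).
vision = [[False, True,  False],
          [False, False, False],
          [False, False, False]]
scan   = [[False, False, False],
          [False, False, False],
          [False, False, True]]
fused = fuse_occupancy(vision, scan)
print(fused[0][1], fused[2][2], fused[1][1])  # True True False
```

OR-fusion is "conservative" in the sense that it never discards an obstacle hypothesis reported by either sensor, trading some free space for safety, which matches the abstract's emphasis on collision-free long-term operation.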


Autonomous Robots | 2007

Independent traction control for uneven terrain using stick-slip phenomenon: application to a stair climbing robot

Hyun Do Choi; Chun Kyu Woo; Soo Hyun Kim; Yoon Keun Kwak; Sukjune Yoon

Mobile robots are being developed for building inspection and security, military reconnaissance, and planetary exploration. In such applications, the robot is expected to encounter rough terrain, where maintaining adequate traction is important: excessive wheel slip can cause the robot to lose mobility or even become trapped. This paper proposes a traction control algorithm that can be implemented independently on each wheel without requiring extra sensors or devices compared with standard velocity control methods. The algorithm estimates the stick-slip state of the wheels from an estimate of angular acceleration, so that the traction force induced by the wheel torque alternates between the maximum static friction and the kinetic friction. Simulations and experiments are performed to validate the algorithm. The proposed traction control algorithm yielded a 40.5% reduction in total slip distance and a 25.6% reduction in power consumption compared with the standard velocity control method. Furthermore, the algorithm does not require a complex wheel-soil interaction model or optimization of robot kinematics.
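
A minimal sketch of the control idea, assuming an invented acceleration threshold and back-off gain (the paper's actual estimator and constants differ): when the wheel's angular acceleration spikes under roughly constant commanded torque, the wheel has likely broken static friction and started slipping, so the controller backs off the torque.

```python
# Illustrative stick-slip reaction from wheel angular acceleration alone.
# Threshold and gain are made-up tuning constants, not from the paper.

SLIP_ACCEL_THRESHOLD = 5.0   # rad/s^2, assumed tuning constant
TORQUE_BACKOFF = 0.8         # multiplicative torque reduction on slip

def update_torque(torque_cmd, omega_prev, omega_now, dt):
    """Reduce commanded torque when estimated angular acceleration spikes."""
    alpha = (omega_now - omega_prev) / dt   # estimated angular acceleration
    if alpha > SLIP_ACCEL_THRESHOLD:
        return torque_cmd * TORQUE_BACKOFF  # slipping: back off toward sticking
    return torque_cmd                       # sticking: keep commanded torque

# Wheel speed jumps from 2.0 to 2.8 rad/s in 0.1 s -> alpha = 8 rad/s^2 (slip).
print(update_torque(1.0, 2.0, 2.8, 0.1))  # 0.8
print(update_torque(1.0, 2.0, 2.2, 0.1))  # 1.0
```

Because this logic needs only each wheel's own encoder signal, it can run per wheel with no extra sensors, which is the property the abstract highlights.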


Pacific-Rim Symposium on Image and Video Technology | 2006

Global localization of mobile robot using omni-directional image correlation

Sukjune Yoon; Woo-sup Han; Seung Ki Min; Kyung Shik Roh

This paper presents a localization method using circular correlation of omni-directional images. Mobile robot localization, especially indoors, is a key component in the development of service robots. Though stereo vision is widely used for localization, its performance is limited by computational complexity and a narrow view angle. To compensate, we utilize a single omni-directional camera that captures a 360° panoramic image around the robot at one time. The position of a mobile robot can be estimated from the correlation between the CHL (Circular Horizontal Line) of the landmark image and the CHL of the image captured at the robot's position. To accelerate computation, correlation values are calculated using the FFT (Fast Fourier Transform), and to increase reliability, the CHLs are warped and the correlation values recalculated. Experimental results and performance in a real home environment show the feasibility of the method.
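
The FFT-based circular correlation step can be sketched directly. Treating a CHL as a 1D array of length N, circular cross-correlation is IFFT(FFT(a)·conj(FFT(b))) by the correlation theorem, and the argmax gives the relative rotation between the two views; the signal below is synthetic:

```python
# Minimal sketch of FFT-based circular correlation between two 1D lines,
# the operation used to compare a query CHL against a landmark CHL.
import numpy as np

def circular_correlation(a, b):
    """Circular cross-correlation of equal-length 1D signals via FFT."""
    return np.real(np.fft.ifft(np.fft.fft(a) * np.conj(np.fft.fft(b))))

# A synthetic landmark line and the same line rotated by 30 samples
# (i.e., the camera heading changed by 30/360 of a turn).
landmark = np.random.default_rng(0).standard_normal(360)
query = np.roll(landmark, 30)
corr = circular_correlation(query, landmark)
print(int(np.argmax(corr)))  # 30: recovered rotation in samples
```

The FFT route costs O(N log N) versus O(N²) for correlating at every shift explicitly, which is what makes the exhaustive rotation search cheap enough for online localization.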


International Conference on Intelligent Robotics and Applications | 2015

Single-Port Surgical Robot System with Flexible Surgical Instruments

Kyung Shik Roh; Sukjune Yoon; Young Do Kwon; Youngbo Shim; Yong-Jae Kim

This paper presents a novel single-port access surgery (SPS) robot system. The system is composed of a surgical slave robot with flexible surgical instruments and an ergonomic master device with an image-guided system. The surgical slave robot has a six-degrees-of-freedom (6-DOF) guide tube, two 7-DOF surgical tools, a 3-DOF stereo endoscope, and a 5-DOF slave arm. The master device has a 14-DOF ergonomic instrument controller and a three-dimensional image-guided system, so the operator can guide the surgical instruments to the target in various poses through the master console. Experimental results for surgical operations show the feasibility of this robot for single-port robotic surgery.


Conference on Automation Science and Engineering | 2007

Efficient Navigation Algorithm Using 1D Panoramic Images

Sukjune Yoon; Woo-sup Han; Seung Ki Min; Youngbo Shim; Kyung Shik Roh

We propose a practical and efficient navigation method for indoor mobile robots using 1D panoramic images. Mobile robot navigation, one of the most important components in robotic applications, is carried out using an omni-directional camera that captures 360° images around the robot, which offers many advantages for indoor navigation. In this paper, the position of the robot is estimated by 1D panoramic image correlation, where the 1D image is the circular horizontal line of the omni-directional image. The proposed method can estimate the position of the robot without any prior position information in a short time; that is, it can cope with the kidnap problem. The path of the robot is generated from a node map that stores 1D panoramic images and node information at the captured points. For a feasibility test of the proposed algorithms, we applied them to the mobile robot iMARO-III, which succeeded in real-world operation without any operator intervention.
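
Global localization against a node map (the kidnap case) amounts to scoring the current 1D line against every stored node line and taking the best match. A toy sketch with synthetic node data and a brute-force, rotation-invariant similarity (the paper's actual similarity measure and map contents differ):

```python
# Toy sketch of node-map localization with no prior pose: compare the
# query line against every stored node line, invariant to camera heading
# by taking the best score over all circular shifts. Data is synthetic.
import numpy as np

def similarity(query, line):
    """Best alignment score over all circular shifts (brute force, O(N^2))."""
    return max(float(np.dot(np.roll(query, k), line)) for k in range(len(query)))

def best_node(query, node_lines):
    """Index of the stored node whose 1D line best matches the query."""
    scores = [similarity(query, line) for line in node_lines]
    return int(np.argmax(scores))

# Three stored node lines; the query is node 2's line seen from a
# rotated heading (circularly shifted by 17 samples).
rng = np.random.default_rng(1)
nodes = [rng.standard_normal(128) for _ in range(3)]
print(best_node(np.roll(nodes[2], 17), nodes))  # 2
```

Scoring over all shifts is what removes the dependence on the robot's heading, so only the set of node positions needs to be searched, not headings as well.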


International Conference on Advanced Intelligent Mechatronics | 2014

Robust place recognition by spectral graph matching using omni-directional images

Sukjune Yoon; Soon Yong Park; Sunghwan Ahn; Hyoseok Hwang; Kyung Shik Roh

This paper presents an appearance-based place recognition method using omni-directional images, which describe the 360° panoramic scene around a camera at one time. We utilize a spectral graph matching method to measure the degree of similarity between omni-directional query and topological node images. This method builds an affinity matrix of a graph whose nodes represent potential correspondences, while the weights on the edges measure the similarity between pairs of potential correspondences. In this paper, we compute the elements of the affinity matrix from individual photometric feature matching and geometric graph-shape matching between omni-directional images. The proposed method enables robust place recognition since it considers both geometric and photometric similarity. The experimental results show that the proposed method can robustly recognize places in dynamic indoor environments.
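
The spectral step can be sketched as follows: build the affinity matrix over candidate correspondences, take its principal eigenvector, and read off the strongest correspondences. This minimal version uses hand-made affinities and a simple greedy decode that ignores one-to-one constraints, whereas the paper fills the matrix with photometric feature similarity and geometric graph-shape consistency:

```python
# Minimal sketch of spectral matching over candidate correspondences:
# consistent correspondences reinforce each other in the affinity matrix,
# so they dominate its principal eigenvector. Affinities are hand-made.
import numpy as np

def spectral_match(affinity, n_pairs):
    """Greedy decode of the principal eigenvector of a symmetric affinity matrix."""
    vals, vecs = np.linalg.eigh(affinity)    # eigenvalues in ascending order
    x = np.abs(vecs[:, -1])                  # eigenvector of the largest eigenvalue
    order = np.argsort(-x)                   # strongest correspondences first
    return sorted(int(i) for i in order[:n_pairs])

# 4 candidate correspondences; pairs 0, 1, 2 are mutually consistent
# (high pairwise affinity) while pair 3 agrees with nothing.
A = np.array([[1.0, 0.9, 0.8, 0.1],
              [0.9, 1.0, 0.9, 0.1],
              [0.8, 0.9, 1.0, 0.1],
              [0.1, 0.1, 0.1, 1.0]])
print(spectral_match(A, 3))  # [0, 1, 2]
```

The eigenvector concentrates mass on the mutually consistent cluster of correspondences, which is why outlier matches (like pair 3) are suppressed even when they look plausible individually.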


Archive | 2008

Simultaneous localization and map building method and medium for moving robot

Sukjune Yoon; Seung Ki Min; Kyung Shik Roh
