Hyoungrae Kim
Inha University
Publications
Featured research published by Hyoungrae Kim.
Systems, Man, and Cybernetics | 1999
W. Choi; Chang-Woo Ryu; Hyoungrae Kim
Incorporating various types of sensors increases the degree of autonomy and intelligence with which mobile robots (mobots) perceive their surroundings, but at the same time imposes a large computational burden on data processing. The purpose of the research presented in this paper is to develop a low-cost multisensor system and accompanying algorithms for autonomous navigation of a mobot. This paper proposes digital image processing schemes for map-building and localization of a mobot using a monocular vision system and a single ultrasonic sensor in indoor environments. For localization, camera calibration is performed first so that depth information can be acquired from the image obtained by a single camera. For map-building, fast and effective image processing techniques based on morphology are applied to reduce computational complexity. For preliminary experiments, we integrated a mobot system whose main components are a mono-vision system, a single ultrasonic sensor, and a notebook PC, mounted on a mobile base. The proposed algorithms were implemented, and the mobot was able to localize itself within an allowed position error range and to locate dynamic obstacles moving reasonably fast inside a building. The overall results demonstrate the suitability of the proposed methods for developing autonomous service mobots in indoor environments.
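The morphology-based map-building step can be illustrated with a short sketch. The following is a minimal example (not the authors' code) of using a morphological gradient to pull wall and obstacle boundaries out of a grayscale indoor frame; the kernel size, thresholding scheme, and input file name are assumptions.

```python
# Minimal sketch of morphology-based boundary extraction for indoor
# map-building; kernel size and thresholding are assumed, not from the paper.
import cv2
import numpy as np

def extract_boundaries(gray: np.ndarray) -> np.ndarray:
    """Morphological gradient (dilation minus erosion) highlights edges
    such as wall/floor boundaries at low computational cost."""
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3))
    grad = cv2.morphologyEx(gray, cv2.MORPH_GRADIENT, kernel)
    # Otsu picks a global binarization threshold automatically.
    _, edges = cv2.threshold(grad, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return edges

frame = cv2.imread("corridor.png", cv2.IMREAD_GRAYSCALE)  # hypothetical input
if frame is not None:
    boundary_map = extract_boundaries(frame)
```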
International Conference on Control, Automation and Systems | 2013
Jaehong Lee; Heon Gu; Hyung-Chan Kim; Jungmin Kim; Hyoungrae Kim; Hakil Kim
This study develops and implements a Kinect-based 3D gesture recognition system for interactive manipulation of 3D objects in educational visualization software. The developed system detects and tracks human hands in the RGB-D images captured by a Kinect sensor and recognizes human gestures by counting the number of open fingers on each hand and tracking the 3D motion of both hands. The status of the fists and the gestures of the hands are recognized as control commands for manipulating 3D structures visualized on a 2D monitor by Molecule Viewer. The developed system is implemented on a Windows 7 laptop PC using C# and the Emgu CV 2.3.0 library, and tested in an ordinary classroom environment. It achieves an overall average accuracy of around 90% in recognizing hand status and gesture commands under various ambient lighting conditions.
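A common way to count open fingers in a segmented hand mask, in the spirit of the system described above, is via convexity defects. The sketch below (Python/OpenCV rather than the paper's C#/Emgu CV) assumes the hand has already been segmented into a binary mask; the 20-pixel gap depth is an arbitrary tuning value.

```python
import cv2
import numpy as np

def count_open_fingers(hand_mask: np.ndarray) -> int:
    """Count open fingers in a binary hand mask via convexity defects:
    N deep gaps between fingertips imply N+1 open fingers."""
    contours, _ = cv2.findContours(hand_mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return 0
    hand = max(contours, key=cv2.contourArea)      # largest blob is the hand
    hull = cv2.convexHull(hand, returnPoints=False)
    defects = cv2.convexityDefects(hand, hull)
    if defects is None:
        return 0
    gaps = 0
    for start, end, farthest, depth in defects[:, 0]:
        if depth / 256.0 > 20:   # defect depth is stored in 1/256-pixel units
            gaps += 1
    return gaps + 1 if gaps else 0  # no deep gap: treat as a closed fist
```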
Journal of Institute of Control, Robotics and Systems | 2012
Hyung-Ho Lee; Xuenan Cui; Hyoungrae Kim; Seong-Wan Ma; Jaehong Lee; Hakil Kim
This paper proposes a robust object tracking algorithm for mobile robots using object features and an on-line learning based particle filter. Mobile robots with a side-view camera face problems such as camera jitter, illumination change, object shape variation, and occlusion in diverse environments. To overcome these problems, a color histogram and a HOG descriptor are fused for efficient representation of an object. A particle filter is used for robust object tracking, with the on-line learning method IPCA handling non-linear environments. The validity of the proposed algorithm is demonstrated through experiments on databases acquired in a variety of environments. The experiments show that the particle filter using combined color and shape information with on-line learning (92.4%) is more robust than the particle filter using only color information (71.1%) or the particle filter using shape and color information without on-line learning (90.3%).
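The fusion of color and shape cues in the particle weighting step could look like the sketch below. Mapping each distance to a Gaussian likelihood and blending the two is one standard choice; the sigmas and blend weight are assumed tuning constants, not values from the paper.

```python
import numpy as np

def fused_particle_weight(color_dist: float, hog_dist: float,
                          sigma_c: float = 0.2, sigma_h: float = 0.2,
                          alpha: float = 0.5) -> float:
    """Blend a color-histogram distance (e.g., Bhattacharyya) and a HOG
    descriptor distance into a single particle weight."""
    p_color = np.exp(-color_dist ** 2 / (2.0 * sigma_c ** 2))
    p_hog = np.exp(-hog_dist ** 2 / (2.0 * sigma_h ** 2))
    return alpha * p_color + (1.0 - alpha) * p_hog
```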
Journal of Institute of Control, Robotics and Systems | 2014
Hyoungrae Kim; Xuenan Cui; Jaehong Lee; Seungjun Lee; Hakil Kim
Abstract: This paper proposes a method of tracking an object for a person-following mobile robot by combining a monocular camera and a laser scanner, where each sensor can compensate for the weaknesses of the other. For human-robot interaction, a mobile robot needs to maintain a distance between itself and a moving person. Maintaining this distance consists of two parts: object tracking and person-following. Object tracking consists of particle filtering and on-line learning using shape features extracted from the image. A monocular camera easily fails to track a person due to its narrow field of view and the influence of illumination changes, and it is therefore used together with a laser scanner. After constructing the geometric relation between the differently oriented sensors, the proposed method demonstrates its robustness in tracking and following a person with a success rate of 94.7% in indoor environments with varying lighting conditions, even when a moving object passes between the robot and the person. Keywords: object tracking, person-following, mobile robot, camera-LRF extrinsic calibration

I. Introduction. Robot technology has recently been advancing in many fields. In particular, service robots and mobility-assistance robots are being developed as human-centered technologies. To collaborate with people, such robots must recognize not only indoor/outdoor environments but also human behavior. Moreover, from the perspective of human-robot interaction, having the robot move while keeping a close distance to the person is effective for applying various forms of interaction. In a robot, computer vision technology serves as the sense of sight. In particular, tracking a specific object in data acquired from a camera or other vision sensor plays a major role in enabling following, by passing that information to the controller, and this requires highly accurate tracking. However, following with a mobile robot takes place under diverse lighting conditions (day and night, indoors and outdoors) and under various changes in object color and shape, so many problems arise [1]. In computer vision, the causes of tracking failure are generally divided, relative to the camera, into internal and external factors. Internal factors are changes in the target object itself, such as variations in its pose and shape; external factors are phenomena other than changes in the target, such as camera jitter, viewpoint changes, illumination changes, cluttered backgrounds, and occlusion by other objects. To address the performance degradation caused by these many internal and external factors, extensive research has been conducted on tracking objects robustly under widely varying external conditions [1,2]. One approach used in prior person-following work is to attach a light-emitting device to the person to determine his or her position [3,4]. Because this requires artificial devices to be worn, surrounding infrastructure, and prior calibration, its applicability in real environments is limited. In [5,6], the target's face was detected with a camera and followed, but applying face detection can be very inconvenient for the user, since the person must face the robot directly. Furthermore, [5] is not robust to illumination and background changes, and [6], despite using a laser scanner (LRF), used it merely to detect the person's legs. Another person-following approach uses a pre-stored map [7], which is required for laser-scanner-based person following. This method is hard to apply in diverse environments, and since the map must be updated whenever the operating environment changes, its operation is limited. The method of [8], which is similar to this paper, uses a laser scanner to obtain the position of and distance to the person. However, that method does no more than maintain a safe distance using only the relative distance to the object. The IPCA (Incremental Principal Component Analysis) object tracking based on shape features and color information, which forms the foundation of this paper, is …
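The geometric relation between the two sensors mentioned in the abstract amounts to projecting LRF returns into the camera image once the extrinsics are known. Below is a minimal sketch of that projection, assuming the scan points lie in the scanner's z = 0 plane and that R, t, K come from a prior camera-LRF calibration; none of this is the paper's actual code.

```python
import numpy as np

def project_lrf_to_image(scan_xy: np.ndarray, R: np.ndarray, t: np.ndarray,
                         K: np.ndarray) -> np.ndarray:
    """Project 2D LRF points (meters, scanner frame) into pixel coordinates.

    R (3x3) and t (3,) are the camera-LRF extrinsics; K (3x3) is the
    camera intrinsic matrix. Scan points are lifted to 3D on z = 0.
    """
    pts = np.column_stack([scan_xy, np.zeros(len(scan_xy))])  # (N, 3)
    cam = (R @ pts.T).T + t            # transform into the camera frame
    cam = cam[cam[:, 2] > 0]           # keep only points in front of the camera
    uv = (K @ cam.T).T
    return uv[:, :2] / uv[:, 2:3]      # perspective division -> (u, v)
```

With such a projection the image-based tracker and the range data can validate each other, which is the sense in which each sensor supplements the other.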
Systems, Man, and Cybernetics | 2013
Jaehong Lee; Xuenan Cui; Seungjun Lee; Hakil Kim; Hyoungrae Kim
Challenging problems in mobile robot localization still exist in real-world environments such as lobbies and halls. In this paper, a localization method termed INHA (Intuitive Natural landmark-based Homography estimation Algorithm) is proposed. The proposed method is based on a feature matching scheme that requires only a single camera. Feature points are extracted using Oriented FAST and Rotated BRIEF (ORB), and outlier matches are rejected with the RANSAC algorithm. The 3D position of the camera is estimated by the EPnP method. The experimental results showed an average distance error of 1%, and driving tests confirmed that the proposed method can be applied in lobby or hall environments.
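An approximation of the described pipeline (ORB matching, RANSAC outlier rejection, EPnP pose recovery) can be sketched with OpenCV as below. For brevity the sketch folds RANSAC and EPnP into a single solvePnPRansac call, and ref_pts3d, a table mapping reference keypoints to known 3D landmark coordinates, is an assumed data structure, not something described in the paper.

```python
import cv2
import numpy as np

def estimate_camera_pose(frame, ref_img, ref_pts3d, K):
    """Estimate camera pose against a natural-landmark reference image."""
    orb = cv2.ORB_create(nfeatures=1000)
    kp_ref, des_ref = orb.detectAndCompute(ref_img, None)
    kp_cur, des_cur = orb.detectAndCompute(frame, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des_ref, des_cur)
    obj = np.float32([ref_pts3d[m.queryIdx] for m in matches])   # 3D points
    img = np.float32([kp_cur[m.trainIdx].pt for m in matches])   # 2D points
    # RANSAC rejects outlier matches; EPnP solves for the camera pose.
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        obj, img, K, None, flags=cv2.SOLVEPNP_EPNP)
    return (rvec, tvec) if ok else None
```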
Journal of Institute of Control, Robotics and Systems | 2013
Eun-Soo Park; Yongji Yun; Hyoungrae Kim; Jonghwan Lee; Hoyong Ki; Chulhee Lee; Hakil Kim
This paper proposes a method for classifying parallel and vertical parking states and pillars for a parking assist system using ultrasonic sensors. Since a typical parking space detection module receives ultrasonic data with compressed amplitude, analyzing them is difficult. To solve this problem, symmetric transformation and noise removal are performed in a preprocessing stage. In the feature extraction process, four features are proposed: the standard deviation of distance, the reconstructed peak, the standard deviation of the reconstructed signal, and the sum of width. A Gaussian fitting model is used to reconstruct saturated peak signals, and the discriminability of each feature is measured. To find the best combination among these features, a multi-class SVM and a subset generator are used for more accurate and robust classification. The proposed method achieves a 92% classification rate and proves applicable to parking space detection modules.
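The Gaussian reconstruction of saturated peaks can be made concrete: fit a Gaussian to the unclipped samples of the echo and read the peak amplitude off the fit. The initial guesses and saturation handling below are assumptions, not the paper's implementation.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, a, mu, sigma):
    return a * np.exp(-(x - mu) ** 2 / (2.0 * sigma ** 2))

def reconstruct_saturated_peak(x: np.ndarray, y: np.ndarray,
                               sat_level: float) -> float:
    """Fit a Gaussian to the unsaturated samples of a clipped ultrasonic
    echo and return the reconstructed (unclipped) peak amplitude."""
    mask = y < sat_level                        # drop the clipped samples
    p0 = (sat_level, x[np.argmax(y)], 1.0)      # amplitude, center, width guesses
    (a, mu, sigma), _ = curve_fit(gaussian, x[mask], y[mask], p0=p0)
    return a
```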
Systems, Man, and Cybernetics | 2014
Hyoungrae Kim; Jaehong Lee; Hakil Kim; Daehyuk Park
This paper proposes a method of localizing a vehicle on a highway in the cross-sectional direction for the purpose of recognizing the driving lane. By tracking road signs over the highway, the relative position between the vehicle and a sign is calculated, and the absolute position is obtained from the a priori known installation rules for road signs set by traffic regulations. The proposed method uses a Kalman filter for road sign tracking, analyzes the motion using the pinhole camera model, and classifies the type of road sign using ORB (Oriented FAST and Rotated BRIEF) features. The driving lane is then recognized from the relative position of the vehicle with respect to the sign. Experiments performed on videos acquired from real-world highway driving demonstrate that the proposed method is capable of compensating for the limits of GPS positioning.
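The pinhole-model step can be illustrated briefly: with the sign's height above the camera known from installation regulations, its depth follows from the vertical pixel offset and its lateral offset from the horizontal one. The sketch below assumes an undistorted image and standard intrinsics; the lane-mapping numbers are purely illustrative, not values from the paper.

```python
def sign_lateral_offset(u: float, v: float, fx: float, fy: float,
                        cx: float, cy: float, sign_height_m: float) -> float:
    """Lateral offset (m) of an overhead sign in the camera frame.

    For a sign above the optical axis v < cy, so the depth Z is positive.
    """
    Z = fy * sign_height_m / (cy - v)   # depth from the known mounting height
    return (u - cx) * Z / fx            # lateral offset from the pixel column

def driving_lane(sign_lateral_m: float, sign_from_left_edge_m: float = 1.75,
                 lane_width_m: float = 3.5) -> int:
    """Map the sign's lateral offset to a 1-based lane index, assuming the
    sign's installed offset from the left road edge is known (illustrative)."""
    vehicle_from_left_edge = sign_from_left_edge_m - sign_lateral_m
    return int(vehicle_from_left_edge // lane_width_m) + 1
```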
2016 International Conference on Electronics, Information, and Communications (ICEIC) | 2016
Cheolyong Jang; Hyoungrae Kim; Eun-Soo Park; Hakil Kim
This paper proposes a traffic sign recognition algorithm that is unaffected by dataset bias. Color information is an important element in traffic sign recognition, and its usefulness can be affected by weather conditions, illumination, and the use of different cameras. To overcome this problem, our approach consists of traffic sign detection and classification. In the detection module, red and blue color enhancement with MSERs is performed to improve the extraction of candidate traffic sign regions, and a Bayesian classifier with a DtB (distance-to-border) feature is used to detect traffic signs. Detected traffic signs are classified by spatial transformer networks based on convolutional neural networks. Evaluated on public datasets, this work achieves competitive accuracy without training on those datasets.
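The red/blue enhancement feeding the MSER stage can be sketched as below, following the common normalized-channel formulation; the exact enhancement used in the paper may differ, and the scaling here is an assumption.

```python
import cv2
import numpy as np

def red_blue_msers(bgr: np.ndarray):
    """Extract MSER candidate regions from red- and blue-enhanced channels."""
    b, g, r = [c.astype(np.float32) for c in cv2.split(bgr)]
    s = b + g + r + 1e-6                       # per-pixel intensity sum
    red = np.clip(255.0 * np.maximum(0, np.minimum(r - b, r - g)) / s,
                  0, 255).astype(np.uint8)     # strong where red dominates
    blue = np.clip(255.0 * np.maximum(0, b - r) / s,
                   0, 255).astype(np.uint8)    # strong where blue dominates
    mser = cv2.MSER_create()
    regions = []
    for chan in (red, blue):
        pts, _ = mser.detectRegions(chan)      # stable regions per channel
        regions.extend(pts)
    return regions
```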
Archive | 2014
Jaehong Lee; Xuenan Cui; Hyoungrae Kim; Seungjun Lee; Hakil Kim
Elevator riding is an essential skill for a mobile service robot carrying out various tasks. This paper proposes a framework for robot navigation based on the sensor fusion of a laser range finder (LRF) and a vision camera to detect an elevator door and recognize its state. The state of the door is determined by calibrating and combining the LRF and camera data. The indicator, including the elevator's current floor number and a direction arrow, is recognized by a neural network classifier. The robot moves inside the elevator after verifying that the door is open and confirming the elevator's direction of travel. The robot confirms the target floor by an artificial landmark and localizes itself by detecting the marker. The proposed method is implemented on a robot platform, and the experimental results show that elevator riding is achieved.
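One simple way the door state could be read from the LRF, consistent with the fusion described above, is to test whether the returns in the door sector sit near the known door plane or well behind it; the thresholds below are assumptions, and the described system additionally verifies the state with the camera.

```python
import numpy as np

def door_state(sector_ranges: np.ndarray, door_dist_m: float,
               tol_m: float = 0.3) -> str:
    """Classify the elevator door state from LRF ranges aimed at the door."""
    valid = sector_ranges[np.isfinite(sector_ranges)]
    frac_behind = np.mean(valid > door_dist_m + tol_m)  # returns inside the car
    if frac_behind > 0.8:
        return "open"
    if frac_behind < 0.2:
        return "closed"
    return "moving"   # partially open: the door is opening or closing
```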
Vehicular Technology Conference | 2012
Rui Zhang; Eun-Soo Park; Yongji Yun; Hakil Kim; Hyoungrae Kim
Driver assistance is very important in supporting the driver during the driving process. Proposed in this paper is a robust method for collision avoidance on urban roads based on a low-beam pattern model, which is used to detect objects under night-time conditions with an embedded camera. The proposed method consists of two steps. First, the low-beam pattern model is computed through perspective transformation and nonlinear regression from the difference signal between the no-beam frame and the beam frame. Second, moving objects are detected by differencing the real-time input video against the low-beam pattern model. Several night driving videos are used in this study, and the experimental results demonstrate the feasibility and effectiveness of the proposed method.
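The second, detection step reduces to differencing each incoming frame against the precomputed low-beam pattern model. A minimal sketch under assumed threshold and kernel values:

```python
import cv2
import numpy as np

def detect_night_objects(frame_gray: np.ndarray, beam_model: np.ndarray,
                         thresh: int = 30) -> np.ndarray:
    """Binary mask of objects that deviate from the low-beam pattern model."""
    diff = cv2.absdiff(frame_gray, beam_model)          # per-pixel deviation
    _, mask = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)
    # Morphological opening suppresses speckle noise before blob extraction.
    return cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
```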