Hakil Kim
Michigan State University
Publication
Featured research published by Hakil Kim.
international conference on pattern recognition | 2006
Choonwoo Ryu; Hakil Kim; Anil K. Jain
This paper proposes a minutiae-based template adaptation algorithm that can be applied after the fingerprint authentication process. The algorithm updates a template using a query fingerprint that has been successfully verified by the fingerprint matcher as a high-quality genuine input. It generates an updated minutiae set using not only the minutiae but also local fingerprint quality information, and it applies successive Bayesian estimation to evaluate the credibility of minutiae and their types. The proposed algorithm both updates the minutiae information already in the template and appends new minutiae from the query fingerprint. Preliminary experiments show an average 32.7% reduction in EER and an even larger improvement in matching accuracy at low false accept rates.
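A minimal sketch of the successive Bayesian credibility update described above, assuming a simple per-minutia belief and illustrative likelihoods tied to local quality; the `Minutia` fields and all numeric values are assumptions, not the paper's actual parameters.

```python
from dataclasses import dataclass

@dataclass
class Minutia:
    x: float
    y: float
    angle: float
    p_genuine: float = 0.5   # prior belief that this minutia is genuine

def bayesian_update(m: Minutia, observed: bool, quality: float) -> None:
    """Successive Bayesian update of a minutia's credibility.

    observed -- whether the verified query fingerprint contained a matching
                minutia at this location.
    quality  -- local fingerprint quality in [0, 1]; the likelihoods below
                are illustrative assumptions only.
    """
    # Likelihood of (re-)observing the minutia given it is genuine / spurious.
    p_obs_given_genuine = 0.5 + 0.4 * quality      # easier to see in good regions
    p_obs_given_spurious = 0.3 * (1.0 - quality)   # noise mostly in bad regions

    if observed:
        num = p_obs_given_genuine * m.p_genuine
        den = num + p_obs_given_spurious * (1.0 - m.p_genuine)
    else:
        num = (1.0 - p_obs_given_genuine) * m.p_genuine
        den = num + (1.0 - p_obs_given_spurious) * (1.0 - m.p_genuine)
    m.p_genuine = num / den

# Minutiae whose credibility stays low could be pruned from the template,
# while consistently re-observed query minutiae could be appended.
```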
international conference on biometrics | 2016
Eun-Soo Park; Weonjin Kim; Qiongxiu Li; Jungmin Kim; Hakil Kim
As a result of the growing use of commercial fingerprint authentication systems in mobile devices, detection of fingerprint spoofing has become increasingly important and widely adopted. This study proposes a fingerprint liveness-detection method based on convolutional neural network (CNN) features extracted from fingerprint patches. First, fingerprints are segmented, and data augmentation is performed to increase the size of the training data. Second, patch locations on the augmented fingerprint are determined from normal distributions fitted to the segmented area of the fingerprint image. Finally, a voting strategy over all patches decides whether the fingerprint is live or fake. Experimental results show that the proposed method achieves a 3.42% average classification error rate on the LivDet 2009 Identix sensor dataset.
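A hedged sketch of the patch-sampling and voting stages, assuming a trained per-patch classifier is available behind the placeholder `cnn_predict`; patch size, patch count, and the decision threshold are illustrative assumptions.

```python
import numpy as np

def sample_patch_centers(mask: np.ndarray, n_patches: int, rng=None):
    """Draw patch centers from normal distributions fitted to the
    segmented fingerprint area (rows/cols where mask > 0)."""
    rng = np.random.default_rng() if rng is None else rng
    ys, xs = np.nonzero(mask)
    centers = np.stack([rng.normal(ys.mean(), ys.std(), n_patches),
                        rng.normal(xs.mean(), xs.std(), n_patches)], axis=1)
    return centers.astype(int)

def vote_liveness(image, mask, cnn_predict, patch=32, n_patches=15):
    """Majority vote over per-patch scores; cnn_predict is a placeholder
    returning P(live) for a single patch."""
    h, w = image.shape[:2]
    votes = []
    for cy, cx in sample_patch_centers(mask, n_patches):
        y0 = int(np.clip(cy - patch // 2, 0, h - patch))
        x0 = int(np.clip(cx - patch // 2, 0, w - patch))
        votes.append(cnn_predict(image[y0:y0 + patch, x0:x0 + patch]) > 0.5)
    return "live" if np.mean(votes) > 0.5 else "fake"
```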
Journal of Institute of Control, Robotics and Systems | 2014
Hyoungrae Kim; Xuenan Cui; Jaehong Lee; Seungjun Lee; Hakil Kim
Abstract: This paper proposes a method of tracking an object for a person-following mobile robot by combining a monocular camera and a laser scanner, where each sensor compensates for the weaknesses of the other. For human-robot interaction, a mobile robot needs to maintain a set distance between itself and a moving person. The task consists of two parts: object tracking and person following. Object tracking combines particle filtering with online learning of shape features extracted from the image. Because a monocular camera easily loses track of a person due to its narrow field of view and sensitivity to illumination changes, it is used together with a laser scanner. After establishing the geometric relation between the differently oriented sensors, the proposed method demonstrates robust tracking and following of a person, with a success rate of 94.7% in indoor environments under varying lighting conditions, even when a moving object passes between the robot and the person.

Keywords: object tracking, person-following, mobile robot, camera-LRF extrinsic calibration

I. Introduction

Robot technology has recently been advancing in many fields. In particular, service robots and mobility-assistance robots are being developed as human-centered technologies. To collaborate with people, such robots must recognize not only indoor and outdoor environments but also human behavior. Moreover, from the standpoint of human-robot interaction, having the robot move while keeping a short distance to the person makes it easier to apply various forms of interaction.

Computer vision provides the robot's sense of sight. In particular, tracking a specific object in data acquired from a camera or other vision sensors plays a key role in enabling following, since the tracking result is passed to the controller; this requires highly accurate tracking. However, following with a mobile robot must cope with diverse lighting conditions (day and night, indoors and outdoors) and with changes in object color and shape, which causes many problems [1]. In computer vision, the causes of tracking failure are commonly divided, with respect to the camera, into intrinsic and extrinsic factors. Intrinsic factors are changes in the target object itself, such as changes in pose and shape; extrinsic factors are phenomena other than changes in the target, such as camera shake, viewpoint changes, illumination changes, cluttered backgrounds, and occlusion by other objects. To overcome the performance degradation caused by these many intrinsic and extrinsic factors, much research has been devoted to tracking objects robustly under widely varying external conditions [1,2].

One existing approach to person following attaches a light-emitting device to the person in order to locate them [3,4]. Because this requires artificial devices, surrounding infrastructure, and prior calibration, its applicability in real environments is limited. In [5,6], the target's face is detected with a camera and then followed, but face detection forces the person to look directly at the robot, which can be very inconvenient for the user. Moreover, [5] is not robust to illumination and background changes, and [6], despite using a laser scanner (laser range finder, LRF), employs it merely to detect the person's legs. Another approach to person following uses a pre-stored map [7]; laser-scanner-based following then requires a map of the surrounding environment. This method is hard to apply in diverse environments and is limited in practice because the map must be updated whenever the operating environment changes. The method of [8], which is similar to this paper, uses a laser scanner to obtain the position of and distance to the person, but it does no more than keep a safe distance using the relative distance to the object. The object tracking based on IPCA (Incremental Principal Component Analysis) over shape features and color information, which forms the basis of this paper, …
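A minimal sketch of the particle-filtering part of the tracking stage, assuming the person's state is just a 2D position and hiding the shape-feature/laser observation model behind a placeholder `likelihood` callable; the motion model and particle count are illustrative assumptions, not the paper's design.

```python
import numpy as np

class ParticleTracker:
    """Minimal particle filter over the person's 2D position.
    likelihood(pos) is a placeholder for the shape-feature / laser score."""

    def __init__(self, init_pos, n=300, motion_std=5.0, rng=None):
        self.rng = np.random.default_rng() if rng is None else rng
        self.particles = np.asarray(init_pos, float) + self.rng.normal(0, motion_std, (n, 2))
        self.motion_std = motion_std

    def step(self, likelihood):
        # Predict: random-walk motion model.
        self.particles += self.rng.normal(0, self.motion_std, self.particles.shape)
        # Update: weight particles by the observation likelihood.
        w = np.array([likelihood(p) for p in self.particles])
        w = w / w.sum() if w.sum() > 0 else np.full(len(w), 1.0 / len(w))
        # Resample: simple multinomial resampling.
        idx = self.rng.choice(len(self.particles), len(self.particles), p=w)
        self.particles = self.particles[idx]
        return self.particles.mean(axis=0)   # state estimate fed to the controller
```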
Journal of Institute of Control, Robotics and Systems | 2013
Eun-Soo Park; Yongji Yun; Hyoungrae Kim; Jonghwan Lee; Hoyong Ki; Chulhee Lee; Hakil Kim
This paper proposes a method for classifying parallel and perpendicular (vertical) parking states and pillars for a parking assist system using ultrasonic sensors. Because a typical parking-space detection module receives only the compressed amplitude of the ultrasonic data, the data are difficult to analyze. To address this, the preprocessing stage performs a symmetric transform and noise removal. In the feature extraction process, four features are proposed: the standard deviation of the distance, the reconstructed peak, the standard deviation of the reconstructed signal, and the sum of the pulse widths. A Gaussian fitting model is used to reconstruct saturated peak signals, and the discriminability of each feature is measured. To find the best combination of these features, a multi-class SVM and a subset generator are used for more accurate and robust classification. The proposed method achieves a 92% classification rate, demonstrating its applicability to parking-space detection modules.
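A hedged sketch of reconstructing a clipped (saturated) echo peak with a Gaussian model, one of the features listed above; it uses `scipy.optimize.curve_fit`, and the saturation handling and initial guess are illustrative assumptions rather than the paper's procedure.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(t, a, mu, sigma):
    return a * np.exp(-0.5 * ((t - mu) / sigma) ** 2)

def reconstruct_peak(signal: np.ndarray, saturation_level: float) -> float:
    """Fit a Gaussian to the unsaturated samples of an ultrasonic echo and
    return the reconstructed (unclipped) peak amplitude."""
    t = np.arange(len(signal))
    keep = signal < saturation_level              # drop clipped samples
    p0 = (signal.max(), t[np.argmax(signal)], 5.0)  # rough initial guess
    (a, mu, sigma), _ = curve_fit(gaussian, t[keep], signal[keep],
                                  p0=p0, maxfev=5000)
    return a                                      # reconstructed-peak feature
```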
Journal of Institute of Control, Robotics and Systems | 2009
Jun-Chul Kim; Young-Han Jung; Eun-Soo Park; Xuenan Cui; Hakil Kim; Uk-Youl Huh
This paper presents a fast feature extraction method for autonomous mobile robots that exploits parallel processing based on OpenMP, SSE (Streaming SIMD Extensions), and CUDA programming. In the first step, the CPU version of the algorithm is optimized and then parallelized; the parallel code is debugged to maintain the same level of performance, and both keypoint extraction and the computation of the keypoints' dominant orientations are parallelized. After extraction, the descriptor is built in parallel using SSE instructions. A GPU version based on SIFT is also implemented with CUDA. The GPU-parallel descriptor achieves a speed-up of up to five times over the CPU-parallel descriptor, but its overall performance is lower than that of the CPU version. The CPU version itself is four and a half times faster than the original SIFT while maintaining robust performance.
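The paper's speed-ups come from OpenMP, SSE, and CUDA in native code; as a loose analogue only, the sketch below parallelizes SIFT description over horizontal image bands with Python's `multiprocessing` and OpenCV (`cv2.SIFT_create`, available in OpenCV 4.4+). The tiling scheme, band overlap, and worker count are assumptions, and keypoints duplicated in the overlap regions are not deduplicated.

```python
import cv2
import numpy as np
from multiprocessing import Pool

def describe_band(args):
    """Worker: detect SIFT keypoints and descriptors on one image band."""
    band, y_off = args
    sift = cv2.SIFT_create()                   # created per-process (not picklable)
    kps, desc = sift.detectAndCompute(band, None)
    pts = np.array([(k.pt[0], k.pt[1] + y_off) for k in kps])  # shift back to image coords
    return pts, desc

def parallel_sift(gray, n_workers=4, overlap=16):
    """Split the image into horizontal bands and describe them in parallel."""
    h = gray.shape[0]
    band = h // n_workers
    jobs = []
    for i in range(n_workers):
        y0 = max(0, i * band - overlap)
        y1 = min(h, (i + 1) * band + overlap)
        jobs.append((gray[y0:y1], y0))
    with Pool(n_workers) as pool:
        results = pool.map(describe_band, jobs)
    return results   # list of (keypoint coords, descriptors) per band
```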
Journal of Institute of Control, Robotics and Systems | 2014
Chungsu Lee; Eun-Soo Park; Jonghwan Lee; Jong-Hee Kim; Hakil Kim
This paper proposes a statistical regression method for classifying pillars and vehicles in a parking area using a single ultrasonic sensor. The ultrasonic sensor provides three types of information, the TOF (time of flight), the peak, and the width of a pulse, from which 67 features are extracted through segmentation and data preprocessing. Classification with a multi-class SVM and with multinomial logistic regression is applied to the extracted feature set, achieving accuracies of 85% and 89.67%, respectively, on real-world data. The experimental results show that the proposed feature extraction and classification scheme is applicable to object classification with an ultrasonic sensor.
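A minimal sketch of the multinomial logistic regression step with scikit-learn, assuming a 67-dimensional feature matrix is already extracted; the placeholder arrays below are random stand-ins for illustration, not the paper's data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# X: (n_samples, 67) features derived from TOF, peak and pulse width.
# y: object labels such as "pillar" / "vehicle".  Random placeholders here.
rng = np.random.default_rng(0)
X = rng.random((300, 67))
y = rng.choice(["pillar", "vehicle"], size=300)

# Softmax (multinomial) logistic regression with the default lbfgs solver,
# preceded by feature standardization.
clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
print(cross_val_score(clf, X, y, cv=5).mean())
```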
Journal of Institute of Control, Robotics and Systems | 2012
Yong-Han Jung; Eun-Soo Park; Hyung-Ho Lee; De-chang Wang; Uk-Youl Huh; Hakil Kim
This paper proposes a moving-object detection algorithm for active camera systems that can be applied to mobile robots and intelligent surveillance systems. Most moving-object detection algorithms assume a stationary camera: they target either fixed surveillance systems that do not consider background motion or robot tracking systems that track pre-learned objects. Unlike a stationary camera system, an active camera system makes it difficult to extract the moving object because of the errors introduced by the camera's own motion. To overcome this problem, the camera motion is compensated using SURF and a pseudo-perspective model, and the moving object is then extracted efficiently using a stochastic label-cluster transport model. This approach can detect moving objects because it minimizes the effect of background motion. Experiments show that it is robust and effective for moving-object detection with an active camera.
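A hedged sketch of the camera-motion-compensation idea: match features between consecutive frames, estimate the global motion, warp the previous frame, and difference. ORB is substituted for SURF (SURF is patented and absent from default OpenCV builds), and an affine model stands in for the pseudo-perspective model; the later label-cluster extraction step is not reproduced.

```python
import cv2
import numpy as np

def compensate_and_diff(prev_gray, curr_gray, diff_thresh=30):
    """Warp the previous frame onto the current one using feature matches,
    then difference the frames to expose independently moving regions."""
    orb = cv2.ORB_create(1000)
    kp1, d1 = orb.detectAndCompute(prev_gray, None)
    kp2, d2 = orb.detectAndCompute(curr_gray, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)

    src = np.float32([kp1[m.queryIdx].pt for m in matches])
    dst = np.float32([kp2[m.trainIdx].pt for m in matches])
    M, _ = cv2.estimateAffine2D(src, dst, method=cv2.RANSAC)   # global camera motion

    h, w = curr_gray.shape
    warped_prev = cv2.warpAffine(prev_gray, M, (w, h))
    moving = cv2.absdiff(curr_gray, warped_prev) > diff_thresh  # residual motion
    return moving.astype(np.uint8) * 255
```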
Journal of Institute of Control, Robotics and Systems | 2010
Jun-Chul Kim; Xuenan Cui; Eun-Soo Park; Hyohoon Choi; Hakil Kim
This paper presents an image alignment algorithm based on SIFT for AOI (Automatic Optical Inspection) applications. Since the correspondences produced by the SIFT descriptor contain many false matches for alignment, the matches are filtered and classified by five measures collectively called the CCFMR (Cascade Classifier for False Matching Reduction). After the false matches are removed, the rotation and translation are estimated by a point selection method. Experimental results show that the proposed method produces fewer matching failures than the commercial software MIL 8.0; on data sets from well-controlled environments (such as an AOI system), the advantage is more than twofold. The rotation and translation estimates are more robust than MIL's on the noisy data sets, while the errors are higher on the rotation-variation data sets, although the results remain meaningful for a practical system. In addition, the computation time of the proposed method is four times shorter than that of MIL, which increases linearly with noise.
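A sketch of SIFT-based alignment along these lines, assuming Lowe's ratio test plus RANSAC as a simple stand-in for the CCFMR cascade and the point-selection method; it recovers rotation and translation with `cv2.estimateAffinePartial2D`.

```python
import cv2
import numpy as np

def align_sift(template_gray, inspected_gray, ratio=0.75):
    """Estimate the rotation/translation aligning an inspected image to a template."""
    sift = cv2.SIFT_create()
    kp1, d1 = sift.detectAndCompute(template_gray, None)
    kp2, d2 = sift.detectAndCompute(inspected_gray, None)

    # Lowe's ratio test: a simple stand-in for the CCFMR false-match reduction.
    knn = cv2.BFMatcher().knnMatch(d1, d2, k=2)
    good = [m for m, n in knn if m.distance < ratio * n.distance]

    src = np.float32([kp1[m.queryIdx].pt for m in good])
    dst = np.float32([kp2[m.trainIdx].pt for m in good])
    M, _ = cv2.estimateAffinePartial2D(src, dst, method=cv2.RANSAC)

    angle = np.degrees(np.arctan2(M[1, 0], M[0, 0]))   # rotation estimate (deg)
    return angle, (M[0, 2], M[1, 2])                   # rotation, translation
```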
Journal of Institute of Control, Robotics and Systems | 2009
Xuenan Cui; Eun-Soo Park; Jun-Chul Kim; Hakil Kim
This paper presents a Gabor texture feature extraction method for classifying discolored metal pad images using a GPU (Graphics Processing Unit). The proposed algorithm extracts texture information with Gabor filters and constructs a pattern map from the extracted information. Finally, golden pad images are classified using feature vectors extracted from the constructed pattern map. To evaluate the performance of the GPU-based Gabor texture feature extraction, the algorithm was also run sequentially on the CPU and in parallel on the CPU using OpenMP. In addition, the GPU version was implemented using both global memory and shared memory. The experimental results demonstrate that the shared-memory GPU implementation provides the best performance. To assess the effectiveness of the extracted Gabor texture features, validation was conducted on a database of 20 metal pad images, with no misclassifications.
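A compact CPU-side sketch of Gabor texture features with OpenCV, as a stand-in for the GPU pattern-map pipeline described above; the filter-bank parameters (orientations, kernel size, sigma, wavelength) are illustrative assumptions.

```python
import cv2
import numpy as np

def gabor_features(gray, n_orient=6, ksize=21, sigma=4.0, lambd=10.0, gamma=0.5):
    """Mean/std of Gabor filter responses over a bank of orientations,
    concatenated into a texture feature vector."""
    feats = []
    for i in range(n_orient):
        theta = i * np.pi / n_orient
        kern = cv2.getGaborKernel((ksize, ksize), sigma, theta, lambd, gamma)
        resp = cv2.filter2D(gray.astype(np.float32), cv2.CV_32F, kern)
        feats.extend([resp.mean(), resp.std()])
    return np.array(feats)
```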
Sensors | 2018
Thi Hai Binh Nguyen; Eun-Soo Park; Xuenan Cui; Van Huan Nguyen; Hakil Kim
The rapid growth of applications based on fingerprint authentication makes presentation attack detection, i.e., the detection of fake fingerprints, a crucial problem. There have been numerous attempts to deal with this problem; however, existing algorithms involve a significant trade-off between accuracy and computational complexity. This paper proposes a presentation attack detection method using Convolutional Neural Networks (CNN), named fPADnet (fingerprint Presentation Attack Detection network), which consists of Fire and Gram-K modules. The Fire modules of fPADnet follow the structure of the SqueezeNet Fire module. The Gram-K modules, which are derived from the Gram matrix, extract texture information, since texture provides useful features for distinguishing real from fake fingerprints. Combining Fire and Gram-K modules results in a compact and efficient network for fake fingerprint detection. Experimental results on three public databases, LivDet 2011, 2013, and 2015, show that fPADnet achieves an average detection error rate of 2.61%, comparable to state-of-the-art accuracy, while the network size and processing time are significantly reduced.
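A minimal PyTorch sketch of a Gram-matrix texture descriptor computed from CNN feature maps, in the spirit of the Gram-K modules; the exact fPADnet module design (kernel sizes, placement, normalization) is not reproduced here and this layer is only an illustrative assumption.

```python
import torch
import torch.nn as nn

class GramLayer(nn.Module):
    """Compute a normalized Gram matrix of CNN feature maps as a texture
    descriptor, flattened into a feature vector per sample."""
    def forward(self, x):                       # x: (N, C, H, W)
        n, c, h, w = x.shape
        f = x.view(n, c, h * w)                 # channel-wise feature vectors
        gram = torch.bmm(f, f.transpose(1, 2)) / (c * h * w)   # (N, C, C)
        return gram.flatten(1)                  # (N, C*C) texture features

# Usage sketch: tex = GramLayer()(backbone(img)), where `backbone` is any
# convolutional feature extractor (a hypothetical stand-in here).
```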