Xuenan Cui
Sangmyung University
Publications
Featured research published by Xuenan Cui.
Lecture Notes in Computer Science | 2006
Shenghong Li; Lili Cui; Jong-Uk Choi; Xuenan Cui
In this paper, we present an audio copyright-protection scheme based on information hiding. A visually recognizable binary image and text information are embedded in the audio signal as the watermark (copyright) information. The cepstrum representation of audio is shown to be very robust to a wide range of attacks. We apply SMM (statistical mean manipulation) to embed the image watermark and address attacks such as lossy audio compression (e.g., MP3) and additive white Gaussian noise. The proposed scheme enables blind watermark detection.
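As a rough illustration of the statistical-mean-manipulation idea described above, the following Python sketch embeds one watermark bit per audio frame by shifting the mean of a band of cepstral coefficients and detects it blindly from the sign of that mean; the band, the strength delta, and the helper names are illustrative choices, not the paper's actual parameters.

import numpy as np

def real_cepstrum(frame):
    # Real cepstrum: inverse FFT of the log magnitude spectrum.
    return np.fft.ifft(np.log(np.abs(np.fft.fft(frame)) + 1e-12)).real

def embed_bit(ceps, bit, band=slice(20, 60), delta=0.05):
    # Statistical mean manipulation: push the band mean up for '1', down for '0'.
    marked = ceps.copy()
    marked[band] += delta if bit else -delta
    return marked

def detect_bit(ceps, band=slice(20, 60)):
    # Blind detection: only the sign of the band mean is needed, not the original audio.
    return int(ceps[band].mean() > 0)

rng = np.random.default_rng(0)
frame = 0.1 * rng.standard_normal(1024)            # stand-in audio frame
for bit in (0, 1):
    print(bit, detect_bit(embed_bit(real_cepstrum(frame), bit)))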
Journal of Institute of Control, Robotics and Systems | 2012
Hyung-Ho Lee; Xuenan Cui; Hyoungrae Kim; Seong-Wan Ma; Jaehong Lee; Hakil Kim
This paper proposes a robust object-tracking algorithm for mobile robots using object features and an on-line-learning-based particle filter. Mobile robots with a side-view camera face problems such as camera jitter, illumination change, object shape variation, and occlusion in various environments. To overcome these problems, a color histogram and a HOG descriptor are fused for an efficient representation of the object. A particle filter is used for robust object tracking, combined with the on-line learning method IPCA for non-linear environments. The validity of the proposed algorithm is demonstrated through experiments on databases acquired in various environments. The experiments show that the particle filter using combined color and shape information with on-line learning (92.4% accuracy) is more robust than the particle filter using only color information (71.1%) or the particle filter using shape and color information without on-line learning (90.3%).
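The fusion of color and shape cues into particle weights could look roughly like the sketch below, which scores each particle by a Bhattacharyya color-histogram distance and a simplified gradient-orientation histogram standing in for HOG; the weighting constants and helper names are assumptions, not the paper's settings.

import numpy as np

def color_hist(patch, bins=16):
    h, _ = np.histogram(patch, bins=bins, range=(0, 256))
    return h / (h.sum() + 1e-12)

def orientation_hist(patch, bins=9):
    # Simplified stand-in for a HOG descriptor: one global orientation histogram.
    gy, gx = np.gradient(patch.astype(float))
    h, _ = np.histogram(np.arctan2(gy, gx), bins=bins, range=(-np.pi, np.pi),
                        weights=np.hypot(gx, gy))
    return h / (h.sum() + 1e-12)

def bhattacharyya(p, q):
    return np.sqrt(max(0.0, 1.0 - np.sum(np.sqrt(p * q))))

def particle_weight(patch, ref_color, ref_shape, lam_c=20.0, lam_s=20.0):
    # Fused likelihood: both cues must agree for a particle to keep a high weight.
    d_c = bhattacharyya(color_hist(patch), ref_color)
    d_s = np.linalg.norm(orientation_hist(patch) - ref_shape)
    return np.exp(-lam_c * d_c**2) * np.exp(-lam_s * d_s**2)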
Journal of computing science and engineering | 2012
Naw Chit Too June; Xuenan Cui; Shengzhe Li; Hakil Kim; Kyu-Sung Kwack
Computed tomography (CT) images are widely used for the temporal evaluation and monitoring of disease progression. Follow-up examinations of CT scans of the same patient require a 3D registration technique. In this paper, an automatic and robust method is proposed for the rigid registration of 3D CT images. The proposed method involves two steps: first, the two CT volumes are aligned based on their principal axes; then, this alignment is refined by optimizing a voxel-wise similarity score. Normalized cross correlation (NCC) is used as the similarity metric, and a downhill simplex method is employed to find the optimal score. The performance of the algorithm is evaluated on phantom images and synthetic knee CT images. By extracting the initial transformation parameters from the principal axes of the binary volumes, the search space in the optimization step is reduced; thus, the overall registration time decreases without a deterioration in accuracy. The preliminary experimental results demonstrate that the proposed method can be applied to rigid registration of real patient images.
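A minimal sketch of the two-step scheme, assuming NumPy/SciPy: the principal axes of the binary volumes give a coarse initialization, and a Nelder-Mead (downhill simplex) search then refines the shift that maximizes NCC. The function names and the restriction of the refinement to translation are simplifications for illustration.

import numpy as np
from scipy import ndimage, optimize

def principal_axes(binary_volume):
    # Centroid and covariance eigenvectors of the occupied voxels give a coarse pose.
    pts = np.argwhere(binary_volume)
    centroid = pts.mean(axis=0)
    _, vecs = np.linalg.eigh(np.cov((pts - centroid).T))
    return centroid, vecs

def ncc(a, b):
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def refine_translation(fixed, moving, t0):
    # Downhill simplex maximizes NCC (minimizes its negative) over the 3D shift.
    cost = lambda t: -ncc(fixed, ndimage.shift(moving, t, order=1))
    return optimize.minimize(cost, t0, method="Nelder-Mead").x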
Journal of Institute of Control, Robotics and Systems | 2012
Seung-Wan Ma; Xuenan Cui; Hyung-Ho Lee; Hyung-Rae Kim; Jaehong Lee; Hakil Kim
Mobile service robots need to recognize elevator doors in order to move between floors in a building. This paper proposes a sensor-fusion approach using an LRF (Laser Range Finder) and a camera to solve this problem. From the laser scans of the LRF, line segments are extracted and elevator-door candidates are detected. Using the camera image, the door candidates are verified and the real elevator door is selected; outliers are filtered out during this verification. Then, the door state is detected by depth analysis within the door region. The proposed method uses extrinsic calibration to fuse the LRF and the camera, and it yields better elevator-door recognition than a method using the LRF alone.
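The LRF side of the pipeline might be sketched as follows: the scan is converted to Cartesian points, split at range discontinuities, and each segment is kept as a door candidate if its end-to-end width falls in a plausible door range. The thresholds, and the omission of the camera-based verification step, are assumptions made for this sketch, not the paper's values.

import numpy as np

def scan_to_xy(angles, ranges):
    return np.column_stack([ranges * np.cos(angles), ranges * np.sin(angles)])

def door_candidates(angles, ranges, jump=0.3, width_min=0.7, width_max=1.5):
    # Split the scan where consecutive ranges jump, then test each segment's width.
    pts = scan_to_xy(angles, ranges)
    breaks = np.where(np.abs(np.diff(ranges)) > jump)[0] + 1
    candidates = []
    for seg in np.split(pts, breaks):
        if len(seg) < 2:
            continue
        width = np.linalg.norm(seg[-1] - seg[0])
        if width_min <= width <= width_max:
            candidates.append(seg)          # verified later against the camera image
    return candidates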
Journal of Institute of Control, Robotics and Systems | 2014
Hyoungrae Kim; Xuenan Cui; Jaehong Lee; Seungjun Lee; Hakil Kim
Abstract: This paper proposes a method of tracking an object for a person-following mobile robot by combining a monocular camera and a laser scanner, where each sensor can supplement the weaknesses of the other. For human-robot interaction, a mobile robot needs to maintain a distance between a moving person and itself. Maintaining this distance consists of two parts: object tracking and person following. Object tracking consists of particle filtering and online learning using shape features extracted from the image. A monocular camera easily fails to track a person due to its narrow field of view and the influence of illumination changes, and has therefore been used together with a laser scanner. After constructing the geometric relation between the differently oriented sensors, the proposed method demonstrates its robustness in tracking and following a person, with a success rate of 94.7% in indoor environments with varying lighting conditions, even when a moving object is located between the robot and the person.

Keywords: object tracking, person-following, mobile robot, camera-LRF extrinsic calibration

I. Introduction

Robot technology has recently been advancing in many fields. In particular, service robots and mobility-assistance robots are being developed as human-centered technologies. To collaborate with people, these robots must recognize not only indoor/outdoor environments but also human behavior. Moreover, from the viewpoint of human-robot interaction, it is effective for a robot to move while keeping a close distance to the person, since this enables a variety of interactions.

Computer vision provides the robot's sense of sight. In particular, tracking a specific object in data acquired from a camera or other vision sensors plays a major role in enabling following, by passing that information to the controller, and this requires highly accurate tracking. However, following with a mobile robot takes place under varying illumination conditions (day/night, indoor/outdoor) and under changes in object color and shape, which causes many problems [1]. In computer vision, the causes of tracking failure are generally divided into factors internal and external to the camera. Internal factors are changes in the target object itself, such as pose and shape variation, while external factors are phenomena other than changes in the target, such as camera jitter, viewpoint changes, illumination changes, cluttered backgrounds, and occlusion by other objects. To address the performance degradation caused by these internal and external factors, much research has been conducted on tracking objects robustly under diverse, changing environments [1,2].

One approach used in previous person-following work is to attach a light-emitting device to the person in order to determine their position [3,4]. Because this requires artificial devices, surrounding infrastructure, and prior calibration, its applicability in real environments is limited. In [5,6], the target's face is detected with a camera for following, but face detection requires the person to look directly at the robot, which can be very inconvenient for the user. Moreover, [5] is not robust to illumination and background changes, and [6] uses a laser scanner (LRF) only to detect the person's legs. Another person-following approach relies on a pre-stored map [7]; laser-scanner-based following then requires a map of the surrounding environment, which is hard to apply in diverse environments and must be updated whenever the operating environment changes. The method of [8], which is similar to this paper, uses a laser scanner to obtain the position of and distance to the person, but it only uses the relative distance to the object to maintain a safe distance. The IPCA (Incremental Principal Component Analysis) object tracking based on shape features and color information, on which this paper builds, …
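The "geometric relation between the differently oriented sensors" amounts to projecting LRF points into the image once the extrinsic rotation R, translation t, and camera intrinsics K are known; the sketch below assumes those matrices come from a prior calibration and uses NumPy only.

import numpy as np

def project_lrf_to_image(scan_xy, R, t, K):
    # LRF points lie in the scanner plane (z = 0); move them into the camera
    # frame with the extrinsics, then apply the pinhole projection.
    pts = np.hstack([scan_xy, np.zeros((len(scan_xy), 1))])
    cam = pts @ R.T + t                      # (N, 3) points in camera coordinates
    uv = cam @ K.T
    return uv[:, :2] / uv[:, 2:3]            # pixel coordinates (u, v)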
Journal of Institute of Control, Robotics and Systems | 2009
Jun-Chul Kim; Young-Han Jung; Eun-Soo Park; Xuenan Cui; Hakil Kim; Uk-Youl Huh
This paper presents a fast feature-extraction method for autonomous mobile robots that exploits parallel processing based on OpenMP, SSE (Streaming SIMD Extensions), and CUDA programming. In the CPU version, the algorithms and code are first optimized and then parallelized; the parallel algorithms are debugged to maintain the same level of performance, and the processes of extracting keypoints and computing the dominant orientation of each keypoint are parallelized. After extraction, the descriptor is computed in parallel using SSE instructions. A GPU version based on SIFT is also implemented with CUDA. The GPU-parallel descriptor achieves a speed-up of up to five times over the CPU-parallel descriptor but shows lower performance than the CPU version. The CPU version is itself four and a half times faster than the original SIFT while maintaining robust performance.
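The paper's OpenMP/SSE/CUDA implementation is in C/C++ and is not reproduced here; purely as an analogue of the per-keypoint parallelization, the Python sketch below farms a toy gradient-orientation descriptor out over worker processes. The descriptor is a stand-in, not SIFT itself, and all names are illustrative.

import numpy as np
from concurrent.futures import ProcessPoolExecutor

def patch_descriptor(patch):
    # Toy stand-in for a SIFT descriptor: one 8-bin gradient-orientation histogram.
    gy, gx = np.gradient(patch.astype(float))
    hist, _ = np.histogram(np.arctan2(gy, gx), bins=8, range=(-np.pi, np.pi),
                           weights=np.hypot(gx, gy))
    return hist / (np.linalg.norm(hist) + 1e-12)

def describe_keypoints(image, keypoints, half=8, workers=4):
    # Each keypoint patch is independent, so descriptor computation parallelizes trivially.
    patches = [image[y - half:y + half, x - half:x + half] for (y, x) in keypoints]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(patch_descriptor, patches))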
Journal of Institute of Control, Robotics and Systems | 2010
Jun-Chul Kim; Xuenan Cui; Eun-Soo Park; Hyohoon Choi; Hakil Kim
This paper presents an image-alignment algorithm for AOI (Automatic Optical Inspection) based on SIFT. Since the correspondences obtained with the SIFT descriptor contain many wrong matches for alignment, the matched points are filtered and classified by five measures, called the CCFMR (Cascade Classifier for False Matching Reduction). After the false matches are reduced, rotation and translation are estimated by a point-selection method. Experimental results show that the proposed method produces fewer matching failures than the commercial software MIL 8.0, in particular fewer than half as many on data sets from a well-controlled environment (such as an AOI system). The rotation and translation estimates are more robust than MIL's on the noisy data sets, although the errors are higher on the rotation-variation data sets; even so, the results remain meaningful for practical systems. In addition, the computation time of the proposed method is four times shorter than that of MIL, whose runtime increases linearly with noise.
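After false matches are removed, rotation and translation can be recovered from the surviving correspondences in closed form; the sketch below uses the standard Kabsch/Procrustes solution and a simple median-residual filter as a stand-in, since the five CCFMR measures themselves are not spelled out in the abstract.

import numpy as np

def estimate_rigid_2d(src, dst):
    # Least-squares rotation + translation between matched 2D point sets (Kabsch).
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                 # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, dst_c - R @ src_c

def filter_matches(src, dst, factor=3.0):
    # Crude false-match rejection: drop pairs whose residual under a first
    # estimate exceeds a multiple of the median residual.
    R, t = estimate_rigid_2d(src, dst)
    resid = np.linalg.norm(src @ R.T + t - dst, axis=1)
    keep = resid <= factor * np.median(resid)
    return src[keep], dst[keep]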
Journal of Institute of Control, Robotics and Systems | 2009
Xuenan Cui; Eun-Soo Park; Jun-Chul Kim; Hakil Kim
This paper presents a Gabor texture-feature extraction method, implemented on a GPU (Graphics Processing Unit), for the classification of discolored metal pad images. The proposed algorithm extracts texture information using Gabor filters and constructs a pattern map from the extracted information. Finally, the golden pad images are classified using feature vectors extracted from the constructed pattern map. To evaluate the performance of the GPU-based Gabor feature extraction, the algorithm was also implemented as sequential processing and as parallel processing with OpenMP on the CPU, and on the GPU using both global memory and shared memory. The experimental results demonstrate that the implementation using GPU shared memory provides the best performance. To evaluate the effectiveness of the extracted Gabor texture features, a validation was conducted on a database of 20 metal pad images, and the experiment showed no misclassification.
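A CPU-only sketch of the Gabor feature extraction (the paper's GPU global/shared-memory kernels are not reproduced): a small filter bank is built with NumPy, applied via FFT convolution, and the mean and standard deviation of each response form the texture feature vector. Kernel size, wavelengths, and orientations here are illustrative.

import numpy as np
from scipy.signal import fftconvolve

def gabor_kernel(size, wavelength, theta, sigma, gamma=0.5):
    # Real (cosine) Gabor kernel oriented at angle theta.
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    return np.exp(-(xr**2 + (gamma * yr)**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / wavelength)

def gabor_features(image, thetas=(0, np.pi/4, np.pi/2, 3*np.pi/4), wavelengths=(4, 8)):
    # Mean and standard deviation of every filter response make up the feature vector.
    feats = []
    for w in wavelengths:
        for th in thetas:
            resp = fftconvolve(image.astype(float), gabor_kernel(15, w, th, sigma=w / 2), mode="same")
            feats.extend([resp.mean(), resp.std()])
    return np.asarray(feats)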
Sensors | 2018
Thi Hai Binh Nguyen; Eun-Soo Park; Xuenan Cui; Van Huan Nguyen; Hakil Kim
The rapid growth of fingerprint-authentication-based applications makes presentation attack detection, i.e., the detection of fake fingerprints, a crucial problem. There have been numerous attempts to deal with this problem; however, existing algorithms exhibit a significant trade-off between accuracy and computational complexity. This paper proposes a presentation attack detection method using Convolutional Neural Networks (CNNs), named fPADnet (fingerprint Presentation Attack Detection network), which consists of Fire and Gram-K modules. The Fire modules of fPADnet follow the structure of the SqueezeNet Fire module. The Gram-K modules, derived from the Gram matrix, extract texture information, since texture provides useful features for distinguishing between real and fake fingerprints. Combining Fire and Gram-K modules results in a compact and efficient network for fake fingerprint detection. Experimental results on three public databases, LivDet 2011, 2013, and 2015, show that fPADnet achieves an average detection error rate of 2.61%, which is comparable to state-of-the-art accuracy, while the network size and processing time are significantly reduced.
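A sketch in PyTorch of the two building blocks named above, assuming torch is available: a SqueezeNet-style Fire module and a channel-wise Gram matrix as a texture descriptor. How the actual Gram-K module parameterizes or reduces this matrix is not specified in the abstract, so the code is only indicative.

import torch
import torch.nn as nn
import torch.nn.functional as F

class Fire(nn.Module):
    # SqueezeNet-style Fire module: 1x1 squeeze followed by parallel 1x1/3x3 expands.
    def __init__(self, in_ch, squeeze_ch, expand_ch):
        super().__init__()
        self.squeeze = nn.Conv2d(in_ch, squeeze_ch, kernel_size=1)
        self.expand1 = nn.Conv2d(squeeze_ch, expand_ch, kernel_size=1)
        self.expand3 = nn.Conv2d(squeeze_ch, expand_ch, kernel_size=3, padding=1)

    def forward(self, x):
        x = F.relu(self.squeeze(x))
        return torch.cat([F.relu(self.expand1(x)), F.relu(self.expand3(x))], dim=1)

def gram_features(feature_maps):
    # Channel-wise Gram matrix: second-order statistics that summarize texture.
    b, c, h, w = feature_maps.shape
    f = feature_maps.view(b, c, h * w)
    return torch.bmm(f, f.transpose(1, 2)) / (c * h * w)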
Iet Image Processing | 2018
Mingjie Liu; Cheng-Bin Jin; Bin Yang; Xuenan Cui; Hakil Kim
In recent years, convolutional neural networks (CNNs) have been widely used for visual object tracking, especially in combination with correlation filters (CFs). However, increasingly complex CNN models introduce more useless information, which may decrease tracking performance. This study proposes an online feature-map selection method that removes noisy and irrelevant feature maps from different convolutional layers of the CNN, which reduces computational redundancy and improves tracking accuracy. Furthermore, a novel appearance-model update strategy, which exploits feedback from the peak value of the response maps, is developed to avoid model corruption. Finally, an extensive evaluation of the proposed method was conducted on the OTB-2013 and OTB-2015 datasets, comparing against different kinds of trackers, including deep-learning-based and CF-based trackers. The results demonstrate that the proposed method achieves highly satisfactory performance.
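The abstract does not give the exact selection criterion or update rule, so the NumPy sketch below only illustrates the general idea: rank feature maps by how much of their energy falls inside the target region, and skip the appearance-model update when the response peak drops well below its running average.

import numpy as np

def select_feature_maps(maps, target_mask, keep=32):
    # maps: (C, H, W) activations; target_mask: (H, W) binary mask of the target box.
    energy_in = (np.abs(maps) * target_mask).sum(axis=(1, 2))
    energy_all = np.abs(maps).sum(axis=(1, 2)) + 1e-12
    scores = energy_in / energy_all
    return np.argsort(scores)[::-1][:keep]     # indices of the most target-focused maps

def should_update(response, peak_history, ratio=0.6):
    # Gate the appearance-model update on the response peak: a sharp drop usually
    # signals occlusion or drift, so the model is left untouched in that frame.
    if not peak_history:
        return True
    return response.max() >= ratio * np.mean(peak_history)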