Kwang Ho An
KAIST
Publication
Featured research published by Kwang Ho An.
Intelligent Robots and Systems | 2008
Kwang Ho An; Myung Jin Chung
A human face provides a variety of communicative functions, such as identification, the perception of emotional expression, and lip-reading. For these reasons, many applications in robotics require tracking and recognizing a human face. A robust face recognition system should be able to deal with various changes in face images, such as pose, illumination, and expression, among which pose variation is the most difficult to deal with. Therefore, face registration (alignment) is the key to robust face recognition. If we can register face images into frontal views, the recognition task becomes much easier. To align a face image into a canonical frontal view, we need to know the pose of the human head. In this paper, we therefore propose a novel method for modeling a human head as a simple 3D ellipsoid, and we present 3D head tracking and pose estimation methods using the proposed ellipsoidal model. After recovering the full motion of the head, we can register face images with pose variations into stabilized view images, which are suitable for frontal face recognition. By doing so, simple and efficient frontal face recognition can be carried out in the stabilized texture map space instead of the original input image space. To evaluate the feasibility of the proposed approach, 3D head tracking experiments were carried out on 45 image sequences with ground truth from Boston University, and several face recognition experiments were conducted on our laboratory database and the Yale Face Database B using subspace-based face recognition methods such as PCA, PCA+LDA, and DCV.
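For readers who want a concrete picture of the registration step, the following is a minimal NumPy/OpenCV sketch, not the authors' code: it assumes a pinhole camera with intrinsics K, a known head pose (R, t), and illustrative ellipsoid semi-axes, and warps an input face image into a pose-stabilized texture map.

```python
# A minimal sketch (assumed values, not the paper's implementation) of
# registering a face image into a stabilized texture map on an ellipsoid.
import numpy as np
import cv2

def ellipsoid_texture_map(image, K, R, t, axes=(0.08, 0.11, 0.09),
                          tex_size=(128, 128)):
    """Back-project image pixels onto the ellipsoid surface and return a
    pose-normalized texture map (longitude x latitude grid)."""
    a, b, c = axes                                # semi-axes in meters (assumed)
    h, w = tex_size
    lon = np.linspace(-np.pi / 2, np.pi / 2, w)   # frontal hemisphere only
    lat = np.linspace(-np.pi / 2, np.pi / 2, h)
    lon, lat = np.meshgrid(lon, lat)
    # Point on the ellipsoid surface in the head coordinate frame.
    P = np.stack([a * np.cos(lat) * np.sin(lon),
                  b * np.sin(lat),
                  c * np.cos(lat) * np.cos(lon)], axis=-1)  # (h, w, 3)
    # Move to camera coordinates with the estimated head pose, then project.
    Pc = P @ R.T + t
    uvw = Pc @ K.T
    u = (uvw[..., 0] / uvw[..., 2]).astype(np.float32)
    v = (uvw[..., 1] / uvw[..., 2]).astype(np.float32)
    # Sample the input image at the projected locations.
    return cv2.remap(image, u, v, interpolation=cv2.INTER_LINEAR)
```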
IEEE Transactions on Consumer Electronics | 2009
Kwang Ho An; Myung Jin Chung
The future interactive TV will automatically provide user-personalized services for each viewer. To deliver such services, the interactive TV must recognize viewers and even their emotions or preferences. In this paper, we introduce a novel architecture for the future interactive TV and propose a real-time face analysis system that can detect and recognize human faces and their expressions, and thereby understand viewers' internal emotional states. The proposed face analysis system consists of three image processing modules: face detection, face recognition, and facial expression recognition. Face detection is employed to detect and track the people watching the TV program. Face recognition and facial expression recognition are employed to identify specific TV viewers and recognize their internal emotional states, which is necessary to enable personalized user interfaces and services. For a robust real-time face analysis system, we present the Ada-LDA learning algorithm, based on simple and efficient MspLBP features, which is suitable for multi-class pattern classification. Extensive experimental results show that the proposed face analysis system provides real-time performance with high recognition rates, operating at over 15 frames per second.
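As an illustration of the three-module architecture, here is a minimal per-frame pipeline skeleton. It is a sketch only: OpenCV's stock Haar cascade stands in for the paper's detector, and `identify` and `classify_expression` are hypothetical user-supplied classifiers where the Ada-LDA/MspLBP models would plug in.

```python
# A minimal per-frame pipeline skeleton; the cascade and the two callbacks
# are stand-ins, not the paper's Ada-LDA/MspLBP classifiers.
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def analyze_frame(frame, identify, classify_expression):
    """Run detection, identification, and expression recognition on one frame.
    `identify` and `classify_expression` are user-supplied classifiers."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    results = []
    for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.1, 5):
        face = cv2.resize(gray[y:y + h, x:x + w], (64, 64))
        results.append({
            "bbox": (x, y, w, h),
            "identity": identify(face),               # who is watching
            "expression": classify_expression(face),  # emotional state
        })
    return results
```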
Intelligent Robots and Systems | 2005
Sung Uk Jung; Do Hyoung Kim; Kwang Ho An; Myung Jin Chung
In this paper, we propose a method of selecting new types of rectangle features that are suitable for facial expression recognition. The basic concept is similar to Viola's approach to face detection. Instead of the previous Haar-like rectangle features, we choose rectangle features for facial expression recognition from among all possible rectangle types in a 3×3 matrix form using the AdaBoost algorithm. The facial expression recognition system built with the proposed rectangle features is also compared in performance to one built with the previous rectangle features. Both simulation and experimental results show that the proposed approach achieves better facial expression recognition performance.
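The following sketch illustrates the general mechanism, under the assumption of standard integral-image feature evaluation and decision-stump weak learners; it is not the paper's exact 3×3 feature set or training procedure. `F` holds precomputed feature responses (e.g., obtained via `rect_sum` over integral images).

```python
# A simplified AdaBoost feature-selection sketch over rectangle features,
# assuming binary labels y in {-1, +1}; thresholds are illustrative.
import numpy as np

def integral_image(img):
    return img.cumsum(0).cumsum(1)

def rect_sum(ii, x, y, w, h):
    """Pixel sum over [x, x+w) x [y, y+h) using the integral image ii."""
    A = ii[y - 1, x - 1] if x > 0 and y > 0 else 0
    B = ii[y - 1, x + w - 1] if y > 0 else 0
    C = ii[y + h - 1, x - 1] if x > 0 else 0
    return ii[y + h - 1, x + w - 1] - B - C + A

def adaboost_select(F, y, rounds=50):
    """F: (n_samples, n_features) feature responses; y: labels in {-1, +1}.
    Greedily pick the stump with the lowest weighted error each round."""
    n, m = F.shape
    w = np.full(n, 1.0 / n)
    chosen = []
    thresholds = np.median(F, axis=0)          # simplistic fixed thresholds
    for _ in range(rounds):
        errs = [np.sum(w[np.sign(F[:, j] - thresholds[j]) != y])
                for j in range(m)]
        j = int(np.argmin(errs))
        err = max(errs[j], 1e-10)
        alpha = 0.5 * np.log((1 - err) / err)  # weak classifier weight
        pred = np.sign(F[:, j] - thresholds[j])
        w *= np.exp(-alpha * y * pred)         # re-weight the training samples
        w /= w.sum()
        chosen.append((j, alpha))
    return chosen
```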
Intelligent Robots and Systems | 2009
Ji Hoon Joung; Kwang Ho An; Jung Won Kang; Myung Jin Chung; Wonpil Yu
In this paper, we propose a system that reconstructs the environment with both color and 3D information. We perform extrinsic calibration of a camera and an LRF (Laser Range Finder) to fuse the 3D information and color information of objects, and we formulate an equation to measure the quality of the calibration. We acquire 3D data by rotating the 2D LRF together with the camera, and use the ICP (Iterative Closest Point) algorithm to combine data acquired at different locations. SIFT (Scale Invariant Feature Transform) matching provides the initial estimate for the ICP algorithm; compared to odometry, it offers an accurate and stable initial estimate that is robust to motion changes. We also modify the ICP algorithm to use color information, which reduces its computation time.
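As a rough illustration of the alignment step, here is a minimal point-to-point ICP sketch (k-d tree correspondences plus the SVD-based closed-form rigid update), assuming the SIFT-derived pose is supplied as the initial (R, t); the paper's color-weighted modification is not shown.

```python
# A minimal point-to-point ICP sketch; the SIFT-based pose would be passed
# in as the initial R, t. Not the paper's color-modified variant.
import numpy as np
from scipy.spatial import cKDTree

def icp(src, dst, R=np.eye(3), t=np.zeros(3), iters=30):
    """Align src (N, 3) to dst (M, 3); returns the refined rotation and
    translation after fixed-count iterations."""
    tree = cKDTree(dst)
    for _ in range(iters):
        moved = src @ R.T + t
        _, idx = tree.query(moved)             # nearest-neighbor correspondences
        matched = dst[idx]
        # Closed-form rigid transform between matched point sets (Kabsch/SVD).
        mu_s, mu_d = moved.mean(0), matched.mean(0)
        H = (moved - mu_s).T @ (matched - mu_d)
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        dR = Vt.T @ D @ U.T
        R, t = dR @ R, dR @ (t - mu_s) + mu_d  # compose the incremental update
    return R, t
```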
Intelligent Robots and Systems | 2005
Kwang Ho An; Dong Hyun Yoo; Sung Uk Jung; Myung Jin Chung
Various face tracking algorithms have been proposed for tracking faces in video sequences. However, most of them have difficulty finding the initial position and size of a face automatically. In this paper, we present a fast and robust method for fully automatic multi-view face detection and tracking. Using a small number of critical rectangle features selected and trained by the AdaBoost learning algorithm, we can correctly detect the initial position, size, and view of a face. Once a face is reliably detected, we extract color distributions from the detected face and upper-body regions to build robust color models; each color model is constructed using k-means clustering and multiple Gaussian models. Fast and efficient multi-view face tracking is then executed using several critical features and a simple linear Kalman filter. The proposed algorithm is robust to rotation, partial occlusion, and scale changes against dynamic, unstructured backgrounds. In addition, it is computationally efficient and can therefore run in real time.
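To make the tracking step concrete, here is a minimal constant-velocity Kalman tracker for the face center using OpenCV's `cv2.KalmanFilter`; the state layout and noise covariances are illustrative assumptions, not the paper's tuned values.

```python
# A minimal linear Kalman tracker for the face center, assuming a
# constant-velocity state [x, y, vx, vy] and pixel measurements [x, y].
import numpy as np
import cv2

def make_face_kalman(dt=1.0):
    kf = cv2.KalmanFilter(4, 2)            # 4 state dims, 2 measured dims
    kf.transitionMatrix = np.array([[1, 0, dt, 0],
                                    [0, 1, 0, dt],
                                    [0, 0, 1,  0],
                                    [0, 0, 0,  1]], np.float32)
    kf.measurementMatrix = np.array([[1, 0, 0, 0],
                                     [0, 1, 0, 0]], np.float32)
    kf.processNoiseCov = np.eye(4, dtype=np.float32) * 1e-2     # assumed
    kf.measurementNoiseCov = np.eye(2, dtype=np.float32) * 1e-1  # assumed
    return kf

# Per frame: predict first, then correct with the detected face center.
#   kf.predict()
#   kf.correct(np.array([[cx], [cy]], np.float32))
```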
Intelligent Robots and Systems | 2006
Do Hyoung Kim; Sung Uk Jung; Kwang Ho An; Hui Sung Lee; Myung Jin Chung
Over the last decade, face analysis, e.g., face recognition, face detection, face tracking, and facial expression recognition, has been a lively and expanding research field. As computer-animated agents and robots bring a social dimension to human-computer interaction, interest in this field is increasing rapidly. In this paper, we introduce an artificial facial expression mimic system that can recognize human facial expressions and imitate them. We propose a classifier based on weak classifiers obtained from modified rectangular features to recognize human facial expressions in real time. Next, we introduce our robot, which is driven by a distributed control algorithm and can produce artificial facial expressions. Finally, experimental results on facial expression recognition and facial expression generation are presented to validate our artificial facial expression imitator.
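For concreteness, a classifier built from boosted weak classifiers decides by an alpha-weighted vote; the sketch below shows that evaluation step only, with `weak_classifiers` as a hypothetical list of trained (h, alpha) pairs.

```python
# A minimal sketch of evaluating a boosted classifier: the strong decision
# is the sign of the alpha-weighted vote of the selected weak classifiers.
import numpy as np

def strong_classify(x, weak_classifiers):
    """weak_classifiers: list of (h, alpha) where h(x) returns -1 or +1."""
    score = sum(alpha * h(x) for h, alpha in weak_classifiers)
    return np.sign(score)
```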
Robotics and Biomimetics | 2009
Hyun Chul Roh; Si Jong Kim; Kwang Ho An; Myung Jin Chung
In this paper, a new Digital Image Stabilization (DIS) system for humanoid eyes is presented. The DIS system is biologically inspired by the human Vestibulo-Ocular Reflex (VOR) and is based on a KLT tracker and an Inertial Measurement Unit (IMU). To estimate the motion between two consecutive image frames more accurately, corresponding points found by the KLT tracker are combined with IMU measurements: the initial motion estimated by the IMU is incorporated into the KLT tracker to improve the speed and accuracy of the tracking process. A Kalman filter is then applied to remove unwanted camera motion. Experimental results show that the proposed DIS system achieves high speed and accuracy under various conditions.
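A minimal sketch of the IMU-seeded tracking step follows; `predict_flow` is a hypothetical helper that warps the previous corner locations by the rotation reported by the IMU, and OpenCV's `OPTFLOW_USE_INITIAL_FLOW` flag lets that prediction seed the pyramidal KLT search.

```python
# A minimal sketch of IMU-seeded KLT tracking; `predict_flow` is assumed,
# standing in for the paper's IMU-based initial motion estimate.
import numpy as np
import cv2

def track_with_imu(prev_gray, cur_gray, prev_pts, predict_flow):
    init = predict_flow(prev_pts)          # IMU-based guess for new positions
    cur_pts, status, _ = cv2.calcOpticalFlowPyrLK(
        prev_gray, cur_gray, prev_pts, init.copy(),
        flags=cv2.OPTFLOW_USE_INITIAL_FLOW,
        winSize=(21, 21), maxLevel=3)
    good = status.ravel() == 1
    # Similarity motion between frames from the surviving tracks; the
    # smoothed (Kalman-filtered) version of this motion drives stabilization.
    M, _ = cv2.estimateAffinePartial2D(prev_pts[good], cur_pts[good])
    return M, cur_pts[good]
```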
International Conference on Robotics and Automation | 2010
Kwang Ho An; Myung Jin Chung
This paper presents a novel approach to multi-class pattern classification, namely face detection and facial expression recognition, based on discriminative multi-scale and multi-position Local Binary Pattern (MspLBP) features selected by a boosting technique called the AdaBoost+LDA (Ada-LDA) method. From a large pool of MspLBP features within a face image, the most discriminative features are selected under the AdaBoost framework, with weak learners trained by one of two alternative LDA methods depending on the singularity of the within-class scatter matrix. To verify the feasibility of our approach, we performed two extensive experiments on well-known face databases covering face detection and facial expression recognition. First, face detection, a typical two-class pattern classification problem, was carried out on the MIT-CBCL and MIT+CMU face test sets. Second, facial expression recognition, a typical multi-class pattern classification problem, was performed on the JAFFE face database. Given the same number of features, the proposed face detector achieves a detection rate over 25% higher than the well-known Viola detector at a false positive rate of 10%, and provides real-time operation at over 10 frames per second. For facial expression recognition, our approach achieves recognition rates at least 21% higher than other linear subspace-based methods such as PCA, DCV, and PCA+LDA. Overall, the proposed approach provides a considerable performance improvement with only a small number of discriminative MspLBP features in multi-class pattern classification problems.
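To illustrate the LDA side of the weak learner, here is a minimal two-class Fisher-direction sketch with an explicit singularity check; the regularized fallback used here merely stands in for the paper's two alternative LDA formulations.

```python
# A minimal LDA sketch with a singularity check on the within-class scatter;
# the regularization fallback is an assumption, not the paper's exact method.
import numpy as np

def lda_direction(X, y, eps=1e-6):
    """Two-class Fisher direction for samples X (n, d) with labels y in {0, 1}."""
    X0, X1 = X[y == 0], X[y == 1]
    m0, m1 = X0.mean(0), X1.mean(0)
    # Within-class scatter as the sum of the per-class scatter matrices.
    Sw = np.cov(X0.T, bias=True) * len(X0) + np.cov(X1.T, bias=True) * len(X1)
    Sw = np.atleast_2d(Sw)
    if np.linalg.cond(Sw) > 1.0 / eps:       # (near-)singular scatter matrix
        Sw = Sw + eps * np.trace(Sw) * np.eye(Sw.shape[0])
    w = np.linalg.solve(Sw, m1 - m0)         # maximizes Fisher's criterion
    return w / np.linalg.norm(w)
```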
Robot and Human Interactive Communication | 2008
Kwang Ho An; Myung Jin Chung
A human face provides a variety of communicative functions, such as identification, the perception of emotional expression, and lip-reading. Many applications in robotics require recognizing a human face. A face recognition system should be able to deal with various changes in face images, such as pose, illumination, and expression, among which pose variation is the most difficult to deal with. For this reason, face registration is the key to face recognition. If we can register face images into frontal views, the recognition task becomes much easier. To align a face image into a canonical frontal view, we need to know the pose of the human head. A human head can be approximately modeled as a 3D texture-mapped ellipsoid, so any face image can be considered a 2D projection of a 3D ellipsoid at a certain pose. In this paper, both training and test face images are back-projected onto the surface of a 3D ellipsoid according to their estimated poses and registered into canonical frontal-view images. Simple and efficient frontal face recognition can then be carried out in the texture map domain instead of the original input image domain. To evaluate the feasibility of the proposed approach, several recognition experiments are conducted using subspace-based face recognition methods such as PCA, PCA+LDA, and DCV. Experiments on our laboratory database and the Yale Face Database B show that the proposed algorithm performs well even under large out-of-plane rotations.
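As an illustration of the recognition step in the texture map domain, here is a minimal eigenface-style PCA sketch over flattened, pose-registered texture maps; PCA+LDA or DCV would replace the plain PCA subspace used here, and the nearest-neighbor rule is an assumption.

```python
# A minimal PCA (eigenface-style) recognition sketch in the texture map
# domain; the nearest-neighbor decision rule is assumed for illustration.
import numpy as np

def pca_fit(X, k=50):
    """X: (n_samples, n_pixels) flattened registered texture maps.
    Returns the sample mean and the top-k principal directions."""
    mean = X.mean(0)
    U, S, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:k]

def pca_identify(x, mean, basis, gallery_codes, gallery_labels):
    code = basis @ (x - mean)                 # project probe into the subspace
    d = np.linalg.norm(gallery_codes - code, axis=1)
    return gallery_labels[int(np.argmin(d))]  # nearest neighbor in the subspace
```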
Robotics and Biomimetics | 2009
Kwang Ho An; So Hee Park; Yun Su Chung; Ki Young Moon; Myung Jin Chung
This paper presents a novel approach for face detection based on discriminative MspLBP features selected by a boosting technique called the Ada-LDA method. By scanning the face image with a scalable sub-window, many sub-regions are obtained, from which MspLBP features are extracted to describe the local structures of the face image. From this large pool of MspLBP features, the most discriminative ones, trained by one of two alternative LDA methods depending on the singularity of the within-class scatter matrix of the training samples, are selected under the AdaBoost framework. To verify the feasibility of our face detector, we performed extensive experiments on the MIT-CBCL and MIT+CMU face test sets. Given the same number of features, the proposed face detector achieves a detection rate 25% higher than the well-known Viola detector at a false positive rate of 10%. These challenging experiments show that our face detector achieves promising detection performance with only a small number of discriminative MspLBP features, and it runs in real time at over 16 frames per second.
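The following is a rough sketch of multi-scale, multi-position LBP responses: the basic 8-bit LBP code is computed at several cell scales across the scanning sub-window. It illustrates the idea only; the paper's exact MspLBP definition may differ.

```python
# A rough multi-scale, multi-position LBP sketch; the scale set and pooling
# scheme are assumptions, not the paper's exact MspLBP definition.
import numpy as np

def lbp_codes(img):
    """8-neighbor LBP code for every interior pixel of a 2-D uint8 array."""
    c = img[1:-1, 1:-1]
    nbrs = [img[:-2, :-2], img[:-2, 1:-1], img[:-2, 2:], img[1:-1, 2:],
            img[2:, 2:], img[2:, 1:-1], img[2:, :-2], img[1:-1, :-2]]
    code = np.zeros_like(c, dtype=np.uint8)
    for bit, n in enumerate(nbrs):
        code |= ((n >= c).astype(np.uint8) << bit)  # threshold against center
    return code

def msp_lbp_features(window, scales=(1, 2, 3)):
    """Average-pool the sub-window at each scale, then collect LBP codes at
    every position; concatenation gives one feature pool per sub-window."""
    feats = []
    for s in scales:
        h, w = (window.shape[0] // s) * s, (window.shape[1] // s) * s
        pooled = window[:h, :w].reshape(h // s, s, w // s, s).mean((1, 3))
        feats.append(lbp_codes(pooled.astype(np.uint8)).ravel())
    return np.concatenate(feats)
```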