Khoa Luu
Carnegie Mellon University
Publications
Featured research published by Khoa Luu.
International Joint Conference on Biometrics | 2011
Felix Juefei-Xu; Khoa Luu; Marios Savvides; Tien D. Bui; Ching Y. Suen
In this paper, we present a novel framework that utilizes the periocular region for age-invariant face recognition. To obtain age-invariant features, we first perform preprocessing steps such as pose correction, illumination normalization, and periocular region normalization. We then apply robust Walsh-Hadamard transform encoded local binary patterns (WLBP) to the preprocessed periocular region only. We find that the WLBP feature of the periocular region remains consistent for the same individual across ages. Finally, we use unsupervised discriminant projection (UDP) to build subspaces on the WLBP-featured periocular images and achieve a 100% rank-1 identification rate and a 98% verification rate at 0.1% false accept rate on the entire FG-NET database. Compared to published results, our proposed approach yields the best recognition and identification results.
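As a hedged illustration of the WLBP idea (not the authors' implementation), the sketch below low-pass filters 8×8 tiles in the Walsh-Hadamard basis, then computes plain 8-neighbor local binary pattern codes and a 256-bin histogram; the block size, number of retained coefficients, and the random stand-in for a periocular crop are all assumptions.

```python
import numpy as np
from scipy.linalg import hadamard

def walsh_filter(img, block=8, keep=4):
    """Low-pass filter each block x block tile in the 2D Walsh-Hadamard
    basis, keeping only the first `keep` rows/columns of coefficients
    (illustrative stand-in for the paper's Walsh-Hadamard encoding)."""
    H = hadamard(block).astype(float)
    h, w = img.shape
    out = np.zeros_like(img, dtype=float)
    for i in range(0, h - h % block, block):
        for j in range(0, w - w % block, block):
            B = img[i:i+block, j:j+block].astype(float)
            C = H @ B @ H / block            # forward transform (H @ H = block * I)
            C[keep:, :] = 0.0                # drop high-frequency coefficients
            C[:, keep:] = 0.0
            out[i:i+block, j:j+block] = H @ C @ H / block  # inverse transform
    return out

def lbp_histogram(img):
    """8-neighbor local binary pattern codes for interior pixels,
    returned as a 256-bin histogram."""
    c = img[1:-1, 1:-1]
    shifts = [(-1,-1), (-1,0), (-1,1), (0,1), (1,1), (1,0), (1,-1), (0,-1)]
    code = np.zeros(c.shape, dtype=np.int64)
    for bit, (dy, dx) in enumerate(shifts):
        nb = img[1+dy:img.shape[0]-1+dy, 1+dx:img.shape[1]-1+dx]
        code |= (nb >= c).astype(np.int64) << bit
    return np.bincount(code.ravel(), minlength=256)

rng = np.random.default_rng(0)
eye_patch = rng.random((32, 32))     # stand-in for a normalized periocular crop
feat = lbp_histogram(walsh_filter(eye_patch))
```

In the actual pipeline, histograms like `feat` would then be projected by UDP into a discriminative subspace before matching.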
international conference on biometrics theory applications and systems | 2009
Khoa Luu; Karl Ricanek; Tien D. Bui; Ching Y. Suen
In this paper, we introduce a novel age estimation technique that combines Active Appearance Models (AAMs) and Support Vector Machines (SVMs) to dramatically improve the accuracy of age estimation over the current state-of-the-art techniques. In this method, characteristics of the input face images are interpreted as feature vectors by AAMs, which are used to discriminate between childhood and adulthood prior to age estimation. Faces classified as adults are passed to the adult age-determination function and the others are passed to the child age-determination function. Compared to published results, this method yields the most accurate recognition rates, both in overall mean absolute error (MAE) and in mean absolute error for the two periods of human development: childhood and adulthood.
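A minimal numpy sketch of the two-stage idea follows: classify child versus adult first, then apply a group-specific age function. Plain least-squares linear models stand in for the paper's AAM features and SVMs, and all data are synthetic, so this only illustrates the control flow, not the method itself.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-ins for AAM feature vectors and ground-truth ages.
n, d = 200, 10
X = rng.normal(size=(n, d))
age = np.clip(8 + 30 * (X[:, 0] > 0) + rng.normal(scale=3, size=n), 1, 70)
is_adult = age >= 18

def fit_linear(X, y):
    A = np.hstack([X, np.ones((len(X), 1))])      # add bias column
    w, *_ = np.linalg.lstsq(A, y, rcond=None)
    return w

# Stage 1: child/adult classifier (linear score thresholded at 0.5,
# standing in for the paper's SVM).
w_cls = fit_linear(X, is_adult.astype(float))

# Stage 2: one age-determination function per growth period.
w_child = fit_linear(X[~is_adult], age[~is_adult])
w_adult = fit_linear(X[is_adult], age[is_adult])

def predict_age(x):
    a = np.append(x, 1.0)
    w = w_adult if a @ w_cls > 0.5 else w_child
    return float(a @ w)

pred = np.array([predict_age(x) for x in X])
mae = np.abs(pred - age).mean()
```

The design point is that each regressor only has to model one growth period, which is what drives the MAE improvement the abstract reports.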
International Joint Conference on Biometrics | 2011
Khoa Luu; Keshav Seshadri; Marios Savvides; Tien D. Bui; Ching Y. Suen
In this paper we propose a novel Contourlet Appearance Model (CAM) that is both more accurate and faster at localizing facial landmarks than Active Appearance Models (AAMs). Our CAM can extract not only holistic texture information, as AAMs do, but also local texture information using the Nonsubsampled Contourlet Transform (NSCT). We demonstrate the efficiency of our method by applying it to the problem of facial age estimation. Compared to previously published age estimation techniques, our approach yields more accurate results when tested on various face aging databases.
IEEE Transactions on Image Processing | 2015
Felix Juefei-Xu; Khoa Luu; Marios Savvides
In this paper, we investigate a single-sample periocular-based alignment-robust face recognition technique that is pose-tolerant under unconstrained face matching scenarios. Our Spartans framework starts from a single sample per subject class and generates new face images under a wide range of 3D rotations using the 3D generic elastic model, which is both accurate and computationally economical. We then focus on the periocular region, where the most stable and discriminative features of human faces are retained, and marginalize out the regions beyond it, since they are more susceptible to expression variations and occlusions. A novel facial descriptor, high-dimensional Walsh local binary patterns, is uniformly sampled on facial images with robustness toward alignment. During the learning stage, subject-dependent advanced correlation filters are learned for pose-tolerant non-linear subspace modeling in kernel feature space, followed by a coupled max-pooling mechanism that further improves performance. Given any unconstrained unseen face image, Spartans produces a highly discriminative matching score, thus achieving a high verification rate. We evaluated our method on the challenging Labeled Faces in the Wild database and solidly outperformed the state-of-the-art algorithms under four evaluation protocols, with a high accuracy of 89.69%, a top score among image-restricted and unsupervised protocols. The advantage of Spartans is also proven on the Face Recognition Grand Challenge and Multi-PIE databases. In addition, our learning method based on advanced correlation filters is much more effective at learning subject-dependent pose-tolerant subspaces than many well-established subspace methods, in both the linear and non-linear cases.
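The paper's filters are kernel-based advanced correlation filters; as a simpler, hedged illustration of the same family, here is the textbook linear MACE (Minimum Average Correlation Energy) filter in numpy, which constrains each training image's correlation origin to a fixed value. This is not the Spartans formulation, only the classical building block.

```python
import numpy as np

def mace_filter(images):
    """Textbook MACE filter in the frequency domain:
        h = D^-1 X (X^H D^-1 X)^-1 u,
    where the columns of X are the 2D FFTs of the training images, D is the
    diagonal average power spectrum, and the constraint X^H h = u pins each
    training image's correlation-origin response to 1."""
    X = np.stack([np.fft.fft2(im).ravel() for im in images], axis=1)  # d x N
    d_diag = (np.abs(X) ** 2).mean(axis=1)                            # diagonal of D
    Dinv_X = X / d_diag[:, None]
    u = np.ones(X.shape[1], dtype=complex)
    h = Dinv_X @ np.linalg.solve(X.conj().T @ Dinv_X, u)
    return X, h

rng = np.random.default_rng(2)
train = [rng.random((16, 16)) for _ in range(3)]   # stand-ins for one subject's views
X, h = mace_filter(train)
```

By construction X^H h = u exactly, so every training view of the subject produces the same constrained peak response, which is what makes correlation-filter scores discriminative per subject.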
arXiv: Computer Vision and Pattern Recognition | 2017
Chenchen Zhu; Yutong Zheng; Khoa Luu; Marios Savvides
Robust face detection in the wild is one of the ultimate components supporting various facial-related problems, e.g., unconstrained face recognition, facial periocular recognition, facial landmarking and pose estimation, facial expression recognition, 3D facial model construction, etc. Although the face detection problem has been intensely studied for decades, with various commercial applications, it still fails in some real-world scenarios due to numerous challenges, e.g., heavy facial occlusions, extremely low resolutions, strong illumination, exceptional pose variations, image or video compression artifacts, etc. In this paper, we present a face detection approach named Contextual Multi-Scale Region-based Convolutional Neural Network (CMS-RCNN) to robustly solve the problems mentioned above. Like other region-based CNNs, our proposed network consists of a region proposal component and a region-of-interest (RoI) detection component. Unlike those networks, however, our proposed network makes two main contributions that play a significant role in achieving state-of-the-art performance in face detection. First, multi-scale information is grouped in both the region proposal and RoI detection components to deal with tiny face regions. Second, our proposed network allows explicit body contextual reasoning, inspired by the human vision system. The proposed approach is benchmarked on two recent challenging face detection databases, i.e., the WIDER FACE dataset, which contains a high degree of variability, and the Face Detection Data Set and Benchmark (FDDB). The experimental results show that our approach, trained on the WIDER FACE dataset, outperforms strong baselines on WIDER FACE by a large margin and consistently achieves competitive results on FDDB against recent state-of-the-art face detection methods.
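The multi-scale grouping can be sketched in plain numpy: RoI max pooling applied to two feature maps at different strides, each pooled vector L2-normalized before concatenation so neither scale dominates. The map shapes, strides, and pooling grid below are illustrative assumptions, not CMS-RCNN's actual configuration.

```python
import numpy as np

def roi_max_pool(fmap, box, stride, out=4):
    """Max-pool the region box = (x1, y1, x2, y2), given in image
    coordinates, from a (C, H, W) feature map with the given stride,
    into an out x out grid per channel."""
    C, H, W = fmap.shape
    x1, y1, x2, y2 = [v / stride for v in box]
    pooled = np.zeros((C, out, out))
    xs = np.linspace(x1, x2, out + 1)
    ys = np.linspace(y1, y2, out + 1)
    for i in range(out):
        for j in range(out):
            r0, r1 = int(ys[i]), max(int(np.ceil(ys[i+1])), int(ys[i]) + 1)
            c0, c1 = int(xs[j]), max(int(np.ceil(xs[j+1])), int(xs[j]) + 1)
            pooled[:, i, j] = fmap[:, r0:min(r1, H), c0:min(c1, W)].max(axis=(1, 2))
    return pooled

def l2norm(v, eps=1e-8):
    return v / (np.linalg.norm(v) + eps)

rng = np.random.default_rng(3)
conv3 = rng.random((64, 56, 56))    # shallower map, stride 4 (hypothetical shapes)
conv4 = rng.random((128, 28, 28))   # deeper map, stride 8
box = (40, 40, 120, 120)            # a face proposal in image coordinates

# Normalize each scale's pooled features before concatenating them.
feat = np.concatenate([l2norm(roi_max_pool(conv3, box, 4).ravel()),
                       l2norm(roi_max_pool(conv4, box, 8).ravel())])
```

Pooling the same proposal from a shallower, higher-resolution map is what preserves the few pixels of evidence a tiny face leaves behind.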
computer vision and pattern recognition | 2015
Chi Nhan Duong; Khoa Luu; Kha Gia Quach; Tien D. Bui
The “interpretation through synthesis” approach, i.e., the Active Appearance Models (AAMs) method, has received considerable attention over the past decades. It aims at “explaining” face images by synthesizing them via a parameterized model of appearance. This is quite challenging due to the appearance variations of human face images, e.g., facial poses, occlusions, lighting, low resolution, etc. Since these variations are mostly non-linear, they cannot be represented by a linear model such as Principal Component Analysis (PCA). This paper presents a novel Deep Appearance Models (DAMs) approach, an efficient replacement for AAMs, to accurately capture both the shape and texture of face images under large variations. In this approach, three crucial components represented in hierarchical layers are modeled using Deep Boltzmann Machines (DBMs) to robustly capture the variations of facial shapes and appearances. DAMs are therefore superior to AAMs in inferring representations for new face images under various challenging conditions. In addition, DAMs can generate a compact set of parameters as a higher-level representation that can be used for classification, e.g., face recognition and facial age estimation. The proposed approach is evaluated on facial image reconstruction and facial super-resolution on two databases, i.e., LFPW and Helen. It is also evaluated on the FG-NET database for the problem of age estimation.
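DBMs are built from RBM-like layers trained with contrastive divergence. As a hedged illustration of that building block only (not the paper's full DAM/DBM training), here is one CD-1 update for a single binary RBM in numpy, with synthetic binary data.

```python
import numpy as np

rng = np.random.default_rng(4)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_step(W, b_v, b_h, v0, lr=0.05):
    """One contrastive-divergence (CD-1) update for a binary RBM:
    up-pass, sample hidden units, down-pass to reconstruct, up-pass
    again, then move the parameters toward the data statistics."""
    p_h0 = sigmoid(v0 @ W + b_h)                   # hidden probabilities given data
    h0 = (rng.random(p_h0.shape) < p_h0) * 1.0     # sampled hidden states
    p_v1 = sigmoid(h0 @ W.T + b_v)                 # reconstruction
    p_h1 = sigmoid(p_v1 @ W + b_h)
    W += lr * (v0.T @ p_h0 - p_v1.T @ p_h1) / len(v0)
    b_v += lr * (v0 - p_v1).mean(axis=0)
    b_h += lr * (p_h0 - p_h1).mean(axis=0)
    return ((v0 - p_v1) ** 2).mean()               # reconstruction error

n_vis, n_hid = 20, 8
W = 0.01 * rng.normal(size=(n_vis, n_hid))
b_v, b_h = np.zeros(n_vis), np.zeros(n_hid)
data = (rng.random((100, n_vis)) < 0.3) * 1.0      # synthetic binary "appearance" data
errs = [cd1_step(W, b_v, b_h, data) for _ in range(50)]
```

In a DBM, layers of exactly this kind are stacked so the top-level hidden activations serve as the compact parameter vector the abstract mentions.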
computer vision and pattern recognition | 2016
T. Hoang Ngan Le; Yutong Zheng; Chenchen Zhu; Khoa Luu; Marios Savvides
In this paper, we present an advanced deep learning based approach to automatically determine whether a driver is using a cell phone, as well as to detect whether his/her hands are on the steering wheel (i.e., counting the number of hands on the wheel). To robustly detect small objects such as hands, we propose a Multiple Scale Faster R-CNN (MS-FRCNN) approach that uses standard Region Proposal Network (RPN) generation and incorporates shallower convolutional feature maps, i.e., conv3 and conv4, for RoI pooling. In our driver distraction detection framework, we first use the proposed MS-FRCNN to detect the individual objects, namely a hand, a cell phone, and a steering wheel. Then, geometric information is extracted to determine whether a cell phone is being used and how many hands are on the wheel. The proposed approach is demonstrated and evaluated on the Vision for Intelligent Vehicles and Applications (VIVA) Challenge database and the challenging Strategic Highway Research Program (SHRP-2) face-view videos, which were acquired to monitor drivers under naturalistic driving conditions. The experimental results show that our method achieves better performance than Faster R-CNN on both hands-on-wheel detection and cell-phone usage detection while retaining a similar testing cost. Compared to the state-of-the-art cell-phone usage detection, our approach obtains higher accuracy, is less time-consuming, and is independent of landmarking. The ground-truth database will be made publicly available.
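The geometric-reasoning step can be sketched as a rule over detected boxes: count hands whose boxes overlap the wheel, and flag phone use when a phone box overlaps a hand box. The IoU threshold and the rule itself are illustrative assumptions, not the paper's actual decision logic.

```python
def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def analyze(hands, phones, wheel, touch=0.05):
    """Count hands on the wheel and flag cell-phone use when a phone box
    overlaps any hand box (illustrative rule, not the paper's)."""
    hands_on_wheel = sum(1 for h in hands if iou(h, wheel) > touch)
    phone_in_use = any(iou(h, p) > touch for h in hands for p in phones)
    return hands_on_wheel, phone_in_use

# Toy detections: one hand on the wheel, one hand holding a phone.
wheel = (100, 200, 300, 360)
hands = [(120, 210, 180, 270), (320, 80, 380, 140)]
phones = [(330, 90, 370, 150)]
hands_on_wheel, phone_in_use = analyze(hands, phones, wheel)
```

On these toy boxes the rule reports one hand on the wheel and phone use detected, since the second hand box overlaps the phone box.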
computer vision and pattern recognition | 2010
Khoa Luu; Tien D. Bui; Ching Y. Suen; Karl Ricanek
In this paper, we introduce an advanced age determination technique that applies multi-factored Principal Component Analysis (PCA) to the shape of the face, its features, and the skin of the face to produce a 30×1 linear encoding of the face. The linearly encoded features are combined with Spectral Regression (SR) to improve the performance of age determination over the current best techniques. SR is used to further reduce the dimensionality of the face encoding such that intra-class distances are minimized while inter-class distances are maximized. The SR feature vector is used to classify a face into one of two age groups (age recognition). An age-determination function is then constructed for each age group in accordance with the physiological growth periods of humans: pre-adult (youth) and adult. Compared to published results, this method yields the highest accuracy in overall mean absolute error (MAE), mean absolute error per decade of life (MAE/D), and cumulative match score.
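The 30×1 encoding step can be sketched with ordinary PCA via numpy's SVD: center the stacked feature vectors and project them onto the top 30 principal directions. The multi-factored structure and the Spectral Regression stage are beyond this sketch, and the random data stand in for real shape/skin features.

```python
import numpy as np

rng = np.random.default_rng(5)
faces = rng.random((150, 64 * 64))   # stand-in for concatenated shape/skin features

def pca_encode(X, k=30):
    """Project mean-centered data onto its top-k principal directions,
    giving a k-dimensional linear encoding per face."""
    mu = X.mean(axis=0)
    U, S, Vt = np.linalg.svd(X - mu, full_matrices=False)
    return (X - mu) @ Vt[:k].T, Vt[:k], mu

codes, components, mu = pca_encode(faces)
```

Each row of `codes` is the 30-dimensional encoding that a discriminative projection such as SR would then compress further before the two-group age classification.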
IEEE Transactions on Image Processing | 2013
T. H. N. Le; Khoa Luu; Marios Savvides
Robust facial hair detection and segmentation is a highly valued soft-biometric capability for forensic facial analysis. In this paper, we propose a novel, fully automatic system, called SparCLeS, for beard/moustache detection and segmentation in challenging facial images. SparCLeS uses the multiscale self-quotient (MSQ) algorithm to preprocess facial images and deal with illumination variation. Histogram of oriented gradients (HOG) features are extracted from the preprocessed images, and a dynamic sparse classifier is built using these features to classify a facial region as containing either skin or facial hair. A level-set-based approach, which exploits the advantages of both global and local information, is then used to segment the regions of a face containing facial hair. Experimental results demonstrate the effectiveness of our proposed system in detecting and segmenting facial hair regions in images drawn from three databases, i.e., the NIST Multiple Biometric Grand Challenge (MBGC) still face database, the NIST Color FERET (Facial Recognition Technology) database, and the Labeled Faces in the Wild (LFW) database.
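The MSQ preprocessing can be sketched with scipy: divide the image by Gaussian-smoothed copies of itself at several scales and average the quotients, which suppresses slowly varying illumination while keeping texture. The scales and the averaging rule are illustrative assumptions, not the paper's exact parameters.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def multiscale_self_quotient(img, sigmas=(1, 2, 4), eps=1e-6):
    """Self-quotient image I / smooth(I) at several Gaussian scales,
    averaged (scales are illustrative, not the paper's settings)."""
    img = img.astype(float)
    q = [img / (gaussian_filter(img, s) + eps) for s in sigmas]
    return np.mean(q, axis=0)

rng = np.random.default_rng(6)
# Random texture modulated by a strong left-to-right illumination gradient.
texture = rng.random((48, 48))
illum = np.linspace(0.2, 1.0, 48)[None, :] * np.ones((48, 1))
face = texture * illum
msq = multiscale_self_quotient(face)
```

After this step, the bright and dark sides of the face have comparable intensity statistics, so the downstream HOG features respond to hair texture rather than lighting.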
international conference on biometrics theory applications and systems | 2012
Yiting Xie; Khoa Luu; Marios Savvides
In this paper, we propose a novel approach that combines Kernel Class-dependent Feature Analysis (KCFA) with facial color based features to tackle the problem of ethnicity classification on large-scale face databases. In our approach, a new design of multiple KCFA filtered responses is used for the ethnicity classes. To improve the robustness of our system, facial color based features are also combined with the filtered responses to extract stable ethnicity features. Compared to previous approaches, our proposed method achieves the best ethnicity classification accuracy on large industrial face databases and on the standard MBGC face database.
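One common way to combine two feature cues like this is normalized concatenation; the sketch below fuses a stand-in filter-response vector with crude per-channel color histograms. Both the color feature and the fusion rule are illustrative assumptions standing in for KCFA responses and the paper's color features.

```python
import numpy as np

rng = np.random.default_rng(7)

def color_feature(rgb_img, bins=8):
    """Per-channel intensity histograms, normalized to sum to 1
    (a crude stand-in for the paper's facial color based features)."""
    hists = [np.histogram(rgb_img[..., c], bins=bins, range=(0, 1))[0]
             for c in range(3)]
    h = np.concatenate(hists).astype(float)
    return h / h.sum()

def fuse(filter_response, color_feat):
    """Concatenate L2-normalized feature vectors so each cue
    contributes on a comparable scale."""
    n = lambda v: v / (np.linalg.norm(v) + 1e-8)
    return np.concatenate([n(filter_response), n(color_feat)])

img = rng.random((32, 32, 3))          # stand-in face crop in [0, 1] RGB
response = rng.normal(size=64)         # stand-in filtered response vector
fused = fuse(response, color_feature(img))
```

A classifier trained on `fused` then sees both texture-driven and color-driven evidence, which is the robustness argument the abstract makes.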