Publications


Featured research published by T. Hoang Ngan Le.


Pattern Recognition | 2016

A novel Shape Constrained Feature-based Active Contour model for lips/mouth segmentation in the wild

T. Hoang Ngan Le; Marios Savvides

In this paper, we propose a novel joint formulation of a feature-based active contour (FAC) and prior shape constraints (SC) for lips/mouth segmentation in the wild. Our proposed SC-FAC model robustly segments lips/mouths that belong to a given mouth shape space while minimizing the energy functional. The shape space is defined by a 2D Modified Active Shape Model (MASM), whereas the active contour model is based on the Chan-Vese functional. Our SC-FAC energy functional overcomes the drawback of noise while minimizing the fitting forces under the shape constraints. We conducted our experiments on images from the MBGC, VidTIMIT, JAFFE, and LFW databases captured under challenging conditions such as varying illumination, low contrast, facial expressions, low resolution, blurring, beards/moustaches, and cosmetics. The results from our studies indicate that the proposed SC-FAC model reliably and accurately performs prior-shape-constrained segmentation of weak objects. The average performance of mouth segmentation using the proposed SC-FAC on 1918 images from the MBGC database, under different illuminations, expressions, and complex backgrounds, reaches a Precision of 91.30%, a Recall of 91.32%, and an F-measure of 90.62%.

Highlights:
- Proposes a novel joint formulation of contour and shape for lip segmentation.
- Robustly segments lips in challenging environments and against complex backgrounds.
- The energy functional is composed of 5 terms based on the Chan-Vese functional.
- The shape space is defined by a 2D Modified Active Shape Model (MASM).
- Experiments are conducted on the MBGC, VidTIMIT, JAFFE, and LFW databases.
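The Chan-Vese functional that the active contour term builds on has a standard two-phase form; the sketch below shows that generic form only (the paper's actual energy adds feature and shape-prior terms, for five terms in total):

```latex
E(c_1, c_2, C) = \mu \,\mathrm{Length}(C)
  + \lambda_1 \int_{\mathrm{inside}(C)} \lvert I(\mathbf{x}) - c_1 \rvert^2 \, d\mathbf{x}
  + \lambda_2 \int_{\mathrm{outside}(C)} \lvert I(\mathbf{x}) - c_2 \rvert^2 \, d\mathbf{x}
```

Here c_1 and c_2 are the mean image intensities inside and outside the evolving contour C; the shape constraint additionally restricts C to the MASM mouth shape space.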


computer vision and pattern recognition | 2016

Multiple Scale Faster-RCNN Approach to Driver’s Cell-Phone Usage and Hands on Steering Wheel Detection

T. Hoang Ngan Le; Yutong Zheng; Chenchen Zhu; Khoa Luu; Marios Savvides

In this paper, we present an advanced deep learning based approach to automatically determine whether a driver is using a cell-phone, as well as to detect whether his/her hands are on the steering wheel (i.e. to count the number of hands on the wheel). To robustly detect small objects such as hands, we propose a Multiple Scale Faster-RCNN (MS-FRCNN) approach that uses a standard Region Proposal Network (RPN) for proposal generation and incorporates feature maps from shallower convolution layers, i.e. conv3 and conv4, for ROI pooling. In our driver distraction detection framework, we first use the proposed MS-FRCNN to detect individual objects, namely, a hand, a cell-phone, and a steering wheel. Then, geometric information is extracted to determine whether a cell-phone is being used and how many hands are on the wheel. The proposed approach is demonstrated and evaluated on the Vision for Intelligent Vehicles and Applications (VIVA) Challenge database and the challenging Strategic Highway Research Program (SHRP-2) face view videos, which were acquired to monitor drivers under naturalistic driving conditions. The experimental results show that our method achieves better performance than Faster R-CNN on both hands-on-wheel detection and cell-phone usage detection at a similar testing cost. Compared to the state-of-the-art cell-phone usage detection, our approach obtains higher accuracy, is less time consuming, and is independent of landmarking. The groundtruth database will be made publicly available.
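The core multiple-scale idea can be illustrated in a few lines: pool the same region of interest from feature maps of different resolutions and concatenate the results, so fine detail from shallow layers survives for small objects like hands. The sketch below is a toy NumPy illustration under invented names and shapes, not the paper's actual network code.

```python
import numpy as np

def roi_max_pool(fmap, box, out_size=2, spatial_scale=1.0):
    """Max-pool the region box = (x1, y1, x2, y2) of a 2-D feature
    map into an out_size x out_size grid (a toy ROI pooling)."""
    x1, y1, x2, y2 = [int(round(c * spatial_scale)) for c in box]
    region = fmap[y1:y2 + 1, x1:x2 + 1]
    h, w = region.shape
    ys = np.linspace(0, h, out_size + 1).astype(int)
    xs = np.linspace(0, w, out_size + 1).astype(int)
    out = np.empty((out_size, out_size))
    for i in range(out_size):
        for j in range(out_size):
            cell = region[ys[i]:max(ys[i + 1], ys[i] + 1),
                          xs[j]:max(xs[j + 1], xs[j] + 1)]
            out[i, j] = cell.max()
    return out

def multiscale_roi_features(fmaps_and_scales, box, out_size=2):
    """Pool the same ROI from each feature map (e.g. a shallow conv3-like
    map and a coarse conv5-like map) and concatenate the results."""
    pooled = [roi_max_pool(f, box, out_size, s).ravel()
              for f, s in fmaps_and_scales]
    return np.concatenate(pooled)

rng = np.random.default_rng(0)
conv3 = rng.random((32, 32))   # higher-resolution map
conv5 = rng.random((8, 8))     # coarser map, stride 4x larger
feat = multiscale_roi_features([(conv3, 1.0), (conv5, 0.25)],
                               box=(8, 8, 23, 23))
print(feat.shape)   # (8,): 2x2 cells from each of the two maps
```

The real MS-FRCNN normalizes and fuses multi-channel feature maps inside the network; this sketch only shows why concatenating pools from several scales preserves both fine and coarse evidence for one region.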


international conference on biometrics theory applications and systems | 2012

A facial aging approach to identification of identical twins

T. Hoang Ngan Le; Khoa Luu; Keshav Seshadri; Marios Savvides

Biometric recognition based on the characteristics of human faces has attracted a great deal of attention over the past few years. However, the similarity in the facial appearance of identical twins has made the task difficult and has even compromised commercial face recognition systems. In this paper, we shed new light on the study of facial recognition of identical twins and propose a novel approach that uses twins group classification and facial aging features to tell them apart. Our experiments, conducted on the University of Notre Dame ND-Twins database, which was acquired over two years (2009 and 2010), indicate that our proposed approach demonstrates good generalization ability and high identification rates.


international conference on image processing | 2012

Beard and mustache segmentation using sparse classifiers on self-quotient images

T. Hoang Ngan Le; Khoa Luu; Keshav Seshadri; Marios Savvides

In this paper, we propose a novel system for beard and mustache detection and segmentation in challenging facial images. Our system first eliminates illumination artifacts using the self-quotient algorithm. A sparse classifier is then used on these self-quotient images to classify a region as either containing skin or facial hair. We conduct experiments on the MBGC and color FERET databases to demonstrate the effectiveness of our proposed system.
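The illumination-normalization step can be sketched simply: a self-quotient image divides each pixel by a local smoothed version of itself, so slowly varying lighting cancels while local texture (skin vs. hair) survives for the classifier. The toy NumPy version below substitutes a plain mean filter for the weighted Gaussian smoothing used by the real self-quotient algorithm, so it is an illustration of the principle, not the paper's implementation.

```python
import numpy as np

def box_blur(img, k=7):
    """Simple mean filter via edge padding; stands in for the weighted
    Gaussian smoothing used by the real self-quotient algorithm."""
    pad = k // 2
    padded = np.pad(img, pad, mode='edge')
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def self_quotient(img, k=7, eps=1e-6):
    """Self-quotient image Q = I / smoothed(I): dividing each pixel by
    a local average cancels slowly varying illumination."""
    return img / (box_blur(img, k) + eps)

# Toy check: a flat texture under a strong left-to-right lighting gradient
texture = np.tile([1.0, 2.0], (16, 8))        # 16x16 alternating pattern
light = np.linspace(0.2, 1.0, 16)[None, :]    # illumination gradient
q = self_quotient(texture * light)
# After the quotient, the dim left and bright right halves look nearly alike
print(abs(q[:, :4].mean() - q[:, -4:].mean()))
```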


international joint conference on biometrics | 2014

A novel eyebrow segmentation and eyebrow shape-based identification

T. Hoang Ngan Le; Utsav Prabhu; Marios Savvides

Recent studies in biometrics have shown that the periocular region of the face is sufficiently discriminative for robust recognition, and particularly effective in certain scenarios such as extreme occlusions, and illumination variations where traditional face recognition systems are unreliable. In this paper, we first propose a fully automatic, robust and fast graph-cut based eyebrow segmentation technique to extract the eyebrow shape from a given face image. We then propose an eyebrow shape-based identification system for periocular face recognition. Our experiments have been conducted over large datasets from the MBGC and AR databases and the resilience of the proposed approach has been evaluated under varying data conditions. The experimental results show that the proposed eyebrow segmentation achieves high accuracy with an F-Measure of 99.4% and the identification system achieves rates of 76.0% on the AR database and 85.0% on the MBGC database.
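Graph-cut segmentation of this kind typically minimizes the standard binary-label energy below, with labels L_p in {eyebrow, background}; this is the generic formulation, not necessarily the paper's exact construction:

```latex
E(L) = \sum_{p \in \mathcal{P}} D_p(L_p)
  \;+\; \lambda \sum_{(p,q) \in \mathcal{N}} V_{p,q}(L_p, L_q)
```

Here D_p measures how poorly pixel p's appearance fits its label and V_{p,q} penalizes neighboring pixels that take different labels; the global minimum of this energy is found exactly with a max-flow/min-cut algorithm, which is what makes the approach fast and robust.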


international conference on pattern recognition | 2016

Robust hand detection in vehicles

T. Hoang Ngan Le; Chenchen Zhu; Yutong Zheng; Khoa Luu; Marios Savvides

The problem of hand detection has been widely addressed in many areas, e.g. human-computer interaction and driver behavior monitoring. However, the detection accuracy of recent hand detection systems is still far from practical demands due to a number of challenges, e.g. hand variations, heavy occlusion, low resolution, and strong lighting conditions. This paper presents the Multiple Scale Faster Region-based Convolutional Neural Network (MS-FRCNN) to handle the problems of hand detection in digital images collected under challenging conditions. Our proposed method introduces a multiple scale deep feature extraction approach in order to handle these challenging factors and provide a robust hand detection algorithm. The method is evaluated on a challenging hand database, i.e. the Vision for Intelligent Vehicles and Applications (VIVA) Challenge, and compared against various recent hand detection methods. Our proposed method achieves state-of-the-art results, with detection accuracy 20% higher than the second best entry in the VIVA challenge.


Pattern Recognition | 2015

Facial aging and asymmetry decomposition based approaches to identification of twins

T. Hoang Ngan Le; Keshav Seshadri; Khoa Luu; Marios Savvides

A reliable and accurate biometric identification system must be able to distinguish individuals even in situations where their biometric signatures are very similar. However, the strong similarity in the facial appearance of twins has complicated facial feature based recognition and has even compromised commercial face recognition systems. This paper addresses the above problem and proposes two novel methods to distinguish identical twins using (1) facial aging features and (2) asymmetry decomposition features. Facial aging features are extracted using Gabor filters from regions of the face that typically exhibit wrinkles and laugh lines, while facial asymmetry decomposition based features are obtained by projecting the difference between the two left sides (consisting of the left half of the face and its mirror) and the two right sides (consisting of the right half of the face and its mirror) of a face onto a subspace. Feature vectors obtained using these methods were used for classification. Experiments conducted on images of five types of twins from the University of Notre Dame ND-Twins database indicate that both our proposed approaches achieve high identification rates and are hence quite promising at distinguishing twins.

Highlights:
- Two novel approaches to distinguishing identical twins are proposed.
- Facial aging and intrinsic facial symmetry features are used.
- A thorough evaluation on a challenging database is provided.
- Existing techniques and the results obtained by them are summarized.
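The asymmetry decomposition described above is easy to sketch: build two synthetic full faces (left half plus its mirror, right half plus its mirror), take their difference, and optionally project it onto a subspace. The NumPy sketch below follows that description directly; the `basis` argument stands in for the paper's subspace projection and is a hypothetical placeholder (e.g. PCA basis vectors).

```python
import numpy as np

def symmetry_halves(face):
    """Split a face image (h x w, w even) down the middle and build two
    synthetic full faces: left half + its mirror, right half + its mirror."""
    h, w = face.shape
    left, right = face[:, :w // 2], face[:, w // 2:]
    left_face = np.hstack([left, left[:, ::-1]])
    right_face = np.hstack([right[:, ::-1], right])
    return left_face, right_face

def asymmetry_feature(face, basis=None):
    """Difference of the two mirrored faces, optionally projected onto a
    subspace `basis` (rows = basis vectors; hypothetical, e.g. from PCA)."""
    lf, rf = symmetry_halves(face)
    d = (lf - rf).ravel()
    return d if basis is None else basis @ d

# A perfectly left-right symmetric "face" yields a zero asymmetry vector
sym = np.array([[1.0, 2.0, 2.0, 1.0],
                [3.0, 4.0, 4.0, 3.0]])
print(np.allclose(asymmetry_feature(sym), 0))   # True
```

For an asymmetric face the feature is nonzero, which is what makes it discriminative between twins whose overall appearance is nearly identical.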


international conference on biometrics theory applications and systems | 2016

Towards a deep learning framework for unconstrained face detection

Yutong Zheng; Chenchen Zhu; Khoa Luu; Chandrasekhar Bhagavatula; T. Hoang Ngan Le; Marios Savvides

Robust face detection is one of the most important preprocessing steps for facial expression analysis, facial landmarking, face recognition, pose estimation, building of 3D facial models, etc. Although this topic has been intensely studied for decades, it is still challenging due to the numerous variations of face images in real-world scenarios. In this paper, we present a novel approach named Multiple Scale Faster Region-based Convolutional Neural Network (MS-FRCNN) to robustly detect human facial regions in images collected under various challenging conditions, e.g. large occlusions, extremely low resolutions, facial expressions, strong illumination variations, etc. The proposed approach is benchmarked on two challenging face detection databases, i.e. the Wider Face database and the Face Detection Dataset and Benchmark (FDDB), and compared against other recent face detection methods, e.g. Two-stage CNN, Multi-scale Cascade CNN, Faceness, Aggregate Channel Features, HeadHunter, Multi-view Face Detection, Cascade CNN, etc. The experimental results show that our proposed approach consistently achieves results highly competitive with the state of the art.


international conference on biometrics | 2015

Fast and robust self-training beard/moustache detection and segmentation

T. Hoang Ngan Le; Khoa Luu; Marios Savvides

Facial hair detection and segmentation play an important role in forensic facial analysis. In this paper, we propose a fast, robust, fully automatic and self-training system for beard/moustache detection and segmentation in challenging facial images. In order to overcome the limitations posed by illumination, facial hair color and nearly clean-shaven faces, our facial hair detector self-learns a transformation vector that separates a hair class and a non-hair class from the testing image itself. A feature vector, consisting of the Histogram of Gabor (HoG) and the Histogram of Oriented Gradient of Gabor (HOGG) at different directions and frequencies, is proposed for both beard/moustache detection and segmentation. A feature-based segmentation is then proposed to segment the beard/moustache from a region of the face that is found to contain facial hair. Experimental results demonstrate the robustness and effectiveness of our proposed system in detecting and segmenting facial hair in images drawn from three entire databases, i.e. the Multiple Biometric Grand Challenge (MBGC) still face database, the NIST color Facial Recognition Technology (FERET) database and a large subset of the Pinellas County database.
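The feature construction can be illustrated compactly: filter a patch with Gabor kernels at several orientations and histogram the responses, then concatenate the per-orientation histograms. The sketch below is a minimal stand-in for the paper's HoG/HOGG features, using the standard real Gabor kernel formula with illustrative parameter values.

```python
import numpy as np

def gabor_kernel(ksize=9, theta=0.0, lam=4.0, sigma=2.0, gamma=0.5):
    """Real part of a Gabor filter at orientation theta and wavelength
    lam (standard formulation; parameter values are illustrative)."""
    half = ksize // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    return np.exp(-(xr**2 + (gamma * yr)**2) / (2 * sigma**2)) \
        * np.cos(2 * np.pi * xr / lam)

def conv2_same(img, k):
    """Naive 'same'-size 2-D correlation with zero padding."""
    half = k.shape[0] // 2
    padded = np.pad(img, half)
    out = np.zeros_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(padded[i:i + k.shape[0], j:j + k.shape[1]] * k)
    return out

def histogram_of_gabor(img, thetas=(0.0, np.pi / 4, np.pi / 2), bins=8):
    """Concatenate histograms of Gabor responses over several
    orientations -- a toy stand-in for the paper's HoG/HOGG features."""
    feats = []
    for t in thetas:
        resp = conv2_same(img, gabor_kernel(theta=t))
        hist, _ = np.histogram(resp, bins=bins)
        feats.append(hist / resp.size)
    return np.concatenate(feats)

rng = np.random.default_rng(1)
patch = rng.random((16, 16))
f = histogram_of_gabor(patch)
print(f.shape)   # (24,): 8 bins x 3 orientations
```

The real system varies both direction and frequency and adds oriented-gradient histograms of the Gabor responses (HOGG); the principle of one histogram per filter channel, concatenated, is the same.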


computer vision and pattern recognition | 2017

Robust Hand Detection and Classification in Vehicles and in the Wild

T. Hoang Ngan Le; Kha Gia Quach; Chenchen Zhu; Chi Nhan Duong; Khoa Luu; Marios Savvides

Robust hand detection and classification is one of the most crucial pre-processing steps for human-computer interaction, driver behavior monitoring, virtual reality, etc. This problem, however, is very challenging due to the numerous variations of hand images in real-world scenarios. This work presents a novel approach named Multiple Scale Region-based Fully Convolutional Networks (MS-RFCN) to robustly detect and classify human hand regions under various challenging conditions, e.g. occlusions, illumination, and low resolution. In this approach, the whole image is passed through the proposed fully convolutional network to compute score maps. These score maps, with their position-sensitive properties, help to efficiently address the dilemma between translation invariance in classification and translation variance in detection. The method is evaluated on challenging hand databases, i.e. the Vision for Intelligent Vehicles and Applications (VIVA) Challenge and the Oxford hand dataset, and compared against various recent hand detection methods. The experimental results show that our proposed MS-RFCN approach consistently achieves state-of-the-art hand detection results, i.e. an Average Precision (AP) / Average Recall (AR) of 95.1% / 94.5% at level 1 and 86.0% / 83.4% at level 2 on the VIVA challenge. In addition, the proposed method achieves state-of-the-art results for the left/right hand and driver/passenger classification tasks on the VIVA database, with significant improvements in AP/AR of ~7% and ~13% for the two classification tasks, respectively. The hand detection performance of MS-RFCN reaches 75.1% AP and 77.8% AR on the Oxford database.
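Position-sensitive score maps (the R-FCN idea this approach builds on) can be sketched in a few lines: for a k x k grid there are k*k score maps per class, and grid cell (i, j) of a candidate region is average-pooled only from its own dedicated map, so each map learns to respond to one relative position (e.g. "top-left of a hand"). The single-class NumPy sketch below is a simplified illustration, not the paper's implementation.

```python
import numpy as np

def ps_roi_pool(score_maps, box, k=3):
    """Position-sensitive ROI pooling, simplified for one class:
    score_maps has shape (k*k, H, W); grid cell (i, j) of the ROI
    box = (x1, y1, x2, y2) is average-pooled from score map i*k + j."""
    x1, y1, x2, y2 = box
    h, w = y2 - y1 + 1, x2 - x1 + 1
    ys = y1 + np.linspace(0, h, k + 1).astype(int)
    xs = x1 + np.linspace(0, w, k + 1).astype(int)
    out = np.empty((k, k))
    for i in range(k):
        for j in range(k):
            m = score_maps[i * k + j]           # this cell's dedicated map
            cell = m[ys[i]:max(ys[i + 1], ys[i] + 1),
                     xs[j]:max(xs[j + 1], xs[j] + 1)]
            out[i, j] = cell.mean()
    return out

# The ROI's class score is the average vote of its k*k position-sensitive cells
rng = np.random.default_rng(2)
maps = rng.random((9, 24, 24))
votes = ps_roi_pool(maps, box=(4, 4, 15, 15), k=3)
roi_score = votes.mean()
print(votes.shape)   # (3, 3)
```

Because all pooling happens on shared, fully convolutional score maps, classification stays translation-invariant while the per-cell maps encode where parts sit inside the box, which resolves the invariance/variance dilemma mentioned above.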

Collaboration


Top co-authors of T. Hoang Ngan Le:

- Marios Savvides (Carnegie Mellon University)
- Khoa Luu (Carnegie Mellon University)
- Chenchen Zhu (Carnegie Mellon University)
- Yutong Zheng (Carnegie Mellon University)
- Keshav Seshadri (Carnegie Mellon University)
- Ligong Han (Carnegie Mellon University)
- Karanhaar Singh (Carnegie Mellon University)