Chenchen Zhu
Carnegie Mellon University
Publication
Featured research published by Chenchen Zhu.
arXiv: Computer Vision and Pattern Recognition | 2017
Chenchen Zhu; Yutong Zheng; Khoa Luu; Marios Savvides
Robust face detection in the wild is one of the ultimate components supporting various facial-related problems, i.e., unconstrained face recognition, facial periocular recognition, facial landmarking and pose estimation, facial expression recognition, 3D facial model construction, etc. Although the face detection problem has been intensely studied for decades with various commercial applications, it still meets problems in some real-world scenarios due to numerous challenges, e.g., heavy facial occlusions, extremely low resolutions, strong illumination, exceptional pose variations, image or video compression artifacts, etc. In this paper, we present a face detection approach named Contextual Multi-Scale Region-based Convolutional Neural Network (CMS-RCNN) to robustly solve the problems mentioned above. Similar to region-based CNNs, our proposed network consists of a region proposal component and a region-of-interest (RoI) detection component. Unlike those networks, however, it makes two main contributions that play a significant role in achieving state-of-the-art face detection performance. First, multi-scale information is grouped both in region proposal and in RoI detection to deal with tiny face regions. Second, our proposed network allows explicit body contextual reasoning, inspired by the intuition of the human vision system. The proposed approach is benchmarked on two recent challenging face detection databases, i.e., the WIDER FACE Dataset, which contains a high degree of variability, as well as the Face Detection Dataset and Benchmark (FDDB). The experimental results show that our approach, trained on the WIDER FACE Dataset, outperforms strong baselines on WIDER FACE by a large margin and consistently achieves competitive results on FDDB against recent state-of-the-art face detection methods.
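The body-context idea can be made concrete with a small sketch: given a face proposal, a context region covering the torso is derived by fixed spatial ratios and pooled alongside the face RoI. This is a minimal sketch of the idea; the scale factors below are illustrative assumptions, not the values used in CMS-RCNN.

```python
import numpy as np

def body_context_box(face_box, img_w, img_h, scale_w=3.0, scale_h=4.0):
    """Derive a body-context box from a face box (x1, y1, x2, y2).

    The context region is a fixed multiple of the face size, centered
    horizontally on the face and extending downward over the torso.
    The ratios here are illustrative, not the paper's exact values.
    """
    x1, y1, x2, y2 = face_box
    fw, fh = x2 - x1, y2 - y1
    cx = (x1 + x2) / 2.0
    bw, bh = scale_w * fw, scale_h * fh
    bx1 = np.clip(cx - bw / 2.0, 0, img_w)
    bx2 = np.clip(cx + bw / 2.0, 0, img_w)
    by1 = np.clip(y1, 0, img_h)            # start at the top of the face
    by2 = np.clip(y1 + bh, 0, img_h)       # extend down over the body
    return (bx1, by1, bx2, by2)

# Features pooled from this context box would be concatenated with the
# face RoI features before classification.
face = (100, 80, 140, 130)
print(body_context_box(face, img_w=640, img_h=480))
```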
Computer Vision and Pattern Recognition | 2016
T. Hoang Ngan Le; Yutong Zheng; Chenchen Zhu; Khoa Luu; Marios Savvides
In this paper, we present an advanced deep learning based approach to automatically determine whether a driver is using a cell-phone and to detect whether his/her hands are on the steering wheel (i.e., counting the number of hands on the wheel). To robustly detect small objects such as hands, we propose the Multiple Scale Faster-RCNN (MS-FRCNN) approach, which uses standard Region Proposal Network (RPN) generation and incorporates feature maps from shallower convolution layers, i.e., conv3 and conv4, for RoI pooling. In our driver distraction detection framework, we first use the proposed MS-FRCNN to detect the individual objects, namely a hand, a cell-phone, and a steering wheel. Then, geometric information is extracted to determine whether a cell-phone is being used or how many hands are on the wheel. The proposed approach is demonstrated and evaluated on the Vision for Intelligent Vehicles and Applications (VIVA) Challenge database and the challenging Strategic Highway Research Program (SHRP-2) face view videos, which were acquired to monitor drivers under naturalistic driving conditions. The experimental results show that our method achieves better performance than Faster R-CNN on both hands-on-wheel detection and cell-phone usage detection while keeping a similar testing cost. Compared to the state of the art in cell-phone usage detection, our approach obtains higher accuracy, is less time consuming, and is independent of landmarking. The ground-truth database will be made publicly available.
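The geometric reasoning step lends itself to a short sketch: treating each detection as a box, hands on the wheel can be counted by box overlap and phone usage flagged by hand-phone proximity. A minimal sketch; the overlap thresholds and the exact rules used in the paper are assumptions.

```python
def iou(a, b):
    """Intersection-over-union of two boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / float(area(a) + area(b) - inter + 1e-9)

def hands_on_wheel(hand_boxes, wheel_box, thresh=0.1):
    """Count hands whose boxes overlap the steering wheel box."""
    return sum(1 for h in hand_boxes if iou(h, wheel_box) > thresh)

def phone_in_use(phone_boxes, hand_boxes, thresh=0.1):
    """Flag cell-phone usage when a detected phone overlaps any hand."""
    return any(iou(p, h) > thresh for p in phone_boxes for h in hand_boxes)
```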
International Conference on Pattern Recognition | 2016
T. Hoang Ngan Le; Chenchen Zhu; Yutong Zheng; Khoa Luu; Marios Savvides
The problem of hand detection has been widely addressed in many areas, e.g., human computer interaction and driver behavior monitoring. However, the detection accuracy of recent hand detection systems is still far from practical demands due to a number of challenges, e.g., hand variations, heavy occlusions, low resolution, and strong lighting conditions. This paper presents the Multiple Scale Faster Region-based Convolutional Neural Network (MS-FRCNN) to handle the problem of hand detection in digital images collected under challenging conditions. Our proposed method introduces a multiple scale deep feature extraction approach to handle these challenging factors and provide a robust hand detection algorithm. The method is evaluated on a challenging hand database, i.e., the Vision for Intelligent Vehicles and Applications (VIVA) Challenge, and compared against various recent hand detection methods. Our proposed method achieves state-of-the-art results, with detection accuracy 20% higher than the second best entry in the VIVA challenge.
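The multiple scale feature extraction can be sketched as fusing maps from several network depths: shallower maps keep the spatial detail needed for small hands, deeper maps carry stronger semantics. A sketch of the general idea, assuming a PyTorch backbone; MS-FRCNN applies the normalization and concatenation to per-RoI pooled features rather than to the full maps as shown here.

```python
import torch
import torch.nn.functional as F

def fuse_multiscale(conv3, conv4, conv5, reduce_conv):
    """Fuse feature maps from three depths into one map.

    Each map is resized to a common resolution, L2-normalized per
    location so that no single scale dominates, concatenated along the
    channel axis, and reduced with a 1x1 convolution.
    """
    size = conv4.shape[-2:]
    maps = [F.interpolate(m, size=size, mode="bilinear", align_corners=False)
            for m in (conv3, conv4, conv5)]
    maps = [F.normalize(m, p=2, dim=1) for m in maps]   # channel-wise L2 norm
    return reduce_conv(torch.cat(maps, dim=1))          # 1x1 conv reduction

# reduce_conv would be e.g. torch.nn.Conv2d(c3 + c4 + c5, 512, kernel_size=1)
```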
International Conference on Biometrics: Theory, Applications and Systems | 2016
Yutong Zheng; Chenchen Zhu; Khoa Luu; Chandrasekhar Bhagavatula; T. Hoang Ngan Le; Marios Savvides
Robust face detection is one of the most important pre-processing steps to support facial expression analysis, facial landmarking, face recognition, pose estimation, building of 3D facial models, etc. Although this topic has been intensely studied for decades, it is still challenging due to numerous variations of face images in real-world scenarios. In this paper, we present a novel approach named Multiple Scale Faster Region-based Convolutional Neural Network (MS-FRCNN) to robustly detect human facial regions in images collected under various challenging conditions, e.g., large occlusions, extremely low resolutions, facial expressions, strong illumination variations, etc. The proposed approach is benchmarked on two challenging face detection databases, i.e., the WIDER FACE database and the Face Detection Dataset and Benchmark (FDDB), and compared against other recent face detection methods, e.g., Two-stage CNN, Multi-scale Cascade CNN, Faceness, Aggregate Channel Features, HeadHunter, Multi-view Face Detection, and Cascade CNN. The experimental results show that our proposed approach consistently achieves highly competitive, state-of-the-art results against other recent face detection methods.
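Since both benchmarks score detectors by matching detections against ground-truth boxes, a simplified version of that matching step may help make the evaluation concrete (reusing the `iou` helper sketched earlier). The real WIDER FACE and FDDB protocols differ in details, e.g., FDDB uses elliptical ground truth.

```python
def match_detections(dets, gts, iou_thresh=0.5):
    """Greedily match score-sorted detections to ground-truth boxes.

    Returns (true_positives, false_positives); a simplified stand-in for
    the matching behind WIDER FACE / FDDB style evaluation curves.
    `dets` are (box, score) pairs, `gts` a list of boxes.
    """
    dets = sorted(dets, key=lambda d: d[1], reverse=True)
    used = [False] * len(gts)
    tp = fp = 0
    for box, _score in dets:
        best, best_iou = -1, iou_thresh
        for j, g in enumerate(gts):
            if not used[j] and iou(box, g) >= best_iou:
                best, best_iou = j, iou(box, g)
        if best >= 0:
            used[best] = True   # each ground-truth box matches at most once
            tp += 1
        else:
            fp += 1
    return tp, fp
```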
Computer Vision and Pattern Recognition | 2017
T. Hoang Ngan Le; Kha Gia Quach; Chenchen Zhu; Chi Nhan Duong; Khoa Luu; Marios Savvides
Robust hand detection and classification is one of the most crucial pre-processing steps to support human computer interaction, driver behavior monitoring, virtual reality, etc. This problem, however, is very challenging due to numerous variations of hand images in real-world scenarios. This work presents a novel approach named Multiple Scale Region-based Fully Convolutional Networks (MS-RFCN) to robustly detect and classify human hand regions under various challenging conditions, e.g., occlusions, illumination, and low resolution. In this approach, the whole image is passed through the proposed fully convolutional network to compute score maps. Those score maps, with their position-sensitive properties, help to efficiently address the dilemma between translation invariance in image classification and translation variance in object detection. The method is evaluated on challenging hand databases, i.e., the Vision for Intelligent Vehicles and Applications (VIVA) Challenge and the Oxford hand dataset, and compared against various recent hand detection methods. The experimental results show that our proposed MS-RFCN approach consistently achieves state-of-the-art hand detection results, i.e., Average Precision (AP) / Average Recall (AR) of 95.1% / 94.5% at level 1 and 86.0% / 83.4% at level 2 on the VIVA challenge. In addition, the proposed method achieves state-of-the-art results for the left/right hand and driver/passenger classification tasks on the VIVA database, with a significant improvement in AP/AR of ~7% and ~13% for the two classification tasks, respectively. The hand detection performance of MS-RFCN reaches 75.1% AP and 77.8% AR on the Oxford database.
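Position-sensitive pooling is the mechanism that makes the score maps useful: each cell of a k x k RoI grid pools only from the channel block that encodes "object evidence at that relative position". A minimal numpy sketch of the R-FCN-style operation, not the authors' implementation.

```python
import numpy as np

def ps_roi_pool(score_maps, roi, k=3, n_classes=2):
    """Position-sensitive RoI pooling over precomputed score maps.

    score_maps: (k*k*n_classes, H, W); channel block (i, j) encodes class
    evidence at relative position (i, j) within an object. Each RoI cell
    average-pools only from its own block, so the pooled score is
    translation-variant inside the RoI while the backbone stays fully
    convolutional.
    """
    x1, y1, x2, y2 = roi
    cell_w, cell_h = (x2 - x1) / k, (y2 - y1) / k
    votes = np.zeros((n_classes, k, k))
    for i in range(k):          # rows of the RoI grid
        for j in range(k):      # columns of the RoI grid
            cx1 = int(x1 + j * cell_w); cx2 = max(cx1 + 1, int(x1 + (j + 1) * cell_w))
            cy1 = int(y1 + i * cell_h); cy2 = max(cy1 + 1, int(y1 + (i + 1) * cell_h))
            block = (i * k + j) * n_classes
            votes[:, i, j] = score_maps[block:block + n_classes,
                                        cy1:cy2, cx1:cx2].mean(axis=(1, 2))
    return votes.mean(axis=(1, 2))   # per-class score for the RoI

maps = np.random.rand(3 * 3 * 2, 64, 64).astype(np.float32)
print(ps_roi_pool(maps, roi=(10, 10, 40, 46)))
```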
Computer Vision and Pattern Recognition | 2016
Chenchen Zhu; Yutong Zheng; Khoa Luu; T. Hoang Ngan Le; Chandrasekhar Bhagavatula; Marios Savvides
Weakly supervised methods have recently become one of the most popular machine learning approaches since they can be applied to large-scale datasets without the critical requirement of richly annotated data. In this paper, we present a novel, self-taught, discriminative facial feature analysis approach in the weakly supervised framework. Our method can find regions that are discriminative across classes yet consistent within a class, and can solve many face-related problems. The proposed method first trains a deep face model with high discriminative capability to extract facial features. Hypercolumn features are then used to give a pixel-level representation for better classification performance along with discriminative region detection. In addition, calibration approaches are proposed to enable the system to deal with multi-class and mixed-class problems. The system is also able to detect multiple discriminative regions in one image. Our unified method achieves competitive results in various face analysis applications, such as occlusion detection, face recognition, gender classification, twins verification, and facial attractiveness analysis.
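Hypercolumns give each pixel a descriptor built from several network depths at once. A minimal PyTorch sketch of the general construction, assuming the caller supplies the intermediate feature maps; the paper's exact layer choices are not reproduced here.

```python
import torch
import torch.nn.functional as F

def hypercolumns(feature_maps, out_size):
    """Build per-pixel hypercolumn features from several conv layers.

    Each map is bilinearly upsampled to the target resolution and the
    channels are stacked, so every pixel gets a descriptor mixing shallow
    (edge/texture) and deep (semantic) activations, ready for pixel-level
    classification.
    """
    up = [F.interpolate(m, size=out_size, mode="bilinear", align_corners=False)
          for m in feature_maps]
    cols = torch.cat(up, dim=1)                            # (N, sum_C, H, W)
    n, c, h, w = cols.shape
    return cols.permute(0, 2, 3, 1).reshape(n, h * w, c)   # one row per pixel
```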
Book chapter | 2016
Khoa Luu; Chenchen Zhu; Chandrasekhar Bhagavatula; T. Hoang Ngan Le; Marios Savvides
Robust face detection and facial segmentation are crucial pre-processing steps to support facial recognition, expression analysis, pose estimation, building of 3D facial models, etc. In previous approaches, face detection and facial segmentation are usually implemented as sequential, mostly separate modules. In these methods, face detection algorithms are first applied so that facial regions can be located in given images. Segmentation algorithms are then carried out to find the facial boundaries and other facial features, such as the eyebrows, eyes, nose, mouth, etc. However, both of these tasks are challenging due to numerous variations of face images in the wild, e.g., facial expressions, illumination variations, occlusions, resolution, etc. In this chapter, we present a novel approach to simultaneously detect human faces and segment facial features in given images. Our proposed approach performs accurate facial feature segmentation and demonstrates its effectiveness on images from two challenging face databases, i.e., the Multiple Biometric Grand Challenge (MBGC) and Labeled Faces in the Wild (LFW).
International Conference on Big Data | 2014
Khoa Luu; Chenchen Zhu; Marios Savvides
Big data has become ubiquitous and is applied in numerous fields. The challenges of solving a large-scale machine learning problem in a big data scenario generally lie in three aspects. First, the proposed machine learning algorithm has to be suited to distributed optimization. Second, it needs a platform for distributed implementation. Finally, communication delays between machines may cause convergence problems even when the non-distributed algorithm shows a good convergence rate. To address these challenges, we propose a new machine learning approach named Distributed Class-dependent Feature Analysis (DCFA), which combines the advantages of sparse representation in an over-complete dictionary. The classifier is based on the estimation of class-specific optimal filters, obtained by solving an l1-norm optimization problem. We demonstrate how this problem is solved using the Alternating Direction Method of Multipliers (ADMM) and also explore the relevant convergence details. More importantly, our proposed framework can be efficiently implemented on a robust distributed framework, improving both accuracy and computational time on large-scale databases. Our method achieves very high classification accuracy for face recognition in the presence of occlusions on the AR database. It also outperforms state-of-the-art methods in object recognition on two challenging large-scale object databases, i.e., Caltech101 and Caltech256, showing its applicability to general computer vision and pattern recognition problems. In addition, timing experiments show that our distributed method achieves a speedup of 7.85x on the Caltech256 database with just 10 machine nodes compared to the non-distributed version, and can gain even more with more computing resources.
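The ADMM pattern behind such l1-norm problems is easy to sketch on the classic lasso objective. This is a sketch of the splitting pattern, not the DCFA objective itself: the x-update is a cached ridge solve (the only step touching the data matrix, which is what makes distributed/consensus variants natural), the z-update is a soft threshold, and u is the scaled dual variable.

```python
import numpy as np

def admm_lasso(A, b, lam=0.1, rho=1.0, n_iter=100):
    """ADMM for min_x 0.5*||Ax - b||^2 + lam*||x||_1 via the split x = z."""
    m, n = A.shape
    AtA, Atb = A.T @ A, A.T @ b
    L = np.linalg.cholesky(AtA + rho * np.eye(n))   # factor once, reuse
    x, z, u = np.zeros(n), np.zeros(n), np.zeros(n)
    for _ in range(n_iter):
        # x-update: ridge solve using the cached Cholesky factor
        x = np.linalg.solve(L.T, np.linalg.solve(L, Atb + rho * (z - u)))
        # z-update: soft thresholding (proximal operator of the l1 norm)
        z = np.sign(x + u) * np.maximum(np.abs(x + u) - lam / rho, 0.0)
        u = u + x - z                                # dual ascent step
    return z

A, b = np.random.randn(50, 20), np.random.randn(50)
w = admm_lasso(A, b)
print(np.count_nonzero(np.abs(w) > 1e-6), "nonzero coefficients")
```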
Pattern Recognition | 2017
T. Hoang Ngan Le; Chenchen Zhu; Yutong Zheng; Khoa Luu; Marios Savvides
This paper presents a Grammar-aware Driver Parsing (GDP) algorithm, with deep features, to provide a novel driver behavior situational awareness system (DB-SAW). A deep model is first trained to extract highly discriminative features of the driver. Then, a grammatical structure on the deep features is defined to be used as prior knowledge for semi-supervised proposal candidate generation. The Region with Convolutional Neural Networks (R-CNN) method is ultimately utilized to precisely segment parts of the driver. The proposed method not only automatically finds parts of the driver in challenging "drivers in the wild" databases, i.e., the standardized Strategic Highway Research Program (SHRP-2) and the challenging Vision for Intelligent Vehicles and Applications (VIVA), but is also able to investigate seat belt usage and the position of the driver's hands (on a phone vs. on the steering wheel). We conduct experiments on various applications and compare our GDP method against other state-of-the-art detection and segmentation approaches, i.e., SDS [1], CRF-RNN [2], DJTL [3], and R-CNN [4], on the SHRP-2 and VIVA databases.
Image and Vision Computing | 2017
T. Hoang Ngan Le; Khoa Luu; Chenchen Zhu; Marios Savvides
This paper presents a robust, fully automatic, and semi-self-training system to detect and segment facial beard/moustache simultaneously in challenging facial images. Based on the observation that certain facial areas, e.g., the cheeks, do not typically contain any facial hair whereas others, e.g., the brows, often do, a self-trained model is first built using the testing image itself. To overcome the limitation that facial hair in the brow regions and in the beard/moustache regions differs in length, density, color, etc., a pre-trained model is also constructed using training data. The pre-trained model is only consulted when the self-trained model produces low-confidence classification results. In the proposed system, we employ superpixels together with a combination of two classifiers, i.e., Random Ferns (rFerns) and Support Vector Machines (SVM), to obtain good classification performance and improve time efficiency. A feature vector, consisting of a Histogram of Gabor (HoG) and a Histogram of Oriented Gradients of Gabor (HOGG) at different directions and frequencies, is generated from both the bounding box of the superpixel and the superpixel foreground. The segmentation result is then refined by our proposed aggregate searching strategy in order to deal with inaccurate landmarking points. Experimental results have demonstrated the robustness and effectiveness of the proposed system. It is evaluated on images drawn from three entire databases, i.e., the Multiple Biometric Grand Challenge (MBGC) still face database, the NIST color Facial Recognition Technology (FERET) database, and a large subset of the Pinellas County database. Highlights: detect and segment beard/moustache simultaneously; use the advantages of both the pre-trained and self-trained models; work on superpixels; propose an aggregate searching strategy to overcome the limits of the landmarker; and propose a new feature that emphasizes the high-frequency information of facial hair.
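The Gabor-based feature can be sketched as a filter bank whose responses are histogrammed per region: facial hair produces strong oriented high-frequency responses that separate it from skin texture. A simplified stand-in for the HoG/HOGG features in the paper; the frequencies, orientations, and bin counts below are illustrative assumptions.

```python
import numpy as np
from scipy.signal import convolve2d

def gabor_kernel(freq, theta, sigma=2.0, size=9):
    """Real part of a Gabor kernel at a given frequency and orientation."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    env = np.exp(-(xr ** 2 + yr ** 2) / (2 * sigma ** 2))
    return env * np.cos(2 * np.pi * freq * xr)

def gabor_histogram_feature(patch, freqs=(0.1, 0.2, 0.3),
                            thetas=(0, np.pi / 4, np.pi / 2, 3 * np.pi / 4),
                            bins=8):
    """Concatenate normalized histograms of Gabor responses over a patch."""
    feats = []
    for f in freqs:
        for t in thetas:
            resp = convolve2d(patch, gabor_kernel(f, t), mode="same")
            hist, _ = np.histogram(resp, bins=bins, range=(-1.0, 1.0))
            feats.append(hist / (hist.sum() + 1e-9))   # normalize per filter
    return np.concatenate(feats)

patch = np.random.rand(32, 32)                 # stand-in for a superpixel box
print(gabor_histogram_feature(patch).shape)    # (3 * 4 * 8,) = (96,)
```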