Publication


Featured research published by Pengfei Dou.


Computer Vision and Pattern Recognition | 2017

End-to-End 3D Face Reconstruction with Deep Neural Networks

Pengfei Dou; Shishir K. Shah; Ioannis A. Kakadiaris

Monocular 3D facial shape reconstruction from a single 2D facial image has been an active research area due to its wide applications. Inspired by the success of deep neural networks (DNN), we propose a DNN-based approach for End-to-End 3D FAce Reconstruction (UH-E2FAR) from a single 2D image. Unlike recent works that reconstruct and refine the 3D face iteratively using both an RGB image and an initial 3D facial shape rendering, our DNN model is end-to-end, so the complicated 3D rendering process can be avoided. Moreover, we integrate two components into the DNN architecture, namely a multi-task loss function and a fusion convolutional neural network (CNN), to improve facial expression reconstruction. With the multi-task loss function, 3D face reconstruction is divided into neutral 3D facial shape reconstruction and expressive 3D facial shape reconstruction. The neutral 3D facial shape is class-specific, so higher-layer features are useful; in contrast, the expressive 3D facial shape favors lower or intermediate layer features. With the fusion-CNN, features from different intermediate layers are fused and transformed to predict the 3D expressive facial shape. Through extensive experiments, we demonstrate the superiority of our end-to-end framework in improving the accuracy of 3D face reconstruction.
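
A minimal PyTorch sketch may help make the two components concrete: a multi-task loss that supervises the neutral and expressive shape parameters separately, and a fusion head that concatenates intermediate- and higher-layer features for the expression branch. All layer sizes and the parameter counts n_id and n_exp are illustrative assumptions, not the published UH-E2FAR architecture.

```python
import torch
import torch.nn as nn

class E2FARSketch(nn.Module):
    def __init__(self, n_id=99, n_exp=29):  # parameter counts are assumptions
        super().__init__()
        # Stand-in for the deep CNN trunk; the real backbone is much deeper.
        self.block1 = nn.Sequential(nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU())
        self.block2 = nn.Sequential(nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU())
        self.pool = nn.AdaptiveAvgPool2d(1)
        # Higher-layer features predict the class-specific neutral shape.
        self.neutral_head = nn.Linear(64, n_id)
        # Fusion head: intermediate and higher-level features are concatenated
        # to predict the expressive shape, mirroring the fusion-CNN idea.
        self.fusion_head = nn.Linear(32 + 64, n_exp)

    def forward(self, x):
        f1 = self.block1(x)                      # intermediate features
        f2 = self.block2(f1)                     # higher-level features
        g1 = self.pool(f1).flatten(1)
        g2 = self.pool(f2).flatten(1)
        return self.neutral_head(g2), self.fusion_head(torch.cat([g1, g2], 1))

def multitask_loss(id_pred, exp_pred, id_gt, exp_gt, lam=1.0):
    # Sum of the two shape-regression terms; lam balances the tasks.
    mse = nn.functional.mse_loss
    return mse(id_pred, id_gt) + lam * mse(exp_pred, exp_gt)
```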


International Conference on Biometrics: Theory, Applications and Systems | 2015

Pose-robust face signature for multi-view face recognition

Pengfei Dou; Lingfeng Zhang; Yuhang Wu; Shishir K. Shah; Ioannis A. Kakadiaris

Despite the great progress achieved in unconstrained face recognition, pose variations still remain a challenging and unsolved practical issue. We propose a novel framework for multi-view face recognition based on extracting and matching pose-robust face signatures from 2D images. Specifically, we propose an efficient method for monocular 3D face reconstruction, which is used to lift the 2D facial appearance to a canonical texture space and estimate the self-occlusion. On the lifted facial texture we then extract various local features, which are further enhanced by the occlusion encodings computed on the self-occlusion mask, resulting in a pose-robust face signature, a novel feature representation of the original 2D facial image. Extensive experiments on two public datasets demonstrate that our method not only simplifies the matching of multi-view 2D facial images by circumventing the requirement for pose-adaptive classifiers, but also achieves superior performance.
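
As a rough illustration of the matching side, the sketch below (NumPy, with hypothetical patch features and a [0, 1] visibility encoding, not the paper's exact occlusion encodings) down-weights patch distances wherever the self-occlusion mask marks the lifted texture as unreliable.

```python
import numpy as np

def pose_robust_distance(feat_a, feat_b, vis_a, vis_b):
    """feat_*: (num_patches, dim) local features on the lifted texture.
    vis_*:  (num_patches,) visibility in [0, 1] from the self-occlusion mask."""
    vis = vis_a * vis_b                     # trust a patch only if visible in both
    per_patch = np.linalg.norm(feat_a - feat_b, axis=1)
    # Occlusion-weighted mean distance; eps guards against empty overlap.
    return float((vis * per_patch).sum() / (vis.sum() + 1e-8))

# Hypothetical signatures of two images of the same subject at different poses.
rng = np.random.default_rng(0)
fa, fb = rng.random((16, 64)), rng.random((16, 64))
va, vb = rng.random(16), rng.random(16)
print(pose_robust_distance(fa, fb, va, vb))
```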


International Conference on Pattern Recognition | 2014

Benchmarking 3D Pose Estimation for Face Recognition

Pengfei Dou; Yuhang Wu; Shishir K. Shah; Ioannis A. Kakadiaris

3D-Model-Aided 2D face recognition (MaFR) has attracted a lot of attention in recent years. By registering a 3D model, facial textures of the gallery and the probe can be lifted and aligned in a common space, alleviating the challenge of pose variations. One obstacle to accurate registration is 3D-2D pose estimation, which is easily affected by errors in landmark localization. In this work, we present the performance that state-of-the-art pose estimation algorithms can reach when paired with state-of-the-art automatic landmark localization methods. We generated an application-specific dataset with more than 59,000 synthetic face images and ground-truth camera poses and landmarks, covering 45 poses and six illumination conditions. Our experiments compared four recently proposed pose estimation algorithms using 2D landmarks detected by two automatic methods. Our results highlight one near-real-time landmark detection method and one highly accurate pose estimation algorithm, which together could boost 3D-Model-Aided 2D face recognition performance.
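
The 3D-2D pose estimation step being benchmarked can be illustrated with OpenCV's generic PnP solver standing in for the four evaluated algorithms: given landmarks on a 3D face model and their detected 2D locations, it recovers the head rotation and translation. The landmark arrays and camera intrinsics below are placeholders.

```python
import numpy as np
import cv2

rng = np.random.default_rng(0)
model_3d = rng.random((68, 3)).astype(np.float32)     # landmarks on a 3D face model
detected_2d = rng.random((68, 2)).astype(np.float32)  # 2D detections from a landmark method

f, cx, cy = 1000.0, 320.0, 240.0                      # assumed camera intrinsics
K = np.array([[f, 0, cx], [0, f, cy], [0, 0, 1]], dtype=np.float32)

# Noisy or biased `detected_2d` directly perturbs the recovered pose,
# which is exactly the sensitivity the benchmark measures.
ok, rvec, tvec = cv2.solvePnP(model_3d, detected_2d, K, None)
if ok:
    R, _ = cv2.Rodrigues(rvec)            # rotation matrix of the head pose
    print(R, tvec.ravel())
```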


British Machine Vision Conference | 2014

Robust 3D Face Shape Reconstruction from Single Images via Two-Fold Coupled Structure Learning and Off-the-Shelf Landmark Detectors

Pengfei Dou; Yuhang Wu; Shishir K. Shah; Ioannis A. Kakadiaris

In this paper, we propose a robust method for monocular face shape reconstruction (MFSR) using a sparse set of facial landmarks that can be detected by most off-the-shelf landmark detectors. Different from the classical shape-from-shading framework, we formulate the MFSR problem as a Two-Fold Coupled Structure Learning (2FCSL) process, which consists of learning a regression between two subspaces spanned by the sparse 3D landmarks and the sparse 2D landmarks, and a coupled dictionary learned on sparse and dense 3D shapes using K-SVD. To handle variations in face pose, we explicitly incorporate pose estimation in our method. Extensive experiments on both synthetic and real data from two challenging datasets, using manual and automatic landmarks, indicate that our method achieves promising performance and is robust to pose variations and landmark localization noise.
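
A hedged sketch of the 2FCSL pipeline, under stated assumptions: ridge regression stands in for the subspace regression from 2D to 3D sparse landmarks, and scikit-learn's DictionaryLearning stands in for the paper's K-SVD; all data is synthetic and all dimensions are illustrative.

```python
import numpy as np
from sklearn.linear_model import Ridge, orthogonal_mp
from sklearn.decomposition import DictionaryLearning

rng = np.random.default_rng(0)
n, k = 200, 64                               # training faces, dictionary atoms
lm2d  = rng.standard_normal((n, 68 * 2))     # 2D sparse landmarks
lm3d  = rng.standard_normal((n, 68 * 3))     # 3D sparse landmarks
dense = rng.standard_normal((n, 300))        # dense 3D shape (flattened)

# Fold 1: regression from the 2D landmark subspace to the 3D landmark subspace.
reg = Ridge(alpha=1.0).fit(lm2d, lm3d)

# Fold 2: a coupled dictionary learned on stacked [sparse; dense] shapes, so
# the sparse and dense parts share one code per face (K-SVD in the paper).
coupled = np.hstack([lm3d, dense])
dl = DictionaryLearning(n_components=k, max_iter=5).fit(coupled)
D_sparse = dl.components_[:, : 68 * 3].T     # (204, k) sparse-shape atoms
D_dense  = dl.components_[:, 68 * 3 :].T     # (300, k) dense-shape atoms

# Inference: 2D landmarks -> sparse 3D -> shared code -> dense 3D shape.
x3d = reg.predict(rng.standard_normal((1, 68 * 2))).ravel()
code = orthogonal_mp(D_sparse, x3d, n_nonzero_coefs=10)
dense_rec = D_dense @ code
print(dense_rec.shape)                       # (300,)
```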


International Conference on Biometrics | 2015

Hierarchical multi-label framework for robust face recognition

Lingfeng Zhang; Pengfei Dou; Shishir K. Shah; Ioannis A. Kakadiaris

In this paper, we propose a patch-based face recognition framework. First, a face image is iteratively divided into multi-level patches and assigned hierarchical labels. Second, local classifiers are built to learn the local prediction of each patch. Third, the hierarchical relationships defined between local patches are used to obtain the global prediction of each patch. We develop three ways to learn the global prediction: majority voting, ℓ1-regularized weighting, and decision rule. Last, the global predictions of the different levels are combined as the final prediction. Experimental results on different face recognition tasks demonstrate the effectiveness of our method.
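
The hierarchy can be sketched in a few lines of Python, here for the majority-voting variant only (the ℓ1-regularized weighting and decision-rule variants would replace the vote); the patch tree and labels are an illustrative assumption.

```python
from collections import Counter

def global_prediction(patch, local_pred, children):
    """patch: node id; local_pred: {id: label}; children: {id: [child ids]}."""
    kids = children.get(patch, [])
    if not kids:                       # leaf patch: local prediction only
        return local_pred[patch]
    votes = Counter(global_prediction(c, local_pred, children) for c in kids)
    return votes.most_common(1)[0][0]  # majority vote over child patches

# Two-level example: the whole face and four quadrant patches.
children = {"face": ["q1", "q2", "q3", "q4"]}
local = {"face": "A", "q1": "A", "q2": "B", "q3": "A", "q4": "A"}
print(global_prediction("face", local, children))   # -> "A"
```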


Pattern Recognition | 2018

Monocular 3D facial shape reconstruction from a single 2D image with coupled-dictionary learning and sparse coding

Pengfei Dou; Yuhang Wu; Shishir K. Shah; Ioannis A. Kakadiaris

Monocular 3D face reconstruction from a single image has been an active research topic due to its wide applications. It has been demonstrated that the 3D face can be reconstructed efficiently using a PCA-based subspace model for facial shape representation and facial landmarks for model parameter estimation. However, due to the limited expressiveness of the subspace model and the inaccuracy of landmark detection, most existing methods are not robust to pose and illumination variation. To overcome this limitation, this work proposes a coupled-dictionary model for parametric facial shape representation and a two-stage framework for 3D face reconstruction from a single 2D image using facial landmarks. Motivated by image super-resolution, the proposed coupled-dictionary model consists of two dictionaries, for the sparse and the dense 3D facial shapes, respectively. In the first stage, the sparse 3D face is estimated from facial landmarks by partial least-squares regression. In the second stage, the dense 3D face is reconstructed by 3D super-resolution on the estimated sparse 3D face. Comprehensive experimental evaluations demonstrate that the proposed coupled-dictionary model outperforms the PCA-based subspace model in 3D face modeling accuracy and that the proposed framework achieves much lower reconstruction error on facial images with pose and illumination variations compared to state-of-the-art algorithms. Moreover, qualitative analysis demonstrates that the proposed method generalizes to different types of data, including facial images, portraits, and facial sketches.
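
A sketch of the two-stage framework as the abstract describes it: partial least-squares regression from landmarks to a sparse 3D face, then sparse coding over a coupled dictionary pair to "super-resolve" the dense shape. The dictionaries here are random placeholders for the learned ones, and all dimensions are assumptions.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.linear_model import orthogonal_mp

rng = np.random.default_rng(1)
lm2d = rng.standard_normal((400, 68 * 2))       # training 2D landmarks
sparse3d = rng.standard_normal((400, 68 * 3))   # training sparse 3D faces

# Stage 1: partial least-squares regression, landmarks -> sparse 3D face.
pls = PLSRegression(n_components=20).fit(lm2d, sparse3d)
x_sparse = pls.predict(rng.standard_normal((1, 68 * 2))).ravel()

# Stage 2: sparse-code the low-res shape, then decode with the coupled
# high-res dictionary (the image super-resolution analogy).
D_lo = rng.standard_normal((68 * 3, 128))       # coupled dictionaries would be
D_hi = rng.standard_normal((3000, 128))         # learned jointly in practice
code = orthogonal_mp(D_lo, x_sparse, n_nonzero_coefs=15)
x_dense = D_hi @ code
print(x_dense.shape)                            # (3000,) dense 3D face
```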


Image and Vision Computing | 2018

Patch-based face recognition using a hierarchical multi-label matcher

Lingfeng Zhang; Pengfei Dou; Ioannis A. Kakadiaris

This paper proposes a hierarchical multi-label matcher for patch-based face recognition. In signature generation, a face image is iteratively divided into multi-level patches. Two different types of patch divisions and signatures are introduced for 2D facial images and texture-lifted images, respectively. The matcher training consists of three steps. First, local classifiers are built to learn the local matching of each patch. Second, the hierarchical relationships defined between local patches are used to learn the global matching of each patch. Three ways are introduced to learn the global matching: majority voting, ℓ1-regularized weighting, and decision rule. Last, the global matchings of the different levels are combined as the final matching. Experimental results on different face recognition tasks demonstrate the effectiveness of the proposed matcher at the cost of gallery generalization. Compared with the UR2D system, the proposed matcher improves the Rank-1 accuracy by 3% on the UHDB31 dataset and 0.18% on the IJB-A dataset.
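
The ℓ1-regularized weighting step can be sketched with a Lasso fit, assuming per-patch match scores and pair labels are available from a held-out set; the data shapes and penalty below are illustrative, not the paper's settings.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(2)
scores = rng.random((1000, 25))        # per-patch match scores, 25 patches
same_id = rng.integers(0, 2, 1000)     # 1 if the pair shares an identity

# Sparse, non-negative weights: only discriminative patches survive.
lasso = Lasso(alpha=0.01, positive=True).fit(scores, same_id)
w = lasso.coef_                         # sparse per-patch weights
fused = scores @ w + lasso.intercept_   # global matching score per pair
print((w > 0).sum(), "patches kept")
```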


Image and Vision Computing | 2018

Multi-view 3D face reconstruction with deep recurrent neural networks

Pengfei Dou; Ioannis A. Kakadiaris

Image-based 3D face reconstruction has great potential in different areas, such as facial recognition, facial analysis, and facial animation. Due to variations in image quality, single-image-based 3D face reconstruction might not be sufficient to accurately reconstruct a 3D face. To overcome this limitation, multi-view 3D face reconstruction uses multiple images of the same subject and aggregates complementary information for better accuracy. Though appealing, it poses multiple challenges in practice. The most significant is the difficulty of establishing coherent and accurate correspondence across a set of images, especially when they are captured under unconstrained, in-the-wild conditions. This work proposes a method, Deep Recurrent 3D FAce Reconstruction (DRFAR), that solves the task of multi-view 3D face reconstruction using a subspace representation of the 3D facial shape and a deep recurrent neural network consisting of a deep convolutional neural network (DCNN) and a recurrent neural network (RNN). The DCNN disentangles the facial identity and facial expression components for each image independently, while the RNN fuses identity-related features from the DCNN and aggregates the identity-specific contextual information, or identity signal, from the whole set of images to estimate the facial identity parameters, which are robust to variations in image quality and consistent over the whole image set. Experimental results indicate significant improvement over the state of the art in both the accuracy and the consistency of 3D face reconstruction. Moreover, face recognition results on IJB-A with the UR2D face recognition pipeline indicate that, compared to single-view 3D face reconstruction, the proposed multi-view algorithm improves the Rank-1 identification rate of UR2D by two percentage points.
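
A hedged PyTorch sketch of the idea: a small CNN produces per-image features, an expression head predicts per-image expression parameters, and a GRU aggregates the identity signal across the view set into one identity estimate. Every layer size here is a placeholder, not the published DRFAR architecture.

```python
import torch
import torch.nn as nn

class DRFARSketch(nn.Module):
    def __init__(self, feat=128, n_id=99, n_exp=29):
        super().__init__()
        self.cnn = nn.Sequential(        # stand-in for the DCNN encoder
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, feat),
        )
        self.exp_head = nn.Linear(feat, n_exp)   # per-image expression
        self.rnn = nn.GRU(feat, feat, batch_first=True)
        self.id_head = nn.Linear(feat, n_id)     # set-level identity

    def forward(self, images):                   # images: (B, T, 3, H, W)
        b, t = images.shape[:2]
        f = self.cnn(images.flatten(0, 1)).view(b, t, -1)
        exp = self.exp_head(f)                   # (B, T, n_exp), per image
        _, h = self.rnn(f)                       # aggregate identity signal
        ident = self.id_head(h[-1])              # (B, n_id), one per subject
        return ident, exp

views = torch.randn(2, 5, 3, 64, 64)             # two subjects, five views each
ident, exp = DRFARSketch()(views)
print(ident.shape, exp.shape)
```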


International Joint Conference on Biometrics | 2017

Evaluation of a 3D-aided pose invariant 2D face recognition system

Xiang Xu; Ha A. Le; Pengfei Dou; Yuhang Wu; Ioannis A. Kakadiaris


arXiv: Computer Vision and Pattern Recognition | 2018

A Face Recognition Signature Combining Patch-based Features with Soft Facial Attributes

Lingfeng Zhang; Pengfei Dou; Ioannis A. Kakadiaris

Collaboration


Dive into Pengfei Dou's collaborations.

Top Co-Authors

Yuhang Wu

University of Houston

Xiang Xu

University of Houston
