Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Jun-Cheng Chen is active.

Publications


Featured research published by Jun-Cheng Chen.


Workshop on Applications of Computer Vision | 2016

Unconstrained face verification using deep CNN features

Jun-Cheng Chen; Vishal M. Patel; Rama Chellappa

In this paper, we present an algorithm for unconstrained face verification based on deep convolutional features and evaluate it on the newly released IARPA Janus Benchmark A (IJB-A) dataset as well as on the traditional Labeled Faces in the Wild (LFW) dataset. The IJB-A dataset includes real-world unconstrained faces from 500 subjects with full pose and illumination variations, which makes it much harder than the LFW and YouTube Faces (YTF) datasets. The deep convolutional neural network (DCNN) is trained using the CASIA-WebFace dataset. Results of experimental evaluations on the IJB-A and LFW datasets are provided.
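
The paper itself contains no code; as a rough illustration of the final verification step it describes, the sketch below compares two DCNN feature vectors by cosine similarity against a decision threshold. The feature dimension and the threshold value are placeholder assumptions, not values from the paper.

```python
import numpy as np

def verify(feat_a: np.ndarray, feat_b: np.ndarray, threshold: float = 0.5) -> bool:
    """Decide whether two deep feature vectors belong to the same identity.

    feat_a, feat_b: DCNN embeddings of two face images. The 0.5 threshold is
    an illustrative placeholder; in practice it is tuned on a validation set.
    """
    a = feat_a / np.linalg.norm(feat_a)
    b = feat_b / np.linalg.norm(feat_b)
    return float(a @ b) >= threshold

# Illustrative usage with random stand-ins for DCNN features (320-D chosen
# arbitrarily here).
rng = np.random.default_rng(0)
f1, f2 = rng.standard_normal(320), rng.standard_normal(320)
print(verify(f1, f2))
```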


International Conference on Computer Vision | 2015

An End-to-End System for Unconstrained Face Verification with Deep Convolutional Neural Networks

Jun-Cheng Chen; Rajeev Ranjan; Amit Kumar; Ching-Hui Chen; Vishal M. Patel; Rama Chellappa

In this paper, we present an end-to-end system for the unconstrained face verification problem based on deep convolutional neural networks (DCNNs). The end-to-end system consists of three modules for face detection, alignment, and verification, and is evaluated using the newly released IARPA Janus Benchmark A (IJB-A) dataset and its extended version, the Janus Challenge Set 2 (JANUS CS2) dataset. The IJB-A and CS2 datasets include real-world unconstrained faces of 500 subjects with significant pose and illumination variations, which makes them much harder than the Labeled Faces in the Wild (LFW) and YouTube Faces (YTF) datasets. Results of experimental evaluations for the proposed system on the IJB-A dataset are provided.
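
As a structural sketch of the three-module design described above, the following outlines how detection, alignment, and verification could be chained. The detector, aligner, and embedder callables are stand-ins, not the paper's actual DCNN components.

```python
import numpy as np

class FaceVerificationPipeline:
    """Sketch of a detect -> align -> verify pipeline; each stage is a
    caller-supplied function standing in for a DCNN-based module."""

    def __init__(self, detector, aligner, embedder):
        self.detector = detector    # image -> list of face bounding boxes
        self.aligner = aligner      # (image, box) -> aligned face crop
        self.embedder = embedder    # aligned crop -> deep feature vector

    def embed_first_face(self, image):
        boxes = self.detector(image)
        if not boxes:
            return None
        feat = self.embedder(self.aligner(image, boxes[0]))
        return feat / np.linalg.norm(feat)

    def verify(self, image_a, image_b, threshold=0.5):
        # Threshold is an illustrative placeholder, tuned in practice.
        fa = self.embed_first_face(image_a)
        fb = self.embed_first_face(image_b)
        if fa is None or fb is None:
            return False
        return float(fa @ fb) >= threshold
```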


International Conference on Biometrics: Theory, Applications and Systems | 2016

A cascaded convolutional neural network for age estimation of unconstrained faces

Jun-Cheng Chen; Amit Kumar; Rajeev Ranjan; Vishal M. Patel; Azadeh Alavi; Rama Chellappa

We propose a coarse-to-fine approach for estimating the apparent age from unconstrained face images using deep convolutional neural networks (DCNNs). The proposed method consists of three modules. The first is a DCNN-based age-group classifier which assigns a given face image to an age group. The second is a collection of DCNN-based regressors which compute a fine-grained age estimate within each age group. Finally, any erroneous age prediction is corrected using an error-correcting mechanism. Experimental evaluations on three publicly available age-estimation datasets show that the proposed approach reliably estimates age; in addition, the coarse-to-fine strategy and the error-correction module significantly improve performance.
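
A minimal sketch of the coarse-to-fine cascade described above, assuming hypothetical age-group boundaries and stand-in classifier/regressor callables. The error-correcting step here simply re-assigns the group when the regressed age falls outside the predicted group's range, which is one plausible reading of the mechanism, not the paper's exact rule.

```python
import numpy as np

# Hypothetical age-group boundaries; the paper's actual grouping may differ.
AGE_GROUPS = [(0, 12), (13, 25), (26, 45), (46, 100)]

def estimate_age(face_feat, group_classifier, group_regressors, corrector=None):
    """Coarse-to-fine age estimation.

    group_classifier: features -> index of an age group (coarse stage)
    group_regressors: one regressor per group, features -> age (fine stage)
    corrector: optional step that re-assigns the group when the regressed
               age is inconsistent with the predicted group.
    """
    g = group_classifier(face_feat)              # coarse: pick an age group
    age = group_regressors[g](face_feat)         # fine: regress within group
    lo, hi = AGE_GROUPS[g]
    if corrector is not None and not (lo <= age <= hi):
        g = corrector(face_feat, g, age)         # correct the erroneous group
        age = group_regressors[g](face_feat)
    return age
```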


International Conference on Biometrics: Theory, Applications and Systems | 2015

Unconstrained face verification using fisher vectors computed from frontalized faces

Jun-Cheng Chen; Swami Sankaranarayanan; Vishal M. Patel; Rama Chellappa

We present an algorithm for unconstrained face verification using Fisher vectors computed from frontalized off-frontal gallery and probe faces. In the training phase, we use the Labeled Faces in the Wild (LFW) dataset to learn the Fisher vector encoding and the joint Bayesian metric. Given an image containing the query face, we perform face detection and landmark localization followed by frontalization to normalize the effect of pose. We further extract dense SIFT features which are then encoded using the Fisher vector learnt during the training phase. The similarity scores are then computed using the learnt joint Bayesian metric. CMC curves and FAR/TAR numbers calculated for a subset of the IARPA JANUS challenge dataset are presented.
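
For readers unfamiliar with the encoding step, the sketch below implements a standard Fisher vector under a diagonal-covariance GMM (first- and second-order statistics with power and L2 normalization). This is the textbook formulation rather than the authors' code, and the descriptor dimensions and GMM size are illustrative.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def fisher_vector(local_feats: np.ndarray, gmm: GaussianMixture) -> np.ndarray:
    """Encode a set of local descriptors (e.g. dense SIFT from a frontalized
    face) as a Fisher vector under a diagonal-covariance GMM."""
    n = local_feats.shape[0]
    gamma = gmm.predict_proba(local_feats)        # (n, K) soft assignments
    mu, sigma2, w = gmm.means_, gmm.covariances_, gmm.weights_
    sigma = np.sqrt(sigma2)
    parts = []
    for k in range(gmm.n_components):
        z = (local_feats - mu[k]) / sigma[k]
        g = gamma[:, [k]]
        u = (g * z).sum(axis=0) / (n * np.sqrt(w[k]))              # 1st order
        v = (g * (z**2 - 1)).sum(axis=0) / (n * np.sqrt(2 * w[k])) # 2nd order
        parts.extend([u, v])
    fv = np.concatenate(parts)
    fv = np.sign(fv) * np.sqrt(np.abs(fv))        # power normalization
    return fv / (np.linalg.norm(fv) + 1e-12)      # L2 normalization

# Illustrative usage: 128-D SIFT-like descriptors, 16-component GMM.
rng = np.random.default_rng(0)
gmm = GaussianMixture(n_components=16, covariance_type="diag")
gmm.fit(rng.standard_normal((2000, 128)))
print(fisher_vector(rng.standard_normal((500, 128)), gmm).shape)  # (4096,)
```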


Workshop on Applications of Computer Vision | 2017

Deep Heterogeneous Feature Fusion for Template-Based Face Recognition

Navaneeth Bodla; Jingxiao Zheng; Hongyu Xu; Jun-Cheng Chen; Carlos Castillo; Rama Chellappa

Although deep learning has yielded impressive performance for face recognition, many studies have shown that different networks learn different feature maps: while some networks are more receptive to pose and illumination, others appear to capture more local information. Thus, in this work, we propose a deep heterogeneous feature-fusion network to exploit the complementary information present in features generated by different deep convolutional neural networks (DCNNs) for template-based face recognition, where a template refers to a set of still face images or video frames from different sources, which introduces more blur, pose, illumination, and other variations than traditional face datasets. The proposed approach efficiently fuses the discriminative information of different deep features by 1) jointly learning the non-linear high-dimensional projection of the deep features and 2) generating a more discriminative template representation which preserves the inherent geometry of the deep features in the feature space. Experimental results on the IARPA Janus Challenge Set 3 (Janus CS3) dataset demonstrate that the proposed method can effectively improve recognition performance. In addition, we present a series of covariate experiments on the face verification task for in-depth qualitative evaluation of the proposed approach.
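
The sketch below illustrates only the fusion interface implied by the abstract: per-network features are normalized, concatenated, passed through a learned projection, and pooled into a template vector. The projection here is a random stand-in; the paper learns it jointly with the template representation.

```python
import numpy as np

def fuse_templates(feature_sets, projection):
    """Fuse per-image features from several DCNNs into one template vector.

    feature_sets: list over networks; each entry is an (n_images, d_i) array
                  of deep features for the same template.
    projection:   a learned (possibly non-linear) map on the concatenated
                  feature space; here a stand-in callable.
    """
    per_net = [f / np.linalg.norm(f, axis=1, keepdims=True) for f in feature_sets]
    concat = np.concatenate(per_net, axis=1)   # heterogeneous features, side by side
    fused = projection(concat)                 # learned joint projection
    template = fused.mean(axis=0)              # pool images into one template
    return template / np.linalg.norm(template)

# Illustrative usage: random features from two hypothetical networks and a
# random non-linear projection standing in for the learned one.
rng = np.random.default_rng(0)
W = rng.standard_normal((512 + 320, 256))
t = fuse_templates([rng.standard_normal((5, 512)), rng.standard_normal((5, 320))],
                   lambda x: np.tanh(x @ W))
print(t.shape)  # (256,)
```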


International Conference on Image Processing | 2016

Fisher vector encoded deep convolutional features for unconstrained face verification

Jun-Cheng Chen; Jingxiao Zheng; Vishal M. Patel; Rama Chellappa

We present a method to combine the Fisher vector representation and Deep Convolutional Neural Network (DCNN) features to generate a representation, called the Fisher vector encoded DCNN (FV-DCNN) features, for unconstrained face verification. One of the key features of our method is that spatial and appearance information are processed simultaneously when learning the Gaussian mixture model used to encode the DCNN features. Evaluations on two challenging verification datasets show that the proposed FV-DCNN method is able to capture salient local features and performs well when compared to many state-of-the-art face verification methods.
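
One way to process spatial and appearance information simultaneously, as the abstract describes, is to append normalized spatial coordinates to each cell of a convolutional feature map before learning the GMM. The sketch below shows that augmentation step under illustrative dimensions; whether this matches the authors' exact construction is an assumption.

```python
import numpy as np

def append_spatial(feat_map: np.ndarray) -> np.ndarray:
    """Turn a DCNN feature map (H, W, C) into local descriptors carrying both
    appearance and location, by appending normalized (x, y) coordinates to
    each spatial cell. A Fisher vector GMM learned on these joint descriptors
    then models spatial and appearance information together."""
    h, w, c = feat_map.shape
    ys, xs = np.meshgrid(np.linspace(-0.5, 0.5, h),
                         np.linspace(-0.5, 0.5, w), indexing="ij")
    coords = np.stack([xs, ys], axis=-1)                 # (H, W, 2)
    joint = np.concatenate([feat_map, coords], axis=-1)  # (H, W, C + 2)
    return joint.reshape(h * w, c + 2)

# Illustrative usage: a random 7x7x256 convolutional feature map.
descs = append_spatial(np.random.default_rng(0).standard_normal((7, 7, 256)))
print(descs.shape)  # (49, 258)
```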


Information Theory and Applications | 2016

Towards the design of an end-to-end automated system for image and video-based recognition

Rama Chellappa; Jun-Cheng Chen; Rajeev Ranjan; Swami Sankaranarayanan; Amit Kumar; Vishal M. Patel; Carlos D. Castillo

Over many decades, researchers working in object recognition have longed for an end-to-end automated system that will simply accept 2D or 3D images or videos as inputs and output the labels of objects in the input data. Computer vision methods, which use representations derived from geometric, radiometric, and neural considerations together with statistical and structural matchers, and artificial neural network-based methods, in which a multi-layer network learns the mapping from inputs to class labels, have provided competing approaches for image recognition problems. Over the last four years, methods based on Deep Convolutional Neural Networks (DCNNs) have shown impressive performance improvements on object detection/recognition challenge problems. This has been made possible by the availability of large annotated datasets, a better understanding of the non-linear mapping between images and class labels, and the affordability of GPUs. In this paper, we present a brief history of developments in computer vision and artificial neural networks over the last forty years for the problem of image-based recognition. We then present the design details of a deep learning system for end-to-end unconstrained face verification/recognition. Some open issues regarding DCNNs for object recognition problems are then discussed. We caution the readers that the views expressed in this paper are those of the authors and the authors only!


International Conference on Image Processing | 2015

Landmark-based fisher vector representation for video-based face verification

Jun-Cheng Chen; Vishal M. Patel; Rama Chellappa

Unconstrained video-based face verification is a challenging problem because of dramatic variations in pose, illumination, and image quality of each face in a video. In this paper, we propose a landmark-based Fisher vector representation for video-to-video face verification. The proposed representation encodes dense multi-scale SIFT features extracted from patches centered at detected facial landmarks, and face similarity is computed with the distance measure learned from joint Bayesian metric learning. Experimental results demonstrate that our approach achieves significantly better performance than other competitive video-based face verification algorithms on two challenging unconstrained video face datasets, the Multiple Biometric Grand Challenge (MBGC) and the Face and Ocular Challenge Series (FOCS).
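
Both this paper and the Fisher vector work above score pairs with a learned joint Bayesian metric. As background, the sketch below shows the standard joint Bayesian similarity form from the literature, with the matrices A and G assumed to be already estimated (the EM-style learning procedure is omitted).

```python
import numpy as np

def joint_bayesian_score(x1: np.ndarray, x2: np.ndarray,
                         A: np.ndarray, G: np.ndarray) -> float:
    """Joint Bayesian similarity between two face representations (here,
    landmark-based Fisher vectors). A and G are derived from the learned
    between-identity and within-identity covariances; higher scores mean the
    pair is more likely to share an identity."""
    return float(x1 @ A @ x1 + x2 @ A @ x2 - 2.0 * x1 @ G @ x2)
```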


Proceedings of the National Academy of Sciences of the United States of America | 2018

Face recognition accuracy of forensic examiners, superrecognizers, and face recognition algorithms

P. Jonathon Phillips; Amy N. Yates; Ying Hu; Carina A. Hahn; Eilidh Noyes; Kelsey Jackson; Jacqueline G. Cavazos; Géraldine Jeckeln; Rajeev Ranjan; Swami Sankaranarayanan; Jun-Cheng Chen; Carlos D. Castillo; Rama Chellappa; David White; Alice J. O’Toole

Significance: This study measures face identification accuracy for an international group of professional forensic facial examiners working under circumstances that apply in real-world casework. Examiners and other human face "specialists," including forensically trained facial reviewers and untrained superrecognizers, were more accurate than the control groups on a challenging test of face identification. Therefore, specialists are the best available human solution to the problem of face identification. We present data comparing state-of-the-art face recognition technology with the best human face identifiers. The best machine performed in the range of the best humans: professional facial examiners. However, optimal face identification was achieved only when humans and machines worked in collaboration.

Achieving the upper limits of face identification accuracy in forensic applications can minimize errors that have profound social and personal consequences. Although forensic examiners identify faces in these applications, systematic tests of their accuracy are rare. How can we achieve the most accurate face identification: using people and/or machines working alone or in collaboration? In a comprehensive comparison of face identification by humans and computers, we found that forensic facial examiners, facial reviewers, and superrecognizers were more accurate than fingerprint examiners and students on a challenging face identification test. Individual performance on the test varied widely. On the same test, four deep convolutional neural networks (DCNNs), developed between 2015 and 2017, identified faces within the range of human accuracy. Accuracy of the algorithms increased steadily over time, with the most recent DCNN scoring above the median of the forensic facial examiners. Using crowd-sourcing methods, we fused the judgments of multiple forensic facial examiners by averaging their rating-based identity judgments. Accuracy was substantially better for fused judgments than for individuals working alone. Fusion also served to stabilize performance, boosting the scores of lower-performing individuals and decreasing variability. Single forensic facial examiners fused with the best algorithm were more accurate than the combination of two examiners. Therefore, collaboration among humans and between humans and machines offers tangible benefits to face identification accuracy in important applications. These results offer an evidence-based roadmap for achieving the most accurate face identification possible.
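
As a toy illustration of the fusion rule described above (averaging rating-based identity judgments, optionally including an algorithm's scores scaled to the same range), with entirely invented ratings:

```python
import numpy as np

# Hypothetical rating-based identity judgments on a -3..+3 same/different
# scale, one row per judge, one column per image pair.
examiner_ratings = np.array([
    [+2, -1, +3, -2],   # examiner 1
    [+1, -2, +2, -3],   # examiner 2
])
algorithm_scores = np.array([+2.5, -1.5, +2.0, -2.5])  # algorithm, same scale

fused_humans = examiner_ratings.mean(axis=0)           # examiner-only fusion
fused_human_machine = np.vstack(                       # one examiner + machine
    [examiner_ratings[:1], algorithm_scores]).mean(axis=0)
print(fused_humans, fused_human_machine)
```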


IEEE International Conference on Automatic Face & Gesture Recognition | 2017

A Proximity-Aware Hierarchical Clustering of Faces

W. S. Lin; Jun-Cheng Chen; Rama Chellappa

In this paper, we propose an unsupervised face clustering algorithm called "Proximity-Aware Hierarchical Clustering" (PAHC) that exploits the local structure of deep representations. In the proposed method, a similarity measure between deep features is computed by evaluating linear SVM margins. SVMs are trained using nearest neighbors of sample data, and thus do not require any external training data. Clusters are then formed by thresholding the similarity scores. We evaluate the clustering performance using three challenging unconstrained face datasets: Celebrity in Frontal-Profile (CFP), IARPA JANUS Benchmark A (IJB-A), and JANUS Challenge Set 3 (JANUS CS3). Experimental results demonstrate that the proposed approach achieves significant improvements over state-of-the-art methods. Moreover, we show that the proposed clustering algorithm can be applied to curate a large-scale, noisy training dataset while maintaining a sufficient number of images and their variations due to nuisance factors. The face verification performance on JANUS CS3 improves significantly after fine-tuning a DCNN model with the curated MS-Celeb-1M dataset, which contains over three million face images.
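
A rough sketch of the similarity computation described in the abstract, assuming that each face's linear SVM is trained with its nearest neighbors as positives and farther neighbors as negatives, and that a pair's similarity averages the two directional margins. The hyperparameters and the thresholding-by-connected-components step are illustrative simplifications of the hierarchical procedure.

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components
from sklearn.neighbors import NearestNeighbors
from sklearn.svm import LinearSVC

def pairwise_svm_similarity(feats, k_pos=5, k_neg=20):
    """For each face, train a linear SVM separating its nearest neighbors
    (positives) from farther neighbors (negatives), score all faces by the
    SVM margin, and symmetrize. k_pos/k_neg are illustrative choices."""
    n = feats.shape[0]
    nn = NearestNeighbors(n_neighbors=k_pos + k_neg + 1).fit(feats)
    _, idx = nn.kneighbors(feats)
    margins = np.zeros((n, n))
    for i in range(n):
        pos, neg = idx[i, :k_pos + 1], idx[i, k_pos + 1:]
        X = feats[np.concatenate([pos, neg])]
        y = np.r_[np.ones(len(pos)), np.zeros(len(neg))]
        margins[i] = LinearSVC(C=1.0).fit(X, y).decision_function(feats)
    return 0.5 * (margins + margins.T)     # average the two directions

def cluster_by_threshold(sim, tau):
    """Link pairs whose similarity exceeds tau and take connected components
    (a flat simplification of the paper's hierarchical merging)."""
    _, labels = connected_components(csr_matrix(sim > tau), directed=False)
    return labels
```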

Collaboration


Dive into Jun-Cheng Chen's collaborations.

Top Co-Authors

Carlos Castillo
Qatar Computing Research Institute

Alice J. O'Toole
University of Texas at Dallas

Connor Parde
University of Texas at Dallas

Matthew Hill
University of Texas at Dallas

Eilidh Noyes
University of Texas at Dallas

Y. Ivette Colon
University of Texas at Dallas

Amy N. Yates
National Institute of Standards and Technology