Network


Latest external collaborations at the country level. Dive into details by clicking on the dots.

Hotspot


Dive into the research topics where Patrick J. Grother is active.

Publication


Featured research published by Patrick J. Grother.


Computer Vision and Pattern Recognition | 2005

Face recognition based on frontal views generated from non-frontal images

Volker Blanz; Patrick J. Grother; P J. Phillips; Thomas Vetter

This paper presents a method for face recognition across large changes in viewpoint. Our method is based on a morphable model of 3D faces that represents face-specific information extracted from a dataset of 3D scans. For non-frontal face recognition in 2D still images, the morphable model can be incorporated in two different approaches: in the first, it serves as a preprocessing step by estimating the 3D shape of novel faces from the non-frontal input images, and generating frontal views of the reconstructed faces at a standard illumination using 3D computer graphics. The transformed images are then fed into state-of-the-art face recognition systems that are optimized for frontal views. This method was shown to be extremely effective in the Face Recognition Vendor Test FRVT 2002. In the process of estimating the 3D shape of a face from an image, a set of model coefficients is estimated. In the second approach, face recognition is performed directly from these coefficients. In this paper we explain the algorithm used to preprocess the images in FRVT 2002, present additional FRVT 2002 results, and compare these results to recognition from the model coefficients.


Computer Vision and Pattern Recognition | 2015

Pushing the frontiers of unconstrained face detection and recognition: IARPA Janus Benchmark A

Brendan Klare; Benjamin Klein; Emma Taborsky; Austin Blanton; Jordan Cheney; Kristen Allen; Patrick J. Grother; Alan Mah; Mark J. Burge; Anil K. Jain

Rapid progress in unconstrained face recognition has resulted in a saturation in recognition accuracy for current benchmark datasets. While important for early progress, a chief limitation in most benchmark datasets is the use of a commodity face detector to select face imagery. The implication of this strategy is restricted variations in face pose and other confounding factors. This paper introduces the IARPA Janus Benchmark A (IJB-A), a publicly available "media in the wild" dataset containing 500 subjects with manually localized face images. Key features of the IJB-A dataset are: (i) full pose variation, (ii) joint use for face recognition and face detection benchmarking, (iii) a mix of images and videos, (iv) wider geographic variation of subjects, (v) protocols supporting both open-set identification (1:N search) and verification (1:1 comparison), (vi) an optional protocol that allows modeling of gallery subjects, and (vii) ground truth eye and nose locations. The dataset has been developed using 1,501,267 crowd-sourced annotations. Baseline accuracies for both face detection and face recognition from commercial and open source algorithms demonstrate the challenge offered by this new unconstrained benchmark.


Pattern Recognition | 1994

Evaluation of Pattern Classifiers for Fingerprint and OCR Applications

James L. Blue; Gerald T. Candela; Patrick J. Grother; Rama Chellappa; Charles L. Wilson

The classification accuracy of four statistical and three neural network classifiers for two image-based pattern classification problems is evaluated. These are optical character recognition (OCR) for isolated handprinted digits, and fingerprint classification. It is hoped that the evaluation results reported will be useful for designers of practical systems for these two important commercial applications. For the OCR problem, the Karhunen-Loeve (K-L) transform of the images is used to generate the input feature set. Similarly for the fingerprint problem, the K-L transform of the ridge directions is used to generate the input feature set. The statistical classifiers used are Euclidean minimum distance, quadratic minimum distance, normal, and k-nearest neighbor. The neural network classifiers used are multi-layer perceptron, radial basis function, and probabilistic neural network. The OCR data consist of 7480 digit images for training and 23,140 digit images for testing. The fingerprint data used consist of 2000 training and 2000 testing images. In addition to evaluation for accuracy, the multi-layer perceptron and radial basis function networks are evaluated for size and generalization capability. For the evaluated datasets the best accuracy obtained for either problem is provided by a probabilistic neural network. Minimum classification error is 2.5% for OCR and 7.2% for fingerprints.


NIST Interagency/Internal Report (NISTIR) - 5469 | 1994

NIST Form-Based Handprint Recognition System

Michael D. Garris; James L. Blue; Gerald T. Candela; D L. Dommick; Jon C. Geist; Patrick J. Grother; Stanley Janet; Charles L. Wilson



Pattern Recognition | 1997

Fast implementations of nearest neighbor classifiers

Patrick J. Grother; Gerald T. Candela; James L. Blue

Standard implementations of non-parametric classifiers have large computational requirements. Parzen classifiers use the distances of an unknown vector to all N prototype samples, and consequently exhibit O(N) behavior in both memory and time. We describe four techniques for expediting the nearest neighbor methods: replacing the linear search with a new kd-tree method, exhibiting approximately O(N^(1/2)) behavior; employing an L∞ instead of L2 distance metric; using variance-ordered features; and rejecting prototypes by evaluating distances in low-dimensionality subspaces. We demonstrate that variance-ordered features yield significant efficiency gains over the same features linearly transformed to have uniform variance. We give results for a large OCR problem, but note that the techniques expedite recognition for arbitrary applications. Three of four techniques preserve recognition accuracy.
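Two of the four speedups, the L∞ metric and partial-distance rejection over variance-ordered features, can be sketched in a few lines. This is a minimal illustration on random data, not the paper's implementation; the kd-tree component is omitted.

```python
import numpy as np

def nearest_neighbor(query, prototypes):
    """Brute-force 1-NN with two of the described speedups: features
    are examined in decreasing-variance order, and the running
    L-infinity (Chebyshev) distance lets us reject a prototype as
    soon as one coordinate difference exceeds the best distance so
    far (a partial-distance rejection in a low-dim subspace)."""
    order = np.argsort(prototypes.var(axis=0))[::-1]  # high-variance features first
    best_idx, best_dist = -1, np.inf
    for i, p in enumerate(prototypes):
        d = 0.0
        for j in order:
            d = max(d, abs(query[j] - p[j]))  # L-infinity accumulates via max
            if d >= best_dist:                # reject early: cannot beat the best
                break
        else:
            best_idx, best_dist = i, d        # survived all features: new best
    return best_idx, best_dist

rng = np.random.default_rng(0)
protos = rng.normal(size=(2000, 64))
q = rng.normal(size=64)
idx, dist = nearest_neighbor(q, protos)
```

Because the L∞ distance accumulates via a running maximum, a prototype can be discarded as soon as a single high-variance coordinate exceeds the best distance found so far, which is why variance ordering pays off.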


Computer Vision and Pattern Recognition | 2004

How features of the human face affect recognition: a statistical comparison of three face recognition algorithms

Geof H. Givens; J.R. Beveridge; Bruce A. Draper; Patrick J. Grother; P J. Phillips

Recognition difficulty is statistically linked to 11 subject covariate factors such as age and gender for three face recognition algorithms: principal component analysis, an interpersonal image difference classifier, and an elastic bunch graph matching algorithm. The covariates assess race, gender, age, glasses use, facial hair, bangs, mouth state, complexion, state of eyes, makeup use, and facial expression. We use two statistical models. First, an ANOVA relates covariates to normalized similarity scores. Second, logistic regression relates subject covariates to probability of rank one recognition. These models have strong explanatory power as measured by R² and deviance reduction, while providing complementary and corroborative results. Some factors, like changes to the eye status, affect all algorithms similarly. Other factors, such as race, affect different algorithms differently. Tabular and graphical summaries of results provide a wealth of empirical evidence. Plausible explanations of many results can be motivated from knowledge of the algorithms. Other results are surprising and suggest a need for further study.
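The second model, logistic regression of rank-one recognition success on a subject covariate, can be sketched as follows. The data, the covariate choice (glasses use), and the effect size are synthetic assumptions, not figures from the study; plain gradient ascent on the log-likelihood stands in for a statistics package.

```python
import numpy as np

# Synthetic illustration of the abstract's second model: logistic
# regression relating a subject covariate to the probability of
# rank-one recognition. All numbers here are assumptions.
rng = np.random.default_rng(1)
n = 5000
glasses = rng.integers(0, 2, n)               # 1 = subject wears glasses
true_logit = 1.0 - 0.8 * glasses              # assumed: glasses lower success odds
rank_one = rng.random(n) < 1.0 / (1.0 + np.exp(-true_logit))

X = np.column_stack([np.ones(n), glasses])    # intercept + covariate
y = rank_one.astype(float)
w = np.zeros(2)
for _ in range(2000):                         # gradient ascent on the log-likelihood
    p = 1.0 / (1.0 + np.exp(-X @ w))          # current success probabilities
    w += (X.T @ (y - p)) / n                  # mean score function as the gradient

# w[1] estimates the covariate's effect on the log-odds of rank-one success;
# a negative value means the covariate makes recognition harder.
```

With real data, the same fit also yields the deviance reduction the abstract cites as its measure of explanatory power.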


NIST Interagency/Internal Report (NISTIR) - 7836 | 2012

IREX III - Performance of Iris Identification Algorithms

Patrick J. Grother; George W. Quinn; James R. Matey; Mei L. Ngan; Wayne J. Salamon; Gregory P. Fiumara; Craig I. Watson

Disclaimer: Specific hardware and software products identified in this report were used in order to perform the evaluations described in this document. In no case does identification of any commercial product, trade name, or vendor imply recommendation or endorsement by the National Institute of Standards and Technology, nor does it imply that the products and equipment identified are necessarily the best available for the purpose.


Proceedings of SPIE | 1992

Karhunen Loève feature extraction for neural handwritten character recognition

Patrick J. Grother

The optimality of the Karhunen Loeve (KL) transform is well known. Since its basis is the eigenvector set of the covariance matrix, a statistical, not functional, representation of the variance in pattern ensembles is generated. By using the KL transform coefficients as a natural feature representation of a character image, the eigenvector set can be regarded as an unsupervised biological feature extractor for a (neural) classifier. The covariance matrix and its eigenvectors are obtained from 76,753 handwritten digits. This operation is a unique expense; once the basis set is calculated it forms a linear first layer of a three weight layer feed forward network. The subsequent nonlinear perceptron layers are trained using a scaled conjugate gradient algorithm that typically affords an order of magnitude reduction in computation over the ubiquitous back-propagation algorithm. In conjunction with a massively parallel computer, training is expedited such that tens of initially different random weight sets are trained and evaluated. Increase in training set size (up to 76,755 patterns) gives less accurate learning but improved generalization on the fixed disjoint test set. A neural classifier is realized that recognizes 96.1% of 15,000 handwritten digits from 944 different writers. This recognition is attributed to the energy compaction optimality of the KL transform.
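The fixed linear first layer described above, the eigenvector set of the training covariance matrix, can be sketched as follows. Random vectors stand in for the handwritten-digit images, and the 8x8 image size and 16-coefficient cutoff are assumptions for illustration.

```python
import numpy as np

# Sketch of Karhunen-Loeve feature extraction: the eigenvectors of
# the training covariance matrix form a fixed linear layer, and the
# top-k projection coefficients become the classifier's input features.
rng = np.random.default_rng(0)
images = rng.normal(size=(1000, 64))          # stand-in for 8x8 digit images

mean = images.mean(axis=0)
centered = images - mean
cov = centered.T @ centered / len(images)     # 64x64 covariance matrix
eigvals, eigvecs = np.linalg.eigh(cov)        # eigh returns ascending eigenvalues
order = np.argsort(eigvals)[::-1]             # sort by variance captured
basis = eigvecs[:, order[:16]]                # keep the top 16 eigenvectors

features = centered @ basis                   # KL coefficients: the feature set
```

Computing the eigenvectors is, as the abstract notes, a one-time expense; at recognition time the transform is a single matrix multiply per image.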


Pattern Recognition | 1994

Binary decision clustering for neural-network-based optical character recognition

Charles L. Wilson; Patrick J. Grother; Constance S. Barnes

A multiple neural network system for handprinted character recognition is presented. It consists of a set of input networks which discriminate between all two-class pairs, for example “1” from “7”, and an output network which takes the signals from the input networks and yields a digit recognition decision. For a ten-digit classification problem this requires 45 binary decision machines (BDMs) in the input network. The output stage is typically a single trained network. The neural network paradigms adopted in these input and output networks are the multi-layer perceptron, the radial basis function network and the probabilistic neural network. A simple majority vote rule was also tested in place of the output network. The various resulting digit classifiers were trained on 7480 isolated images and tested on a disjoint set of size 23,140. The Karhunen-Loeve transforms of the images of each pair of two classes formed the training set for each BDM. Several different combinations of neural network input and output structures gave similar classification performance. The minimum error rate achieved was 2.5% with no rejection, obtained by combining a PNN input array with an RBF output stage. This combined network had an error rate of 0.7% with 10% rejection.
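The architecture can be sketched as one binary decision machine per class pair, C(10,2) = 45 machines for ten digits, combined by the simple majority vote rule the paper tested. Nearest-class-mean rules on synthetic Gaussian data stand in for the trained networks and K-L features; this is an illustration of the voting structure, not the paper's system.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)
classes = list(range(10))
means = rng.normal(scale=5.0, size=(10, 16))            # well-separated class centers
train = {c: means[c] + rng.normal(size=(50, 16)) for c in classes}

# One binary decision machine per class pair; here each "machine" is
# a nearest-class-mean rule standing in for a trained network.
pairs = list(combinations(classes, 2))                  # 45 pairs for 10 digits
centroids = {c: train[c].mean(axis=0) for c in classes}

def classify(x):
    votes = np.zeros(10, dtype=int)
    for a, b in pairs:                                  # every BDM casts one vote
        da = np.linalg.norm(x - centroids[a])
        db = np.linalg.norm(x - centroids[b])
        votes[a if da < db else b] += 1
    return int(np.argmax(votes))                        # majority vote over 45 machines
```

The correct class can win at most 9 votes (one per opposing class), so a clean majority decision requires most of its pairwise machines to agree.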


Computer Vision and Pattern Recognition | 2017

IARPA Janus Benchmark-B Face Dataset

Cameron Whitelam; Emma Taborsky; Austin Blanton; Brianna Maze; Jocelyn C. Adams; Timothy Miller; Nathan D. Kalka; Anil K. Jain; James A. Duncan; Kristen Allen; Jordan Cheney; Patrick J. Grother

Despite the importance of rigorous testing data for evaluating face recognition algorithms, all major publicly available faces-in-the-wild datasets are constrained by the use of a commodity face detector, which limits, among other conditions, pose, occlusion, expression, and illumination variations. In 2015, the NIST IJB-A dataset, which consists of 500 subjects, was released to mitigate these constraints. However, the relatively low number of impostor and genuine matches per split in the IJB-A protocol limits the evaluation of an algorithm at operationally relevant assessment points. This paper builds upon IJB-A and introduces the IARPA Janus Benchmark-B (NIST IJB-B) dataset, a superset of IJB-A. IJB-B consists of 1,845 subjects with human-labeled ground truth face bounding boxes, eye/nose locations, and covariate metadata such as occlusion, facial hair, and skin tone for 21,798 still images and 55,026 frames from 7,011 videos. IJB-B was also designed to have a more uniform geographic distribution of subjects across the globe than that of IJB-A. Test protocols for IJB-B represent operational use cases including access point identification, forensic quality media searches, surveillance video searches, and clustering. Finally, all images and videos in IJB-B are published under a Creative Commons distribution license and, therefore, can be freely distributed among the research community.

Collaboration


Dive into Patrick J. Grother's collaborations.

Top Co-Authors

George W. Quinn, National Institute of Standards and Technology
Charles L. Wilson, National Institute of Standards and Technology
Gerald T. Candela, National Institute of Standards and Technology
Craig I. Watson, National Institute of Standards and Technology
Elham Tabassi, National Institute of Standards and Technology
Stanley Janet, National Institute of Standards and Technology
James L. Blue, National Institute of Standards and Technology
James R. Matey, National Institute of Standards and Technology
Mei L. Ngan, National Institute of Standards and Technology
R. A. Wilkinson, National Institute of Standards and Technology