Publication


Featured research published by Michael D. Garris.


NIST Interagency/Internal Report (NISTIR) - 5469 | 1994

NIST Form-Based Handprint Recognition System

Michael D. Garris; James L. Blue; Gerald T. Candela; Darrin L. Dimmick; Jon C. Geist; Patrick J. Grother; Stanley Janet; Charles L. Wilson


International Symposium on Neural Networks | 1991

Methods for enhancing neural network handwritten character recognition

Michael D. Garris; R. A. Wilkinson; Charles L. Wilson

An efficient method for increasing the generalization capacity of neural character recognition is presented. The network uses a biologically inspired architecture for feature extraction and character classification. The numerical methods used are optimized for use on massively parallel array processors. The method for training set construction, when applied to handwritten digit recognition, yielded a writer-independent recognition rate of 92%. The activation strength produced by network recognition is an effective statistical confidence measure of the accuracy of recognition. A method of using the activation strength for reclassification is described which, when applied to handwritten digit recognition, reduced substitutional errors to 2.2%.
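
As a minimal sketch of using activation strength as a confidence measure with a reject option, the Python snippet below thresholds the strongest output activation. The threshold value and the toy activation vectors are illustrative assumptions, not the network or parameters from the paper.

```python
import numpy as np

def classify_with_reject(activations, reject_threshold=0.5):
    """Return (predicted_class, accepted) for one output-activation vector.

    The strongest activation serves as the confidence score; patterns whose
    top activation falls below the threshold are rejected rather than risked
    as substitution errors.  The threshold here is purely illustrative.
    """
    activations = np.asarray(activations, dtype=float)
    predicted = int(np.argmax(activations))
    confidence = float(activations[predicted])
    return predicted, confidence >= reject_threshold

# A confident "7" is accepted; an ambiguous pattern is rejected.
print(classify_with_reject([0.02, 0.01, 0.03, 0.0, 0.0, 0.01, 0.0, 0.91, 0.01, 0.01]))
print(classify_with_reject([0.30, 0.05, 0.02, 0.28, 0.0, 0.02, 0.0, 0.25, 0.05, 0.03]))
```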


Proceedings of the Conference on Analysis of Neural Network Applications | 1991

Analysis of a biologically motivated neural network for character recognition

Michael D. Garris; R. A. Wilkinson; Charles L. Wilson

A neural network architecture for size-invariant and local shape-invariant digit recognition has been developed. The network is based on known biological data on the structure of vertebrate vision but is implemented using more conventional numerical methods for image feature extraction and pattern classification. The input receptor field structure of the network uses Gabor function feature selection. The classification section of the network uses back-propagation. Using these features as neurode inputs, an implementation of back-propagation on a serial machine achieved 100% accuracy when trained and tested on a single font size and style while classifying at a rate of 2 ms per character. Using the same trained network, recognition accuracy greater than 99.9% was achieved when testing with digits of different font sizes. A network trained on multiple font styles achieved greater than 99.9% accuracy when tested, and greater than 99.8% accuracy when tested with digits of different font sizes. These networks, trained only with good-quality prototypes, recognized images degraded with 15% random noise with an accuracy of 89%. In addition to raw recognition results, a study was conducted in which activation distributions of correct responses from the network were compared against activation distributions of incorrect responses. By establishing a threshold between these two distributions, a reject mechanism was developed to minimize substitutional errors. This allowed substitutional errors on images degraded with 10% random noise to be reduced from 2.08% to 0.25%.
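
The Gabor-function feature selection mentioned above can be illustrated with a generic filter bank. The kernel size, wavelength, sigma, and orientations below are assumed values and stand in for the paper's optimized receptor-field layout; the responses are simply flattened into a feature vector rather than fed to a back-propagation classifier.

```python
import numpy as np
from scipy.signal import convolve2d  # any 2-D convolution routine works here

def gabor_kernel(size=11, wavelength=4.0, theta=0.0, sigma=3.0):
    """Real part of a 2-D Gabor function: a Gaussian-windowed sinusoid."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    x_theta = x * np.cos(theta) + y * np.sin(theta)
    y_theta = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(x_theta**2 + y_theta**2) / (2.0 * sigma**2))
    carrier = np.cos(2.0 * np.pi * x_theta / wavelength)
    return envelope * carrier

def gabor_features(image, orientations=(0, np.pi / 4, np.pi / 2, 3 * np.pi / 4)):
    """Stack filter responses at several orientations into one feature vector."""
    responses = [convolve2d(image, gabor_kernel(theta=t), mode="same")
                 for t in orientations]
    return np.concatenate([r.ravel() for r in responses])

# Example on a random 32x32 "digit" image (placeholder data).
features = gabor_features(np.random.rand(32, 32))
print(features.shape)  # (4096,) = 4 orientations x 32 x 32
```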


International Symposium on Neural Networks | 1990

Self-organizing neural network character recognition on a massively parallel computer

Charles L. Wilson; R. A. Wilkinson; Michael D. Garris

Two neural-network-based methods are combined to develop font-independent character recognition on a distributed array processor. Feature localization and noise reduction are achieved using least-squares optimized Gabor filtering. The filtered images are then presented to an ART-1-based learning algorithm which produces self-organizing sets of neural network weights used for character recognition. Implementation of these algorithms on a highly parallel computer with 1024 processors allows high-speed character recognition to be achieved in 8 ms/image with greater than 99% accuracy on machine print and 80% accuracy on unconstrained hand-printed characters.
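
The self-organizing step can be sketched as a minimal ART-1-style clustering of binary patterns. The vigilance and choice parameters below are illustrative, and the code makes no attempt to reproduce the Gabor preprocessing or the parallel implementation described in the paper.

```python
import numpy as np

def art1_cluster(patterns, vigilance=0.6, beta=1.0):
    """Cluster binary patterns with a minimal ART-1-style procedure.

    Categories are created on demand; an input joins the best-matching
    category only if it passes the vigilance (resonance) test, otherwise
    a new category is created.  Parameter values are illustrative.
    """
    categories = []      # binary prototype (weight) vectors
    assignments = []
    for p in patterns:
        p = np.asarray(p, dtype=bool)
        # Rank existing categories by the ART-1 choice function.
        order = sorted(range(len(categories)),
                       key=lambda j: -(np.sum(categories[j] & p) /
                                       (beta + np.sum(categories[j]))))
        chosen = None
        for j in order:
            match = np.sum(categories[j] & p) / max(np.sum(p), 1)
            if match >= vigilance:                 # resonance
                categories[j] = categories[j] & p  # fast learning: AND
                chosen = j
                break
        if chosen is None:                         # no resonance: new category
            categories.append(p.copy())
            chosen = len(categories) - 1
        assignments.append(chosen)
    return assignments, categories

pats = [[1, 1, 0, 0], [1, 1, 1, 0], [0, 0, 1, 1], [0, 1, 1, 1]]
print(art1_cluster(pats)[0])  # [0, 0, 1, 1]: first two and last two patterns share a category
```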


Proceedings of the IEEE | 2006

NIST Fingerprint Evaluations and Developments

Michael D. Garris; Elham Tabassi; Charles L. Wilson

This paper presents an R&D framework used by the National Institute of Standards and Technology (NIST) for biometric technology testing and evaluation. The focus of this paper is on fingerprint-based verification and identification. Since 9/11 the NIST Image Group has been mandated by Congress to run a program for biometric technology assessment and biometric systems certification. Four essential areas of activity are discussed: 1) developing test datasets; 2) conducting performance assessment; 3) technology development; and 4) standards participation. A description of activities and accomplishments is provided for each of these areas. In the process, methods of performance testing are described and results from specific biometric technology evaluations are presented. This framework is anticipated to have broad applicability to other technology and application domains.


Document Recognition and Retrieval | 1999

Federal Register document image database

Michael D. Garris; Stanley Janet; W Klein

A new, fully automated process has been developed at NIST to derive ground truth for document images. The method involves matching optical character recognition (OCR) results from a page with typesetting files for an entire book. Public domain software used to derive the ground truth is provided in the form of Perl scripts and C source code, and includes new, more efficient string alignment technology and a word-level scoring package. With this ground-truthing technology, it is now feasible to produce much larger data sets, at much lower cost, than was ever possible with previous labor-intensive, manual data collection projects. Using this method, NIST has produced a new document image database for evaluating Document Analysis and Recognition technologies and Information Retrieval systems. The database contains scanned images, SGML-tagged ground truth text, commercial OCR results, and image quality assessment results for pages published in the 1994 Federal Register. These data files are useful in a wide variety of experiments and research. There were roughly 250 issues, comprising nearly 69,000 pages, published in the Federal Register in 1994. This volume of the database contains the pages of 20 books published in January of that year. In all, there are 4711 page images provided, with 4519 of them having corresponding ground truth. This volume is distributed on two ISO-9660 CD-ROMs. Future volumes may be released, depending on the level of interest.
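
The word-level alignment idea can be sketched with Python's standard-library difflib. This stands in for the more efficient string-alignment technology distributed with the database, and the sample strings are invented.

```python
from difflib import SequenceMatcher

def align_words(ocr_words, reference_words):
    """Align OCR output against reference (typesetting) text at the word level.

    Matching runs inherit the reference spelling as ground truth;
    non-matching runs are returned as pairs for review or scoring.
    """
    matcher = SequenceMatcher(a=ocr_words, b=reference_words, autojunk=False)
    aligned = []
    for tag, i1, i2, j1, j2 in matcher.get_opcodes():
        if tag == "equal":
            aligned.extend((w, w) for w in reference_words[j1:j2])
        else:
            aligned.append((" ".join(ocr_words[i1:i2]) or None,
                            " ".join(reference_words[j1:j2]) or None))
    return aligned

ocr = "The Federai Register is pubished daily".split()
ref = "The Federal Register is published daily".split()
for ocr_word, truth in align_words(ocr, ref):
    print(f"{str(ocr_word):12} -> {truth}")
```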


Proceedings of SPIE, the International Society for Optical Engineering | 2006

Nonparametric statistical data analysis of fingerprint minutiae exchange with two-finger fusion

Jin Chu Wu; Michael D. Garris

A nonparametric inferential statistical data analysis is presented. The utility of this method is demonstrated through analyzing results from minutiae exchange with two-finger fusion. The analysis focused on high-accuracy vendors and two modes of matching standard fingerprint templates: 1) Native Matching, where the same vendor generates the templates and the matcher, and 2) Scenario 1 Interoperability, where vendor A's enrollment template is matched to vendor B's authentication template using vendor B's matcher. The purpose of this analysis is to make inferences about the underlying population from sample data, which provide insights at an aggregate level. This is very different from the data analysis presented in the MINEX04 report, in which vendors are individually ranked and compared. Using the nonparametric bootstrap bias-corrected and accelerated (BCa) method, 95% confidence intervals are computed for each mean error rate. Nonparametric significance tests are then applied to further determine whether the difference between two underlying populations is real or due to chance with a certain probability. Results from this method show that at a greater-than-95% confidence level there is a significant degradation in accuracy of Scenario 1 Interoperability with respect to Native Matching. The difference in error rates can reach, on average, a two-fold increase in False Non-Match Rate. Additionally, it is proved why two-finger fusion using the sum rule is more accurate than single-finger matching under the same conditions. Results of a simulation are also presented to show the significance of the confidence intervals derived from small sample sizes, such as six error rates in some of our cases.
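
The BCa confidence-interval step can be sketched with scipy.stats.bootstrap (SciPy 1.7 or later, where BCa is the default method). The per-comparison false-non-match indicators below are synthetic placeholders with invented error rates; they are not MINEX04 data.

```python
import numpy as np
from scipy.stats import bootstrap  # SciPy >= 1.7

# Synthetic per-comparison false-non-match indicators (1 = missed match);
# the underlying rates are invented and stand in for the MINEX04 samples.
rng = np.random.default_rng(0)
native = rng.binomial(1, 0.010, size=2000)    # native matching
interop = rng.binomial(1, 0.022, size=2000)   # Scenario 1 interoperability

for label, sample in (("Native", native), ("Interoperable", interop)):
    # method='BCa' is the default for scipy.stats.bootstrap
    res = bootstrap((sample,), np.mean, n_resamples=2000, confidence_level=0.95)
    ci = res.confidence_interval
    print(f"{label:13s} FNMR = {sample.mean():.4f}, "
          f"95% BCa CI = ({ci.low:.4f}, {ci.high:.4f})")
```

Non-overlapping intervals of this kind are what motivate the follow-up significance tests described in the abstract.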


Systems, Man and Cybernetics | 1995

Off-line handwriting recognition from forms

Michael D. Garris; James L. Blue; Gerald T. Candela; Darrin L. Dimmick; Jon C. Geist; Patrick J. Grother; Stanley Janet; Charles L. Wilson

A public domain optical character recognition (OCR) system has been developed by the National Institute of Standards and Technology (NIST) to provide a baseline of performance on off-line handwriting recognition from forms. The system's source code, training data, and performance assessment tools are all publicly available. The system recognizes the handprint written on handwriting sample forms as distributed on the CD-ROM NIST Special Database 19. The public domain package contains a number of significant contributions to OCR technology, including an optimized probabilistic neural network classifier that operates a factor of 20 faster than traditional software implementations of this algorithm. The modular design of the software makes it useful for training and testing set validation, multiple system voting schemes, and component evaluation and comparison. As an example, the OCR results from two versions of the recognition system are presented and analyzed.
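
A probabilistic neural network is essentially a Parzen-window classifier. The sketch below shows the classifier family only and omits the speed optimizations that are the package's contribution; the toy data and smoothing parameter are assumptions.

```python
import numpy as np

def pnn_classify(train_x, train_y, query, sigma=1.0):
    """Minimal probabilistic neural network (Parzen-window) classifier.

    Each training exemplar contributes a Gaussian kernel centered on itself;
    class scores are the per-class kernel sums and the largest score wins.
    """
    train_x = np.asarray(train_x, dtype=float)
    query = np.asarray(query, dtype=float)
    sq_dist = np.sum((train_x - query) ** 2, axis=1)
    kernels = np.exp(-sq_dist / (2.0 * sigma ** 2))
    labels = np.asarray(train_y)
    scores = {c: kernels[labels == c].sum() for c in sorted(set(train_y))}
    return max(scores, key=scores.get), scores

# Toy two-class example with 2-D "feature vectors".
X = [[0.0, 0.0], [0.2, 0.1], [1.0, 1.0], [0.9, 1.1]]
y = [0, 0, 1, 1]
print(pnn_classify(X, y, [0.15, 0.05]))
```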


IS&T/SPIE's Symposium on Electronic Imaging: Science and Technology | 1992

Machine-assisted human classification of segmented characters for OCR testing and training

R. Allen Wilkinson; Michael D. Garris; Jon C. Geist

NIST needed a large set of segmented characters for use as a test set for the First Census Optical Character Recognition (OCR) Systems Conference. A machine-assisted human classification system was developed to expedite the process. The testing set consists of 58,000 digits and 10,000 upper- and lower-case characters entered on forms by high school students and is distributed as Testdata 1. A machine system was able to recognize a majority of the characters, but all system decisions required human verification. The NIST recognition system was augmented with human verification to produce the testing database. This augmented system consists of several parts: the recognition system, a checking pass, a correcting pass, and a clean-up pass. The recognition system was developed at NIST. The checking pass verifies that an image is in the correct class. The correcting pass allows classes to be changed. The clean-up pass forces the system to stabilize by ensuring that every image is either accepted with a verified classification or rejected. In developing the testing set we discovered that segmented characters can be ambiguous even without context bias. This ambiguity can be caused by over-segmentation or by the way a person writes. For instance, it is possible to create four ambiguous characters to represent all ten digits. This means that a quoted accuracy rate for a set of segmented characters is meaningless without reference to human performance on the same set of characters. This is different from the case of isolated fields, where most of the ambiguity can be overcome by using context which is available in the non-segmented image. For instance, in the First Census OCR Conference, one system achieved a forced-decision error rate for digits of 1.6% while 21 other systems achieved error rates of 3.2% to 5.1%. Such results cannot be evaluated until human performance on the same set of characters, presented one at a time without context, has been measured.
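
The recognition/checking/correcting/clean-up workflow can be sketched as a small data model. The class and function names below are hypothetical illustrations of the passes, not NIST's code, and lambdas stand in for the human reviewers.

```python
from dataclasses import dataclass

@dataclass
class SegmentedCharacter:
    """One segmented character image moving through the verification passes."""
    image_id: str
    hypothesized_class: str        # label proposed by the recognition system
    verified: bool = False
    rejected: bool = False

def checking_pass(chars, human_confirms):
    """Checking: a reviewer confirms (or not) each hypothesized class."""
    for c in chars:
        c.verified = bool(human_confirms(c))

def correcting_pass(chars, human_relabels):
    """Correcting: a reviewer may assign a new class to unverified images."""
    for c in chars:
        if not c.verified:
            new_label = human_relabels(c)
            if new_label is not None:
                c.hypothesized_class, c.verified = new_label, True

def cleanup_pass(chars):
    """Clean-up: force stability; anything still unverified is rejected."""
    for c in chars:
        if not c.verified:
            c.rejected = True

# Toy run with lambdas standing in for the human reviewers.
chars = [SegmentedCharacter("img_001", "3"), SegmentedCharacter("img_002", "8")]
checking_pass(chars, human_confirms=lambda c: c.image_id == "img_001")
correcting_pass(chars, human_relabels=lambda c: None)
cleanup_pass(chars)
print([(c.image_id, c.hypothesized_class, c.verified, c.rejected) for c in chars])
```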


Photonics for Port and Harbor Security Conference | 2005

NIST biometric evaluations and developments

Michael D. Garris; Charles L. Wilson

This paper presents an R&D framework used by the National Institute of Standards and Technology (NIST) for biometric technology testing and evaluation. The focus of this paper is on fingerprint-based verification and identification. Since 9/11 the NIST Image Group has been mandated by Congress to run a program for biometric technology assessment and biometric systems certification. Four essential areas of activity are discussed: 1) developing test datasets; 2) conducting performance assessment; 3) technology development; and 4) standards participation. A description of activities and accomplishments is provided for each of these areas. In the process, methods of performance testing are described and results from specific biometric technology evaluations are presented. This framework is anticipated to have broad applicability to other technology and application domains.

Collaboration


Dive into Michael D. Garris's collaborations.

Top Co-Authors

Charles L. Wilson (National Institute of Standards and Technology)
Stanley Janet (National Institute of Standards and Technology)
Patrick J. Grother (National Institute of Standards and Technology)
Gerald T. Candela (National Institute of Standards and Technology)
James L. Blue (National Institute of Standards and Technology)
John D. Grantham (National Institute of Standards and Technology)
John M. Libert (National Institute of Standards and Technology)
R. A. Wilkinson (National Institute of Standards and Technology)
Shahram Orandi (National Institute of Standards and Technology)
Craig I. Watson (National Institute of Standards and Technology)