
Publication


Featured research published by Ross J. Micheals.


Proceedings of the IEEE | 2001

Into the woods: visual surveillance of noncooperative and camouflaged targets in complex outdoor settings

Terrance E. Boult; Ross J. Micheals; Xiang Gao; Michael Eckmann

Autonomous video surveillance and monitoring of human subjects in video has a rich history. Many deployed systems are able to reliably track human motion in indoor and controlled outdoor environments, e.g., parking lots and university campuses. A challenging domain of vital military importance is the surveillance of noncooperative and camouflaged targets within cluttered outdoor settings. These situations require both sensitivity and a very wide field of view and, therefore, are a natural application of omnidirectional video. Fundamentally, target finding is a change detection problem. Detection of camouflaged and adversarial targets implies the need for extreme sensitivity. Unfortunately, blind change detection in woods and fields may lead to a high fraction of false alarms, since natural scene motion and lighting changes produce highly dynamic scenes. Naturally, this desire for high sensitivity leads to a direct tradeoff between missed detections and false alarms. This paper discusses the current state of the art in video-based target detection, including an analysis of background adaptation techniques. The primary focus of the paper is the Lehigh Omnidirectional Tracking System (LOTS) and its components. This includes adaptive multibackground modeling, quasi-connected components (a novel approach to spatio-temporal grouping), background subtraction analyses, and an overall system evaluation.
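
The adaptive multi-background modeling described above can be illustrated with a minimal sketch: each pixel is compared against several background models and flagged as change only if it differs from all of them by more than a per-pixel threshold. The update rule, constants, and array layout below are illustrative assumptions, not the actual LOTS implementation.

```python
import numpy as np

def detect_changes(frame, backgrounds, thresholds, alpha=0.05):
    """Flag pixels that differ from *all* background models by more than a
    per-pixel threshold, then slowly adapt the best-matching model.

    frame: 2-D float array (grayscale); backgrounds: list of 2-D float arrays
    (modified in place); thresholds: 2-D float array of per-pixel sensitivities."""
    diffs = np.stack([np.abs(frame - bg) for bg in backgrounds])  # (K, H, W)
    foreground = diffs.min(axis=0) > thresholds                   # change mask
    best = diffs.argmin(axis=0)                                   # closest model per pixel
    for k, bg in enumerate(backgrounds):
        # Blend the current frame into the closest model where no change was
        # detected (a simplified, illustrative update rule).
        update = (best == k) & ~foreground
        bg[update] = (1.0 - alpha) * bg[update] + alpha * frame[update]
    return foreground
```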


Lecture Notes in Computer Science | 2003

Face recognition vendor test 2002 performance metrics

Patrick J. Grother; Ross J. Micheals; P. Jonathon Phillips

We present the methodology and recognition performance characteristics used in the Face Recognition Vendor Test 2002. We refine the notion of a biometric impostor and show that the traditional measures of identification and verification performance are limiting cases of the open-universe watch-list task. The watch-list problem generalizes the tradeoff of detection and identification of persons of interest against a false alarm rate. In addition, we use performance scores on disjoint populations to establish a means of computing and displaying distribution-free estimates of the variation of verification vs. false alarm performance. Finally, we formalize gallery normalization, an extension of previous evaluation methodologies: we define a pair of gallery-dependent mappings that can be applied as a post-recognition step to vectors of distance or similarity scores. All of the methods are non-specific to any particular biometric and are applicable to large populations.
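
The open-universe watch-list task trades off the detection-and-identification rate against the false alarm rate. The sketch below shows one way those two quantities can be computed from a similarity matrix at rank 1; the data layout and variable names are assumptions for illustration and do not reproduce the FRVT 2002 scoring code.

```python
import numpy as np

def watchlist_rates(sim, probe_ids, gallery_ids, threshold):
    """sim: (num_probes, num_gallery) similarity matrix (higher = more similar).
    probe_ids[i] is the true identity of probe i, or None if the person is not
    on the watch list; gallery_ids[j] is the identity of gallery entry j.
    Returns (detection_and_identification_rate, false_alarm_rate) at rank 1."""
    det_hits = det_total = fa_hits = fa_total = 0
    for i, pid in enumerate(probe_ids):
        scores = sim[i]
        best_j = int(np.argmax(scores))
        if pid in gallery_ids:                       # person of interest
            det_total += 1
            if gallery_ids[best_j] == pid and scores[best_j] >= threshold:
                det_hits += 1
        else:                                        # not on the watch list
            fa_total += 1
            if scores[best_j] >= threshold:
                fa_hits += 1
    return det_hits / max(det_total, 1), fa_hits / max(fa_total, 1)
```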


Image and Vision Computing | 2004

Omni-directional visual surveillance

Terrance E. Boult; Xiang Gao; Ross J. Micheals; Michael Eckmann

Perimeter security generally requires watching areas that afford trespassers reasonable cover and concealment. By definition, such ‘interesting’ areas have limited visibility distance. Furthermore, targets of interest generally attempt to conceal themselves within the cover, sometimes adding camouflage to further reduce their visibility, and are often visible only while in motion. The combination of limited visibility distance and limited target visibility severely reduces the usefulness of any approach using a standard Pan/Tilt/Zoom (PTZ) camera. As a result, these situations call for a very sensitive system with a wide field of view, and are a natural application for omni-directional video surveillance and monitoring. This paper describes LOTS, a frame-rate, low-power, omni-directional tracking system, and discusses related background work, including resolution issues in omni-directional imaging. One of the novel system components is quasi-connected components (QCC), which combines gap filling, thresholding-with-hysteresis (TWH), and a novel region merging/cleaning approach. Together with multi-background modeling and dynamic thresholding, this makes the system well suited to difficult settings such as outdoor tracking in high clutter. The paper also describes target geolocation and issues in the system user interface; the single-viewpoint property of the omni-directional imaging system used simplifies the backprojection and unwarping. We end with a summary of an external evaluation of an early form of the system and comments about recent work and field tests.
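
Thresholding-with-hysteresis, one ingredient of QCC, keeps every weakly changed region that is anchored by at least one strongly changed pixel. Below is a minimal sketch of that step alone, using SciPy's connected-component labeling; the two thresholds and the connectivity are illustrative choices rather than the exact QCC algorithm.

```python
import numpy as np
from scipy import ndimage

def threshold_with_hysteresis(diff, low, high):
    """diff: per-pixel change magnitude.  Keep every connected region of
    pixels above `low` that contains at least one pixel above `high`."""
    weak = diff > low
    strong = diff > high                            # subset of `weak` when high > low
    labels, _ = ndimage.label(weak)                 # 4-connected regions by default
    anchored = np.unique(labels[strong])            # labels touched by a strong pixel
    return np.isin(labels, anchored[anchored > 0])  # drop the background label 0
```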


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2011

Meta-Recognition: The Theory and Practice of Recognition Score Analysis

Walter J. Scheirer; Anderson Rocha; Ross J. Micheals; Terrance E. Boult

In this paper, we define meta-recognition, a performance prediction method for recognition algorithms, and examine the theoretical basis for its postrecognition score analysis form through the use of statistical extreme value theory (EVT). The ability to predict the performance of a recognition system based on its outputs for each match instance is desirable for a number of important reasons, including automatic threshold selection for determining matches and nonmatches, and automatic algorithm selection or weighting for multi-algorithm fusion. The emerging body of literature on postrecognition score analysis has been largely constrained to biometrics, where the analysis has been shown to successfully complement or replace image quality metrics as a predictor. We develop a new statistical predictor based upon the Weibull distribution, which produces accurate results on a per-instance recognition basis across different recognition problems. Experimental results are provided for two different face recognition algorithms, a fingerprint recognition algorithm, a SIFT-based object recognition system, and a content-based image retrieval system.
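
The core of such a predictor can be sketched as follows: fit a Weibull distribution to the tail of the sorted scores below the top match (assumed to be non-match scores) and predict recognition success when the top score falls far into the upper tail of that fit. The tail size, the shift used to keep the fitted data positive, and the decision threshold below are illustrative choices, not the paper's exact settings.

```python
import numpy as np
from scipy.stats import weibull_min

def predict_success(scores, tail_size=20, delta=1e-3):
    """scores: 1-D array of similarity scores for one probe against a gallery
    (higher = better).  Returns True if the top score looks like a genuine
    match relative to a Weibull fit of the next-best (assumed non-match) scores."""
    scores = np.sort(np.asarray(scores, dtype=float))[::-1]
    top, tail = scores[0], scores[1:1 + tail_size]
    # Shift so the tail data are strictly positive before fitting (an
    # illustrative simplification), then fit a Weibull with fixed location.
    shift = tail.min() - 1e-6
    c, loc, scale = weibull_min.fit(tail - shift, floc=0)
    # If the top score sits beyond the (1 - delta) quantile of the fitted tail
    # model, it is unlikely to be another non-match: predict success.
    return weibull_min.cdf(top - shift, c, loc=loc, scale=scale) > 1 - delta
```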


European Conference on Computer Vision | 2010

Robust fusion: extreme value theory for recognition score normalization

Walter J. Scheirer; Anderson Rocha; Ross J. Micheals; Terrance E. Boult

Recognition problems in computer vision often benefit from a fusion of different algorithms and/or sensors, with score-level fusion being among the most widely used approaches. Choosing an appropriate score normalization technique before fusion is a fundamentally difficult problem because of the disparate nature of the underlying score distributions for different sources of data. Further complications are introduced when one or more fusion inputs outright fail or receive adversarial inputs, as we find in the fields of biometrics and forgery detection. Ideally, a score normalization should be robust to model assumptions, modeling errors, and parameter estimation errors, as well as to algorithm failure. In this paper, we introduce the w-score, a new technique for robust recognition score normalization. We do not assume a match or non-match distribution, but instead suggest that the top scores among a recognition system's non-match scores follow statistical extreme value theory, and we show how to use this to provide consistent, robust normalization with a strong statistical basis.
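
Under the same assumptions as the meta-recognition sketch above, a w-score-style normalization can be sketched by fitting a Weibull to each algorithm's top non-match scores (top score excluded) and mapping every score through the fitted CDF; fusion then reduces to summing the normalized scores. The tail size and the shifting trick are again illustrative simplifications.

```python
import numpy as np
from scipy.stats import weibull_min

def w_score(scores, tail_size=20):
    """Normalize one algorithm's gallery scores for a single probe by the CDF
    of a Weibull fitted to its top non-match scores (top score excluded)."""
    scores = np.asarray(scores, dtype=float)
    order = np.argsort(scores)[::-1]
    tail = scores[order[1:1 + tail_size]]            # skip the hypothesized match
    shift = tail.min() - 1e-6                        # keep fitted data positive
    c, loc, scale = weibull_min.fit(tail - shift, floc=0)
    return weibull_min.cdf(scores - shift, c, loc=loc, scale=scale)

def fuse(score_lists):
    """Fuse several algorithms for the same probe by summing their w-scores."""
    return np.sum([w_score(s) for s in score_lists], axis=0)
```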


International Conference on Biometrics | 2009

An Automated Video-Based System for Iris Recognition

Yooyoung Lee; P. Jonathon Phillips; Ross J. Micheals

We have implemented a Video-based Automated System for Iris Recognition (VASIR) and evaluated its performance on the MBGC dataset. The proposed method automatically detects an eye area, extracts eye images, and selects the best-quality iris image from video frames. The selection method's performance is evaluated by comparing it to selection performed by humans. Masek's algorithm was adapted to segment and normalize the iris region; this stage was followed by encoding the iris pattern and performing the matching. The iris templates from video images were compared to pre-existing still iris images for verification. This experiment has shown that iris recognition is feasible even with varying illumination conditions, low-quality imagery, and off-angle video. Furthermore, our study showed that, in practice, automated best-image selection is nearly equivalent to human selection.
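
The matching stage of such a pipeline typically compares binary iris codes with a mask-aware fractional Hamming distance. The sketch below shows that comparison in generic form; the code/mask representation and the 0.32 acceptance threshold are common illustrative choices, not VASIR's exact implementation.

```python
import numpy as np

def hamming_distance(code_a, mask_a, code_b, mask_b):
    """Fractional Hamming distance between two binary iris codes, counting
    only bits that are valid (unmasked) in both templates."""
    valid = mask_a & mask_b
    n = int(valid.sum())
    if n == 0:
        return 1.0                                   # nothing comparable
    return int(((code_a ^ code_b) & valid).sum()) / n

def same_iris(code_a, mask_a, code_b, mask_b, threshold=0.32):
    """Accept as the same iris when the distance falls below a chosen cut-off
    (0.32 is a common illustrative value for Daugman-style codes)."""
    return hamming_distance(code_a, mask_a, code_b, mask_b) < threshold
```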


Computer Vision and Pattern Recognition | 2001

Efficient evaluation of classification and recognition systems

Ross J. Micheals; Terrance E. Boult

In this paper, a new framework for evaluating a variety of computer vision systems and components is introduced. This framework is particularly well suited for domains such as classification or recognition, where blind application of the i.i.d. assumption would reduce an evaluation's accuracy. With few exceptions, most previous work on vision system evaluation does not include confidence intervals, since they are difficult to calculate and often come with strict requirements. We show how a set of previously overlooked replicate statistics tools can be used to obtain tighter confidence intervals on evaluation estimates while simultaneously reducing the amount of data and computation required to reach statistically sound conclusions. In the included application of the new methodology, the well-known FERET face recognition evaluation is extended to incorporate standard errors and confidence intervals.
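
The paper's replicate-statistics machinery is more refined than this, but the following subject-level resampling sketch illustrates the underlying point: when probes from the same subject are not independent, confidence intervals for a recognition rate should resample whole subjects rather than individual trials. The names and the percentile-interval construction are generic illustrations, not the paper's method.

```python
import numpy as np

def rank1_rate_ci(correct_by_subject, n_boot=2000, alpha=0.05, seed=0):
    """correct_by_subject: dict mapping subject id -> list of 0/1 outcomes
    (1 = probe recognized at rank 1).  Resamples whole subjects, so the
    dependence between one subject's probes is respected, and returns the
    point estimate plus a (1 - alpha) percentile interval."""
    rng = np.random.default_rng(seed)
    subjects = list(correct_by_subject)
    point = np.mean([o for s in subjects for o in correct_by_subject[s]])
    stats = []
    for _ in range(n_boot):
        sample = rng.choice(subjects, size=len(subjects), replace=True)
        stats.append(np.mean([o for s in sample for o in correct_by_subject[s]]))
    lo, hi = np.percentile(stats, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return point, (lo, hi)
```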


Handbook of Face Recognition | 2011

Evaluation Methods in Face Recognition

P. Jonathon Phillips; Patrick J. Grother; Ross J. Micheals

The heart of designing and conducting evaluations is the experimental protocol. The protocol states how an evaluation is to be conducted and how the results are to be computed. In this chapter we concentrate on describing the FERET and FRVT 2002 protocols. The FRVT 2002 evaluation protocol is based on the FERET evaluation protocols and is designed for biometric evaluations in general, not just for evaluating face recognition algorithms. These two evaluation protocols served as the basis for the FRVT 2006 and MBE 2010 evaluations.
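
In FERET/FRVT-style protocols the algorithm under test produces similarity scores for all comparisons, and every performance measure is computed afterwards from those scores. Below is a hedged sketch of one such post-processing step, the verification rate at a fixed false accept rate; the quantile-based threshold selection is an illustrative simplification rather than the protocols' exact scoring procedure.

```python
import numpy as np

def verification_rate_at_far(genuine, impostor, target_far=0.001):
    """Verification rate (true accept rate) at a fixed false accept rate,
    computed purely from the comparison scores the protocol collects.
    `genuine` and `impostor` are 1-D arrays of same-person and different-person
    similarity scores (higher = more similar)."""
    genuine = np.asarray(genuine, dtype=float)
    impostor = np.asarray(impostor, dtype=float)
    threshold = np.quantile(impostor, 1.0 - target_far)   # operating threshold
    far = float(np.mean(impostor >= threshold))           # achieved FAR
    tar = float(np.mean(genuine >= threshold))            # verification rate
    return tar, far, threshold
```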


Computer Vision and Image Understanding | 2013

Sensitivity analysis for biometric systems: A methodology based on orthogonal experiment designs

Yooyoung Lee; James J. Filliben; Ross J. Micheals; P. Jonathon Phillips

The purpose of this paper is to introduce an effective and structured methodology for carrying out a biometric system sensitivity analysis. The goal of sensitivity analysis is to provide the researcher/developer with insight into and understanding of the key factors (algorithmic, subject-based, procedural, image-quality, environmental, among others) that affect the matching performance of the biometric system under study. The proposed methodology consists of two steps: (1) the design and execution of orthogonal fractional factorial experiment designs, which allow the scientist to efficiently investigate the effect of a large number of factors, and their interactions, simultaneously; and (2) the use of a select set of statistical data analysis graphical procedures which are fine-tuned to unambiguously highlight important factors, important interactions, and locally optimal settings. We illustrate this methodology by applying it to a study of VASIR (Video-based Automated System for Iris Recognition), a NIST iris-based biometric system. In particular, we investigated k = 8 algorithmic factors from the VASIR system by constructing a 2^(6-1) × 3^1 × 4^1 orthogonal fractional factorial design, generating the corresponding performance data, and applying an appropriate set of analysis graphics to determine the relative importance of the eight factors, the relative importance of the 28 two-term interactions, and the locally best settings of the eight factors. The results showed that VASIR's performance was primarily driven by six of the eight factors, along with four two-term interactions. A virtue of our two-step methodology is that it is systematic and general, and hence may be applied with equal rigor and effectiveness to other biometric modalities, such as fingerprint, face, voice, and DNA.
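
A two-level fractional factorial design and its main-effect estimates can be generated with a few lines of code. The sketch below builds a 2^(6-1) design with the generator F = ABCDE and estimates main effects on a synthetic response; the generator, the synthetic response, and the omission of the three-level and four-level factors are all illustrative simplifications of the paper's actual design.

```python
import itertools
import numpy as np

def fractional_factorial_2_6_1():
    """2^(6-1) design in coded units: a full 2^5 factorial in factors A..E,
    with the sixth factor confounded with the five-way interaction (F = ABCDE)."""
    runs = np.array(list(itertools.product([-1, 1], repeat=5)), dtype=float)
    f = runs.prod(axis=1, keepdims=True)
    return np.hstack([runs, f])                      # shape (32, 6)

def main_effects(design, response):
    """Main effect of each factor: mean response at the +1 level minus the
    mean response at the -1 level."""
    response = np.asarray(response, dtype=float)
    return np.array([response[design[:, j] > 0].mean()
                     - response[design[:, j] < 0].mean()
                     for j in range(design.shape[1])])

# Illustrative use with a synthetic response (in practice, one performance
# value per design run would be plugged in here):
design = fractional_factorial_2_6_1()
rng = np.random.default_rng(0)
response = 2.0 * design[:, 0] - 1.5 * design[:, 3] + rng.normal(0, 0.1, 32)
print(main_effects(design, response))                # factors A and D dominate
```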


Journal of Research of the National Institute of Standards and Technology | 2013

VASIR: An Open-Source Research Platform for Advanced Iris Recognition Technologies

Yooyoung Lee; Ross J. Micheals; James J. Filliben; P. Jonathon Phillips

The performance of iris recognition systems is frequently affected by input image quality, which in turn is vulnerable to less-than-optimal conditions due to illumination, environment, and subject characteristics (e.g., distance, movement, face/body visibility, blinking). VASIR (Video-based Automated System for Iris Recognition) is a state-of-the-art NIST-developed iris recognition software platform designed to systematically address these vulnerabilities. We developed VASIR as a research tool that will not only provide a reference (to assess the relative performance of alternative algorithms) for the biometrics community, but will also advance (via this new, emerging iris recognition paradigm) NIST’s measurement mission. VASIR is designed to accommodate both ideal (e.g., classical still images) and less-than-ideal images (e.g., face-visible videos). VASIR has three primary modules: (1) Image Acquisition, (2) Video Processing, and (3) Iris Recognition. Each module consists of several sub-components that have been optimized by use of rigorous orthogonal experiment design and analysis techniques. We evaluated VASIR performance using the MBGC (Multiple Biometric Grand Challenge) NIR (near-infrared) face-visible video dataset and the ICE (Iris Challenge Evaluation) 2005 still-based dataset. The results showed that even though VASIR was primarily developed and optimized for the less-constrained video case, it still achieved high verification rates for the traditional still-image case. For this reason, VASIR may be used as an effective baseline for the biometrics community to evaluate algorithm performance, and thus serves as a valuable research platform.
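
The Video Processing module's best-image selection can be illustrated with a generic sharpness-based frame scorer; the Laplacian-variance focus measure below is only a stand-in for illustration, not VASIR's actual quality assessment.

```python
import numpy as np
from scipy import ndimage

def sharpness(gray):
    """Generic focus measure: variance of the Laplacian response of a
    grayscale image (illustrative only)."""
    return float(ndimage.laplace(gray.astype(float)).var())

def select_best_frame(eye_regions):
    """Stand-in for best-image selection: pick the (assumed pre-cropped)
    eye region with the highest sharpness score."""
    return max(eye_regions, key=sharpness)
```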

Collaboration


Dive into Ross J. Micheals's collaborations.

Top Co-Authors

Mary F. Theofanos, National Institute of Standards and Technology
Brian C. Stanton, National Institute of Standards and Technology
P. Jonathon Phillips, National Institute of Standards and Technology
Terrance E. Boult, University of Colorado Colorado Springs
James J. Filliben, National Institute of Standards and Technology
Shahram Orandi, National Institute of Standards and Technology
Yooyoung Lee, National Institute of Standards and Technology
Patrick J. Grother, National Institute of Standards and Technology
Elham Tabassi, National Institute of Standards and Technology