Bruce A. Draper
Colorado State University
Publications
Featured research published by Bruce A. Draper.
computer vision and pattern recognition | 2010
David S. Bolme; J. Ross Beveridge; Bruce A. Draper; Yui Man Lui
Although not commonly used, correlation filters can track complex objects through rotations, occlusions and other distractions at over 20 times the rate of current state-of-the-art techniques. The oldest and simplest correlation filters use simple templates and generally fail when applied to tracking. More modern approaches such as ASEF and UMACE perform better, but their training needs are poorly suited to tracking. Visual tracking requires robust filters to be trained from a single frame and dynamically adapted as the appearance of the target object changes. This paper presents a new type of correlation filter, a Minimum Output Sum of Squared Error (MOSSE) filter, which produces stable correlation filters when initialized using a single frame. A tracker based upon MOSSE filters is robust to variations in lighting, scale, pose, and nonrigid deformations while operating at 669 frames per second. Occlusion is detected based upon the peak-to-sidelobe ratio, which enables the tracker to pause and resume where it left off when the object reappears.
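The closed-form Fourier-domain training step behind MOSSE can be sketched with NumPy FFTs. This is a minimal illustration, assuming each training patch and the desired correlation output (typically a Gaussian peak centred on the target) are same-sized 2-D arrays; the function names and the small regularization constant are illustrative, not taken from the paper's implementation.

```python
import numpy as np

def train_mosse(frames, target, eps=1e-5):
    """Closed-form MOSSE filter training in the Fourier domain.

    frames: list of 2-D training patches (e.g. perturbations of the
    initial frame); target: desired correlation output, e.g. a
    Gaussian peak over the object. Returns the conjugate filter H*.
    """
    G = np.fft.fft2(target)
    num = np.zeros_like(G)
    den = np.zeros_like(G)
    for f in frames:
        F = np.fft.fft2(f)
        num += G * np.conj(F)   # accumulate G . F*
        den += F * np.conj(F)   # accumulate F . F*
    return num / (den + eps)    # minimizes the output sum of squared error

def correlate(H_conj, frame):
    """Correlation response: inverse FFT of F(frame) . H*."""
    return np.real(np.fft.ifft2(np.fft.fft2(frame) * H_conj))
```

In the tracker itself, the numerator and denominator are updated as running averages on each new frame, and the peak-to-sidelobe ratio of the response is used to detect occlusion, as described above.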
Computer Vision and Image Understanding | 2003
Bruce A. Draper; Kyungim Baek; Marian Stewart Bartlett; J. Ross Beveridge
This paper compares principal component analysis (PCA) and independent component analysis (ICA) in the context of a baseline face recognition system, a comparison motivated by contradictory claims in the literature. This paper shows how the relative performance of PCA and ICA depends on the task statement, the ICA architecture, the ICA algorithm, and (for PCA) the subspace distance metric. It then explores the space of PCA/ICA comparisons by systematically testing two ICA algorithms and two ICA architectures against PCA with four different distance measures on two tasks (facial identity and facial expression). In the process, this paper verifies the results of many of the previous comparisons in the literature, and relates them to each other and to this work. We are able to show that the FastICA algorithm configured according to ICA architecture II yields the highest performance for identifying faces, while the InfoMax algorithm configured according to ICA architecture II is better for recognizing facial actions. In both cases, PCA performs well but not as well as ICA.
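One axis of the comparison above, the subspace distance metric used with PCA, can be sketched with a NumPy-only PCA baseline. The ICA side (FastICA, InfoMax, and the two architectures) is omitted for brevity, and the function names are illustrative rather than the CSU baseline's API.

```python
import numpy as np

def pca_subspace(X, k):
    """PCA basis from row-vector samples X; returns (mean, components,
    eigenvalues), with eigenvalues in descending order."""
    mu = X.mean(axis=0)
    Xc = X - mu
    cov = Xc.T @ Xc / (len(X) - 1)
    vals, vecs = np.linalg.eigh(cov)       # ascending eigenvalues
    order = np.argsort(vals)[::-1][:k]     # keep the top k
    return mu, vecs[:, order], vals[order]

def subspace_distance(q, gallery, metric, eigvals=None):
    """Distances from one projected probe q to the projected gallery
    rows, under the metrics the comparison varies."""
    d = gallery - q
    if metric == "L1":
        return np.abs(d).sum(axis=1)
    if metric == "L2":
        return np.sqrt((d ** 2).sum(axis=1))
    if metric == "cosine":
        return 1.0 - (gallery @ q) / (
            np.linalg.norm(gallery, axis=1) * np.linalg.norm(q))
    if metric == "Mahalanobis":            # whiten each axis by its eigenvalue
        return np.sqrt((d ** 2 / eigvals).sum(axis=1))
    raise ValueError(metric)
```

Nearest-neighbor identification then amounts to projecting probe and gallery images into the subspace and taking the gallery entry with the smallest distance.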
international conference on computer vision systems | 2003
David S. Bolme; J. Ross Beveridge; Marcio Teixeira; Bruce A. Draper
The CSU Face Identification Evaluation System provides standard face recognition algorithms and standard statistical methods for comparing face recognition algorithms. The system includes standardized image pre-processing software, three distinct face recognition algorithms, analysis software to study algorithm performance, and Unix shell scripts to run standard experiments. All code is written in ANSI C. The preprocessing code replicates features of the preprocessing used in the FERET evaluations. The three algorithms provided are Principal Components Analysis (PCA), a.k.a. Eigenfaces, a combined Principal Components Analysis and Linear Discriminant Analysis algorithm (PCA+LDA), and a Bayesian Intrapersonal/Extrapersonal Classifier (BIC). The PCA+LDA and BIC algorithms are based upon algorithms used in the FERET study contributed by the University of Maryland and MIT, respectively. There are two analysis tools. The first takes as input a set of probe images, a set of gallery images, and a similarity matrix produced by one of the three algorithms. It generates a Cumulative Match Curve of recognition rate versus recognition rank. The second analysis tool generates a sample probability distribution for recognition rate at recognition rank 1, 2, etc. It takes as input multiple images per subject, and uses Monte Carlo sampling in the space of possible probe and gallery choices. This procedure will, among other things, add standard error bars to a Cumulative Match Curve. The system is available through our website, and we hope it will be used by others to rigorously compare novel face identification algorithms to standard algorithms using a common implementation and known comparison techniques.
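The Cumulative Match Curve computed by the first analysis tool can be sketched as follows. This is an illustrative reimplementation, not the CSU tool itself; it assumes a similarity matrix in which higher values mean more similar, with one correct gallery match per probe.

```python
import numpy as np

def cumulative_match_curve(similarity, probe_ids, gallery_ids):
    """Cumulative Match Curve from a similarity matrix.

    similarity[i, j]: similarity of probe i to gallery image j.
    Returns, for each rank r (1-based rank r = index r-1), the
    fraction of probes whose correct match appears in the top r.
    """
    n_probes, n_gallery = similarity.shape
    hits = np.zeros(n_gallery)
    for i in range(n_probes):
        order = np.argsort(-similarity[i])                     # best match first
        rank = np.where(gallery_ids[order] == probe_ids[i])[0][0]
        hits[rank] += 1
    return np.cumsum(hits) / n_probes                          # recognition rate vs rank
```

Plotting the returned vector against rank gives the curve of recognition rate versus recognition rank described above.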
International Journal of Computer Vision | 1988
Bruce A. Draper; Robert T. Collins; John Brolio; Allen R. Hanson; Edward M. Riseman
The Schema System embodies a knowledge-based approach to scene interpretation. Low-level routines are applied to extract image descriptors called tokens, and these tokens are further organized by intermediate-level routines into more abstract structures that can be associated with object instances. The thousands of tokens that are extracted from an image can be grouped in a combinatorially explosive manner. Therefore, knowledge in the Schema System is not limited to the descriptions of objects; it includes information about how each object can be recognized. Object schemas control the invocation and execution of the low-level and intermediate-level routines with the goal of forming hypotheses about objects in the scene. The system described produces image interpretations based on two-dimensional reasoning, although nothing in the system organization and control strategies precludes the inclusion of three-dimensional information. The schema framework exploits coarse-grained parallelism in a cooperative interpretation process. Schema instances run concurrently, and an object schema often has available a variety of strategies for identification, each one invoking knowledge sources to gather support for the presence of a hypothesized object. Inter-schema communication is carried out asynchronously through a global blackboard. In this way schema instances cooperate to identify and locate the significant objects present in the scene.
international conference on biometrics | 2009
P. Jonathon Phillips; Patrick J. Flynn; J. Ross Beveridge; W. Todd Scruggs; Alice J. O'Toole; David S. Bolme; Kevin W. Bowyer; Bruce A. Draper; Geof H. Givens; Yui Man Lui; Hassan Sahibzada; Joseph A. Scallan; Samuel Weimer
The goal of the Multiple Biometrics Grand Challenge (MBGC) is to improve the performance of face and iris recognition technology from biometric samples acquired under unconstrained conditions. The MBGC is organized into three challenge problems. Each challenge problem relaxes the acquisition constraints in different directions. In the Portal Challenge Problem, the goal is to recognize people from near-infrared (NIR) and high definition (HD) video as they walk through a portal. Iris recognition can be performed from the NIR video and face recognition from the HD video. The availability of NIR and HD modalities allows for the development of fusion algorithms. The Still Face Challenge Problem has two primary goals. The first is to improve recognition performance from frontal and off angle still face images taken under uncontrolled indoor and outdoor lighting. The second is to improve recognition performance on still frontal face images that have been resized and compressed, as is required for electronic passports. In the Video Challenge Problem, the goal is to recognize people from video in unconstrained environments. The video is unconstrained in pose, illumination, and camera angle. All three challenge problems include a large data set, experiment descriptions, ground truth, and scoring code.
IEEE Computer | 2003
Walid A. Najjar; Willem A. P. Bohm; Bruce A. Draper; Jeffrey Hammes; Robert G. Rinker; J.R. Beveridge; Monica Chawathe; Charlie Ross
RC systems typically consist of an array of configurable computing elements. The computational granularity of these elements ranges from simple gates, as abstracted by FPGA lookup tables, to complete arithmetic-logic units with or without registers. A rich programmable interconnect completes the array. The RC system developer manually partitions an application into two segments: a hardware component, written in a hardware description language such as VHDL or Verilog, that will execute as a circuit on the FPGA, and a software component that will execute as a program on the host. Single-assignment C is a C language variant designed to create an automated compilation path from an algorithmic programming language to an FPGA-based reconfigurable computing system.
computer vision and pattern recognition | 2001
J.R. Beveridge; Kai She; Bruce A. Draper; Geof H. Givens
The FERET evaluation compared recognition rates for different semi-automated and automated face recognition algorithms. We extend FERET by considering when differences in recognition rates are statistically distinguishable subject to changes in test imagery. Nearest Neighbor classifiers using principal component and linear discriminant subspaces are compared using different choices of distance metric. Probability distributions for algorithm recognition rates and pairwise differences in recognition rates are determined using a permutation methodology. The principal component subspace with Mahalanobis distance is the best combination; using L2 is second best. Choice of distance measure for the linear discriminant subspace matters little, and performance is always worse than the principal components classifier using either Mahalanobis or L1 distance. We make the source code for the algorithms, scoring procedures and Monte Carlo study available in the hopes others will extend this comparison to newer algorithms.
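The Monte Carlo resampling over probe and gallery choices can be sketched as follows. This is an illustrative reading of the methodology, not the released code: it assumes a mapping from subject identifiers to lists of feature vectors (at least two per subject) and a distance callable, both of which are hypothetical names.

```python
import numpy as np

def rank1_rate_distribution(distance_fn, images, n_trials=1000, seed=0):
    """Sample a distribution of rank-1 recognition rates.

    images: dict mapping subject id -> list of feature vectors.
    Each trial draws one probe and one gallery image per subject,
    scores nearest-neighbour rank-1 recognition, and records the
    recognition rate. The resulting sample supports error bars and
    paired-difference comparisons between algorithms.
    """
    rng = np.random.default_rng(seed)
    subjects = list(images)
    rates = np.empty(n_trials)
    for t in range(n_trials):
        probes, gallery = [], []
        for s in subjects:
            i, j = rng.choice(len(images[s]), size=2, replace=False)
            probes.append(images[s][i])
            gallery.append(images[s][j])
        correct = 0
        for k, p in enumerate(probes):
            d = [distance_fn(p, g) for g in gallery]
            correct += int(np.argmin(d) == k)
        rates[t] = correct / len(subjects)
    return rates
```

Running the same trials for two algorithms and differencing the per-trial rates gives the pairwise-difference distribution used to decide whether two recognition rates are statistically distinguishable.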
machine vision applications | 2005
J. Ross Beveridge; David S. Bolme; Bruce A. Draper; Marcio Teixeira
Abstract. The CSU Face Identification Evaluation System includes standardized image preprocessing software, four distinct face recognition algorithms, analysis tools to study algorithm performance, and Unix shell scripts to run standard experiments. All code is written in ANSI C. The four algorithms provided are principal components analysis (PCA), a.k.a. Eigenfaces, a combined principal components analysis and linear discriminant analysis algorithm (PCA+LDA), an intrapersonal/extrapersonal image difference classifier (IIDC), and an elastic bunch graph matching (EBGM) algorithm. The PCA+LDA, IIDC, and EBGM algorithms are based upon algorithms used in the FERET study contributed by the University of Maryland, MIT, and USC, respectively. One analysis tool generates cumulative match curves; the other generates a sample probability distribution for recognition rate at recognition rank 1, 2, etc., using Monte Carlo sampling to generate probe and gallery choices. The sample probability distributions at each rank allow standard error bars to be added to cumulative match curves. The tool also generates sample probability distributions for the paired difference of recognition rates for two algorithms. Whether one algorithm consistently outperforms another is easily tested using this distribution. The CSU Face Identification Evaluation System is available through our Web site, and we hope it will be used by others to rigorously compare novel face identification algorithms to standard algorithms using a common implementation and known comparison techniques.
IEEE Transactions on Image Processing | 2003
Bruce A. Draper; J.R. Beveridge; A.P.W. Bohm; Charlie Ross; Monica Chawathe
The Cameron project has developed a language called single assignment C (SA-C), and a compiler for mapping image-based applications written in SA-C to field programmable gate arrays (FPGAs). The paper tests this technology by implementing several applications in SA-C and compiling them to an Annapolis Microsystems (AMS) WildStar board with a Xilinx XV2000E FPGA. The performance of these applications on the FPGA is compared to the performance of the same applications written in assembly code or C for an 800 MHz Pentium III. (Although no comparison across processors is perfect, these chips were the first of their respective classes fabricated at 0.18 microns, and are therefore of comparable ages.) We find that applications written in SA-C and compiled to FPGAs are between 8 and 800 times faster than the equivalent program run on the Pentium III.
international conference on computer vision | 2001
José Bins; Bruce A. Draper
The number of features that can be computed over an image is, for practical purposes, limitless. Unfortunately, the number of features that can be computed and exploited by most computer vision systems is considerably less. As a result, it is important to develop techniques for selecting features from very large data sets that include many irrelevant or redundant features. This work addresses the feature selection problem by proposing a three-step algorithm. The first step uses a variation of the well-known Relief algorithm to remove irrelevance; the second step clusters features using K-means to remove redundancy; and the third step is a standard combinatorial feature selection algorithm. This three-step combination is shown to be more effective than standard feature selection algorithms for large data sets with lots of irrelevant and redundant features. It is also shown to be no worse than standard techniques for data sets that do not have these properties. Finally, we show a third experiment in which a data set with 4096 features is reduced to 5% of its original size with very little information loss.
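The first two steps of the pipeline can be sketched with a simplified Relief scorer and a tiny k-means over feature vectors. Both functions are minimal illustrations under stated assumptions (binary labels, at least two samples per class, features clustered by their value profiles across samples), not the variants actually used in the paper.

```python
import numpy as np

def relief_scores(X, y, n_iters=100, seed=0):
    """Step one: simplified Relief relevance scores (binary labels).

    For each sampled instance, rewards features that differ from the
    nearest miss (other class) and penalizes features that differ
    from the nearest hit (same class); irrelevant features score
    near zero and can be dropped.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(n_iters):
        i = rng.integers(n)
        dists = np.abs(X - X[i]).sum(axis=1)
        dists[i] = np.inf                      # never pick the instance itself
        same = y == y[i]
        hit = np.argmin(np.where(same, dists, np.inf))
        miss = np.argmin(np.where(~same, dists, np.inf))
        w += np.abs(X[i] - X[miss]) - np.abs(X[i] - X[hit])
    return w / n_iters

def kmeans_feature_clusters(F, k, n_iters=20, seed=0):
    """Step two: cluster features (one row of F per feature, giving its
    values across the training samples) with a tiny k-means, so one
    representative per cluster can replace a group of redundant features."""
    rng = np.random.default_rng(seed)
    C = F[rng.choice(len(F), size=k, replace=False)].astype(float)
    for _ in range(n_iters):
        labels = np.argmin(((F[:, None] - C[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                C[j] = F[labels == j].mean(axis=0)
    return np.argmin(((F[:, None] - C[None]) ** 2).sum(-1), axis=1)
```

The surviving representatives would then be handed to a standard combinatorial selector for the third step.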