Network


Latest external collaboration at the country level.

Hotspot


Dive into the research topics where Brian Heflin is active.

Publication


Featured research published by Brian Heflin.


International Conference on Biometrics: Theory, Applications and Systems (BTAS) | 2012

Detecting and classifying scars, marks, and tattoos found in the wild

Brian Heflin; Walter J. Scheirer; Terrance E. Boult

Within the forensics community, there is a growing interest in automatic biometric-based approaches for describing subjects in an image. By labeling scars, marks and tattoos, a collection of these discriminative attributes can be assigned to images and used to assist in large-scale person search and identification. Typically, the imagery considered in a forensics context consists to some degree of uncontrolled, unprofessionally generated photographs. Recent work has shown that it is quite feasible to detect scars and marks, as well as categorize tattoos, presuming that the source imagery is controlled in some manner. In this work, we introduce a new methodology for detecting and classifying scars, marks and tattoos found in unconstrained imagery typical of forensics scenarios. Novel approaches for initial feature detection and automatic segmentation are described. We also consider the “open set” nature of the classification problem, and describe an appropriate machine learning methodology that addresses it. An extensive series of experiments for representative unconstrained data is presented, highlighting the effectiveness of our approach for images found “in the wild”.
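
To make the "open set" idea above concrete, here is a minimal sketch of open-set style rejection: a closed-set classifier is trained on the known attribute classes, and a test sample whose best score does not clear a threshold is labeled unknown. The features, classifier, and threshold are illustrative assumptions, not the authors' exact method.

```python
# Minimal open-set classification sketch (illustrative, not the paper's exact method):
# train a closed-set classifier on known scar/mark/tattoo classes, then reject
# low-confidence test samples as "unknown" instead of forcing a known label.
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)

# Toy stand-ins for image features of three known classes (e.g. scar, mark, tattoo).
X_train = rng.normal(size=(300, 64)) + np.repeat(np.eye(3, 64) * 4.0, 100, axis=0)
y_train = np.repeat([0, 1, 2], 100)

clf = LinearSVC(C=1.0).fit(X_train, y_train)

def open_set_predict(features, threshold=0.0):
    """Return the known-class label, or -1 ("unknown") if no score clears the threshold."""
    scores = clf.decision_function(features.reshape(1, -1))[0]
    best = int(np.argmax(scores))
    return best if scores[best] > threshold else -1

# A sample that resembles none of the known classes usually scores below the
# threshold for every class and is rejected (-1).
print(open_set_predict(rng.normal(size=64) * 0.1))
```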


Workshop on Applications of Computer Vision (WACV) | 2013

Animal recognition in the Mojave Desert: Vision tools for field biologists

Michael J. Wilber; Walter J. Scheirer; Phil Leitner; Brian Heflin; James Zott; Daniel Reinke; David K. Delaney; Terrance E. Boult

The outreach of computer vision to non-traditional areas has enormous potential to enable new ways of solving real-world problems. One such problem is how to incorporate technology in the effort to protect endangered and threatened species in the wild. This paper presents a snapshot of our interdisciplinary team's ongoing work in the Mojave Desert to build vision tools for field biologists to study the currently threatened Desert Tortoise and Mohave Ground Squirrel. Animal population studies in natural habitats present new recognition challenges for computer vision, where open set testing and access to just limited computing resources lead us to algorithms that diverge from common practices. We introduce a novel algorithm for animal classification that addresses the open set nature of this problem and is suitable for implementation on a smartphone. Further, we look at a simple model for object recognition applied to the problem of individual species identification. A thorough experimental analysis is provided for real field data collected in the Mojave Desert.
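
As a rough illustration of open-set recognition under a tight computing budget (not the paper's algorithm), a nearest-class-mean classifier with a distance-based reject option is about as cheap as classification gets:

```python
# Lightweight open-set sketch (illustrative; not the paper's algorithm): nearest-class-mean
# classification with a distance threshold, cheap enough for constrained hardware.
import numpy as np

class NearestMeanOpenSet:
    def __init__(self, reject_distance):
        self.reject_distance = reject_distance
        self.means = {}

    def fit(self, X, y):
        for label in np.unique(y):
            self.means[label] = X[y == label].mean(axis=0)
        return self

    def predict(self, x):
        # Label of the closest class mean, or None if everything is too far away.
        label, dist = min(((l, np.linalg.norm(x - m)) for l, m in self.means.items()),
                          key=lambda t: t[1])
        return label if dist <= self.reject_distance else None

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(loc=c, scale=0.5, size=(50, 16)) for c in (0.0, 3.0)])
y = np.repeat(["tortoise", "squirrel"], 50)
model = NearestMeanOpenSet(reject_distance=4.0).fit(X, y)
print(model.predict(rng.normal(loc=3.0, scale=0.5, size=16)))   # likely "squirrel"
print(model.predict(rng.normal(loc=12.0, scale=0.5, size=16)))  # likely None (unknown)
```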


International Joint Conference on Biometrics (IJCB) | 2011

Face and eye detection on hard datasets

Jon Parris; Michael J. Wilber; Brian Heflin; Ham M. Rara; Ahmed El-Barkouky; Aly A. Farag; Javier R. Movellan; Modesto Castrillón-Santana; Javier Lorenzo-Navarro; Mohammad Nayeem Teli; Sébastien Marcel; Cosmin Atanasoaei; Terrance E. Boult

Face and eye detection algorithms are deployed in a wide variety of applications. Unfortunately, there has been no quantitative comparison of how these detectors perform under difficult circumstances. We created a dataset of low light and long distance images which possess some of the problems encountered by face and eye detectors solving real world problems. The dataset we created is composed of reimaged images (photohead) and semi-synthetic heads imaged under varying conditions of low light, atmospheric blur, and distances of 3m, 50m, 80m, and 200m. This paper analyzes the detection and localization performance of the participating face and eye algorithms compared with the Viola Jones detector and four leading commercial face detectors. Performance is characterized under the different conditions and parameterized by per-image brightness and contrast. In localization accuracy for eyes, the groups/companies focusing on long-range face detection outperform leading commercial applications.
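
For context on how eye localization accuracy is commonly scored, the sketch below computes the widely used interocular-normalized error; whether this exact criterion matches the paper's protocol is an assumption.

```python
# Sketch of a common eye-localization accuracy criterion (an assumption here, not
# necessarily the exact metric used in the paper): the worse of the two eye errors,
# normalized by the ground-truth interocular distance.
import numpy as np

def normalized_eye_error(pred_left, pred_right, gt_left, gt_right):
    pred_left, pred_right = np.asarray(pred_left), np.asarray(pred_right)
    gt_left, gt_right = np.asarray(gt_left), np.asarray(gt_right)
    interocular = np.linalg.norm(gt_left - gt_right)
    worst = max(np.linalg.norm(pred_left - gt_left), np.linalg.norm(pred_right - gt_right))
    return worst / interocular

# A detection is often counted as correct when the normalized error is below 0.25
# (roughly half an eye width).
err = normalized_eye_error((98, 120), (162, 118), (100, 120), (160, 119))
print(err, err < 0.25)
```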


Workshop on Applications of Computer Vision (WACV) | 2014

Exemplar codes for facial attributes and tattoo recognition

Michael J. Wilber; Ethan M. Rudd; Brian Heflin; Yui-Man Lui; Terrance E. Boult

When implementing real-world computer vision systems, researchers can use mid-level representations as a tool to adjust the trade-off between accuracy and efficiency. Unfortunately, existing mid-level representations that improve accuracy tend to decrease efficiency, or are specifically tailored to work well within one pipeline or vision problem at the exclusion of others. We introduce a novel, efficient mid-level representation that improves classification efficiency without sacrificing accuracy. Our Exemplar Codes are based on linear classifiers and probability normalization from extreme value theory. We apply Exemplar Codes to two problems: facial attribute extraction and tattoo classification. In these settings, our Exemplar Codes are competitive with the state of the art and offer efficiency benefits, making it possible to achieve high accuracy even on commodity hardware with a low computational budget.
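
A rough sketch of the extreme-value-theory flavor of score normalization mentioned above: fit a Weibull distribution to the tail of non-match scores from a linear classifier and map a raw score to the probability that it exceeds that tail. The data, tail size, and use of SciPy's Weibull fit are illustrative assumptions rather than the Exemplar Code implementation.

```python
# Rough sketch of EVT-style score normalization (a stand-in for the Exemplar Code
# calibration, with assumed details): fit a Weibull to the tail of non-match scores
# from a linear classifier, then map a raw score to the probability that it lies
# beyond that non-match tail.
import numpy as np
from scipy.stats import weibull_min

rng = np.random.default_rng(2)

# Pretend raw decision scores from one exemplar's linear classifier.
non_match_scores = rng.normal(loc=-1.0, scale=0.5, size=1000)   # scores for other identities
tail = np.sort(non_match_scores)[-50:]                          # largest non-match scores

# Shift so the tail is positive, as weibull_min is supported on [loc, inf).
shift = tail.min() - 1e-6
c, loc, scale = weibull_min.fit(tail - shift, floc=0)

def evt_probability(raw_score):
    """Probability that raw_score exceeds the extreme non-match scores."""
    return float(weibull_min.cdf(raw_score - shift, c, loc=loc, scale=scale))

print(evt_probability(-1.0))   # typical non-match score -> low probability
print(evt_probability(1.5))    # strong match score -> probability near 1
```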


International Conference on Biometrics: Theory, Applications and Systems (BTAS) | 2009

Difficult detection: A comparison of two different approaches to eye detection for unconstrained environments

Walter J. Scheirer; Anderson Rocha; Brian Heflin; Terrance E. Boult

Eye detection is a well studied problem for the constrained face recognition problem, where we find controlled distances, lighting, and limited pose variation. A far more difficult scenario for eye detection is the unconstrained face recognition problem, where we do not have any control over the environment or the subject. In this paper, we take a look at two different approaches for eye detection under difficult acquisition circumstances, including low-light, distance, pose variation, and blur. A new machine learning approach and several correlation filter approaches, including a new adaptive variant, are compared. We present experimental results on a variety of controlled data sets (derived from FERET and CMU PIE) that have been re-imaged under the difficult conditions of interest with an EMCCD based acquisition system. The results of our experiments show that our new detection approaches are extremely accurate under all tested conditions, and significantly improve detection accuracy compared to a leading commercial detector. This unique evaluation brings us one step closer to a better solution for the unconstrained face recognition problem.
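
As a minimal illustration of the correlation-filter family of detectors (far simpler than the adaptive variant studied in the paper), the following correlates an eye template with an image in the frequency domain and takes the peak response as the detected location:

```python
# Minimal correlation-filter style matcher (illustrative; the paper's adaptive variant
# is more involved): correlate an eye template with an image in the frequency domain
# and take the peak response as the detected location.
import numpy as np

def correlate_and_locate(image, template):
    """Return (row, col) of the strongest template response in the image."""
    img = image - image.mean()
    tpl = template - template.mean()
    # Zero-pad the template to the image size and correlate via FFT.
    padded = np.zeros_like(img)
    padded[:tpl.shape[0], :tpl.shape[1]] = tpl
    response = np.fft.ifft2(np.fft.fft2(img) * np.conj(np.fft.fft2(padded))).real
    peak = np.unravel_index(np.argmax(response), response.shape)
    return peak  # top-left corner of the best-matching window

rng = np.random.default_rng(3)
image = rng.normal(size=(120, 160))
eye_template = image[40:52, 70:90].copy()         # pretend we cropped an eye patch
print(correlate_and_locate(image, eye_template))  # expected (40, 70)
```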


Annual Conference of the IEEE Industrial Electronics Society (IECON) | 2010

Single image deblurring for a real-time face recognition system

Brian Heflin; Brian C. Parks; Walter J. Scheirer; Terrance E. Boult

Blur due to motion and atmospheric turbulence is a variable that impacts the accuracy of computer vision-based face recognition techniques. However, in images captured in the wild, such variables can hardly be avoided, requiring methods to account for these degradations in order to achieve accurate results in real time. One such method is to estimate the blur and then use deconvolution to negate or, at the very least, mitigate the effects of blur. In this paper, we describe a method for estimating motion blur and a method for estimating atmospheric blur. Unlike previous blur estimation methods, both methods are fully automated and require no input parameters, thus allowing integration into a real-time facial recognition pipeline. We show experimentally, on datasets processed to include synthetic and real motion and atmospheric blur, that these techniques improve recognition more than prior work. At multiple levels of blur, our results demonstrate significant improvement over related works and our baseline on data derived from both the FERET (fairly constrained data) and Labeled Faces in the Wild (fairly unconstrained data) sets.
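
The core "estimate the blur, then deconvolve" step can be sketched with a Wiener filter; the paper's contribution is estimating the point spread function automatically, which this toy example assumes is already known:

```python
# Sketch of the "estimate blur, then deconvolve" idea using Wiener deconvolution
# with a known horizontal motion-blur PSF; the paper's automatic blur estimation
# is not reproduced here.
import numpy as np

def wiener_deconvolve(blurred, psf, noise_to_signal=1e-3):
    """Frequency-domain Wiener filter: H* / (|H|^2 + NSR)."""
    H = np.fft.fft2(psf, s=blurred.shape)
    restored = np.fft.ifft2(np.fft.fft2(blurred) * np.conj(H) /
                            (np.abs(H) ** 2 + noise_to_signal))
    return restored.real

# Simple horizontal motion-blur kernel (9-pixel box).
psf = np.zeros((9, 9))
psf[4, :] = 1.0 / 9.0

rng = np.random.default_rng(4)
sharp = rng.uniform(size=(128, 128))
blurred = np.fft.ifft2(np.fft.fft2(sharp) * np.fft.fft2(psf, s=sharp.shape)).real
restored = wiener_deconvolve(blurred, psf)
# Restoration error should be well below the blurred-image error.
print(np.abs(restored - sharp).mean(), "<", np.abs(blurred - sharp).mean())
```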


International Conference on Biometrics: Theory, Applications and Systems (BTAS) | 2010

Correcting rolling-shutter distortion of CMOS sensors using facial feature detection

Brian Heflin; Walter J. Scheirer; Terrance E. Boult

This paper proposes a fully automated post-processing scheme based on facial feature detection to correct horizontal temporal shear, or rolling-shutter distortion. This distortion occurs when obtaining images or video sequences from a CMOS camera with a rolling shutter whenever there is relative horizontal movement between the sensor and the object being imaged during the integration time of the image frame. Unlike CCD sensors such as the interline CCD, which provides an electronic shutter mechanism called a global shutter in which light collection starts and ends at exactly the same time for all pixels, CMOS sensors cannot hold and store all the pixels at the same time. Each scanline is exposed, sampled, and stored in sequence, resulting in the rolling-shutter effect, a temporal distortion of the image that will cause inaccurate facial recognition results. Facial feature detection is performed using correlation-based methods with low computational complexity. The locations of key facial feature points are then used to calculate the temporal horizontal shear, or distortion, of the image. This information can then be used to remove the temporal horizontal shear distortion from the detected face or the entire image. We present experimental results on controlled data sets and real scenes to show that the proposed method yields excellent results in reversing the temporal horizontal shear caused by the CMOS rolling-shutter sensor and significantly improves the accuracy of our facial recognition algorithm.
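
A toy illustration of the shear-correction idea (landmark detection and sub-pixel warping omitted): estimate the horizontal drift per row from two facial landmarks that should be vertically aligned, then shift each row back:

```python
# Illustrative sketch (not the paper's full pipeline): estimate the rolling-shutter
# horizontal shear from two facial landmarks that should be vertically aligned, then
# undo it with a per-row integer shift.
import numpy as np

def estimate_shear(upper_point, lower_point):
    """Shear as pixels of horizontal drift per image row, from (row, col) landmarks."""
    (r1, c1), (r2, c2) = upper_point, lower_point
    return (c2 - c1) / float(r2 - r1)

def unshear(image, shear):
    """Shift each row to cancel a horizontal shear that grows with the row index."""
    corrected = np.empty_like(image)
    for row in range(image.shape[0]):
        corrected[row] = np.roll(image[row], -int(round(shear * row)))
    return corrected

# Toy example: a vertical bar sheared by 0.1 px/row, then corrected.
image = np.zeros((100, 100))
for row in range(100):
    image[row, 50 + int(round(0.1 * row))] = 1.0

shear = estimate_shear((20, 52), (80, 58))   # e.g. eye center and mouth center
print(round(shear, 2))                       # 0.1
corrected = unshear(image, shear)
print(np.count_nonzero(corrected[:, 50]))    # all 100 bar pixels are back in column 50
```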


Sensors, and Command, Control, Communications, and Intelligence (C3I) Technologies for Homeland Security and Homeland Defense VI | 2007

24/7 security system: 60-FPS color EMCCD camera with integral human recognition

T. L. Vogelsong; Terrance E. Boult; D. W. Gardner; Robert Woodworth; R. C. Johnson; Brian Heflin

An advanced surveillance/security system is being developed for unattended 24/7 image acquisition and automated detection, discrimination, and tracking of humans and vehicles. The low-light video camera incorporates an electron multiplying CCD sensor with a programmable on-chip gain of up to 1000:1, providing effective noise levels of less than 1 electron. The EMCCD camera operates in full color mode under sunlit and moonlit conditions, and monochrome under quarter-moonlight to overcast starlight illumination. Sixty-frame-per-second operation and progressive scanning minimize motion artifacts. The acquired image sequences are processed with FPGA-compatible real-time algorithms, to detect/localize/track targets and reject non-targets due to clutter under a broad range of illumination conditions and viewing angles. The object detectors that are used are trained from actual image data. Detectors have been developed and demonstrated for faces, upright humans, crawling humans, large animals, cars and trucks. Detection and tracking of targets too small for template-based detection is achieved. For face and vehicle targets the results of the detection are passed to secondary processing to extract recognition templates, which are then compared with a database for identification. When combined with pan-tilt-zoom (PTZ) optics, the resulting system provides a reliable wide-area 24/7 surveillance system that avoids the high life-cycle cost of infrared cameras and image intensifiers.
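
As a generic stand-in for the detection stage (the system described above uses trained, FPGA-compatible detectors rather than anything this simple), frame differencing flags changed pixels between consecutive low-light frames and reports a bounding box:

```python
# Generic frame-differencing detector sketch (a stand-in only; the system described
# above uses trained, FPGA-compatible detectors): flag pixels that change between
# consecutive frames and report the bounding box of the changed region.
import numpy as np

def detect_motion(prev_frame, frame, threshold=0.2):
    """Return a (top, left, bottom, right) box around changed pixels, or None."""
    changed = np.abs(frame.astype(float) - prev_frame.astype(float)) > threshold
    if not changed.any():
        return None
    rows, cols = np.nonzero(changed)
    return rows.min(), cols.min(), rows.max(), cols.max()

rng = np.random.default_rng(5)
background = rng.uniform(0.0, 0.05, size=(240, 320))   # dark, low-light scene
frame = background.copy()
frame[100:140, 200:230] += 0.5                         # a bright target appears
print(detect_motion(background, frame))                # box around rows 100-139, cols 200-229
```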


Archive | 2011

A Look at Eye Detection for Unconstrained Environments

Brian Heflin; Walter J. Scheirer; Anderson Rocha; Terrance E. Boult

Eye detection is a well studied problem for the constrained face recognition problem, where we find controlled distances, lighting, and limited pose variation. A far more difficult scenario for eye detection is the unconstrained face recognition problem, where we do not have any control over the environment or the subject. In this chapter, we take a look at two different approaches for eye detection under difficult acquisition circumstances, including low-light, distance, pose variation, and blur. A machine learning approach and several correlation filter approaches, including our own adaptive variant, are compared. We present experimental results for a variety of controlled data sets (derived from FERET and CMU PIE) that have been re-imaged under the difficult conditions of interest with an EMCCD based acquisition system, as well as on a realistic surveillance oriented set (SCface). The results of our experiments show that our detection approaches are extremely accurate under all tested conditions, and significantly improve detection accuracy compared to a leading commercial detector. This unique evaluation brings us one step closer to a better solution for the unconstrained face recognition problem.


Archive | 2011

Pattern Recognition, Machine Intelligence and Biometrics: Expanding Frontiers

Brian Heflin; Walter J. Scheirer; Anderson Rocha; Terrance E. Boult

Collaboration


Dive into Brian Heflin's collaboration.

Top Co-Authors

Terrance E. Boult, University of Colorado Colorado Springs
Anderson Rocha, State University of Campinas
Aly A. Farag, University of Louisville
Brian C. Parks, University of Colorado Colorado Springs
Ethan M. Rudd, University of Colorado Colorado Springs
Ham M. Rara, University of Louisville