
Publication


Featured research published by Gabriel C. Birch.


Archive | 2015

History and Evolution of the Johnson Criteria.

Tracy A. Sjaardema; Collin S. Smith; Gabriel C. Birch

The Johnson Criteria metric calculates the probability of detection of an object imaged by an optical system, and was created in 1958 by John Johnson. As understanding of target detection has improved, detection models have evolved to better account for additional factors such as weather, scene content, and object placement. The original Johnson Criteria, while sufficient for the technology and understanding of the time, does not accurately reflect current research into target acquisition and technology. Even though current research shows a dependence on human factors, there appears to be a lack of testing and modeling of human variability.
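
For reference, Johnson-criteria-derived models in the broader literature commonly express the probability of detection through the empirical target transfer probability function; the form below is quoted from that literature, not from this paper:

$$P(N) = \frac{(N/N_{50})^{E}}{1 + (N/N_{50})^{E}}, \qquad E = 2.7 + 0.7\,\frac{N}{N_{50}},$$

where $N$ is the number of resolvable cycles across the target and $N_{50}$ is the cycle count yielding a 50% probability of detection.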


Optical Engineering | 2015

Sinusoidal Siemens star spatial frequency response measurement errors due to misidentified target centers

Gabriel C. Birch; John Clark Griffin

Numerous methods are available to measure the spatial frequency response (SFR) of an optical system. A recent change to the ISO 12233 photography resolution standard includes a sinusoidal Siemens star test target. We take the sinusoidal Siemens star proposed by the ISO 12233 standard, measure system SFR, and perform an analysis of errors induced by incorrectly identifying the center of a test target. We show a closed-form solution for the radial profile intensity measurement given an incorrectly determined center and describe how this error reduces the measured SFR of the system. Using the closed-form solution, we propose a two-step process by which test target centers are corrected and the measured SFR is restored to the nominal, correctly centered values.
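
The following is a small numerical toy, not the paper's closed-form solution: it samples a circular profile of an ideal sinusoidal Siemens star around a deliberately offset center and reports the reduced modulation of the fundamental. The cycle count, radius, and center error are illustrative values.

```python
import numpy as np

N_CYCLES = 36        # sinusoidal cycles per revolution on the star (illustrative)
RADIUS = 200.0       # radius in pixels of the sampled circular profile
CENTER_ERROR = 3.0   # assumed error in pixels of the estimated star center

def star_intensity(x, y, cx=0.0, cy=0.0):
    """Ideal sinusoidal Siemens star: intensity is sinusoidal in angle about (cx, cy)."""
    theta = np.arctan2(y - cy, x - cx)
    return 0.5 + 0.5 * np.cos(N_CYCLES * theta)

# Sample a circular profile about the *estimated* (offset) center; the true star
# stays centered at the origin.
phi = np.linspace(0.0, 2.0 * np.pi, 4096, endpoint=False)
x = CENTER_ERROR + RADIUS * np.cos(phi)
y = RADIUS * np.sin(phi)
profile = star_intensity(x, y)

# Amplitude of the fundamental (N_CYCLES cycles per revolution) via a single-bin
# Fourier projection, then modulation relative to the mean level.
amplitude = 2.0 * np.abs(np.mean(profile * np.exp(-1j * N_CYCLES * phi)))
modulation = amplitude / np.mean(profile)
print(f"measured modulation: {modulation:.3f} (perfectly centered value: 1.0)")
```

Running the sketch with even a few pixels of center error shows the measured modulation, and hence the SFR at that radius, falling below the correctly centered value, which is the error mechanism the paper quantifies.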


International Carnahan Conference on Security Technology | 2017

Unmanned aerial system detection and assessment through temporal frequency analysis

Bryana L. Woo; Gabriel C. Birch; Jaclynn J. Stubbs; Camron G. Kouhestani

There is a desire in numerous fields of security to detect and assess unmanned aerial systems (UAS) with a high probability of detection and a low nuisance alarm rate. Currently available solutions rely upon exploiting electronic signals emitted from the UAS. While these methods may enable some degree of security, they fail to address the emerging domain of autonomous UAS that do not transmit or receive information during the course of a mission. We examine frequency analysis of pixel fluctuation over time to exploit the temporal frequency signature present in imagery data of UAS. This signature is present for autonomous or controlled multirotor UAS and allows for detection at lower pixel-on-target counts. The method also acts as a means of assessment, because the frequency signatures of UAS are distinct from those of standard nuisance alarm sources such as birds or non-UAS electronic signal emitters. The temporal frequency analysis is paired with machine learning algorithms to demonstrate a UAS detection and assessment method that requires minimal human interaction. The machine learning algorithm allows each necessary human assessment to increase the likelihood of future autonomous assessment, improving system performance over time.
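
A minimal sketch of the temporal-frequency idea, assuming a fixed camera and a grayscale frame stack already in memory; the frame rate, band limits, and threshold are placeholders rather than values from the paper, and a simple band-energy threshold stands in for the machine learning classifier described above.

```python
import numpy as np

FRAME_RATE_HZ = 120.0            # assumed imager frame rate
BAND_HZ = (20.0, 60.0)           # assumed band where multirotor signatures fall
BAND_FRACTION_THRESHOLD = 0.2    # assumed fraction of spectral energy in-band

def detect_uas_pixels(frames: np.ndarray) -> np.ndarray:
    """frames: (T, H, W) grayscale stack. Returns a boolean (H, W) detection mask."""
    t = frames.shape[0]
    # Remove the per-pixel temporal mean so static scene content does not dominate.
    series = frames - frames.mean(axis=0, keepdims=True)
    spectrum = np.abs(np.fft.rfft(series, axis=0)) ** 2     # (T//2 + 1, H, W)
    freqs = np.fft.rfftfreq(t, d=1.0 / FRAME_RATE_HZ)
    in_band = (freqs >= BAND_HZ[0]) & (freqs <= BAND_HZ[1])
    band_energy = spectrum[in_band].sum(axis=0)
    total_energy = spectrum[1:].sum(axis=0) + 1e-12         # skip the DC bin
    return band_energy / total_energy > BAND_FRACTION_THRESHOLD
```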


International Carnahan Conference on Security Technology | 2017

Physical security assessment with convolutional neural network transfer learning

Jaclynn J. Stubbs; Gabriel C. Birch; Bryana L. Woo; Camron G. Kouhestani

Deep learning techniques have demonstrated the ability to perform a variety of object recognition tasks using visible imager data; however, deep learning has not been implemented as a means to autonomously detect and assess targets of interest in a physical security system. We demonstrate the use of transfer learning on a convolutional neural network (CNN) to significantly reduce training time while keeping detection accuracy high for physical-security-relevant targets. Unlike many detection algorithms employed by video analytics within physical security systems, this method does not rely on temporal data to construct a background scene; targets of interest can halt motion indefinitely and still be detected by the implemented CNN. A key advantage of using deep learning is the ability of a network to improve over time: periodic retraining can lead to better detection and higher confidence rates. We investigate training data size versus CNN test accuracy using physical security video data. Due to the large number of visible imagers, the significant volume of data collected daily, and the human-in-the-loop ground truth data already being generated, physical security systems present a unique environment that is well suited for analysis via CNNs. This could lead to the creation of an algorithmic element that reduces human burden and decreases the number of human-analyzed nuisance alarms.
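
As an illustration of the transfer-learning setup described above (the paper does not name a framework, backbone, or class list), the sketch below assumes PyTorch/torchvision with an ImageNet-pretrained ResNet-18, freezes the feature extractor, and trains only a replacement classification head.

```python
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 3   # placeholder classes, e.g. person / vehicle / nuisance

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze the pretrained feature extractor to keep training time short.
for param in model.parameters():
    param.requires_grad = False

# Replace the final fully connected layer with a head for security-relevant targets.
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)

# Only the new head's parameters are optimized.
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

def train_step(images: torch.Tensor, labels: torch.Tensor) -> float:
    """One optimization step on a batch of labeled security imagery."""
    model.train()
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```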


Archive | 2015

Security camera resolution measurements: Horizontal TV lines versus modulation transfer function measurements.

Gabriel C. Birch; John Clark Griffin

The horizontal television lines (HTVL) metric has been the primary quantity used by division 6000 to characterize camera resolution for high-consequence security systems. This document shows that HTVL measurements are fundamentally insufficient as a metric of camera resolution, and proposes a quantitative, standards-based methodology: measuring the camera system modulation transfer function (MTF), the most common and accepted metric of resolution in the optical science community. Because HTVL calculations are easily misinterpreted or poorly defined, we present several scenarios in which HTVL is frequently reported and discuss their problems. The MTF metric is discussed, and scenarios are presented with calculations showing the application of such a metric.
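
For context, the sketch below shows the kind of standards-based calculation the document advocates: deriving an MTF curve from a measured line spread function. It is a generic illustration with made-up numbers; a real measurement (for example, the ISO 12233 slanted-edge method) adds edge registration and oversampling steps omitted here.

```python
import numpy as np

def mtf_from_lsf(lsf: np.ndarray, pixel_pitch_mm: float):
    """Return spatial frequencies (cycles/mm) and normalized MTF from a 1-D LSF."""
    lsf = lsf / lsf.sum()                       # normalize so MTF(0) = 1
    mtf = np.abs(np.fft.rfft(lsf))
    freqs = np.fft.rfftfreq(lsf.size, d=pixel_pitch_mm)
    return freqs, mtf / mtf[0]

# Illustrative Gaussian-blur LSF sampled at a 0.005 mm pixel pitch.
x = np.arange(-32, 33) * 0.005
lsf = np.exp(-0.5 * (x / 0.01) ** 2)
freqs, mtf = mtf_from_lsf(lsf, pixel_pitch_mm=0.005)
print(f"MTF at {freqs[10]:.1f} cycles/mm: {mtf[10]:.3f}")
```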


Optics and Photonics for Information Processing XII | 2018

Optical systems for task-specific compressive classification

Gabriel C. Birch; Tu-Thach Quach; Meghan Galiardi; Amber L. Dagel; Charles F. LaCasse

Advancements in machine learning (ML) and deep learning (DL) have enabled imaging systems to perform complex classification tasks, opening numerous problem domains to solutions driven by high-quality imagers coupled with algorithmic elements. However, current ML and DL methods for target classification typically rely upon algorithms applied to data measured by traditional imagers. This design paradigm fails to let the ML and DL algorithms influence the sensing device itself, and treats the optimization of the sensor and the algorithm as separate, sequential steps. Additionally, the current paradigm narrowly investigates traditional images, and therefore traditional imaging hardware, as the primary means of data collection. We investigate alternative architectures for computational imaging systems optimized for specific classification tasks, such as digit classification. This involves a holistic approach to the design of the system, from the imaging hardware to the algorithms. Techniques for finding optimal compressive representations of training data are discussed, and the most useful object-space information is evaluated. Methods to translate task-specific compressed data representations into non-traditional computational imaging hardware are described, followed by simulations of such imaging devices coupled with algorithmic classification using ML and DL techniques. Our approach allows for inexpensive, efficient sensing systems. Reduced storage and bandwidth are also achievable, since the data representations are compressed measurements; this is especially important for high-data-volume systems.
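
A conceptual sketch of the joint sensor-plus-algorithm optimization described above, assuming PyTorch and a digit-classification task on 28x28 images; the learned linear layer stands in for the compressive measurements the optical hardware would implement, and the dimensions are illustrative.

```python
import torch
import torch.nn as nn

N_PIXELS = 28 * 28        # object-space dimension (assumed digit images)
N_MEASUREMENTS = 32       # compressed measurement count (illustrative)
N_CLASSES = 10

class CompressiveClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        # "Sensing" layer: each row is a measurement pattern the imager would realize.
        self.sense = nn.Linear(N_PIXELS, N_MEASUREMENTS, bias=False)
        self.classify = nn.Sequential(
            nn.Linear(N_MEASUREMENTS, 64), nn.ReLU(), nn.Linear(64, N_CLASSES)
        )

    def forward(self, images: torch.Tensor) -> torch.Tensor:
        measurements = self.sense(images.flatten(1))   # task-specific compressed data
        return self.classify(measurements)

model = CompressiveClassifier()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
# Training against cross-entropy optimizes the sensing patterns and the classifier
# together, rather than as separate sequential elements.
```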


International Carnahan Conference on Security Technology | 2017

Computational optical physical unclonable functions

Gabriel C. Birch; Bryana L. Woo; Charles F. LaCasse; Jaclynn J. Stubbs; Amber L. Dagel

Physical unclonable functions (PUFs) are devices that are easily probed but difficult to predict. Optical PUFs have been discussed within the literature, with traditional optical PUFs typically using spatial light modulators, coherent illumination, and scattering volumes; however, these systems can be large, expensive, and difficult to keep aligned under practical conditions. We propose and demonstrate a new kind of optical PUF based on computational imaging and compressive sensing that addresses these challenges of traditional optical PUFs. This work describes the design, simulation, and prototyping of a computational optical PUF (COPUF) that utilizes incoherent polychromatic illumination passing through an additively manufactured refracting optical polymer element. We demonstrate the ability to pass information through a COPUF using a variety of sampling methods, including compressive sensing, and we explore the sensitivity of the COPUF system. We also explore non-traditional PUF configurations enabled by the COPUF architecture. The double-COPUF system, which employs two serially connected COPUFs, is proposed and analyzed as a means to authenticate and communicate between two entities that have previously agreed to communicate. This configuration enables estimation of a message inversion key without the calculation of individual COPUF inversion keys at any point in the PUF life cycle. Our results show that it is possible to construct inexpensive optical PUFs using computational imaging. This could lead to new uses of PUFs in places where electrical PUFs cannot be utilized effectively, such as low-cost tags and seals, and potentially as authenticating and communicating devices.
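
The toy model below is not the authors' implementation; it simply treats a COPUF as an unknown but repeatable linear transfer from an illumination pattern (challenge) to a detector measurement (response), with calibration estimating an inversion key that later recovers messages. Dimensions and noise levels are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
N_IN, N_OUT = 64, 256                     # challenge and response dimensions (illustrative)
copuf = rng.normal(size=(N_OUT, N_IN))    # stands in for the physical optical element

def respond(challenge: np.ndarray) -> np.ndarray:
    """Physical measurement: repeatable transfer plus a little detector noise."""
    return copuf @ challenge + 0.01 * rng.normal(size=N_OUT)

# Calibration: probe with known challenges and fit the transfer matrix.
challenges = rng.normal(size=(N_IN, N_IN))                 # columns are challenges
responses = np.column_stack([respond(c) for c in challenges.T])
transfer_fit, *_ = np.linalg.lstsq(challenges.T, responses.T, rcond=None)
inversion_key = np.linalg.pinv(transfer_fit.T)

# Later: recover an arbitrary message vector from its measured response.
message = rng.normal(size=N_IN)
recovered = inversion_key @ respond(message)
print("relative recovery error:",
      np.linalg.norm(recovered - message) / np.linalg.norm(message))
```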


SPIE Commercial + Scientific Sensing and Imaging | 2017

Lensless computational imaging using 3D printed transparent elements

Gabriel C. Birch; Charles F. LaCasse; Amber L. Dagel; Bryana L. Woo

Lensless imaging systems have the potential to provide new capabilities at lower size and weight than traditional imaging systems. Lensless imagers frequently utilize computational imaging techniques, which move the complexity of the system away from optical subcomponents and into a calibration process whereby the measurement matrix is estimated. We report on the design, simulation, and prototyping of a lensless imaging system that utilizes a 3D printed, optically transparent, random scattering element. We present end-to-end system simulations that include the calibration process as well as the data processing algorithm used to generate an image from the raw data. These simulations utilize GPU-based raytracing software and parallelized minimization algorithms to bring complete system simulation times down to the order of seconds. Hardware prototype results are presented, and practical lessons, such as the effect of sensor noise on reconstructed image quality, are discussed. System performance metrics are proposed and evaluated to discuss image quality in a manner that is relatable to traditional image quality metrics. Various hardware instantiations are discussed.
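
The sketch below illustrates the calibrate-then-reconstruct workflow in miniature, with the 3D printed scattering element modeled as an unknown linear measurement matrix and a Tikhonov-regularized solve standing in for the parallelized minimization mentioned above. Scene size, noise level, and the regularization weight are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
SCENE_PIXELS, SENSOR_PIXELS = 16 * 16, 1024
system_matrix = rng.normal(size=(SENSOR_PIXELS, SCENE_PIXELS))   # unknown to the user

def measure(scene: np.ndarray) -> np.ndarray:
    """Sensor readout through the scattering element, with additive noise."""
    return system_matrix @ scene + 0.05 * rng.normal(size=SENSOR_PIXELS)

# Calibration: present one basis pattern (scene pixel) at a time and record the sensor
# output; the recorded columns form an estimate of the measurement matrix.
calibration = np.column_stack(
    [measure(np.eye(SCENE_PIXELS)[:, i]) for i in range(SCENE_PIXELS)]
)

def reconstruct(raw: np.ndarray, lam: float = 1.0) -> np.ndarray:
    """Tikhonov-regularized least-squares image estimate from raw sensor data."""
    a = calibration
    return np.linalg.solve(a.T @ a + lam * np.eye(SCENE_PIXELS), a.T @ raw)

scene = rng.random(SCENE_PIXELS)
estimate = reconstruct(measure(scene))
print("relative error:", np.linalg.norm(estimate - scene) / np.linalg.norm(scene))
```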


Proceedings of SPIE | 2017

Counter unmanned aerial system testing and evaluation methodology

Camron G. Kouhestani; Bryana L. Woo; Gabriel C. Birch

Unmanned aerial systems (UAS) are increasing in flight time, ease of use, and payload size. Detection, classification, tracking, and neutralization of UAS are necessary capabilities for infrastructure and facility protection. We discuss test and evaluation methodology developed at Sandia National Laboratories to establish a consistent, defendable, and unbiased means for evaluating counter unmanned aerial system (CUAS) technologies. The test approach described identifies test strategies, performance metrics, UAS types tested, key variables, and the data analysis necessary to accurately quantify the capabilities of CUAS technologies. The tests conducted under this approach allow for the determination of quantifiable limitations, strengths, and weaknesses in terms of detection, tracking, classification, and neutralization. Communicating the results of this testing informs decisions by government sponsors and stakeholders and can be used to guide future investments and inform procurement, deployment, and advancement of such systems into their specific venues.
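
As a hypothetical example of the data analysis such testing supports (the trial counts are made up, not results from the paper), a detection probability estimated from repeated sorties can be reported with a binomial confidence interval:

```python
import math

def wilson_interval(successes: int, trials: int, z: float = 1.96):
    """95% Wilson score interval for a binomial proportion such as probability of detection."""
    p = successes / trials
    denom = 1.0 + z ** 2 / trials
    center = (p + z ** 2 / (2 * trials)) / denom
    margin = (z / denom) * math.sqrt(p * (1 - p) / trials + z ** 2 / (4 * trials ** 2))
    return center - margin, center + margin

detections, sorties = 18, 20   # illustrative counts only
low, high = wilson_interval(detections, sorties)
print(f"Pd = {detections / sorties:.2f}, 95% CI [{low:.2f}, {high:.2f}]")
```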


Archive | 2015

3D Imaging with Structured Illumination for Advanced Security Applications

Gabriel C. Birch; Amber L. Dagel; Brian A. Kast; Collin S. Smith

Three-dimensional (3D) information in a physical security system is a highly useful discriminator. The two-dimensional data from an imaging system fails to provide target distance and a three-dimensional motion vector, both of which can be used to reduce nuisance alarm rates and increase system effectiveness. However, 3D imaging devices designed primarily for use in physical security systems are uncommon. This report discusses an architecture favorable to physical security systems: an inexpensive snapshot 3D imaging system utilizing a simple illumination system. The method of acquiring 3D data, tests to understand illumination design, and software modifications possible to maximize information gathering capability are discussed.
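
The report's specific illumination pattern and reconstruction are not reproduced here; as general context, snapshot structured-illumination systems typically recover depth by triangulation between the projector and camera, as in this generic relation:

```python
def depth_from_disparity(baseline_m: float, focal_px: float, disparity_px: float) -> float:
    """Depth in meters from projector-camera baseline, focal length (pixels), and disparity (pixels)."""
    return baseline_m * focal_px / disparity_px

# Illustrative numbers: 0.10 m baseline, 1400 px focal length, 35 px disparity -> 4.0 m range.
print(depth_from_disparity(0.10, 1400.0, 35.0))
```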

Collaboration


Dive into Gabriel C. Birch's collaborations.

Top Co-Authors

Bryana L. Woo (Sandia National Laboratories)
Amber L. Dagel (Sandia National Laboratories)
Charles F. LaCasse (Sandia National Laboratories)
Jaclynn J. Stubbs (Sandia National Laboratories)
Camron G. Kouhestani (Sandia National Laboratories)
Collin S. Smith (Sandia National Laboratories)
John Clark Griffin (Sandia National Laboratories)
Amber Lynn Young (Sandia National Laboratories)
Andres L. Sanchez (Sandia National Laboratories)
Haley Knapp (Sandia National Laboratories)