
Publication


Featured research published by Adam R. Nolan.


Proceedings of SPIE | 2012

Performance estimation of SAR using NIIRS techniques

Adam R. Nolan; G. S. Goley; Michael Bakich

In this paper we present an overview of the National Imagery Interpretability Rating Scale (NIIRS) for SAR imagery. We map basic SAR image formation parameters into the NIIRS via an information-theoretic framework. Preliminary results from a pilot study are presented for human interpretability of various SAR images. Extensions to this work, including sensor exploitation algorithms and integration within the Pursuer environment, are outlined.
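The abstract describes mapping image formation parameters to NIIRS. As a minimal illustration of that kind of mapping (not the paper's information-theoretic framework), the sketch below uses the common rule of thumb that the rating changes by roughly one level per factor-of-two change in ground sample distance; the reference GSD and rating are hypothetical calibration constants.

```python
import math

def predicted_niirs(gsd_m, ref_gsd_m=1.0, ref_niirs=5.0):
    """Illustrative NIIRS predictor: the rating drops roughly one level
    each time ground sample distance (GSD, meters) doubles.
    ref_gsd_m and ref_niirs are hypothetical calibration constants."""
    return ref_niirs - math.log2(gsd_m / ref_gsd_m)
```

Halving the GSD raises the predicted rating by one level; a real predictor would also account for noise, resolution kernel, and collection geometry.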


Proceedings of SPIE | 2014

Laser vibrometry exploitation for vehicle identification

Adam R. Nolan; Andrew Lingg; Steve Goley; Kevin Sigmund; Scott Kangas

Vibration signatures sensed from distant vehicles using laser vibrometry systems provide valuable information that may be used to help identify key vehicle features such as engine type, engine speed, and number of cylinders. Through the use of physics models of the vibration phenomenology, features are chosen to support classification algorithms. Various individual exploitation algorithms were developed using these models to classify vibration signatures into engine type (piston vs. turbine), engine configuration (Inline 4 vs. Inline 6 vs. V6 vs. V8 vs. V12), and vehicle type. The results of these algorithms are presented for an 8-class problem. Finally, the benefit of using a factor graph representation to link these independent algorithms into a classification hierarchy for the vibration exploitation problem is presented.
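The paper links independent exploitation algorithms through a factor graph. As a toy sketch of the simplest such fusion (a pure product of independent factors, not the paper's full hierarchy; the class names and likelihood values are hypothetical):

```python
def fuse_likelihoods(classes, factor_outputs):
    """Fuse independent classifier outputs by multiplying their per-class
    likelihoods and renormalizing -- the degenerate factor-graph product.
    factor_outputs: list of dicts mapping class -> likelihood."""
    scores = {c: 1.0 for c in classes}
    for out in factor_outputs:
        for c in classes:
            scores[c] *= out.get(c, 1e-9)  # small floor for unseen classes
    total = sum(scores.values())
    return {c: s / total for c, s in scores.items()}

# Hypothetical outputs of two independent exploitation algorithms.
engine_type = {"piston": 0.8, "turbine": 0.2}
engine_cfg  = {"piston": 0.6, "turbine": 0.4}
posterior = fuse_likelihoods(["piston", "turbine"], [engine_type, engine_cfg])
```

A real factor graph would additionally encode which variables each algorithm observes and pass messages between them rather than assuming full independence.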


SAE transactions | 1994

Generalization of an Automated Visual Inspection System (AVIS)

Bryan Everding; Adam R. Nolan; William G. Wee

Efforts have been made to utilize AI constructs to identify flaws in the Space Shuttle Main Engine (SSME) faceplate regions. To expand the applicability of these algorithms to a larger problem domain, the automated visual inspection system (AVIS) has been modified to enable a user with little or no image processing background to define a system capable of identifying flaws in a given set of imagery. The user simply identifies flawed regions; the selection of processing steps and feature descriptors is performed automatically. This paper explicates the motivations, definitions, and performance issues associated with the AVIS paradigm.
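The key idea above is that the user only labels flawed regions and the system picks discriminative features automatically. A minimal sketch of that selection step, assuming hypothetical per-patch feature extractors (real AVIS descriptors differ):

```python
from statistics import mean, pstdev

# Hypothetical per-patch feature extractors; a patch is a flat list of pixels.
FEATURES = {
    "mean_intensity": lambda patch: mean(patch),
    "intensity_spread": lambda patch: pstdev(patch),
}

def select_feature(flaw_patches, clean_patches):
    """Score each feature by a Fisher-like separation between user-labeled
    flaw and clean patches; return the most discriminative feature name."""
    def separation(fn):
        f = [fn(p) for p in flaw_patches]
        c = [fn(p) for p in clean_patches]
        spread = pstdev(f) + pstdev(c) + 1e-9  # avoid division by zero
        return abs(mean(f) - mean(c)) / spread
    return max(FEATURES, key=lambda name: separation(FEATURES[name]))
```

Given bright flaw patches and dark clean ones, the mean-intensity feature wins; the chosen feature would then drive the downstream classifier.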


Image Understanding and the Man-Machine Interface III | 1991

X-ray inspection utilizing knowledge-based feature isolation with a neural network classifier

Adam R. Nolan; Yong-Lin Hu; William G. Wee

This paper describes a generalized flaw detection scheme for a molded and machined turbine blade. The data used are radiograph images. Based on knowledge of the molding and machining process, selective features may be isolated and classified for each possible flaw candidate. The proposed classification system requires the incorporation of many smaller pattern recognition systems. Several of these pattern recognition subsystems have been developed and implemented. Described is the implementation of one such subsystem whose characteristics are best realized using a back-propagation neural network. The results of the network are compared with other classification schemes (K-nearest-neighbor and Bayes classifiers).
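The paper benchmarks the network against a K-nearest-neighbor classifier, among others. A minimal KNN baseline of the kind compared against (feature vectors and labels here are hypothetical):

```python
def knn_classify(train, query, k=3):
    """K-nearest-neighbor classifier over feature vectors.
    train: list of (feature_vector, label) pairs."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    neighbors = sorted(train, key=lambda ex: sq_dist(ex[0], query))[:k]
    labels = [label for _, label in neighbors]
    return max(set(labels), key=labels.count)  # majority vote
```

In the paper's setting the feature vectors would be the isolated flaw-candidate features extracted from the radiographs.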


Proceedings of SPIE | 2012

Performance modeling of a feature-aided tracker

G. Steven Goley; Adam R. Nolan

In order to provide actionable intelligence in a layered sensing paradigm, exploitation algorithms should produce a confidence estimate in addition to the inference variable. This article presents a methodology and results of one such algorithm for feature-aided tracking of vehicles in wide area motion imagery. To perform experiments, a synthetic environment was developed, which provided explicit knowledge of ground truth, tracker prediction accuracy, and control of operating conditions. This synthetic environment leveraged physics-based modeling simulations to re-create traffic flow, vehicle reflectance, obscuration, and shadowing. With the ability to control operating conditions as well as the availability of ground truth, several experiments were conducted to test both the tracker and expected performance. The results show that the performance model produces a meaningful estimate of the tracker performance over the subset of operating conditions.
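The abstract's design point is that an exploitation algorithm should report a confidence alongside its inference variable. A minimal sketch of that interface (the score normalization here is an assumption, not the article's method):

```python
def classify_with_confidence(scores):
    """Return the inference variable (argmax class) together with a
    confidence estimate, here simply its normalized score.
    scores: dict mapping class -> non-negative match score."""
    total = sum(scores.values())
    label = max(scores, key=scores.get)
    return label, scores[label] / total
```

Downstream consumers (e.g. a tracker's association step) can then weight each report by its confidence instead of treating all reports equally.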


European Conference on Parallel Processing | 1995

Polynomial Time Scheduling of Low Level Computer Vision Algorithms on Networks of Heterogeneous Machines

Adam R. Nolan; Bryan Everding

Defining an optimal schedule for arbitrary algorithms on a network of heterogeneous machines is an NP-complete problem. This paper focuses on data-parallel deterministic neighborhood computer vision algorithms. This focus enables the polynomial-time definition of a schedule which minimizes the distributed execution time by overlapping computation and communication cycles on the network. The scheduling model allows machines of any speed to participate in the concurrent computation but assumes a master/slave control mechanism using a linear communication network. Several vision algorithms are presented and described in terms of the scheduling model parameters. The theoretical speedup of these algorithms is discussed, and empirical data is presented and compared to theoretical results.
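The core of such a schedule is partitioning a data-parallel image operation across machines of unequal speed so that all finish together. A toy sketch of that partitioning step, which ignores the communication overlap and master/slave distribution costs the paper models:

```python
def partition_rows(total_rows, speeds):
    """Split the image rows of a data-parallel operation across
    heterogeneous machines in proportion to their speeds, so every
    machine finishes its chunk at roughly the same time.
    speeds: relative processing rates, one per machine."""
    total_speed = sum(speeds)
    rows = [int(total_rows * s / total_speed) for s in speeds]
    rows[0] += total_rows - sum(rows)  # give the rounding remainder to machine 0
    return rows
```

The paper's full model additionally chooses chunk ordering so each machine's transfer hides behind another's computation on the linear network.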


Conference on Computer Architectures for Machine Perception | 1995

Scheduling of low level computer vision algorithms on networks of heterogeneous machines

Adam R. Nolan; Bryan Everding; William G. Wee

Defining an optimal schedule for arbitrary algorithms on a network of heterogeneous machines is an NP-complete problem. By focusing on data-parallel deterministic neighborhood computer vision algorithms, a minimum-time schedule can be defined in polynomial time. The scheduling model allows machines of any speed to participate in the concurrent computation but assumes a master/slave control mechanism using a linear communication network. Several vision algorithms are presented which adhere to the scheduling model. The theoretical speedup of these algorithms is discussed, and empirical data is presented and compared to theoretical results.


Proceedings of SPIE | 2017

A novel latent Gaussian copula framework for modeling spatial correlation in quantized SAR imagery with applications to ATR

Brian Thelen; Ismael J. Xique; Joseph W. Burns; G. Steven Goley; Adam R. Nolan; Jonathan W. Benson

With all of the new remote sensing modalities available, and with ever increasing capabilities and frequency of collection, there is a desire to fundamentally understand and quantify the information content in the collected image data relative to various exploitation goals, such as detection/classification. A fundamental approach for this is the framework of Bayesian decision theory, but a daunting challenge is to have sufficiently flexible and accurate multivariate models for the features and/or pixels that capture a wide assortment of distributions and dependencies. In addition, data can come in both continuous and discrete representations, where the latter is often generated based on considerations of robustness to imaging conditions and occlusions/degradations. In this paper we propose a novel suite of "latent" models fundamentally based on multivariate Gaussian copula models that can be used for quantized data from SAR imagery. For this Latent Gaussian Copula (LGC) model, we derive an approximate maximum-likelihood estimation algorithm and demonstrate very reasonable estimation performance even for larger images with many pixels. However, applying these LGC models to large dimensions/images within Bayesian decision/classification theory is infeasible due to the computational/numerical issues in evaluating the true full likelihood, and we propose an alternative class of novel pseudo-likelihood detection statistics that are computationally feasible. We show in a few simple examples that these statistics have the potential to provide very good and robust detection/classification performance. This framework is demonstrated on a simulated SLICY data set, and the results show the importance of modeling the dependencies and of utilizing the pseudo-likelihood methods.
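To make the latent-Gaussian-copula idea concrete, here is a toy two-pixel generator: correlated latent normals are pushed through the normal CDF to uniform margins and then quantized into discrete levels. This is only an illustration of the model structure, not the paper's estimation algorithm; the correlation and level count are arbitrary.

```python
import math, random

def phi(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def sample_quantized_pair(rho, levels=4, rng=random):
    """Draw one pixel pair from a latent Gaussian copula with latent
    correlation rho, then quantize each uniform margin into `levels`
    equal bins -- a toy version of the LGC model for quantized pixels."""
    z1 = rng.gauss(0.0, 1.0)
    z2 = rho * z1 + math.sqrt(1.0 - rho * rho) * rng.gauss(0.0, 1.0)
    u1, u2 = phi(z1), phi(z2)
    return (min(int(u1 * levels), levels - 1),
            min(int(u2 * levels), levels - 1))
```

Higher latent correlation makes the two quantized pixels land in the same bin more often, which is exactly the spatial dependence the LGC model is meant to capture.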


Proceedings of SPIE | 2017

Divergences and estimating tight bounds on Bayes error with applications to multivariate Gaussian copula and latent Gaussian copula

Brian J. Thelen; Ismael J. Xique; Joseph W. Burns; G. Steven Goley; Adam R. Nolan; Jonathan W. Benson

In Bayesian decision theory, there has been a great amount of research into theoretical frameworks and information-theoretic quantities that can be used to provide lower and upper bounds for the Bayes error. These include well-known bounds such as the Chernoff, Bhattacharyya, and J-divergence bounds. Part of the challenge of utilizing these various metrics in practice is (i) whether they are "loose" or "tight" bounds, (ii) how they might be estimated via either parametric or non-parametric methods, and (iii) how accurate the estimates are for limited amounts of data. In general, what is desired is a methodology for generating relatively tight lower and upper bounds, and then an approach to estimate these bounds efficiently from data. In this paper, we explore the so-called triangle divergence, which has been around for a while but was recently made more prominent in research on non-parametric estimation of information metrics. Part of this work is motivated by applications for quantifying fundamental information content in SAR/LIDAR data; to help in this, we have developed a flexible multivariate modeling framework based on multivariate Gaussian copula models, which can be combined with the triangle divergence framework to quantify this information and provide approximate bounds on the Bayes error. In this paper we present an overview of the bounds, including those based on the triangle divergence, and verify that under a number of multivariate models the upper and lower bounds derived from the triangle divergence are significantly tighter than the other common bounds, often dramatically so. We also propose some simple but effective means for computing the triangle divergence using Monte Carlo methods, and then discuss estimation of the triangle divergence from empirical data based on Gaussian copula models.
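A Monte Carlo estimator of the kind the abstract mentions can be sketched directly. Writing the triangle divergence as D(p, q) = ½∫(p − q)²/(p + q), sampling from the mixture m = (p + q)/2 gives D = E_m[((p − q)/(p + q))²]. From the identity 1 − D = 2∫pq/(p + q) and pq/(p + q) being between min(p, q)/2 and min(p, q), one elementary pair of equal-prior Bayes-error bounds follows: (1 − D)/4 ≤ Pe ≤ (1 − D)/2. These are not necessarily the tighter bounds developed in the paper; the Gaussian test densities below are illustrative.

```python
import math, random

def gauss_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def triangle_divergence_mc(p, q, sample_p, sample_q, n=5000, rng=random):
    """Monte Carlo estimate of D(p, q) = 0.5 * integral (p - q)^2 / (p + q),
    sampling x from the mixture (p + q)/2, under which
    D = E[((p(x) - q(x)) / (p(x) + q(x)))^2]."""
    total = 0.0
    for _ in range(n):
        x = sample_p(rng) if rng.random() < 0.5 else sample_q(rng)
        px, qx = p(x), q(x)
        total += ((px - qx) / (px + qx)) ** 2
    return total / n
```

For two well-separated unit-variance Gaussians the estimate approaches 1 (Bayes error near 0); for identical densities it is 0, giving the trivial equal-prior bracket [¼, ½].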


Proceedings of SPIE | 2016

ATR performance modeling concepts

Timothy D. Ross; Hyatt Baker; Adam R. Nolan; Ryan E. McGinnis; Christopher R. Paulson

Performance models are needed for automatic target recognition (ATR) development and use. ATRs consume sensor data and produce decisions about the scene observed. ATR performance models (APMs) on the other hand consume operating conditions (OCs) and produce probabilities about what the ATR will produce. APMs are needed for many modeling roles of many kinds of ATRs (each with different sensing modality and exploitation functionality combinations); moreover, there are different approaches to constructing the APMs. Therefore, although many APMs have been developed, there is rarely one that fits a particular need. Clarified APM concepts may allow us to recognize new uses of existing APMs and identify new APM technologies and components that better support coverage of the needed APMs. The concepts begin with thinking of ATRs as mapping OCs of the real scene (including the sensor data) to reports. An APM is then a mapping from explicit quantized OCs (represented with less resolution than the real OCs) and latent OC distributions to report distributions. The roles of APMs can be distinguished by the explicit OCs they consume. APMs used in simulations consume the true state that the ATR is attempting to report. APMs used online with the exploitation consume the sensor signal and derivatives, such as match scores. APMs used in sensor management consume neither of those, but estimate performance from other OCs. This paper will summarize the major building blocks for APMs, including knowledge sources, OC models, look-up tables, analytical and learned mappings, and tools for signal synthesis and exploitation.
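Among the APM building blocks the abstract lists is the look-up table: a mapping from quantized operating conditions to report probabilities. A minimal sketch of that building block (the OC names and probabilities here are hypothetical placeholders, not values from the paper):

```python
# Look-up-table APM: quantized operating conditions -> modeled probability
# that the ATR reports correctly. All entries are hypothetical.
APM_TABLE = {
    ("clear", "broadside"): 0.95,
    ("clear", "frontal"): 0.85,
    ("cluttered", "broadside"): 0.70,
    ("cluttered", "frontal"): 0.55,
}

def predict_pcc(weather, aspect, default=0.5):
    """Return the modeled probability of correct classification for the
    quantized OC pair, falling back to a prior for unseen conditions."""
    return APM_TABLE.get((weather, aspect), default)
```

A sensor-management role would query such a table with forecast OCs before tasking a collection, whereas an online role would condition on observed match scores instead.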

Collaboration


Dive into Adam R. Nolan's collaborations.

Top Co-Authors

William G. Wee (University of Cincinnati)
Bryan Everding (University of Cincinnati)
Andrew Lingg (Wright State University)
Ismael J. Xique (Michigan Technological University)
Joseph W. Burns (Michigan Technological University)
Yong-Lin Hu (University of Cincinnati)
Brian J. Thelen (Michigan Technological University)
Brian Thelen (Michigan Technological University)