Gregory Arnold
Air Force Research Laboratory
Publications
Featured research published by Gregory Arnold.
IEEE Transactions on Pattern Analysis and Machine Intelligence | 2009
Matthew Ferrara; Gregory Arnold; Mark Stuff
This paper describes an invariant-based shape- and motion-reconstruction algorithm for 3D-to-1D orthographically projected range data taken from unknown viewpoints. The algorithm exploits the object-image relation that arises in echo-based range data and represents a simplification and unification of previous work in the literature. Unlike a previously proposed approach, this method does not require uniqueness constraints, which makes its algorithmic form independent of the translation-removal process (centroid removal, range alignment, etc.). The new algorithm, which incorporates every projection simultaneously and requires no initialization in the optimization process, uses fewer calculations and is more straightforward than the previous approach. Additionally, the new algorithm is shown to be the natural extension of the approach developed by Tomasi and Kanade for 3D-to-2D orthographically projected data and is applied to a realistic inverse synthetic aperture radar imaging scenario, as well as to experiments with varying amounts of aperture diversity and noise.
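The paper's algorithm is not reproduced here, but its stated connection to Tomasi and Kanade suggests the classical rank-3 factorization step, adapted to 1D range projections. Below is a minimal numpy sketch under that assumption; the synthetic viewing directions and points are illustrative, and the final Euclidean metric-upgrade step is omitted.

```python
import numpy as np

def factor_range_projections(W):
    """Factor a K-view x N-point matrix of 1D orthographic range
    projections into viewing directions and 3D shape, up to an
    invertible 3x3 ambiguity (Tomasi-Kanade-style rank-3 factorization)."""
    # Center each row: removes the per-view translation, so the result
    # does not depend on how translation was removed upstream.
    W0 = W - W.mean(axis=1, keepdims=True)
    # Rank-3 truncated SVD: W0 ~= U3 S3 V3^T.
    U, s, Vt = np.linalg.svd(W0, full_matrices=False)
    S3_sqrt = np.diag(np.sqrt(s[:3]))
    motion = U[:, :3] @ S3_sqrt      # K x 3: projection directions (up to A)
    shape = S3_sqrt @ Vt[:3, :]      # 3 x N: point cloud (up to A^{-1})
    return motion, shape

# Synthetic check: 40 views of 25 random 3D points, plus per-view offsets.
rng = np.random.default_rng(0)
X = rng.normal(size=(3, 25))                         # ground-truth shape
dirs = rng.normal(size=(40, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)  # unit viewing directions
W = dirs @ X + rng.normal(size=(40, 1))              # 1D projections + offsets
M, S = factor_range_projections(W)
print(np.linalg.matrix_rank(W - W.mean(axis=1, keepdims=True)))  # 3
```

Note how row-centering absorbs the unknown per-view offsets, which is consistent with the abstract's claim of independence from the translation-removal process.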
IEEE Radar Conference | 2004
Mark Stuff; Martin Biancalana; Gregory Arnold; Joseph Garbarino
General Dynamics Advanced Information Systems (GDAIS), supported by the U.S. Air Force, has been investigating the exploitation of moving targets whose returns are captured by conventional SAR systems. The result is a processing system, called Three-Dimensional Motion and Geometric Information (3DMAGI), that can extract the detailed 3D motions of a moving object. This paper reports on work done with a full volume of data from the National Ground Intelligence Center (NGIC) and vehicle trajectories measured by an inertial system on a moving vehicle. Its goal is to determine how best to use the rich data available from advanced processing to produce images and image products that will simplify the task of exploiting the radar image. The data and sample trajectory are described, as well as how they are used to emulate the result of 3DMAGI processing. The work consists of investigations into methods of creating a 3D data volume that matches the NGIC chamber collection, starting from a small subset defined by the data surface that lies in the full volume. The first question posed is how much extrapolation is needed to obtain acceptable results; from there, the question of which methods yield the best results is examined. Limitations of the various methods are explained with examples, and each extrapolation method is compared to the original data volume to indicate progress toward the goal.
Proceedings of SPIE, the International Society for Optical Engineering | 2007
Y. S. Bhat; Gregory Arnold
Understanding and organizing data, in particular understanding the key modes of variation in the data, is a first step toward exploiting and evaluating sensor phenomenology. Spectral theory and manifold learning methods have recently been shown to offer several powerful tools for many parts of the exploitation problem. We will describe the method of diffusion maps and give some examples with radar (backhoe data dome) data. The so-called diffusion coordinates arise from a kernel-based dimensionality-reduction technique that can, for example, organize random data and yield explicit insight into the type and relative importance of the data variation. We will provide sufficient background for others to adopt these tools and apply them to other aspects of exploitation and evaluation.
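For readers adopting these tools, here is a minimal numpy sketch of the basic diffusion-map construction (Gaussian kernel, Markov normalization, spectral embedding). The kernel bandwidth and the toy circle data are illustrative assumptions, not the paper's backhoe radar data.

```python
import numpy as np

def diffusion_map(X, eps, n_coords=2, t=1):
    """Minimal diffusion-map embedding of points X (n x d)."""
    # Gaussian kernel on pairwise squared distances.
    sq = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    K = np.exp(-sq / eps)
    d = K.sum(axis=1)
    # Conceptually we eigendecompose the Markov matrix P = D^{-1} K;
    # the symmetric conjugate S = D^{-1/2} K D^{-1/2} shares its spectrum
    # and is numerically better behaved.
    S = K / np.sqrt(np.outer(d, d))
    vals, vecs = np.linalg.eigh(S)
    idx = np.argsort(vals)[::-1]            # descending eigenvalues
    vals, vecs = vals[idx], vecs[:, idx]
    psi = vecs / np.sqrt(d)[:, None]        # right eigenvectors of P
    # Skip the trivial constant eigenvector; scale by lambda^t.
    return psi[:, 1:n_coords + 1] * vals[1:n_coords + 1] ** t

# Toy example: a noisy circle embedded into 2 diffusion coordinates.
rng = np.random.default_rng(1)
theta = np.linspace(0, 2 * np.pi, 200, endpoint=False)
X = np.c_[np.cos(theta), np.sin(theta)] + 0.05 * rng.normal(size=(200, 2))
Y = diffusion_map(X, eps=0.2)
print(Y.shape)  # (200, 2)
```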
Archive | 2006
Gregory Arnold; Peter F. Stiller; Kirk Sturtz
Generalized weak perspective is a common camera model describing the geometric projection for many common scenarios (e.g., 3D to 2D). This chapter describes a metric constructed for comparing (matching) configurations of object features to configurations of image features that is invariant to any affine transformation of the object or image. The natural descriptors are the Plücker coordinates, because the Grassmann manifold is the natural shape space for invariance of point features under affine transformations in either the object or the image. The object-image equations detail the relation between the object descriptors and the image descriptors, and an algorithm is provided to compute the distances for all cases.
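The chapter's metric handles the full 3D-object-to-2D-image comparison, which is not reproduced here. As a simplified illustration of the underlying idea, the sketch below compares two same-dimensional point configurations by the principal angles between their affine-shape subspaces, a standard Grassmann-manifold distance; the configuration size and data are assumptions.

```python
import numpy as np

def affine_shape(P):
    """Orthonormal basis for the affine shape of a configuration P (n x d):
    the column space of the centered coordinates (assumes P is full rank)."""
    Q, _ = np.linalg.qr(P - P.mean(axis=0))   # centering mods out translation
    return Q

def grassmann_distance(P1, P2):
    """Distance between two configurations as the principal angles between
    their shape subspaces: a metric on the Grassmann manifold, invariant
    to any invertible affine transformation of either configuration."""
    Q1, Q2 = affine_shape(P1), affine_shape(P2)
    sv = np.linalg.svd(Q1.T @ Q2, compute_uv=False)
    angles = np.arccos(np.clip(sv, -1.0, 1.0))
    return np.linalg.norm(angles)

# An affine image of a configuration is at distance ~0 from the original.
rng = np.random.default_rng(2)
P = rng.normal(size=(8, 2))                   # 8 labeled 2D features
A, b = rng.normal(size=(2, 2)), rng.normal(size=2)
print(grassmann_distance(P, P @ A.T + b))     # ~0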
Proceedings of SPIE | 2010
Olga Mendoza-Schrock; James Patrick; Gregory Arnold; Matthew Ferrara
Understanding and organizing data is the first step toward exploiting sensor phenomenology for dismount tracking. What image features are good for distinguishing people, and what measurements, or combination of measurements, can be used to classify the dataset by demographics including gender, age, and race? A particular technique, Diffusion Maps, has demonstrated the potential to extract features that intuitively make sense [1]. We want to develop an understanding of this tool by validating existing results on the Civilian American and European Surface Anthropometry Resource (CAESAR) database. This database, provided by the Air Force Research Laboratory (AFRL) Human Effectiveness Directorate and SAE International, is a rich dataset that includes 40 traditional anthropometric measurements of 4400 human subjects. If we can identify the defining features for classification from this database, the next question will be to determine a subset of these features that can be measured from imagery. This paper briefly describes the Diffusion Map technique, shows its potential for dimension reduction of the CAESAR database, and describes interesting problems to be explored further.
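The CAESAR data itself cannot be reproduced here, so the sketch below substitutes a random 4400 x 40 matrix of the same shape and uses scikit-learn's SpectralEmbedding (Laplacian eigenmaps, a close relative of diffusion maps) to show what such a dimension-reduction pipeline looks like.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.manifold import SpectralEmbedding

# Hypothetical stand-in for the CAESAR measurements: 4400 subjects x 40
# anthropometric measurements (the real database is not reproduced here).
rng = np.random.default_rng(3)
measurements = rng.normal(size=(4400, 40))

# Standardize so no single measurement (e.g., stature in mm) dominates.
Z = StandardScaler().fit_transform(measurements)

# Laplacian-eigenmap embedding into 3 coordinates per subject.
embedding = SpectralEmbedding(n_components=3, affinity="nearest_neighbors",
                              n_neighbors=15, random_state=0)
coords = embedding.fit_transform(Z)
print(coords.shape)  # (4400, 3)
```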
Siam Journal on Imaging Sciences | 2009
Matthew Ferrara; Gregory Arnold
This paper presents a new approach for reconstructing both shape and motion from data collected by echo-based ranging sensors. The approach is based on geometric invariant theory and exploits object-image relations for near-field (spherical-wavefront) range data. These object-image equations relate the data to a unique matrix of Euclidean invariants that completely describe the object shape. The object-image relations can be used to determine the shape of a scene viewed from unknown vantage points. Specifically, the object-image equations form a linear system of equations whose solution determines the relevant shape parameters for a configuration of features within the scene. Once the shape parameters are estimated, a single shape exemplar from that point in shape space can be used to determine the relative motion (up to an arbitrary rotation) between the sensor and the object. One advantage of this motion-estimation approach is that the geometric-invariant-based strategy allows us to uniquely solve the optimization problem without the need to introduce coordinate-system-dependent “nuisance” parameters. The theorems stated in this paper hold for any range-measurement sensor scenario. As an example of the utility of the given theorems, the object-image relations are used to augment noisy GPS measurements in a circular synthetic aperture radar geometry.
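The paper's object-image equations are not reproduced here; the sketch below illustrates only the final step they feed, namely that a Gram-type matrix of Euclidean invariants (inner products of centered points) pins down shape up to an arbitrary rotation. The test configuration is an assumption.

```python
import numpy as np

def shape_from_invariants(G, dim=3):
    """Recover point coordinates (up to an arbitrary rotation) from the
    Gram matrix G of centered points: the Euclidean-invariant description."""
    vals, vecs = np.linalg.eigh(G)
    vals, vecs = vals[::-1][:dim], vecs[:, ::-1][:, :dim]
    return vecs * np.sqrt(np.clip(vals, 0, None))   # n x dim coordinates

# A Gram matrix is unchanged by rotating the underlying points, so the
# reconstruction is a rotation-free representative of the shape.
rng = np.random.default_rng(4)
X = rng.normal(size=(10, 3))
Xc = X - X.mean(axis=0)
G = Xc @ Xc.T                      # matrix of Euclidean invariants
Y = shape_from_invariants(G)
print(np.allclose(Y @ Y.T, G))     # True: same invariants, same shape
```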
National Aerospace and Electronics Conference | 2010
James Patrick; Hamilton Scott Clouse; Olga Mendoza-Schrock; Gregory Arnold
Understanding and organizing data is the first step toward exploiting sensor phenomenology. What features are good for distinguishing people, and what measurements, or combination of measurements, can be used to classify people by demographic characteristics including gender? Dimension-reduction techniques such as Diffusion Maps, which extract features that intuitively make sense [1], and Principal Component Analysis (PCA) have demonstrated the potential to aid in extracting such features. This paper briefly describes the Diffusion Map technique and PCA. More importantly, it compares two different classifiers, K-Nearest Neighbors (KNN) and Adaptive Boosting (AdaBoost), for gender classification using these two dimension-reduction techniques. The results are compared on the Civilian American and European Surface Anthropometry Resource Project (CAESAR) database, provided by the Air Force Research Laboratory (AFRL) Human Effectiveness Directorate and SAE International. For completeness, we also compare the results described herein with those of other classification work performed on the same dataset.
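A hedged sketch of such a comparison using scikit-learn follows, with a random stand-in for the CAESAR measurements and gender labels; the real data and the paper's exact settings (neighbor counts, component counts, boosting rounds) are assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical stand-in: 1000 subjects x 40 measurements, binary label.
rng = np.random.default_rng(5)
X = rng.normal(size=(1000, 40))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=1000)) > 0

# Compare KNN and AdaBoost on the same PCA-reduced features.
for name, clf in [("KNN", KNeighborsClassifier(n_neighbors=5)),
                  ("AdaBoost", AdaBoostClassifier(n_estimators=100))]:
    pipe = make_pipeline(StandardScaler(), PCA(n_components=10), clf)
    scores = cross_val_score(pipe, X, y, cv=5)
    print(f"{name}: {scores.mean():.3f} +/- {scores.std():.3f}")
```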
Multisensor, Multisource Information Fusion: Architectures, Algorithms, and Applications 2008 | 2008
Olga L. Mendoza; Gregory Arnold; Peter F. Stiller
An object-image metric is an extension of standard metrics in that it is constructed for matching and comparing configurations of object features to configurations of image features. For the generalized weak perspective camera, it is invariant to any affine transformation of the object or the image. Recent research in the exploitation of the object-image metric suggests new approaches to Automatic Target Recognition (ATR). This paper explores the object-image metric and its limitations. Through a series of experiments, we specifically seek to understand how the object-image metric could be applied to the image registration problem, an enabling technology for ATR.
Proceedings of SPIE | 2001
Gregory Arnold; Kirk Sturtz; Isaac Weiss
Object-image relations (O-IRs) provide a powerful approach to performing detection and recognition with laser radar (LADAR) sensors. This paper presents the basics of O-IRs and shows how they are derived from invariants. It also explains, and shows results of, a computationally efficient approach applying covariants to 3D LADAR data. The approach is especially appealing because the detection and segmentation processes are integrated with recognition into a robust algorithm. Finally, the method provides a straightforward approach to handling articulation and multi-scale decomposition.
Proceedings of SPIE | 2009
Peter F. Stiller; Gregory Arnold; Matthew Ferrara
The ability to reconstruct the three-dimensional (3D) shape of an object from multiple images of that object is an important step in certain computer vision and object recognition tasks. The images in question can range from 2D optical images to 1D radar range profiles. In each case, the goal is to use the information (primarily invariant geometric information) contained in several images to reconstruct the 3D data. In this paper we apply a blend of geometric, computational, and statistical techniques to reconstruct the 3D geometry, specifically the shape, from multiple images of an object. In particular, we deal with a collection of feature points that have been tracked from image (or range profile) to image (or range profile), and we reconstruct the 3D point cloud up to certain transformations: affine transformations in the case of our optical sensor and rigid motions (translations and rotations) in the radar case. Our paper discusses the theory behind the method, outlines the computational algorithm, and illustrates the reconstruction for some simple examples.
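The reconstruction pipeline itself blends machinery not reproduced here; the sketch below shows only the "up to a rigid motion" equivalence relevant to the radar case, using the standard SVD-based (Kabsch) alignment on synthetic points.

```python
import numpy as np

def rigid_align(X, Y):
    """Best rigid motion (rotation R, translation t) mapping point cloud X
    onto Y, via the SVD-based orthogonal Procrustes / Kabsch solution."""
    muX, muY = X.mean(axis=0), Y.mean(axis=0)
    U, _, Vt = np.linalg.svd((X - muX).T @ (Y - muY))
    # Correct an improper solution so R is a rotation, not a reflection.
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    return R, muY - R @ muX

# Two clouds that differ only by a rigid motion align to zero residual.
rng = np.random.default_rng(6)
X = rng.normal(size=(30, 3))
a = 0.7
Rz = np.array([[np.cos(a), -np.sin(a), 0.0],
               [np.sin(a),  np.cos(a), 0.0],
               [0.0, 0.0, 1.0]])
Y = X @ Rz.T + np.array([1.0, -2.0, 0.5])
R, t = rigid_align(X, Y)
print(np.allclose(X @ R.T + t, Y))  # True
```

Under this equivalence, two reconstructions of the same scene from different vantage points are "the same" exactly when such an alignment leaves no residual.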