Eamon B. Barrett
Lockheed Missiles and Space Company
Publications
Featured research published by Eamon B. Barrett.
CVGIP: Image Understanding | 1991
Eamon B. Barrett; Paul Max Payton; Nils N. Haag; Michael H. Brill
This paper presents the results of a study of projective invariants and their applications in image analysis and object recognition. The familiar cross-ratio theorem, relating collinear points in the plane to their projections through a point onto a line, provides the starting point for the investigation. Methods are introduced in two dimensions for extending the cross-ratio theorem to relate noncollinear object points to their projections on multiple image lines. The development is then extended to three dimensions. It is well known that, for a set of points distributed in three dimensions, stereo pairs of images can be made and the relative distances of the points from the film plane computed from measurements of the disparity of the image points in the stereo pair. These computations require knowledge of the effective focal length and baseline of the imaging system. It is less obvious, but true, that invariant metric relationships among the object points can be derived from measured relationships among the image points. These relationships generalize to three dimensions the invariant cross-ratio of distances between points on a line: in three dimensions the invariants are cross-ratios of areas and volumes defined by the object points. These invariant relationships, which are independent of the parameters of the imaging system, are derived and demonstrated with examples.
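To make the starting point concrete, here is a minimal numerical check of the cross-ratio's projective invariance (an illustration added for this listing, not material from the paper; the function names and parameter values are made up). Four collinear points are given by scalar coordinates along the line, and any nondegenerate 1D projective map leaves their cross-ratio unchanged:

```python
import numpy as np

def cross_ratio(a, b, c, d):
    """Cross-ratio of four collinear points given by scalar coordinates."""
    return ((a - c) * (b - d)) / ((a - d) * (b - c))

def project(x, alpha=2.0, beta=1.0, gamma=0.5, delta=3.0):
    """A 1D projective map x -> (alpha*x + beta) / (gamma*x + delta).
    Coefficients are arbitrary, chosen so alpha*delta - beta*gamma != 0."""
    return (alpha * x + beta) / (gamma * x + delta)

pts = np.array([0.0, 1.0, 2.5, 4.0])
print(cross_ratio(*pts))           # 1.25
print(cross_ratio(*project(pts)))  # 1.25 again, up to round-off
```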
Graphical Models / Graphical Models and Image Processing / Computer Vision, Graphics, and Image Processing | 1982
Michael H. Brill; Eamon B. Barrett
Extending the notion of anharmonic ratio in one dimension, the cross-ratio of volumes in N-space is shown to be a projective invariant. Projective coordinates in 2-space are then expressed as cross-ratios of areas. Whereas true projective invariants exist in N dimensions, quasi-projective properties such as the interlens line constructions are dimension dependent. These ideas are discussed in the context of dot-matrix multispectral picture processing, in which the illumination intensity is unconstrained over the picture.
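A sketch of the 2-space case (illustrative only; the index pattern below is one standard choice of invariant for five coplanar points, not necessarily the paper's): signed triangle areas are proportional to 3x3 determinants of homogeneous coordinates, and a ratio in which each point and each determinant appears equally often in numerator and denominator cancels all homography and scale factors.

```python
import numpy as np

rng = np.random.default_rng(0)

def det3(pts, i, j, k):
    """3x3 determinant of homogeneous point rows (proportional to a signed area)."""
    return np.linalg.det(np.stack([pts[i], pts[j], pts[k]]))

def area_cross_ratio(pts):
    """A projective invariant of five coplanar points: a cross-ratio of areas.
    Every point index appears twice (point 0) or once (points 1-4) on each side
    of the fraction, so the det(H) and per-point scale factors all cancel."""
    return (det3(pts, 0, 1, 4) * det3(pts, 0, 2, 3)) / \
           (det3(pts, 0, 1, 3) * det3(pts, 0, 2, 4))

pts = np.column_stack([rng.uniform(-1, 1, (5, 2)), np.ones(5)])  # homogeneous (x, y, 1)
H = rng.uniform(-1, 1, (3, 3))                                   # random plane homography
print(area_cross_ratio(pts), area_cross_ratio(pts @ H.T))        # equal up to round-off
```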
Proceedings of the Second Joint European-US Workshop on Applications of Invariance in Computer Vision | 1993
Eamon B. Barrett; Gregory O. Gheen; Paul Max Payton
A uniform algebraic procedure is presented for deriving both epipolar geometry and three-dimensional object structure from general stereo imagery. The procedure assumes central-projection cameras of unknown interior and exterior orientation. The ability to determine corresponding points in the stereo images is assumed, but no prior knowledge of the scene is required. Epipolar geometry and the fundamental matrix are derived by algebraic elimination of the object variables from the imaging equations. This provides a transfer procedure to any other perspective, as long as eight or more corresponding points can be identified in the new perspective. Next, invariant coordinates of the scene points are derived by algebraic elimination of the camera parameters from the imaging equations. Identical coordinates are obtained from any stereo images of non-occluding scene points, as long as the same set of five corresponding points can be identified in both stereo pairs. The procedure extends methods utilizing cross-ratios of determinants and cyclopean vectors, presented in earlier work. A technique for reconstructing the three-dimensional object from the invariant coordinates is also given.
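For the flavor of the linear elimination, here is the now-conventional eight-point computation of the fundamental matrix (a later textbook formulation offered for context, not a transcription of the paper's procedure; numpy assumed, and the usual coordinate normalization for conditioning is omitted for brevity):

```python
import numpy as np

def fundamental_8pt(x1, x2):
    """Linear eight-point estimate of F satisfying x2^T F x1 = 0.
    x1, x2: (N, 2) arrays of corresponding pixel coordinates, N >= 8."""
    u1, v1 = x1[:, 0], x1[:, 1]
    u2, v2 = x2[:, 0], x2[:, 1]
    # Each correspondence yields one linear equation in the 9 entries of F.
    A = np.column_stack([u2*u1, u2*v1, u2, v2*u1, v2*v1, v2,
                         u1, v1, np.ones(len(u1))])
    _, _, Vt = np.linalg.svd(A)
    F = Vt[-1].reshape(3, 3)        # null vector of A, row-major entries of F
    # Enforce rank 2 (F maps points to epipolar lines, so det F = 0).
    U, S, Vt = np.linalg.svd(F)
    S[2] = 0.0
    return U @ np.diag(S) @ Vt
```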
Computer Vision and Pattern Recognition | 1992
Eamon B. Barrett; Michael H. Brill; Nils N. Haag; Paul M. Payton
For several useful tasks in photogrammetry and in model-based vision, noniterative methods that require only the inversion of systems of linear equations are developed. The methods are based on the theory of projective invariants. The tasks addressed are resection, intersection, and transfer, or model matching (with or without ground control points). The following kinds of transfer are examined: (a) coplanar object points (transfer to image 2 done using four reference points in image 1); (b) stereo camera system (transfer to stereo camera pair 2 done using four reference points in stereo pair 1); (c) general multicamera configuration (transfer of a ninth point to image 3 done using eight tie points in images 1 and 2).
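As an example of what "only the inversion of systems of linear equations" can look like for resection, here is a minimal direct-linear-transform sketch (a generic formulation given for illustration, not the authors' specific construction; it recovers the 3x4 projection matrix from six or more object-to-image correspondences):

```python
import numpy as np

def resection_dlt(X, x):
    """Linear camera resection: solve x ~ P X for the 3x4 projection matrix P.
    X: (N, 3) object points, x: (N, 2) image points, N >= 6."""
    rows = []
    for (Xw, Yw, Zw), (u, v) in zip(X, x):
        Xh = np.array([Xw, Yw, Zw, 1.0])
        # Two equations per correspondence, linear in the 12 entries of P.
        rows.append([*Xh, 0, 0, 0, 0, *(-u * Xh)])
        rows.append([0, 0, 0, 0, *Xh, *(-v * Xh)])
    _, _, Vt = np.linalg.svd(np.asarray(rows))
    return Vt[-1].reshape(3, 4)   # null vector of the stacked system
```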
International Journal of Imaging Systems and Technology | 1990
Eamon B. Barrett; Paul Max Payton; Michael H. Brill; Nils N. Haag
The objective of this work is to develop automated techniques for recognizing the same objects in images that differ in scale, tilt, and rotation. Such perspective transformations of images are produced when aerial images of the same scene are taken from different vantage points. The algebraic methods developed previously do not utilize the intensity values of the images, i.e., their pixel gray levels. Since image features essential for object recognition, such as edges and local image textures, may be described in terms of derivatives and integrals of the image intensity, it is necessary to investigate whether certain differential and integral operators applied to different perspective views of the same object are also invariant under the perspective transformation. We proceed to derive new differential operators and their corresponding integral invariants for curves and planar objects. We introduce a variant form of Fourier expansion specially adapted to the projective transformation. Extensions to three dimensions are discussed, as well as applications to other image formation models such as synthetic aperture radar (SAR). These results are steps toward a computational model for perspective‐independent object recognition.
Applied Imagery Pattern Recognition Workshop | 2009
J. Brandon Laflen; Christopher R. Greco; Glen William Brooksby; Eamon B. Barrett
We present an evaluation of the performance of moving object super-resolution (MOSR) through objective image quality metrics. MOSR systems require detection, tracking, and local sub-pixel registration of objects of interest prior to super-resolution. Nevertheless, MOSR can provide additional information otherwise undetected in raw video. We measure the extent of this benefit through the following objective image quality metrics: (1) Modulation Transfer Function (MTF), (2) Subjective Quality Factor (SQF), (3) Image Quality from the Natural Scene (MITRE IQM), and (4) minimum resolvable Rayleigh distance (RD). We also study the impact of non-ideal factors, such as image noise, frame-to-frame jitter, and object rotation, upon this performance. To study these factors, we generated controlled sequences of synthetic images of targets moving against a random field. The targets exemplified aspects of the objective metrics, containing either horizontal, vertical, or circular sinusoidal gratings, or a field of impulses separated by varying distances. High-resolution sequences were rendered and then appropriately filtered, assuming a circular aperture and a square, filled collector, prior to decimation. A fully implemented MOSR system was used to generate super-resolved images of the moving targets. The MTF, SQF, IQM, and RD measures were acquired from each of the high, low, and super-resolved image sequences, and indicate the objective benefit of super-resolution. To contrast with MOSR, the low-resolution sequences were also up-sampled in the Fourier domain, and the objective measures were collected for these Fourier up-sampled sequences as well. Our study consisted of over 800 different sequences, representing various combinations of non-ideal factors.
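The render-then-degrade step can be sketched in a few lines (a toy model with made-up parameters, not the study's actual simulator): an ideal circular-aperture low-pass in the Fourier domain, box integration standing in for the filled square collector, then decimation.

```python
import numpy as np

def degrade(hr, factor=4, cutoff=0.5):
    """Toy degradation: circular-aperture low-pass in the Fourier domain,
    box integration over the detector footprint, then decimation by `factor`."""
    H, W = hr.shape
    fy = np.fft.fftfreq(H)[:, None]
    fx = np.fft.fftfreq(W)[None, :]
    lowpass = (np.hypot(fx, fy) <= cutoff / factor).astype(float)  # ideal circular aperture
    blurred = np.fft.ifft2(np.fft.fft2(hr) * lowpass).real
    # Averaging factor x factor blocks models the square, filled collector.
    return blurred.reshape(H // factor, factor, W // factor, factor).mean(axis=(1, 3))

# Example target: a horizontal sinusoidal grating like those used in the study.
y = np.arange(256)[:, None]
hr = 0.5 + 0.5 * np.sin(2 * np.pi * y / 16.0) * np.ones((256, 256))
lr = degrade(hr)   # 64 x 64 low-resolution frame
```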
SPIE's International Symposium on Optical Science, Engineering, and Instrumentation | 1998
Paul Max Payton; Eamon B. Barrett; Wolfgang Kober; John K. Thomas; Steven E. Johnson
We describe a geometric model of high-resolution radar (HRR), in which objects imaged by the sensor are assumed to consist of a collection of isotropic scattering centers distributed in three dimensions. Three-, four-, five-, and six-point pure HRR invariant quantities for non-coplanar reflecting centers are presented. New work showing invariants that combine HRR and SAR measurements is then presented. All these techniques require matching corresponding features in multiple HRR and/or SAR views. These features are represented using analytic scattering models. Multiple features within the same HRR resolution cell can be individually detected and separated using interference-suppression filters. These features can then be individually tracked to maintain correspondence as the object pose changes. We validate our HRR/SAR invariants using the XPATCH simulation system. Finally, a view-based method for 3D model reconstruction is developed and demonstrated.
IS&T/SPIE's Symposium on Electronic Imaging: Science & Technology | 1995
Eamon B. Barrett; Gregory O. Gheen; Paul Max Payton
Invariant methods for object representation and model matching develop relationships among object and image features that are independent of the quantitative values of the camera parameters and object orientation, hence the term invariant. Three-dimensional models of objects or scenes can be reconstructed and transferred to new images, given a minimum of two reference images and a sufficient number of corresponding points in the images. By using multiple reference images, redundancy can be exploited to increase the robustness of the procedure to pixel measurement errors and systematic errors (i.e., discrepancies in the camera model). We present a general method for deriving invariant relationships based on two or more images. Simulations of model transfer and reconstruction demonstrate the positive effect of additional reference images on the robustness of invariant procedures. Pixel measurement error is simulated by adding random noise to the coordinate values of the features in the reference images.
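A toy numerical check of the redundancy claim (illustrative only; the invariant relationships reduce to overdetermined linear systems, so a generic least-squares problem with synthetic coordinate noise stands in for the actual transfer equations):

```python
import numpy as np

rng = np.random.default_rng(1)

# More noisy "reference" equations shrink the parameter error, roughly as 1/sqrt(n).
x_true = np.array([1.0, -2.0, 0.5])
for n in (6, 12, 48):
    A = rng.normal(size=(n, 3))
    b = A @ x_true + rng.normal(scale=0.1, size=n)   # pixel-noise stand-in
    x_hat, *_ = np.linalg.lstsq(A, b, rcond=None)
    print(n, np.linalg.norm(x_hat - x_true))         # error decreases with n
```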
Applications of Digital Image Processing XV | 1993
Eamon B. Barrett; Paul Max Payton
In this paper we describe several geometry problems in photogrammetry and machine vision; the geometric methods of projective invariants that we apply to these problems; and some new results and current areas of investigation involving geometric invariants for object structures and non-pinhole-camera imaging systems.
Applications of Digital Image Processing XIII | 1990
Eamon B. Barrett; Paul Max Payton; Michael H. Brill
The objective of this work is to develop automated techniques for recognizing the same objects in images that differ in scale, tilt, and rotation. Such perspective transformations of images are produced when aerial images of the same scene are taken from different vantage points. In previously reported work we have identified methods for deriving algebraic projective invariants under central projections. These methods generalize the familiar cross-ratio theorems for single images of finite sets of points on the line and in the plane. The algebraic methods do not utilize the intensity values of the images. Since image features essential for object recognition may be described in terms of derivatives and integrals of the image intensity, it is necessary to investigate whether certain differential and integral operators applied to different perspective views of the same object are also invariant under the perspective transformation. We proceed to derive new differential operators and their corresponding integral invariants for curves and planar objects. Extensions to other image formation models, such as synthetic aperture radar (SAR), are discussed. These results are steps toward a computational model for perspective-independent object recognition.