Paul Max Payton
Lockheed Missiles and Space Company
Publication
Featured research published by Paul Max Payton.
CVGIP: Image Understanding | 1991
Eamon B. Barrett; Paul Max Payton; Nils N. Haag; Michael H. Brill
This paper presents the results of a study of projective invariants and their applications in image analysis and object recognition. The familiar cross-ratio theorem, relating collinear points in the plane to the projections through a point onto a line, provides a starting point for their investigation. Methods are introduced in two dimensions for extending the cross-ratio theorem to relate noncollinear object points to their projections on multiple image lines. The development is further extended to three dimensions. It is well known that, for a set of points distributed in three dimensions, stereo pairs of images can be made and relative distances of the points from the film plane computed from measurements of the disparity of the image points in the stereo pair. These computations require knowledge of the effective focal length and baseline of the imaging system. It is less obvious, but true, that invariant metric relationships among the object points can be derived from measured relationships among the image points. These relationships are a generalization into three dimensions of the invariant cross-ratio of distances between points on a line. In three dimensions the invariants are cross-ratios of areas and volumes defined by the object points. These invariant relationships, which are independent of the parameters of the imaging system, are derived and demonstrated with examples.
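To make the cross-ratio theorem concrete, the following minimal sketch (our own, with made-up point values, not code from the paper) checks numerically that the cross-ratio of four collinear points survives an arbitrary projective map of the line:

```python
import numpy as np

def cross_ratio(a, b, c, d):
    """Cross-ratio (a, b; c, d) of four collinear points, given as scalar
    coordinates along their common line."""
    return ((a - c) * (b - d)) / ((a - d) * (b - c))

def project_line(x, h):
    """1D projective map x -> (h00 x + h01) / (h10 x + h11)."""
    return (h[0, 0] * x + h[0, 1]) / (h[1, 0] * x + h[1, 1])

rng = np.random.default_rng(0)
points = np.array([0.0, 1.0, 2.5, 4.0])   # four collinear points (hypothetical)
h = rng.normal(size=(2, 2))               # random projective map of the line
images = project_line(points, h)

print(cross_ratio(*points))   # 1.25
print(cross_ratio(*images))   # 1.25 again: invariant under the projection
```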
Proceedings of the Second Joint European - US Workshop on Applications of Invariance in Computer Vision | 1993
Eamon B. Barrett; Gregory O. Gheen; Paul Max Payton
A uniform algebraic procedure is presented for deriving both epipolar geometry and three-dimensional object structure from general stereo imagery. The procedure assumes central-projection cameras of unknown interior and exterior orientations. The ability to determine corresponding points in the stereo images is assumed, but no prior knowledge of the scene is required. Epipolar geometry and the fundamental matrix are derived by algebraic elimination of the object variables from the imaging equations. This provides a transfer procedure to any other perspective, as long as eight or more corresponding points can be identified in the new perspective. Next, invariant coordinates of the scene points are derived by algebraic elimination of the camera parameters from the imaging equations. Identical coordinates are obtained from any stereo images of non-occluding scene points, as long as the same set of five corresponding points can be identified in both stereo pairs. The procedure extends methods utilizing the cross-ratios of determinants and cyclopean vectors presented in earlier work. A technique for reconstructing the three-dimensional object from the invariant coordinates is also given.
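As an illustration of the first elimination step, the sketch below estimates the fundamental matrix from eight or more correspondences using the classic linear eight-point algorithm. This is a standard construction consistent with the abstract, not the paper's own derivation; the function name and interface are ours.

```python
import numpy as np

def fundamental_from_points(x1, x2):
    """Estimate F (3x3, rank 2) from corresponding pixel coordinates
    x1, x2: (N, 2) arrays with N >= 8, so that x2_h^T F x1_h ~ 0."""
    hom = lambda x: np.hstack([x, np.ones((len(x), 1))])
    a, b = hom(np.asarray(x1, float)), hom(np.asarray(x2, float))
    # Each correspondence yields one linear equation in the 9 entries of F.
    A = np.stack([np.kron(bi, ai) for ai, bi in zip(a, b)])
    _, _, vt = np.linalg.svd(A)
    F = vt[-1].reshape(3, 3)
    # Enforce rank 2 so that all epipolar lines pass through the epipoles.
    u, s, vt = np.linalg.svd(F)
    return u @ np.diag([s[0], s[1], 0.0]) @ vt
```

In practice the image coordinates are usually normalized (Hartley's scheme) before forming the linear system, which greatly improves numerical conditioning.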
International Journal of Imaging Systems and Technology | 1990
Eamon B. Barrett; Paul Max Payton; Michael H. Brill; Nils N. Haag
The objective of this work is to develop automated techniques for recognizing the same objects in images that differ in scale, tilt, and rotation. Such perspective transformations of images are produced when aerial images of the same scene are taken from different vantage points. The algebraic methods developed previously do not utilize the intensity values of the images, i.e., their pixel gray levels. Since image features essential for object recognition, such as edges and local image textures, may be described in terms of derivatives and integrals of the image intensity, it is necessary to investigate whether certain differential and integral operators applied to different perspective views of the same object are also invariant under the perspective transformation. We proceed to derive new differential operators and their corresponding integral invariants for curves and planar objects. We introduce a variant form of Fourier expansion specially adapted to the projective transformation. Extensions to three dimensions are discussed, as well as applications to other image formation models such as synthetic aperture radar (SAR). These results are steps toward a computational model for perspective-independent object recognition.
SPIE's International Symposium on Optical Science, Engineering, and Instrumentation | 1998
Paul Max Payton; Eamon B. Barrett; Wolfgang Kober; John K. Thomas; Steven E. Johnson
We describe a geometric model of high-resolution radar (HRR), where objects being imaged by the sensor are assumed to consist of a collection of isotropic scattering centers distributed in three dimensions. Three-, four-, five-, and six-point pure HRR invariant quantities for non-coplanar reflecting centers are presented. New work showing invariants that combine HRR and SAR measurements is then presented. All these techniques require matching corresponding features in multiple HRR and/or SAR views. These features are represented using analytic scattering models. Multiple features within the same HRR resolution cell can be individually detected and separated using interference-suppression filters. These features can then be individually tracked to maintain correspondence as the object pose changes. We validate our HRR/SAR invariants using the XPATCH simulation system. Finally, a view-based method for 3D model reconstruction is developed and demonstrated.
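The far-field version of this geometric model is easy to state in code: each isotropic scattering center contributes a return at its projection onto the radar line of sight. The sketch below is our paraphrase of that measurement model (names, values, and the far-field assumption are ours), not the paper's implementation:

```python
import numpy as np

def hrr_ranges(scatterers, line_of_sight):
    """Far-field HRR model: the relative range of each 3D scattering center
    is its dot product with the unit line-of-sight vector.
    scatterers: (N, 3) array; line_of_sight: (3,) vector."""
    u = np.asarray(line_of_sight, float)
    u = u / np.linalg.norm(u)
    return np.asarray(scatterers, float) @ u

rng = np.random.default_rng(2)
centers = rng.uniform(-1.0, 1.0, size=(5, 3))   # five scattering centers
for az in (0.0, 0.3, 0.6):                       # three viewing azimuths (rad)
    los = np.array([np.cos(az), np.sin(az), 0.1])
    print(np.sort(hrr_ranges(centers, los)))     # one range profile per view
```

As we read the abstract, the multi-view HRR invariants are built from such range projections collected across several viewing directions.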
IS&T/SPIE's Symposium on Electronic Imaging: Science & Technology | 1995
Eamon B. Barrett; Gregory O. Gheen; Paul Max Payton
Invariant methods for object representation and model matching develop relationships among object and image features that are independent of the quantitative values of the camera parameters or object orientation, hence the term invariant. Three-dimensional models of objects or scenes can be reconstructed and transferred to new images, given a minimum of two reference images and a sufficient number of corresponding points in the images. By using multiple reference images, redundancy can be exploited to increase the robustness of the procedure to pixel measurement errors and systematic errors (i.e., discrepancies in the camera model). We present a general method for deriving invariant relationships based on two or more images. Simulations of model transfer and reconstruction demonstrate the positive effect of additional reference images on the robustness of invariant procedures. Pixel measurement error is simulated by adding random noise to coordinate values of the features in the reference images.
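The following simplified numerical experiment (our own sketch, not the paper's invariant-based procedure; it uses known synthetic cameras purely to generate data) illustrates the redundancy claim: with noise added to the image measurements, linear triangulation from four views typically recovers the points more accurately than from two:

```python
import numpy as np

rng = np.random.default_rng(3)

def project(P, X):
    """Project homogeneous 3D points X (N, 4) through camera P (3, 4)."""
    x = X @ P.T
    return x[:, :2] / x[:, 2:3]

def triangulate(Ps, xs):
    """Linear (DLT) triangulation of one point from its images xs in Ps."""
    rows = []
    for P, (u, v) in zip(Ps, xs):
        rows.append(u * P[2] - P[0])
        rows.append(v * P[2] - P[1])
    _, _, vt = np.linalg.svd(np.stack(rows))
    X = vt[-1]
    return X[:3] / X[3]

X = np.hstack([rng.uniform(-1, 1, (20, 3)), np.ones((20, 1))])  # scene points
Ps = [np.hstack([np.eye(3), rng.normal(size=(3, 1)) + [[0.0], [0.0], [5.0]]])
      for _ in range(4)]                                        # four cameras
xs_all = [project(P, X) + rng.normal(scale=0.01, size=(20, 2)) for P in Ps]

for k in (2, 4):
    rec = np.array([triangulate(Ps[:k], [xs[i] for xs in xs_all[:k]])
                    for i in range(20)])
    print(k, "views:", np.linalg.norm(rec - X[:, :3], axis=1).mean())
```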
Applications of Digital Image Processing XV | 1993
Eamon B. Barrett; Paul Max Payton
In this paper we describe several geometry problems in photogrammetry and machine vision, the geometric methods of projective invariants that we apply to these problems, and some new results and current areas of investigation involving geometric invariants for object structures and non-pinhole-camera imaging systems.
Applications of Digital Image Processing XIII | 1990
Eamon B. Barrett; Paul Max Payton; Michael H. Brill
The objective of this work is to develop automated techniques for recognizing the same objects in images that differ in scale, tilt, and rotation. Such perspective transformations of images are produced when aerial images of the same scene are taken from different vantage points. In previously reported work we have identified methods for deriving algebraic projective invariants under central projections. These methods generalize the familiar cross-ratio theorems for single images of finite sets of points on the line and in the plane. The algebraic methods do not utilize the intensity values of the images. Since image features essential for object recognition may be described in terms of derivatives and integrals of the image intensity, it is necessary to investigate whether certain differential and integral operators applied to different perspective views of the same object are also invariant under the perspective transformation. We proceed to derive new differential operators and their corresponding integral invariants for curves and planar objects. Extensions to other image formation models such as synthetic aperture radar (SAR) are discussed. These results are steps toward a computational model for perspective-independent object recognition.
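One of the planar cross-ratio generalizations mentioned here has a compact numerical form: for five coplanar points, no three collinear, a suitable ratio of triangle-area determinants is unchanged by any planar projective transformation. A minimal check follows (our own construction with random data, not the paper's code):

```python
import numpy as np

def tri_det(p, i, j, k):
    """Determinant of three homogeneous points (rows normalized to w = 1),
    proportional to the signed area of the triangle they span."""
    return np.linalg.det(np.stack([p[i], p[j], p[k]]))

def five_point_invariant(p):
    """A projective invariant of five coplanar points (no three collinear):
    a cross-ratio of triangle areas. p: (5, 3) homogeneous coordinates."""
    return (tri_det(p, 3, 2, 0) * tri_det(p, 4, 1, 0)) / \
           (tri_det(p, 3, 1, 0) * tri_det(p, 4, 2, 0))

rng = np.random.default_rng(1)
pts = np.hstack([rng.uniform(-1, 1, (5, 2)), np.ones((5, 1))])
H = rng.normal(size=(3, 3))          # random planar projective transformation
mapped = pts @ H.T
mapped /= mapped[:, 2:3]             # renormalize to w = 1

print(five_point_invariant(pts))
print(five_point_invariant(mapped))  # same value: projectively invariant
```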
Proceedings of SPIE | 1998
Eamon B. Barrett; Paul Max Payton; Peter J. Marra; Michael H. Brill
Suppose we have two or more images of a 3D scene. From these views alone, we would like to infer the (x,y,z) coordinates of the object points in the scene (to reconstruct the scene). The most general standard methods require either prior knowledge of the camera models (intersection methods) or prior knowledge of the (x,y,z) coordinates of some of the object points, from which the camera models can be inferred (resection, followed by intersection). When neither alternative is available, a special technique called relative orientation enables a scale model of a scene to be reconstructed from two images, but only when the internal parameters of both cameras are identical. In this paper, we discuss alternatives to relative orientation that do not require knowledge of the internal parameters of the imaging systems. These techniques, which we call view-based relative reconstruction, determine the object-space coordinates up to a 3D projective transformation. The reconstructed points are then exemplars of a projective orbit of representations, chosen to reside in a particular representation called a canonical frame. Two strategies will be described for choosing this canonical frame: (1) projectively simplify the object model and the imaging equations; or (2) projectively simplify the camera model and the imaging equations. In each case, we solve the resulting simplified system of imaging equations to retrieve exemplar points. Both strategies are successful in synthetic imagery, but may be differently suited to various real-world applications.
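The projective ambiguity underlying view-based relative reconstruction can be verified in a few lines: applying any 3D projective transformation H to the scene while compensating the cameras by H^{-1} leaves every image unchanged, so the images alone determine the scene only up to such an H. A sketch with made-up cameras (ours, not the paper's algorithm):

```python
import numpy as np

rng = np.random.default_rng(4)

def image(P, X):
    """Project homogeneous points X (N, 4) through the 3x4 camera P."""
    x = X @ P.T
    return x[:, :2] / x[:, 2:3]

X = np.hstack([rng.uniform(-1, 1, (6, 3)), np.ones((6, 1))])  # scene points
P = np.hstack([np.eye(3), [[0.0], [0.0], [5.0]]])             # one camera
H = rng.normal(size=(4, 4))                                   # 3D projective map

X2 = X @ H.T                 # transformed scene
P2 = P @ np.linalg.inv(H)    # compensated camera
print(np.allclose(image(P, X), image(P2, X2)))                # True
```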
Remote Sensing for Geography, Geology, Land Planning, and Cultural Heritage | 1996
Eamon B. Barrett; Paul Max Payton; Peter J. Marra
The reference data consists of two or more central-projection images of a three-dimensional distribution of object points, i.e., a 3D scene. The positions and orientations of the cameras which generated the reference images are unknown, as are the coordinates of all the object points. We derive and demonstrate invariant methods for synthesizing nadir views of the object points, i.e., 2D maps of the 3D scene. The techniques we demonstrate depart from standard methods of resection and intersection to recover the camera geometry and reconstruct object points from the reference images, followed by back-projection to create the nadir view. Our approach will be to perform the image measurements and computations required to estimate the image invariant relationships linking the reference images to one another and to the nadir view. The empirically estimated invariant relationships can thereafter be used to transfer conjugate points from the reference images to their synthesized conjugates in the nadir view. Computation of the object model -- the digital elevation model (DEM) -- is not required in this approach. The method also differs from interpolation in that the 3D structure of the scene is preserved, including the effects of partial occlusion. Algorithms are validated, initially with synthetic CAD models and subsequently with real data consisting of uncontrolled aerial imagery and maps with occasional missing or inaccurately delineated features.
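For the degenerate case of a perfectly flat scene, the transfer relationship reduces to a single planar homography, which makes the no-DEM idea easy to demonstrate; the method above handles true 3D structure. A minimal planar sketch (our own, with illustrative coordinates):

```python
import numpy as np

def homography_dlt(src, dst):
    """Estimate H (3x3) with dst ~ H src from N >= 4 point pairs (N, 2)."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        rows.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    _, _, vt = np.linalg.svd(np.array(rows, float))
    return vt[-1].reshape(3, 3)

def transfer(H, pts):
    """Map (N, 2) points through the homography H."""
    p = np.hstack([pts, np.ones((len(pts), 1))]) @ H.T
    return p[:, :2] / p[:, 2:3]

src = np.array([[0, 0], [1, 0], [1, 1], [0, 1], [0.3, 0.7]], float)
H_true = np.array([[1.1, 0.2, 3.0], [-0.1, 0.9, 1.0], [0.001, 0.002, 1.0]])
dst = transfer(H_true, src)

H = homography_dlt(src[:4], dst[:4])               # fit from four conjugate points
print(np.allclose(transfer(H, src[4:]), dst[4:]))  # True: fifth point transfers
```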
Proceedings of SPIE | 1993
Eamon B. Barrett; Paul Max Payton
Invariant relationships have been derived from the mathematical models of image formation for several types of sensors: from the collinearity equations of pinhole-camera systems and, separately, from the condition equations of strip-mapped SAR. In the present paper, we extend these results by combining the collinearity and condition equations of photographic and SAR systems. The resulting invariants enable us to transfer points and three-dimensional models from multiple photographic images to SAR images and vice versa. Geometric integrity of the different imaging systems is preserved by the technique. The method will facilitate synergistic, model-based interpretation of different sensor types.
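For reference, the strip-map SAR condition equations take a particularly simple form for a straight, constant-velocity flight path: a point images at the along-track time of zero Doppler and at its slant range there. The sketch below states that model in our own notation (the trajectory values are illustrative, not from the paper):

```python
import numpy as np

def sar_image_coords(X, S0, V):
    """Strip-map SAR conditions for a straight flight path S(t) = S0 + t V.
    Zero-Doppler: (X - S(t)) . V = 0 fixes the along-track time t; the range
    coordinate is |X - S(t)|. Returns (along-track distance, slant range)."""
    X, S0, V = (np.asarray(a, float) for a in (X, S0, V))
    t = np.dot(X - S0, V) / np.dot(V, V)   # solve the zero-Doppler condition
    S = S0 + t * V
    return t * np.linalg.norm(V), np.linalg.norm(X - S)

print(sar_image_coords([500.0, 200.0, 0.0],       # ground point
                       [0.0, 0.0, 3000.0],        # sensor position at t = 0
                       [100.0, 0.0, 0.0]))        # sensor velocity
# -> (500.0, 3006.66...) : along-track and slant-range image coordinates
```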