Publications


Featured research published by Andrew Edie Johnson.


IEEE Transactions on Pattern Analysis and Machine Intelligence | 1999

Using spin images for efficient object recognition in cluttered 3D scenes

Andrew Edie Johnson; Martial Hebert

We present a 3D shape-based object recognition system for simultaneous recognition of multiple objects in scenes containing clutter and occlusion. Recognition is based on matching surfaces by matching points using the spin image representation. The spin image is a data-level shape descriptor that is used to match surfaces represented as surface meshes. We present a compression scheme for spin images that results in efficient multiple-object recognition, which we verify with results showing the simultaneous recognition of multiple objects from a library of 20 models. Furthermore, we demonstrate the robust performance of recognition in the presence of clutter and occlusion through analysis of recognition trials on 100 scenes.
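
As a concrete illustration of the representation described above, here is a minimal NumPy sketch of spin-image construction for a single oriented point; the bin size and image width are illustrative choices, not the paper's parameters.

```python
# A minimal spin-image sketch, assuming a NumPy point cloud.
# bin_size and image_width are illustrative, not the paper's values.
import numpy as np

def spin_image(p, n, points, bin_size=0.01, image_width=16):
    """Accumulate a 2D (alpha, beta) histogram around oriented point (p, n)."""
    d = points - p                       # vectors from the basis point
    beta = d @ n                         # signed distance along the normal
    alpha = np.sqrt(np.maximum(np.sum(d * d, axis=1) - beta**2, 0.0))
    # Map (alpha, beta) to pixel indices; beta is recentered so the
    # basis point sits in the middle row of the image.
    i = np.floor(alpha / bin_size).astype(int)
    j = np.floor(beta / bin_size).astype(int) + image_width // 2
    img = np.zeros((image_width, image_width))
    keep = (i >= 0) & (i < image_width) & (j >= 0) & (j < image_width)
    np.add.at(img, (j[keep], i[keep]), 1.0)  # vote into the histogram
    return img
```

Matching then proceeds by correlating such images between model and scene points.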


Image and Vision Computing | 1999

Registration and integration of textured 3D data

Andrew Edie Johnson; Sing Bing Kang

In general, multiple views are required to create a complete 3D model of an object or of a multi-roomed indoor scene. In this work, we address the problem of merging multiple textured 3D data sets, each of which corresponds to a different view of a scene. There are two steps to the merging process: registration and integration. To register, or align, data sets we use a modified version of the iterative closest point (ICP) algorithm; our version, which we call color ICP, considers not only 3D information, but color as well. We show that the use of color decreases registration error significantly when using omnidirectional stereo data sets. Once the 3D data sets have been registered, we integrate them to produce a seamless, composite 3D textured model. Our approach to integration uses a 3D occupancy grid to represent likelihood of spatial occupancy through voting. In addition to occupancy information, we store a surface normal in each voxel of the occupancy grid. The surface normals are used to robustly extract a surface from the occupancy grid; on that surface, we blend textures from multiple views.
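
The correspondence step of color ICP can be sketched as a nearest-neighbor search in a joint position-color space. This is a minimal sketch, not the paper's exact formulation: the weight lambda_c balancing geometry against color is an assumed illustrative parameter, and SciPy's k-d tree is used for the search.

```python
# A minimal color-ICP correspondence sketch: nearest neighbors in a
# joint (position, weighted color) space. lambda_c is illustrative.
import numpy as np
from scipy.spatial import cKDTree

def color_icp_correspondences(src_xyz, src_rgb, dst_xyz, dst_rgb, lambda_c=0.1):
    """Return, for each source point, the index of its closest
    destination point under a combined 3D + color metric."""
    src = np.hstack([src_xyz, lambda_c * src_rgb])  # 6D augmented points
    dst = np.hstack([dst_xyz, lambda_c * dst_rgb])
    _, idx = cKDTree(dst).query(src)                # joint-space NN search
    return idx
```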


IEEE Transactions on Robotics | 2009

Vision-Aided Inertial Navigation for Spacecraft Entry, Descent, and Landing

Anastasios I. Mourikis; Nikolas Trawny; Stergios I. Roumeliotis; Andrew Edie Johnson; Adnan Ansar; Larry H. Matthies

In this paper, we present the vision-aided inertial navigation (VISINAV) algorithm that enables precision planetary landing. The vision front-end of the VISINAV system extracts 2-D-to-3-D correspondences between descent images and a surface map (mapped landmarks), as well as 2-D-to-2-D feature tracks through a sequence of descent images (opportunistic features). An extended Kalman filter (EKF) tightly integrates both types of visual feature observations with measurements from an inertial measurement unit. The filter computes accurate estimates of the lander's terrain-relative position, attitude, and velocity in a resource-adaptive and hence real-time-capable fashion. In addition to the technical analysis of the algorithm, the paper presents validation results from a sounding-rocket test flight, showing estimation errors of only 0.16 m/s for velocity and 6.4 m for position at touchdown. These results vastly improve on the current state of the art for terminal descent navigation without visual updates, and meet the requirements of future planetary exploration missions.
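
The measurement-update step at the heart of such a filter follows the standard EKF equations. The sketch below is the textbook form, with the measurement model h and its Jacobian H left as placeholders for the camera projection geometry that the paper derives in full.

```python
# A minimal, textbook EKF measurement update for fusing one visual
# observation; h and H stand in for the camera projection model.
import numpy as np

def ekf_update(x, P, z, h, H, R):
    """State x, covariance P, measurement z, model h(x) with
    Jacobian H, measurement noise covariance R."""
    y = z - h(x)                          # innovation
    S = H @ P @ H.T + R                   # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)        # Kalman gain
    x_new = x + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new
```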


International Conference on Robotics and Automation | 2002

Augmenting inertial navigation with image-based motion estimation

Stergios I. Roumeliotis; Andrew Edie Johnson; James F. Montgomery

Numerous upcoming NASA missions need to land safely and precisely on planetary bodies, so accurate and robust state estimation during the descent phase is necessary. Towards this end, we have developed an approach for improved state estimation by augmenting traditional inertial navigation techniques with image-based motion estimation (IBME). A Kalman filter that processes rotational velocity and linear acceleration measurements provided by an inertial measurement unit has been enhanced to accommodate relative pose measurements from the IBME. In addition to increased state estimation accuracy, IBME convergence time is reduced, and the robustness of the overall approach is improved. The methodology is described in detail, and experimental results with a 5-DOF gantry testbed are presented.
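
The inertial side of such a filter propagates pose and velocity by integrating IMU readings between visual updates. A first-order strapdown sketch, assuming a constant gravity vector and a simple Euler step, might look like this.

```python
# A minimal strapdown-propagation sketch: one Euler step integrating
# IMU rotational velocity and linear acceleration. Gravity g assumed.
import numpy as np

def propagate_imu(R, v, p, omega, accel, dt, g=np.array([0.0, 0.0, -9.81])):
    """R: body-to-world rotation, v: velocity, p: position,
    omega: body angular rate, accel: body specific force."""
    # First-order rotation update via the skew-symmetric rate matrix.
    wx, wy, wz = omega
    Omega = np.array([[0, -wz, wy], [wz, 0, -wx], [-wy, wx, 0]])
    R_new = R @ (np.eye(3) + Omega * dt)
    a_world = R @ accel + g               # rotate specific force, add gravity
    v_new = v + a_world * dt
    p_new = p + v * dt + 0.5 * a_world * dt**2
    return R_new, v_new, p_new
```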


International Conference on Recent Advances in 3-D Digital Imaging and Modeling (3DIM) | 1997

Surface registration by matching oriented points

Andrew Edie Johnson; Martial Hebert

For registration of 3-D free-form surfaces we have developed a representation which requires no knowledge of the transformation between views. The representation comprises descriptive images associated with oriented points on the surface of an object. Constructed using single point bases, these images are data level shape descriptions that are used for efficient matching of oriented points. Correlation of images is used to establish point correspondences between two views; from these correspondences a rigid transformation that aligns the views is calculated. The transformation is then refined and verified using a modified iterative closest point algorithm. To demonstrate the generality of our approach, we present results from multiple sensing domains.
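
Given the point correspondences established by image correlation, the aligning rigid transformation can be computed in closed form. Below is a minimal sketch using the standard SVD (Kabsch) solution; the paper then refines this estimate with a modified iterative closest point algorithm.

```python
# A minimal closed-form rigid alignment sketch (Kabsch/SVD) for
# matched point sets src and dst, with dst ~ R @ src + t.
import numpy as np

def rigid_transform(src, dst):
    """Least-squares rotation R and translation t aligning src to dst."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)         # cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                    # reflection-safe rotation
    t = cd - R @ cs
    return R, t
```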


Image and Vision Computing | 1998

Surface matching for object recognition in complex three-dimensional scenes

Andrew Edie Johnson; Martial Hebert

We present an approach to recognition of complex objects in cluttered three-dimensional (3D) scenes that does not require feature extraction or segmentation. Our object representation comprises descriptive images associated with oriented points on the surface of an object. Using a single point basis constructed from an oriented point, the position of other points on the surface of the object can be described by two parameters. The accumulation of these parameters for many points on the surface of the object results in an image at each oriented point. These images, localized descriptions of the global shape of the object, are invariant to rigid transformations. Through correlation of images, point correspondences between a model and scene data are established. Geometric consistency is used to group the correspondences from which plausible rigid transformations that align the model with the scene are calculated. The transformations are then refined and verified using a modified iterative closest point algorithm. The effectiveness of our representation comes from its ability to combine the descriptive nature of global object properties with the robustness to partial views and clutter of local shape descriptions. The wide applicability of our algorithm is demonstrated with results showing recognition of complex objects in cluttered scenes with occlusion.
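
The geometric-consistency grouping step exploits the fact that a rigid transformation preserves pairwise distances: two correspondences can belong to the same group only if the model-side and scene-side distances between them agree. A minimal greedy sketch, with an illustrative distance tolerance, is shown below.

```python
# A minimal geometric-consistency grouping sketch. Each correspondence
# is a (model_point, scene_point) pair; eps is an illustrative tolerance.
import numpy as np

def consistent(c1, c2, eps=0.01):
    """Rigid motion preserves distances between corresponding points."""
    (m1, s1), (m2, s2) = c1, c2
    return abs(np.linalg.norm(m1 - m2) - np.linalg.norm(s1 - s2)) < eps

def group_correspondences(corrs, eps=0.01):
    """Greedily grow groups of mutually consistent correspondences."""
    groups = []
    for c in corrs:
        for g in groups:
            if all(consistent(c, other, eps) for other in g):
                g.append(c)
                break
        else:
            groups.append([c])
    return groups
```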


Computer Vision and Pattern Recognition | 1997

Recognizing objects by matching oriented points

Andrew Edie Johnson; Martial Hebert

We present an approach to recognition of complex objects in cluttered 3-D scenes that does not require feature extraction or segmentation. Our object representation comprises descriptive images associated with each oriented point on the surface of an object. Using a single point basis constructed from an oriented point, the position of other points on the surface of the object can be described by two parameters. The accumulation of these parameters for many points on the surface of the object results in an image at each oriented point. These images, localized descriptions of the global shape of the object, are invariant to rigid transformations. Through correlation of images, point correspondences between a model and scene data are established and then grouped using geometric consistency. The effectiveness of our algorithm is demonstrated with results showing recognition of complex objects in cluttered scenes with occlusion.


International Journal of Computer Vision | 2007

Computer Vision on Mars

Larry H. Matthies; Mark W. Maimone; Andrew Edie Johnson; Yang Cheng; Reg G. Willson; Carlos Y. Villalpando; Steve B. Goldberg; Andres Huertas; Andrew Neil Stein; Anelia Angelova

Increasing the level of spacecraft autonomy is essential for broadening the reach of solar system exploration. Computer vision has played, and will continue to play, an important role in increasing the autonomy of both spacecraft and Earth-based robotic vehicles. This article addresses progress on computer vision for planetary rovers and landers and has four main parts. First, we review major milestones in the development of computer vision for robotic vehicles over the last four decades. Since research on applications for Earth and space has often been closely intertwined, the review includes elements of both. Second, we summarize the design and performance of computer vision algorithms used on Mars in the NASA/JPL Mars Exploration Rover (MER) mission, which was a major step forward in the use of computer vision in space. These algorithms performed stereo vision and visual odometry for rover navigation, and feature tracking for horizontal velocity estimation for the landers. Third, we summarize ongoing research to improve vision systems for planetary rovers, which includes various aspects of noise reduction, FPGA implementation, and vision-based slip perception. Finally, we briefly survey other opportunities for computer vision to impact rovers, landers, and orbiters in future solar system exploration missions.
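
One building block behind the stereo-vision algorithms surveyed here is the standard range-from-disparity relation for a calibrated, rectified stereo pair. In this sketch, the focal length and baseline are placeholders, not MER camera parameters.

```python
# A minimal stereo-range sketch: Z = f * B / d for a rectified pair.
# focal_px and baseline_m are illustrative placeholders.
import numpy as np

def depth_from_disparity(disparity_px, focal_px=500.0, baseline_m=0.3):
    """Depth in meters per pixel disparity; non-positive disparities
    (no match or point at infinity) map to inf."""
    disparity_px = np.asarray(disparity_px, dtype=float)
    return np.where(disparity_px > 0,
                    focal_px * baseline_m / disparity_px,
                    np.inf)
```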


International Conference on Recent Advances in 3-D Digital Imaging and Modeling (3DIM) | 1997

Registration and integration of textured 3-D data

Andrew Edie Johnson; Sing Bing Kang

In general, multiple views are required to create a complete 3-D model of an object or of a multi-roomed indoor scene. In this work, we address the problem of merging multiple textured 3-D data sets, each of which corresponds to a different view of a scene or object. There are two steps to the merging process: registration and integration. To register, or align, data sets we use a modified version of the iterative closest point (ICP) algorithm; our version, which we call color ICP, considers not only 3-D information, but color as well. We show that the use of color decreases registration error by an order of magnitude. Once the 3-D data sets have been registered, we integrate them to produce a seamless, composite 3-D textured model. Our approach to integration uses a 3-D occupancy grid to represent likelihood of spatial occupancy through voting. In addition to occupancy information, we store a surface normal in each voxel of the occupancy grid. The surface normals are used to robustly extract a surface from the occupancy grid; on that surface, we blend textures from multiple views.
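
The voting scheme can be sketched as follows: each registered point increments the occupancy count of its voxel and accumulates its surface normal there. The grid origin, resolution, and extent below are illustrative parameters, not values from the paper.

```python
# A minimal occupancy-grid voting sketch: per-voxel occupancy counts
# plus summed normals (normalized later to a mean surface normal).
import numpy as np

def vote_into_grid(points, normals, origin, voxel_size, dims):
    """points, normals: (N, 3) arrays; dims: (nx, ny, nz) tuple."""
    votes = np.zeros(dims)
    normal_sum = np.zeros(dims + (3,))
    idx = np.floor((points - origin) / voxel_size).astype(int)
    ok = np.all((idx >= 0) & (idx < np.array(dims)), axis=1)
    for (i, j, k), n in zip(idx[ok], normals[ok]):
        votes[i, j, k] += 1.0             # occupancy evidence
        normal_sum[i, j, k] += n          # accumulated normal direction
    return votes, normal_sum
```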


Journal of Field Robotics | 2007

Vision-aided inertial navigation for pin-point landing using observations of mapped landmarks

Nikolas Trawny; Anastasios I. Mourikis; Stergios I. Roumeliotis; Andrew Edie Johnson; James F. Montgomery

In this paper we describe an extended Kalman filter algorithm for estimating the pose and velocity of a spacecraft during entry, descent, and landing. The proposed estimator combines measurements of rotational velocity and acceleration from an inertial measurement unit (IMU) with observations of a priori mapped landmarks, such as craters or other visual features, that exist on the surface of a planet. The tight coupling of inertial sensory information with visual cues results in accurate, robust state estimates available at a high bandwidth. The dimensions of the landing uncertainty ellipses achieved by the proposed algorithm are three orders of magnitude smaller than those possible when relying exclusively on IMU integration. Extensive experimental and simulation results are presented, which demonstrate the applicability of the algorithm on real-world data and analyze the dependence of its accuracy on several system design parameters.
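
A landing uncertainty ellipse of the kind reported here can be derived from the filter's horizontal position covariance. The sketch below uses an eigen-decomposition of a 2x2 covariance block; the 3-sigma scaling is an illustrative choice of confidence level.

```python
# A minimal sketch: ellipse axes and orientation from a 2x2
# horizontal-position covariance. n_sigma is illustrative.
import numpy as np

def uncertainty_ellipse(P_xy, n_sigma=3.0):
    """Return semi-axis lengths (minor, major) and the major-axis
    orientation in radians for covariance P_xy."""
    vals, vecs = np.linalg.eigh(P_xy)     # eigenvalues in ascending order
    axes = n_sigma * np.sqrt(vals)        # standard deviations along axes
    angle = np.arctan2(vecs[1, 1], vecs[0, 1])  # major-axis direction
    return axes, angle
```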

Collaboration


Dive into Andrew Edie Johnson's collaborations.

Top Co-Authors

Martial Hebert, Carnegie Mellon University
Yang Cheng, California Institute of Technology
James F. Montgomery, California Institute of Technology
Andres Huertas, California Institute of Technology
Mark W. Maimone, California Institute of Technology
Carlos Y. Villalpando, California Institute of Technology