
Publications


Featured research published by James F. Montgomery.


International Conference on Robotics and Automation | 2003

Visually guided landing of an unmanned aerial vehicle

Srikanth Saripalli; James F. Montgomery; Gaurav S. Sukhatme

We present the design and implementation of a real-time, vision-based landing algorithm for an autonomous helicopter. The landing algorithm is integrated with algorithms for visual acquisition of the target (a helipad) and navigation to the target, from an arbitrary initial position and orientation. We use vision for precise target detection and recognition, and a combination of vision and Global Positioning System for navigation. The helicopter updates its landing target parameters based on vision and uses an onboard behavior-based controller to follow a path to the landing site. We present significant results from flight trials in the field which demonstrate that our detection, recognition, and control algorithms are accurate, robust, and repeatable.
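
The abstract leaves the vision pipeline at a high level; this line of work describes helipad recognition by thresholding, segmentation, and moment invariants. Below is a minimal sketch of that recipe, assuming OpenCV; the function name, blob-size cutoff, and match threshold are illustrative rather than taken from the paper.

```python
# Hedged sketch: helipad detection by Otsu thresholding, contour
# segmentation, and Hu moment-invariant matching. Thresholds are
# illustrative, not the paper's values.
import cv2
import numpy as np

def find_helipad(frame_bgr, reference_hu, max_dist=0.2):
    """Return the contour whose log-Hu signature best matches the pad."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    _, binary = cv2.threshold(gray, 0, 255,
                              cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    best, best_dist = None, max_dist
    for c in contours:
        m = cv2.moments(c)
        if m["m00"] < 100:                    # reject tiny blobs
            continue
        hu = cv2.HuMoments(m).flatten()
        # log transform puts the seven invariants on comparable scales
        log_hu = -np.sign(hu) * np.log10(np.abs(hu) + 1e-30)
        dist = np.linalg.norm(log_hu - reference_hu)
        if dist < best_dist:
            best, best_dist = c, dist
    return best                               # None if nothing matched
```

The reference signature would be computed once, offline, from a canonical helipad image using the same log transform.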


International Conference on Robotics and Automation | 2002

Vision-based autonomous landing of an unmanned aerial vehicle

Srikanth Saripalli; James F. Montgomery; Gaurav S. Sukhatme

We present the design and implementation of a real-time, vision-based landing algorithm for an autonomous helicopter. The helicopter is required to navigate from an initial position to a final position in a partially known environment based on GPS and vision, locate a landing target (a helipad of a known shape) and land on it. We use vision for precise target detection and recognition. The helicopter updates its landing target parameters based on vision and uses an on-board behavior-based controller to follow a path to the landing site. We present results from flight trials in the field which demonstrate that our detection, recognition and control algorithms are accurate and repeatable.


International Conference on Robotics and Automation | 2002

Augmenting inertial navigation with image-based motion estimation

Stergios I. Roumeliotis; Andrew Edie Johnson; James F. Montgomery

Numerous upcoming NASA missions need to land safely and precisely on planetary bodies. Accurate and robust state estimation during the descent phase is necessary. Towards this end, we have developed an approach for improved state estimation by augmenting traditional inertial navigation techniques with image-based motion estimation (IBME). A Kalman filter that processes rotational velocity and linear acceleration measurements provided from an inertial measurement unit has been enhanced to accommodate relative pose measurements from the IBME. In addition to increased state estimation accuracy, IBME convergence time is reduced while robustness of the overall approach is improved. The methodology is described in detail and experimental results with a 5 DOF gantry testbed are presented.
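
As a rough illustration of how a relative pose measurement can enter an inertial filter, here is a one-dimensional toy version that keeps a cloned copy of the position at the previous image time, the standard device for relative measurements (stochastic cloning). The paper's filter is the full three-dimensional version with attitude; all names and noise values below are illustrative.

```python
# Toy 1-D Kalman filter: IMU acceleration drives propagation, and an
# image-based displacement measurement (relative to the cloned position
# at the last image) provides the correction. Illustrative sketch only.
import numpy as np

class InertialIbmeFilter:
    def __init__(self, q_accel=0.1, dt=0.01):
        self.dt = dt
        # state: [position, velocity, cloned position at last image]
        self.x = np.zeros(3)
        self.P = np.eye(3) * 1e-3
        self.q = q_accel                      # accelerometer noise level

    def propagate(self, accel):
        dt = self.dt
        F = np.array([[1.0, dt, 0.0],
                      [0.0, 1.0, 0.0],
                      [0.0, 0.0, 1.0]])       # the clone stays frozen
        G = np.array([0.5 * dt**2, dt, 0.0])  # how acceleration enters
        self.x = F @ self.x + G * accel
        self.P = F @ self.P @ F.T + np.outer(G, G) * self.q

    def update_relative_pose(self, dz, r_ibme=0.05):
        # IBME measures displacement since the last image: z = p - p_clone
        H = np.array([[1.0, 0.0, -1.0]])
        S = H @ self.P @ H.T + r_ibme
        K = (self.P @ H.T) / S
        self.x += (K * (dz - H @ self.x)).flatten()
        self.P = (np.eye(3) - K @ H) @ self.P
        # re-clone: the next IBME measurement is relative to this pose
        self.x[2] = self.x[0]
        self.P[2, :] = self.P[0, :]
        self.P[:, 2] = self.P[:, 0]
```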


Journal of Field Robotics | 2007

Vision-aided inertial navigation for pin-point landing using observations of mapped landmarks

Nikolas Trawny; Anastasios I. Mourikis; Stergios I. Roumeliotis; Andrew Edie Johnson; James F. Montgomery

In this paper we describe an extended Kalman filter algorithm for estimating the pose and velocity of a spacecraft during entry, descent, and landing. The proposed estimator combines measurements of rotational velocity and acceleration from an inertial measurement unit (IMU) with observations of a priori mapped landmarks, such as craters or other visual features, that exist on the surface of a planet. The tight coupling of inertial sensory information with visual cues results in accurate, robust state estimates available at a high bandwidth. The dimensions of the landing uncertainty ellipses achieved by the proposed algorithm are three orders of magnitude smaller than those possible when relying exclusively on IMU integration. Extensive experimental and simulation results are presented, which demonstrate the applicability of the algorithm on real-world data and analyze the dependence of its accuracy on several system design parameters.
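
The update step the abstract describes, projecting a mapped landmark through the camera and correcting the state with the pixel residual, can be sketched compactly. For brevity the state below is camera position only, with attitude assumed known; the paper's EKF estimates the full pose and velocity, and all names are illustrative.

```python
# Hedged sketch of one EKF update from one a-priori mapped landmark.
import numpy as np

def landmark_update(p, P, R_wc, landmark_w, z_px, fx, fy, r_px=1.0):
    """p: (3,) camera position, P: (3,3) covariance,
    R_wc: world-to-camera rotation (assumed known here),
    landmark_w: (3,) mapped landmark, z_px: (2,) measured pixel."""
    pc = R_wc @ (landmark_w - p)              # landmark in camera frame
    x, y, zc = pc
    h = np.array([fx * x / zc, fy * y / zc])  # predicted pixel
    # Jacobian of the projection wrt pc, chained with d(pc)/d(p) = -R_wc
    J_proj = np.array([[fx / zc, 0.0, -fx * x / zc**2],
                       [0.0, fy / zc, -fy * y / zc**2]])
    H = J_proj @ (-R_wc)
    S = H @ P @ H.T + r_px * np.eye(2)        # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)            # Kalman gain
    return p + K @ (z_px - h), (np.eye(3) - K @ H) @ P
```

Each visible landmark contributes one such update; the tight coupling the abstract refers to is that these pixel residuals correct the inertially propagated state directly.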


Robotics and Autonomous Systems | 2002

Towards vision-based safe landing for an autonomous helicopter

Pedro J. Garcia-Pardo; Gaurav S. Sukhatme; James F. Montgomery

Autonomous landing is a challenging problem for aerial robots. An autonomous landing maneuver depends largely on two capabilities: the decision of where to land and the generation of control signals to guide the vehicle to a safe landing. We focus on the first capability here by presenting a strategy and an underlying fast algorithm as the computer vision basis to make a safe landing decision. The experimental results obtained from real test flights on a helicopter testbed demonstrate the robustness of the approach under widely different light, altitude and background texture conditions, as well as its feasibility for limited-performance embedded computers.
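
A minimal sketch of the kind of test this implies: scan the image for a window of low gray-level contrast, on the assumption that obstacles produce contrast while clear ground does not. Window size, stride, and the contrast measure below are illustrative; the paper's contribution is a fast strategy built around such a test.

```python
# Hedged sketch: exhaustive low-contrast window search over a grayscale
# image. A real-time version would prune the search, as the paper does.
import numpy as np

def safest_window(gray, win=64, stride=16):
    """Return ((row, col), contrast) of the lowest-contrast patch."""
    best_rc, best_contrast = None, np.inf
    h, w = gray.shape
    for r in range(0, h - win + 1, stride):
        for c in range(0, w - win + 1, stride):
            patch = gray[r:r + win, c:c + win].astype(np.float32)
            contrast = patch.max() - patch.min()   # crude contrast measure
            if contrast < best_contrast:
                best_rc, best_contrast = (r, c), contrast
    return best_rc, best_contrast
```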


International Conference on Robotics and Automation | 2005

Vision Guided Landing of an Autonomous Helicopter in Hazardous Terrain

Andrew Edie Johnson; James F. Montgomery; Larry H. Matthies

Future robotic space missions will employ a precision soft-landing capability that will enable exploration of previously inaccessible sites that have strong scientific significance. To enable this capability, a fully autonomous onboard system that identifies and avoids hazardous features such as steep slopes and large rocks is required. Such a system will also provide greater functionality in unstructured terrain to unmanned aerial vehicles. This paper describes an algorithm for landing hazard avoidance based on images from a single moving camera. The core of the algorithm is an efficient application of structure from motion to generate a dense elevation map of the landing area. Hazards are then detected in this map and a safe landing site is selected. The algorithm has been implemented on an autonomous helicopter testbed and demonstrated four times resulting in the first autonomous landing of an unmanned helicopter in unknown and hazardous terrain.
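
The hazard-map stage the abstract outlines, slope and roughness checks over the recovered elevation map followed by site selection, can be sketched as follows. The window size, tolerances, and use of a distance transform for site selection are illustrative assumptions, not the paper's exact design.

```python
# Hedged sketch: flag DEM cells whose local plane fit is too steep or
# too rough for the lander, then pick the safe cell farthest from any
# hazard. Assumes a dense elevation map from structure from motion.
import numpy as np
from scipy.ndimage import distance_transform_edt

def select_landing_site(dem, cell_m, max_slope_deg=10.0,
                        max_rough_m=0.3, win=9):
    rows, cols = dem.shape
    hazard = np.ones((rows, cols), dtype=bool)   # unevaluated cells unsafe
    half = win // 2
    # least-squares plane z = a*x + b*y + c over each win x win window
    ys, xs = np.mgrid[-half:half + 1, -half:half + 1]
    A = np.column_stack([xs.ravel() * cell_m, ys.ravel() * cell_m,
                         np.ones(win * win)])
    pinv = np.linalg.pinv(A)
    for r in range(half, rows - half):
        for c in range(half, cols - half):
            z = dem[r - half:r + half + 1, c - half:c + half + 1].ravel()
            coeffs = pinv @ z
            slope = np.degrees(np.arctan(np.hypot(coeffs[0], coeffs[1])))
            rough = np.abs(z - A @ coeffs).max()   # worst plane residual
            hazard[r, c] = slope > max_slope_deg or rough > max_rough_m
    clearance = distance_transform_edt(~hazard)    # distance to hazards
    return np.unravel_index(np.argmax(clearance), dem.shape)
```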


IEEE Aerospace Conference | 2008

Overview of Terrain Relative Navigation Approaches for Precise Lunar Landing

Andrew Edie Johnson; James F. Montgomery

The driving precision landing requirement for the Autonomous Landing and Hazard Avoidance Technology project is to autonomously land within 100 m of a predetermined location on the lunar surface. Traditional lunar landing approaches based on inertial sensing do not have the navigational precision to meet this requirement. The purpose of Terrain Relative Navigation (TRN) is to augment inertial navigation by providing position or bearing measurements relative to known surface landmarks. From these measurements, the navigation error can be reduced to a level that meets the 100 m requirement. There are three different TRN functions: global position estimation, local position estimation and velocity estimation. These functions can be achieved with active range sensing or passive imaging. This paper surveys many TRN approaches and then presents high-fidelity simulation results for contour matching and area correlation approaches to TRN using active sensors. Since TRN requires an a priori reference map, the paper concludes by describing past and future lunar imaging and digital elevation map data sets available for this purpose.
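
Of the approaches surveyed, area correlation is the most direct to sketch: correlate a descent image against the a priori reference map and convert the peak into a global position fix. The sketch below assumes OpenCV's normalized cross-correlation and images already rectified to a common scale and orientation; variable names are illustrative.

```python
# Hedged sketch of an area-correlation TRN position fix.
import cv2

def trn_position_fix(descent_img, reference_map, map_origin_m, map_res_m):
    """Return ((x, y) in map meters, correlation peak value)."""
    # normalized cross-correlation of the image against the map
    result = cv2.matchTemplate(reference_map, descent_img,
                               cv2.TM_CCOEFF_NORMED)
    _, peak_val, _, peak_loc = cv2.minMaxLoc(result)
    col, row = peak_loc                        # top-left of best match
    x = map_origin_m[0] + col * map_res_m
    y = map_origin_m[1] + row * map_res_m
    return (x, y), peak_val                    # peak value gauges confidence
```

A navigation filter would absorb the (x, y) fix as a position measurement; a low peak value would flag the fix as unreliable.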


IEEE Aerospace Conference | 2008

Analysis of On-Board Hazard Detection and Avoidance for Safe Lunar Landing

Andrew Edie Johnson; Andres Huertas; Robert A. Werner; James F. Montgomery

Landing hazard detection and avoidance technology is being pursued within NASA to improve landing safety and increase access to sites of interest on the lunar surface. The performance of a hazard detection and avoidance system depends on properties of the terrain, sensor performance, algorithm design, vehicle characteristics and the overall guidance, navigation, and control architecture. This paper analyzes the size of the region that must be imaged, sensor performance parameters and the impact of trajectory angle on hazard detection performance. The analysis shows that vehicle hazard tolerance is the driving parameter for hazard detection system design.
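
In the spirit of that analysis, a back-of-the-envelope sketch: the smallest hazard the vehicle cannot tolerate sets the ground sample distance the sensor must achieve, and the divert capability sets the region that must be imaged. The pixels-per-hazard count and margin below are illustrative assumptions, not the paper's numbers.

```python
# Hedged back-of-the-envelope sizing for a hazard detection sensor.
def required_gsd_m(min_hazard_m, pixels_per_hazard=5):
    """Ground sample distance needed to resolve the smallest hazard."""
    return min_hazard_m / pixels_per_hazard

def imaged_region_m(divert_radius_m, margin=1.2):
    """Swath that must be imaged to cover all reachable divert sites."""
    return 2 * divert_radius_m * margin

# e.g. a 0.3 m rock tolerance implies ~6 cm GSD; a 100 m divert
# capability implies a ~240 m swath with a 20% margin
print(required_gsd_m(0.3), imaged_region_m(100.0))
```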


International Symposium on Experimental Robotics | 2003

An Experimental Study of the Autonomous Helicopter Landing Problem

Srikanth Saripalli; Gaurav S. Sukhatme; James F. Montgomery

We present the design and implementation of a real-time, vision-based landing algorithm for an autonomous helicopter. The landing algorithm is integrated with algorithms for visual acquisition of the target (a helipad), and navigation to the target, from an arbitrary initial position and orientation. We use vision for precise target detection and recognition and a combination of vision and GPS for navigation. The helicopter updates its landing target parameters based on vision and uses an on-board behavior-based controller to follow a path to the landing site. We present significant results from flight trials in the field which demonstrate that our detection, recognition and control algorithms are accurate, robust and repeatable.


Distributed Autonomous Robotic Systems | 2000

Fly spy: lightweight localization and target tracking for cooperating air and ground robots

Richard T. Vaughan; Gaurav S. Sukhatme; Francisco J. Mesa-Martinez; James F. Montgomery

Motivated by the requirements of micro air vehicles, we present a simple method for estimating the position, heading and altitude of an aerial robot by tracking the image of a communicating GPS-localized ground robot. The image-to-GPS mapping thus generated can be used to localize other objects on the ground. Results from experiments with real robots are described.
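
The image-to-GPS mapping can be sketched as a least-squares fit: each tracked (pixel, GPS) pair for the ground robot constrains a map from image to ground coordinates, which then localizes other objects seen in the same image. The affine model below is an illustrative choice; a similarity or homography fit would follow the same recipe.

```python
# Hedged sketch: fit an affine image-to-ground map from tracked
# (pixel, GPS) correspondences, then localize other image points.
import numpy as np

def fit_image_to_gps(pixels, gps_xy):
    """Least-squares affine map M with [x, y] = [u, v, 1] @ M."""
    A = np.column_stack([pixels, np.ones(len(pixels))])   # (N, 3)
    M, *_ = np.linalg.lstsq(A, gps_xy, rcond=None)        # (3, 2)
    return M

def localize(pixel, M):
    u, v = pixel
    return np.array([u, v, 1.0]) @ M

# usage: track the robot at pixels p_i while it reports GPS g_i, then
#   M = fit_image_to_gps(np.array(p_list), np.array(g_list))
#   target_xy = localize(detected_pixel, M)
```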

Collaboration


Dive into James F. Montgomery's collaborations.

Top Co-Authors

Andrew Edie Johnson, California Institute of Technology
Gaurav S. Sukhatme, University of Southern California
George A. Bekey, University of Southern California
Hannah Goldberg, California Institute of Technology
Carl Christian Liebe, California Institute of Technology
Gary D. Spiers, California Institute of Technology
James W. Alexander, California Institute of Technology