Matthew C. Deans
Ames Research Center
Publications
Featured research published by Matthew C. Deans.
IEEE Aerospace Conference | 2005
Liam Pedersen; David E. Smith; Matthew C. Deans; Randy Sargent; Clay Kunz; David Lees; Srikanth Rajagopalan
Future planetary rover missions, such as the upcoming Mars Science Laboratory, will require rovers to autonomously navigate to science targets specified from up to 10 meters away and to place instruments against those targets with 1-centimeter precision. The current state of the art, demonstrated by the Mars Exploration Rover (MER) mission, typically requires three sols (Martian days) for approach and placement, with several communication cycles between the rovers and ground operations. The capability for goal-level commanding of a rover to visit multiple science targets in a single sol represents a tenfold increase in productivity and decreases daily operations costs. Such a capability requires a high degree of robotic autonomy: visual target tracking and navigation for the rover to approach the targets, mission planning to determine the most beneficial course of action given a large set of desired goals in the face of uncertainty, and robust execution to cope with variations in time and power consumption, as well as with possible failures in tracking or navigation due to occlusion or unexpected obstacles. We have developed a system that provides these features. The system uses a vision-based target tracker that recovers the 6-DOF transformations between the rover and the tracked targets as the rover moves, and an off-board planner whose plans are carried out by an on-board robust executive. The tracker combines a feature-based approach, which tracks a set of interest points in 3D using stereo, with a shape-based approach, which registers dense 3D meshes. The off-board planner, in addition to generating a primary activity sequence, creates a large set of contingent, or alternate, plans to deal with anticipated failures in tracking and with uncertainty in resource consumption. This paper describes our tracking and planning systems, including the results of experiments carried out using the K9 rover. These systems are part of a larger effort that includes tools for target specification in 3D, ground-based simulation and plan verification, round-trip data tracking, rover software and hardware, and scientific visualization. The complete system has been shown to provide the capability of multiple instrument placements on rocks within a 10-meter radius, all within a single command cycle.
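The 6-DOF recovery in the feature-based tracker has a standard closed-form core: given interest points triangulated in 3D from stereo before and after a rover step, a least-squares rigid transform aligns the two matched point sets. Below is a minimal sketch using the SVD-based Kabsch/Horn solution; the abstract does not state which solver K9 used, so treat this as illustrative only.

```python
import numpy as np

def rigid_transform(P, Q):
    """Least-squares rigid transform (R, t) mapping point set P onto Q.

    P, Q: (N, 3) matched 3D points, e.g. interest points triangulated
    from stereo before and after a rover motion step.
    """
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)                     # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                            # guard against reflections
    t = cq - R @ cp
    return R, t

# Quick self-check on synthetic data.
rng = np.random.default_rng(0)
P = rng.normal(size=(50, 3))
R_true, _ = np.linalg.qr(rng.normal(size=(3, 3)))
if np.linalg.det(R_true) < 0:
    R_true[:, 0] *= -1                            # keep a proper rotation
Q = P @ R_true.T + np.array([0.5, -0.2, 1.0])
R, t = rigid_transform(P, Q)
assert np.allclose(R, R_true) and np.allclose(t, [0.5, -0.2, 1.0])
```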
Intelligent Robots and Systems | 2003
Matthew C. Deans; Clayton Kunz; Randy Sargent; Liam Pedersen
This paper presents an efficient and robust method for registering terrain models created using stereo vision on a planetary rover. Our approach projects two surface models into a virtual depth map, rendering the models as they would be seen from a single range sensor. Correspondence is established based on which points project to the same location in the virtual range sensor. A robust norm of the deviations in observed depth is used as the objective function, and the algorithm searches for the rigid transformation that minimizes this norm. An initial coarse search is seeded with rover pose information from odometry and orientation sensing; a fine search is then performed using Levenberg-Marquardt. Our method enables a planetary rover to keep track of designated science targets as it moves, and to hand off targets from one set of stereo cameras to another. These capabilities are essential for the rover to autonomously approach a science target and place an instrument in contact with it in a single command cycle.
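A toy rendition of the virtual-range-sensor idea follows, assuming a simple pinhole projection and nearest-point z-buffering, with the coarse search shown over a single pose axis. A real implementation would render dense meshes, use smooth interpolation or analytic derivatives, and finish with the Levenberg-Marquardt refinement described above, which is omitted here.

```python
import numpy as np
from scipy.spatial.transform import Rotation

def render_depth(pts, shape=(64, 64), f=60.0):
    """Z-buffer a point cloud into a virtual range image (nearest point wins)."""
    depth = np.full(shape, np.nan)
    z = pts[:, 2]
    u = np.int32(np.round(f * pts[:, 0] / z)) + shape[1] // 2
    v = np.int32(np.round(f * pts[:, 1] / z)) + shape[0] // 2
    ok = (z > 0) & (u >= 0) & (u < shape[1]) & (v >= 0) & (v < shape[0])
    for ui, vi, zi in zip(u[ok], v[ok], z[ok]):
        if np.isnan(depth[vi, ui]) or zi < depth[vi, ui]:
            depth[vi, ui] = zi
    return depth

def depth_deviations(x, pts, ref_depth):
    """Per-pixel depth differences between a transformed model and a reference."""
    R = Rotation.from_rotvec(x[:3]).as_matrix()
    r = render_depth(pts @ R.T + x[3:]) - ref_depth
    r[np.isnan(r)] = 0.0           # pixels seen by only one model carry no cost
    return r.ravel()

def huber_cost(x, pts, ref_depth, k=0.05):
    """Robust (Huber) norm of the depth deviations."""
    r = depth_deviations(x, pts, ref_depth)
    a = np.abs(r)
    return np.sum(np.where(a <= k, 0.5 * r ** 2, k * (a - 0.5 * k)))

# Coarse search over one translation axis, as would be seeded from odometry.
rng = np.random.default_rng(1)
xy = rng.uniform(-1, 1, size=(2000, 2))
pts = np.column_stack([xy, 3.0 + 0.3 * np.sin(3 * xy[:, 0]) + 0.2 * xy[:, 1] ** 2])
ref = render_depth(pts)
best = min(np.linspace(-0.2, 0.2, 21),
           key=lambda dx: huber_cost(np.array([0, 0, 0, dx, 0, 0]), pts, ref))
print("best x offset:", best)      # ~0 for this self-registration toy
```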
IEEE Aerospace Conference | 2005
Matthew C. Deans; Clayton Kunz; Randy Sargent; Eric Park; Liam Pedersen
We have developed a combined feature-based and shape-based visual tracking system designed to enable a planetary rover to visually track, and servo to, specific points chosen by a user, with centimeter precision. The feature-based tracker uses invariant feature detection and matching across a stereo pair, as well as matching of pairs before and after robot movement, to compute an incremental 6-DOF motion at each tracker update. This tracking method is subject to drift over time, which can be compensated by the shape-based method. The shape-based method consists of 3D model registration, which recovers 6-DOF motion given sufficient shape and proper initialization. By integrating these complementary algorithms, the combined tracker couples the efficiency and robustness of feature-based methods with the precision and accuracy of model registration. In this paper, we present the algorithms and their integration into a combined visual tracking system.
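One way to picture the integration: the feature-based tracker supplies a cheap incremental motion at every update, whose errors compound under composition, while the shape-based registration occasionally supplies an absolute pose that cancels the accumulated drift. A schematic sketch, with all pose math reduced to 4x4 homogeneous transforms (my simplification, not the paper's implementation):

```python
import numpy as np

class CombinedTracker:
    """Schematic fusion of fast drifting increments with slow absolute fixes."""

    def __init__(self, T0=np.eye(4)):
        self.T = T0.copy()             # current rover-to-target pose estimate

    def feature_update(self, dT):
        """Per-frame 6-DOF increment from stereo feature matching.

        Cheap and robust, but composition lets small errors accumulate."""
        self.T = self.T @ dT

    def shape_update(self, T_abs):
        """Absolute pose from 3D model registration; cancels drift.

        Needs sufficient shape and a good initial guess, which the
        running feature track provides."""
        self.T = T_abs.copy()

# Demo: drift builds over feature updates, then a registration fix resets it.
tracker = CombinedTracker()
noisy = np.eye(4); noisy[0, 3] = 0.10 + 0.002   # 10 cm step + 2 mm bias
for _ in range(20):
    tracker.feature_update(noisy)               # accumulated bias: 20 * 2 mm
truth = np.eye(4); truth[0, 3] = 2.0
tracker.shape_update(truth)                     # drift removed
```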
International Conference on Image Processing | 2014
Ara V. Nefian; Xavier Bouyssounouse; Laurence J. Edwards; Taemin Kim; Emily Hand; Jared Rhizor; Matthew C. Deans; George Bebis; Terrence Fong
This paper introduces an advanced rover localization system suitable for autonomous planetary exploration in the absence of Global Positioning System (GPS) infrastructure. Given an existing terrain map (image and elevation) obtained from satellite imagery and the images provided by the rover's stereo camera system, the proposed method determines the best rover location through visual odometry, 3D terrain matching, and horizon matching. The system is tested on data from a 3 km traverse of the Basalt Hills quarry in California, where the GPS track is used as ground truth. Experimental results show that the system reduces the localization error obtained from wheel odometry by over 60%.
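The abstract does not detail the matching internals, but the 3D terrain matching step can be pictured as correlating a rover-derived local elevation patch against the orbital elevation map. A brute-force normalized cross-correlation sketch (an assumed stand-in; the actual system also fuses visual odometry and horizon matching):

```python
import numpy as np

def match_terrain(patch, dem):
    """Brute-force NCC search of a local elevation patch over a map DEM."""
    h, w = patch.shape
    p = (patch - patch.mean()) / patch.std()
    best_score, best_rc = -np.inf, None
    for r in range(dem.shape[0] - h + 1):
        for c in range(dem.shape[1] - w + 1):
            win = dem[r:r + h, c:c + w]
            s = win.std()
            if s == 0:
                continue                  # flat window: correlation undefined
            score = np.mean(p * (win - win.mean()) / s)
            if score > best_score:
                best_score, best_rc = score, (r, c)
    return best_rc, best_score

# Demo: recover where a noisy local patch sits in a synthetic DEM.
rng = np.random.default_rng(2)
dem = rng.normal(size=(80, 80)).cumsum(0).cumsum(1)   # smooth-ish terrain
patch = dem[30:42, 50:62] + 0.1 * rng.normal(size=(12, 12))
print(match_terrain(patch, dem))          # ~((30, 50), score near 1)
```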
Intelligent Robots and Systems | 2007
Céline Meyer; Matthew C. Deans
Planetary missions generate a large quantity of image data. In flight operations, or on servers such as the Planetary Data System (PDS), these data products are searchable only by keys such as the sol, spacecraft clock, or rover motion counter index, with little connection to the semantic content of the images. During mission science operations the science team typically pores over each and every data product returned, which may not require more sophisticated organization and search tools. For analyzing existing image databases with thousands or millions of images, however, manual searching, matching, or classification is intractable. In this paper, we present a method for matching images based on similarities in visual texture. For every image in a database, a bank of filters is used to compute the response to localized frequencies and orientations. The filter responses are reduced to a low-dimensional descriptor vector, a 37-dimensional fingerprint. At query time, fingerprints are quickly matched to find images with similar appearance. Image databases containing several thousand images are pre-processed offline in a matter of hours, and matches from the database are found in a matter of seconds. We have demonstrated this image matching technique using three sources of data and carried out user tests to evaluate matching performance by hand-labeling results. User tests show approximately a 40% false positive rate within the top 14 matches. This represents a powerful search tool for databases of thousands of images, where the a priori match probability for an image might be less than 1%.
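The filter-bank fingerprint can be sketched as follows, here with a real Gabor bank of 3 frequencies x 4 orientations and two statistics per response (24 dimensions; the paper's exact 37-dimensional construction is not given in the abstract, so this is only an analogue):

```python
import numpy as np
from scipy.signal import fftconvolve

def gabor_kernel(freq, theta, size=15, sigma=3.0):
    """Real Gabor filter: oriented sinusoid under a Gaussian envelope."""
    ax = np.arange(size) - size // 2
    x, y = np.meshgrid(ax, ax)
    xr = x * np.cos(theta) + y * np.sin(theta)
    env = np.exp(-(x ** 2 + y ** 2) / (2 * sigma ** 2))
    return env * np.cos(2 * np.pi * freq * xr)

def fingerprint(img, freqs=(0.1, 0.2, 0.3),
                thetas=(0, np.pi / 4, np.pi / 2, 3 * np.pi / 4)):
    """Texture descriptor: mean |response| and std per filter in the bank."""
    desc = []
    for f in freqs:
        for t in thetas:
            r = fftconvolve(img, gabor_kernel(f, t), mode="same")
            desc += [np.abs(r).mean(), r.std()]
    return np.asarray(desc)

def query(db, q):
    """Return database indices ranked by descriptor distance to the query."""
    return np.argsort(np.linalg.norm(db - q, axis=1))

# Demo: an image's own fingerprint ranks first against a small database.
rng = np.random.default_rng(3)
images = [rng.normal(size=(64, 64)) for _ in range(10)]
db = np.array([fingerprint(im) for im in images])
print(query(db, fingerprint(images[4]))[0])    # -> 4
```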
IEEE Aerospace Conference | 2005
Randy Sargent; Matthew C. Deans; Clayton Kunz; Michael H. Sims; K. E. Herkenhoff
The Mars Exploration Rovers, Spirit and Opportunity, have spent several successful months on Mars, returning gigabytes of images and spectral data to scientists on Earth. One of the instruments on the MER rovers, the Athena Microscopic Imager (MI), is a fixed-focus, megapixel camera providing a ±3 mm depth of field and a 31×31 mm field of view at a working distance of 63 mm from the lens to the object being imaged. To maximize the science return from this instrument, we developed the Ames MI toolkit and supported its use during the primary mission. The MI toolkit is a set of programs that operate on collections of MI images, with the goal of making the data more understandable to the scientists on the ground. Because of the limited depth of field of the camera, and the often highly variable topography of the terrain being imaged, MI images of a given rock are often taken as a stack, with the instrument deployment device (IDD) moving along a computed normal vector, pausing every few millimeters for the MI to acquire an image. The MI toolkit provides image registration and focal section merging, which combine these images to form a single, maximally in-focus image while compensating for changes in lighting as well as for parallax due to the motion of the camera. The MI toolkit also provides a 3D reconstruction of the surface being imaged, using stereo, and can embed 2D MI images as texture maps into 3D meshes produced by other imagers on board the rover to provide context. The 2D images and 3D meshes output by the toolkit are easily viewed by scientists using other mission tools, such as Viz or the MI browser. This paper describes the MI toolkit in detail, as well as our experience using it with scientists at JPL during the primary MER mission.
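The focal section merge at the heart of the toolkit can be sketched as a per-pixel sharpness vote over an already-registered stack. This toy version ignores the registration, lighting compensation, and parallax handling that the real toolkit performs:

```python
import numpy as np
from scipy import ndimage

def focal_merge(stack):
    """Merge a registered focal stack into one maximally in-focus image.

    stack: (N, H, W) grayscale frames at different focus distances.
    Per pixel, take the frame with the strongest local high-frequency
    content (smoothed Laplacian magnitude)."""
    sharp = np.stack([
        ndimage.uniform_filter(np.abs(ndimage.laplace(im)), size=9)
        for im in stack
    ])
    best = np.argmax(sharp, axis=0)                  # sharpest frame per pixel
    return np.take_along_axis(stack, best[None], axis=0)[0]

# Demo: two frames, each sharp in a different half of the image.
rng = np.random.default_rng(4)
scene = rng.normal(size=(64, 64))
blurred = ndimage.gaussian_filter(scene, 3)
f0 = scene.copy(); f0[:, 32:] = blurred[:, 32:]      # left half in focus
f1 = scene.copy(); f1[:, :32] = blurred[:, :32]      # right half in focus
merged = focal_merge(np.stack([f0, f1]))
print(np.abs(merged - scene).mean())                 # small: mostly recovered
```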
Collaboration Technologies and Systems | 2011
Richard M. Keller; William J. Clancey; Matthew C. Deans; Joan C. Differding; K. Estelle Dodson; Francis Y. Enomoto; Jay Trimble; Michael H. Sims
We describe a set of ten collaborative systems projects developed at NASA Ames Research Center over the past ten years. Our goal is to design new information technologies and collaboration tools that facilitate the process by which NASA engineers, scientists, and mission personnel collaborate in their unique work settings. We employ information management, artificial intelligence, and participatory design practices to build systems that are highly usable, augment human cognition, and support distributed NASA teams. NASA settings and applications serve as valuable testbeds for studying collaboration and developing new collaborative technologies.
IEEE Aerospace Conference | 2017
Matthew C. Deans; Jessica J. Marquez; Tamar Cohen; Matthew J. Miller; Ivonne Deliz; Steven Hillenius; Jeffrey A. Hoffman; Yeon Jin Lee; David Lees; Johannes Norheim; Darlene S. S. Lim
In June of 2016, the Biologic Analog Science Associated with Lava Terrains (BASALT) research project conducted its first field deployment, which we call BASALT-1. BASALT-1 was a science-driven field campaign in a volcanic field in Idaho, run as a simulated human mission to Mars. Scientists and mission operators were provided with a suite of ground software tools, collectively called Minerva, to carry out their work. Minerva provides capabilities for traverse planning and route optimization, timeline generation and display, procedure management, execution monitoring, data archiving, visualization, and search. This paper describes the Minerva architecture, its constituent components, use cases, and some preliminary findings from the BASALT-1 campaign.
International Conference on Image Processing | 2016
Xavier Bouyssounouse; Ara V. Nefian; A. Thomas; Laurence J. Edwards; Matthew C. Deans; Terrence Fong
Planetary rovers navigate in extreme environments in which a Global Positioning System (GPS) is unavailable, maps are restricted to the relatively low resolution provided by orbital imagery, and compass information is often lacking due to weak or nonexistent magnetic fields. Accurate rover localization is nonetheless essential to mission success: reaching the science targets, avoiding negative obstacles visible only in orbital maps, and maintaining good communication links with the ground. This paper describes a horizon-based solution for precise rover orientation estimation. The horizon detected in imagery from the on-board navigation cameras is matched against the horizon rendered from the existing terrain model. The set of rotation parameters (roll, pitch, yaw) that minimizes the cost function between the two horizon curves gives the estimated rover orientation.
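The final minimization can be illustrated in one dimension: treat the horizon as an elevation-versus-azimuth curve, model yaw as a shift of the curve, pitch as an offset, and roll (crudely, small-angle) as a sinusoidal tilt, then fit the three angles. This curve parameterization is my simplification, not the paper's:

```python
import numpy as np
from scipy.optimize import minimize

az = np.linspace(0, 2 * np.pi, 360, endpoint=False)
terrain_horizon = 0.10 * np.sin(2 * az) + 0.05 * np.cos(5 * az)  # from a DEM

def predict(params):
    """Horizon elevation vs. azimuth under small-angle (yaw, pitch, roll)."""
    yaw, pitch, roll = params
    shifted = np.interp((az - yaw) % (2 * np.pi), az, terrain_horizon,
                        period=2 * np.pi)
    return shifted + pitch + roll * np.sin(az)       # crude roll model

# Synthetic "detected" horizon from a known pose, plus detection noise.
true = np.array([0.30, -0.02, 0.05])                 # radians
observed = predict(true) + 0.002 * np.random.default_rng(5).normal(size=az.size)

fit = minimize(lambda p: np.sum((predict(p) - observed) ** 2),
               x0=np.zeros(3), method="Nelder-Mead")
print(fit.x)                                         # ~ [0.30, -0.02, 0.05]
```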
Icarus | 2015
N. Lanza; A. M. Ollila; A. Cousin; Roger C. Wiens; Samuel Michael Clegg; Nicolas Mangold; Nathan T. Bridges; D. I. Cooper; Mariek E. Schmidt; Jeffrey A. Berger; Raymond E. Arvidson; Noureddine Melikechi; Horton E. Newsom; R. L. Tokar; Craig Hardgrove; A. Mezzacappa; Ryan S. Jackson; Benton C. Clark; O. Forni; Sylvestre Maurice; M. Nachon; Ryan Anderson; Jennifer G. Blank; Matthew C. Deans; D. M. Delapp; R. Leveille; Rhonda McInroy; Ronald Martinez; P.-Y. Meslin; P. C. Pinet