
Publication


Featured research published by Arturo L. Rankin.


IEEE Aerospace Conference | 2007

Global Path Planning on Board the Mars Exploration Rovers

Joseph Carsten; Arturo L. Rankin; Dave Ferguson; Anthony Stentz

In January 2004, NASA's twin Mars Exploration Rovers (MERs), Spirit and Opportunity, began searching the surface of Mars for evidence of past water activity. In order to localize and approach scientifically interesting targets, the rovers employ an on-board navigation system. Given the latency in sending commands from Earth to the Martian rovers (and in receiving return data), a high level of navigational autonomy is desirable. Autonomous navigation with hazard avoidance (AutoNav) is currently performed using a local path planner called GESTALT (grid-based estimation of surface traversability applied to local terrain). GESTALT uses stereo cameras to evaluate terrain safety and avoid obstacles. GESTALT works well to guide the rovers around narrow and isolated hazards; however, it is susceptible to failure when clusters of closely spaced, non-traversable rocks form extended obstacles. In May 2005, a new technology task was initiated at the Jet Propulsion Laboratory to address this limitation. A version of the Carnegie Mellon University Field D* global path planner has been integrated into MER flight software, enabling simultaneous local and global planning during AutoNav. A revised version of AutoNav was uploaded to the rovers during the summer of 2006. This paper describes how global planning was integrated into the MER flight software and presents results of testing the improved AutoNav system using the MER Surface System TestBed rover.
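
As an illustrative sketch only (not the MER flight code), the snippet below shows the basic idea of global planning over a grid of terrain-derived traversability costs. Field D* additionally supports incremental replanning and paths that cross cell boundaries at arbitrary points; a plain A* search is used here for brevity, and the grid, costs, and start/goal cells are hypothetical.

```python
# Minimal sketch of global path planning over a traversability-cost grid.
# This is a generic A*, not Field D* as used on MER.
import heapq
import math

def plan(cost_grid, start, goal):
    """A* over an 8-connected grid. cost_grid[r][c] is the traversal cost of a
    cell, or None for a non-traversable (obstacle) cell."""
    rows, cols = len(cost_grid), len(cost_grid[0])

    def h(cell):  # straight-line distance heuristic
        return math.hypot(cell[0] - goal[0], cell[1] - goal[1])

    best_g = {start: 0.0}
    came_from = {}
    open_set = [(h(start), start)]
    closed = set()
    while open_set:
        _, cell = heapq.heappop(open_set)
        if cell in closed:
            continue
        closed.add(cell)
        if cell == goal:  # reconstruct the path back to the start
            path = [cell]
            while cell in came_from:
                cell = came_from[cell]
                path.append(cell)
            return path[::-1]
        r, c = cell
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                nr, nc = r + dr, c + dc
                if (dr, dc) == (0, 0) or not (0 <= nr < rows and 0 <= nc < cols):
                    continue
                if cost_grid[nr][nc] is None:
                    continue  # skip non-traversable cells
                new_g = best_g[cell] + math.hypot(dr, dc) * cost_grid[nr][nc]
                if new_g < best_g.get((nr, nc), float("inf")):
                    best_g[(nr, nc)] = new_g
                    came_from[(nr, nc)] = cell
                    heapq.heappush(open_set, (new_g + h((nr, nc)), (nr, nc)))
    return None  # no traversable route exists

# Example: an extended obstacle the planner must route around.
grid = [[1, 1,    1,    1],
        [1, None, None, 1],
        [1, 1,    None, 1],
        [1, 1,    1,    1]]
print(plan(grid, (0, 0), (3, 3)))
```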


Proceedings of the 24th US Army Science Conference | 2006

Daytime Water Detection by Fusing Multiple Cues for Autonomous Off-Road Navigation

Arturo L. Rankin; Larry H. Matthies; Andres Huertas

Detecting water hazards is a significant challenge to unmanned ground vehicle autonomous off-road navigation. This paper focuses on detecting the presence of water during the daytime using color cameras. A multi-cue approach is taken: evidence of the presence of water is generated from color, texture, and the detection of reflections in stereo range data. A rule base for fusing water cues was developed by evaluating detection results from an extensive archive of data-collection imagery containing water. This software has been implemented in a run-time passive perception subsystem and tested thus far under Linux on a Pentium-based processor.
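
For illustration, a minimal sketch of fusing per-pixel water cues with a simple rule base is shown below. The individual cue scores and thresholds are hypothetical stand-ins; the paper's actual rule base was derived from the archived imagery described above.

```python
# Illustrative rule base for fusing water cues; thresholds are hypothetical.
def classify_water_pixel(color_score, texture_score, reflection_score):
    """Each score is in [0, 1], where higher means stronger evidence of water
    from that cue (sky-like color, low texture, stereo-detected reflection)."""
    # Rule 1: a strong reflection cue alone is sufficient.
    if reflection_score > 0.8:
        return True
    # Rule 2: moderately strong color and texture cues must agree.
    if color_score > 0.6 and texture_score > 0.6:
        return True
    # Rule 3: weaker cues only count when all three are present together.
    if color_score > 0.4 and texture_score > 0.4 and reflection_score > 0.4:
        return True
    return False

print(classify_water_pixel(0.5, 0.5, 0.5))   # True  (rule 3 fires)
print(classify_water_pixel(0.7, 0.3, 0.2))   # False (no rule fires)
```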


International Conference on Robotics and Automation | 2002

Algorithms and sensors for small robot path following

Robert W. Hogg; Arturo L. Rankin; Stergios I. Roumeliotis; Michael McHenry; Daniel M. Helmick; Charles F. Bergh; Larry H. Matthies

Tracked mobile robots in the 20 kg size class are under development for applications in urban reconnaissance. For efficient deployment, it is desirable for teams of robots to be able to automatically execute path-following behaviors, with one or more followers tracking the path taken by a leader. The key challenges to enabling such a capability are (1) to develop sensor packages for such small robots that can accurately determine the path of the leader and (2) to develop path-following algorithms for the subsequent robots. To date, we have integrated gyros, accelerometers, compass/inclinometers, odometry, and differential GPS into an effective sensing package. This paper describes the sensor package, sensor processing algorithm, and path tracking algorithm we have developed for the leader/follower problem in small robots and shows the results of a performance characterization of the system. We also document pragmatic lessons learned about design, construction, and electromagnetic interference issues particular to the performance of state sensors on small robots.
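
As a generic illustration (the paper's specific tracking algorithm is not reproduced here), the sketch below shows a follower converging onto a recorded leader path using a pure-pursuit steering law on a simple unicycle model; the lookahead distance, speed, and example path are hypothetical.

```python
# Minimal leader/follower sketch using a pure-pursuit steering law.
import math

def pure_pursuit_heading(pose, path, lookahead=1.0):
    """pose = (x, y, heading); path = list of (x, y) leader waypoints.
    Returns a commanded heading toward the first waypoint at least
    `lookahead` meters away from the follower."""
    x, y, _ = pose
    for wx, wy in path:
        if math.hypot(wx - x, wy - y) >= lookahead:
            return math.atan2(wy - y, wx - x)
    wx, wy = path[-1]                        # fall back to the final waypoint
    return math.atan2(wy - y, wx - x)

def step(pose, heading_cmd, speed=0.5, dt=0.1):
    """Advance a simple unicycle model one time step toward heading_cmd."""
    x, y, _ = pose
    return (x + speed * dt * math.cos(heading_cmd),
            y + speed * dt * math.sin(heading_cmd),
            heading_cmd)

# Follower starts offset from a straight leader path and converges onto it.
leader_path = [(i * 0.5, 0.0) for i in range(40)]
pose = (0.0, 1.0, 0.0)
for _ in range(200):
    pose = step(pose, pure_pursuit_heading(pose, leader_path))
print(round(pose[1], 2))  # lateral offset shrinks toward 0
```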


Intelligent Robots and Systems | 2003

Negative obstacle detection by thermal signature

Larry H. Matthies; Arturo L. Rankin

Detecting negative obstacles (ditches, potholes, and other depressions) is one of the most difficult problems in perception for autonomous, off-road navigation. Past work has largely relied on range imagery, because it captures the geometry of the obstacle, is largely insensitive to illumination variations, and because there have been no other reliable alternatives. However, the visible aspect of a negative obstacle shrinks rapidly with range, making it impossible to detect in time to avoid at high speed. To address this problem, we show that the interiors of negative obstacles generally remain warmer than the surrounding terrain throughout the night, making thermal signature a stable property for night-time negative obstacle detection. Experimental results to date have achieved detection distances 45% greater by using thermal signature than by using range data alone. Thermal signature is the first known observable with potential to reveal a deep negative obstacle without actually seeing far into it. Modeling solar illumination has the potential to extend the usefulness of thermal signature through daylight hours.
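
A minimal sketch of the thermal-signature cue is shown below: pixels significantly warmer than their local background in a nighttime thermal image are flagged as candidate negative obstacle interiors. The window size, temperature threshold, and toy scene are assumed values for illustration, not parameters from the paper.

```python
# Illustrative nighttime negative-obstacle cue: flag pixels that are warmer
# than the median of their local neighborhood by more than delta_k kelvin.
import numpy as np

def negative_obstacle_candidates(thermal, window=15, delta_k=2.0):
    """thermal: 2-D array of apparent temperatures (kelvin).
    Returns a boolean mask of pixels at least delta_k warmer than the median
    of their local window."""
    h, w = thermal.shape
    pad = window // 2
    padded = np.pad(thermal, pad, mode="edge")
    mask = np.zeros_like(thermal, dtype=bool)
    for r in range(h):
        for c in range(w):
            local = padded[r:r + window, c:c + window]
            mask[r, c] = thermal[r, c] - np.median(local) > delta_k
    return mask

# Toy example: a cool nighttime scene with a warm trench interior.
scene = np.full((40, 40), 278.0)          # terrain at roughly 5 C
scene[20:24, 5:35] += 4.0                 # trench interior a few kelvin warmer
print(negative_obstacle_candidates(scene)[22, 20])  # True inside the trench
```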


Intelligent Robots and Systems | 2010

Daytime water detection based on color variation

Arturo L. Rankin; Larry H. Matthies

Robust water detection is a critical perception requirement for unmanned ground vehicle (UGV) autonomous navigation. This is particularly true in wide open areas where water can collect in naturally occurring terrain depressions during periods of heavy precipitation and form large water bodies (such as ponds). At far range, reflections of the sky provide a strong cue for water, but at close range the color of the water body itself dominates sky reflections, and the sky-reflection cue is of marginal use. We model this behavior by using water body intensity data from multiple frames of RGB imagery to estimate the total reflection coefficient contribution from surface reflections and the combination of all other factors. We then describe an algorithm that uses one of the color cameras in a forward-looking, UGV-mounted stereo-vision perception system to detect water bodies in wide open areas. This detector exploits the knowledge that the change in saturation-to-brightness ratio across a water body from the leading to the trailing edge is uniform and distinct from other terrain types. In test sequences approaching a pond under clear, overcast, and cloudy sky conditions, the true positive and false positive water detection rates were (95.76%, 96.71%, 98.77%) and (0.45%, 0.60%, 0.62%), respectively. This software has been integrated on an experimental unmanned vehicle and field tested at Ft. Indiantown Gap, PA, USA.
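
The sketch below illustrates the color-variation cue in simplified form: sampling the saturation-to-brightness ratio from the leading to the trailing edge of a candidate region and checking that it varies smoothly in one direction. The monotonicity test, thresholds, and example colors are assumptions for illustration, not the paper's detector.

```python
# Illustrative saturation-to-brightness check across a candidate water region.
import colorsys

def s_over_v(rgb):
    """rgb: (r, g, b) in [0, 1]. Returns the saturation-to-brightness ratio."""
    _, s, v = colorsys.rgb_to_hsv(*rgb)
    return s / v if v > 0 else 0.0

def looks_like_water(column_rgb, min_trend=0.9):
    """column_rgb: pixel colors sampled from the leading (near) edge to the
    trailing (far) edge of a candidate region. Flags the region if the
    saturation-to-brightness ratio decreases nearly monotonically, i.e. the
    far edge reflects more sky (bright, unsaturated) than the near edge."""
    ratios = [s_over_v(c) for c in column_rgb]
    decreasing = sum(1 for a, b in zip(ratios, ratios[1:]) if b <= a + 1e-3)
    return decreasing / max(len(ratios) - 1, 1) >= min_trend

# Toy column: near edge shows a greenish water color, far edge trends toward a
# bright, low-saturation sky reflection.
column = [(0.2 + 0.06 * i, 0.3 + 0.05 * i, 0.2 + 0.06 * i) for i in range(10)]
print(looks_like_water(column))  # True for this synthetic column
```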


Workshop on Applications of Computer Vision | 2005

Stereo-Based Tree Traversability Analysis for Autonomous Off-Road Navigation

Andres Huertas; Larry H. Matthies; Arturo L. Rankin

Autonomous off-road navigation through forested areas is particularly challenging when there is a mixture of densely distributed thin and thick trees. To make progress through a dense forest, the robot must decide which trees it can push over and which trees it must circumvent. This paper describes a stereo-based tree traversability algorithm implemented and tested on a robotic vehicle under the DARPA PerceptOR program. Edge detection is applied to the left view of the stereo pair to extract long, vertical edge contours. A search step matches anti-parallel line pairs that correspond to the boundaries of individual trees. Stereo ranging is performed and the range data within trunk fragments are averaged. The diameter of each tree is then estimated based on the average range to the tree, the focal length of the camera, and the distance in pixels between matched contour lines. We use the estimated tree diameters to construct a tree traversability image used in generating a terrain map. In stationary experiments, the average error in estimating the diameter of thirty mature tree trunks (with diameters ranging from 10-65 cm and distances from the cameras ranging from 2.5-30 meters) was less than 5 cm. Daytime tree traversability results for short-baseline (9 cm) and wide-baseline (30 cm) stereo are presented. Nighttime results using wide-baseline (33.5 cm) thermal infrared stereo are also presented.
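
Under a pinhole camera model and a small-angle approximation, the diameter estimate described above reduces to the average range times the pixel separation of the matched contours divided by the focal length in pixels. The sketch below shows that relationship with hypothetical camera parameters (not those of the PerceptOR sensor head).

```python
# Illustrative trunk diameter estimate from stereo range and contour spacing.
def trunk_diameter_m(range_m, pixel_width, focal_length_px):
    """Small-angle approximation: a trunk whose matched boundary contours are
    pixel_width pixels apart at an average stereo range range_m has a diameter
    of roughly range_m * pixel_width / focal_length_px."""
    return range_m * pixel_width / focal_length_px

# A trunk 10 m away whose boundary contours are 18 pixels apart, seen by a
# camera with an 800-pixel focal length -> about 0.225 m diameter.
print(round(trunk_diameter_m(10.0, 18, 800.0), 3))
```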


Proceedings of SPIE, the International Society for Optical Engineering | 2005

Passive perception system for day/night autonomous off-road navigation

Arturo L. Rankin; Charles F. Bergh; Steven Goldberg; Paolo Bellutta; Andres Huertas; Larry H. Matthies

Passive perception of terrain features is a vital requirement for military-related unmanned autonomous vehicle operations, especially under electromagnetic signature management conditions. As a member of Team Raptor, the Jet Propulsion Laboratory developed a self-contained passive perception system under the DARPA-funded PerceptOR program. An environmentally protected forward-looking sensor head was designed and fabricated in-house to straddle an off-the-shelf pan-tilt unit. The sensor head contained three color cameras for multi-baseline daytime stereo ranging, a pair of cooled mid-wave infrared cameras for nighttime stereo ranging, and supporting electronics to synchronize captured imagery. Narrow-baseline stereo provided improved range data density in cluttered terrain, while wide-baseline stereo provided more accurate ranging for operation at higher speeds in relatively open areas. The passive perception system processed stereo images and output terrain maps containing elevation, terrain type, and detected hazards over a local area network. A novel software architecture was designed and implemented to distribute the data processing on a 533 MHz quad 7410 PowerPC single board computer under the VxWorks real-time operating system. This architecture, which is general enough to operate on N processors, has subsequently been tested on Pentium-based processors under Windows and Linux, and a SPARC-based processor under Unix. The passive perception system was operated during FY04 PerceptOR program evaluations at Fort A. P. Hill, Virginia, and Yuma Proving Ground, Arizona. This paper discusses the Team Raptor passive perception system hardware and software design, implementation, and performance, and describes a road map to faster and improved passive perception.
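
As a loose illustration of a processor-count-agnostic design (not the flight architecture described above), the sketch below distributes per-frame perception work across a pool of N worker processes; the frame-processing function is a stand-in.

```python
# Illustrative processor-count-agnostic processing pool for per-frame work.
from multiprocessing import Pool

def process_frame(frame_id):
    """Stand-in for stereo ranging and terrain classification on one frame."""
    return frame_id, f"terrain map for frame {frame_id}"

if __name__ == "__main__":
    with Pool(processes=4) as pool:          # N workers; 4 is arbitrary here
        for frame_id, result in pool.imap(process_frame, range(8)):
            print(frame_id, result)
```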


Proceedings of SPIE | 2009

Stereo Vision Based Terrain Mapping for Off-Road Autonomous Navigation

Arturo L. Rankin; Andres Huertas; Larry H. Matthies

Successful off-road autonomous navigation by an unmanned ground vehicle (UGV) requires reliable perception and representation of natural terrain. While perception algorithms are used to detect driving hazards, terrain mapping algorithms are used to represent the detected hazards in a world model a UGV can use to plan safe paths. There are two primary ways to detect driving hazards with perception sensors mounted to a UGV: binary obstacle detection and traversability cost analysis. Binary obstacle detectors label terrain as either traversable or non-traversable, whereas traversability cost analysis assigns a cost to driving over a discrete patch of terrain. In uncluttered environments where the non-obstacle terrain is equally traversable, binary obstacle detection is sufficient. However, in cluttered environments, some form of traversability cost analysis is necessary. The Jet Propulsion Laboratory (JPL) has explored both approaches using stereo vision systems. A set of binary detectors has been implemented that detect positive obstacles, negative obstacles, tree trunks, tree lines, excessive slope, low overhangs, and water bodies. A compact terrain map is built from each frame of stereo images. The mapping algorithm labels cells that contain obstacles as no-go regions, and encodes terrain elevation, terrain classification, terrain roughness, traversability cost, and a confidence value. The single-frame maps are merged into a world map where temporal filtering is applied. In previous papers, we have described our perception algorithms that perform binary obstacle detection. In this paper, we summarize the terrain mapping capabilities that JPL has implemented during several UGV programs over the last decade and discuss some challenges to building terrain maps with stereo range data.
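
For illustration only, the sketch below shows one possible representation of a single-frame map cell carrying the quantities listed above, and a simple confidence-weighted merge into a world-map cell with a sticky no-go flag. The field types and merge rule are assumptions, not JPL's implementation.

```python
# Illustrative terrain map cell and single-frame-to-world-map merge.
from dataclasses import dataclass

@dataclass
class MapCell:
    elevation: float      # meters
    terrain_class: str    # e.g. "soil", "vegetation", "rock", "water"
    roughness: float
    cost: float           # traversability cost
    confidence: float     # 0..1
    no_go: bool = False

def merge(world: MapCell, frame: MapCell) -> MapCell:
    """Fuse a new single-frame cell into the existing world-map cell."""
    w = world.confidence + frame.confidence
    blend = lambda a, b: (a * world.confidence + b * frame.confidence) / w
    return MapCell(
        elevation=blend(world.elevation, frame.elevation),
        terrain_class=frame.terrain_class if frame.confidence > world.confidence
                      else world.terrain_class,
        roughness=blend(world.roughness, frame.roughness),
        cost=blend(world.cost, frame.cost),
        confidence=min(w, 1.0),
        no_go=world.no_go or frame.no_go,   # obstacle labels stay marked
    )

old = MapCell(1.2, "soil", 0.1, 0.3, 0.4)
new = MapCell(1.3, "rock", 0.2, 0.6, 0.6, no_go=True)
print(merge(old, new))
```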


International Conference on Robotics and Automation | 2011

Daytime water detection based on sky reflections

Arturo L. Rankin; Larry H. Matthies; Paolo Bellutta

Robust water detection is a critical perception requirement for unmanned ground vehicle (UGV) autonomous navigation. This is particularly true in wide-open areas where water can collect in naturally occurring terrain depressions during periods of heavy precipitation and form large water bodies. One of the properties of water useful for detecting it is that its surface acts as a horizontal mirror at large incidence angles. Water bodies can be indirectly detected by detecting reflections of the sky below the horizon in color imagery. The Jet Propulsion Laboratory (JPL) has implemented a water detector based on sky reflections that geometrically locates the pixel in the sky that is reflecting onto a candidate water pixel on the ground and predicts whether the ground pixel is water based on color similarity and local terrain features. This software detects water bodies in wide-open areas on cross-country terrain at mid- to far-range using imagery acquired from a forward-looking stereo pair of color cameras mounted on a terrestrial UGV. In three test sequences approaching a pond under clear, overcast, and cloudy sky conditions, the true positive detection rate was 100% when the UGV was more than 7 meters from the water's leading edge, and the largest false positive detection rate was 0.58%. The sky-reflection-based water detector has been integrated on an experimental unmanned vehicle and field tested at Ft. Indiantown Gap, PA, USA.
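
The sketch below illustrates the underlying reflection geometry for a level water surface: a candidate ground pixel's mirror image in the sky lies at the same azimuth, at an elevation above the horizon equal to the viewing ray's depression angle, and the two pixels should be similar in color. The camera height, range, example colors, and similarity threshold are hypothetical.

```python
# Illustrative sky-reflection geometry and color-similarity check.
import math

def reflected_sky_elevation(camera_height_m, ground_range_m):
    """Depression angle of the ray hitting a flat water surface at the given
    range; the reflecting sky point sits at the same angle above the horizon."""
    return math.atan2(camera_height_m, ground_range_m)

def supports_water(ground_rgb, sky_rgb, max_dist=0.15):
    """Rough color-similarity check between the candidate ground pixel and the
    geometrically predicted sky pixel (RGB values in [0, 1])."""
    dist = math.sqrt(sum((g - s) ** 2 for g, s in zip(ground_rgb, sky_rgb)))
    return dist < max_dist

# A candidate pixel 20 m ahead of a camera mounted 1.5 m high: its mirror
# image in the sky is about 4.3 degrees above the horizon at the same azimuth.
angle = reflected_sky_elevation(1.5, 20.0)
print(round(math.degrees(angle), 1))
print(supports_water((0.62, 0.70, 0.82), (0.65, 0.72, 0.85)))
```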


Proceedings of SPIE | 2011

Unmanned Ground Vehicle Perception Using Thermal Infrared Cameras

Arturo L. Rankin; Andres Huertas; Larry H. Matthies; Max Bajracharya; Christopher Assad; Shane Brennan; Paolo Bellutta; Gary Sherwin

The ability to perform off-road autonomous navigation at any time of day or night is a requirement for some unmanned ground vehicle (UGV) programs. Because there are times when it is desirable for military UGVs to operate without emitting strong, detectable electromagnetic signals, a passive-only terrain perception mode of operation is also often a requirement. Thermal infrared (TIR) cameras can be used to provide day and night passive terrain perception. TIR cameras have a detector sensitive to either mid-wave infrared (MWIR) radiation (3-5 μm) or long-wave infrared (LWIR) radiation (7-14 μm). With the recent emergence of high-quality uncooled LWIR cameras, TIR cameras have become viable passive perception options for some UGV programs. The Jet Propulsion Laboratory (JPL) has used a stereo pair of TIR cameras under several UGV programs to perform stereo ranging, terrain mapping, tree-trunk detection, pedestrian detection, negative obstacle detection, and water detection based on object reflections. In addition, we have evaluated stereo range data at a variety of UGV speeds, evaluated dual-band TIR classification of soil, vegetation, and rock terrain types, analyzed 24-hour water and 12-hour mud TIR imagery, and analyzed TIR imagery for hazard detection through smoke. Since TIR cameras do not currently provide the resolution available from megapixel color cameras, a UGV's daytime safe speed is often reduced when using TIR instead of color cameras. In this paper, we summarize the UGV terrain perception work JPL has performed with TIR cameras over the last decade and describe a calibration target developed by General Dynamics Robotic Systems (GDRS) for TIR cameras and other sensors.

Collaboration


Top co-authors of Arturo L. Rankin.

Andres Huertas (California Institute of Technology)
Andrew W. Howard (California Institute of Technology)
Ashitey Trebi-Ollennu (California Institute of Technology)
Joseph Carsten (California Institute of Technology)
Michael McHenry (California Institute of Technology)
Paolo Bellutta (Jet Propulsion Laboratory)
Robert W. Hogg (California Institute of Technology)
Terrance L. Huntsberger (California Institute of Technology)