Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Carlos Y. Villalpando is active.

Publication


Featured research published by Carlos Y. Villalpando.


International Journal of Computer Vision | 2007

Computer Vision on Mars

Larry H. Matthies; Mark W. Maimone; Andrew Edie Johnson; Yang Cheng; Reg G. Willson; Carlos Y. Villalpando; Steve B. Goldberg; Andres Huertas; Andrew Neil Stein; Anelia Angelova

Increasing the level of spacecraft autonomy is essential for broadening the reach of solar system exploration. Computer vision has played and will continue to play an important role in increasing autonomy of both spacecraft and Earth-based robotic vehicles. This article addresses progress on computer vision for planetary rovers and landers and has four main parts. First, we review major milestones in the development of computer vision for robotic vehicles over the last four decades. Since research on applications for Earth and space has often been closely intertwined, the review includes elements of both. Second, we summarize the design and performance of computer vision algorithms used on Mars in the NASA/JPL Mars Exploration Rover (MER) mission, which was a major step forward in the use of computer vision in space. These algorithms performed stereo vision and visual odometry for rover navigation and feature tracking for horizontal velocity estimation for the landers. Third, we summarize ongoing research to improve vision systems for planetary rovers, which includes various aspects of noise reduction, FPGA implementation, and vision-based slip perception. Finally, we briefly survey other opportunities for computer vision to impact rovers, landers, and orbiters in future solar system exploration missions.
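The stereo ranging used for rover navigation reduces, for a rectified camera pair, to triangulation from disparity. A minimal sketch of that relation; the focal length and baseline below are illustrative values, not MER camera parameters:

```python
# Stereo range from disparity: for a rectified camera pair,
# z = f * B / d, where f is the focal length (pixels), B the
# stereo baseline (m), and d the disparity (pixels).

def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Return range (m) for one disparity measurement; None if invalid."""
    if disparity_px <= 0:
        return None  # zero/negative disparity carries no range information
    return focal_px * baseline_m / disparity_px

# Example with a hypothetical 20 cm baseline and 700-pixel focal length:
z = depth_from_disparity(14.0, focal_px=700.0, baseline_m=0.20)
print(z)  # 700 * 0.2 / 14 = 10.0 m
```

Note the hyperbolic relation: range resolution degrades quadratically with distance, which is one reason wide baselines and sub-pixel disparity refinement matter for navigation.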


IEEE Aerospace Conference | 2011

FPGA implementation of stereo disparity with high throughput for mobility applications

Carlos Y. Villalpando; Arin Morfopolous; Larry H. Matthies; Steven J. Goldberg

High speed stereo vision can allow unmanned robotic systems to navigate safely in unstructured terrain, but the computational cost can exceed the capacity of typical embedded CPUs. In this paper, we describe an end-to-end stereo computation co-processing system optimized for fast throughput that has been implemented on a single Virtex 4 LX160 FPGA. This system is capable of operating on images from a 1024x768 3CCD (true RGB) camera pair at 15Hz. Data enters the FPGA directly from the cameras via Camera Link and is rectified, pre-filtered and converted into a disparity image all within the FPGA, incurring no CPU load. Once complete, a rectified image and the final disparity image are read out over the PCI bus, for a bandwidth cost of 68MB/sec. Within the FPGA there are 4 distinct algorithms: Camera Link capture, Bilinear rectification, Bilateral subtraction pre-filtering and the Sum of Absolute Difference (SAD) disparity. Each module will be described in brief along with the data flow and control logic for the system. The system has been successfully fielded upon Carnegie Mellon University's National Robotics Engineering Center (NREC) Crusher system during extensive field trials in 2007 and 2008 and is being implemented for other surface mobility systems at JPL.
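The SAD disparity stage has a compact software reference model that makes the FPGA pipeline's per-pixel work explicit. A brute-force sketch, assuming small illustrative window and disparity-range parameters rather than the flight configuration:

```python
import numpy as np

def sad_disparity(left, right, max_disp=16, win=5):
    """Brute-force Sum-of-Absolute-Differences block matching on a
    rectified pair: for each left-image pixel, pick the disparity d
    minimizing the SAD cost against the right image. A software
    reference for what the FPGA core streams in hardware; window and
    disparity range here are illustrative, not flight parameters."""
    h, w = left.shape
    half = win // 2
    disp = np.zeros((h, w), dtype=np.uint8)
    for y in range(half, h - half):
        for x in range(half + max_disp, w - half):
            patch = left[y-half:y+half+1, x-half:x+half+1].astype(np.int32)
            best_cost, best_d = None, 0
            for d in range(max_disp):
                cand = right[y-half:y+half+1,
                             x-d-half:x-d+half+1].astype(np.int32)
                cost = np.abs(patch - cand).sum()
                if best_cost is None or cost < best_cost:
                    best_cost, best_d = cost, d
            disp[y, x] = best_d
    return disp
```

The FPGA version gets its throughput by evaluating all candidate disparities for a pixel in parallel and reusing column sums as the window slides, rather than recomputing each SAD from scratch as this reference does.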


IEEE Aerospace Conference | 2010

Investigation of the Tilera processor for real time hazard detection and avoidance on the Altair Lunar Lander

Carlos Y. Villalpando; Andrew Edie Johnson; Raphael R. Some; Jacob Oberlin; Steven Goldberg

The High Performance Processor (HPP) Task of the Advanced Avionics and Processor Systems (AAPS) Project, part of the Exploration Technology Development Program (ETDP), was to evaluate several high performance multicore processor architectures with respect to their ability to provide real time hazard detection and avoidance for the Constellation Program's Altair Lunar Lander. In this paper we review the Tilera Tile64 processor, the hazard detection and avoidance algorithm, strategies for parallelizing these algorithms, and preliminary performance study results. We were presented with the requirements of 30 Hz LIDAR frame processing rate and 10 second processing time for ALHAT HDA processing and were able to meet that requirement with the Tile64. We then project the performance of these algorithms on the OPERA MAESTRO Processor, a radiation tolerant version of the Tile64 being developed by the Boeing Company.
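The data-parallel decomposition typical of such manycore ports can be illustrated in a few lines: split each frame into row blocks, process blocks independently on separate cores, and merge. A toy sketch using threads in place of Tile64 cores; the per-block kernel (a simple range gradient) is a stand-in for the actual HDA processing:

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def process_block(rows):
    # Hypothetical per-tile kernel: horizontal range gradient
    # of a block of LIDAR range-image rows.
    return np.abs(np.diff(rows, axis=1))

def process_frame(frame, n_workers=4):
    """Split a frame into row blocks, process the blocks concurrently
    (standing in for assignment to Tile64 cores), and merge results."""
    blocks = np.array_split(frame, n_workers, axis=0)
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        results = list(pool.map(process_block, blocks))
    return np.vstack(results)
```

Row-block partitioning works here because the kernel needs no data from neighboring blocks; kernels with spatial support (filters, window correlation) additionally need halo rows exchanged at block boundaries.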


IEEE Aerospace Conference | 2013

A hybrid FPGA/Tilera compute element for autonomous hazard detection and navigation

Carlos Y. Villalpando; Robert A. Werner; John M. Carson; Garen Khanoyan; Ryan A. Stern; Nikolas Trawny

To increase safety for future missions landing on other planetary or lunar bodies, the Autonomous Landing and Hazard Avoidance Technology (ALHAT) program is developing an integrated sensor for autonomous surface analysis and hazard determination. The ALHAT Hazard Detection System (HDS) consists of a Flash LIDAR for measuring the topography of the landing site, a gimbal to scan across the terrain, and an Inertial Measurement Unit (IMU), along with terrain analysis algorithms to identify the landing site and the local hazards. An FPGA and Manycore processor system was developed to interface all the devices in the HDS, to provide high-resolution timing to accurately measure system state, and to run the surface analysis algorithms quickly and efficiently. In this paper, we will describe how we integrated COTS components such as an FPGA evaluation board, a TILExpress64, and multi-threaded/multi-core aware software to build the HDS Compute Element (HDSCE). The ALHAT program is also working with the NASA Morpheus Project and has integrated the HDS as a sensor on the Morpheus Lander. This paper will also describe how the HDS is integrated with the Morpheus lander and the results of the initial test flights with the HDS installed. We will also describe future improvements to the HDSCE.


AIAA Guidance, Navigation, and Control Conference | 2015

Flight Testing a Real-Time Hazard Detection System for Safe Lunar Landing on the Rocket-Powered Morpheus Vehicle

Nikolas Trawny; Andres Huertas; Michael E. Luna; Carlos Y. Villalpando; Keith E. Martin; John M. Carson; Andrew Edie Johnson; Carolina I. Restrepo; Vincent E. Roback

The Hazard Detection System (HDS) is a component of the ALHAT (Autonomous Landing and Hazard Avoidance Technology) sensor suite, which together provide a lander Guidance, Navigation and Control (GN&C) system with the relevant measurements necessary to enable safe precision landing under any lighting conditions. The HDS consists of a stand-alone compute element (CE), an Inertial Measurement Unit (IMU), and a gimbaled flash LIDAR sensor that are used, in real-time, to generate a Digital Elevation Map (DEM) of the landing terrain, detect candidate safe landing sites for the vehicle through Hazard Detection (HD), and generate hazard-relative navigation (HRN) measurements used for safe precision landing. Following an extensive ground and helicopter test campaign, ALHAT was integrated onto the Morpheus rocket-powered terrestrial test vehicle in March 2014. Morpheus and ALHAT then performed five successful free flights at the simulated lunar hazard field constructed at the Shuttle Landing Facility (SLF) at Kennedy Space Center, for the first time testing the full system on a lunar-like approach geometry in a relevant dynamic environment. During these flights, the HDS successfully generated DEMs, correctly identified safe landing sites and provided HRN measurements to the vehicle, marking the first autonomous landing of a NASA rocket-powered vehicle in hazardous terrain. This paper provides a brief overview of the HDS architecture and describes its in-flight performance.
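The hazard-detection step screens the DEM for terrain a lander cannot tolerate. A minimal sketch using slope and local relief thresholds; the thresholds, cell size, and window below are illustrative assumptions, not ALHAT flight values:

```python
import numpy as np

def hazard_map(dem, cell_m=0.1, slope_deg_max=10.0, relief_m_max=0.3, win=5):
    """Toy safe-site screen on a DEM (meters): flag cells whose local
    slope or peak-to-peak relief exceeds a threshold. Parameters are
    illustrative, not the flight configuration."""
    gy, gx = np.gradient(dem, cell_m)                 # elevation gradients
    slope = np.degrees(np.arctan(np.hypot(gx, gy)))   # local slope angle
    half = win // 2
    relief = np.zeros_like(dem)
    for y in range(half, dem.shape[0] - half):
        for x in range(half, dem.shape[1] - half):
            patch = dem[y-half:y+half+1, x-half:x+half+1]
            relief[y, x] = patch.max() - patch.min()  # local roughness proxy
    return (slope > slope_deg_max) | (relief > relief_m_max)
```

A real system additionally accounts for lander footprint and orientation (e.g. fitting a landing-gear-sized plane at each candidate site), but the slope-plus-roughness screen above captures the core idea.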


AIAA SPACE 2015 Conference and Exposition | 2015

Flight Testing of Terrain-Relative Navigation and Large-Divert Guidance on a VTVL Rocket

Nikolas Trawny; Joel Benito; Brent Tweddle; Charles F. Bergh; Garen Khanoyan; Geoffrey M. Vaughan; Jason X. Zheng; Carlos Y. Villalpando; Yang Cheng; Daniel P. Scharf; Charles D. Fisher; Phoebe M. Sulzen; James F. Montgomery; Andrew Edie Johnson; MiMi Aung; Martin W. Regehr; Daniel Dueri; Behcet Acikmese; David Masten; Travis V. O'Neal; Scott Nietfeld

Since 2011, the Autonomous Descent and Ascent Powered-Flight Testbed (ADAPT) has been used to demonstrate advanced descent and landing technologies onboard the Masten Space Systems (MSS) Xombie vertical-takeoff, vertical-landing suborbital rocket. The current instantiation of ADAPT is a stand-alone payload comprising sensing and avionics for terrain-relative navigation and fuel-optimal onboard planning of large divert trajectories, thus providing complete pin-point landing capabilities needed for planetary landers. To this end, ADAPT combines two technologies developed at JPL, the Lander Vision System (LVS), and the Guidance for Fuel Optimal Large Diverts (G-FOLD) software. This paper describes the integration and testing of LVS and G-FOLD in the ADAPT payload, culminating in two successful free flight demonstrations on the Xombie vehicle conducted in December 2014.


AIAA Guidance, Navigation, and Control Conference | 2015

Simulations of the Hazard Detection System for Approach Trajectories of the Morpheus Lunar Lander

Michael E. Luna; Andres Huertas; Nikolas Trawny; Carlos Y. Villalpando; Keith E. Martin; William Wilson; Carolina I. Restrepo

The Hazard Detection System is part of a suite of sensors and algorithms designed to autonomously land a vehicle on unknown terrain while avoiding any hazards. This paper describes the simulations built to predict the performance of the Hazard Detection System to support flight testing onboard the Morpheus Lander testbed at the Kennedy Space Center. The paper describes a hardware-in-the-loop simulation that was used to predict system performance under nominal operating conditions, and also a Monte Carlo simulation to predict command timing performance bounds under a wide range of varying conditions.
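A Monte Carlo study of command timing bounds can be sketched in a few lines: sample per-stage latencies, sum them per trial, and report an upper percentile. The stage breakdown and distributions below are assumptions for illustration, not Morpheus/HDS measurements:

```python
import random

random.seed(42)  # reproducible trials

def trial():
    # Hypothetical end-to-end latency components, in seconds.
    acquire = random.uniform(0.8, 1.2)   # LIDAR scan of the landing site
    process = random.gauss(6.0, 0.5)     # hazard detection on the DEM
    command = random.uniform(0.1, 0.3)   # safe-site selection and command
    return acquire + process + command

times = sorted(trial() for _ in range(10_000))
p99 = times[int(0.99 * len(times))]
print(f"99th-percentile end-to-end time: {p99:.2f} s")
```

Reporting a high percentile rather than the mean is the point of such a study: the vehicle must receive its landing-site command before a trajectory deadline in nearly every case, not just on average.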


Adaptive Hardware and Systems | 2010

Reconfigurable machine vision systems using FPGAs

Carlos Y. Villalpando; Raphael R. Some

FPGAs provide a flexible architecture for implementing many different types of machine vision algorithms. They allow heavily parallel portions of those algorithms to be accelerated and optimized for high specific performance (MIPS:Watt ratio). In comparison to ASICs, FPGAs enable low cost, quick-turn prototyping and algorithm development as well as lower production costs for small quantity and one-off applications. FPGAs also have the ability to be reprogrammed in flight, allowing them to be configured for different applications as mission needs evolve. JPL has developed a suite of machine vision IP cores to accelerate many common machine vision tasks used in robotic mobility applications. Modules such as stereo correlation for ranging, filtering, optical flow, area based correlation, feature detection, and image homography and rectification allow the real-time processing of image data using much smaller systems with much less power draw than an appropriately sized general purpose processor. These modules, along with a vision processing framework, are being re-cast in a generic plug and play form to allow rapid, low cost configuration, reconfiguration, evolution and adaptation of next generation machine vision systems for mobile robotics.


IEEE Aerospace Conference | 2011

Implementation of pin point landing vision components in an FPGA system

Arin Morfopolous; Brandon Metz; Carlos Y. Villalpando; Larry H. Matthies; Navid Serrano

Pin-point landing is required to enable missions to land close, typically within 10 meters, to scientifically important targets in generally hazardous terrain. In pin-point landing, both high accuracy and high speed estimation of position and orientation is needed to provide input to the control system to safely choose and navigate to a safe landing site. A proposed algorithm called VISion aided Inertial NAVigation (VISINAV) has shown that the accuracy requirements can be met. [2][3] VISINAV was shown in software only, and was expected to use FPGA enhancements in the future to improve the computational speed needed for pin-point landing during Entry Descent and Landing (EDL). Homography, feature detection and spatial correlation are computationally intensive parts of VISINAV. Homography aligns the map image with the descent image so that small correlation windows can be used, and feature detection provides regions that spatial correlation can track from frame to frame in order to estimate vehicle motion. On MER the image homography, feature detection and correlation would take approximately 650ms tracking 75 features between frames. We implemented homography, feature detection and correlation on a Virtex 4 LX160 FPGA to run in under 25ms while tracking 500 features to improve algorithm reliability and throughput.
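The homography alignment step maps map-image pixels into the descent image through a 3x3 projective transform. A minimal sketch of applying such a transform; the matrix below is an arbitrary translation-only example, not a real map-to-descent alignment:

```python
import numpy as np

def apply_homography(H, pts):
    """Map Nx2 pixel coordinates through a 3x3 homography H:
    lift to homogeneous coordinates, multiply, then divide by the
    projective scale. Illustrative of the map/descent-image
    alignment step; H here is arbitrary."""
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])  # to homogeneous
    out = pts_h @ H.T
    return out[:, :2] / out[:, 2:3]                   # perspective divide

# Example: a translation-only homography shifting by (+5, -3) pixels.
H = np.array([[1.0, 0.0,  5.0],
              [0.0, 1.0, -3.0],
              [0.0, 0.0,  1.0]])
pts = np.array([[10.0, 20.0], [0.0, 0.0]])
mapped = apply_homography(H, pts)  # (10,20)->(15,17), (0,0)->(5,-3)
```

Because the aligned images differ only by small residual offsets after this warp, the subsequent spatial correlation can use small search windows, which is what makes the FPGA implementation fast.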


Space 2004 Conference and Exhibit | 2004

Advanced mobility avionics: a reconfigurable micro-avionics platform for the future needs of small planetary rovers and microspacecraft

Gary S. Bolotin; Kevin R. Watson; Rich Petras; Stephane Taft; Mandy Wang; Carlos Y. Villalpando; Michael McHenry; Steven Goldberg

Future small and micro-missions, such as Mars Scouts and Deep Space probes, require a new look at highly integrated, re-configurable, low power avionics. This paper will present our plans for developing a scalable, configurable, and highly integrated 32-bit embedded platform capable of implementing computationally intensive signal processing and control algorithms in space flight instruments and systems. This platform is designed to service the needs of both small and large spacecraft and planetary rovers that will operate within moderate radiation environments. Some of the key characteristics of this platform are its small size, low power, high performance, and flexibility. This estimated 10-fold reduction in both size and power over state-of-the-art processing platforms will enable this new product to act as the core of a low-cost mobility system for a wide range of missions.

Collaboration


Dive into Carlos Y. Villalpando's collaborations.

Top Co-Authors

Andrew Edie Johnson, California Institute of Technology
Andres Huertas, California Institute of Technology
John M. Carson, California Institute of Technology
Larry H. Matthies, California Institute of Technology
Raphael R. Some, California Institute of Technology
Garen Khanoyan, California Institute of Technology
Arin Morfopolous, California Institute of Technology
Arin Morfopoulos, California Institute of Technology
Michael McHenry, California Institute of Technology