Roberto Manduchi
University of California, Santa Cruz
Publications
Featured research published by Roberto Manduchi.
international conference on computer vision | 1998
Carlo Tomasi; Roberto Manduchi
Bilateral filtering smooths images while preserving edges, by means of a nonlinear combination of nearby image values. The method is noniterative, local, and simple. It combines gray levels or colors based on both their geometric closeness and their photometric similarity, and prefers near values to distant values in both domain and range. In contrast with filters that operate on the three bands of a color image separately, a bilateral filter can enforce the perceptual metric underlying the CIE-Lab color space, and smooth colors and preserve edges in a way that is tuned to human perception. Also, in contrast with standard filtering, bilateral filtering produces no phantom colors along edges in color images, and reduces phantom colors where they appear in the original image.
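The combination of a domain (geometric) and a range (photometric) Gaussian described above can be sketched directly. The following is a minimal grayscale implementation; the window radius and the two sigmas are illustrative parameter choices, not values from the paper.

```python
import numpy as np

def bilateral_filter(img, sigma_d=2.0, sigma_r=0.1, radius=4):
    """Bilateral filter for a grayscale image with values in [0, 1].

    Each output pixel is a normalized weighted sum of its neighbors,
    where the weight combines geometric closeness (sigma_d) and
    photometric similarity (sigma_r)."""
    h, w = img.shape
    pad = np.pad(img, radius, mode='edge')
    out = np.zeros_like(img)
    # Precompute the spatial (domain) Gaussian over the window.
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs**2 + ys**2) / (2 * sigma_d**2))
    for i in range(h):
        for j in range(w):
            window = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            # Range (photometric) Gaussian: prefer similar gray levels,
            # so neighbors across a strong edge contribute almost nothing.
            rng = np.exp(-(window - img[i, j])**2 / (2 * sigma_r**2))
            weights = spatial * rng
            out[i, j] = np.sum(weights * window) / np.sum(weights)
    return out
```

With a small `sigma_r`, a step edge passes through the filter essentially untouched, while noise within each flat region is averaged away, which is the edge-preserving behavior the abstract describes.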
Autonomous Robots | 2005
Roberto Manduchi; Andres Castano; Ashit Talukder; Larry H. Matthies
Autonomous navigation in cross-country environments presents many new challenges with respect to more traditional, urban environments. The lack of highly structured components in the scene complicates the design of even basic functionalities such as obstacle detection. In addition to the geometric description of the scene, terrain typing is also an important component of the perceptual system. Recognizing the different classes of terrain and obstacles enables the path planner to choose the most efficient route toward the desired goal. This paper presents new sensor processing algorithms that are suitable for cross-country autonomous navigation. We consider two sensor systems that complement each other in an ideal sensor suite: a color stereo camera and a single-axis ladar. We propose an obstacle detection technique, based on stereo range measurements, that does not rely on typical structural assumptions about the scene (such as the presence of a visible ground plane); a color-based classification system to label the detected obstacles according to a set of terrain classes; and an algorithm for the analysis of ladar data that discriminates between grass and obstacles (such as tree trunks or rocks), even when such obstacles are partially hidden in the grass. These algorithms have been developed and implemented by the Jet Propulsion Laboratory (JPL) as part of its involvement in a number of projects sponsored by the US Department of Defense, and have enabled safe autonomous navigation in highly vegetated, off-road terrain.
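The ground-plane-free obstacle detection mentioned above can be illustrated by a pairwise compatibility test on 3-D points: two points are considered part of the same obstacle if their height difference is neither negligible nor implausibly large, and the segment joining them is steep with respect to the ground. This sketch captures the idea only; the threshold values and exact criterion here are assumptions, not the paper's.

```python
import numpy as np

def compatible(p, q, h_min=0.1, h_max=0.5, theta_deg=45.0):
    """Illustrative test of whether two 3-D points (x, y, z with z up)
    belong to the same obstacle: their height difference must fall in
    [h_min, h_max] and the segment joining them must be steeper than
    theta_deg relative to the ground plane. Thresholds are hypothetical."""
    dz = abs(p[2] - q[2])
    horiz = np.hypot(p[0] - q[0], p[1] - q[1])
    steep = np.degrees(np.arctan2(dz, horiz)) > theta_deg
    return bool(h_min < dz < h_max and steep)
```

Because the test only compares point pairs, it needs no estimate of a global ground plane, which is what makes it suitable for unstructured, cross-country scenes.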
ieee intelligent vehicles symposium | 2000
P Bellutta; Roberto Manduchi; Larry H. Matthies; Kieran Owens; A. Rankin
The primary focus of the Demo III program is the development of autonomous mobility for a small, rugged cross-country vehicle. Vision-based terrain perception technology for classifying scene geometry and material is currently under development at JPL. In this paper we report recent progress on both stereo-based obstacle detection and color-based classification of terrain cover. Our experiments show that integrating geometric description with terrain cover characterization may be the key to successful autonomous navigation in cross-country vegetated terrain.
computer vision and pattern recognition | 2004
Amin P. Charaniya; Roberto Manduchi; Suresh K. Lodha
In this work, we classify 3D aerial LiDAR height data into roads, grass, buildings, and trees using a supervised parametric classification algorithm. Since the terrain is highly undulating, we subtract the terrain elevations using digital elevation models (DEMs, readily available from the United States Geological Survey (USGS)) to obtain the heights of objects above a flat reference level. In addition to this height information, we use height texture (variation in height), intensity (amplitude of the LiDAR response), and multiple (two) LiDAR returns to classify the data. Furthermore, we use luminance (measured in the visible spectrum) from aerial imagery as a fifth classification feature. We model the training data with mixtures of Gaussians, estimating the model parameters and posterior probabilities with the Expectation-Maximization (EM) algorithm. We experimented with different numbers of components per model and found that four components per model yield satisfactory results. We tested the results using leave-one-out as well as random n/2 tests. Classification accuracy is in the range of 66%-84%, depending on the combination of features used, which compares favorably with train-all-test-all results of 85%. Further improvement is achieved using spatial coherence.
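The mixture-of-Gaussians estimation described above can be sketched in one dimension. This is a simplified, illustrative EM loop, not the paper's multivariate implementation; the quantile-based initialization is an assumption.

```python
import numpy as np

def em_gmm_1d(x, k=2, iters=50):
    """Fit a 1-D Gaussian mixture with EM (illustrative sketch)."""
    # Initialize means at spread-out quantiles of the data (an assumption).
    mu = np.quantile(x, np.linspace(0.1, 0.9, k))
    var = np.full(k, np.var(x))
    pi = np.full(k, 1.0 / k)
    for _ in range(iters):
        # E-step: posterior responsibility of each component for each point.
        dens = pi * np.exp(-(x[:, None] - mu) ** 2 / (2 * var)) \
               / np.sqrt(2 * np.pi * var)
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: re-estimate mixing weights, means, and variances.
        nk = resp.sum(axis=0)
        pi = nk / len(x)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk
    return pi, mu, var
```

For classification, one such mixture would be fit per class (roads, grass, buildings, trees), and a sample assigned to the class with the highest posterior probability.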
international conference on robotics and automation | 2005
Xiaoye Lu; Roberto Manduchi
We present algorithms to detect and precisely localize curbs and stairways for autonomous navigation. These algorithms combine brightness information (in the form of edgels) with 3-D data from a commercial stereo system. The overall system (including stereo computation) runs at about 4 Hz on a 1 GHz laptop. We show experimental results and discuss advantages and shortcomings of our approach.
Communications of The ACM | 2012
Roberto Manduchi; James M. Coughlan
Computer vision holds the key for the blind or visually impaired to explore the visual world.
wireless personal multimedia communications | 2002
Katia Obraczka; Roberto Manduchi; J.J. Garcia-Luna-Aceves
Sensor networks, or sensor webs, which consist of a large number of interconnected sensing devices, have recently been the subject of extensive research. Typical applications of sensor networks include monitoring of possibly very large, remote and/or inaccessible areas, surveillance, and smart environments, like meeting rooms, buildings, homes, and highways. Our focus is on visual sensor networks, which are networks of cameras equipped with enough processing power to support local image analysis. The paper describes ongoing research at UCSC in visual sensor networks and highlights the research challenges to be addressed. It motivates the need for tight coupling between vision techniques and communication protocols for more effective monitoring/tracking capabilities (by having sensors operate in a coordinated manner), as well as energy- and bandwidth-efficient protocols which prolong the operational life of the sensor network.
IEEE Transactions on Pattern Analysis and Machine Intelligence | 1998
Carlo Tomasi; Roberto Manduchi
We propose a representation of images, called intrinsic curves, that transforms stereo matching from a search problem into a nearest-neighbor problem. Intrinsic curves are the paths that a set of local image descriptors trace as an image scanline is traversed from left to right. Intrinsic curves are ideally invariant with respect to disparity. Stereo correspondence then becomes a trivial lookup problem in the ideal case. We also show how to use intrinsic curves to match real images in the presence of noise, brightness bias, contrast fluctuations, moderate geometric distortion, image ambiguity, and occlusions. In this case, matching becomes a nearest-neighbor problem, even for very large disparity values.
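The lookup idea above can be sketched with a toy descriptor. Here each scanline pixel is mapped to (intensity, spatial derivative), tracing a curve in descriptor space; a right-image pixel is then matched to its nearest neighbor on the left curve, and the index difference gives the disparity. The descriptor pair is an illustrative choice, not the paper's exact set.

```python
import numpy as np

def intrinsic_curve(scanline):
    """Map each pixel to a descriptor vector: (intensity, derivative).
    Traversing the scanline traces a curve in this descriptor space."""
    return np.stack([scanline, np.gradient(scanline)], axis=1)

def match_scanlines(left, right):
    """Match each right pixel to its nearest neighbor on the left
    intrinsic curve (brute force); the index difference is the
    disparity estimate -- a lookup, not a search over disparities."""
    cl, cr = intrinsic_curve(left), intrinsic_curve(right)
    dists = np.linalg.norm(cr[:, None, :] - cl[None, :, :], axis=2)
    return np.argmin(dists, axis=1) - np.arange(len(right))
```

In the ideal noise-free case a shifted scanline yields a constant disparity, illustrating why the representation makes correspondence a nearest-neighbor problem even for large disparities.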
international conference on computer vision | 1999
Roberto Manduchi; Javier Portilla
A common method for texture representation is to use the marginal probability densities over the outputs of a set of multi-orientation, multi-scale filters as a description of the texture. We propose a technique, based on independent component analysis, for choosing the set of filters that yield the most informative marginals, meaning that the product of the marginals most closely approximates the joint probability density function of the filter outputs. The algorithm is implemented using a steerable filter space. Experiments involving both texture classification and synthesis show that, compared to principal component analysis, ICA provides superior performance for modeling natural and synthetic textures.
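The independence criterion above (product of marginals approximating the joint) is what ICA optimizes. As a toy sketch, the one-unit FastICA rule with the kurtosis contrast, a standard ICA algorithm and not necessarily the one used in the paper, recovers an independent direction from a linear mixture of independent sources:

```python
import numpy as np

def whiten(x):
    """Zero-mean, identity-covariance transform of (n, d) data."""
    x = x - x.mean(axis=0)
    vals, vecs = np.linalg.eigh(np.cov(x, rowvar=False))
    return x @ vecs / np.sqrt(vals)

def fastica_one_unit(z, iters=100, seed=0):
    """One-unit FastICA on whitened data z with the kurtosis contrast:
    w <- E[z (w.z)^3] - 3w, renormalized after each step."""
    rng = np.random.default_rng(seed)
    w = rng.normal(size=z.shape[1])
    w /= np.linalg.norm(w)
    for _ in range(iters):
        w = (z * (z @ w)[:, None] ** 3).mean(axis=0) - 3 * w
        w /= np.linalg.norm(w)
    return w
```

The projection `z @ w` should align (up to sign and scale) with one of the original independent sources, i.e. the direction whose marginal is most informative in the sense of the abstract.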
international symposium on experimental robotics | 2000
Jose Macedo; Roberto Manduchi; Larry H. Matthies
Autonomous navigation in vegetated terrain requires the ability to discriminate obstacles from grass, a non-trivial problem when the sensorial world of the robot is based only on range information as provided, for example, by a laser rangefinder (ladar). We present a statistical analysis of the range data produced by a single-axis ladar in different situations, including the case of an obstacle partially occluded by grass. Such analysis inspired a simple classification algorithm, which has been tested on real range data acquired by JPL’s urban robot.
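The intuition behind such a range-statistics classifier can be sketched very simply: grass scatters the ladar beam across many depths, producing a large local spread of range values, while a solid surface returns tightly clustered ranges. The statistic and threshold below are illustrative assumptions, not the paper's statistical model.

```python
import numpy as np

def classify_window(ranges, sigma_max=0.05):
    """Toy grass-vs-obstacle test on a window of ladar range returns:
    low local range spread suggests a solid surface, high spread
    suggests penetrable vegetation. Threshold is hypothetical."""
    return 'obstacle' if np.std(ranges) < sigma_max else 'grass'
```

A real system would of course use a richer statistical model of the returns, as the abstract indicates, but the spread of range values is the key discriminating signal.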