
Publication


Featured research published by Robert Mandelbaum.


The International Journal of Robotics Research | 2006

Toward Reliable Off Road Autonomous Vehicles Operating in Challenging Environments

Alonzo Kelly; Anthony Stentz; Omead Amidi; Mike Bode; David M. Bradley; Antonio Diaz-Calderon; Michael Happold; Herman Herman; Robert Mandelbaum; Thomas Pilarski; Peter Rander; Scott M. Thayer; Nick Vallidis; Randy Warner

The DARPA PerceptOR program has implemented a rigorous evaluative test program which fosters the development of field-relevant outdoor mobile robots. Autonomous ground vehicles were deployed on diverse test courses throughout the USA and quantitatively evaluated on such factors as autonomy level, waypoint acquisition, failure rate, speed, and communications bandwidth. Our efforts over the three-year program have produced new approaches in planning, perception, localization, and control which have been driven by the quest for reliable operation in challenging environments. This paper focuses on some of the most unique aspects of the systems developed by the CMU PerceptOR team, the lessons learned during the effort, and the most immediate challenges that remain to be addressed.


Expert Systems With Applications | 1996

Cooperative material handling by human and robotic agents: Module development and system synthesis

Julie A. Adams; Ruzena Bajcsy; Jana Kosecka; Vijay Kumar; Max Mintz; Robert Mandelbaum; Chau-Chang Wang; Yoshio Yamamoto; Xiaoping Yun

In this paper we present a collaborative effort to design and implement a cooperative material handling system by a small team of human and robotic agents in an unstructured indoor environment. Our approach makes fundamental use of the human agents' expertise for aspects of task planning, task monitoring and error recovery. Our system is neither fully autonomous nor fully teleoperated. It is designed to make effective use of the human's abilities within the present state of the art of autonomous systems. Our robotic agents refer to systems which are each equipped with at least one sensing modality and which possess some capability for self-orientation and/or mobility. Our robotic agents are not required to be homogeneous with respect to either capabilities or function. Our research stresses both paradigms and testbed experimentation. Theory issues include the requisite coordination principles and techniques which are fundamental to a cooperative multi-agent system's basic functioning. We have constructed an experimental distributed multi-agent architecture testbed facility. The required modular components of this testbed are currently operational and have been tested individually. Our current research focuses on the agents' integration in a scenario for cooperative material handling.


International Conference on Computer Vision | 1999

Correlation-based estimation of ego-motion and structure from motion and stereo

Robert Mandelbaum; Garbis Salgian; Harpreet S. Sawhney

This paper describes a correlation-based, iterative, multi-resolution algorithm which estimates both scene structure and the motion of the camera rig through an environment from the stream(s) of incoming images. Both single-camera rigs and multiple-camera rigs can be accommodated. The use of multiple synchronized cameras results in more rapid convergence of the iterative approach. The algorithm uses a global ego-motion constraint to refine estimates of inter-frame camera rotation and translation. It uses local window-based correlation to refine the current estimate of scene structure. All analysis is performed at multiple resolutions. In order to combine, in a straightforward way, the correlation surfaces from multiple viewpoints and from multiple pixels in a support region, each pixel's correlation surface is modeled as a quadratic. This parameterization allows direct, explicit computation of incremental refinements for ego-motion and structure using linear algebra. Batches can be of arbitrary size, allowing a trade-off between accuracy and latency. Batches can also be daisy-chained for extended sequences. Results of the algorithm are shown on synthetic and real outdoor image sequences.
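The quadratic modeling of a per-pixel correlation surface can be illustrated with a minimal sketch: fit the six coefficients of a 2D quadratic to a 3×3 correlation patch, then solve a small linear system for the sub-pixel minimum. This is a generic illustration of the technique, not the paper's implementation; all names and numbers are illustrative.

```python
import numpy as np

def fit_quadratic_minimum(corr):
    """Fit c(x, y) ~ a + bx*x + by*y + 0.5*(hxx*x^2 + 2*hxy*x*y + hyy*y^2)
    to a 3x3 correlation patch and return its sub-pixel minimum (dx, dy)."""
    ys, xs = np.mgrid[-1:2, -1:2]
    x, y = xs.ravel(), ys.ravel()
    # Design matrix: one row per patch sample, one column per coefficient.
    A = np.stack([np.ones_like(x), x, y, 0.5 * x**2, x * y, 0.5 * y**2], axis=1)
    a, bx, by, hxx, hxy, hyy = np.linalg.lstsq(A, corr.ravel(), rcond=None)[0]
    H = np.array([[hxx, hxy], [hxy, hyy]])   # Hessian of the quadratic
    b = np.array([bx, by])                   # gradient at the patch center
    return -np.linalg.solve(H, b)            # set gradient of quadratic to zero

# Synthetic correlation surface with its true minimum at (0.3, -0.2):
ys, xs = np.mgrid[-1:2, -1:2]
surface = (xs - 0.3) ** 2 + (ys + 0.2) ** 2
dx, dy = fit_quadratic_minimum(surface)
```

Because the quadratic is linear in its coefficients, many such per-pixel fits can be accumulated and refined with ordinary linear algebra, which is what makes the parameterization attractive for combining surfaces across viewpoints and support regions.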


Computer Vision and Pattern Recognition | 1998

Image alignment for precise camera fixation and aim

Lambert E. Wixson; Jayakrishnan Eledath; Michael W. Hansen; Robert Mandelbaum; Deepam Mishra

Two important problems in camera control are how to keep a moving camera fixated on a target point, and how to precisely aim a camera, whose approximate pose is known, towards a given 3D position. This paper describes how electronic image alignment techniques can be used to solve these problems, as well as provide other benefits such as stabilized video. Hence, stabilized, fixated imagery is obtained despite large latencies in the control loop, even for simple control strategies. These techniques have been tested using an airborne camera and real-time affine image alignment.


IEEE Intelligent Systems & Their Applications | 1998

Techniques for autonomous, off-road navigation

Stefan Baten; Michael Lützeler; Ernst D. Dickmanns; Robert Mandelbaum; Peter J. Burt

To extend autonomous navigation to unpaved and off-road scenarios, vehicles in the AutoNav program bolster a 4D perception and control architecture with area-based vision techniques. This article describes how an autonomous vehicle uses directed real-time, area-based, stereo processing to determine the vertical profile of its path.


International Conference on Computer Vision | 1998

Stereo depth estimation: a confidence interval approach

Robert Mandelbaum; Gerda Kamberova; Max Mintz

We describe an estimation technique which, given a measurement of the depth of a target from a wide-field-of-view (WFOV) stereo camera pair, produces a minimax risk fixed-size confidence interval estimate for the target depth. This work constitutes the first application to the computer vision domain of optimal fixed-size confidence-interval decision theory. The approach is evaluated in terms of theoretical capture probability and empirical capture frequency during actual experiments with a target on an optical bench. The method is compared to several other procedures including the Kalman Filter. The minimax approach is found to dominate all the other methods in performance. In particular for the minimax approach, a very close agreement is achieved between theoretical capture probability and empirical capture frequency. This allows performance to be accurately predicted, greatly facilitating the system design, and delineating the tasks that may be performed with a given system.
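The close agreement the abstract reports between theoretical capture probability and empirical capture frequency can be illustrated with a Monte Carlo sketch. This is not the paper's minimax procedure; it is a generic fixed-size interval under an assumed Gaussian noise model, with illustrative numbers.

```python
import numpy as np

def capture_frequency(true_depth, half_width, sigma, trials=100_000, seed=0):
    """Fraction of trials in which a fixed-size interval, centered on a noisy
    depth measurement, captures the true depth (Gaussian noise assumed)."""
    rng = np.random.default_rng(seed)
    measured = true_depth + rng.normal(0.0, sigma, size=trials)
    captured = np.abs(measured - true_depth) <= half_width
    return captured.mean()

# Interval half-width of 2*sigma: theoretical capture probability ~95.45%.
freq = capture_frequency(true_depth=3.0, half_width=0.04, sigma=0.02)
```

The design point the abstract emphasizes is the reverse direction: fix the interval size in advance and choose the estimator to guarantee a capture probability, so that system performance can be predicted before deployment.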


Information Visualization | 2002

Stereo perception on an off-road vehicle

A. Rieder; B. Southall; Garbis Salgian; Robert Mandelbaum; Herman Herman; Peter Rander; T. Stentz

This paper presents a vehicle for autonomous off-road navigation built in the framework of DARPA's PerceptOR program. Special emphasis is given to the perception system. A set of three stereo camera pairs provides color and 3D data in a wide field of view (greater than 100 degrees) at high resolution (2160 by 480 pixels) and high frame rates (5 Hz). This is made possible by integrating powerful image-processing hardware called Acadia. These high data rates require efficient sensor fusion, terrain reconstruction and path planning algorithms. The paper quantifies sensor performance and shows examples of successful obstacle avoidance.


IEEE Intelligent Vehicles Symposium | 2004

Stereo-based vision system for automotive imminent collision detection

Peng Chang; Theodore Camus; Robert Mandelbaum

Imminent collision detection is an important functionality in the area of automotive safety. In the event that an unavoidable collision can be detected in advance of the actual impact, various measures can be taken to mitigate injury and damage. In this paper, we demonstrate that stereo vision is a promising solution to this problem. Our prototype system has been rigorously tested for different colliding scenarios (e.g., different intersection angles and different travelling speeds), including live tests in an industrial crash-test facility. We explain the novel algorithms behind the system, including an algorithm for detecting objects in depth images, and algorithms for estimating the travelling velocity of detected vehicles. Quantitative results and representative examples are also included.
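The abstract does not spell out the velocity-estimation step, but the idea of predicting an impact from a tracked object's stereo depth measurements can be sketched with a simple least-squares line fit over time. All function names and numbers below are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

def closing_velocity(times, depths):
    """Least-squares line fit to depth-vs-time measurements of a tracked
    object; returns (closing speed, predicted time until depth reaches zero)."""
    slope, intercept = np.polyfit(times, depths, 1)
    speed = -slope                                   # positive when approaching
    time_to_impact = -intercept / slope if slope < 0 else float("inf")
    return speed, time_to_impact

# Object starting 20 m away and closing at 10 m/s, sampled at 10 Hz:
t = np.arange(0.0, 0.5, 0.1)
d = 20.0 - 10.0 * t
speed, tti = closing_velocity(t, d)
```

Fitting over a short window rather than differencing consecutive frames damps per-frame depth noise, which matters when the prediction triggers irreversible mitigation measures.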


Computational Intelligence in Robotics and Automation | 1998

Vision for autonomous mobility: image processing on the VFE-200

Robert Mandelbaum; Michael W. Hansen; P. Burt; Stefan Baten

The task of autonomous navigation comprises several subtasks, each of which has very specific requirements for the perception system. We analyze several of these subtasks from the perspective of computer vision, including: (i) obstacle detection, (ii) terrain reconstruction, (iii) convoying, (iv) collision detection, and (v) road recognition. We delineate the issues associated with each subtask and outline algorithms to address each issue. Finally, we address the issue of implementation. We present the VFE-200 as a very powerful image processing platform specifically designed for the implementation of autonomous mobility vision algorithms. We describe the architecture of the system, and how several key features make it ideally suited for hosting several government-sponsored autonomous mobility platforms such as MDARS-E, AUTONAV and Demo III.


International Conference on Computer Vision Systems | 2001

Combining EMS-Vision and Horopter Stereo for Obstacle Avoidance of Autonomous Vehicles

Karl-Heinz Siedersberger; Martin Pellkofer; Michael Lützeler; Ernst D. Dickmanns; André Rieder; Robert Mandelbaum; Luca Bogoni

A novel perception system for autonomous navigation on low-level roads and open terrain is presented. Built within the framework of the US-German AutoNav project, it combines UBM's object-oriented techniques, known as the 4D approach to machine perception (EMS-Vision), with Sarnoff's hierarchical stereo processing. The Vision Front End 200, a specially designed hardware device for real-time image processing, computes and evaluates 320×240 pixel disparity maps at 25 frames per second. A key element for this step is the calculation of the horopter, a virtual plane that is automatically locked to the ground plane. For improved reliability, the VFE 200 results are integrated over time in a grid-based terrain representation. Obstacle information can then be extracted. The system's situation assessment generates a situation representation that consists of so-called situation aspects assigning symbolic attributes to scene objects. The behavior decision module combines this information with knowledge about its own body and behavioral capabilities to autonomously control the vehicle. The system has been integrated into the experimental vehicle VaMoRs for autonomous mobility through machine perception. In a series of experiments, both positive and negative obstacles could be avoided at speeds of up to 16 km/h (10 mph).

Collaboration


Dive into Robert Mandelbaum's collaborations.

Top Co-Authors

Max Mintz, University of Pennsylvania
Ruzena Bajcsy, University of California
Herman Herman, Carnegie Mellon University