Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Peter Rander is active.

Publication


Featured research published by Peter Rander.


IEEE MultiMedia | 1997

Virtualized reality: constructing virtual worlds from real scenes

Takeo Kanade; Peter Rander; P. J. Narayanan

A new visual medium, Virtualized Reality, immerses viewers in a virtual reconstruction of real-world events. The Virtualized Reality world model consists of real images and depth information computed from these images. Stereoscopic reconstructions provide a sense of complete immersion, and users can select their own viewpoints at view time, independent of the actual camera positions used to capture the event.
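
A minimal sketch of the core operation this medium relies on: re-rendering an intensity image and its computed depth map from a user-selected viewpoint. The pinhole model, the intrinsic matrix K, and the virtual pose (R, t) below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def render_novel_view(intensity, depth, K, R, t, out_shape):
    """Forward-project a depth+intensity image into a virtual camera at pose (R, t)."""
    h, w = depth.shape
    v, u = np.mgrid[0:h, 0:w]                                  # pixel grid of the source view
    pix = np.stack([u.ravel(), v.ravel(), np.ones(h * w)])
    pts = (np.linalg.inv(K) @ pix) * depth.ravel()             # back-project to 3D in the source frame
    pts_new = R @ pts + t[:, None]                             # move into the virtual camera frame
    proj = K @ pts_new
    z = np.where(proj[2] > 0, proj[2], np.inf)
    un = np.round(proj[0] / z).astype(int)                     # re-project and round to pixels
    vn = np.round(proj[1] / z).astype(int)
    out = np.zeros(out_shape)
    zbuf = np.full(out_shape, np.inf)
    ok = (proj[2] > 0) & (un >= 0) & (un < out_shape[1]) & (vn >= 0) & (vn < out_shape[0])
    for i in np.flatnonzero(ok):                               # z-buffered splat of each source pixel
        if proj[2, i] < zbuf[vn[i], un[i]]:
            zbuf[vn[i], un[i]] = proj[2, i]
            out[vn[i], un[i]] = intensity.ravel()[i]
    return out
```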


international conference on computer vision | 1998

Constructing virtual worlds using dense stereo

P. J. Narayanan; Peter Rander; Takeo Kanade

We present Virtualized Reality, a technique to create virtual worlds out of dynamic events using densely distributed stereo views. The intensity image and depth map for each camera view at each time instant are combined to form a Visible Surface Model. Immersive interaction with the virtualized event is possible using a dense collection of such models. Additionally, a Complete Surface Model of each instant can be built by merging the depth maps from different cameras into a common volumetric space. The corresponding model is compatible with traditional virtual models and can be interacted with immersively using standard tools. Because both VSMs and CSMs are fully three-dimensional, virtualized models can also be combined and modified to build larger, more complex environments, an important capability for many non-trivial applications. We present results from 3D Dome, our facility to create virtualized models.
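
The Complete Surface Model described above merges per-camera depth maps in a common volumetric space. The sketch below follows the general volumetric-fusion idea under assumed conventions (world-to-camera poses, a truncated signed-distance grid); the paper's own merging procedure may differ.

```python
import numpy as np

def fuse_depth_maps(depth_maps, poses, K, grid_shape, voxel_size, origin, trunc=0.05):
    """Average truncated signed distances from each depth map into one voxel grid."""
    zi, yi, xi = np.indices(grid_shape)
    centers = np.stack([xi, yi, zi], axis=-1).reshape(-1, 3) * voxel_size + origin
    sdf = np.zeros(centers.shape[0])
    weight = np.zeros(centers.shape[0])
    for depth, (R, t) in zip(depth_maps, poses):        # (R, t) maps world -> camera
        cam = R @ centers.T + t[:, None]                # voxel centers in this camera's frame
        z = np.maximum(cam[2], 1e-9)
        u = np.round(cam[0] / z * K[0, 0] + K[0, 2]).astype(int)
        v = np.round(cam[1] / z * K[1, 1] + K[1, 2]).astype(int)
        h, w = depth.shape
        ok = (cam[2] > 0) & (u >= 0) & (u < w) & (v >= 0) & (v < h)
        d = np.where(ok, 0.0, np.nan)
        d[ok] = depth[v[ok], u[ok]] - cam[2, ok]        # signed distance along the viewing ray
        upd = ok & (d > -trunc)                         # skip voxels far behind the observed surface
        tsdf = np.clip(d[upd], -trunc, trunc) / trunc
        sdf[upd] = (sdf[upd] * weight[upd] + tsdf) / (weight[upd] + 1)
        weight[upd] += 1
    return sdf.reshape(grid_shape), weight.reshape(grid_shape)
```

Zero-crossings of the fused signed-distance grid give the merged surface that a standard mesh extractor can turn into a conventional virtual model.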


international conference on computer vision | 1999

Three-dimensional scene flow

Sundar Vedula; Simon Baker; Peter Rander; Robert T. Collins; Takeo Kanade

Scene flow is the three-dimensional motion field of points in the world, just as optical flow is the two-dimensional motion field of points in an image. Any optical flow is simply the projection of the scene flow onto the image plane of a camera. We present a framework for the computation of dense, non-rigid scene flow from optical flow. Our approach leads to straightforward linear algorithms and a classification of the task into three major scenarios: complete instantaneous knowledge of the scene structure; knowledge only of correspondence information; and no knowledge of the scene structure. We also show that multiple estimates of the normal flow cannot be used to estimate dense scene flow directly without some form of smoothing or regularization.
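
For the first scenario, complete instantaneous knowledge of the scene structure, optical flow can be lifted directly to scene flow. Below is a hedged sketch under a pinhole model with nearest-neighbor sampling of the second depth map; variable names are mine, not the paper's.

```python
import numpy as np

def scene_flow_from_optical_flow(flow, depth_t, depth_t1, K):
    """Return per-pixel 3D scene flow given 2D optical flow and depth at both frames."""
    h, w = depth_t.shape
    Kinv = np.linalg.inv(K)
    v, u = np.mgrid[0:h, 0:w].astype(float)
    u1 = u + flow[..., 0]                      # pixel positions advected by the optical flow
    v1 = v + flow[..., 1]

    def backproject(uu, vv, depth):
        pix = np.stack([uu.ravel(), vv.ravel(), np.ones(uu.size)])
        return (Kinv @ pix) * depth.ravel()    # 3D point = depth * normalized ray

    X_t = backproject(u, v, depth_t)
    # sample the second depth map at the advected (nearest-neighbor) pixel positions
    ui = np.clip(np.round(u1).astype(int), 0, w - 1)
    vi = np.clip(np.round(v1).astype(int), 0, h - 1)
    X_t1 = backproject(u1, v1, depth_t1[vi, ui])
    return (X_t1 - X_t).T.reshape(h, w, 3)     # scene flow: 3D displacement per pixel
```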


The International Journal of Robotics Research | 2006

Toward Reliable Off Road Autonomous Vehicles Operating in Challenging Environments

Alonzo Kelly; Anthony Stentz; Omead Amidi; Mike Bode; David M. Bradley; Antonio Diaz-Calderon; Michael Happold; Herman Herman; Robert Mandelbaum; Thomas Pilarski; Peter Rander; Scott M. Thayer; Nick Vallidis; Randy Warner

The DARPA PerceptOR program has implemented a rigorous evaluative test program that fosters the development of field-relevant outdoor mobile robots. Autonomous ground vehicles were deployed on diverse test courses throughout the USA and quantitatively evaluated on such factors as autonomy level, waypoint acquisition, failure rate, speed, and communications bandwidth. Our efforts over the three-year program have produced new approaches in planning, perception, localization, and control, driven by the quest for reliable operation in challenging environments. This paper focuses on some of the most distinctive aspects of the systems developed by the CMU PerceptOR team, the lessons learned during the effort, and the most immediate challenges that remain to be addressed.


The International Journal of Robotics Research | 2011

Real-time photorealistic virtualized reality interface for remote mobile robot control

Alonzo Kelly; Nicholas Chan; Herman Herman; Daniel Huber; Robert Meyers; Peter Rander; Randy Warner; Jason Ziglar; Erin Capstick

The task of teleoperating a robot over a wireless video link is known to be very difficult. Teleoperation becomes even more difficult when the robot is surrounded by dense obstacles, speed requirements are high, video quality is poor, or wireless links are subject to latency. Thanks to high-quality lidar data and improvements in computing and video compression, virtualized reality now has the capacity to dramatically improve teleoperation performance, even in high-speed situations that were formerly impossible. In this paper, we demonstrate the conversion of dense geometry and appearance data, generated on-the-move by a mobile robot, into a photorealistic rendering model that gives the user a synthetic exterior line-of-sight view of the robot, including the context of its surrounding terrain. This technique converts teleoperation into virtual line-of-sight remote control. The underlying metrically consistent environment model also introduces the capacity to remove latency and enhance video compression. Display quality is sufficiently high that the user experience is similar to a driving video game in which the surfaces are textured with live video.
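
An illustrative fragment of one ingredient of such a pipeline: attaching live video pixels to on-the-move range data so the scene can later be rendered from an arbitrary exterior viewpoint. This is a generic sketch, not the system described in the paper; K, R, and t stand for assumed camera calibration.

```python
import numpy as np

def colorize_lidar(points_world, image, K, R, t):
    """Attach an RGB color to each 3D point by projecting it into the camera image."""
    cam = R @ points_world.T + t[:, None]       # world points into the camera frame
    z = np.maximum(cam[2], 1e-9)
    u = np.round(cam[0] / z * K[0, 0] + K[0, 2]).astype(int)
    v = np.round(cam[1] / z * K[1, 1] + K[1, 2]).astype(int)
    h, w = image.shape[:2]
    ok = (cam[2] > 0) & (u >= 0) & (u < w) & (v >= 0) & (v < h)
    colors = np.zeros((points_world.shape[0], 3), dtype=image.dtype)
    colors[ok] = image[v[ok], u[ok]]            # sample the live video where the point is visible
    return colors, ok                           # colored points feed the photorealistic renderer
```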


international conference on multisensor fusion and integration for intelligent systems | 1996

Recovery of dynamic scene structure from multiple image sequences

Peter Rander; P. J. Narayanan; Takeo Kanade

Despite significant progress in automatic recovery of static scene structure from range images, little effort has been made toward extending these approaches to dynamic scenes. This disparity is in large part due to the lack of range sensors with the high sampling rates needed to accurately capture dynamic scenes. We have developed a system that overcomes this problem by exploiting video cameras, which easily capture images of dynamic scenes, and image-based stereo, which estimates scene structure based on correspondences among the images from different cameras. Our system uses a synchronized multi-camera recording system to capture live video of the scene and a software implementation of image-based stereo to compute range images off-line. By combining this system with multi-image fusion, we created a novel system for dynamic structure recovery, with many applications including telepresence, training, and entertainment. Development of this system has also revealed the potential use of fusion as both a multi-view and multi-resolution integration process for stereo.
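
Image-based stereo estimates scene structure from correspondences between camera views. Below is a deliberately simplified sketch, not the paper's implementation: brute-force block matching over a rectified grayscale pair, with depth triangulated as focal_length * baseline / disparity. The window size and disparity range are arbitrary choices.

```python
import numpy as np

def block_matching_depth(left, right, focal_px, baseline_m, max_disp=64, win=5):
    """Estimate a dense depth map from a rectified grayscale stereo pair."""
    h, w = left.shape
    half = win // 2
    depth = np.zeros((h, w))
    for y in range(half, h - half):
        for x in range(half + max_disp, w - half):
            patch = left[y - half:y + half + 1, x - half:x + half + 1]
            costs = [np.abs(patch - right[y - half:y + half + 1,
                                          x - d - half:x - d + half + 1]).sum()
                     for d in range(1, max_disp)]         # SAD cost for each candidate disparity
            d_best = 1 + int(np.argmin(costs))
            depth[y, x] = focal_px * baseline_m / d_best  # triangulate depth from disparity
    return depth
```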


Information Visualization | 2002

Stereo perception on an off-road vehicle

A. Rieder; B. Southall; Garbis Salgian; Robert Mandelbaum; Herman Herman; Peter Rander; T. Stentz

This paper presents a vehicle for autonomous off-road navigation built in the framework of DARPA's PerceptOR program. Special emphasis is given to the perception system. A set of three stereo camera pairs provides color and 3D data over a wide field of view (greater than 100 degrees) at high resolution (2160 by 480 pixels) and high frame rates (5 Hz). This is made possible by integrating powerful image-processing hardware called Acadia. These high data rates require efficient sensor fusion, terrain reconstruction, and path-planning algorithms. The paper quantifies sensor performance and shows examples of successful obstacle avoidance.
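
One common way to turn fused stereo returns into something a path planner can consume is a 2.5D elevation grid. The sketch below is only a plausible illustration of that terrain-reconstruction step; the grid extent and resolution are invented, and the actual system's data structures may differ.

```python
import numpy as np

def build_elevation_grid(point_clouds, cell_size=0.2, extent=40.0):
    """Accumulate vehicle-frame points from all stereo pairs into a max-height grid."""
    n = int(extent / cell_size)
    grid = np.full((n, n), np.nan)
    for pts in point_clouds:                             # one (N, 3) cloud per stereo camera pair
        ix = ((pts[:, 0] + extent / 2) / cell_size).astype(int)
        iy = ((pts[:, 1] + extent / 2) / cell_size).astype(int)
        ok = (ix >= 0) & (ix < n) & (iy >= 0) & (iy < n)
        for x, y, z in zip(ix[ok], iy[ok], pts[ok, 2]):
            if np.isnan(grid[y, x]) or z > grid[y, x]:   # keep the highest return per cell
                grid[y, x] = z
    return grid                                          # large height jumps between cells flag obstacles
```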


international conference on robotics and automation | 2011

Monocular visual odometry for robot localization in LNG pipes

Peter Hansen; Hatem Alismail; Peter Rander; Brett Browning

Regular inspection for corrosion of the pipes used in Liquified Natural Gas (LNG) processing facilities is critical for safety. We argue that a visual perception system mounted on a pipe-crawling robot can improve on existing techniques (Magnetic Flux Leakage, radiography, ultrasound) by producing high-resolution registered appearance maps of the internal surface. To achieve this capability, it is necessary to estimate the pose of the sensors as the robot traverses the pipes. We have explored two monocular visual odometry algorithms (dense and sparse) that can be used to estimate sensor pose. Both algorithms use a single, easily made measurement of the scene structure to resolve the monocular scale ambiguity in their visual odometry estimates. We have obtained pose estimates using these algorithms with image sequences captured from cameras mounted on different robots as they moved through two pipes with diameters of 152 mm (6″) and 406 mm (16″), and lengths of 6 and 4 meters, respectively. Accurate pose estimates were obtained, with errors consistently less than 1 percent of the distance traveled down the pipe.
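
The key point is that a single known scene measurement, here the pipe radius, fixes the scale that monocular visual odometry cannot observe. The following is a hedged illustration (not either of the paper's two algorithms): fit the unscaled triangulated points to the pipe cross-section and rescale the translation accordingly.

```python
import numpy as np

def resolve_monocular_scale(translation_unscaled, points_unscaled, pipe_radius_m, axis_point, axis_dir):
    """Scale a monocular VO translation so triangulated pipe points match the known radius."""
    d = axis_dir / np.linalg.norm(axis_dir)
    rel = points_unscaled - axis_point
    radial = rel - np.outer(rel @ d, d)              # component of each point normal to the pipe axis
    radius_unscaled = np.median(np.linalg.norm(radial, axis=1))
    scale = pipe_radius_m / radius_unscaled          # meters per VO unit
    return scale * translation_unscaled, scale

# e.g. for the 152 mm (6 in) pipe: pipe_radius_m = 0.152 / 2
```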


intelligent robots and systems | 2013

Pipe mapping with monocular fisheye imagery

Peter Hansen; Hatem Alismail; Peter Rander; Brett Browning

We present a vision-based mapping and localization system for operations in pipes such as those found in Liquified Natural Gas (LNG) production. A forward-facing fisheye camera mounted on a prototype robot collects imagery as it is teleoperated through a pipe network. The images are processed offline to estimate camera pose and sparse scene structure, and the results can be used to generate 3D renderings of the pipe surface. The method extends state-of-the-art visual odometry and mapping for fisheye systems by incorporating geometric constraints, based on prior knowledge of the pipe components, into a Sparse Bundle Adjustment framework. These constraints significantly reduce inaccuracies resulting from the limited spatial resolution of the fisheye imagery, limited image texture, and visual aliasing. Preliminary results are presented for datasets collected in our fiberglass pipe network, which demonstrate the validity of the approach.
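
The geometric constraints mentioned above can be expressed as extra residuals in the Sparse Bundle Adjustment cost, alongside the usual reprojection errors. The formulation below is my own illustration of such a prior, a penalty on triangulated points leaving a cylinder of known radius, and is not necessarily the paper's parameterization.

```python
import numpy as np

def cylinder_residuals(points, axis_point, axis_dir, radius, weight=1.0):
    """Per-point deviation from a pipe of known radius (to be minimized within SBA)."""
    d = axis_dir / np.linalg.norm(axis_dir)
    rel = points - axis_point
    radial_dist = np.linalg.norm(rel - np.outer(rel @ d, d), axis=1)
    return weight * (radial_dist - radius)       # zero when a point lies exactly on the pipe wall
```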


computer vision and pattern recognition | 2012

Online continuous stereo extrinsic parameter estimation

Peter Hansen; Hatem Alismail; Peter Rander; Brett Browning

Stereo visual odometry and dense scene reconstruction depend critically on accurate calibration of the extrinsic (relative) stereo camera poses. We present an algorithm for continuous, online stereo extrinsic re-calibration that operates only on sparse stereo correspondences on a per-frame basis. We obtain the 5-degree-of-freedom extrinsic pose for each frame, with a fixed baseline length, making it possible to model time-dependent variations. The initial extrinsic estimates are found by minimizing epipolar errors and are refined via a Kalman Filter (KF). Observation covariances are derived from the Cramér-Rao lower bound of the solution uncertainty. The algorithm operates at frame rate in unoptimized Matlab code, even with over 1000 correspondences per frame. We validate its performance using a variety of real stereo datasets and simulations.
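
A minimal sketch of the two stages described: per-frame epipolar-error minimization over the 5 extrinsic degrees of freedom (rotation plus translation direction, with the baseline length held fixed), followed by a Kalman-filter update. The parameterization, the identity state model, and the measurement-covariance handling are my assumptions, not the paper's derivation.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def epipolar_residuals(params, pts_left, pts_right, baseline):
    """Algebraic epipolar errors x_r^T E x_l for normalized stereo correspondences."""
    rvec, theta, phi = params[:3], params[3], params[4]
    R = Rotation.from_rotvec(rvec).as_matrix()
    t = baseline * np.array([np.sin(theta) * np.cos(phi),
                             np.sin(theta) * np.sin(phi),
                             np.cos(theta)])           # direction only: 2 DOF, length fixed
    tx = np.array([[0, -t[2], t[1]], [t[2], 0, -t[0]], [-t[1], t[0], 0]])
    E = tx @ R                                         # essential matrix from the candidate extrinsics
    return np.einsum('ni,ij,nj->n', pts_right, E, pts_left)

def recalibrate_frame(prev_params, prev_cov, pts_left, pts_right, baseline, meas_cov):
    """One frame of online re-calibration: refit, then Kalman-filter update of the state."""
    fit = least_squares(epipolar_residuals, prev_params, args=(pts_left, pts_right, baseline))
    K = prev_cov @ np.linalg.inv(prev_cov + meas_cov)  # Kalman gain for an identity state model
    new_params = prev_params + K @ (fit.x - prev_params)
    new_cov = (np.eye(5) - K) @ prev_cov
    return new_params, new_cov
```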

Collaboration


Dive into Peter Rander's collaborations.

Top Co-Authors

Takeo Kanade (Carnegie Mellon University)
Brett Browning (Carnegie Mellon University)
Hatem Alismail (Carnegie Mellon University)
Herman Herman (Carnegie Mellon University)
Peter Hansen (Carnegie Mellon University)
P. J. Narayanan (Carnegie Mellon University)
Alonzo Kelly (Carnegie Mellon University)
Anthony Stentz (Carnegie Mellon University)
David M. Bradley (Carnegie Mellon University)
Omead Amidi (Carnegie Mellon University)