
Publication


Featured research published by Garbis Salgian.


international conference on computer vision | 1999

Correlation-based estimation of ego-motion and structure from motion and stereo

Robert Mandelbaum; Garbis Salgian; Harpreet S. Sawhney

This paper describes a correlation-based, iterative, multi-resolution algorithm that estimates both scene structure and the motion of the camera rig through an environment from the stream(s) of incoming images. Both single-camera rigs and multiple-camera rigs can be accommodated. The use of multiple synchronized cameras results in more rapid convergence of the iterative approach. The algorithm uses a global ego-motion constraint to refine estimates of inter-frame camera rotation and translation. It uses local window-based correlation to refine the current estimate of scene structure. All analysis is performed at multiple resolutions. In order to combine, in a straightforward way, the correlation surfaces from multiple viewpoints and from multiple pixels in a support region, each pixel's correlation surface is modeled as a quadratic. This parameterization allows direct, explicit computation of incremental refinements for ego-motion and structure using linear algebra. Batches can be of arbitrary size, allowing a trade-off between accuracy and latency. Batches can also be daisy-chained for extended sequences. Results of the algorithm are shown on synthetic and real outdoor image sequences.
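The quadratic parameterization described above can be sketched in a few lines. This is a hedged 1-D illustration (the paper models 2-D correlation surfaces); the function names and synthetic data are invented for the example:

```python
import numpy as np

# A hedged 1-D sketch of the idea: model each correlation surface as a
# quadratic c_i(d) = a_i*d^2 + b_i*d + k_i, so surfaces from many pixels and
# viewpoints combine by summing coefficients, and the joint minimum has the
# closed form d* = -B / (2*A).

def fit_quadratic(disparities, scores):
    """Least-squares quadratic fit to one correlation surface."""
    a, b, k = np.polyfit(disparities, scores, 2)
    return a, b, k

def combined_minimum(quadratics):
    """Sum per-pixel quadratics and return the joint minimizer."""
    A = sum(q[0] for q in quadratics)
    B = sum(q[1] for q in quadratics)
    return -B / (2.0 * A)

d = np.linspace(-2.0, 2.0, 9)
q1 = fit_quadratic(d, (d - 0.5) ** 2)        # surface with minimum at 0.5
q2 = fit_quadratic(d, 2.0 * (d - 1.0) ** 2)  # sharper surface, minimum at 1.0
print(combined_minimum([q1, q2]))  # pulled toward the sharper surface
```

The sharper (higher-curvature) surface dominates the combined estimate, which is how confident pixels outvote flat, ambiguous ones without any explicit weighting step.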


Information Visualization | 2002

Stereo perception on an off-road vehicle

A. Rieder; B. Southall; Garbis Salgian; Robert Mandelbaum; Herman Herman; Peter Rander; T. Stentz

This paper presents a vehicle for autonomous off-road navigation built in the framework of DARPA's PerceptOR program. Special emphasis is given to the perception system. A set of three stereo camera pairs provides color and 3D data in a wide field of view (greater than 100 degrees) at high resolution (2160 by 480 pixels) and high frame rates (5 Hz). This is made possible by integrating powerful image-processing hardware called Acadia. These high data rates require efficient sensor fusion, terrain reconstruction, and path planning algorithms. The paper quantifies sensor performance and shows examples of successful obstacle avoidance.
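A hedged plausibility check on the numbers quoted above: three 720x480 pairs tile into the 2160x480 composite, and a per-pair field of view of roughly 35 degrees (the focal length below is an assumption, not from the paper) covers more than 100 degrees in total:

```python
import math

# Three 720x480 stereo pairs side by side give the quoted 2160x480 composite.
# With an assumed focal length of ~1140 pixels (not from the paper), each pair
# covers about 35 degrees, so three pairs exceed 100 degrees.

def hfov_deg(width_px, focal_px):
    """Horizontal field of view of a pinhole camera, in degrees."""
    return math.degrees(2.0 * math.atan(width_px / (2.0 * focal_px)))

per_pair = hfov_deg(720, 1140.0)
print(3 * 720, round(3 * per_pair, 1))
```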


international conference on robotics and automation | 2002

Sensor fusion of structure-from-motion, bathymetric 3D, and beacon-based navigation modalities

Hanumant Singh; Garbis Salgian; Ryan M. Eustice; Robert Mandelbaum

This paper describes an approach for the fusion of 3D data obtained underwater from multiple sensing modalities. In particular, we examine the combination of image-based structure-from-motion (SFM) data with bathymetric data obtained using pencil-beam underwater sonar, in order to recover the shape of the seabed terrain. We also combine image-based ego-motion estimation with acoustic-based and inertial navigation data on board the underwater vehicle. When fusion is performed at the data level, each modality is used to extract 3D information independently. The 3D representations are then aligned and compared. In this case, we use the bathymetric data as ground truth to measure the accuracy and drift of the SFM approach. Similarly, we use the navigation data as ground truth against which we measure the accuracy of the image-based ego-motion estimation. We examine how low-resolution bathymetric data can be used to seed the higher-resolution SFM algorithm, improving convergence rates and reducing drift error. Similarly, acoustic-based and inertial navigation data improve the convergence and drift properties of ego-motion estimation.
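The seeding idea can be pictured with a small sketch: a coarse bathymetric depth grid initializes the higher-resolution SFM estimate, so the iterative refinement starts near the true terrain. This is a hedged illustration; nearest-neighbour upsampling stands in for whatever interpolation the authors actually used, and the grid values are synthetic:

```python
import numpy as np

# A coarse bathymetric depth grid is upsampled to initialize the
# higher-resolution SFM estimate (the paper reports faster convergence and
# less drift when seeded this way).

def seed_from_bathymetry(coarse_depth, factor):
    """Upsample a coarse depth grid by an integer factor as the SFM seed."""
    up = np.repeat(coarse_depth, factor, axis=0)
    return np.repeat(up, factor, axis=1)

coarse = np.array([[10.0, 12.0],
                   [11.0, 13.0]])  # synthetic bathymetric depths in meters
seed = seed_from_bathymetry(coarse, 2)
print(seed.shape)  # (4, 4)
```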


international conference on robotics and automation | 2000

Terrain reconstruction for ground and underwater robots

Robert Mandelbaum; Garbis Salgian; Harpreet S. Sawhney; Michael W. Hansen

We describe a new image-processing algorithm for estimating both the egomotion of an outdoor robotic platform and the structure of the surrounding terrain. The algorithm is based on correlation, and is embedded in an iterative, multi-resolution framework. As such, it is suited to outdoor ground-based and underwater scenes. Both single-camera rigs and multiple-camera rigs can be accommodated. The use of multiple synchronized cameras results in more rapid convergence of the iterative approach. We describe how the algorithm operates, and give examples of its application to three robotic domains: 1) autonomous mobility of ground-based outdoor robots, 2) reconnaissance tasks on ground-based vehicles, and 3) underwater robotics.


Enhanced and synthetic vision : proceedings of SPIE. Vol. 4023 | 2000

Extended terrain reconstruction for autonomous vehicles

Garbis Salgian; Robert Mandelbaum; Harpreet S. Sawhney; Michael W. Hansen

This paper presents an image-processing algorithm for estimating both the ego-motion of an outdoor robotic platform and the structure of the surrounding terrain. The algorithm is based on correlation, and is embedded in an iterative, multi-resolution framework. As such, it is suited to outdoor ground-based and underwater scenes. Both single-camera rigs and multiple-camera rigs can be accommodated. The use of multiple synchronized cameras results in more rapid convergence of the iterative approach. We describe how the algorithm operates and give examples of its application in several robotic domains: autonomous mobility of outdoor robots and underwater robotics.


international conference on computer vision | 2009

Monocular structure from motion for near to long ranges

John Richard Fields; Garbis Salgian; Supun Samarasekera; Rakesh Kumar

This paper describes a sensing system for estimating range and detecting the shape of objects from a few meters to a few kilometers away. Such distances are too large for current active methods (e.g., LADAR) or fixed-baseline stereo. A sensing system consisting of a single camera mounted on a ground vehicle equipped with a precision inertial navigation system (INS) is used. The vehicle travel is used to synthesize baselines of different lengths. The system uses visual odometry (VO) techniques to refine the camera orientation information derived from the INS and camera-to-vehicle calibration. Range information is obtained through motion stereo analysis of rectified image pairs and the use of multiple baselines in each range image. In addition, range images are combined as the vehicle travel creates new views. Results are compared with ground truth in open terrain with ranges up to several km.
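The motion-stereo geometry behind the synthesized baselines follows the standard rectified-stereo relation Z = fB/d. A hedged sketch of why long baselines matter at kilometer ranges (the focal length and disparity noise below are assumed values, not from the paper):

```python
# For a rectified pair with focal length f (pixels), baseline B (meters) and
# disparity d (pixels), range is Z = f*B/d, and a disparity error of sigma
# pixels maps to a range error of roughly Z^2 * sigma / (f*B): quadratic in
# range, inverse in baseline, hence the value of synthesizing long baselines
# from vehicle travel.

def range_from_disparity(f_px, baseline_m, disparity_px):
    return f_px * baseline_m / disparity_px

def range_error_m(f_px, baseline_m, z_m, disparity_noise_px=0.25):
    return (z_m ** 2) * disparity_noise_px / (f_px * baseline_m)

f = 2000.0  # assumed focal length in pixels
print(range_error_m(f, 1.0, 1000.0))   # 1 m baseline at 1 km range
print(range_error_m(f, 50.0, 1000.0))  # 50 m synthesized baseline
```

Under these assumed numbers, a fixed 1 m baseline is useless at 1 km, while a 50 m baseline synthesized from vehicle travel brings the error down to a few meters.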


Defense and Security Symposium | 2007

On-the-move independently moving target detection

Garbis Salgian; Jiangjian Xiao; Supun Samarasekera; Rakesh Kumar

This paper describes a system for automatically detecting potential targets (that pop up or move into view) and cueing the operator to potential threats. Detection of independently moving targets from a moving ground vehicle is challenging due to the strong parallax effects caused by the camera moving close to the 3D structure in the environment. We present a 3D approach for detecting and tracking such independently moving targets with multiple monocular cameras. In our approach, we first recover the camera position and orientation by employing a visual odometry method. Next, using multiple consecutive frames with the estimated camera poses, the structure of the scene at the reference frame is explicitly recovered by a motion stereo approach, and corresponding optical flow fields between the reference frame and other frames are also estimated. Third, an advanced filter is designed by combining second-order differences between 3D warping and optical-flow warping to distinguish moving objects from parallax regions. We present results of the algorithm on data collected with an eight-camera system mounted on a vehicle under multiple scenarios that include moving and pop-up targets.
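The core of the parallax filtering can be sketched as a residual test. This is a hedged simplification, not the paper's exact second-order filter; the threshold and the synthetic flow fields are invented for the example:

```python
import numpy as np

# Pixels whose measured optical flow disagrees with the flow predicted by
# warping the recovered 3D structure through the known camera motion are
# flagged as independently moving; static parallax is explained by the 3D
# warp and therefore cancels in the residual.

def moving_mask(flow_measured, flow_3d_predicted, thresh_px=1.0):
    """Flag pixels whose flow residual exceeds the threshold."""
    residual = np.linalg.norm(flow_measured - flow_3d_predicted, axis=-1)
    return residual > thresh_px

flow_meas = np.zeros((4, 4, 2))
flow_pred = np.zeros((4, 4, 2))  # static scene fully explained by 3D warp
flow_meas[1, 1] = [3.0, 0.0]     # one pixel moves on its own
print(moving_mask(flow_meas, flow_pred).sum())  # -> 1
```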


Archive | 2000

Method and apparatus for estimating scene structure and ego-motion from multiple images of a scene using correlation

Robert Mandelbaum; Garbis Salgian; Harpreet S. Sawhney


international conference on computer vision | 1995

Electronically directed "focal" stereo

Peter J. Burt; Lambert E. Wixson; Garbis Salgian


Archive | 2003

Real-Time, Multi-Perspective Perception for Unmanned Ground Vehicles

Anthony Stentz; Alonzo Kelly; Peter Rander; Herman Herman; Omead Amidi; Robert Mandelbaum; Garbis Salgian; Jorgen Pedersen

Collaboration

Top co-authors of Garbis Salgian:

- Herman Herman (Carnegie Mellon University)
- Peter Rander (Carnegie Mellon University)
- Alonzo Kelly (Carnegie Mellon University)