
Publication


Featured research published by Robert Sim.


International Conference on Robotics and Automation | 2005

Global A-Optimal Robot Exploration in SLAM

Robert Sim; Nicholas Roy

It is well known that the Kalman filter for simultaneous localization and mapping (SLAM) converges to a fully correlated map in the limit of infinite time and data [1]. However, the rate of convergence of the map has a strong dependence on the order of the observations. We show that conventional exploration algorithms for collecting map data are sub-optimal in both their objective function and choice of optimization procedure. First, we show that optimizing the A-optimal information measure results in a more accurate map than existing approaches, using a greedy, closed-loop strategy. Second, we demonstrate that by restricting the planning to an appropriate policy class, we can tractably find non-greedy, global planning trajectories that produce more accurate maps, explicitly planning to close loops even in open-loop scenarios.
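The A-optimality criterion discussed above can be illustrated with a minimal sketch (not the paper's implementation; all names and matrices are hypothetical): score each candidate observation by the trace of the EKF covariance that would result, and greedily pick the candidate with the smallest trace.

```python
import numpy as np

def a_opt_score(P):
    """A-optimality: trace of the covariance (sum of eigenvalues).
    Smaller trace means lower expected map uncertainty."""
    return np.trace(P)

def ekf_update_cov(P, H, R):
    """Covariance-only EKF measurement update: P' = (I - K H) P."""
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    return (np.eye(P.shape[0]) - K @ H) @ P

def greedy_a_optimal(P, candidate_H, R):
    """Pick the candidate observation minimizing the A-optimal score."""
    scores = [a_opt_score(ekf_update_cov(P, H, R)) for H in candidate_H]
    return int(np.argmin(scores))

P = np.diag([1.0, 4.0, 0.25])        # current joint covariance (toy)
H1 = np.array([[1.0, 0.0, 0.0]])     # observe an already-certain state
H2 = np.array([[0.0, 1.0, 0.0]])     # observe the most uncertain state
R = np.array([[0.1]])
best = greedy_a_optimal(P, [H1, H2], R)  # picks the uncertain state
```

Greedily observing the most uncertain part of the map is what the closed-loop strategy above exploits; the global variant searches over trajectories rather than single observations.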


International Conference on Robotics and Automation | 2006

σSLAM: stereo vision SLAM using the Rao-Blackwellised particle filter and a novel mixture proposal distribution

Pantelis Elinas; Robert Sim; James J. Little

We consider the problem of simultaneous localization and mapping (SLAM) using the Rao-Blackwellised particle filter (RBPF) for the class of indoor mobile robots equipped only with stereo vision. Our goal is to construct dense metric maps of natural 3D point landmarks for large cyclic environments in the absence of accurate landmark position measurements and motion estimates. Our work differs from other approaches because landmark estimates are derived from stereo vision and motion estimates are based on sparse optical flow. We distinguish between landmarks using the scale invariant feature transform (SIFT). This is in contrast to current popular approaches that rely on reliable motion models derived from odometric hardware and accurate landmark measurements obtained with laser sensors. Since our approach depends on a particle filter whose main component is the proposal distribution, we develop and evaluate a novel mixture proposal distribution that allows us to robustly close large loops. We validate our approach experimentally for long camera trajectories, processing thousands of images at reasonable frame rates.
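A mixture proposal of the kind this abstract describes can be sketched as follows, assuming hypothetical local (visual-odometry) and global (observation-driven) samplers; the importance-weight correction that must accompany the proposal is omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

def motion_proposal(particle):
    """Local proposal: diffuse the particle with visual-odometry noise."""
    return particle + rng.normal(0.0, 0.1, size=particle.shape)

def global_proposal(observation):
    """Global proposal: sample near a pose hypothesis recovered from the
    current observation (e.g. SIFT matches against the map)."""
    return observation + rng.normal(0.0, 0.05, size=observation.shape)

def mixture_propose(particle, observation, phi=0.2):
    """Mixture proposal: mostly local moves, occasionally a global jump.
    The global component is what lets the filter re-localize and close
    large loops even when the local motion estimate has drifted."""
    if rng.random() < phi:
        return global_proposal(observation)
    return motion_proposal(particle)

particles = [np.zeros(3) for _ in range(100)]
z = np.array([1.0, 2.0, 0.5])   # hypothetical pose from the observation
new_particles = [mixture_propose(p, z) for p in particles]
```

The mixing weight `phi` and both samplers are illustrative; the paper's proposal is built from the actual stereo observation model.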


International Conference on Robotics and Automation | 1999

Learning visual landmarks for pose estimation

Robert Sim; Gregory Dudek

We present an approach to vision-based mobile robot localization, even without an a priori pose estimate. This is accomplished by learning a set of visual features called image-domain landmarks. The landmark learning mechanism is designed to be applicable to a wide range of environments. Each landmark is detected as a focal extremum of a measure of uniqueness and represented by an appearance-based encoding. Localization is performed using a method that matches observed landmarks to learned prototypes and generates independent position estimates for each match. The independent estimates are then combined to obtain a final position estimate, with an associated uncertainty. Quantitative experimental evidence is presented that demonstrates that accurate pose estimates can be obtained, despite changes to the environment.
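Combining independent per-match estimates with associated uncertainties, as described above, reduces in the simplest scalar case to inverse-variance weighting; a sketch under that assumption (the numbers are illustrative):

```python
import numpy as np

def fuse_estimates(means, variances):
    """Combine independent position estimates by inverse-variance
    weighting. Returns the fused mean and its (smaller) variance."""
    w = 1.0 / np.asarray(variances)
    mean = np.sum(w * np.asarray(means)) / np.sum(w)
    var = 1.0 / np.sum(w)
    return mean, var

# Three landmark matches vote on the robot's x-coordinate:
mean, var = fuse_estimates([2.0, 2.4, 1.8], [0.5, 1.0, 0.25])
```

Note that the fused variance is always smaller than the best individual one, which is why accumulating many weak landmark matches still yields an accurate pose.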


Intelligent Robots and Systems | 2004

AQUA: an aquatic walking robot

Christina Georgiades; Andrew German; Andrew Hogue; Hui Liu; Chris Prahacs; Arlene Ripsman; Robert Sim; Luz-Abril Torres; Pifu Zhang; Martin Buehler; Gregory Dudek; Michael Jenkin; Evangelos E. Milios

This paper describes an underwater walking robotic system being developed under the name AQUA, the goals of the AQUA project, the overall hardware and software design, the basic hardware and sensor packages that have been developed, and some initial experiments. The robot is based on the RHex hexapod robot and uses a suite of sensing technologies, primarily based on computer vision and INS, to allow it to navigate and map clear shallow-water environments. The sensor-based navigation and mapping algorithms are based on the use of both artificial floating visual and acoustic landmarks as well as on naturally occurring underwater landmarks and trinocular stereo.


Intelligent Robots and Systems | 1998

Mobile robot localization from learned landmarks

Robert Sim; Gregory Dudek

Presents an approach to vision-based mobile robot localization. In an attempt to capitalize on the benefits of both image and landmark-based methods, we describe a method that combines their strengths. Images are encoded as a set of visual features called landmarks. Potential landmarks are detected using an attention mechanism implemented as a measure of uniqueness. They are then selected and represented by an appearance-based encoding. Localization is performed using a landmark tracking and interpolation method which obtains an estimate accurate to a fraction of the environment sampling density. Experimental results are shown to confirm the feasibility and accuracy of the method.


Intelligent Robots and Systems | 2006

Autonomous vision-based exploration and mapping using hybrid maps and Rao-Blackwellised particle filters

Robert Sim; James J. Little

This paper addresses the problem of exploring and mapping an unknown environment using a robot equipped with a stereo vision sensor. The main contribution of our work is a fully automatic mapping system that operates without the use of active range sensors (such as laser or sonic transducers), can operate in real-time, and can consistently produce accurate maps of large-scale environments. Our approach implements a Rao-Blackwellised particle filter (RBPF) to solve the simultaneous localization and mapping problem and uses efficient data structures for real-time data association, mapping, and spatial reasoning. We employ a hybrid map representation that infers 3D point landmarks from image features to achieve precise localization, coupled with occupancy grids for safe navigation. This paper describes our framework and implementation, presents our exploration method, and reports experimental results illustrating the functionality of the system.


Canadian Conference on Computer and Robot Vision | 2006

Design and analysis of a framework for real-time vision-based SLAM using Rao-Blackwellised particle filters

Robert Sim; Pantelis Elinas; Matthew Joseph Griffin; Alex Shyr; James J. Little

This paper addresses the problem of simultaneous localization and mapping (SLAM) using vision-based sensing. We present and analyse an implementation of a Rao-Blackwellised particle filter (RBPF) that uses stereo vision to localize a camera and 3D landmarks as the camera moves through an unknown environment. Our implementation is robust, can operate in real-time, and can operate without odometric or inertial measurements. Furthermore, our approach supports a 6-degree-of-freedom pose representation, vision-based ego-motion estimation, adaptive resampling, monocular operation, and a selection of odometry-based, observation-based, and mixture (combining local and global pose estimation) proposal distributions. This paper also examines the run-time behavior of efficiently designed RBPFs, providing an extensive empirical analysis of the memory and processing characteristics of RBPFs for vision-based SLAM. Finally, we present experimental results demonstrating the accuracy and efficiency of our approach.
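The adaptive resampling mentioned above is commonly implemented via the effective sample size of the particle weights; a minimal sketch (the threshold and names are illustrative, not taken from the paper):

```python
import numpy as np

def effective_sample_size(weights):
    """N_eff = 1 / sum(w_i^2) for normalized weights."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    return 1.0 / np.sum(w ** 2)

def maybe_resample(particles, weights, threshold=0.5):
    """Resample only when N_eff drops below threshold * N, which avoids
    discarding particle diversity while weights are still balanced."""
    n = len(particles)
    if effective_sample_size(weights) < threshold * n:
        w = np.asarray(weights, dtype=float)
        w /= w.sum()
        idx = np.random.default_rng(0).choice(n, size=n, p=w)
        return [particles[i] for i in idx], np.full(n, 1.0 / n)
    return particles, np.asarray(weights, dtype=float)

particles = list(range(4))
uniform = np.full(4, 0.25)
skewed = np.array([0.97, 0.01, 0.01, 0.01])
_, w1 = maybe_resample(particles, uniform)   # N_eff = N: no resampling
_, w2 = maybe_resample(particles, skewed)    # N_eff ~ 1: resample, reset
```

Resampling only when needed is one of the levers behind the run-time behavior the paper studies, since each resample copies (or shares) per-particle map state.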


International Journal of Computer Vision | 2007

A Study of the Rao-Blackwellised Particle Filter for Efficient and Accurate Vision-Based SLAM

Robert Sim; Pantelis Elinas; James J. Little

With recent advances in real-time implementations of filters for solving the simultaneous localization and mapping (SLAM) problem in the range-sensing domain, attention has shifted to implementing SLAM solutions using vision-based sensing. This paper presents and analyses different models of the Rao-Blackwellised particle filter (RBPF) for vision-based SLAM within a comprehensive application architecture. The main contributions of our work are the introduction of a new robot motion model utilizing structure from motion (SFM) methods and a novel mixture proposal distribution that combines local and global pose estimation. In addition, we compare these under a wide variety of operating modalities, including monocular sensing and the standard odometry-based methods. We also present a detailed study of the RBPF for SLAM, addressing issues in achieving real-time, robust and numerically reliable filter behavior. Finally, we present experimental results illustrating the improved accuracy of our proposed models and the efficiency and scalability of our implementation.


International Conference on Computer Vision | 1999

Learning and evaluating visual features for pose estimation

Robert Sim; Gregory Dudek

We present a method for learning a set of visual landmarks which are useful for pose estimation. The landmark learning mechanism is designed to be applicable to a wide range of environments, and generalized for different approaches to computing a pose estimate. Initially, each landmark is detected as a focal extremum of a measure of distinctiveness and represented by a principal components encoding which is exploited for matching. Attributes of the observed landmarks can be parameterized using a generic parameterization method and then evaluated in terms of their utility for pose estimation. We present experimental evidence that demonstrates the utility of the method.
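The detect-then-encode pipeline described above (focal extrema of a distinctiveness measure, then a principal-components encoding used for matching) can be sketched roughly as follows; the distinctiveness measure itself is assumed given, and all details are illustrative rather than the paper's.

```python
import numpy as np

def local_maxima(distinctiveness, radius=1):
    """Return (row, col) of strict local maxima of the distinctiveness
    measure within a (2*radius+1)^2 window: candidate landmarks."""
    d = distinctiveness
    peaks = []
    for r in range(radius, d.shape[0] - radius):
        for c in range(radius, d.shape[1] - radius):
            win = d[r - radius:r + radius + 1, c - radius:c + radius + 1]
            if d[r, c] == win.max() and np.count_nonzero(win == win.max()) == 1:
                peaks.append((r, c))
    return peaks

def pca_encode(patches, k=8):
    """Project flattened landmark patches onto their top-k principal
    components; matching then becomes a distance in this low-dim space."""
    X = np.asarray([p.ravel() for p in patches], dtype=float)
    X -= X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    return X @ Vt[:k].T

d = np.zeros((5, 5))
d[2, 2] = 1.0                 # one distinctive pixel in a flat image
peaks = local_maxima(d)       # the single distinctive pixel is found
```

The paper additionally evaluates each learned landmark's utility for pose estimation; that scoring step is not shown here.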


International Conference on Robotics and Automation | 2004

Online control policy optimization for minimizing map uncertainty during exploration

Robert Sim; Gregory Dudek; Nicholas Roy

Tremendous progress has been made recently in simultaneous localization and mapping of unknown environments. Using sensor and odometry data from an exploring mobile robot, it has become much easier to build high-quality globally consistent maps of many large, real-world environments. To date, however, relatively little attention has been paid to the controllers used to build these maps. Existing exploration strategies usually attempt to cover the largest amount of unknown space as quickly as possible. Few strategies exist for building the most reliable map possible, but the particular control strategy can have a substantial impact on the quality of the resulting map. In this paper, we devise a control algorithm for exploring unknown space that explicitly tries to build as large a map as possible while maintaining as accurate a map as possible. We make use of a parameterized class of spiral trajectory policies, choosing a new parameter setting at every time step to maximize the expected reward of the policy. We do this in the context of building a visual map of an unknown environment, and show that our strategy leads to a higher accuracy map faster than other candidate controllers, including any single choice in our policy class.
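Re-optimizing a single trajectory parameter at every time step, as described above, amounts to a one-dimensional search over the policy class; a toy sketch with a hypothetical reward model trading coverage against localization noise:

```python
import numpy as np

def expected_reward(theta, state):
    """Hypothetical reward model: new area covered minus expected growth
    in map uncertainty, as a function of the spiral parameter theta."""
    coverage = theta * state["frontier"]      # wider spiral, more area
    uncertainty = theta ** 2 * state["noise"]  # but noisier localization
    return coverage - uncertainty

def choose_policy(state, thetas):
    """Re-optimize the spiral parameter at every time step by picking
    the candidate with the highest expected reward."""
    rewards = [expected_reward(t, state) for t in thetas]
    return thetas[int(np.argmax(rewards))]

state = {"frontier": 1.0, "noise": 0.5}
theta = choose_policy(state, np.linspace(0.1, 2.0, 20))
```

For this toy model the reward `t - 0.5 t^2` peaks at `theta = 1.0`; the paper's reward is instead the expected map accuracy under the visual mapping filter.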

Collaboration

Robert Sim's top co-authors:

James J. Little (University of British Columbia)
Ioannis M. Rekleitis (University of South Carolina)
Pantelis Elinas (University of British Columbia)
Nicholas Roy (Massachusetts Institute of Technology)