Publications


Featured research published by Christopher Mei.


International Conference on Robotics and Automation | 2007

Single View Point Omnidirectional Camera Calibration from Planar Grids

Christopher Mei; Patrick Rives

This paper presents a flexible approach for calibrating omnidirectional single-viewpoint sensors from planar grids. These sensors are increasingly used in robotics, where accurate calibration is often a prerequisite. Current approaches in the field either are based on theoretical properties, and do not take into account important factors such as misalignment or camera-lens distortion, or are over-parametrised, which leads to minimisation problems that are difficult to solve. Recent techniques based on polynomial approximations lead to impractical calibration methods. Our model is based on an exact theoretical projection function to which we add well-identified parameters to model real-world errors. This leads to a full methodology, from the initialisation of the intrinsic parameters to the general calibration. We also discuss the validity of the approach for fish-eye and spherical models. An implementation of the method is available as open-source software on the authors' Web page. We validate the approach with the calibration of parabolic, hyperbolic, folded-mirror, wide-angle and spherical sensors.
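As a rough illustration of the model described above, the unified single-viewpoint projection can be sketched as a projection onto the unit sphere followed by a perspective projection whose centre is shifted by a mirror parameter xi (the intrinsics K and the value of xi below are hypothetical, and the distortion terms the paper adds on top are omitted):

```python
import numpy as np

def unified_project(X, xi, K):
    """Project a 3-D point with the unified single-viewpoint model.

    The point is first projected onto the unit sphere, the projection
    centre is then shifted by xi along the optical axis, and a standard
    perspective projection with intrinsics K follows.
    """
    x, y, z = X / np.linalg.norm(X)                   # point on the unit sphere
    m = np.array([x / (z + xi), y / (z + xi), 1.0])   # shifted perspective projection
    u = K @ m                                         # apply generalised intrinsics
    return u[:2]

# Hypothetical intrinsics; xi = 0 reduces to a plain perspective camera.
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
p = unified_project(np.array([0.1, 0.2, 1.0]), xi=0.8, K=K)
```

With xi = 0 a point on the optical axis lands on the principal point, which is a quick sanity check for any implementation of this family of models.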


International Journal of Computer Vision | 2011

RSLAM: A System for Large-Scale Mapping in Constant-Time Using Stereo

Christopher Mei; Gabe Sibley; Mark Cummins; Paul Newman; Ian D. Reid

Large scale exploration of the environment requires a constant time estimation engine. Bundle adjustment or pose relaxation do not fulfil these requirements as the number of parameters to solve grows with the size of the environment. We describe a relative simultaneous localisation and mapping system (RSLAM) for the constant-time estimation of structure and motion using a binocular stereo camera system as the sole sensor. Achieving robustness in the presence of difficult and changing lighting conditions and rapid motion requires careful engineering of the visual processing, and we describe a number of innovations which we show lead to high accuracy and robustness. In order to achieve real-time performance without placing severe limits on the size of the map that can be built, we use a topo-metric representation in terms of a sequence of relative locations. When combined with fast and reliable loop-closing, we mitigate the drift to obtain highly accurate global position estimates without any global minimisation. We discuss some of the issues that arise from using a relative representation, and evaluate our system on long sequences processed at a constant 30–45 Hz, obtaining precisions down to a few meters over distances of a few kilometres.
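The relative topo-metric representation can be illustrated with a minimal sketch: global poses are never stored, but a global estimate can be recovered on demand by composing relative transforms along a path of the map (the edge transforms below are hypothetical, not output from the system):

```python
import numpy as np

def se3(R, t):
    """Build a 4x4 homogeneous transform from rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def global_pose(edges):
    """Chain relative transforms along a path of the relative map.

    A relative map stores only edge transforms; a global pose for
    rendering or navigation is recovered by composing the edges along
    any path from a chosen root frame.
    """
    T = np.eye(4)
    for T_rel in edges:
        T = T @ T_rel
    return T

# Two hypothetical relative motions: 1 m forward, then 1 m forward with a 90-degree turn.
Rz = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
path = [se3(np.eye(3), [1.0, 0.0, 0.0]), se3(Rz, [1.0, 0.0, 0.0])]
T = global_pose(path)   # pose of the last frame in the root frame's coordinates
```

The cost of a query grows with the path length, not with the map size, which is what keeps the estimation engine constant-time.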


The International Journal of Robotics Research | 2010

Vast-scale Outdoor Navigation Using Adaptive Relative Bundle Adjustment

Gabe Sibley; Christopher Mei; Ian D. Reid; Paul Newman

In this paper we describe a relative approach to simultaneous localization and mapping, based on the insight that a continuous relative representation can make the problem tractable at large scales. First, it is well known that bundle adjustment is the optimal non-linear least-squares formulation for this problem, in that its maximum-likelihood form matches the definition of the Cramér-Rao lower bound. Unfortunately, computing the maximum-likelihood solution is often prohibitively expensive: this is especially true during loop closures, which often necessitate adjusting all parameters in a loop. In this paper we note that it is precisely the choice of a single privileged coordinate frame that makes bundle adjustment costly, and that this expense can be avoided by adopting a completely relative approach. We derive a new relative bundle adjustment which, instead of optimizing in a single Euclidean space, works in a metric space defined by a manifold. Using an adaptive optimization strategy, we show experimentally that it is possible to solve for the full maximum-likelihood solution incrementally in constant time, even at loop closure. Our approach is, by definition, everywhere locally Euclidean, and we show that the local Euclidean estimate matches that of traditional bundle adjustment. Our system operates online in real time using stereo data, with fast appearance-based loop closure detection. We show results on over 850,000 images that indicate the accuracy and scalability of the approach, and process over 330 GB of image data into a relative map covering 142 km of Southern England. To demonstrate a baseline sufficiency for navigation, we show that it is possible to find shortest paths in the relative maps we build, in terms of both time and distance. Query images from the web of popular landmarks around London, such as the London Eye or Trafalgar Square, are matched to the relative map to provide route-planning goals.
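The shortest-path queries mentioned at the end of the abstract reduce to a standard graph search over the relative map; a minimal sketch, assuming a hypothetical toy graph in which a loop-closure edge provides a shortcut:

```python
import heapq

def shortest_path(graph, start, goal):
    """Dijkstra search over the relative map's location graph.

    Nodes are relative locations; edge weights can be metric distance
    or traversal time. Loop closures simply add extra edges, which is
    what makes the relative map directly usable for route planning.
    """
    dist = {start: 0.0}
    prev = {}
    heap = [(0.0, start)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == goal:
            break
        if d > dist.get(u, float("inf")):
            continue
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(heap, (nd, v))
    # Reconstruct the node sequence from goal back to start.
    path, node = [goal], goal
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1], dist[goal]

# Hypothetical map: a chain A-B-C-D plus a loop-closure edge A-D.
graph = {"A": [("B", 1.0), ("D", 1.5)], "B": [("C", 1.0)],
         "C": [("D", 1.0)], "D": []}
route, cost = shortest_path(graph, "A", "D")   # the loop-closure edge wins
```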


Robotics: Science and Systems | 2009

Adaptive Relative Bundle Adjustment

Gabe Sibley; Christopher Mei; Ian D. Reid; Paul Newman

It is well known that bundle adjustment is the optimal non-linear least-squares formulation of the simultaneous localization and mapping problem, in that its maximum-likelihood form matches the definition of the Cramér-Rao lower bound. Unfortunately, computing the ML solution is often prohibitively expensive: this is especially true during loop closures, which often necessitate adjusting all parameters in a loop. In this paper we note that it is precisely the choice of a single privileged coordinate frame that makes bundle adjustment costly, and that this expense can be avoided by adopting a completely relative approach. We derive a new relative bundle adjustment which, instead of optimizing in a single Euclidean space, works in a metric space defined by a connected Riemannian manifold. Using an adaptive optimization strategy, we show experimentally that it is possible to solve for the full ML solution incrementally in constant time, even at loop closure. Our system also operates online in real time using stereo data, with fast appearance-based loop closure detection. We show results for sequences of 23k frames over 1.08 km that indicate the accuracy of the approach.


British Machine Vision Conference | 2009

A Constant-Time Efficient Stereo SLAM System

Christopher Mei; Gabe Sibley; Mark Cummins; Paul Newman; Ian D. Reid

Continuous, real-time mapping of an environment using a camera requires a constant-time estimation engine. This rules out optimal global solving such as bundle adjustment. In this article, we investigate the precision that can be achieved with only local estimation of motion and structure provided by a stereo pair. We introduce a simple but novel representation of the environment in terms of a sequence of relative locations. We demonstrate precise local mapping and easy navigation using the relative map, and importantly show that this can be done without requiring a global minimisation after loop closure. We discuss some of the issues that arise from using a relative representation, and evaluate our system on long sequences processed at a constant 30–45 Hz, obtaining precisions down to a few metres over distances of a few kilometres.


International Conference on Robotics and Automation | 2006

Calibration between a central catadioptric camera and a laser range finder for robotic applications

Christopher Mei; Patrick Rives

This paper presents several methods for estimating the relative position of a central catadioptric camera (including perspective cameras) and a laser range finder in order to obtain depth information in the panoramic image. The problem is analysed from a robotic perspective and according to the available information (visible or invisible laser beam, partial calibration, drift of laser data, etc.). The feasibility of the calibration is also discussed. The feature extraction process and results on real data are presented.
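A minimal sketch of how such a calibration is used: once the camera-laser extrinsics (R, t) are estimated, planar laser readings can be carried into the camera frame to attach depth to image pixels (the R and t below are hypothetical placeholders, not calibration results from the paper):

```python
import numpy as np

def laser_to_camera(ranges, angles, R, t):
    """Map planar laser readings into the camera frame.

    Each (range, bearing) reading becomes a 3-D point in the laser's
    x-y plane; the extrinsics (R, t) then carry it into the camera
    frame, where it can be projected to provide per-pixel depth.
    """
    pts = np.stack([ranges * np.cos(angles),
                    ranges * np.sin(angles),
                    np.zeros_like(ranges)])    # 3 x N points in the laser frame
    return R @ pts + t[:, None]                # 3 x N points in the camera frame

R = np.eye(3)                    # hypothetical: frames aligned
t = np.array([0.0, 0.0, 0.2])    # hypothetical: laser 20 cm from the camera centre
P = laser_to_camera(np.array([2.0, 3.0]), np.array([0.0, np.pi / 2]), R, t)
```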


IEEE Transactions on Robotics | 2008

Efficient Homography-Based Tracking and 3-D Reconstruction for Single-Viewpoint Sensors

Christopher Mei; Selim Benhimane; Ezio Malis; Patrick Rives

This paper addresses the problem of motion estimation and 3-D reconstruction through visual tracking with a single-viewpoint sensor and, in particular, how to generalize tracking to calibrated omnidirectional cameras. We analyze different minimization approaches for the intensity-based cost function (sum of squared differences). In particular, we propose novel variants of the efficient second-order minimization (ESM) with better computational complexities and compare these algorithms with the inverse composition (IC) and the hyperplane approximation (HA). Issues regarding the use of the IC and HA for 3-D tracking are discussed. We show that even though an iteration of ESM is computationally more expensive than an iteration of IC, the faster convergence rate makes it globally faster. The tracking algorithm was validated by using an omnidirectional sensor mounted on a mobile robot.
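The ESM idea can be illustrated on a toy 1-D translation problem: the Jacobian is the mean of the template gradient and the warped-image gradient, which is what yields second-order convergence at first-order cost. The signals below are synthetic, and this scalar warp is a hypothetical simplification of the paper's homography tracking:

```python
import numpy as np

def esm_translation(template, image, x, iters=10):
    """Estimate a 1-D translation by efficient second-order minimisation.

    The residual is the photometric (SSD) error; the ESM Jacobian is
    the average of the template gradient and the warped-image gradient.
    """
    t = 0.0
    g_template = np.gradient(template, x)
    for _ in range(iters):
        warped = np.interp(x + t, x, image)    # current image under the warp
        g_warped = np.gradient(warped, x)
        J = 0.5 * (g_template + g_warped)      # ESM Jacobian: mean gradient
        e = warped - template                  # photometric residual
        t -= np.sum(J * e) / np.sum(J * J)     # Gauss-Newton style step
    return t

x = np.linspace(-3.0, 3.0, 601)
template = np.exp(-x ** 2)          # smooth synthetic template
image = np.exp(-(x - 0.4) ** 2)     # the template shifted by 0.4
t_hat = esm_translation(template, image, x)
```

Replacing J with the template gradient alone turns this into an inverse-composition-style first-order scheme, which typically needs more iterations to reach the same residual.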


Computer Vision and Pattern Recognition | 2010

Growing semantically meaningful models for visual SLAM

Alex Flint; Christopher Mei; Ian D. Reid; David W. Murray

Though modern Visual Simultaneous Localisation and Mapping (vSLAM) systems are capable of localising robustly and efficiently even in the case of a monocular camera, the maps produced are typically sparse point clouds that are difficult to interpret and of little use for higher-level reasoning tasks such as scene understanding or human-machine interaction. In this paper we begin to address this deficiency, presenting progress on expanding the competency of visual SLAM systems to build richer maps. Specifically, we concentrate on modelling indoor scenes using semantically meaningful surfaces and accompanying labels, such as “floor”, “wall”, and “ceiling”, an important step towards a representation that can support higher-level reasoning and planning. We leverage the Manhattan world assumption and show how to extract vanishing directions jointly across a video stream. We then propose a guided line detector that utilises known vanishing points to extract extremely subtle axis-aligned edges. We utilise recent advances in single-view structure recovery to build geometric scene models and demonstrate our system operating on-line.
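One building block, vanishing-point estimation from image lines, can be sketched as a least-squares null-space problem (the two lines below are hypothetical; the paper estimates vanishing directions jointly across a whole video stream):

```python
import numpy as np

def vanishing_point(lines):
    """Least-squares vanishing point of a set of image lines.

    Lines are given in homogeneous form l = (a, b, c) with l . p = 0.
    The vanishing point is the null direction of the stacked line
    matrix, recovered with an SVD; with exactly two lines this is
    simply their cross product.
    """
    L = np.array(lines, dtype=float)
    _, _, Vt = np.linalg.svd(L)
    p = Vt[-1]
    return p / p[2]       # normalise (assumes a finite vanishing point)

# Two hypothetical lines meeting at (2, 1): x - y - 1 = 0 and x + y - 3 = 0.
l1 = np.array([1.0, -1.0, -1.0])
l2 = np.array([1.0, 1.0, -3.0])
vp = vanishing_point([l1, l2])
```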


Intelligent Robots and Systems | 2010

Closing loops without places

Christopher Mei; Gabe Sibley; Paul Newman

This paper proposes a new topo-metric representation of the world based on co-visibility that simplifies data association and improves the performance of appearance-based recognition. We introduce the concept of dynamic bag-of-words, which is a novel form of query expansion based on finding cliques in the landmark co-visibility graph. The proposed approach avoids the (often arbitrary) discretisation of space from the robot's trajectory that is common to most image-based loop closure algorithms. Instead we show that reasoning on sets of co-visible landmarks leads to a simple model that out-performs pose-based or view-based approaches. Using real and simulated imagery, we demonstrate that dynamic bag-of-words query expansion can improve precision and recall for appearance-based localisation.
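A hypothetical, simplified sketch of co-visibility-based query expansion: starting from the landmarks seen in a query image, pull in landmarks that were frequently observed together with them. The real system reasons on cliques of the co-visibility graph; this one-hop expansion only illustrates the idea:

```python
def expand_query(seed, covis, min_shared=1):
    """Dynamic bag-of-words style query expansion (simplified).

    covis maps each landmark id to {neighbour id: number of frames in
    which the two landmarks were seen together}. Neighbours that share
    at least min_shared frames with a seed landmark join the query set.
    """
    expanded = set(seed)
    for lm in seed:
        for neighbour, count in covis.get(lm, {}).items():
            if count >= min_shared:
                expanded.add(neighbour)
    return expanded

# Hypothetical co-visibility graph over four landmarks.
covis = {1: {2: 5, 3: 1}, 2: {1: 5, 4: 2}, 3: {1: 1}}
q = expand_query({1}, covis, min_shared=2)   # landmark 3 is too weakly linked
```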


Intelligent Robots and Systems | 2006

Homography-based Tracking for Central Catadioptric Cameras

Christopher Mei; Selim Benhimane; Ezio Malis; Patrick Rives

This paper presents a parametric approach for tracking piecewise planar scenes with central catadioptric cameras (including perspective cameras). We extend the standard notion of homography to this wider range of devices through the unified projection model on the sphere. We avoid unwarping the image to a perspective view and take into account the non-uniform pixel resolution specific to non-perspective central catadioptric sensors. The homography is parametrised by the Lie algebra of the special linear group SL(3) to ensure that only eight free parameters are estimated. With this model, we use an efficient second-order minimisation technique leading to a fast tracking algorithm with a complexity similar to a first-order approach. The developed algorithm was tested on the estimation of the displacement of a mobile robot in a real application and proved to be very precise.
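The SL(3) parametrisation can be sketched directly: a homography is the matrix exponential of a linear combination of the eight traceless generators of the Lie algebra sl(3), so det(H) = 1 holds by construction and exactly eight parameters are free. The generator ordering and parameter values below are hypothetical:

```python
import numpy as np
from scipy.linalg import expm

# Eight generators of sl(3): traceless 3x3 matrices spanning translation,
# shear, anisotropic scale and the two projective terms of a homography.
G = [np.array(g, dtype=float) for g in [
    [[0, 0, 1], [0, 0, 0], [0, 0, 0]],     # x translation
    [[0, 0, 0], [0, 0, 1], [0, 0, 0]],     # y translation
    [[0, 1, 0], [0, 0, 0], [0, 0, 0]],     # shear
    [[0, 0, 0], [1, 0, 0], [0, 0, 0]],     # shear (rotation with the one above)
    [[1, 0, 0], [0, -1, 0], [0, 0, 0]],    # anisotropic scale
    [[0, 0, 0], [0, 1, 0], [0, 0, -1]],    # anisotropic scale
    [[0, 0, 0], [0, 0, 0], [1, 0, 0]],     # projective term
    [[0, 0, 0], [0, 0, 0], [0, 1, 0]],     # projective term
]]

def homography(params):
    """Map eight sl(3) parameters to an SL(3) homography via expm.

    Since every generator is traceless, det(expm(A)) = exp(tr(A)) = 1,
    so the resulting homography always lies in SL(3).
    """
    A = sum(p * g for p, g in zip(params, G))
    return expm(A)

H = homography([0.1, -0.2, 0.05, 0.0, 0.02, 0.0, 1e-4, -1e-4])
```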

Collaboration


An overview of Christopher Mei's collaborations.

Top Co-Authors


Ian D. Reid

University of Adelaide


Gabe Sibley

University of Colorado Boulder
