Andrew D Calway
University of Bristol
Publications
Featured research published by Andrew D Calway.
British Machine Vision Conference | 2005
Mark Pupilli; Andrew D Calway
We describe a particle filtering method for vision-based tracking of a hand-held calibrated camera in real-time. The ability of the particle filter to deal with non-linearities and non-Gaussian statistics suggests the potential to provide improved robustness over existing approaches, such as those based on the Kalman filter. In our approach, the particle filter provides recursive approximations to the posterior density for the 3-D motion parameters. The measurements are inlier/outlier counts of likely correspondence matches for a set of salient points in the scene. The algorithm is simple to implement and we present results illustrating good tracking performance using a ‘live’ camera. We also demonstrate the potential robustness of the method, including the ability to recover from loss of track and to deal with severe occlusion.
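To make the recursion concrete, the following is a minimal sketch, not the authors' code, of one predict/weight/resample cycle with the inlier-count likelihood described above; the `project` helper, the random-walk noise level, and the pixel tolerance are all assumptions:

```python
# Minimal sketch of one particle-filter cycle with an inlier-count likelihood.
import numpy as np

def pf_step(particles, points_3d, detections, project, noise_std=0.01, tol=3.0):
    """particles  : (N, 6) camera pose hypotheses (rotation + translation)
    points_3d  : (M, 3) salient scene points, assumed known
    detections : (M, 2) measured image positions of likely matches
    project    : project(pose, points_3d) -> (M, 2) predicted pixels (assumed)"""
    N = len(particles)
    # Predict: diffuse the pose hypotheses with random-walk dynamics.
    particles = particles + np.random.normal(0.0, noise_std, particles.shape)
    # Weight: likelihood is the count of matches within `tol` pixels (inliers).
    weights = np.empty(N)
    for i in range(N):
        pred = project(particles[i], points_3d)
        weights[i] = np.sum(np.linalg.norm(pred - detections, axis=1) < tol) + 1e-6
    weights = weights / weights.sum()
    # Resample: concentrate the particle set on well-supported poses.
    idx = np.random.choice(N, size=N, p=weights)
    return particles[idx]
```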
International Symposium on Visual Computing | 2006
Denis Chekhlov; Mark Pupilli; Walterio W. Mayol-Cuevas; Andrew D Calway
We describe a robust system for vision-based SLAM using a single camera which runs in real-time, typically around 30 fps. The key contribution is a novel utilisation of multi-resolution descriptors in a coherent top-down framework. The resulting system provides superior performance over previous methods in terms of robustness to erratic motion, camera shake, and the ability to recover from measurement loss. SLAM itself is implemented within an unscented Kalman filter framework based on a constant position motion model, which is also shown to provide further resilience to non-smooth camera motion. Results are presented illustrating successful SLAM operation for challenging hand-held camera movement within desktop environments.
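The constant-position motion model is simple enough to illustrate directly: the time update leaves the state mean untouched, and process noise on the camera block absorbs whatever motion actually occurred. A sketch under an assumed state layout, illustrative rather than the paper's implementation:

```python
# Sketch of a constant-position time update for filter-based SLAM.
import numpy as np

def predict_constant_position(x, P, Q):
    """x : state mean (camera pose first, then map features)
    P : state covariance
    Q : process noise, non-zero only on the camera-pose block (assumed)"""
    # f(x) = x: the predicted pose equals the previous pose, so erratic
    # motion shows up purely as inflated pose uncertainty via Q.
    return x.copy(), P + Q
```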
IEEE Transactions on Robotics | 2008
Andrew P. Gee; Denis Chekhlov; Andrew D Calway; Walterio W. Mayol-Cuevas
In this paper, we describe a novel method for discovering and incorporating higher-level map structure in a real-time visual simultaneous localization and mapping (SLAM) system. Previous approaches use sparse maps populated by isolated features such as 3-D points or edgelets. Although this facilitates efficient localization, it yields very limited scene representation and ignores the inherent redundancy among features resulting from physical structure in the scene. In this paper, higher-level structure, in the form of lines and surfaces, is discovered concurrently with SLAM operation and then incorporated into the map in a rigorous manner, attempting to maintain important cross-covariance information and allow consistent update of the feature parameters. This is achieved by using a bottom-up process, in which subsets of low-level features are "folded in" to a parameterization of an associated higher-level feature, thus collapsing the state space as well as building structure into the map. We demonstrate and analyze the effects of the approach for the cases of line and plane discovery, both in simulation and within a real-time system operating with a handheld camera in an office environment.
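As a hypothetical illustration of the "folding in" idea for planes: once a plane is discovered, each member point's three state dimensions can collapse to two in-plane coordinates. The parameterization below is my own illustration, not necessarily the paper's exact choice:

```python
# Folding point features into a discovered plane shrinks the SLAM state:
# each member point goes from 3 state dimensions to 2 in-plane coordinates.
import numpy as np

def fold_points_into_plane(points, n, d):
    """Re-express 3-D points as 2-D coordinates on the plane n.x + d = 0."""
    n = n / np.linalg.norm(n)
    u = np.cross(n, [0.0, 0.0, 1.0])      # build an in-plane basis (u, v)
    if np.linalg.norm(u) < 1e-6:          # normal parallel to z: use x instead
        u = np.cross(n, [1.0, 0.0, 0.0])
    u = u / np.linalg.norm(u)
    v = np.cross(n, u)
    origin = -d * n                       # plane point closest to the origin
    return np.array([[(p - origin) @ u, (p - origin) @ v] for p in points])
```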
Intelligent Robots and Systems | 2012
Dima Damen; Andrew P. Gee; Walterio W. Mayol-Cuevas; Andrew D Calway
We describe an integrated system for personal workspace monitoring based around an RGB-D sensor. The approach is egocentric, facilitating full flexibility, and operates in real-time, providing object detection and recognition, and 3D trajectory estimation, whilst the user undertakes tasks in the workspace. A prototype on-body system developed in the context of work-flow analysis for industrial manipulation and assembly tasks is described. The system is evaluated on two tasks with multiple users, and results indicate that the method is effective, giving good accuracy.
Computer Vision and Pattern Recognition | 2007
Denis Chekhlov; Mark Pupilli; Walterio W. Mayol; Andrew D Calway
Two major limitations of real-time visual SLAM algorithms are the restricted range of views over which they can operate and their lack of robustness when faced with erratic camera motion or severe visual occlusion. In this paper we describe a visual SLAM algorithm which addresses both of these problems. The key component is a novel feature description method which is both fast and capable of repeatable correspondence matching over a wide range of viewing angles and scales. This is achieved in real-time by using a SIFT-like spatial gradient descriptor in conjunction with efficient scale prediction and exemplar-based feature representation. Results are presented illustrating robust real-time SLAM operation within an office environment.
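A rough sketch of a SIFT-like spatial gradient descriptor of the kind described, with orientation histograms accumulated over a grid of sub-regions and normalised for illumination; the grid size and bin count here are assumptions, not the paper's values:

```python
# SIFT-like descriptor: per-cell gradient-orientation histograms, weighted
# by gradient magnitude, concatenated and L2-normalised.
import numpy as np

def gradient_descriptor(patch, grid=4, bins=8):
    """patch: square grayscale array; returns a grid*grid*bins vector."""
    gy, gx = np.gradient(patch.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), 2 * np.pi)
    cell = patch.shape[0] // grid
    desc = []
    for r in range(grid):
        for c in range(grid):
            sl = (slice(r * cell, (r + 1) * cell),
                  slice(c * cell, (c + 1) * cell))
            hist, _ = np.histogram(ang[sl], bins=bins, range=(0, 2 * np.pi),
                                   weights=mag[sl])
            desc.append(hist)
    desc = np.concatenate(desc)
    return desc / (np.linalg.norm(desc) + 1e-9)   # illumination normalisation
```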
International Symposium on Mixed and Augmented Reality | 2007
Denis Chekhlov; Andrew P. Gee; Andrew D Calway; Walterio W. Mayol-Cuevas
Most work in visual augmented reality (AR) employs predefined markers or models that simplify the algorithms needed for sensor positioning and augmentation, but at the cost of imposing restrictions on the areas of operation and on interactivity. This paper presents a simple game in which an AR agent has to navigate using real planar surfaces on objects that are dynamically added to an unprepared environment. An extended Kalman filter (EKF) simultaneous localisation and mapping (SLAM) framework with automatic plane discovery is used to enable the player to interactively build a structured map of the game environment using a single, agile camera. By using SLAM, we are able to achieve real-time interactivity and maintain rigorous estimates of the system's uncertainty, which enables the effects of high-quality estimates to be propagated to other features (points and planes) even if they are outside the camera's current field of view.
British Machine Vision Conference | 2002
David Tweed; Andrew D Calway
We describe a novel extension to the CONDENSATION algorithm for tracking multiple objects of the same type. Previous extensions for multiple object tracking do not scale effectively to large numbers of objects. The new approach – subordinated CONDENSATION – deals effectively with arbitrary numbers of objects in an efficient manner, providing a robust means of tracking individual objects across heavily populated and cluttered scenes. The key innovation is the introduction of bindings (subordination) amongst particles which enables multiple occlusions to be handled in a natural way within the standard CONDENSATION framework. The effectiveness of the approach is demonstrated by tracking multiple animals of the same species in cluttered wildlife footage.
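A loose paraphrase of the binding idea, not the paper's formulation: while an object is occluded, its particles are subordinated to the occluder and propagated relative to its motion rather than being measured directly, so the track survives until the object reappears:

```python
# Subordinated propagation: a bound object's particles inherit the
# occluder's estimated motion instead of being measured directly.
import numpy as np

def propagate(particles, bound_to, occluder_motion, noise=2.0):
    """particles: (N, 2) image-position hypotheses for one object.
    bound_to: None if the object is visible, else the occluder's index.
    occluder_motion: (2,) estimated frame-to-frame shift of that occluder."""
    if bound_to is None:
        drift = np.zeros(2)          # free object: plain random-walk dynamics
    else:
        drift = occluder_motion      # subordinated: follow the occluder
    return particles + drift + np.random.normal(0.0, noise, particles.shape)
```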
International Conference on Pattern Recognition | 2006
Mark Pupilli; Andrew D Calway
We present an algorithm which can track the 3D pose of a hand-held camera in real-time using predefined models of objects in the scene. The technique utilises and extends recently developed techniques for 3D tracking with a particle filter. The novelty is in the use of edge information for 3D tracking, which has not been achieved before within a real-time Bayesian sampling framework. We develop a robust tracker by carefully designing the particle filter observation model: grouping line segments from a known model into 3D junctions and performing fast inlier/outlier counts on projected junction branches. Results demonstrate the ability to track full 3D pose in dense clutter whilst using a minimal number of junctions.
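The observation model lends itself to a short illustrative sketch: project each junction branch into the image and count sampled points that land on strong edges. The `project` helper, the sampling density, and the edge threshold are assumed:

```python
# Inlier counting along projected junction branches; the count serves as
# the (unnormalised) particle weight.
import numpy as np

def junction_inliers(pose, junction, branches, edge_map, project,
                     samples=8, thresh=0.5):
    """junction: (3,) 3-D junction point; branches: list of (3,) endpoints;
    edge_map: 2-D array of edge strengths; project: assumed helper mapping
    (pose, (S, 3) points) -> (S, 2) pixel coordinates."""
    count = 0
    for end in branches:
        # Sample evenly along each 3-D branch, then project the samples.
        ts = np.linspace(0.1, 1.0, samples)
        pts3d = junction[None, :] + ts[:, None] * (end - junction)[None, :]
        for u, v in project(pose, pts3d):
            ui, vi = int(round(u)), int(round(v))
            if (0 <= vi < edge_map.shape[0] and 0 <= ui < edge_map.shape[1]
                    and edge_map[vi, ui] > thresh):
                count += 1               # sample lies on a strong image edge
    return count
```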
British Machine Vision Conference | 2012
Dima Damen; Pished Bunnun; Andrew D Calway; Walterio W. Mayol-Cuevas
We present a method for the learning and detection of multiple rigid texture-less 3D objects, intended to operate at frame-rate speeds on video input. The method is geared for fast and scalable learning and detection by combining tractable extraction of edgelet constellations with library lookup based on rotation- and scale-invariant descriptors. The approach learns object views in real-time and is generative, enabling more objects to be learnt without the need for re-training. During testing, a random sample of edgelet constellations is tested for the presence of known objects. We perform testing of single and multi-object detection on a 30-object dataset, showing detection of any of the objects within milliseconds of their becoming visible. The results show the scalability of the approach and its frame-rate performance.
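One plausible way to build the rotation- and scale-invariant descriptors mentioned above (an assumption, not the paper's exact encoding) is to describe an ordered edgelet constellation by the turning angles between successive segments and the ratios of their lengths, both of which survive rotation and uniform scaling:

```python
# Invariant constellation encoding, suitable as a library lookup key.
import numpy as np

def constellation_descriptor(edgelets):
    """edgelets: (K, 2) ordered edgelet centre positions, K >= 3."""
    vecs = np.diff(edgelets, axis=0)                  # K-1 segments
    lengths = np.linalg.norm(vecs, axis=1)
    angles = np.arctan2(vecs[:, 1], vecs[:, 0])
    turns = np.mod(np.diff(angles), 2 * np.pi)        # rotation-invariant
    ratios = lengths[1:] / (lengths[:-1] + 1e-9)      # scale-invariant
    return np.concatenate([turns, ratios])
```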
Computer Vision and Pattern Recognition | 2006
Mark Pupilli; Andrew D Calway
Simultaneous localisation and mapping using a single camera becomes difficult when erratic motions violate predictive motion models. This problem needs to be addressed when visual SLAM algorithms are transferred from robots or mobile vehicles onto hand-held or wearable devices. In this paper we describe a novel SLAM extension to a camera localisation algorithm based on particle filtering which provides resilience to erratic motion. The mapping component is based on auxiliary unscented Kalman filters coupled to the main particle filter via measurement covariances. This coupling allows the system to survive unpredictable motions such as camera shake, and enables a return to full SLAM operation once normal motion resumes. We present results demonstrating the effectiveness of the approach when operating within a desktop environment.
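Schematically, the coupling might be read as follows (all names and details here are illustrative): each map feature keeps its own unscented Kalman filter, and the camera particle filter weights a pose hypothesis with a Gaussian likelihood built from the feature filter's innovation covariance, so uncertain features count for less:

```python
# Per-feature measurement likelihood used to weight a pose hypothesis.
import numpy as np

def pose_likelihood(pred_uv, meas_uv, S):
    """pred_uv: (2,) predicted pixel under one pose; meas_uv: (2,) measured
    pixel; S: (2, 2) innovation covariance from the feature's own filter."""
    r = meas_uv - pred_uv                    # innovation
    maha = r @ np.linalg.solve(S, r)         # squared Mahalanobis distance
    return np.exp(-0.5 * maha) / (2 * np.pi * np.sqrt(np.linalg.det(S)))
```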