
Publications


Featured research published by Ben Tordoff.


European Conference on Computer Vision | 2002

Guided Sampling and Consensus for Motion Estimation

Ben Tordoff; David W. Murray

We present techniques for improving the speed of robust motion estimation based on random sampling of image features. Starting from Torr and Zisserman's MLESAC algorithm, we address some of the problems it poses from both practical and theoretical standpoints, and in doing so allow the random search to be replaced by a guided search. Guidance of the search is based on readily available information which is usually discarded but can significantly reduce the search time. This guided-sampling algorithm is further specialised for the tracking of multiple motions, for which results are presented.
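
The core idea lends itself to a short sketch. Below is a minimal, illustrative guided-sampling consensus loop in Python: minimal sample sets are drawn with probability proportional to a per-match quality score rather than uniformly, as the paper proposes. The function names and the generic fit/error callbacks are assumptions for illustration, not the authors' code.

```python
import numpy as np

def guided_sample(scores, k, rng):
    """Draw k match indices with probability proportional to their
    quality scores, rather than uniformly as in plain RANSAC."""
    p = np.asarray(scores, dtype=float)
    p = p / p.sum()
    return rng.choice(len(p), size=k, replace=False, p=p)

def guided_consensus(matches, scores, fit, error, k, thresh, iters=500, seed=0):
    """Generic guided-sampling consensus loop (illustrative only):
    fit() estimates a motion model from k matches, error() returns a
    per-match residual under that model."""
    rng = np.random.default_rng(seed)
    best_model, best_support = None, -1
    for _ in range(iters):
        idx = guided_sample(scores, k, rng)
        model = fit([matches[i] for i in idx])
        support = sum(error(model, m) < thresh for m in matches)
        if support > best_support:
            best_model, best_support = model, support
    return best_model, best_support
```

Because good samples are tried earlier, the loop typically reaches a high-support model in far fewer iterations than uniform sampling.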


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2005

Guided-MLESAC: faster image transform estimation by using matching priors

Ben Tordoff; David W. Murray

MLESAC is an established algorithm for maximum-likelihood estimation by random sampling consensus, devised for computing multiview entities like the fundamental matrix from correspondences between image features. A shortcoming of the method is that it assumes little is known about the prior probabilities of the validities of the correspondences. This paper explains the consequences of that omission and describes how the algorithm's theoretical standing and practical performance can be enhanced by deriving estimates of these prior probabilities. Using the priors in guided-MLESAC is found to give an order-of-magnitude speed increase for problems where the correspondences are described by one image transformation and clutter. This paper describes two further modifications to guided-MLESAC. The first shows how all putative matches from a particular feature, rather than just the best, can be taken forward into the sampling stage, albeit at the expense of additional computation. The second suggests how to propagate the output from one frame forward to successive frames. The additional information makes guided-MLESAC computationally realistic at video rates for correspondence sets modeled by two transformations and clutter.
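
One way to realise the matching priors the paper calls for is to model the score distributions of valid and invalid matches and apply Bayes' rule. The sketch below does this with hypothetical Gaussian score densities; the constants are placeholders, and the paper's own derivation from matching scores is more involved. The resulting prior can replace the uniform weights in a sampling loop like the one above.

```python
from scipy.stats import norm

# Hypothetical score densities for valid and invalid matches; in
# practice these would be learned from labelled correspondences.
P_VALID = 0.5                       # assumed overall inlier fraction
valid_pdf = norm(0.9, 0.05).pdf     # correlation scores of true matches
invalid_pdf = norm(0.3, 0.20).pdf   # correlation scores of mismatches

def validity_prior(score):
    """Bayes' rule: prior probability that a correspondence is valid
    given its similarity score (placeholder densities above)."""
    pv = valid_pdf(score) * P_VALID
    pi = invalid_pdf(score) * (1.0 - P_VALID)
    return pv / (pv + pi)
```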


International Symposium on Wearable Computers | 2000

Wearable visual robots

Walterio W. Mayol; Ben Tordoff; David W. Murray

Research work reported in the literature in wearable visual computing has used exclusively static (or non-active) cameras, making the imagery and image measurements dependent on the wearer's posture and motions. It is assumed that the camera is pointing in a good direction to view relevant parts of the scene, at best by virtue of being mounted on the wearer's head, or at worst wholly by chance. Even when pointing in roughly the correct direction, any visual processing relying on feature correspondence from a passive camera is made more difficult by the large, uncontrolled inter-image movements which occur when the wearer moves, or even breathes. This paper presents a wearable active visual sensor which is able to achieve a level of decoupling of camera movement from the wearer's posture and motions by a combination of inertial and visual sensor feedback and active control. The issues of sensor placement, robot kinematics and their relation to wearability are discussed. The performance of the prototype robot is evaluated for some essential visual tasks. The paper also discusses potential applications for this kind of wearable robot.
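
To make the decoupling idea concrete, here is a toy single-axis control step combining fast inertial compensation with slow visual drift correction. It is a sketch under assumed gain and variable names, not the robot's actual controller.

```python
def stabilise_pan(pan, body_rate, visual_error, dt, k_gyro=1.0, k_vis=0.2):
    """One single-axis control step: counter-rotate against the
    measured body rotation rate (fast, inertial) and bleed off the
    residual drift using an image-based error (slow, visual).
    Gains and names are illustrative assumptions."""
    pan -= k_gyro * body_rate * dt    # cancel the wearer's rotation
    pan -= k_vis * visual_error * dt  # correct slow gyro drift visually
    return pan
```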


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2004

Reactive control of zoom while fixating using perspective and affine cameras

Ben Tordoff; David W. Murray

This paper describes reactive visual methods of controlling the zoom setting of the lens of an active camera while fixating upon an object. The first method assumes a perspective projection and adjusts zoom to preserve the ratio of focal length to scene depth. The active camera is constrained to rotate, permitting self-calibration from the image motion of points on the static background. A planar structure from motion algorithm is used to recover the depth of the foreground. The foreground-background segmentation exploits the properties of the two different interimage homographies which are observed. The fixation point is updated by transfer via the observed planar structure. The planar method is shown to work on real imagery, but results from simulated data suggest that its extension to general 3D structure is problematical under realistic viewing and noise regimes. The second method assumes an affine projection. It requires no self-calibration and the zooming camera may move generally. Fixation is again updated using transfer, but now via the affine structure recovered by factorization. Analysis of the projection matrices allows the relative scale of the affine bases in different views to be found in a number of ways and, hence, controlled to unity. The various ways are compared and the best used on real imagery captured from an active camera fitted with a controllable zoom lens in both look-move and continuous operation.
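
The perspective method's zoom rule reduces to a one-line update: to preserve the ratio of focal length to scene depth, scale the focal length by the ratio of new to old depth. The sketch below is a distillation of that stated rule with assumed names, plus an illustrative clamp to a lens's zoom range.

```python
def zoom_update(f, depth_old, depth_new, f_min=4.0, f_max=48.0):
    """Scale focal length with scene depth so that f / Z, and hence
    the fixated object's image size, stays constant. The zoom-range
    limits (in mm) are illustrative placeholders."""
    f_new = f * (depth_new / depth_old)
    return min(max(f_new, f_min), f_max)
```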


British Machine Vision Conference | 2005

Hole Filling Through Photomontage

Marta Wilczkowiak; Gabriel J. Brostow; Ben Tordoff; Roberto Cipolla

To fill holes in photographs of structured, man-made environments, we propose a technique which automatically adjusts and clones large image patches that have similar structure. These source patches can come from elsewhere in the same image, or from other images shot from different perspectives. Two significant developments of this work are the ability to automatically detect and adjust source patches whose macrostructure is compatible with the hole region, and alternatively, to interactively specify a user's desired search regions. In contrast to existing photomontage algorithms, which either synthesize microstructure or require careful user interaction to fill holes, our approach handles macrostructure with an adjustable degree of automation.
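
As a rough sketch of the source-patch search, candidate patches can be scored by agreement with the known pixels surrounding the hole, for example by a masked sum of squared differences. This is a crude stand-in for the paper's macrostructure matching, with illustrative function names.

```python
import numpy as np

def patch_cost(target, source, known):
    """Sum of squared differences over the known (non-hole) pixels
    only; 'known' is a boolean mask over the target patch."""
    diff = (target.astype(float) - source.astype(float))[known]
    return float(np.sum(diff ** 2))

def best_source(target, known, candidates):
    """Pick the candidate source patch whose structure best agrees
    with the known surround of the hole."""
    return min(candidates, key=lambda s: patch_cost(target, s, known))
```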


International Conference on Pattern Recognition | 2000

Violating rotating camera geometry: the effect of radial distortion on self-calibration

Ben Tordoff; David W. Murray

We show that radial distortion of images invalidates the geometric constraint on which self-calibration of a rotating camera is based, namely that 3D lines drawn between matched features all intersect at the rotation centre. We develop a geometric picture showing how radial distortion violates this constraint and discuss the implications for self-calibration of a rotating camera. In particular, we show that the behaviour of self-calibration is markedly different for pin-cushion and barrel distortion, the latter causing self-calibration to be unreliable or to fail completely. A method is presented for automatically estimating the radial distortion over a sequence of images when both distortion and camera internal parameters vary. We discuss when such an approach will work and whether accurate automatic calibration of a rotating camera is really possible.
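
For reference, a common polynomial model of radial distortion is sketched below; under this forward mapping, barrel distortion corresponds to k1 < 0 and pin-cushion to k1 > 0, matching the distinction the paper draws. The model is standard, but the paper's estimation procedure is not reproduced here and sign conventions vary between formulations.

```python
def distort(x, y, k1, k2=0.0):
    """Polynomial radial distortion about the image centre for
    normalised image coordinates: barrel for k1 < 0, pin-cushion
    for k1 > 0 under this forward mapping."""
    r2 = x * x + y * y
    s = 1.0 + k1 * r2 + k2 * r2 * r2
    return s * x, s * y
```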


Computer Vision and Image Understanding | 2007

A method of reactive zoom control from uncertainty in tracking

Ben Tordoff; David W. Murray

The tuning of a constant-velocity Kalman filter, used for tracking by a camera fitted with a variable focal-length lens, is shown to be preserved under a scale change in process noise if accompanied by an inverse scaling in the focal length, provided the image measurement error is of fixed size in image coordinates. Based on this observation, a practical method of zoom control has been built by setting an upper limit on the probability that the innovation (and hence the fixation error) exceeds the image half-width. The innovation covariance matrix used to determine the innovation limit is derived over two timescales, which enables a rapid zooming-out response and slower zooming-in. Experimental simulations are presented, before results are given from a video-rate implementation using a camera with two motorized orientation axes and fitted with a computer-controlled zoom lens. The delays in the feedback loops, comprising image capture delay, platform response lag and zoom-lens response lag, are carefully calibrated by fitting to their frequency responses. It is found that the cumulative uncertainty in delay gives rise to an image error which is part constant and part proportional to focal length, resulting in a beneficial adaptation of the filter.
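
A back-of-envelope version of the innovation limit can be written down directly: if angular tracking uncertainty maps to roughly f * sigma_angle pixels at focal length f, and measurement noise is a fixed sigma_pix pixels, then requiring an n-sigma error to fit inside the image half-width bounds the admissible focal length. The parameter names are assumptions, and the two-timescale covariance derivation from the paper is ignored.

```python
import math

def max_focal_length(sigma_angle, half_width_px, sigma_pix, n_sigma=3.0):
    """Largest focal length (in pixels) at which an n-sigma fixation
    error still fits inside the image half-width, assuming angular
    tracking uncertainty maps to f * sigma_angle pixels and the image
    measurement noise sigma_pix is fixed in pixels.
    Solves n^2 * ((f * sigma_angle)^2 + sigma_pix^2) = half_width^2."""
    budget = (half_width_px / n_sigma) ** 2 - sigma_pix ** 2
    if budget <= 0:
        raise ValueError("measurement noise alone exceeds the error budget")
    return math.sqrt(budget) / sigma_angle
```

Zooming out (reducing f) when the current focal length exceeds this bound keeps the target inside the image with the chosen probability.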


British Machine Vision Conference | 2004

Interaction between hand and wearable camera in 2D and 3D environments

Walterio W. Mayol; Andrew J. Davison; Ben Tordoff; Nicholas Molton; David W. Murray

This paper is concerned with allowing the user of a wearable, portable vision system to interact with the visual information using hand movements and gestures. Two example scenarios are explored. The first, in 2D, uses the wearer's hand both to guide an active wearable camera and to highlight objects of interest using a grasping vector. The second is based in 3D, and builds on earlier work which recovers 3D scene structure at video-rate, allowing real-time purposive redirection of the camera to any scene point. Here, a range of hand gestures is used to highlight and select 3D points within the structure, in this instance to insert 3D graphical objects into the scene. Structure recovery, gesture recognition, scene annotation and augmentation are achieved in parallel and at video-rate.


Springer Tracts in Advanced Robotics (15) | 2005

Applying Active Vision and SLAM to Wearables

Walterio W. Mayol; Andrew J. Davison; Ben Tordoff; David W. Murray

This paper reviews aspects of the design and construction of an active wearable camera, and describes progress in equipping it with visual processing for reactive tasks like orientation stabilisation, slaving from head movements, and 2D tracking. The paper goes on to describe a first application of frame-rate simultaneous localisation and mapping (SLAM) to the wearable camera. Though relevant for any single camera undergoing general motion, the approach has particular benefits in wearable vision, allowing extended periods of purposive fixation followed by controlled redirection of gaze to other parts of the scene.


International Conference on Robotics and Automation | 2002

Designing a miniature wearable visual robot

Walterio W. Mayol; Ben Tordoff; David W. Murray

We report on two methods we have developed to aid in the design of a wearable visual robot: a body-mounted robot whose main sensor is a camera. Specifically, we have first refined the analysis of sensor placement through the computation of the field of view and body motion using a 3D model of the human form. Second, we have improved the design of the robot's morphology with the help of an optimization algorithm based on the Pareto front, within constraints set by the overall choice of robot kinematic chain and the need to specify obtainable actuators and sensors. The methods could be of use for the design and performance evaluation of rather different kinds of wearable robots and devices.
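
The Pareto-front filter at the heart of such a morphology optimisation is easy to sketch: keep every candidate design that no other candidate beats in all objectives at once. The objective names in the example are hypothetical; the paper's actual criteria and search are richer.

```python
def dominates(a, b):
    """a dominates b if a is no worse in every objective and strictly
    better in at least one (all objectives minimised)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(costs):
    """Brute-force non-dominated filter over candidate designs, each
    scored on several objectives (an illustrative stand-in for the
    paper's morphology optimisation)."""
    return [c for c in costs
            if not any(dominates(o, c) for o in costs if o is not c)]

# Hypothetical objectives: (field-of-view loss, mass, power)
designs = [(0.2, 0.9, 3.0), (0.1, 1.1, 2.5), (0.3, 1.0, 3.5), (0.1, 0.8, 4.0)]
print(pareto_front(designs))  # the third design is dominated by the first
```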
