Publication


Featured research published by Bernard F. Buxton.


IEEE Transactions on Pattern Analysis and Machine Intelligence | 1987

Scene Segmentation from Visual Motion Using Global Optimization

David W. Murray; Bernard F. Buxton

This paper presents results from computer experiments with an algorithm to perform scene disposition and motion segmentation from visual motion or optic flow. The maximum a posteriori (MAP) criterion is used to formulate what the best segmentation or interpretation of the scene should be, where the scene is assumed to be made up of some fixed number of moving planar surface patches. The Bayesian approach requires, first, specification of prior expectations for the optic flow field, which here is modeled as spatial and temporal Markov random fields; and, secondly, a way of measuring how well the segmentation predicts the measured flow field. The Markov random fields incorporate the physical constraints that objects and their images are probably spatially continuous, and that their images are likely to move quite smoothly across the image plane. To compute the flow predicted by the segmentation, a recent method for reconstructing the motion and orientation of planar surface facets is used. The search for the globally optimal segmentation is performed using simulated annealing.
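The search procedure the abstract describes, simulated annealing over pixel labelings with an MRF smoothness prior, can be sketched as below. This is a minimal illustration under stated assumptions, not the authors' implementation: the `residual` data costs, the Potts prior, and the cooling schedule are all ours.

```python
import numpy as np

rng = np.random.default_rng(0)

def anneal_segmentation(residual, beta=1.0, t0=4.0, t_min=0.05, cool=0.95):
    """MAP-style segmentation by simulated annealing (illustrative).

    residual : (H, W, K) data costs: how badly each of K candidate
               planar-patch motion models predicts the flow at each pixel.
    beta     : weight of a Potts smoothness prior (the MRF term).
    """
    h, w, k = residual.shape
    labels = rng.integers(0, k, size=(h, w))
    temp = t0
    while temp > t_min:
        for _ in range(h * w):  # one sweep of random single-site updates
            y, x = rng.integers(0, h), rng.integers(0, w)
            old, new = labels[y, x], rng.integers(0, k)
            if new == old:
                continue
            # Data term: change in how well the models predict the flow here.
            d_e = residual[y, x, new] - residual[y, x, old]
            # Potts smoothness term over the 4-neighbourhood.
            for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w:
                    d_e += beta * (int(labels[ny, nx] != new)
                                   - int(labels[ny, nx] != old))
            # Metropolis rule: always accept downhill, sometimes uphill.
            if d_e < 0 or rng.random() < np.exp(-d_e / temp):
                labels[y, x] = new
        temp *= cool  # geometric cooling schedule
    return labels

# Toy usage: two motion models, each preferred on one half of the image.
res = np.zeros((16, 16, 2))
res[:, 8:, 0] = 1.0
res[:, :8, 1] = 1.0
print(anneal_segmentation(res))
```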


Image and Vision Computing | 1984

Computation of optic flow from the motion of edge features in image sequences

Bernard F. Buxton; Hilary Buxton

Three-dimensional scene information relating to the depth and orientations of the visible surfaces may be obtained from the optic flow field in time-varying imagery. The computation of optic flow is therefore an important step in computer vision. We review our work on calculating optic flow from the motion of edge features in an image sequence. The method is based on a spatiotemporal extension of the Marr-Hildreth edge detection scheme that smooths the data over time as well as over the spatial (image) coordinates. Edge features are defined as the zero crossings of the resultant convolution signal, and their motion is obtained to subpixel accuracy by a least-squares interpolation. The details of the method are described and some computational examples are given, including a brief description of how the algorithms may be implemented on a single-instruction multiple-data (SIMD) machine. Some novel effects associated with the choice of metric in the spatiotemporal convolution operator, which may be useful for obtaining the time to contact (depth) of objects in the periphery of the field of view, are discussed.
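A minimal sketch of the two steps the abstract names: spatiotemporal smoothing followed by zero-crossing detection of the convolution signal, with a subpixel crossing estimate. It leans on scipy's `gaussian_filter1d` and `gaussian_laplace`; the exact operator, metric, and least-squares fit used in the paper differ in detail.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d, gaussian_laplace

def spatiotemporal_zero_crossings(frames, sigma_t=1.5, sigma_s=2.0):
    """Smooth a (T, H, W) image sequence over time as well as space,
    take the spatial Laplacian-of-Gaussian per frame, and mark sign
    changes of the convolution signal along x."""
    smoothed = gaussian_filter1d(frames.astype(float), sigma_t, axis=0)
    log = np.stack([gaussian_laplace(f, sigma_s) for f in smoothed])
    return log, (log[..., :-1] * log[..., 1:]) < 0  # zero-crossing mask

def subpixel_crossing(log_row, i):
    """Sub-pixel location of the zero crossing between samples i and i+1
    (a linear fit; the paper uses a least-squares interpolation)."""
    a, b = log_row[i], log_row[i + 1]
    return i + a / (a - b)
```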


Proceedings of the Royal Society of London. Series B, Biological sciences | 1983

Monocular depth perception from optical flow by space time signal processing

Bernard F. Buxton; Hilary Buxton

A theory of monocular depth determination is presented. The effect of finite temporal resolution is incorporated by generalizing the Marr–Hildreth edge detection operator −∇²G(r), where ∇² is the Laplacian and G(r) is a two-dimensional Gaussian. The constraint that the edge detection operator in space–time should produce zero-crossings at the same place in different channels, i.e. at different resolutions of the Gaussian, led to the conclusion that the Marr–Hildreth operator should be replaced by −□²G(r, t), where □² is the d'Alembertian ∇² − (1/u²)(∂²/∂t²) and G(r, t) is a Gaussian in space–time. To ensure that the locations of the zero-crossings are independent of the channel width, G(r, t) has to be isotropic in the sense that the velocity u appearing in the definition of the d'Alembertian must also be used to relate the scales of length and time in G. However, the new operator −□²G(r, t) produces two types of zero-crossing for each isolated edge feature in the image I(r, t). One of these, termed the 'static edge', corresponds to the position of the image edge at time t as defined by ∇²I(r, t) = 0; the other, called a 'depth zero', depends only on the relative motion of the observer and object and is usually found only in the periphery of the field of view. When an edge feature is itself in the periphery of the visual field and these zeros coincide, there is an additional cross-over effect. It is shown how these zero-crossings may be used to infer the depth of an object when the observer and object are in relative motion. If an edge feature is near the centre of the image (i.e. near the focus of expansion), the spatial and temporal slopes of the zero crossings at the static edge may be used to infer the depth, but, if the edge feature is in the periphery of the image, the cross-over effect enables the depth to be obtained immediately. While the former utilizes sharp spatial and temporal resolution to give detailed three-dimensional information, the cross-over effect relies on longer integration times to give a direct measure of the time-to-contact. We propose that both mechanisms could be used to extract depth information in computer vision systems, and speculate on how our theory could be used to model depth perception in early visual processing in humans, where there is evidence both of monocular perception of the environment in depth and of looming detection in the periphery of the field of view. In addition it is shown how a number of previous models are included in our theory, in particular the directional sensor proposed by Marr & Ullman and a method of depth determination proposed by Prazdny.
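For the isotropic space–time Gaussian the abstract defines, the operator −□²G(r, t) has a simple closed form, sketched below (normalization omitted; variable names are ours).

```python
import numpy as np

def neg_dalembertian_gaussian(x, y, t, sigma, u):
    """-box^2 G(r, t) for G = exp(-(x^2 + y^2 + u^2 t^2) / (2 sigma^2)),
    where box^2 = nabla^2 - (1/u^2) d^2/dt^2 and the velocity u couples
    the length and time scales, as the isotropy condition requires."""
    g = np.exp(-(x**2 + y**2 + (u * t)**2) / (2 * sigma**2))
    return -((x**2 + y**2 - (u * t)**2) / sigma**4 - 1.0 / sigma**2) * g
```

Setting the bracketed factor to zero gives the kernel's own zero-crossing surface x² + y² − u²t² = σ², a space–time hyperboloid rather than the circle produced by the purely spatial operator, which is one way to see why a second family of zeros appears.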


Computer Vision, Graphics, and Image Processing | 1985

Convolution with separable masks for early image processing

J. S. Wiejak; Hilary Buxton; Bernard F. Buxton

Computational techniques for implementing the Marr-Hildreth edge detection operator −∇²G(r), and its space-time extensions, are considered. It is shown how this kind of convolution operation may be carried out simply and efficiently by factorizing the mask and performing the multidimensional convolution as a sequence of one-dimensional convolutions. For a d-dimensional mask of size n, the computational effort required to carry out the sequence of 1D convolutions varies roughly as dn, compared with n^d for a direct multidimensional convolution. Computational examples carried out on an SIMD machine (an ICL Distributed Array Processor, DAP) are described, and it is shown that convolution with masks of radius 8 on 64 × 64 images can be carried out in 13 ms in two dimensions (mask support ≈ 200 pixels) and 21 ms in three dimensions (support ≈ 2000 pixels). A brief comparison is made with the FFT technique for performing the convolution.
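The factorization is easy to demonstrate: a separable d-dimensional mask is applied as d one-dimensional passes, so the per-pixel cost grows as dn rather than n^d. A short sketch using scipy follows; note that −∇²G is not itself separable, but it splits into a sum of two separable terms, g''(x)g(y) + g(x)g''(y), which is the structure exploited below.

```python
import numpy as np
from scipy.ndimage import convolve1d

def separable_convolve(image, kernels_1d):
    """Apply a separable mask as one 1D convolution per axis:
    cost ~ d*n per pixel instead of n**d for the full mask."""
    out = np.asarray(image, dtype=float)
    for axis, k in enumerate(kernels_1d):
        out = convolve1d(out, k, axis=axis, mode="nearest")
    return out

# Sampled 1D Gaussian factor and its second derivative (illustrative).
sigma, n = 3.0, 17
x = np.arange(n) - n // 2
g = np.exp(-x**2 / (2 * sigma**2))
g /= g.sum()
gpp = (x**2 / sigma**4 - 1 / sigma**2) * g  # second derivative of g

img = np.random.default_rng(1).random((64, 64))
# -Laplacian-of-Gaussian as a sum of two separable convolutions.
log = -(separable_convolve(img, [gpp, g]) + separable_convolve(img, [g, gpp]))
```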


Image and Vision Computing | 1985

Optic flow segmentation as an ill-posed and maximum likelihood problem

Bernard F. Buxton; David W. Murray

It is shown how the segmentation problem encountered in the interpretation of visual motion, for example, may be formulated as an ill-posed problem, using the notion of maximum likelihood to provide a general framework and to guide the choice of regularizing constraints. The statistical consequences of the segmentation procedure proposed are examined, and it is shown how the notion of maximum likelihood leads to a natural way of estimating parameters in the optimization function, especially the noise levels to be assigned. A minimum-entropy regularization constraint is then used to ensure that the interpretation of the visual data elicits as much spatial structure as possible. It is shown by means of a 'toy' optic flow example how this is achieved when there are several parameter dimensions over which to segment.
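In modern notation, the kind of objective the abstract describes pairs a likelihood (data) term carrying an explicit noise level with a regularizer, here a minimum-entropy term over segment assignments. The symbols below are illustrative, not the paper's.

```latex
% Illustrative regularized maximum-likelihood objective (notation ours):
\[
  E(f) \;=\; \frac{1}{2\sigma^2}\,
      \bigl\lVert v_{\mathrm{measured}} - v_{\mathrm{predicted}}(f) \bigr\rVert^2
  \;-\; \lambda \sum_k p_k(f)\,\log p_k(f)
\]
% Maximum likelihood fixes sigma naturally from the residuals; the second
% term penalizes high-entropy (unstructured) segment assignments p_k, so
% minimizing E favours interpretations with as much structure as possible.
```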


Image and Vision Computing | 1988

Matching Canny edgels to compute the principal components of optic flow

David A. Castelow; David W. Murray; Guy L. Scott; Bernard F. Buxton

A relaxation algorithm for the computation of optic flow at edge elements (edgels) is presented. Flow is estimated only at intensity edges of the image; edgels, extracted from an intensity image, are used as the basis for the algorithm. A matching-strength or weight surface is computed around each edgel, and neighbourhood support is obtained to enhance the matching strength. A principal-moments method is used to determine the flow from this weight surface. The output of the algorithm is, for each edgel, a pair of orthogonal components of the flow estimate; associated with each component is a confidence measure. Examples of the output of the algorithm are given, and tests of its accuracy are discussed.
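One plausible reading of the principal-moments step: treat the neighbourhood-supported weight surface around an edgel as a distribution over displacements, take its first moment as the flow, and use the second-moment eigenvectors as the two orthogonal components, with the spread along each axis giving an inverse confidence. The names and the exact confidence definition below are ours, not the paper's.

```python
import numpy as np

def principal_flow_components(weight, dy, dx):
    """Flow estimate from a matching-strength (weight) surface.

    weight : (H, W) non-negative matching strengths around an edgel;
    dy, dx : (H, W) displacement that each cell of the surface encodes.
    Returns two orthogonal flow components and a confidence for each.
    """
    w = weight / weight.sum()
    mean = np.array([np.sum(w * dy), np.sum(w * dx)])    # first moment
    d = np.stack([dy - mean[0], dx - mean[1]])
    cov = np.einsum('iyx,jyx->ij', w * d, d)             # second moments
    evals, evecs = np.linalg.eigh(cov)
    components = [(mean @ evecs[:, i]) * evecs[:, i] for i in range(2)]
    confidence = 1.0 / (evals + 1e-9)  # tight along an axis -> confident
    return components, confidence
```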


Parallel Computing | 1988

Parallel matching and reconstruction algorithms in computer vision

A. Kashko; Hilary Buxton; Bernard F. Buxton; David A. Castelow

Parallel implementations of a number of computer vision algorithms for visual reconstruction and matching are described. The algorithms chosen range in complexity from those requiring only image correlation and other simple operations to more sophisticated iterative algorithms involving relaxation, simulated annealing and graduated non-convexity. In each case the algorithms are implemented in parallel by mapping the images directly onto an SIMD processor array. Brief descriptions of the algorithms are given, together with sample results and timings for their implementation on a 64 × 64 ICL DAP.
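The flavour of the SIMD mapping, one pixel per processing element with all sites updated in lock-step, can be mimicked with whole-array operations. The sketch below is a generic data-parallel relaxation step under that mapping, not any of the paper's specific algorithms; the wrap-around boundary from np.roll is a simplification.

```python
import numpy as np

def relax(data, mask, lam=1.0, iters=200):
    """Data-parallel (SIMD-style) smoothing relaxation: every pixel is
    updated simultaneously from its four neighbours, while observed
    pixels (mask == True) are pulled back towards their measurements."""
    f = np.where(mask, data, 0.0)
    for _ in range(iters):
        nbr = (np.roll(f, 1, 0) + np.roll(f, -1, 0) +
               np.roll(f, 1, 1) + np.roll(f, -1, 1)) / 4.0
        f = np.where(mask, (lam * data + nbr) / (lam + 1.0), nbr)
    return f
```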


Image and Vision Computing | 1988

From an image sequence to a recognized polyhedral object

David W. Murray; David A. Castelow; Bernard F. Buxton

The paper describes the combination of several novel algorithms into a system that obtains visual motion from a sequence of images and uses it to recover the three-dimensional (3D) geometry and 3D motion of polyhedral objects relative to the sensor. The system goes on to use the recovered geometry to recognize the object from a database, a stage which also resolves the depth/speed scaling ambiguity, resulting in absolute depth and motion recovery. The performance of the system is demonstrated on imagery from a well-carpentered constructive solid geometry (CSG) model and on real imagery from a simple wooden model.
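The depth/speed scaling ambiguity mentioned here is the usual monocular one: structure and translational motion are recovered only up to a common scale, and matching the recovered shape against a database model of known size fixes that scale. A hypothetical helper (names ours) illustrating the step:

```python
def resolve_depth_speed_scale(depths, translation, recovered_len, model_len):
    """Recognition fixes the monocular scale: if a recovered model edge
    has length recovered_len but the database says the real object's
    edge is model_len, depth and translational speed scale together."""
    s = model_len / recovered_len
    return [d * s for d in depths], [v * s for v in translation]
```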


Alvey Vision Conference | 1987

Matching Canny Edgels to Compute the Principal Components of Optic Flow.

David A. Castelow; David W. Murray; Guy L. Scott; Bernard F. Buxton

A relaxation algorithm for the computation of optic flow at edge elements (edgels) is presented. Flow is estimated only at intensity edges of the image; edgels, extracted from an intensity image, are used as the basis for the algorithm. A matching-strength or weight surface is computed around each edgel, and neighbourhood support is obtained to enhance the matching strength. A principal-moments method is used to determine the flow from this weight surface. The output of the algorithm is, for each edgel, a pair of orthogonal components of the flow estimate; associated with each component is a confidence measure. Examples of the output of the algorithm are given, and tests of its accuracy are discussed.


Proceedings of the Royal Society of London A: Mathematical, Physical and Engineering Sciences | 1983

Monocular Depth Perception from Optical Flow by Space Time Signal Processing [Abstract Only]

Bernard F. Buxton; Hilary Buxton

A theory of monocular depth determination is presented. The effect of finite temporal resolution is incorporated by generalizing the Marr-Hildreth edge detection operator −∇²G(r), where ∇² is the Laplacian and G(r) is a two-dimensional Gaussian. The constraint that the edge detection operator in space-time should produce zero-crossings at the same place in different channels, i.e. at different resolutions of the Gaussian, led to the conclusion that the Marr-Hildreth operator should be replaced by −□²G(r, t), where □² is the d'Alembertian ∇² − (1/u²)(∂²/∂t²) and G(r, t) is a Gaussian in space-time. To ensure that the locations of the zero-crossings are independent of the channel width, G(r, t) has to be isotropic in the sense that the velocity u appearing in the definition of the d'Alembertian must also be used to relate the scales of length and time in G. However, the new operator −□²G(r, t) produces two types of zero-crossing for each isolated edge feature in the image I(r, t). One of these, termed the 'static edge', corresponds to the position of the image edge at time t as defined by ∇²I(r, t) = 0; the other, called a 'depth zero', depends only on the relative motion of the observer and object and is usually found only in the periphery of the field of view. When an edge feature is itself in the periphery of the visual field and these zeros coincide, there is an additional cross-over effect. It is shown how these zero-crossings may be used to infer the depth of an object when the observer and object are in relative motion. If an edge feature is near the centre of the image (i.e. near the focus of expansion), the spatial and temporal slopes of the zero crossings at the static edge may be used to infer the depth, but, if the edge feature is in the periphery of the image, the cross-over effect enables the depth to be obtained immediately. While the former utilizes sharp spatial and temporal resolution to give detailed three-dimensional information, the cross-over effect relies on longer integration times to give a direct measure of the time-to-contact. We propose that both mechanisms could be used to extract depth information in computer vision systems, and speculate on how our theory could be used to model depth perception in early visual processing in humans, where there is evidence both of monocular perception of the environment in depth and of looming detection in the periphery of the field of view. In addition it is shown how a number of previous models are included in our theory, in particular the directional sensor proposed by Marr & Ullman and a method of depth determination proposed by Prazdny.

Collaboration


Dive into Bernard F. Buxton's collaboration.

Top Co-Authors

A. Kashko

Queen Mary University of London


N. S. Williams

Queen Mary University of London
