Adrian F. Clark
University of Essex
Publication
Featured research published by Adrian F. Clark.
IEEE Swarm Intelligence Symposium | 2005
Owen Holland; John Woods; R. De Nardi; Adrian F. Clark
This paper explores the idea that it may be possible to combine two ideas - UAV flocking, and wireless cluster computing - in a single system, the UltraSwarm. The possible advantages of such a system are considered, and solutions to some of the technical problems are identified. Initial work on constructing such a system based around miniature electric helicopters is described.
Computer Vision and Image Understanding | 2008
Neil A. Thacker; Adrian F. Clark; John L. Barron; J. Ross Beveridge; Patrick Courtney; William R. Crum; Visvanathan Ramesh; Christine Clark
It is frequently remarked that designers of computer vision algorithms and systems cannot reliably predict how algorithms will respond to new problems. A variety of reasons have been given for this situation and a variety of remedies prescribed in literature. Most of these involve, in some way, paying greater attention to the domain of the problem and to performing detailed empirical analysis. The goal of this paper is to review what we see as current best practices in these areas and also suggest refinements that may benefit the field of computer vision. A distinction is made between the historical emphasis on algorithmic novelty and the increasing importance of validation on particular data sets and problems.
IEEE Transactions on Circuits and Systems for Video Technology | 1994
Mohammed Foysol Chowdhury; Adrian F. Clark; Andy C. Downton; Eishi Morimatsu; Donald E. Pearson
This paper describes the algorithmic architecture and performance of a video coder that switches between model-based and H.261 modes, depending on image content. The switching criterion incorporates measures of both picture quality and bit rate. The model-based mode uses generalized cylindrical models and a 3D matching technique to estimate motion. The coder was tested with four different image sequences in CBR and VBR operational modes; its performance was generally superior, and in some cases significantly superior, to that of a free-running H.261 coder operating on the same sequences. The switched architecture represents a possible route for the integration of model-based coding with H.261 to give improved performance at very low bit rates.
Machine Vision and Applications | 1997
Patrick Courtney; Neil A. Thacker; Adrian F. Clark
Many of the machine vision algorithms described in the literature are tested on a very small number of images. It is generally agreed that algorithms need to be tested on much larger numbers if any statistically meaningful measure of performance is to be obtained. However, these tests are rarely performed; in our opinion this is normally due to two reasons. Firstly, the scale of the testing problem is daunting when high levels of reliability are sought, since it is the proportion of failure cases that allows the reliability to be assessed, and a large number of failure cases are needed to form an accurate estimate of reliability. For reliable and robust algorithms, this requires an inordinate number of test cases. Secondly, there is the difficulty of selecting test images to ensure that they are representative. This is aggravated by the fact that the assumptions made may be valid in one application domain but not in another. Hence, it is very difficult to relate the results of one evaluation to other users' requirements. While it is true that published papers in the vision area must contain some evidence of the successful application of the suggested technique, a whole host of reasons have been put forward as to why researchers do not attempt to evaluate their algorithms more rigorously. These objections are valid only within a closely defined context and do not stand up to critical examination [13]. The real problem seems to be the time required for the various stages of algorithm development: the ratio theory : implementation : evaluation seems to scale according to the rule of thumb 1 : 10 : 100 [13]. The effort required to get a new idea published is thus far less than that for an extensive empirical evaluation, which is a considerable demotivation for researchers to do evaluation work, particularly as evaluation is not much valued as publishable material in either conferences or journals.
However, the truth of the matter is that unless algorithms are evaluated – and in a manner that can be used to predict the capabilities of a technique on an unseen data set – it is unlikely to be re-implemented and used. Moreover, the subject cannot advance without a well-founded scientific methodology, which it will not have without an acknowledged system for …
Image and Vision Computing | 2001
Martin Fleury; Adrian F. Clark; Andy C. Downton
Algorithmic development of optical-flow routines is hampered by slow turnaround times (to iterate over testing, evaluation, and adjustment of the algorithm). To ease the problem, parallel implementation on a convenient general-purpose parallel machine is possible. A generic parallel pipeline structure, suitable for distributed-memory machines, has enabled parallelisation to be achieved quickly. Gradient, correlation, and phase-based methods of optical-flow detection have been constructed to demonstrate the approach. The prototypes enabled the parallelised speed of the three methods to be compared against their (already known) accuracy, on balance favouring the correlation method.
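The farm structure underlying such a pipeline can be sketched in a few lines. In this hedged illustration, Python threads stand in for the distributed-memory processors of the original work, and the per-pair "flow" computation is a hypothetical placeholder (a simple frame difference), not any of the three methods evaluated in the paper:

```python
from concurrent.futures import ThreadPoolExecutor

import numpy as np


def flow_stage(frame_pair):
    """Placeholder for a per-frame-pair optical-flow computation (hypothetical)."""
    a, b = frame_pair
    return b - a  # a trivial frame difference stands in for real flow


def parallel_pipeline(frames, workers=4):
    """Farm consecutive frame pairs out to a pool of workers, preserving order."""
    pairs = list(zip(frames[:-1], frames[1:]))
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(flow_stage, pairs))


# Usage: five synthetic 8x8 frames yield four "flow" fields.
frames = [np.full((8, 8), float(i)) for i in range(5)]
fields = parallel_pipeline(frames)
```

The key design point mirrored here is that each frame pair is independent work, so the farm scales with the number of workers without any inter-stage communication.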
Image and Vision Computing | 1996
Martin Fleury; L. Hayat; Adrian F. Clark
In this paper, we examine a multi-level thresholding algorithm based on a number of phases including peak-search, fuzzy logic and the entropy of the fuzzy membership function. Analysis of the algorithm is presented to show its properties and behaviours at the various cascaded stages. The fuzzy entropy function of the image histogram is computed using S-function membership and Shannon's entropy function. To establish a suitable fuzzy region bandwidth, we have used a peak-search method based on successive clipping of the image histogram. The locations of the valleys in the entropy function correspond to the certainties within the fuzzy region of the image. These certainties are used to indicate an optimal segmentation pattern for multi-level image thresholding. We compare and contrast this method of thresholding with a maximum entropy method. We have implemented the technique in parallel on a transputer-based machine as well as on a cluster of SUN4 workstations, availing ourselves of the PVM communication kernel. A parallel algorithm for the maximum entropy method is given, which significantly reduces computation times. An objective method is used to evaluate the resulting images.
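The entropy computation described can be sketched as follows. This is a minimal illustration, not the paper's implementation: the De Luca–Termini-style combination of S-function membership with Shannon's entropy is an assumed form, and the parameters a, b, c delimiting the fuzzy region are hypothetical:

```python
import numpy as np


def s_function(x, a, b, c):
    """Zadeh's S-function membership: 0 below a, 1 above c, smooth in between."""
    x = np.asarray(x, dtype=float)
    y = np.zeros_like(x)
    mid = (x > a) & (x <= b)
    upper = (x > b) & (x < c)
    y[mid] = 2.0 * ((x[mid] - a) / (c - a)) ** 2
    y[upper] = 1.0 - 2.0 * ((x[upper] - c) / (c - a)) ** 2
    y[x >= c] = 1.0
    return y


def fuzzy_entropy(hist, a, b, c):
    """Fuzzy entropy of a grey-level histogram under S-function membership
    (assumed De Luca-Termini form, weighted by grey-level probability)."""
    p = hist / hist.sum()                           # normalise the histogram
    mu = s_function(np.arange(len(hist)), a, b, c)  # membership per grey level
    eps = 1e-12                                     # guard against log(0)
    h = -(mu * np.log(mu + eps) + (1.0 - mu) * np.log(1.0 - mu + eps))
    return float((p * h).sum())


# Usage: entropy of a flat 256-bin histogram with fuzzy region [64, 192].
hist = np.ones(256)
val = fuzzy_entropy(hist, 64, 128, 192)
```

The entropy peaks where the membership is most ambiguous (mu near 0.5) and falls towards zero at the crisp ends of the fuzzy region, which is why valleys in this function mark the "certainties" the abstract refers to.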
Human-centric Computing and Information Sciences | 2015
Erkan Bostanci; Nadia Kanwal; Adrian F. Clark
This paper explores the use of data from the Kinect sensor for performing augmented reality, with emphasis on cultural heritage applications. It is shown that the combination of depth and image correspondences from the Kinect can yield a reliable estimate of the location and pose of the camera, though noise from the depth sensor introduces an unpleasant jittering of the rendered view. Kalman filtering of the camera position was found to yield a much more stable view. Results show that the system is accurate enough for in situ augmented reality applications. Skeleton tracking using Kinect data allows the appearance of participants to be augmented, and together these facilitate the development of cultural heritage applications.
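The jitter-damping step can be illustrated with a minimal scalar Kalman filter. The constant-position model and the noise variances q and r below are assumptions for illustration only; the paper filters a full 3D camera position, not a scalar track:

```python
import numpy as np


def kalman_smooth(measurements, q=1e-3, r=1e-1):
    """Scalar constant-position Kalman filter to damp jitter in a position track.
    q: assumed process-noise variance; r: assumed measurement-noise variance."""
    x, p = float(measurements[0]), 1.0
    out = []
    for z in measurements:
        p = p + q                    # predict: uncertainty grows
        k = p / (p + r)              # Kalman gain
        x = x + k * (z - x)          # update towards the measurement
        p = (1.0 - k) * p            # uncertainty shrinks after the update
        out.append(x)
    return np.array(out)


# Usage: a noisy, nominally constant position track (hypothetical data).
rng = np.random.default_rng(0)
raw = 5.0 + 0.5 * rng.standard_normal(200)
sm = kalman_smooth(raw)
```

With q much smaller than r, the filter trusts its prediction more than any single noisy depth-derived measurement, which is the mechanism behind the more stable rendered view reported above.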
IEEE Transactions on Image Processing | 2014
Erkan Bostanci; Nadia Kanwal; Adrian F. Clark
When matching images for applications such as mosaicking and homography estimation, the distribution of features across the overlap region affects the accuracy of the result. This paper uses the spatial statistics of these features, measured by Ripley's K-function, to assess whether feature matches are clustered together or spread around the overlap region. A comparison of the performances of a dozen state-of-the-art feature detectors is then carried out using analysis of variance and a large image database. Results show that SFOP introduces significantly less aggregation than the other detectors tested. When the detectors are rank-ordered by this performance measure, the order is broadly similar to those obtained by other means, suggesting that the ordering reflects genuine performance differences. Experiments on stitching images into mosaics confirm that better coverage values yield better quality outputs.
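As a rough illustration of the statistic involved, the following is a naive estimate of Ripley's K-function for a 2D point set. It omits the edge corrections a careful analysis (such as the paper's) would apply, and the example point sets are hypothetical:

```python
import numpy as np


def ripley_k(points, radii, area):
    """Naive Ripley's K estimate for 2D points: K(r) = area/n^2 * #{pairs closer than r}.
    No edge correction is applied, so values near the boundary are biased low."""
    points = np.asarray(points, dtype=float)
    n = len(points)
    # All pairwise Euclidean distances (n x n, zeros on the diagonal).
    d = np.sqrt(((points[:, None, :] - points[None, :, :]) ** 2).sum(-1))
    lam = n / area                            # point intensity
    k = []
    for r in radii:
        count = (d < r).sum() - n             # pairs within r, excluding self-pairs
        k.append(count / (lam * n))
    return np.array(k)


# Usage: a tight cluster versus a regular grid over the unit square.
t = np.linspace(0.0, 0.01, 10)
clustered = np.stack([0.5 + t, 0.5 + t], axis=1)
grid = np.stack(np.meshgrid(np.linspace(0.1, 0.9, 4),
                            np.linspace(0.1, 0.9, 4)), -1).reshape(-1, 2)
k_clustered = ripley_k(clustered, [0.05], 1.0)[0]
k_grid = ripley_k(grid, [0.05], 1.0)[0]
```

For a completely random pattern K(r) is about pi*r^2; values well above that at small r indicate the aggregation the paper penalises, while spread-out detections stay near or below it.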
International Conference on Image Analysis and Recognition | 2011
Shoaib Ehsan; Nadia Kanwal; Adrian F. Clark; Klaus D. McDonald-Maier
Repeatability is widely used as an indicator of the performance of an image feature detector but, although useful, it does not convey all the information that is required to describe performance. This paper explores the spatial distribution of interest points as an alternative indicator of performance, presenting a metric that is shown to concur with visual assessments. This metric is then extended to provide a measure of complementarity for pairs of detectors. Several state-of-the-art detectors are assessed, both individually and in combination. It is found that Scale Invariant Feature Operator (SFOP) is dominant, both when used alone and in combination with other detectors.
Sensors | 2013
Shoaib Ehsan; Adrian F. Clark; Klaus D. McDonald-Maier
A vision system that can assess its own performance and take appropriate actions online to maximize its effectiveness would be a step towards achieving the long-cherished goal of imitating humans. This paper proposes a method for performing an online performance analysis of local feature detectors, the primary stage of many practical vision systems. It advocates the spatial distribution of local image features as a good performance indicator and presents a metric that can be calculated rapidly, concurs with human visual assessments and is complementary to existing offline measures such as repeatability. The metric is shown to provide a measure of complementarity for combinations of detectors, correctly reflecting the underlying principles of individual detectors. Qualitative results on well-established datasets for several state-of-the-art detectors are presented based on the proposed measure. Using a hypothesis testing approach and a newly-acquired, larger image database, statistically-significant performance differences are identified. Different detector pairs and triplets are examined quantitatively and the results provide a useful guideline for combining detectors in applications that require a reasonable spatial distribution of image features. A principled framework for combining feature detectors in these applications is also presented. Timing results reveal the potential of the metric for online applications.