
Publication


Featured research published by Steven D. Hordley.


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2001

Color by correlation: a simple, unifying framework for color constancy

Graham D. Finlayson; Steven D. Hordley; Paul M. Hubel

The paper considers the problem of illuminant estimation: how, given an image of a scene, recorded under an unknown light, we can recover an estimate of that light. Obtaining such an estimate is a central part of solving the color constancy problem. Thus, the work presented will have applications in fields such as color-based object recognition and digital photography. Rather than attempting to recover a single estimate of the illuminant, we instead set out to recover a measure of the likelihood that each of a set of possible illuminants was the scene illuminant. We begin by determining which image colors can occur (and how these colors are distributed) under each of a set of possible lights. We discuss how, for a given camera, we can obtain this knowledge. We then correlate this information with the colors in a particular image to obtain a measure of the likelihood that each of the possible lights was the scene illuminant. Finally, we use this likelihood information to choose a single light as an estimate of the scene illuminant. Computation is expressed and performed in a generic correlation framework which we develop. We propose a new probabilistic instantiation of this correlation framework and show that it delivers very good color constancy on both synthetic and real images. We further show that the proposed framework is rich enough to allow many existing algorithms to be expressed within it: the gray-world and gamut-mapping algorithms are presented in this framework and we also explore the relationship of these algorithms to other probabilistic and neural network approaches to color constancy.
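
As an illustration of the correlation framework, here is a minimal Python sketch of the likelihood computation, assuming a per-illuminant table of log-probabilities over chromaticity bins has been built offline from camera characterisation data; the names (estimate_illuminant, log_prob_table) are illustrative rather than taken from the paper.

```python
import numpy as np

def chromaticity_occupancy(rgb, n_bins=32):
    """Binary map of which (r, g) chromaticity bins occur in the image."""
    pixels = np.asarray(rgb, dtype=float).reshape(-1, 3)
    s = pixels.sum(axis=1)
    pixels = pixels[s > 0]
    s = s[s > 0]
    r, g = pixels[:, 0] / s, pixels[:, 1] / s
    occupied = np.zeros((n_bins, n_bins), dtype=bool)
    i = np.clip((r * n_bins).astype(int), 0, n_bins - 1)
    j = np.clip((g * n_bins).astype(int), 0, n_bins - 1)
    occupied[i, j] = True
    return occupied.ravel()

def estimate_illuminant(rgb, log_prob_table, illuminants):
    """Correlate the image's chromaticities with each candidate light's distribution.

    log_prob_table: (n_illuminants, n_bins * n_bins) array of log P(chromaticity | light),
    built offline from knowledge of the camera and a set of plausible lights."""
    occupied = chromaticity_occupancy(rgb)
    log_likelihood = log_prob_table[:, occupied].sum(axis=1)
    return illuminants[int(np.argmax(log_likelihood))], log_likelihood
```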


European Conference on Computer Vision | 2002

Removing Shadows from Images

Graham D. Finlayson; Steven D. Hordley; Mark S. Drew

Illumination conditions cause problems for many computer vision algorithms. In particular, shadows in an image can cause segmentation, tracking, or recognition algorithms to fail. In this paper we propose a method to process a 3-band colour image to locate, and subsequently remove, shadows. The result is a 3-band colour image which contains all the original salient information in the image, except that the shadows are gone. We use the method set out in [1] to derive a 1-D illumination-invariant, shadow-free image. We then use this invariant image together with the original image to locate shadow edges. By setting these shadow edges to zero in an edge representation of the original image, and by subsequently re-integrating this edge representation by a method paralleling lightness recovery, we are able to arrive at our sought-after full-colour, shadow-free image. Preliminary results reported in the paper show that the method is effective. A caveat for the application of the method is that we must have a calibrated camera. We show in this paper that a good calibration can be achieved simply by recording a sequence of images of a fixed outdoor scene over the course of a day. After calibration, only a single image is required for shadow removal. It is shown that the resulting calibration is close to that achievable using measurements of the camera's sensitivity functions.
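
A rough sketch of the gradient-domain step the abstract describes, assuming a log-colour image and a binary shadow-edge mask are already available; the DCT-based Poisson solver used for re-integration here is a standard lightness-recovery-style choice, not necessarily the exact scheme of the paper, and the function names are illustrative.

```python
import numpy as np
from scipy.fft import dctn, idctn

def reintegrate(gx, gy):
    """Recover an image from an edited gradient field by solving a Poisson
    equation with Neumann boundaries, diagonalised by the DCT."""
    h, w = gx.shape
    # Divergence of the gradient field (discrete Laplacian of the target image).
    div = np.zeros_like(gx)
    div[:, 1:] += gx[:, 1:] - gx[:, :-1]
    div[:, 0] += gx[:, 0]
    div[1:, :] += gy[1:, :] - gy[:-1, :]
    div[0, :] += gy[0, :]
    # Eigenvalues of the Neumann Laplacian in the DCT basis.
    yy, xx = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    denom = 2.0 * (np.cos(np.pi * xx / w) - 1.0) + 2.0 * (np.cos(np.pi * yy / h) - 1.0)
    denom[0, 0] = 1.0  # the mean level is unconstrained
    out = idctn(dctn(div, norm="ortho") / denom, norm="ortho")
    return out - out.mean()

def remove_shadows(log_rgb, shadow_edge_mask):
    """Zero the gradients of each log-colour channel on shadow edges, then re-integrate."""
    result = np.zeros_like(log_rgb, dtype=float)
    for c in range(3):
        channel = log_rgb[..., c].astype(float)
        gx = np.zeros_like(channel)
        gx[:, :-1] = np.diff(channel, axis=1)
        gy = np.zeros_like(channel)
        gy[:-1, :] = np.diff(channel, axis=0)
        gx[shadow_edge_mask] = 0.0
        gy[shadow_edge_mask] = 0.0
        result[..., c] = reintegrate(gx, gy)
    return result
```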


Pattern Recognition | 2005

Illuminant and Device Invariant Colour Using Histogram Equalisation

Graham D. Finlayson; Steven D. Hordley; Gerald Schaefer; Gui Yun Tian

Colour can potentially provide useful information for a variety of computer vision tasks such as image segmentation, image retrieval, object recognition and tracking. However, for it to be helpful in practice, colour must relate directly to the intrinsic properties of the imaged objects and be independent of imaging conditions such as scene illumination and the imaging device. To this end many invariant colour representations have been proposed in the literature. Unfortunately, recent work (Second Workshop on Content-based Multimedia Indexing) has shown that none of them provides good enough practical performance. In this paper we propose a new colour invariant image representation based on an existing grey-scale image enhancement technique: histogram equalisation. We show that, provided the rank ordering of sensor responses is preserved across a change in imaging conditions (lighting or device), a histogram equalisation of each channel of a colour image renders it invariant to these conditions. We set out theoretical conditions under which rank ordering of sensor responses is preserved and we present empirical evidence which demonstrates that rank ordering is maintained in practice for a wide range of illuminants and imaging devices. Finally, we apply the method to an image indexing application and show that the method outperforms all previous invariant representations, giving close to perfect illumination invariance and very good performance across a change in device.
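
The core operation is simple enough to sketch directly; a rough Python version follows, where the key point is that the equalised values depend only on the rank ordering of the original responses (function names are illustrative).

```python
import numpy as np

def equalise_channel(channel, n_levels=256):
    """Histogram-equalise one channel; the output depends only on the rank
    ordering of the input responses, not on their absolute values."""
    flat = channel.ravel().astype(float)
    hist, bin_edges = np.histogram(flat, bins=n_levels)
    cdf = hist.cumsum() / flat.size
    bins = np.clip(np.digitize(flat, bin_edges[1:-1]), 0, n_levels - 1)
    return cdf[bins].reshape(channel.shape)

def invariant_image(rgb):
    """Equalise each colour channel independently; if rank orderings survive a
    change of light or device, so does this representation."""
    return np.dstack([equalise_channel(rgb[..., c]) for c in range(3)])
```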


International Journal of Computer Vision | 2006

Gamut Constrained Illuminant Estimation

Graham D. Finlayson; Steven D. Hordley; Ingeborg Tastl

This paper presents a novel solution to the illuminant estimation problem: the problem of how, given an image of a scene taken under an unknown illuminant, we can recover an estimate of that light. The work is founded on previous gamut mapping solutions to the problem which solve for a scene illuminant by determining the set of diagonal mappings which take image data captured under an unknown light to a gamut of reference colours taken under a known light. Unfortunately, a diagonal model is not always a valid model of illumination change and so previous approaches sometimes return a null solution. In addition, previous methods are difficult to implement. We address these problems by recasting the problem as one of illuminant classification: we define a priori a set of plausible lights thus ensuring that a scene illuminant estimate will always be found. A plausible light is represented by the gamut of colours observable under it and the illuminant in an image is classified by determining the plausible light whose gamut is most consistent with the image data. We show that this step (the main computational burden of the algorithm) can be performed simply and efficiently by means of a non-negative least-squares optimisation. We report results on a large set of real images which show that it provides excellent illuminant estimation, outperforming previous algorithms.
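
As a sketch of the classification step, one way to score how consistent an image is with a candidate light's gamut is to ask how well each image colour can be written as a non-negative combination of that gamut's hull vertices, via non-negative least squares; this illustrates the idea rather than reproducing the paper's exact formulation, and all names here are hypothetical.

```python
import numpy as np
from scipy.optimize import nnls

def gamut_fit_error(image_rgb, gamut_vertices):
    """Total residual when each image colour is expressed as a non-negative
    combination of one candidate light's gamut hull vertices."""
    V = np.asarray(gamut_vertices, dtype=float).T  # 3 x n_vertices
    total = 0.0
    for colour in np.asarray(image_rgb, dtype=float).reshape(-1, 3):
        _, residual = nnls(V, colour)              # min ||V w - colour||, w >= 0
        total += residual
    return total

def classify_illuminant(image_rgb, gamuts, illuminant_names):
    """Pick the plausible light whose gamut is most consistent with the image data."""
    errors = [gamut_fit_error(image_rgb, g) for g in gamuts]
    return illuminant_names[int(np.argmin(errors))], errors
```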


Journal of the Optical Society of America A: Optics, Image Science, and Vision | 2006

Reevaluation of color constancy algorithm performance

Steven D. Hordley; Graham D. Finlayson

The relative performance of color constancy algorithms is evaluated. We highlight some problems with previous algorithm evaluation and define more appropriate testing procedures. We discuss how best to measure algorithm accuracy on a single image as well as suitable methods for summarizing errors over a set of images. We also discuss how the relative performance of two or more algorithms should best be compared, and we define an experimental framework for testing algorithms. We reevaluate the performance of six color constancy algorithms using the procedures that we set out and show that this leads to a significant change in the conclusions that we draw about relative algorithm performance as compared with those from previous work.
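
The per-image error in this literature is usually the angle between the estimated and true illuminant colour vectors; the sketch below computes it and summarises a dataset with the median, in line with the paper's argument that skewed error distributions call for robust summary statistics (helper names are illustrative).

```python
import numpy as np

def angular_error_degrees(estimated_rgb, true_rgb):
    """Angle between the estimated and true illuminant colour vectors, in degrees."""
    e = np.asarray(estimated_rgb, dtype=float)
    t = np.asarray(true_rgb, dtype=float)
    cos_angle = e.dot(t) / (np.linalg.norm(e) * np.linalg.norm(t))
    return float(np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0))))

def median_angular_error(estimates, ground_truths):
    """Summarise a whole test set with the median angular error; angular errors
    are typically skewed, so the median is a fairer summary than the mean."""
    errors = [angular_error_degrees(e, t) for e, t in zip(estimates, ground_truths)]
    return float(np.median(errors))
```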


IEEE Transactions on Image Processing | 2000

Improving gamut mapping color constancy

Graham D. Finlayson; Steven D. Hordley

The color constancy problem, that is, estimating the color of the scene illuminant from a set of image data recorded under an unknown light, is an important problem in computer vision and digital photography. The gamut mapping approach to color constancy is, to date, one of the most successful solutions to this problem. In this algorithm the set of mappings taking the image colors recorded under an unknown illuminant to the gamut of all colors observed under a standard illuminant is characterized. Then, at a second stage, a single mapping is selected from this feasible set. In the first version of this algorithm Forsyth (1990) mapped sensor values recorded under one illuminant to those recorded under a second, using a three-dimensional (3-D) diagonal matrix. However, because the intensity of the scene illuminant cannot be recovered, Finlayson (see IEEE Trans. Pattern Anal. Machine Intell. vol.18, no.10, p.1034-38, 1996) modified Forsyth's algorithm to work in a two-dimensional (2-D) chromaticity space and set out to recover only 2-D chromaticity mappings. While the chromaticity mapping overcomes the intensity problem, it is not clear that something has not been lost in the process. The first result of this paper is to show that only intensity information is lost. Formally, we prove that the feasible set calculated by Forsyth's original algorithm, projected into 2-D, is the same as the feasible set calculated by the 2-D algorithm. Thus, there is no advantage in using the 3-D algorithm and we can use the simpler, 2-D version of the algorithm to characterize the set of feasible illuminants. Another problem with the chromaticity mapping is that it is perspective in nature, and so chromaticities and chromaticity maps are perspectively distorted. Previous work demonstrated that the effects of perspective distortion were serious for the 2-D algorithm: in order to select a sensible single mapping from the feasible set, this set must first be mapped back up to 3-D. We extend this work to the case where a constraint on the possible color of the illuminant is factored into the gamut mapping algorithm. We show here that the illumination constraint can be enforced during selection without explicitly intersecting the two constraint sets. In the final part of this paper we reappraise the selection task: gamut mapping returns a set of feasible illuminant maps, from which a single map must ultimately be chosen. Our new algorithm is tested using real and synthetic images. The results of these tests show that the algorithm presented delivers excellent color constancy.
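
To make the 2-D feasible-set idea concrete, here is a brute-force sketch: for each image chromaticity, the candidate diagonal maps form the convex hull of componentwise ratios to the canonical gamut's vertices, and the feasible set is approximated by the grid points that fall inside every such hull. This assumes a chromaticity space in which an illuminant change acts diagonally (e.g. (R/B, G/B)), and it is an illustration, not the paper's efficient implementation; all names are hypothetical.

```python
import numpy as np
from scipy.spatial import Delaunay

def candidate_maps(pixel_chroma, canonical_gamut_vertices):
    """Candidate 2-D diagonal maps taking one image chromaticity into the canonical
    gamut: one map per canonical hull vertex, formed by componentwise division."""
    return canonical_gamut_vertices / pixel_chroma            # (n_vertices, 2)

def feasible_maps(image_chromas, canonical_gamut_vertices, grid):
    """Approximate the feasible set as the grid points lying inside every per-pixel
    convex hull of candidate maps (a brute-force stand-in for the exact hull
    intersection used by gamut mapping)."""
    inside = np.ones(len(grid), dtype=bool)
    for chroma in image_chromas:
        hull = Delaunay(candidate_maps(chroma, canonical_gamut_vertices))
        inside &= hull.find_simplex(grid) >= 0
    return grid[inside]
```

A coarse grid of candidate map coefficients (for example, a meshgrid over a plausible range such as [0.2, 5] in each dimension) is enough to visualise how the feasible set shrinks as more image chromaticities are added.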


Computer Vision and Pattern Recognition | 2005

A combined physical and statistical approach to colour constancy

Gerald Schaefer; Steven D. Hordley; Graham D. Finlayson

Computational colour constancy tries to recover the colour of the scene illuminant of an image. Colour constancy algorithms can, in general, be divided into two groups: statistics-based approaches that exploit statistical knowledge of common lights and surfaces, and physics-based algorithms which are based on an understanding of how physical processes such as highlights manifest themselves in images. A combined physical and statistical colour constancy algorithm is introduced, integrating the advantages of the statistics-based colour by correlation method with those of a physics-based technique built on the dichromatic reflectance model. In contrast to other approaches, the method provides not just a single illuminant estimate but a set of likelihoods over a given set of illuminants. Experimental results on the benchmark Simon Fraser image database show the combined method to clearly outperform purely statistical and purely physical algorithms.
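
How the two likelihood sources might be fused can be sketched very simply; the weighted sum of per-illuminant log-likelihoods below is one plausible combination rule, offered as an assumption rather than the paper's actual rule.

```python
import numpy as np

def combine_log_likelihoods(statistical_log_lik, physical_log_lik, weight=0.5):
    """Fuse per-illuminant log-likelihoods from a statistics-based estimator
    (e.g. colour by correlation) and a physics-based one (dichromatic constraint).
    The weighted sum used here is an assumed fusion rule, for illustration only."""
    combined = (weight * np.asarray(statistical_log_lik, dtype=float)
                + (1.0 - weight) * np.asarray(physical_log_lik, dtype=float))
    return combined, int(np.argmax(combined))
```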


International Conference on Computer Vision | 1999

Colour by correlation: a simple, unifying approach to colour constancy

Graham D. Finlayson; Steven D. Hordley; Paul M. Hubel

In this paper we consider the problem of colour constancy: how, given an image of a scene under an unknown illuminant, can we recover an estimate of that light? Rather than recovering a single estimate of the illuminant as many previous authors have done, in the first instance we recover a measure of the likelihood that each possible illuminant was the scene illuminant. We do this by correlating image colours with the colours that can occur under each of a set of possible lights. We then recover an estimate of the scene illuminant based on these likelihoods. Computation is expressed and performed in a generic correlation framework which we develop in this paper. We develop a new probabilistic instantiation of this framework which delivers very good colour constancy on synthetic and real images. We show that the proposed framework is rich enough to allow many existing algorithms to be expressed within it, e.g. the grey-world and gamut mapping algorithms. We also explore the relationship of these algorithms to other probabilistic and neural network approaches.


British Machine Vision Conference | 2000

Colour invariance at a pixel

Graham D. Finlayson; Steven D. Hordley; John A. Marchant; Christine M. Onyango

This paper addresses the question of what can be said about the colours in images that is independent of illumination. We make two main assumptions: first, that the illumination can be characterised as Planckian (a realistic assumption for most real scenes); and second, that the camera behaves as if it were equipped with narrow-band sensors (true for a large number of cameras). The resulting physics-based method transforms the original colour image into a grey-scale one which does not vary with illumination. We give results showing invariance under a range of illumination conditions.
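
Under those two assumptions the invariant can be computed per pixel from log-chromaticities; a minimal sketch follows, where the angle theta describing the camera-specific direction of illuminant variation is an assumed input obtained from a separate calibration.

```python
import numpy as np

def invariant_greyscale(rgb, theta):
    """Per-pixel illuminant-invariant value from log-chromaticities.

    Under Planckian light and narrow-band sensors, changing the illuminant shifts
    (log R/G, log B/G) along a fixed, camera-specific direction; projecting onto
    the orthogonal direction (parameterised here by a calibrated angle theta)
    removes that dependence."""
    rgb = np.clip(np.asarray(rgb, dtype=float), 1e-6, None)
    log_rg = np.log(rgb[..., 0] / rgb[..., 1])
    log_bg = np.log(rgb[..., 2] / rgb[..., 1])
    return log_rg * np.cos(theta) + log_bg * np.sin(theta)
```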


Journal of the Optical Society of America | 2001

Color constancy at a pixel

Graham D. Finlayson; Steven D. Hordley

Collaboration


Steven D. Hordley's top co-authors:

Mark S. Drew

Simon Fraser University


Peter Morovic

University of East Anglia


John A. Marchant

University of Bedfordshire
