
Publication


Featured research published by Graham D. Finlayson.


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2001

Color by correlation: a simple, unifying framework for color constancy

Graham D. Finlayson; Steven D. Hordley; Paul M. Hubel

The paper considers the problem of illuminant estimation: how, given an image of a scene, recorded under an unknown light, we can recover an estimate of that light. Obtaining such an estimate is a central part of solving the color constancy problem. Thus, the work presented will have applications in fields such as color-based object recognition and digital photography. Rather than attempting to recover a single estimate of the illuminant, we instead set out to recover a measure of the likelihood that each of a set of possible illuminants was the scene illuminant. We begin by determining which image colors can occur (and how these colors are distributed) under each of a set of possible lights. We discuss how, for a given camera, we can obtain this knowledge. We then correlate this information with the colors in a particular image to obtain a measure of the likelihood that each of the possible lights was the scene illuminant. Finally, we use this likelihood information to choose a single light as an estimate of the scene illuminant. Computation is expressed and performed in a generic correlation framework which we develop. We propose a new probabilistic instantiation of this correlation framework and show that it delivers very good color constancy on both synthetic and real images. We further show that the proposed framework is rich enough to allow many existing algorithms to be expressed within it: the gray-world and gamut-mapping algorithms are presented in this framework and we also explore the relationship of these algorithms to other probabilistic and neural network approaches to color constancy.
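As a rough illustration of the correlation framework described above, the sketch below builds a matrix of log-likelihoods over candidate illuminants and chromaticity bins, correlates it with the binarized chromaticity content of an image, and picks the most likely light. The likelihoods and image bins are made-up numbers, not the paper's calibrated camera data:

```python
import numpy as np

# Hypothetical correlation matrix: rows index candidate illuminants,
# columns index chromaticity bins.  Entry (i, j) holds the log-likelihood
# of observing chromaticity bin j under illuminant i.
rng = np.random.default_rng(0)
n_illuminants, n_bins = 4, 16
correlation_matrix = np.log(rng.dirichlet(np.ones(n_bins), size=n_illuminants))

# An image is reduced to the set of chromaticity bins it contains
# (a binary indicator vector), here chosen arbitrarily for illustration.
image_bins = np.zeros(n_bins)
image_bins[[2, 5, 11]] = 1.0

# Correlating the image with each row sums the log-likelihoods of the
# observed colours; the argmax is the illuminant estimate.
scores = correlation_matrix @ image_bins
estimate = int(np.argmax(scores))
```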


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2006

On the removal of shadows from images

Graham D. Finlayson; Steven D. Hordley; Cheng Lu; Mark S. Drew

This paper is concerned with the derivation of a progression of shadow-free image representations. First, we show that adopting certain assumptions about lights and cameras leads to a 1D, gray-scale image representation which is illuminant invariant at each image pixel. We show that as a consequence, images represented in this form are shadow-free. We then extend this 1D representation to an equivalent 2D, chromaticity representation. We show that in this 2D representation, it is possible to relight all the image pixels in the same way, effectively deriving a 2D image representation which is additionally shadow-free. Finally, we show how to recover a 3D, full color shadow-free image representation by first (with the help of the 2D representation) identifying shadow edges. We then remove shadow edges from the edge-map of the original image by edge in-painting and we propose a method to reintegrate this thresholded edge map, thus deriving the sought-after 3D shadow-free image.
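A minimal numerical sketch of the 1D invariant, under the paper's assumptions and with made-up coordinates: a surface's log-chromaticity moves along a fixed "lighting direction" as the illuminant changes, so projecting orthogonally to that direction gives a greyscale value that is the same in and out of shadow.

```python
import numpy as np

# Assumed (calibrated) lighting direction in log-chromaticity space.
lighting_dir = np.array([1.0, 1.0]) / np.sqrt(2)
orthogonal = np.array([-lighting_dir[1], lighting_dir[0]])

lc_lit = np.array([0.29, -0.41])           # log-chromaticity of a lit surface
lc_shadow = lc_lit + 0.7 * lighting_dir    # same surface under shadow lighting
                                           # (shift along the lighting direction)

inv_lit = lc_lit @ orthogonal
inv_shadow = lc_shadow @ orthogonal        # equal to inv_lit: shadow-free value
```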


European Conference on Computer Vision | 2002

Removing Shadows from Images

Graham D. Finlayson; Steven D. Hordley; Mark S. Drew

Illumination conditions cause problems for many computer vision algorithms. In particular, shadows in an image can cause segmentation, tracking, or recognition algorithms to fail. In this paper we propose a method to process a 3-band colour image to locate, and subsequently remove, shadows. The result is a 3-band colour image which contains all the original salient information in the image, except that the shadows are gone. We use the method set out in [1] to derive a 1-d illumination-invariant shadow-free image. We then use this invariant image together with the original image to locate shadow edges. By setting these shadow edges to zero in an edge representation of the original image, and by subsequently re-integrating this edge representation by a method paralleling lightness recovery, we are able to arrive at our sought-after full-colour, shadow-free image. Preliminary results reported in the paper show that the method is effective. A caveat for the application of the method is that we must have a calibrated camera. We show in this paper that a good calibration can be achieved simply by recording a sequence of images of a fixed outdoor scene over the course of a day. After calibration, only a single image is required for shadow removal. It is shown that the resulting calibration is close to that achievable using measurements of the camera's sensitivity functions.
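The day-sequence calibration admits a simple sketch: the log-chromaticity of a fixed scene patch traces out a line as the daylight changes, and the direction of that line is the required calibration. Below it is recovered by a principal-component fit on synthetic data; the true direction, offsets, and noise level are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(4)
true_dir = np.array([np.cos(0.6), np.sin(0.6)])   # assumed lighting direction
# Log-chromaticities of one patch across 20 images taken over a day,
# drifting along the lighting direction, plus small measurement noise.
samples = np.array([0.1, -0.3]) + np.outer(np.linspace(0, 1, 20), true_dir)
samples += rng.normal(scale=0.005, size=samples.shape)

# Principal direction of the centered points = estimated lighting direction.
centered = samples - samples.mean(axis=0)
_, _, vt = np.linalg.svd(centered)
estimated_dir = vt[0]
angle_error = np.arccos(np.clip(abs(estimated_dir @ true_dir), 0.0, 1.0))
```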


IEEE Transactions on Pattern Analysis and Machine Intelligence | 1996

Color in perspective

Graham D. Finlayson

Simple constraints on the sets of possible surface reflectances and illuminants are exploited in a new color constancy algorithm that builds upon Forsyth's (1990) theory of color constancy. Forsyth's method invokes the constraint that the surface colors under a canonical illuminant all fall within an established maximal convex gamut of possible colors. However, the method works only when restrictive conditions are imposed on the world: the illumination must be uniform, the surfaces must be planar, and there can be no specularities. To overcome these restrictions, we modify Forsyth's algorithm so that it works with the colors under a perspective projection (in a chromaticity space). The new algorithm working in perspective is simpler than Forsyth's method and, more importantly, the restrictions on the illuminant, surface shape, and specularities can be relaxed. The algorithm is then extended to include a maximal gamut constraint on a set of illuminants that is analogous to the gamut constraint on surface colors. Tests on real images show that the algorithm provides good color constancy.


European Conference on Computer Vision | 2004

Intrinsic Images by Entropy Minimization

Graham D. Finlayson; Mark S. Drew; Cheng Lu

A method was recently devised for the recovery of an invariant image from a 3-band colour image. The invariant image, originally 1D greyscale but here derived as a 2D chromaticity, is independent of lighting, and also has shading removed: it forms an intrinsic image that may be used as a guide in recovering colour images that are independent of illumination conditions. Invariance to illuminant colour and intensity means that such images are free of shadows, as well, to a good degree. The method devised finds an intrinsic reflectivity image based on assumptions of Lambertian reflectance, approximately Planckian lighting, and fairly narrowband camera sensors. Nevertheless, the method works well when these assumptions do not hold. A crucial piece of information is the angle for an “invariant direction” in a log-chromaticity space. To date, we have gleaned this information via a preliminary calibration routine, using the camera involved to capture images of a colour target under different lights. In this paper, we show that we can in fact dispense with the calibration step, by recognizing a simple but important fact: the correct projection is that which minimizes entropy in the resulting invariant image. To show that this must be the case we first consider synthetic images, and then apply the method to real images. We show that not only does a correct shadow-free image emerge, but also that the angle found agrees with that recovered from a calibration. As a result, we can find shadow-free images for images with unknown camera, and the method is applied successfully to remove shadows from unsourced imagery.
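The entropy-minimization idea can be sketched numerically: synthetic log-chromaticity "streaks" are generated along an assumed lighting direction, and a brute-force search over projection angles recovers that direction as the entropy minimizer. Shannon entropy over a fixed histogram is used here for simplicity, and all of the data is made up:

```python
import numpy as np

def shannon_entropy(values, bins=64):
    # Entropy of a 1D projection, estimated from a histogram.
    hist, _ = np.histogram(values, bins=bins)
    p = hist[hist > 0] / hist.sum()
    return -np.sum(p * np.log2(p))

# Synthetic log-chromaticities: a few surfaces, each smeared along a
# common (assumed) lighting direction of 30 degrees.
rng = np.random.default_rng(1)
direction = np.array([np.cos(np.deg2rad(30)), np.sin(np.deg2rad(30))])
surfaces = rng.uniform(-1, 1, size=(5, 2))
points = np.vstack([s + t * direction
                    for s in surfaces for t in np.linspace(0, 1, 40)])

# Search projection angles: projecting orthogonally to the true lighting
# direction collapses each surface's streak to a spike, minimizing entropy.
angles = np.deg2rad(np.arange(0, 180, 1))
entropies = [shannon_entropy(points @ np.array([-np.sin(a), np.cos(a)]))
             for a in angles]
best = float(np.rad2deg(angles[int(np.argmin(entropies))]))
```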


Journal of the Optical Society of America A: Optics, Image Science, and Vision | 1994

Color constancy: generalized diagonal transforms suffice

Graham D. Finlayson; Mark S. Drew; Brian V. Funt

This study’s main result is to show that under the conditions imposed by the Maloney–Wandell color constancy algorithm, whereby illuminants are three dimensional and reflectances two dimensional (the 3–2 world), color constancy can be expressed in terms of a simple independent adjustment of the sensor responses (in other words, as a von Kries adaptation type of coefficient rule algorithm) as long as the sensor space is first transformed to a new basis. A consequence of this result is that any color constancy algorithm that makes 3–2 assumptions, such as the Maloney–Wandell subspace algorithm, Forsyth’s MWEXT, and the Funt–Drew lightness algorithm, must effectively calculate a simple von Kries-type scaling of sensor responses, i.e., a diagonal matrix. Our results are strong in the sense that no constraint is placed on the initial spectral sensitivities of the sensors. In addition to purely theoretical arguments, we present results from simulations of von Kries-type color constancy in which the spectra of real illuminants and reflectances along with the human-cone-sensitivity functions are used. The simulations demonstrate that when the cone sensor space is transformed to its new basis in the appropriate manner a diagonal matrix supports nearly optimal color constancy.
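The result can be illustrated with a toy generalized diagonal transform: responses are mapped to a new sensor basis T, scaled channel-wise by a diagonal matrix D (a von Kries-type coefficient rule), and mapped back. T, D, and the responses below are made-up numbers, not fitted sensor data:

```python
import numpy as np

# Assumed change of sensor basis and per-channel gains in the new basis.
T = np.array([[1.0, 0.2, 0.1],
              [0.1, 1.0, 0.2],
              [0.2, 0.1, 1.0]])
D = np.diag([1.3, 0.9, 0.7])

response_unknown = np.array([0.5, 0.4, 0.3])   # response under the unknown light
# Generalized diagonal map to the canonical illuminant: transform to the
# new basis, scale each channel independently, transform back.
response_canonical = np.linalg.inv(T) @ D @ (T @ response_unknown)
```

Because the map is diagonal only in the transformed basis, undoing it is just the inverse scaling in that same basis.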


European Conference on Computer Vision | 1998

Comprehensive Colour Image Normalization

Graham D. Finlayson; Bernt Schiele; James L. Crowley

The same scene viewed under two different illuminants induces two different colour images. If the two illuminants are the same colour but are placed at different positions then corresponding rgb pixels are related by simple scale factors. In contrast if the lighting geometry is held fixed but the colour of the light changes then it is the individual colour channels (e.g. all the red pixel values or all the green pixels) that are a scaling apart. It is well known that the image dependencies due to lighting geometry and illuminant colour can be respectively removed by normalizing the magnitude of the rgb pixel triplets (e.g. by calculating chromaticities) and by normalizing the lengths of each colour channel (by running the ‘grey-world’ colour constancy algorithm). However, neither normalization suffices to account for changes in both the lighting geometry and illuminant colour.
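The two normalizations the abstract describes can be sketched as follows; alternating them to a fixed point, in the spirit of the paper's comprehensive normalization, removes both dependencies at once. The iteration count, tolerance-free loop, and test image below are illustrative assumptions, not the paper's exact procedure:

```python
import numpy as np

def comprehensive_normalize(image, iters=50):
    # Alternate the two normalizations: scale each pixel's rgb triplet to
    # unit sum (removes lighting-geometry factors), then scale each colour
    # channel to a fixed mean (removes illuminant colour), repeatedly.
    x = image.astype(float)
    for _ in range(iters):
        x = x / x.sum(axis=1, keepdims=True)         # pixel (row) normalization
        x = x / (3 * x.mean(axis=0, keepdims=True))  # channel normalization
    return x

rng = np.random.default_rng(2)
img = rng.uniform(0.1, 1.0, size=(100, 3))           # toy image, pixels as rows
geom = rng.uniform(0.5, 2.0, size=(100, 1))          # per-pixel shading factors
colour = np.array([1.4, 1.0, 0.6])                   # global illuminant colour
# Both variants converge to (numerically) the same normalized image.
a = comprehensive_normalize(img)
b = comprehensive_normalize(img * geom * colour)
```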


International Journal of Computer Vision | 2009

Entropy Minimization for Shadow Removal

Graham D. Finlayson; Mark S. Drew; Cheng Lu

Recently, a method for removing shadows from colour images was developed (Finlayson et al. in IEEE Trans. Pattern Anal. Mach. Intell. 28:59–68, 2006) that relies upon finding a special direction in a 2D chromaticity feature space. This “invariant direction” is that for which particular colour features, when projected into 1D, produce a greyscale image which is approximately invariant to the intensity and colour of the scene illumination. Thus shadows, which are in essence a particular type of lighting, are greatly attenuated. The main approach to finding this special angle is a camera calibration: a colour target is imaged under many different lights, and the direction that best makes colour patch images equal across illuminants is the invariant direction. Here, we take a different approach. In this work, instead of a camera calibration we aim at finding the invariant direction from evidence in the colour image itself. Specifically, we recognize that producing a 1D projection in the correct invariant direction will result in a 1D distribution of pixel values that has smaller entropy than projecting in the wrong direction. The reason is that the correct projection results in a probability distribution spike, formed by pixels that are all the same except for the lighting that produced their observed RGB values and that therefore lie along a line with orientation equal to the invariant direction. Hence we seek the projection that produces a type of intrinsic image, independent of lighting and containing reflectance information only, by minimizing entropy, and from there go on to remove shadows as previously. To be able to develop an effective description of the entropy-minimization task, we go over to the quadratic entropy, rather than Shannon's definition. Replacing the observed pixels with a kernel density probability distribution, the quadratic entropy can be written as a very simple formulation and can be evaluated using the efficient Fast Gauss Transform. The entropy, written in this embodiment, has the advantage that it is less sensitive to quantization than the usual definition. The resulting algorithm is quite reliable, and the shadow removal step produces good shadow-free colour image results whenever strong shadow edges are present in the image. In most cases studied, entropy has a strong minimum for the invariant direction, revealing a new property of image formation.
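A minimal sketch of the quadratic (Rényi) entropy of a Gaussian kernel density estimate, which for a 1D sample reduces to a closed-form double sum; the bandwidth and the two toy projections below are illustrative assumptions, and the direct O(N²) sum stands in for the Fast Gauss Transform the paper uses:

```python
import numpy as np

def quadratic_entropy(x, sigma=0.05):
    # Renyi quadratic entropy of a Gaussian KDE:
    #   H2 = -log( (1/N^2) * sum_ij N(x_i - x_j; 2*sigma^2) )
    # evaluated here by the direct pairwise sum for illustration only.
    d = x[:, None] - x[None, :]
    k = np.exp(-d**2 / (4 * sigma**2)) / np.sqrt(4 * np.pi * sigma**2)
    return -np.log(k.mean())

spike = np.zeros(50)              # all pixels project to a single value
spread = np.linspace(0, 1, 50)    # pixels spread over a range
# A collapsed (correct-direction) projection has lower quadratic entropy
# than a spread (wrong-direction) projection.
```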


Pattern Recognition | 2005

Illuminant and Device Invariant Colour Using Histogram Equalisation

Graham D. Finlayson; Steven D. Hordley; Gerald Schaefer; Gui Yun Tian

Colour can potentially provide useful information for a variety of computer vision tasks such as image segmentation, image retrieval, object recognition and tracking. However, for it to be helpful in practice, colour must relate directly to the intrinsic properties of the imaged objects and be independent of imaging conditions such as scene illumination and the imaging device. To this end many invariant colour representations have been proposed in the literature. Unfortunately, recent work (Second Workshop on Content-based Multimedia Indexing) has shown that none of them provides good enough practical performance. In this paper we propose a new colour invariant image representation based on an existing grey-scale image enhancement technique: histogram equalisation. We show that provided the rank ordering of sensor responses is preserved across a change in imaging conditions (lighting or device) a histogram equalisation of each channel of a colour image renders it invariant to these conditions. We set out theoretical conditions under which rank ordering of sensor responses is preserved and we present empirical evidence which demonstrates that rank ordering is maintained in practice for a wide range of illuminants and imaging devices. Finally, we apply the method to an image indexing application and show that the method outperforms all previous invariant representations, giving close to perfect illumination invariance and very good performance across a change in device.
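A minimal sketch of the channel-wise equalisation argument: because histogram equalisation depends only on the rank order of values, any rank-preserving change of imaging conditions (modelled here by an arbitrary made-up monotonic response curve) leaves the equalised channel untouched:

```python
import numpy as np

def equalize(channel, levels=256):
    # Map each value to its normalized rank: discrete histogram
    # equalisation applied to one colour channel.
    ranks = np.argsort(np.argsort(channel))
    return np.floor(ranks / len(channel) * levels) / levels

rng = np.random.default_rng(3)
red = rng.uniform(size=200)                  # one channel of a toy image
# A strictly increasing (rank-preserving) device/illuminant change,
# chosen arbitrarily for illustration.
red_other_device = np.sqrt(0.2 + 1.7 * red)
a, b = equalize(red), equalize(red_other_device)
```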


Computer Vision and Image Understanding | 1997

Color Constancy for Scenes with Varying Illumination

Kobus Barnard; Graham D. Finlayson; Brian V. Funt

We present an algorithm which uses information from both surface reflectance and illumination variation to solve for color constancy. Most color constancy algorithms assume that the illumination across a scene is constant, but this is very often not valid for real images. The method presented in this work identifies and removes the illumination variation, and in addition uses the variation to constrain the solution. The constraint is applied conjunctively to constraints found from surface reflectances. Thus the algorithm can provide good color constancy when there is sufficient variation in surface reflectances, or sufficient illumination variation, or a combination of both. We present the results of running the algorithm on several real scenes, and the results are very encouraging.

Collaboration


Dive into Graham D. Finlayson's collaborations.

Top Co-Authors


Mark S. Drew

University of East Anglia


Peter Morovic

University of East Anglia


Guoping Qiu

University of Nottingham
