Publications


Featured research published by Brian V. Funt.


IEEE Transactions on Image Processing | 2002

A comparison of computational color constancy algorithms. I: Methodology and experiments with synthesized data

Kobus Barnard; Vlad C. Cardei; Brian V. Funt

We introduce a context for testing computational color constancy, specify our approach to the implementation of a number of the leading algorithms, and report the results of three experiments using synthesized data. Experiments using synthesized data are important because the ground truth is known, possible confounds due to camera characterization and pre-processing are absent, and various factors affecting color constancy can be efficiently investigated because they can be manipulated individually and precisely. The algorithms chosen for close study include two gray world methods, a limiting case of a version of the Retinex method, a number of variants of Forsyth's gamut-mapping method, Cardei et al.'s neural net method, and Finlayson et al.'s color by correlation method. We investigate the ability of these algorithms to make estimates of three different color constancy quantities: the chromaticity of the scene illuminant, the overall magnitude of that illuminant, and a corrected, illumination-invariant image. We consider algorithm performance as a function of the number of surfaces in scenes generated from reflectance spectra, the relative effect on the algorithms of added specularities, and the effect of subsequent clipping of the data. All data are available online at http://www.cs.sfu.ca/~color/data, and implementations for most of the algorithms are also available at http://www.cs.sfu.ca/~color/code.
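
For illustration, the simplest family compared above, the gray world method, estimates the illuminant as the average color of the image under the assumption that scene reflectances average out to gray. A minimal sketch in Python (NumPy only; the array layout and normalization are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

def gray_world_illuminant(image):
    # Estimate the illuminant as the mean RGB over all pixels, returned as
    # a chromaticity (components summing to one).
    mean_rgb = image.reshape(-1, 3).mean(axis=0)
    return mean_rgb / mean_rgb.sum()

def correct_image(image, illuminant):
    # Von Kries-style correction: scale each channel so that the estimated
    # illuminant maps to neutral gray.
    gains = illuminant.mean() / illuminant
    return image * gains
```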


Journal of Electronic Imaging | 2004

Retinex in MATLAB

Brian V. Funt; Florian Ciurea; John J. McCann

Many different descriptions of Retinex methods of lightness computation exist. We provide concise MATLAB™ implementations of two of the spatial techniques of making pixel comparisons. The code is presented, along with test results on several images and a discussion of the results. We also discuss the calibration of input images and the post-Retinex processing required to display the output images.
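
As a rough illustration of the kind of spatial pixel comparison the paper implements, here is a loose Python transcription of the ratio-product-reset-average idea behind the Frankle-McCann variant. It is a sketch under simplifying assumptions (single channel, log-domain input, periodic wrap-around at image borders), not the paper's MATLAB™ code:

```python
import numpy as np

def retinex_sketch(log_img, n_iter=4):
    # log_img: 2D array of log sensor values for one channel.
    est = np.full_like(log_img, log_img.max())  # start every pixel at "white"
    shift = max(log_img.shape) // 2
    while shift >= 1:
        for dy, dx in [(0, shift), (0, -shift), (shift, 0), (-shift, 0)]:
            for _ in range(n_iter):
                ratio = log_img - np.roll(log_img, (dy, dx), axis=(0, 1))
                prod = np.roll(est, (dy, dx), axis=(0, 1)) + ratio  # ratio-product
                prod = np.minimum(prod, log_img.max())              # reset step
                est = (est + prod) / 2.0                            # average step
        shift //= 2
    return est  # estimated log lightness
```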


IEEE Transactions on Image Processing | 2002

A comparison of computational color constancy algorithms. II. Experiments with image data

Kobus Barnard; Lindsay Martin; Adam Coath; Brian V. Funt

We test a number of the leading computational color constancy algorithms using a comprehensive set of images. These were of 33 different scenes under 11 different sources representative of common illumination conditions. The algorithms studied include two gray world methods, a version of the Retinex method, several variants of Forsyth's gamut-mapping method, Cardei et al.'s neural net method, and Finlayson et al.'s color by correlation method. We discuss a number of issues in applying color constancy ideas to image data, and study in depth the effect of different preprocessing strategies. We compare the performance of the algorithms on image data with their performance on synthesized data. All data used for this study are available online at http://www.cs.sfu.ca/~color/data, and implementations for most of the algorithms are also available at http://www.cs.sfu.ca/~color/code. Experiments with synthesized data (part one of this paper) suggested that the methods which emphasize the use of the input data statistics, specifically color by correlation and the neural net algorithm, are potentially the most effective at estimating the chromaticity of the scene illuminant. Unfortunately, we were unable to realize comparable performance on real images. Here, exploiting pixel intensity proved to be more beneficial than exploiting the details of image chromaticity statistics, and the three-dimensional (3-D) gamut-mapping algorithms gave the best performance.
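
A metric widely used in the color constancy literature for scoring an illuminant estimate against the measured illuminant is the angular error between the two vectors. A short illustrative version (the metric is standard; this exact code is not from the paper):

```python
import numpy as np

def angular_error_degrees(estimate, truth):
    # Angle between estimated and true illuminant RGB vectors, in degrees.
    # Only direction (chromaticity) matters; vector magnitudes are ignored.
    e, t = np.asarray(estimate, float), np.asarray(truth, float)
    cos_angle = e @ t / (np.linalg.norm(e) * np.linalg.norm(t))
    return np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0)))
```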


Journal of the Optical Society of America A: Optics, Image Science, and Vision | 1994

Color constancy: generalized diagonal transforms suffice

Graham D. Finlayson; Mark S. Drew; Brian V. Funt

This study’s main result is to show that under the conditions imposed by the Maloney–Wandell color constancy algorithm, whereby illuminants are three-dimensional and reflectances two-dimensional (the 3–2 world), color constancy can be expressed in terms of a simple independent adjustment of the sensor responses (in other words, as a von Kries adaptation type of coefficient rule algorithm) as long as the sensor space is first transformed to a new basis. A consequence of this result is that any color constancy algorithm that makes 3–2 assumptions, such as the Maloney–Wandell subspace algorithm, Forsyth’s MWEXT, and the Funt–Drew lightness algorithm, must effectively calculate a simple von Kries-type scaling of sensor responses, i.e., a diagonal matrix. Our results are strong in the sense that no constraint is placed on the initial spectral sensitivities of the sensors. In addition to purely theoretical arguments, we present results from simulations of von Kries-type color constancy in which the spectra of real illuminants and reflectances, along with the human cone sensitivity functions, are used. The simulations demonstrate that when the cone sensor space is transformed to its new basis in the appropriate manner, a diagonal matrix supports nearly optimal color constancy.
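
A minimal sketch of what the result licenses, assuming the 3x3 basis-change ("sharpening") matrix T and the three per-channel gains are already known; both are assumptions supplied for illustration, not outputs of the paper's derivation:

```python
import numpy as np

def generalized_diagonal_correction(rgb, T, gains):
    # rgb: N x 3 sensor responses under the unknown illuminant.
    # T: 3 x 3 change of sensor basis; gains: length-3 diagonal scaling.
    transformed = rgb @ T.T              # move to the new sensor basis
    adapted = transformed * gains        # von Kries-type coefficient rule
    return adapted @ np.linalg.inv(T).T  # return to the original basis
```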


European Conference on Computer Vision | 1998

Is Machine Colour Constancy Good Enough?

Brian V. Funt; Kobus Barnard; Lindsay Martin

This paper presents a negative result: current machine colour constancy algorithms are not good enough for colour-based object recognition. This result surprised us, since we have previously used the better of these algorithms successfully to correct the colour balance of images for display. Colour balancing has been the typical application of colour constancy; it has rarely been put to use in an actual computer vision system, so our goal was to show how well the various methods would do on an obvious machine colour vision task, namely object recognition. Although all the colour constancy methods we tested proved insufficient for the task, we consider this an important finding in itself. In addition, we present results showing the correlation between colour constancy performance and object recognition performance: as one might expect, the better the colour constancy, the better the recognition rate.


Artificial Intelligence | 1980

Problem-solving with diagrammatic representations

Brian V. Funt

Diagrams are of substantial benefit to WHISPER, a computer problem-solving system, in testing the stability of a “blocks world” structure and predicting the event sequences which occur as that structure collapses. WHISPER's components include a high-level reasoner which knows some qualitative aspects of physics, a simulated parallel-processing “retina” to “look at” its diagrams, and a set of re-drawing procedures for modifying these diagrams. Roughly modelled after the human eye, WHISPER's retina can fixate at any diagram location, and its resolution decreases away from its center. Diagrams enable WHISPER to work with objects of arbitrary shape, detect collisions and other motion discontinuities, discover coincidental alignments, and easily update its world model after a state change. A theoretical analysis is made of the role of diagrams interacting with a general deductive mechanism such as WHISPER's high-level reasoner.


Computer Vision and Image Understanding | 1997

Color Constancy for Scenes with Varying Illumination

Kobus Barnard; Graham D. Finlayson; Brian V. Funt

We present an algorithm that uses information from both surface reflectance and illumination variation to solve for color constancy. Most color constancy algorithms assume that the illumination across a scene is constant, but this is very often not valid for real images. The method presented in this work identifies and removes the illumination variation and, in addition, uses the variation to constrain the solution. The constraint is applied conjunctively to constraints found from surface reflectances. Thus the algorithm can provide good color constancy when there is sufficient variation in surface reflectances, sufficient illumination variation, or a combination of both. We present results of running the algorithm on several real scenes, and they are very encouraging.


European Conference on Computer Vision | 1992

Recovering Shading from Color Images

Brian V. Funt; Mark S. Drew; Michael Brockington

Existing shape-from-shading algorithms assume constant reflectance across the shaded surface. Multi-colored surfaces are excluded because both shading and reflectance affect the measured image intensity. Given a standard RGB color image, we describe a method of eliminating the reflectance effects in order to calculate a shading field that depends only on the relative positions of the illuminant and surface. Of course, shading recovery is closely tied to lightness recovery and our method follows from the work of Land [10, 9], Horn [7] and Blake [1]. In the luminance image, R+G+B, shading and reflectance are confounded. Reflectance changes are located and removed from the luminance image by thresholding the gradient of its logarithm at locations of abrupt chromaticity change. Thresholding can lead to gradient fields which are not conservative (do not have zero curl everywhere and are not integrable) and therefore do not represent realizable shading fields. By applying a new curl-correction technique at the thresholded locations, the thresholding is improved and the gradient fields are forced to be conservative. The resulting Poisson equation is solved directly by the Fourier transform method. Experiments with real images are presented.
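
A compressed sketch of the pipeline the abstract describes: take the gradient of log luminance, zero it where chromaticity changes abruptly, and reintegrate by solving the resulting Poisson equation with the Fourier transform. The threshold value, the periodic boundary handling, and the omission of the curl-correction step are simplifications for illustration:

```python
import numpy as np

def recover_log_shading(rgb, thresh=0.1):
    lum = rgb.sum(axis=2) + 1e-6
    chroma = rgb / lum[..., None]
    log_l = np.log(lum)
    # Forward differences with periodic wrap-around.
    gx = np.roll(log_l, -1, axis=1) - log_l
    gy = np.roll(log_l, -1, axis=0) - log_l
    # Zero the gradient where chromaticity changes abruptly: those edges
    # are attributed to reflectance, not shading.
    cx = np.abs(np.roll(chroma, -1, axis=1) - chroma).sum(axis=2)
    cy = np.abs(np.roll(chroma, -1, axis=0) - chroma).sum(axis=2)
    gx[cx > thresh] = 0.0
    gy[cy > thresh] = 0.0
    # Divergence of the edited field is the Poisson right-hand side.
    div = (gx - np.roll(gx, 1, axis=1)) + (gy - np.roll(gy, 1, axis=0))
    h, w = log_l.shape
    fx = np.fft.fftfreq(w)[None, :]
    fy = np.fft.fftfreq(h)[:, None]
    denom = (2 * np.cos(2 * np.pi * fx) - 2) + (2 * np.cos(2 * np.pi * fy) - 2)
    denom[0, 0] = 1.0  # DC term: shading is recovered up to a constant
    return np.fft.ifft2(np.fft.fft2(div) / denom).real
```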


International Journal of Computer Vision | 1991

Color constancy from mutual reflection

Brian V. Funt; Mark S. Drew; Jian Ho

Mutual reflection occurs when light reflected from one surface illuminates a second surface. In this situation, the color of one or both surfaces can be modified by a color-bleeding effect. In this article we examine how sensor values (e.g., RGB values) are modified in the mutual reflection region and show that a good approximation of the surface spectral reflectance function for each surface can be recovered by using the extra information from mutual reflection. Thus color constancy results from an examination of mutual reflection. Use is made of finite-dimensional linear models for ambient illumination and for surface spectral reflectance. If m and n are the number of basis functions required to model illumination and surface spectral reflectance, respectively, then we find that the number of different sensor classes p must satisfy the condition p ≥ (2n + m)/3. If we use three basis functions to model illumination and three basis functions to model surface spectral reflectance, then only three classes of sensors are required to carry out the algorithm. Results are presented showing a small increase in error over the error inherent in the underlying finite-dimensional models.
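
The sensor-count condition is easy to check numerically; a one-liner under the formula stated in the abstract (the function name is illustrative, not from the paper):

```python
import math

def min_sensor_classes(m, n):
    # Smallest integer p with p >= (2n + m) / 3, where m and n are the
    # numbers of illumination and reflectance basis functions, respectively.
    return math.ceil((2 * n + m) / 3)

assert min_sensor_classes(3, 3) == 3  # the case highlighted in the abstract
```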


Journal of the Optical Society of America A: Optics, Image Science, and Vision | 2002

Estimating the scene illumination chromaticity by using a neural network

Vlad C. Cardei; Brian V. Funt; Kobus Barnard

A neural network can learn color constancy, defined here as the ability to estimate the chromaticity of a scene's overall illumination. We describe a multilayer neural network that is able to recover the illumination chromaticity given only an image of the scene. The network is trained in advance on a set of images of scenes paired with the chromaticities of the corresponding scene illuminants. Experiments with real images show that the network performs better than previous color constancy methods. In particular, the performance is better for images with a relatively small number of distinct colors. The method has application to machine vision problems such as object recognition, where illumination-independent color descriptors are required, and in digital photography, where uncontrolled scene illumination can create an unwanted color cast in a photograph.
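
A sketch of one plausible input encoding used in this line of work, a binarized rg-chromaticity histogram, followed by a generic multilayer-perceptron training step. The bin count, layer sizes, and the scikit-learn stand-in are all illustrative assumptions, not the paper's architecture:

```python
import numpy as np

def binarized_chromaticity_histogram(rgb, bins=16):
    # Encode which rg-chromaticity bins are occupied anywhere in the image.
    flat = rgb.reshape(-1, 3).astype(float)
    s = flat.sum(axis=1) + 1e-6
    r, g = flat[:, 0] / s, flat[:, 1] / s
    hist, _, _ = np.histogram2d(r, g, bins=bins, range=[[0, 1], [0, 1]])
    return (hist > 0).astype(np.float32).ravel()

# Training sketch (illustrative):
# X = np.stack([binarized_chromaticity_histogram(img) for img in images])
# y = np.array(illuminant_rg)   # one (r, g) chromaticity pair per image
# from sklearn.neural_network import MLPRegressor
# net = MLPRegressor(hidden_layer_sizes=(64, 16), max_iter=2000).fit(X, y)
# r_est, g_est = net.predict(X[:1])[0]
```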

Collaboration


Brian V. Funt's top co-authors:

Mark S. Drew, Simon Fraser University

Weihua Xiong, Simon Fraser University

Milan Mosny, Simon Fraser University