
Publications


Featured research published by Mark S. Drew.


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2006

On the removal of shadows from images

Graham D. Finlayson; Steven D. Hordley; Cheng Lu; Mark S. Drew

This paper is concerned with the derivation of a progression of shadow-free image representations. First, we show that adopting certain assumptions about lights and cameras leads to a 1D, gray-scale image representation which is illuminant invariant at each image pixel. We show that as a consequence, images represented in this form are shadow-free. We then extend this 1D representation to an equivalent 2D, chromaticity representation. We show that in this 2D representation, it is possible to relight all the image pixels in the same way, effectively deriving a 2D image representation which is additionally shadow-free. Finally, we show how to recover a 3D, full color shadow-free image representation by first (with the help of the 2D representation) identifying shadow edges. We then remove shadow edges from the edge-map of the original image by edge in-painting and we propose a method to reintegrate this thresholded edge map, thus deriving the sought-after 3D shadow-free image.
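
As a concrete illustration of the first step, the 1D invariant can be obtained by projecting band-ratio log-chromaticities along a fixed direction. A minimal Python sketch (not the authors' code), assuming linear sensor responses and an invariant angle theta obtained elsewhere, e.g. by calibration:

    import numpy as np

    def invariant_grayscale(rgb, theta):
        """Project 2D log-chromaticities onto the invariant direction.

        rgb   : (H, W, 3) linear sensor image, values > 0
        theta : invariant-direction angle in radians (from calibration)
        """
        r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
        # Band-ratio log-chromaticities; some variants divide by the
        # geometric mean of the three channels instead of green.
        x1 = np.log(r / g)
        x2 = np.log(b / g)
        # Projecting along the invariant direction yields a grayscale
        # image approximately independent of illuminant colour/intensity.
        return x1 * np.cos(theta) + x2 * np.sin(theta)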


European Conference on Computer Vision | 2002

Removing Shadows from Images

Graham D. Finlayson; Steven D. Hordley; Mark S. Drew

Illumination conditions cause problems for many computer vision algorithms. In particular, shadows in an image can cause segmentation, tracking, or recognition algorithms to fail. In this paper we propose a method to process a 3-band colour image to locate, and subsequently remove, shadows. The result is a 3-band colour image which contains all the original salient information in the image, except that the shadows are gone. We use the method set out in [1] to derive a 1D illumination-invariant shadow-free image. We then use this invariant image together with the original image to locate shadow edges. By setting these shadow edges to zero in an edge representation of the original image, and by subsequently re-integrating this edge representation by a method paralleling lightness recovery, we are able to arrive at our sought-after full colour, shadow-free image. Preliminary results reported in the paper show that the method is effective. A caveat for the application of the method is that we must have a calibrated camera. We show in this paper that a good calibration can be achieved simply by recording a sequence of images of a fixed outdoor scene over the course of a day. After calibration, only a single image is required for shadow removal. It is shown that the resulting calibration is close to that achievable using measurements of the camera's sensitivity functions.
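
The calibration can be pictured as a line fit in log-chromaticity space: as the daylight changes over the day, each patch of the fixed scene traces out a line, and the sought direction is orthogonal to the common orientation of those lines. A hedged sketch under that reading, where logchrom and invariant_angle are illustrative names, not from the paper:

    import numpy as np

    def invariant_angle(logchrom):
        """Estimate the invariant direction from a calibration sequence.

        logchrom : (n_images, n_patches, 2) log-chromaticities of the
                   same patches imaged under different daylight illuminants.
        """
        # Remove each patch's mean so only illumination-induced motion remains.
        centred = logchrom - logchrom.mean(axis=0, keepdims=True)
        pts = centred.reshape(-1, 2)
        # Principal direction of the residuals = direction of lighting change.
        _, _, vt = np.linalg.svd(pts, full_matrices=False)
        light_dir = vt[0]
        # The invariant direction is orthogonal to the lighting direction.
        inv_dir = np.array([-light_dir[1], light_dir[0]])
        return np.arctan2(inv_dir[1], inv_dir[0])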


European Conference on Computer Vision | 2004

Intrinsic Images by Entropy Minimization

Graham D. Finlayson; Mark S. Drew; Cheng Lu

A method was recently devised for the recovery of an invariant image from a 3-band colour image. The invariant image, originally 1D greyscale but here derived as a 2D chromaticity, is independent of lighting, and also has shading removed: it forms an intrinsic image that may be used as a guide in recovering colour images that are independent of illumination conditions. Invariance to illuminant colour and intensity means that such images are also, to a good degree, free of shadows. The method finds an intrinsic reflectivity image based on assumptions of Lambertian reflectance, approximately Planckian lighting, and fairly narrowband camera sensors. Nevertheless, the method works well even when these assumptions do not hold. A crucial piece of information is the angle for an “invariant direction” in a log-chromaticity space. To date, we have gleaned this information via a preliminary calibration routine, using the camera involved to capture images of a colour target under different lights. In this paper, we show that we can in fact dispense with the calibration step, by recognizing a simple but important fact: the correct projection is that which minimizes entropy in the resulting invariant image. To show that this must be the case we first consider synthetic images, and then apply the method to real images. We show not only that a correct shadow-free image emerges, but also that the angle found agrees with that recovered from a calibration. As a result, we can find shadow-free images for images taken with an unknown camera, and the method is applied successfully to remove shadows from unsourced imagery.
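
A minimal sketch of the entropy-minimization search, assuming the log-chromaticity coordinates x1, x2 have already been computed and flattened; the paper's actual procedure includes details (outlier trimming, bin-width selection) omitted here:

    import numpy as np

    def best_angle(x1, x2, n_angles=180, bins=64):
        """Pick the projection angle minimizing the Shannon entropy of a
        histogram of the resulting 1D invariant image."""
        best, best_h = 0.0, np.inf
        for theta in np.linspace(0.0, np.pi, n_angles, endpoint=False):
            proj = x1 * np.cos(theta) + x2 * np.sin(theta)
            p, _ = np.histogram(proj, bins=bins)
            p = p[p > 0] / p.sum()           # normalize, drop empty bins
            h = -(p * np.log(p)).sum()       # Shannon entropy
            if h < best_h:
                best, best_h = theta, h
        return best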


Journal of the Optical Society of America A: Optics, Image Science, and Vision | 1994

Color constancy: generalized diagonal transforms suffice

Graham D. Finlayson; Mark S. Drew; Brian V. Funt

This study’s main result is to show that under the conditions imposed by the Maloney–Wandell color constancy algorithm, whereby illuminants are three dimensional and reflectances two dimensional (the 3–2 world), color constancy can be expressed in terms of a simple independent adjustment of the sensor responses (in other words, as a von Kries adaptation type of coefficient rule algorithm) as long as the sensor space is first transformed to a new basis. A consequence of this result is that any color constancy algorithm that makes 3–2 assumptions, such as the Maloney–Wandell subspace algorithm, Forsyth’s MWEXT, and the Funt–Drew lightness algorithm, must effectively calculate a simple von Kries-type scaling of sensor responses, i.e., a diagonal matrix. Our results are strong in the sense that no constraint is placed on the initial spectral sensitivities of the sensors. In addition to purely theoretical arguments, we present results from simulations of von Kries-type color constancy in which the spectra of real illuminants and reflectances along with the human-cone-sensitivity functions are used. The simulations demonstrate that when the cone sensor space is transformed to its new basis in the appropriate manner a diagonal matrix supports nearly optimal color constancy.
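
The result can be read as saying that colour constancy reduces to a diagonal matrix D sandwiched between a fixed change of basis T and its inverse, i.e. the map T⁻¹DT applied to sensor responses. A small sketch of applying such a generalized diagonal transform, where T and the per-channel gains are assumed given (e.g. from a fit to illuminant and reflectance data):

    import numpy as np

    def von_kries_in_new_basis(rgb, T, scales):
        """Apply a generalized diagonal (von Kries) transform.

        rgb    : (..., 3) sensor responses
        T      : (3, 3) change-of-basis matrix
        scales : (3,) independent channel gains in the new basis
        """
        new = rgb @ T.T                        # move to the new sensor basis
        adapted = new * scales                 # von Kries coefficient rule
        return adapted @ np.linalg.inv(T).T    # return to the original basis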


IEEE Signal Processing Magazine | 2005

Color image processing pipeline

Rajeev Ramanath; Wesley E. Snyder; Youngjun Yoo; Mark S. Drew

Digital still color cameras (DSCs) have gained significant popularity in recent years, with projected sales on the order of 44 million units by the year 2005. Such an explosive demand calls for an understanding of the processing involved and the implementation issues, bearing in mind the otherwise difficult problems these cameras solve. This article presents an overview of the image processing pipeline, first from a signal processing perspective and later from an implementation perspective, along with the tradeoffs involved.
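
For orientation, the rendering stages such a pipeline chains together can be caricatured in a few lines. This is a toy sketch only, not the article's reference pipeline; it assumes an already-demosaicked linear input and omits denoising, tone mapping, and the other stages the article discusses:

    import numpy as np

    def simple_pipeline(raw_rgb, wb_gains, ccm, gamma=1 / 2.2):
        """Toy rendering: white balance -> colour correction -> gamma.

        raw_rgb  : (H, W, 3) demosaicked linear camera RGB in [0, 1]
        wb_gains : (3,) per-channel white-balance gains
        ccm      : (3, 3) camera-to-output colour-correction matrix
        """
        wb = raw_rgb * wb_gains                        # illuminant correction
        out_linear = np.clip(wb @ ccm.T, 0.0, 1.0)     # colour-space conversion
        return out_linear ** gamma                     # display encoding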


International Journal of Computer Vision | 2009

Entropy Minimization for Shadow Removal

Graham D. Finlayson; Mark S. Drew; Cheng Lu

Recently, a method for removing shadows from colour images was developed (Finlayson et al. in IEEE Trans. Pattern Anal. Mach. Intell. 28:59–68, 2006) that relies upon finding a special direction in a 2D chromaticity feature space. This “invariant direction” is that for which particular colour features, when projected into 1D, produce a greyscale image which is approximately invariant to the intensity and colour of the scene illumination. Thus shadows, which are in essence a particular type of lighting, are greatly attenuated. The main approach to finding this special angle is a camera calibration: a colour target is imaged under many different lights, and the direction that best makes colour patch images equal across illuminants is the invariant direction. Here, we take a different approach. In this work, instead of a camera calibration we aim at finding the invariant direction from evidence in the colour image itself. Specifically, we recognize that projecting in the correct invariant direction produces a 1D distribution of pixel values with smaller entropy than projecting in the wrong direction. The reason is that the correct projection results in a spike in the probability distribution: pixels of the same surface, differing only in the lighting that produced their observed RGB values, lie along a line with orientation equal to the invariant direction, and so project to the same value. Hence, by minimizing entropy we seek the projection that produces a type of intrinsic image, carrying reflectance information only and independent of lighting, and from there go on to remove shadows as previously. To develop an effective description of the entropy-minimization task, we adopt the quadratic entropy rather than Shannon's definition. Replacing the observed pixels with a kernel density probability distribution, the quadratic entropy can be written in a very simple formulation and evaluated using the efficient Fast Gauss Transform. Written in this form, the entropy has the advantage of being much less sensitive to quantization than the usual definition. The resulting algorithm is quite reliable, and the shadow removal step produces good shadow-free colour image results whenever strong shadow edges are present in the image. In most cases studied, entropy has a strong minimum for the invariant direction, revealing a new property of image formation.
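
The quadratic (Rényi order-2) entropy of a Gaussian kernel density estimate reduces to a sum of pairwise Gaussian evaluations, which is exactly the kind of sum the Fast Gauss Transform accelerates. A naive O(N²) sketch of that quantity (on a subsample of pixels; the paper's FGT evaluation of the same sum is effectively linear time):

    import numpy as np

    def quadratic_entropy(x, sigma):
        """Quadratic entropy -log(integral of p^2) for a 1D sample x
        under a Gaussian KDE with bandwidth sigma.

        Uses the identity: the convolution of two Gaussians of std sigma
        is a Gaussian of variance 2*sigma^2, so integral(p^2) is the mean
        of pairwise Gaussian kernel values.
        """
        d = x[:, None] - x[None, :]                 # pairwise differences
        s2 = 2.0 * sigma ** 2                       # variance after convolution
        k = np.exp(-d ** 2 / (2.0 * s2)) / np.sqrt(2.0 * np.pi * s2)
        return -np.log(k.mean())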


European Conference on Computer Vision | 1992

Recovering Shading from Color Images

Brian V. Funt; Mark S. Drew; Michael Brockington

Existing shape-from-shading algorithms assume constant reflectance across the shaded surface. Multi-colored surfaces are excluded because both shading and reflectance affect the measured image intensity. Given a standard RGB color image, we describe a method of eliminating the reflectance effects in order to calculate a shading field that depends only on the relative positions of the illuminant and surface. Of course, shading recovery is closely tied to lightness recovery and our method follows from the work of Land [10, 9], Horn [7] and Blake [1]. In the luminance image, R+G+B, shading and reflectance are confounded. Reflectance changes are located and removed from the luminance image by thresholding the gradient of its logarithm at locations of abrupt chromaticity change. Thresholding can lead to gradient fields which are not conservative (do not have zero curl everywhere and are not integrable) and therefore do not represent realizable shading fields. By applying a new curl-correction technique at the thresholded locations, the thresholding is improved and the gradient fields are forced to be conservative. The resulting Poisson equation is solved directly by the Fourier transform method. Experiments with real images are presented.
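
The Fourier-transform solution of the resulting Poisson equation has a compact form: under periodic boundary assumptions, one divides the transform of the divergence by the eigenvalues of the discrete Laplacian. A minimal sketch, assuming a curl-corrected forward-difference gradient field of the log image:

    import numpy as np

    def reintegrate(gx, gy):
        """Recover an image from a gradient field by solving the Poisson
        equation with the Fourier-transform method (periodic boundaries).

        gx, gy : (H, W) forward-difference x- and y-derivatives.
        """
        H, W = gx.shape
        # Divergence via backward differences gives the 5-point Laplacian.
        div = (gx - np.roll(gx, 1, axis=1)) + (gy - np.roll(gy, 1, axis=0))
        wx = 2.0 * np.pi * np.fft.fftfreq(W)
        wy = 2.0 * np.pi * np.fft.fftfreq(H)
        # Eigenvalues of the periodic discrete Laplacian.
        denom = (2.0 * np.cos(wx)[None, :] - 2.0) + (2.0 * np.cos(wy)[:, None] - 2.0)
        denom[0, 0] = 1.0                 # zero frequency: arbitrary constant
        z = np.fft.ifft2(np.fft.fft2(div) / denom).real
        return z - z.mean()               # fix the unknown offset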


Computer Vision and Pattern Recognition | 2006

Unsupervised Discovery of Action Classes

Yang Wang; Hao Jiang; Mark S. Drew; Ze-Nian Li; Greg Mori

In this paper we consider the problem of describing the action being performed by human figures in still images. We will attack this problem using an unsupervised learning approach, attempting to discover the set of action classes present in a large collection of training images. These action classes will then be used to label test images. Our approach uses the coarse shape of the human figures to match pairs of images. The distance between a pair of images is computed using a linear programming relaxation technique. This is a computationally expensive process, and we employ a fast pruning method to enable its use on a large collection of images. Spectral clustering is then performed using the resulting distances. We present clustering and image labeling results on a variety of datasets.
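
The final clustering stage follows the standard spectral recipe: convert pairwise distances into affinities and cluster using the leading eigenvectors of the resulting similarity graph. A sketch using scikit-learn (which postdates the paper; the LP-based distance computation is assumed done elsewhere, and sigma is an illustrative scale parameter):

    import numpy as np
    from sklearn.cluster import SpectralClustering

    def cluster_actions(D, n_classes, sigma):
        """Spectral clustering on a precomputed distance matrix.

        D : (N, N) pairwise image distances (e.g. from LP matching)
        """
        A = np.exp(-(D / sigma) ** 2)      # Gaussian affinity from distances
        model = SpectralClustering(n_clusters=n_classes,
                                   affinity="precomputed")
        return model.fit_predict(A)        # cluster label per image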


International Journal of Computer Vision | 1991

Color constancy from mutual reflection

Brian V. Funt; Mark S. Drew; Jian Ho

Mutual reflection occurs when light reflected from one surface illuminates a second surface. In this situation, the color of one or both surfaces can be modified by a color-bleeding effect. In this article we examine how sensor values (e.g., RGB values) are modified in the mutual reflection region and show that a good approximation of the surface spectral reflectance function for each surface can be recovered by using the extra information from mutual reflection. Thus color constancy results from an examination of mutual reflection. Use is made of finite-dimensional linear models for ambient illumination and for surface spectral reflectance. If m and n are the number of basis functions required to model illumination and surface spectral reflectance respectively, then we find that the number of different sensor classes p must satisfy the condition p ≥ (2n + m)/3. If we use three basis functions to model illumination and three basis functions to model surface spectral reflectance, then only three classes of sensors are required to carry out the algorithm. Results are presented showing a small increase in error over the error inherent in the underlying finite-dimensional models.


IEEE Transactions on Pattern Analysis and Machine Intelligence | 1990

Separating a color signal into illumination and surface reflectance components: theory and applications

Jian Ho; Brian V. Funt; Mark S. Drew

A separation algorithm for achieving color constancy and theorems concerning its accuracy are presented. The algorithm requires extra information, over and above the usual three values mapping human cone responses, from the optical system. However, with this additional information, specifically a sampling across the visible range of the reflected color-signal spectrum impinging on the optical sensor, the authors are able to separate the illumination spectrum from the surface reflectance spectrum contained in the color-signal spectrum, which is, of course, the product of these two spectra. At the heart of the separation algorithm is a general statistical method for finding the best illumination and reflectance spectra, within a space represented by finite-dimensional linear models of statistically typical spectra, whose product closely corresponds to the spectrum of the actual color signal. Using this method, the authors are able to increase the dimensionality of the finite-dimensional linear model for surfaces to a realistic value. One method of generating the spectral samples required for the separation algorithm is to use the chromatic aberration effects of a lens; an example of this is given. The accuracy achieved in a large range of tests is detailed, and it is shown that agreement with actual surface reflectance is excellent.
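
The separation is at heart a bilinear fit: the sampled spectrum should factor into an illumination term and a reflectance term, each confined to a low-dimensional linear model. The paper develops its own statistical method for this; purely as a generic illustration of the same bilinear objective (not the authors' algorithm), an alternating-least-squares sketch:

    import numpy as np

    def separate(c, E_basis, S_basis, iters=50):
        """Fit the bilinear model c_k ~= (E_basis @ a)_k * (S_basis @ b)_k.

        c       : (K,) sampled colour-signal spectrum
        E_basis : (K, m) illumination basis functions
        S_basis : (K, n) reflectance basis functions
        The factorization is only determined up to an overall scale.
        """
        # Start from a flat reflectance estimate.
        b = np.linalg.lstsq(S_basis, np.ones_like(c), rcond=None)[0]
        for _ in range(iters):
            S = S_basis @ b
            # With reflectance fixed, c is linear in the illumination weights.
            a = np.linalg.lstsq(E_basis * S[:, None], c, rcond=None)[0]
            E = E_basis @ a
            # And vice versa for the reflectance weights.
            b = np.linalg.lstsq(S_basis * E[:, None], c, rcond=None)[0]
        return E_basis @ a, S_basis @ b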

Collaboration


Dive into Mark S. Drew's collaboration.

Top Co-Authors

Ze-Nian Li, Simon Fraser University

Hao Jiang, Simon Fraser University

Cheng Lu, Simon Fraser University

Ali Madooei, Simon Fraser University