David H. Brainard
University of Pennsylvania
Publications
Featured research published by David H. Brainard.
Spatial Vision | 1997
David H. Brainard
The Psychophysics Toolbox is a software package that supports visual psychophysics. Its routines provide an interface between a high-level interpreted language (MATLAB on the Macintosh) and the video display hardware. A set of example programs is included with the Toolbox distribution.
Journal of the Optical Society of America A: Optics, Image Science, and Vision | 1997
David H. Brainard; William T. Freeman
The problem of color constancy may be solved if we can recover the physical properties of illuminants and surfaces from photosensor responses. We consider this problem within the framework of Bayesian decision theory. First, we model the relation among illuminants, surfaces, and photosensor responses. Second, we construct prior distributions that describe the probability that particular illuminants and surfaces exist in the world. Given a set of photosensor responses, we can then use Bayes's rule to compute the posterior distribution for the illuminants and the surfaces in the scene. There are two widely used methods for obtaining a single best estimate from a posterior distribution. These are maximum a posteriori (MAP) and minimum mean-square-error (MMSE) estimation. We argue that neither is appropriate for perception problems. We describe a new estimator, which we call the maximum local mass (MLM) estimate, that integrates local probability density. The new method uses an optimality criterion that is appropriate for perception tasks: It finds the most probable approximately correct answer. For the case of low observation noise, we provide an efficient approximation. We develop the MLM estimator for the color-constancy problem in which flat matte surfaces are uniformly illuminated. In simulations we show that the MLM method performs better than the MAP estimator and better than a number of standard color-constancy algorithms. We note conditions under which even the optimal estimator produces poor estimates: when the spectral properties of the surfaces in the scene are biased.
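The contrast between MAP and maximum-local-mass estimation can be illustrated on a toy one-dimensional posterior. This is a minimal sketch of the general idea only; the densities and the tolerance window below are made-up illustrative choices, not values or code from the paper.

```python
import numpy as np

# Toy posterior: a tall narrow spike plus a broad bump. MAP picks the
# spike; the maximum-local-mass (MLM) estimate integrates probability
# over a local tolerance window, so it prefers the broad bump -- the
# "most probable approximately correct" answer.
x = np.linspace(0.0, 10.0, 2001)
dx = x[1] - x[0]

def gaussian(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

posterior = 0.2 * gaussian(x, 2.0, 0.02) + 0.8 * gaussian(x, 7.0, 1.0)
posterior /= posterior.sum() * dx  # renormalize on the grid

# MAP: the single highest-density point (the narrow spike).
map_estimate = x[np.argmax(posterior)]

# MLM: maximize probability mass within +/- 0.5 of the candidate answer,
# computed here by convolving the posterior with a boxcar window.
half_width = 0.5
k = int(half_width / dx)
local_mass = np.convolve(posterior, np.ones(2 * k + 1), mode="same") * dx
mlm_estimate = x[np.argmax(local_mass)]
```

With these numbers the spike at 2.0 wins under MAP, while the broad bump near 7.0 carries more mass within the tolerance window and wins under MLM.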
Journal of the Optical Society of America A: Optics, Image Science, and Vision | 1986
David H. Brainard; Brian A. Wandell
If color appearance is to be a useful feature in identifying an object, then color appearance must remain roughly constant when the object is viewed in different contexts. People maintain approximate color constancy despite variation in the color of nearby objects and despite variation in the spectral power distribution of the ambient light. Land's retinex algorithm is a model of human color constancy. We analyze the retinex algorithm and discuss its general properties. We show that the algorithm is too sensitive to changes in the color of nearby objects to serve as an adequate model of human color constancy.
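The kind of context sensitivity identified here can be seen in a toy "scale-by-max" normalization, which is often used as the simplest stand-in for retinex-style lightness computation. This sketch illustrates the general idea, not the specific algorithm analyzed in the paper; all values are made up.

```python
import numpy as np

def scale_by_max(scene):
    """scene: (n_surfaces, 3) array of sensor responses. Normalize each
    channel by its maximum over all surfaces in the scene."""
    scene = np.asarray(scene, dtype=float)
    return scene / scene.max(axis=0)

# The same test surface viewed next to two different neighbors:
test_surface = [0.3, 0.3, 0.3]
scene_1 = np.array([test_surface, [0.9, 0.9, 0.9]])  # neutral bright neighbor
scene_2 = np.array([test_surface, [0.9, 0.3, 0.3]])  # reddish neighbor

out_1 = scale_by_max(scene_1)[0]
out_2 = scale_by_max(scene_2)[0]
# out_1 != out_2: the test surface's normalized value changes when only a
# nearby object changes -- exactly the oversensitivity the paper identifies.
```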
Journal of the Optical Society of America A: Optics, Image Science, and Vision | 1992
David H. Brainard; Brian A. Wandell
We report the results of matching experiments designed to study the color appearance of objects rendered under different simulated illuminants on a CRT monitor. Subjects set asymmetric color matches between a standard object and a test object that were rendered under illuminants with different spectral power distributions. For any illuminant change, we found that the mapping between the cone coordinates of matching standard and test objects was well approximated by a diagonal linear transformation. In this sense, our results are consistent with von Kries's hypothesis [Handb. Physiol. Menschen 3, 109 (1905) [in Sources of Color Vision, D. L. MacAdam, ed. (MIT Press, Cambridge, Mass., 1970)]] that adaptation simply changes the relative sensitivity of the different cone classes. In addition, we examined the dependence of the diagonal transformation on the illuminant change. For the range of illuminants tested, we found that the change in the diagonal elements of the linear transformation was a linear function of the illuminant change.
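A von Kries diagonal transformation of the kind described above can be sketched in a few lines: each cone class is scaled independently, here by the ratio of the two illuminants' white-point responses. The LMS numbers below are invented for illustration and are not data from the study.

```python
import numpy as np

def von_kries_transform(lms, lms_white_src, lms_white_dst):
    """Map cone coordinates measured under a source illuminant to the
    destination illuminant by scaling each cone class independently --
    a diagonal linear transformation on (L, M, S)."""
    scale = np.asarray(lms_white_dst) / np.asarray(lms_white_src)
    return np.asarray(lms) * scale

# Cone responses of a surface under illuminant A, plus the two white points:
lms_under_A = np.array([0.40, 0.30, 0.20])
white_A = np.array([1.00, 0.90, 0.60])   # warm illuminant (illustrative)
white_D = np.array([0.95, 1.00, 1.05])   # bluish illuminant (illustrative)

lms_under_D = von_kries_transform(lms_under_A, white_A, white_D)
```

Because the transform is diagonal, each cone coordinate depends only on its own class's sensitivity change, which is the content of the von Kries hypothesis tested in the paper.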
Journal of the Optical Society of America A: Optics, Image Science, and Vision | 1998
David H. Brainard
Most empirical work on color constancy is based on simple laboratory models of natural viewing conditions. These typically consist of spots seen against uniform backgrounds or computer simulations of flat surfaces seen under spatially uniform illumination. In this study measurements were made under more natural viewing conditions. Observers used a projection colorimeter to adjust the appearance of a test patch until it appeared achromatic. Observers made such achromatic settings under a variety of illuminants and when the test surface was viewed against a number of different backgrounds. An analysis of the achromatic settings reveals that observers show good color constancy when the illumination is varied. Changing the background surface against which the test patch is seen, on the other hand, has a relatively small effect on the achromatic loci. The results thus indicate that constancy is not achieved by a simple comparison between the test surface and its local surround.
Vision Research | 1996
David R. Williams; Pablo Artal; Rafael Navarro; Matthew J. McMahon; David H. Brainard
Using the double pass procedure, Navarro et al. (1993; Journal of the Optical Society of America A, 10, 201-212) measured the monochromatic modulation transfer function (MTF) of the human eye as a function of retinal eccentricity. They chose conditions as similar as possible to those encountered in natural viewing. We report new measurements obtained with conditions chosen instead to optimize retinal image quality: we paralyzed accommodation, used a 3 mm pupil, and corrected defocus and oblique astigmatism at each retinal location. MTFs were estimated at the tangential focus, circle of least confusion, and sagittal focus produced by oblique astigmatism. Though optical blur is well-known to have little effect on peripheral visual acuity, it can nonetheless substantially reduce aliasing by receptoral and post-receptoral spatial sampling.
Proceedings of the IEEE | 2002
Philippe Longère; Xuemei Zhang; Peter B. Delahunt; David H. Brainard
Demosaicing is an important part of the image-processing chain for many digital color cameras. The demosaicing operation converts a raw image acquired with a single sensor array, overlaid with a color filter array, into a full-color image. In this paper, we report the results of two perceptual experiments that compare the perceptual quality of the output of different demosaicing algorithms. In the first experiment, we found that a Bayesian demosaicing algorithm produced the most preferred images. Detailed examination of the data, however, indicated that the good performance of this algorithm was at least in part due to the fact that it sharpened the images while it demosaiced them. In a second experiment, we silenced image sharpness as a factor by applying a sharpening algorithm to the output of each demosaicing algorithm. The optimal amount of sharpening to be applied to each image was chosen using the results of a preliminary experiment. Once sharpness was equated in this way, an algorithm developed by Freeman, based on bilinear interpolation combined with median filtering, gave the best results. An analysis of our data suggests that our perceptual results cannot be easily predicted using an image metric.
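Bilinear interpolation, the baseline that the Freeman method builds on, can be sketched as a normalized 3x3 average of the known samples of each channel. This is a minimal illustrative implementation assuming an RGGB Bayer pattern; it is not code from the paper and omits the median-filtering step of the Freeman algorithm.

```python
import numpy as np

def conv3x3(img, kernel):
    """3x3 correlation with edge replication, in pure NumPy."""
    h, w = img.shape
    padded = np.pad(img, 1, mode="edge")
    out = np.zeros((h, w))
    for di in range(3):
        for dj in range(3):
            out += kernel[di, dj] * padded[di:di + h, dj:dj + w]
    return out

def bilinear_demosaic(raw):
    """Fill in the two missing channels at each pixel of an RGGB mosaic by
    averaging the known 3x3 neighbors of that channel; measured samples
    are kept unchanged."""
    h, w = raw.shape
    masks = np.zeros((3, h, w), dtype=bool)
    masks[0, 0::2, 0::2] = True   # R sites
    masks[1, 0::2, 1::2] = True   # G sites on R rows
    masks[1, 1::2, 0::2] = True   # G sites on B rows
    masks[2, 1::2, 1::2] = True   # B sites
    kernel = np.ones((3, 3))
    rgb = np.zeros((h, w, 3))
    for c in range(3):
        vals = conv3x3(raw * masks[c], kernel)
        weights = conv3x3(masks[c].astype(float), kernel)
        rgb[..., c] = np.where(masks[c], raw, vals / weights)
    return rgb

# Mosaic of a uniform scene with true color (0.8, 0.5, 0.2):
raw = np.zeros((6, 6))
raw[0::2, 0::2] = 0.8   # R samples
raw[0::2, 1::2] = 0.5   # G samples
raw[1::2, 0::2] = 0.5
raw[1::2, 1::2] = 0.2   # B samples
rgb = bilinear_demosaic(raw)
```

On a uniform scene this recovers the true color everywhere; the perceptually important failures (zippering, false color) appear at edges, which is what motivates the median-filtering refinement.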
Journal of Vision | 2011
David H. Brainard; Laurence T. Maloney
Vision provides information about the properties and identity of objects. The ease with which we perceive object properties belies the difficulty of the underlying information-processing task. In the case of object color, retinal information about object reflectance is confounded with information about the illumination as well as about the object's shape and pose. There is no obvious rule that allows transformation of the retinal image to a color representation that depends primarily on object surface reflectance. Under many circumstances, however, object color appearance is remarkably stable across scenes in which the object is viewed. Here, we review a line of experiments and theory that aim to understand how the visual system stabilizes object color appearance. Our emphasis is on models derived from explicit analysis of the computational problem of estimating the physical properties of illuminants and surfaces from the retinal image, and experiments that test these models. We argue that this approach has considerable promise for allowing generalization from simplified laboratory experiments to richer scenes that more closely approximate natural viewing. We discuss the relation between the work we review and other theoretical approaches available in the literature.
Animal Behaviour | 1998
Leo J. Fleishman; William McClintock; Richard B. D'Eath; David H. Brainard; John A. Endler
Affiliations: Department of Biological Sciences, Union College (L. J. Fleishman); Department of Ecology, Evolution and Marine Biology, University of California at Santa Barbara, U.S.A. (W. J. McClintock); Animal Biology Division, Scottish Agricultural College, Edinburgh (R. B. D'Eath); Department of Psychology, University of California at Santa Barbara, U.S.A. (D. H. Brainard); Department of Zoology and Tropical Ecology, James Cook University, Australia (J. A. Endler).
Journal of the Optical Society of America A: Optics, Image Science, and Vision | 1993
Nobutoshi Sekiguchi; David R. Williams; David H. Brainard
We examined the limitations imposed by neural factors on spatial contrast sensitivity for both isochromatic and isoluminant gratings. We used two strategies to isolate these neural factors. First, we eliminated the effect of blurring by the dioptrics of the eye by using interference fringes. Second, we corrected our data for additional sensitivity losses up to and including the site of photon absorption by applying an ideal-observer analysis described by Geisler [J. Opt. Soc. Am. A 1, 775 (1984)]. Our measurements indicate that the neural visual system modifies the shape of the contrast-sensitivity functions for both isochromatic and isoluminant stimuli at high spatial frequencies. If we assume that the high-spatial-frequency performance of the neural visual system is determined by a low-pass spatial filter followed by additive noise, then the visual system has a spatial bandwidth 1.8 times lower for isoluminant red-green than for isochromatic stimuli. On the other hand, we find no difference in bandwidth or sensitivity of the neural visual system for isoluminant red-green and S-cone-isolated stimuli.
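The "low-pass spatial filter followed by additive noise" assumption can be made concrete with a toy calculation: if sensitivity falls off exponentially with spatial frequency, log-sensitivity is linear in frequency and the filter's space constant (bandwidth) can be read off the slope. The contrast-sensitivity values below are fabricated to illustrate the bookkeeping, not data from the study.

```python
import numpy as np

freqs = np.array([5.0, 10.0, 20.0, 30.0])           # spatial frequency, c/deg
iso_chrom = 100.0 * np.exp(-freqs / 15.0)            # isochromatic CSF (toy)
iso_lum = 100.0 * np.exp(-freqs / (15.0 / 1.8))      # red-green CSF, 1.8x narrower (toy)

def bandwidth(freqs, sensitivity):
    """Under an exponential falloff, log-sensitivity vs. frequency is a
    line with slope -1/f0; return the space constant f0."""
    slope = np.polyfit(freqs, np.log(sensitivity), 1)[0]
    return -1.0 / slope

ratio = bandwidth(freqs, iso_chrom) / bandwidth(freqs, iso_lum)
# ratio recovers the factor built into the toy data (1.8), mirroring how a
# bandwidth ratio between conditions would be extracted from measured CSFs.
```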