Frederick A. A. Kingdom
McGill University
Publications
Featured research published by Frederick A. A. Kingdom.
Perception | 2004
Adriana Olmos; Frederick A. A. Kingdom
We present an algorithm for separating the shading and reflectance images of photographed natural scenes. The algorithm exploits the constraint that in natural scenes chromatic and luminance variations that are co-aligned mainly arise from changes in surface reflectance, whereas near-pure luminance variations mainly arise from shading and shadows. The novel aspect of the algorithm is the initial separation of the image into luminance and chromatic image planes that correspond to the luminance, red–green, and blue–yellow channels of the primate visual system. The red–green and blue–yellow image planes are analysed to provide a map of the changes in surface reflectance, which is then used to separate the reflectance from shading changes in both the luminance and chromatic image planes. The final reflectance image is obtained by reconstructing the chromatic and luminance-reflectance-change maps, while the shading image is obtained by subtracting the reconstructed luminance-reflectance image from the original luminance image. A number of image examples are included to illustrate the successes and limitations of the algorithm.
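The co-alignment constraint at the heart of the algorithm can be caricatured in a few lines of code. This is a hedged toy sketch with invented function names and thresholds, not the published implementation:

```python
import numpy as np

def grad_mag(img):
    """Simple gradient magnitude of an image plane."""
    gy, gx = np.gradient(np.asarray(img, dtype=float))
    return np.hypot(gx, gy)

def classify_variations(luminance, rg, by, thresh=0.05):
    """Label pixels by the co-alignment cue: variations where chromatic
    and luminance gradients coincide are tagged as reflectance changes;
    near-pure luminance variations are tagged as shading. The threshold
    and gradient operator are illustrative choices, not those of the
    published algorithm."""
    lum_g = grad_mag(luminance)
    chrom_g = grad_mag(rg) + grad_mag(by)
    reflectance = (lum_g > thresh) & (chrom_g > thresh)
    shading = (lum_g > thresh) & (chrom_g <= thresh)
    return reflectance, shading

# Toy image: a vertical edge present in both luminance and red-green
# (a reflectance change) and a horizontal edge in luminance only (shading).
lum = np.zeros((8, 8))
lum[:, 4:] += 0.5       # vertical luminance edge
lum[4:, :] += 0.5       # horizontal luminance edge
rg = np.zeros((8, 8))
rg[:, 4:] = 1.0         # chromatic change co-aligned with the vertical edge
by = np.zeros((8, 8))

reflectance, shading = classify_variations(lum, rg, by)
```

The vertical edge, present in both the luminance and red-green planes, is flagged as reflectance; the luminance-only horizontal edge is flagged as shading.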
Perception | 1990
Bernard Moulden; Frederick A. A. Kingdom; Linda F Gatley
Michelson's contrast, C, is an excellent metric for contrast in images with periodic luminance profiles, such as gratings, but is not suitable for images consisting of isolated stimulus elements, e.g. single bars; other metrics have been devised for such stimuli. But what metric should be used for random-dot images such as are commonly used in stereograms and kinematograms? Previously the standard deviation (SD) of the luminances (equivalent to the root mean square, RMS, of the amplitudes) has been taken as a measure of contrast, but on little more than intuitive grounds. The validity of this speculative usage is tested. Experiments are described in which a wide range of random-dot images of various compositions was used and the adapting power of these images measured. This was taken as an index of their visual effectiveness. The contrast and contrast-reducing effects of the stimuli were expressed in terms of six candidate metrics, including SD, to discover which would give the most lawful description of the experimental data. The usefulness and generality of the SD measure were confirmed. The effects of mean luminance were also measured and a general expression that would take them into account was derived. Finally, on the basis of computational modelling in which spatial filters with properties approximating those of retinal ganglion cells were used, a possible theoretical account for the success of the SD metric is offered.
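The SD (RMS) metric the study validates is straightforward to compute for a random-dot image. Normalising by mean luminance is one common convention and an assumption here; the paper treats the effect of mean luminance separately:

```python
import numpy as np

def rms_contrast(image):
    """SD of the luminances divided by the mean luminance: the metric
    found to give the most lawful description for random-dot images.
    The division by the mean is one common convention, assumed here."""
    image = np.asarray(image, dtype=float)
    return image.std() / image.mean()

# A 50/50 random-dot image with dot luminances 0.2 and 0.8
rng = np.random.default_rng(0)
dots = rng.choice([0.2, 0.8], size=(64, 64))
c = rms_contrast(dots)
```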
Proceedings of the National Academy of Sciences of the United States of America | 2011
Steven C. Dakin; Marc S. Tibber; John A. Greenwood; Frederick A. A. Kingdom; Michael J. Morgan
There is considerable interest in how humans estimate the number of objects in a scene in the context of an extensive literature on how we estimate the density (i.e., spacing) of objects. Here, we show that our sense of number and our sense of density are intertwined. Presented with two patches, observers found it more difficult to spot differences in either density or numerosity when those patches were mismatched in overall size, and their errors were consistent with larger patches appearing both denser and more numerous. We propose that density is estimated using the relative response of mechanisms tuned to low and high spatial frequencies (SFs), because energy at high SFs is largely determined by the number of objects, whereas low SF energy depends more on the area occupied by elements. This measure is biased by overall stimulus size in the same way as human observers, and by estimating number using the same measure scaled by relative stimulus size, we can explain all of our results. This model is a simple, biologically plausible common metric for perceptual number and density.
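The proposed comparison of low- and high-SF energy can be illustrated with a toy measure. The hard frequency cutoff and the square grid elements below are illustrative stand-ins for the paper's tuned SF channels and dot stimuli:

```python
import numpy as np

def high_low_sf_ratio(image, cutoff=0.1):
    """Energy at high relative to low spatial frequencies (in cycles per
    pixel). The hard cutoff is an illustrative stand-in for the tuned
    low- and high-SF mechanisms proposed in the paper."""
    image = np.asarray(image, dtype=float)
    f = np.fft.fft2(image - image.mean())
    power = np.abs(f) ** 2
    fy = np.fft.fftfreq(image.shape[0])[:, None]
    fx = np.fft.fftfreq(image.shape[1])[None, :]
    radius = np.hypot(fy, fx)
    high = power[radius >= cutoff].sum()
    low = power[(radius > 0) & (radius < cutoff)].sum()
    return high / low

def blob_grid(n_side, blob, size=128):
    """n_side x n_side square elements of side `blob` on a regular grid."""
    img = np.zeros((size, size))
    step = size // n_side
    for i in range(n_side):
        for j in range(n_side):
            img[i * step:i * step + blob, j * step:j * step + blob] = 1.0
    return img

# Same total covered area, different numbers of elements: the patch with
# more, smaller elements carries relatively more high-SF energy.
few_large = high_low_sf_ratio(blob_grid(4, 16))    # 16 large elements
many_small = high_low_sf_ratio(blob_grid(8, 8))    # 64 small elements
```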
Visual Neuroscience | 2002
Kathy T. Mullen; Frederick A. A. Kingdom
The color vision of Old World primates and humans uses two cone-opponent systems; one differences the outputs of L and M cones forming a red-green (RG) system, and the other differences S cones with a combination of L and M cones forming a blue-yellow (BY) system. In this paper, we show that in human vision these two systems have a differential distribution across the visual field. Cone contrast sensitivities for sine-wave grating stimuli (smoothly enveloped in space and time) were measured for the two color systems (RG & BY) and the achromatic (Ach) system at a range of eccentricities in the nasal field (0-25 deg). We spatially scaled our stimuli independently for each system (RG, BY, & Ach) in order to activate that system optimally at each eccentricity. This controlled for any differential variations in spatial scale with eccentricity and provided a comparison between the three systems under equivalent conditions. We find that while red-green cone opponency has a steep decline away from the fovea, the loss in blue-yellow cone opponency is more gradual, showing a similar loss to that found for achromatic vision. Thus only red-green opponency, and not blue-yellow opponency, can be considered a foveal specialization of primate vision with an overrepresentation at the fovea. In addition, statistical calculations of the level of chance cone opponency in the two systems indicate that selective S cone connections to postreceptoral neurons are essential to maintain peripheral blue-yellow sensitivity in human vision. In the red-green system, an assumption of cone selectivity is not required to account for losses in peripheral sensitivity. Overall, these results provide behavioral evidence for functionally distinct neuro-architectural origins of the two color systems in human vision, supporting recent physiological results in primates.
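The two cone-opponent combinations can be written schematically. The unit weights below are illustrative textbook choices, not the calibrated cone contrasts used in the study:

```python
def cone_opponent(L, M, S):
    """Schematic cone-opponent combinations. The weights are
    illustrative; the study works with calibrated cone contrasts,
    not these raw differences."""
    rg = L - M                # red-green: differences L and M cones
    by = S - 0.5 * (L + M)    # blue-yellow: S vs a combination of L and M
    ach = L + M               # achromatic: summed L and M
    return rg, by, ach

# A pure luminance modulation (all cone signals rise together) leaves
# the opponent channels silent and drives only the achromatic channel.
rg, by, ach = cone_opponent(1.1, 1.1, 1.1)
```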
Vision Research | 1992
Frederick A. A. Kingdom; Bernard Moulden
A model of brightness coding is presented which is shown to predict the appearance of a number of classical brightness phenomena. The model is known as MIDAAS, which stands for Multiple Independent Descriptions Averaged Across Scale. In common with many other approaches to brightness perception MIDAAS imputes to local feature detectors a central role in the computation of brightness. It also explicitly recognises the crucial importance to brightness perception of feature detectors operating at different spatial scales. The unique and definitive feature of the model, however, is the supposition that each scale of spatial filtering operates as if to generate its own description of the pattern of brightness relationships in the image. The final percept is then provided by the composite of those individual brightness descriptions. It is shown that MIDAAS provides a good account of a variety of Mach band phenomena, the conditions under which the Missing Fundamental illusion is observed, the effect of occluding bars on the apparent contrast of step edges, the Chevreul illusion, simultaneous brightness contrast and the non-linear appearance of high contrast sinusoidal gratings. The advantages of MIDAAS over other approaches to brightness perception are discussed, as well as its current limitations.
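The MIDAAS architecture (an independent brightness description per spatial scale, then averaging) can be caricatured on a 1D luminance profile. The filters, threshold and reconstruction rule below are simplified stand-ins, not the published model:

```python
import numpy as np

def gaussian_kernel(sigma):
    """1D Gaussian kernel, truncated at 3 sigma and normalised."""
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1, dtype=float)
    k = np.exp(-x ** 2 / (2 * sigma ** 2))
    return k / k.sum()

def midaas_sketch(profile, sigmas=(1, 2, 4, 8), thresh=0.01):
    """Caricature of MIDAAS on a 1D luminance profile: each spatial
    scale thresholds its edge response, integrates it into its own
    brightness description, and the percept is the average of those
    descriptions. All parameters here are illustrative choices."""
    descriptions = []
    for sigma in sigmas:
        smoothed = np.convolve(profile, gaussian_kernel(sigma), mode='same')
        response = np.gradient(smoothed)                    # edge response at this scale
        edges = np.where(np.abs(response) > thresh, response, 0.0)
        descriptions.append(np.cumsum(edges))               # per-scale brightness description
    return np.mean(descriptions, axis=0)

# A luminance ramp between two plateaus: the scales disagree about
# where the brightness change occurs, and the percept is their average.
profile = np.concatenate([np.zeros(100), np.linspace(0, 1, 20), np.ones(100)])
brightness = midaas_sketch(profile)
```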
Vision Research | 1995
Frederick A. A. Kingdom; D.R.T. Keeble; Bernard Moulden
We have measured the sensitivity of the human visual system to sinusoidal modulations of orientation in micropattern-based textured stimuli. The result is the orientation modulation function, or OMF, which describes this sensitivity as a function of the spatial frequency of orientation modulation. We found that the OMF was bandpass with peak sensitivity at spatial frequencies ranging between 0.06 and 0.2 c/deg, depending on the size of the micropatterns. The OMF was found to be scale invariant, that is, its position on the spatial frequency axis did not change with viewing distance when spatial frequency was measured in object rather than retinal units. This scale invariance was shown to result from the visual system taking into account the scale rather than the density of the micropatterns as viewing distance was changed. It has been argued by Bergen [(1991) Vision and visual dysfunction (Vol. 10B) New York: Macmillan] that scale invariance in textures is a consequence of the coupling of mechanisms which detect textural features with those which detect local luminance contrasts. We reasoned that Gabor micropattern textures might therefore show narrower OMFs compared to line micropattern textures. However, we found no difference in OMF bandwidth between the Gabor and line micropattern textures, suggesting that the line micropatterns were acting as selectively as the Gabor micropatterns for the spatial scale of the mechanisms which detected the orientation modulation. Evidence is presented which suggests that the mechanisms which detected the orientation modulation in our stimuli are non-linear. Finally we showed similar OMFs for sine-wave and square-wave modulations of micropattern orientation, and similar OMFs for modulations of micropattern orientation about the horizontal and about the vertical, the direction of modulation in both cases being horizontal. The implications of these findings for the mechanisms involved in orientation-defined texture processing are discussed.
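The stimulus manipulation, a sinusoidal modulation of micropattern orientation across position, can be sketched as follows. The units and parameter names are illustrative, not those of the experiments:

```python
import numpy as np

def micropattern_orientations(n, mod_freq, mod_amp_deg, base_deg=0.0):
    """Orientations (in degrees) for a row of n micropatterns whose
    orientation varies sinusoidally with position, as in an OMF-style
    stimulus. Positions run over one patch row; mod_freq is in cycles
    per row. All names and units are illustrative assumptions."""
    x = np.arange(n) / n    # micropattern positions, 0..1 across the row
    return base_deg + mod_amp_deg * np.sin(2 * np.pi * mod_freq * x)

# Two modulation cycles across 32 micropatterns, +/-15 deg about horizontal
oris = micropattern_orientations(32, mod_freq=2, mod_amp_deg=15)
```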
Spatial Vision | 1988
Frederick A. A. Kingdom; Bernard Moulden
This paper presents a summary of experimental findings, theoretical models and unresolved issues regarding border effects on brightness, of which the Cornsweet illusion (Cornsweet, 1970, Visual Perception, New York: Academic Press) is the best-known example. It is argued that no current theoretical model completely accounts for the wide variety of effects described. Contrast sensitivity function (CSF) models can explain many low-contrast, but not high-contrast, border effects. Lightness integration models based on Land and McCann's retinex theory (Land and McCann, 1971. J. Opt. Soc. Am. 61, pp. 1-11) have the advantage over CSF models in that they predict transitivity of border effects where they are found to occur. However, they fail to predict the appearance of a variety of Cornsweet-like figures, have never been tested with relatively high contrast versions of those figures, and have only been implemented by qualitative demonstration. It is argued that edge-detector models are potentially the most promising theoretical candidates but, as with lightness-integration models, they have invariably relied on qualitative demonstrations and have only dealt with low-contrast border effects. A computational edge-detector model which predicts the appearance of both high and low contrast Cornsweet figures is proposed and its advantages over other models, as well as its current limitations, are discussed. The final section discusses the neural locus for border effects in brightness.
Vision Research | 2008
Frederick A. A. Kingdom
Humans rarely confuse variations in light intensity, such as shadows, shading, light sources and specular reflections, with variations in material properties, such as albedo or pigment. This review explores the cues, or regularities in the visual world, that evidence suggests vision exploits to discriminate light from material. These cues include luminance relations, figural relations, 3D-shape, depth, colour, texture, and motion. On the basis of an examination of the cues together with the behavioural evidence that they are used by vision, I propose a set of heuristics that may guide vision in the task of distinguishing between light and material. I argue that while there is evidence for the use of these heuristics, little is known about their relative importance and the manner in which they are combined in naturalistic situations where there are multiple cues as to what is light and what is material. Finally, I discuss two theoretical frameworks, the generic view principle and Bayesian estimation, that are beginning to help us understand the visual processes involved in distinguishing between light and material.
Vision Research | 2000
Stéphane Rainville; Frederick A. A. Kingdom
We investigated human sensitivity to vertical mirror symmetry in noise patterns filtered for narrow bands of variable orientations. Sensitivity is defined here as the amount of spatial phase randomization corresponding to 75% correct performance in a 2AFC detection task. In Experiment 1, sensitivity was found to be high for test patterns of all orientations except those parallel to the axis of symmetry. This implies that corresponding mirror-orientations (e.g. -45 and +45 degrees) are combined prior to symmetry detection. In Experiment 2, observers detected symmetry in tests of variable orientation in the presence of either non-symmetric or symmetric masks filtered for orientations either parallel or perpendicular to the axis. Observers were found to be primarily affected by masks of the same orientation as the test, thus suggesting that symmetry is computed separately in distinct mirror-orientation channels. In Experiment 3, observers detected a symmetric test of variable height and width embedded in random noise. Data revealed that mirror symmetry is computed over a spatial integration region (IR) that remains approximately constant in area but whose height-to-width aspect ratio changes from 20:1 to 2:1 as orientation is varied from parallel to perpendicular to the axis. We compare human data against that of an ideal observer to identify key factors that limit visual performance and discuss the implications for the functional architecture of symmetry perception. We also propose a multi-channel model of symmetry detection that combines the output of oriented spatial filters in a simple and physiologically plausible manner. Particular emphasis is placed on the notion that changes in the shape of the IR with orientation compensate for changes in information density and partially equate performance across orientations.
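The phase-randomization manipulation and a crude symmetry measure can be sketched as follows. The linear phase blend and the half-image correlation index are illustrative simplifications, not the paper's procedures:

```python
import numpy as np

def phase_scramble(img, amount, rng):
    """Blend each Fourier component's phase toward a random phase.
    'amount' in [0, 1]: 0 leaves the image intact, 1 fully randomizes.
    This linear blend (and taking the real part of the inverse FFT)
    is an illustrative stand-in for the paper's randomization."""
    f = np.fft.fft2(img)
    rand_phase = rng.uniform(-np.pi, np.pi, f.shape)
    new_phase = (1 - amount) * np.angle(f) + amount * rand_phase
    return np.fft.ifft2(np.abs(f) * np.exp(1j * new_phase)).real

def symmetry_index(img):
    """Correlation between the left half and the mirrored right half:
    a crude stand-in for a symmetry-detection mechanism."""
    w = img.shape[1] // 2
    left, right = img[:, :w], img[:, -w:][:, ::-1]
    return np.corrcoef(left.ravel(), right.ravel())[0, 1]

rng = np.random.default_rng(0)
noise = rng.standard_normal((64, 64))
sym = (noise + noise[:, ::-1]) / 2      # vertically mirror-symmetric noise
s0 = symmetry_index(sym)                # intact pattern: perfect symmetry
s1 = symmetry_index(phase_scramble(sym, 0.8, rng))  # scrambled: degraded
```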
Vision Research | 2007
Elena Gheorghiu; Frederick A. A. Kingdom
The shape-frequency and shape-amplitude after-effects, or SFAE and SAAE, refer respectively to the shifts observed in the perceived shape-frequency and shape-amplitude of a sinusoidal test contour following adaptation to a similar-shaped contour. As with other shape after-effects the shifts are in a direction away from that of the adapting stimulus. Using a variety of procedures we tested whether the spatial feature that was adapted in the SFAE and SAAE was (a) local orientation, (b) average unsigned curvature, (c) periodicity/density, (d) shape-amplitude and (e) local curvature. Our results suggest that the last of these, local curvature, underlies both the SFAE and SAAE. The evidence in favour of local curvature was that the after-effect reached its maximum value when just half-a-cycle of the test contour, in +/-cosine phase, was present. We suggest that the SFAE and SAAE are mediated by intermediate-level mechanisms that encode the shapes of contour fragments with constant sign of curvature. Given the neurophysiological evidence that neurons in area V4 encode parts of shapes with constant sign of curvature, we suggest V4 is the likely neural substrate for both the SFAE and SAAE.
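The adapt and test stimuli are sinusoidal contours defined by a shape-frequency and a shape-amplitude. A minimal sketch, with units (e.g. degrees of visual angle) omitted and all parameter values invented for illustration:

```python
import numpy as np

def sine_contour(shape_freq, shape_amp, n=256, phase=0.0):
    """A sinusoidal contour like the SFAE/SAAE stimuli: vertical
    position as a sinusoidal function of horizontal position.
    Units and sampling are illustrative choices."""
    x = np.linspace(0, 1, n)
    y = shape_amp * np.sin(2 * np.pi * shape_freq * x + phase)
    return x, y

# Adapt/test pairs differ in shape-frequency (SFAE) or shape-amplitude (SAAE)
x, adapt = sine_contour(shape_freq=3, shape_amp=0.5)
x, test = sine_contour(shape_freq=2, shape_amp=0.5)
```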