Publications


Featured research published by P. George Lovell.


Journal of Vision | 2009

Search for gross illumination discrepancies in images of natural objects

P. George Lovell; Iain D. Gilchrist; David J. Tolhurst; Tom Troscianko

Shadows may be discounted in human visual perception because they do not provide stable, lighting-invariant information about the properties of objects in the environment. Using visual search, R. A. Rensink and P. Cavanagh (2004) found that search for an upright discrepant shadow was less efficient than for an inverted one. Here we replicate and extend this work using photographs of real objects (pebbles) and their shadows. The orientation of the target shadows was varied between 30 and 180 degrees. Stimuli were presented upright (light from above, the usual situation in the world) or inverted (light from below, unnatural lighting). Reaction times (RTs) for upright images were slower for shadows angled at 30 degrees, exactly as Rensink and Cavanagh found. However, for all other shadow angles tested, RTs were faster for upright images. This suggests, for small discrepancies in shadow orientation, a switch of processing from a relatively coarse-scaled shadow system to other general-purpose visual routines. Manipulations of the visual heterogeneity of the pebbles that cast the shadows differentially influenced performance. For inverted images, heterogeneity had the expected influence: it reduced search efficiency and increased overall search time. This effect was greatly reduced when images were presented upright, presumably because the distractors were then processed as shadows. We suggest that shadows may be processed by a functionally separate, spatially coarse mechanism. The pattern of results suggests that human vision does not use a shadow-suppressing system in search tasks.


ACM Transactions on Applied Perception | 2006

Evaluation of a multiscale color model for visual difference prediction

P. George Lovell; C. Alejandro Parraga; Tom Troscianko; Caterina Ripamonti; David J. Tolhurst

How different are two images when viewed by a human observer? There is a class of computational models that attempt to predict perceived differences between subtly different images. These models are derived from theoretical considerations of human vision and are mostly validated by psychophysical experiments on simple stimuli such as sinusoidal gratings. We are developing a model of visual difference prediction, based on multiscale analysis of local contrast, to be tested with psychophysical discrimination experiments on natural-scene stimuli. Here, we extend our model to account for differences in the chromatic domain by modeling differences in the luminance domain and in two opponent chromatic domains. We describe psychophysical measurements of objective (discrimination thresholds) and subjective (magnitude estimation) perceptual differences between visual stimuli derived from colored photographs of natural scenes. We use one set of psychophysical data to determine the best parameters for the model and then determine the extent to which the model generalizes to other experimental data. In particular, we show that the cues from different spatial scales and from the separate luminance and chromatic channels contribute roughly equally to discrimination, and that these several cues are combined in a relatively straightforward manner. In general, the model provides good predictions of both threshold and suprathreshold image differences arising from a wide variety of geometrical and optical manipulations. This implies that models of this class can be generally useful in specifying how different two similar images will look to human observers.
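
The abstract notes that cues from different spatial scales and channels are "combined in a relatively straightforward manner"; a later paper in this list (To, Lovell, Troscianko and Tolhurst, Journal of Vision 2010) names Minkowski summation as a candidate combination rule. The following Python sketch illustrates that pooling idea on a single luminance channel. It is not the authors' implementation: the difference-of-Gaussians decomposition, the per-scale RMS cue, and the exponent m = 2 are all assumptions made here for illustration.

import numpy as np

def gaussian_blur(img, sigma):
    # Separable Gaussian blur: a cheap stand-in for a multiscale filter bank.
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    kernel = np.exp(-x**2 / (2 * sigma**2))
    kernel /= kernel.sum()
    out = np.apply_along_axis(np.convolve, 0, img, kernel, mode="same")
    return np.apply_along_axis(np.convolve, 1, out, kernel, mode="same")

def local_contrast_cues(img, n_scales=4):
    # Difference-of-Gaussians responses at successively coarser scales:
    # a rough proxy for multiscale analysis of local contrast.
    cues, finer = [], img.astype(float)
    for s in range(n_scales):
        coarser = gaussian_blur(img.astype(float), 2.0 ** (s + 1))
        cues.append(finer - coarser)
        finer = coarser
    return cues

def predicted_difference(img_a, img_b, m=2.0):
    # Pool per-scale cue differences with a Minkowski sum of exponent m.
    per_scale = [np.sqrt(np.mean((a - b) ** 2))
                 for a, b in zip(local_contrast_cues(img_a),
                                 local_contrast_cues(img_b))]
    return float(np.sum(np.array(per_scale) ** m) ** (1.0 / m))

With m = 2 the pooling is a Euclidean norm over scales; larger exponents approach winner-take-all. Fitting m and the filter parameters to threshold and rating data is the kind of parameter estimation the abstract describes.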


Applied Perception in Graphics and Visualization | 2005

A multiresolution color model for visual difference prediction

David J. Tolhurst; Caterina Ripamonti; C. Alejandro Parraga; P. George Lovell; Tom Troscianko

How different are two images when viewed by a human observer? Such knowledge is needed in many situations, including when one must judge how closely a graphics representation resembles a high-quality photograph of the original scene. There is a class of computational models that attempt to predict such perceived differences. These models are derived from theoretical considerations of human vision and are mostly validated by experiments on simple stimuli such as sinusoidal gratings. We are developing a model of visual difference prediction based on multiscale analysis of local contrast, to be tested with psychophysical discrimination experiments on natural-scene stimuli. Here, we extend our model to account for differences in the chromatic domain. We describe the model, how it was derived, and how we attempt to validate it psychophysically for monochrome and chromatic images.


Seeing and Perceiving | 2010

Magnitude of perceived change in natural images may be linearly proportional to differences in neuronal firing rates

David J. Tolhurst; Michelle To; Mazviita Chirimuuta; Tom Troscianko; Pei-Ying Chua; P. George Lovell

We are studying how people perceive naturalistic suprathreshold changes in the colour, size, shape or location of items in images of natural scenes, using magnitude estimation ratings to characterise the sizes of the perceived changes in coloured photographs. We have implemented a computational model that tries to explain observers' ratings of these naturalistic differences between image pairs. We model the action-potential firing rates of millions of neurons, with linear and non-linear summation behaviour closely modelled on real V1 neurons. The numerical parameters of the model's sigmoidal transducer function are set by optimising the same model against experiments on contrast discrimination (contrast dippers) on monochrome photographs of natural scenes. The model, optimised on a stimulus-intensity domain in an experiment reminiscent of the Weber-Fechner relation, then produces tolerable predictions of the ratings for most kinds of naturalistic image change. Importantly, rating rises roughly linearly with the model's numerical output, which represents differences in neuronal firing rate in response to the two images under comparison; this implies that rating is proportional to the neuronal response.
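
As a rough illustration of the transducer and read-out stages described above (a minimal sketch under assumed parameters, not the authors' published code): a Naka-Rushton-style sigmoid maps local contrast to a firing rate, and the predicted rating is a linear function of the pooled absolute firing-rate differences between the two images, mirroring the reported linear relation between rating and model output. The functional form and every parameter value (r_max, c50, n, gain) are invented here; in the paper the transducer parameters were fitted to contrast-dipper data.

import numpy as np

def transducer(contrast, r_max=50.0, c50=0.2, n=2.5):
    # Sigmoidal contrast-response function (Naka-Rushton form);
    # all parameter values are illustrative assumptions.
    c = np.abs(np.asarray(contrast, dtype=float))
    return r_max * c**n / (c**n + c50**n)

def predicted_rating(contrasts_a, contrasts_b, gain=0.1):
    # Read the rating out linearly from pooled absolute differences in
    # modelled firing rate between the two images under comparison.
    diff = transducer(contrasts_a) - transducer(contrasts_b)
    return gain * float(np.sum(np.abs(diff)))

For example, calling predicted_rating on two arrays of local band-limited contrasts returns a number that, on the paper's account, should scale roughly linearly with observers' magnitude-estimation ratings.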


Alzheimer's & Dementia | 2009

Visual impairment in posterior cortical atrophy and dementia with Lewy bodies

Claudia Metzler-Baddeley; Roland Baddeley; P. George Lovell; Roy W. Jones

Background: Posterior Cortical Atrophy (PCA) is a rare neurodegenerative disease, associated with Alzheimer's disease pathology, that presents with progressive deterioration in visual perception. Patients with PCA present with complex visual impairments such as Balint syndrome, Gerstmann syndrome, simultanagnosia, neglect and topographical disorientation. Dementia with Lewy bodies (DLB) is, like PCA, characterised by profound impairments in visual perception. However, little is known about the mechanisms underlying the complex visual deficits associated with the two conditions. Methods: To investigate the nature of these visual impairments, the present study compared PCA patients, DLB patients, and healthy control participants on visual tasks designed to measure the efficiency of the visual system at different levels of processing: in ascending order of complexity, visual acuity, line orientation, contour integration, and rotated object comparison. Results: DLB patients did not differ from controls in visual acuity or line orientation, but were impaired in contour integration and object comparison. PCA patients were impaired on all tasks. Conclusions: In PCA all processing stages were affected, whereas DLB was associated only with deficits in contour integration and object comparison. We conclude that low-level impairments, affecting processing stages before the dorsal-ventral distinction, contribute to the visual deficits in PCA. In addition, our results suggest that deficits in feature integration may contribute to the object recognition impairments in DLB.


Book chapter | 2011

Animal Camouflage: Camouflage and visual perception

Tom Troscianko; Christopher P. Benton; P. George Lovell; David J. Tolhurst; Zygmunt Pizlo


Journal of Vision | 2010

Minkowski summation of cues in complex visual discriminations using natural scene stimuli

Michelle To; P. George Lovell; Tom Troscianko; David J. Tolhurst


Journal of Vision | 2010

Predicting search efficiency with a low-level visual difference model

P. George Lovell; Iain D. Gilchrist; David J. Tolhurst; Michelle To; Tom Troscianko


Journal of Vision | 2010

Crowding effects in central and peripheral vision when viewing natural scenes

Michelle To; Iain D. Gilchrist; Tom Troscianko; P. George Lovell; David J. Tolhurst


Journal of Vision | 2010

Rapid search for gross illumination discrepancies in upright but not inverted images

P. George Lovell; David J. Tolhurst; Michelle To; Tom Troscianko

Collaboration


Dive into P. George Lovell's collaborations.

Top Co-Authors


Michelle To

University of Cambridge


C. Alejandro Parraga

Autonomous University of Barcelona