Publication


Featured research published by T Troscianko.


Journal of the Optical Society of America A: Optics, Image Science, and Vision | 1998

Color and luminance information in natural scenes

C.A. Párraga; G. Brelstaff; T Troscianko; I. R. Moorehead

The spatial filtering applied by the human visual system appears to be low pass for chromatic stimuli and band pass for luminance stimuli. Here we explore whether this observed difference in contrast sensitivity reflects a real difference in the components of chrominance and luminance in natural scenes. For this purpose a digital set of 29 hyperspectral images of natural scenes was acquired and its spatial frequency content analyzed in terms of chrominance and luminance defined according to existing models of the human cone responses and visual signal processing. The statistical 1/f amplitude spatial-frequency distribution is confirmed for a variety of chromatic conditions across the visible spectrum. Our analysis suggests that natural scenes are relatively rich in high-spatial-frequency chrominance information that does not appear to be transmitted by the human visual system. This result is unlikely to have arisen from errors in the original measurements. Several reasons may combine to explain a failure to transmit high-spatial-frequency chrominance: (a) its minor importance for primate visual tasks, (b) its removal by filtering applied to compensate for chromatic aberration of the eye's optics, and (c) a biological bottleneck blocking its transmission. In addition, we graphically compare the ratios of luminance to chrominance measured by our hyperspectral camera and those measured psychophysically over an equivalent spatial-frequency range.
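The 1/f amplitude spatial-frequency distribution mentioned above can be illustrated with a short numerical sketch. This is a hypothetical NumPy example, not the authors' analysis pipeline: it synthesizes a 1/f image and recovers the expected log-log slope of roughly -1 from a radially averaged amplitude spectrum.

```python
import numpy as np

def radial_amplitude_spectrum(img):
    """Radially averaged amplitude spectrum of a 2-D image."""
    f = np.fft.fftshift(np.fft.fft2(img - img.mean()))
    amp = np.abs(f)
    n = img.shape[0]
    y, x = np.indices(img.shape)
    r = np.hypot(x - n // 2, y - n // 2).astype(int)
    # Mean amplitude within each integer radial-frequency bin
    sums = np.bincount(r.ravel(), weights=amp.ravel())
    counts = np.bincount(r.ravel())
    return sums / np.maximum(counts, 1)

# Synthesize a 1/f image by shaping white noise in the frequency domain
rng = np.random.default_rng(0)
n = 256
fy = np.fft.fftfreq(n)[:, None]
fx = np.fft.fftfreq(n)[None, :]
freq = np.hypot(fy, fx)
freq[0, 0] = 1.0                       # avoid divide-by-zero at DC
spectrum = np.fft.fft2(rng.standard_normal((n, n))) / freq
img = np.real(np.fft.ifft2(spectrum))

amp = radial_amplitude_spectrum(img)
lo, hi = 4, 64                         # fit over mid frequencies only
slope = np.polyfit(np.log(np.arange(lo, hi)), np.log(amp[lo:hi]), 1)[0]
print(f"log-log slope = {slope:.2f}")  # close to -1 for a 1/f image
```

The slope estimate is taken over mid frequencies because the lowest bins contain few Fourier coefficients and the highest approach the Nyquist limit.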


Nature Neuroscience | 1998

Complete sparing of high-contrast color input to motion perception in cortical color blindness.

Patrick Cavanagh; Marie-Anne Hénaff; François Michel; Theodor Landis; T Troscianko; James Intriligator

It is widely held that color and motion are processed by separate parallel pathways in the visual system, but this view is difficult to reconcile with the fact that motion can be detected in equiluminant stimuli that are defined by color alone. To examine the relationship between color and motion, we tested three patients who had lost their color vision following cortical damage (central achromatopsia). Despite their profound loss in the subjective experience of color and their inability to detect the motion of faint colors, all three subjects showed surprisingly strong responses to high-contrast, moving color stimuli — equal in all respects to the performance of subjects with normal color vision. The pathway from opponent-color detectors in the retina to the motion analysis areas must therefore be independent of the damaged color centers in the occipitotemporal area. It is probably also independent of the motion analysis area MT/V5, because the contribution of color to motion detection in these patients is much stronger than the color response of monkey area MT.


Frontiers in Human Neuroscience | 2011

Beyond Correlation: Do Color Features Influence Attention in Rainforest?

Hans-Peter Frey; Kerstin Tanja Wirz; Verena Willenbockel; Torsten Betz; Cornell Schreiber; T Troscianko; Peter König

Recent research indicates a direct relationship between low-level color features and visual attention under natural conditions. However, the design of these studies allows only correlational observations and no inference about mechanisms. Here we go a step further to examine the nature of the influence of color features on overt attention in an environment in which trichromatic color vision is advantageous. We recorded eye-movements of color-normal and deuteranope human participants freely viewing original and modified rainforest images. Eliminating red–green color information dramatically alters fixation behavior in color-normal participants. Changes in feature correlations and variability over subjects and conditions provide evidence for a causal effect of red–green color-contrast. The effects of blue–yellow contrast are much smaller. However, globally rotating hue in color space in these images reveals a mechanism analyzing color-contrast invariant of a specific axis in color space. Surprisingly, in deuteranope participants we find significantly elevated red–green contrast at fixation points, comparable to color-normal participants. Temporal analysis indicates that this is due to compensatory mechanisms acting on a slower time scale. Taken together, our results suggest that under natural conditions red–green color information contributes to overt attention at a low-level (bottom-up). Nevertheless, the results of the image modifications and deuteranope participants indicate that evaluation of color information is done in a hue-invariant fashion.
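The red-green contrast-at-fixation feature analysed above can be sketched in a simplified form. This is an illustrative stand-in, not the study's actual feature computation: the opponent signal (R minus G), the patch size, and the fixation coordinates are all assumptions introduced here.

```python
import numpy as np

def red_green_contrast(img_rgb, fix_y, fix_x, radius=16):
    """RMS red-green opponent contrast in a square patch around one
    fixation (illustrative: real studies use calibrated opponent axes)."""
    patch = img_rgb[max(0, fix_y - radius):fix_y + radius,
                    max(0, fix_x - radius):fix_x + radius]
    rg = patch[..., 0] - patch[..., 1]   # crude red-green opponent signal
    return rg.std()

rng = np.random.default_rng(4)
image = rng.random((256, 256, 3))                # stand-in RGB image
fixations = [(40, 60), (128, 128), (200, 30)]    # hypothetical (y, x) samples
contrasts = [red_green_contrast(image, y, x) for y, x in fixations]
print([round(c, 3) for c in contrasts])
```

Comparing such per-fixation values against values at control locations is one common way to test whether a feature is elevated at fixated points.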


Geographic Information Systems, Photogrammetry, and Geological/Geophysical Remote Sensing | 1995

Hyperspectral camera system: acquisition and analysis

Gavin J. Brelstaff; Alejandro Parraga; T Troscianko; Derek Carr

A low-cost, portable, video-camera system built by the University of Bristol for the UK-DRA, RARDE Fort Halstead, permits in-field acquisition of terrestrial hyper-spectral image sets. Each set is captured as a sequence of thirty-one images through a set of different interference filters which span the visible spectrum at 10 nm intervals, effectively providing a spectrogram of 256 by 256 pixels. The system is customized from off-the-shelf components. A database of twenty-nine hyper-spectral image sets was acquired and analyzed as a sample of the natural environment. We report the manifest information capacity with respect to spatial and optical frequency, drawing implications for the management of hyper-spectral data and for visual processing.
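The acquisition format described above (thirty-one 256-by-256 narrow-band images spanning the visible spectrum at 10 nm intervals) suggests a simple in-memory layout. The sketch below is an assumption-laden illustration, not the system's actual software: the data are random stand-ins and the photopic weighting is a toy Gaussian stand-in for the CIE luminosity function.

```python
import numpy as np

# Hypothetical container for one hyper-spectral set as described above:
# 31 narrow-band images (400-700 nm in 10 nm steps), each 256 x 256.
wavelengths = np.arange(400, 701, 10)                    # nm, 31 bands
cube = np.random.default_rng(1).random((31, 256, 256))   # stand-in data

def pixel_spectrum(cube, row, col):
    """Per-pixel 'spectrogram': intensity at each of the 31 wavelengths."""
    return cube[:, row, col]

# Crude luminance estimate: weight bands by a toy photopic sensitivity
# curve peaking near 555 nm (a Gaussian stand-in for CIE V(lambda)).
v_lambda = np.exp(-((wavelengths - 555.0) ** 2) / (2 * 50.0 ** 2))
luminance = np.tensordot(v_lambda / v_lambda.sum(), cube, axes=1)

print(pixel_spectrum(cube, 0, 0).shape)   # (31,)
print(luminance.shape)                    # (256, 256)
```

Storing the set band-first, as here, makes both per-pixel spectra and band-weighted projections cheap slicing operations.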


SPIE/IS&T 1992 Symposium on Electronic Imaging: Science and Technology | 1992

Information content of natural scenes: implications for neural coding of color and luminance

Gavin J. Brelstaff; T Troscianko

We present a spatial frequency analysis of the luminance and chrominance images derived from 20 scenes representative of natural terrain. Our results weakly support the claim that the human visual system has access to spatial frequencies of luminance and chrominance in relative proportion to their occurrence in natural scenes. The weak effect that we found may be due to the limited gamut of colors present in the scenes.


Proceedings of SPIE, the International Society for Optical Engineering | 2006

Use of a vision model to quantify the significance of factors affecting target conspicuity

M. A. Gilmore; C. K. Jones; A. W. Haynes; David J. Tolhurst; Michelle To; T Troscianko; Paul G. Lovell; C.A. Párraga; K. Pickavance

When designing camouflage it is important to understand how the human visual system processes the information to discriminate the target from the background scene. A vision model has been developed to compare two images and detect differences in local contrast in each spatial frequency channel. Observer experiments are being undertaken to validate this vision model so that the model can be used to quantify the relative significance of different factors affecting target conspicuity. Synthetic imagery can be used to design improved camouflage systems. The vision model is being used to compare different synthetic images to understand what features in the image are important to reproduce accurately and to identify the optimum way to render synthetic imagery for camouflage effectiveness assessment. This paper will describe the vision model and summarise the results obtained from the initial validation tests. The paper will also show how the model is being used to compare different synthetic images and discuss future work plans.
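The core operation described above, comparing two images channel by channel for differences in contrast, can be sketched minimally. The octave-spaced frequency channels and RMS contrast measure below are illustrative assumptions; the actual model's channel definitions and metric are not specified in the abstract.

```python
import numpy as np

def bandpass(img, f_lo, f_hi):
    """Keep spatial frequencies (cycles/image) in [f_lo, f_hi)."""
    f = np.fft.fft2(img)
    fy = np.fft.fftfreq(img.shape[0])[:, None] * img.shape[0]
    fx = np.fft.fftfreq(img.shape[1])[None, :] * img.shape[1]
    r = np.hypot(fy, fx)
    mask = (r >= f_lo) & (r < f_hi)
    return np.real(np.fft.ifft2(f * mask))

def channel_contrast_difference(img_a, img_b, edges=(1, 2, 4, 8, 16, 32)):
    """Per-channel difference in RMS contrast between two images
    (illustrative octave-spaced channels, not the model's own)."""
    mean_a, mean_b = img_a.mean(), img_b.mean()
    diffs = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        c_a = bandpass(img_a, lo, hi).std() / mean_a   # RMS contrast in band
        c_b = bandpass(img_b, lo, hi).std() / mean_b
        diffs.append(abs(c_a - c_b))
    return diffs

rng = np.random.default_rng(2)
target = rng.random((128, 128)) + 0.5     # stand-in target image
background = rng.random((128, 128)) + 0.5 # stand-in background image
print([round(d, 4) for d in channel_contrast_difference(target, background)])
```

Identical inputs yield zero difference in every channel, so the measure behaves as a per-channel discrepancy score.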


Targets and Backgrounds VI: Characterization, Visualization, and the Detection Process | 2000

Assessment of synthetic image fidelity

Kevin D. Mitchell; Ian R. Moorhead; Marilyn A. Gilmore; Graham H. Watson; Mitch Thomson; T. Yates; T Troscianko; David J. Tolhurst

Computer generated imagery is increasingly used for a wide variety of purposes ranging from computer games to flight simulators to camouflage and sensor assessment. The fidelity required for this imagery is dependent on the anticipated use - for example, when used for camouflage design it must be physically correct both spectrally and spatially. The rendering techniques used will also depend upon the waveband being simulated, the spatial resolution of the sensor and the required frame rate. Rendering of natural outdoor scenes is particularly demanding, because of the statistical variation in materials and illumination, atmospheric effects and the complex geometric structures of objects such as trees. The accuracy of the simulated imagery has tended to be assessed subjectively in the past. First and second order statistics do not capture many of the essential characteristics of natural scenes. Direct pixel comparison would impose an unachievable demand on the synthetic imagery. For many applications, such as camouflage design, it is important that any metrics used work in both visible and infrared wavebands. We are investigating a variety of different methods of comparing real and synthetic imagery and comparing synthetic imagery rendered to different levels of fidelity. These techniques will include neural networks (ICA), higher order statistics and models of human contrast perception. This paper will present an overview of the analyses we have carried out and some initial results along with some preliminary conclusions regarding the fidelity of synthetic imagery.
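Two higher-order statistics such a comparison might draw on, skewness and excess kurtosis, can be sketched as follows. The images here are random stand-ins, and these two measures are an illustrative subset, not the paper's actual battery of techniques.

```python
import numpy as np

def higher_order_stats(img):
    """Skewness and excess kurtosis of pixel intensities; a Gaussian
    image scores near (0, 0) on both."""
    x = img.ravel().astype(float)
    z = (x - x.mean()) / x.std()
    skew = np.mean(z ** 3)
    kurt = np.mean(z ** 4) - 3.0   # excess kurtosis (Fisher convention)
    return skew, kurt

rng = np.random.default_rng(3)
real = rng.normal(0.5, 0.1, (256, 256))        # stand-in for a real image
synthetic = rng.uniform(0.0, 1.0, (256, 256))  # stand-in for rendered output
for name, im in [("real", real), ("synthetic", synthetic)]:
    s, k = higher_order_stats(im)
    print(f"{name}: skew={s:+.3f}, excess kurtosis={k:+.3f}")
```

A large gap between the two images' statistics flags a distributional mismatch that first and second order moments alone would miss.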


Journal of the Optical Society of America A: Optics, Image Science, and Vision | 2005

Stability of the color-opponent signals under changes of illuminant in natural scenes.

Paul G. Lovell; David J. Tolhurst; C.A. Párraga; Roland Baddeley; Ute Leonards; Jolyon Troscianko; T Troscianko


Vision Research | 2011

Discrimination of natural scenes in central and peripheral vision

Michelle To; Iain D. Gilchrist; T Troscianko; David J. Tolhurst


Proceedings of the Royal Society of London B: Biological Sciences | 2008

Summation of perceptual cues in natural visual scenes.

Michelle To; Paul G. Lovell; T Troscianko; David J. Tolhurst

Collaboration


Dive into T Troscianko's collaborations.

Top Co-Authors

Michelle To

University of Cambridge
