Publication


Featured research published by Tom Troscianko.


Nature | 2005

Disruptive coloration and background pattern matching

Innes C. Cuthill; Martin Stevens; J. Sheppard; T. Maddocks; C. A. Párraga; Tom Troscianko

Effective camouflage renders a target indistinguishable from irrelevant background objects. Two interrelated but logically distinct mechanisms for this are background pattern matching (crypsis) and disruptive coloration: in the former, the animal's colours are a random sample of the background; in the latter, bold contrasting colours on the animal's periphery break up its outline. The latter has long been proposed as an explanation for some apparently conspicuous coloration in animals, and is standard textbook material. Surprisingly, only one quantitative test of the theory exists, and only one experimental test of its effectiveness against non-human predators. Here we test two key predictions: that patterns on the body's outline should be particularly effective in promoting concealment, and that highly contrasting colours should enhance this disruptive effect. Artificial moth-like targets were exposed to bird predation in the field, with the experimental colour patterns on the ‘wings’ and a dead mealworm as the edible ‘body’. Survival analysis supported the predictions, indicating that disruptive coloration is an effective means of camouflage, above and beyond background pattern matching.
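
The abstract reports that survival analysis of the moth-like targets supported the predictions. A minimal Kaplan-Meier survival estimator sketches how survival under predation can be compared between treatments; the durations and treatment names below are invented for illustration and are not the paper's data or analysis.

```python
# Minimal Kaplan-Meier survival estimator (illustrative sketch only;
# the durations below are invented, not the paper's data).

def kaplan_meier(durations, observed):
    """Return (event_times, survival_probabilities).
    durations: time until the target was eaten or the trial ended.
    observed: True if predation was observed, False if censored.
    """
    times = sorted(set(d for d, o in zip(durations, observed) if o))
    surv, probs = 1.0, []
    for t in times:
        at_risk = sum(1 for d in durations if d >= t)
        events = sum(1 for d, o in zip(durations, observed) if d == t and o)
        surv *= 1.0 - events / at_risk   # product-limit update
        probs.append(surv)
    return times, probs

# Hypothetical check times (days) for two artificial-moth treatments.
disruptive = ([1, 2, 3, 3, 4, 5, 5, 5],
              [True, True, False, True, False, False, False, False])
cryptic = ([1, 1, 2, 2, 2, 3, 3, 4],
           [True, True, True, True, False, True, True, False])

_, s_disruptive = kaplan_meier(*disruptive)
_, s_cryptic = kaplan_meier(*cryptic)
# Disruptively coloured targets survive longer in this toy data set.
print(s_disruptive[-1], s_cryptic[-1])
```

In a real field experiment the curves would be compared with a formal test (e.g. log-rank or a proportional-hazards model) rather than by eye.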


Neuroreport | 1999

Mismatch negativity in the visual modality

Andrea Tales; Philip Newton; Tom Troscianko; Stuart Butler

In the auditory system, the automatic detection of stimulus change provides a mechanism for switching attention to biologically significant events. It gives rise to the mismatch negativity (MMN) event-related potential. It is unclear whether a similar mechanism exists in vision. To investigate this issue, evoked potentials were recorded to target stimuli in the centre of the visual field, and to frequent standard and infrequent deviant stimuli presented outside the focus of attention, in the peripheral field. Deviants evoked a more negative potential than standards 250-400 ms after the stimulus. The negativity, distributed over supplementary visual areas of occipital and posterior temporal cortex, was associated with the rarity of the deviants and not the physical features which distinguished them from standards. This negativity shares a number of characteristics with auditory MMN.
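
The effect described is a deviant-minus-standard difference wave measured in a 250-400 ms window. The following sketch computes such a difference wave on synthetic averaged ERPs; the waveforms, sampling rate, and effect size are invented, not the study's recordings.

```python
import numpy as np

# Sketch of an MMN-style difference-wave analysis on synthetic ERPs.
# Sampling rate and waveforms are invented for illustration.
fs = 500                        # Hz, hypothetical sampling rate
t = np.arange(0, 0.6, 1 / fs)   # 0-600 ms post-stimulus

# Synthetic averaged ERPs (microvolts): deviants carry an extra
# negativity centred near 325 ms, mimicking a visual MMN.
standard = 2.0 * np.sin(2 * np.pi * 3 * t)
deviant = standard - 1.5 * np.exp(-((t - 0.325) ** 2) / (2 * 0.04 ** 2))

difference = deviant - standard            # deviant-minus-standard wave
window = (t >= 0.25) & (t <= 0.40)         # 250-400 ms analysis window
mmn_amplitude = difference[window].mean()  # mean amplitude in the window

print(mmn_amplitude)  # negative -> MMN-like negativity in this window
```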


Quarterly Journal of Experimental Psychology | 2007

Effort during visual search and counting: Insights from pupillometry

Gillian Porter; Tom Troscianko; Iain D. Gilchrist

We investigated the processing effort during visual search and counting tasks using a pupil dilation measure. Search difficulty was manipulated by varying the number of distractors as well as the heterogeneity of the distractors. More difficult visual search resulted in more pupil dilation than did less difficult search. These results confirm a link between effort and increased pupil dilation. The pupil dilated more during the counting task than during target-absent search, even though the displays were identical, and the two tasks were matched for reaction time. The moment-to-moment dilation pattern during search suggests little effort in the early stages, but increasingly more effort towards response, whereas the counting task involved an increased initial effort, which was sustained throughout the trial. These patterns can be interpreted in terms of the differential memory load for item locations in each task. In an additional experiment, increasing the spatial memory requirements of the search evoked a corresponding increase in pupil dilation. These results support the view that search tasks involve some, but limited, memory for item locations, and the effort associated with this memory load increases during the trials. In contrast, counting involves a heavy locational memory component from the start.
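
The dependent measure here is task-evoked pupil dilation relative to a pre-trial baseline. A minimal sketch of that baseline-correction step follows; the traces, sampling rate, and "search"/"counting" time-courses are synthetic stand-ins, not the study's data.

```python
import numpy as np

# Sketch of baseline-corrected pupil dilation, the dependent measure
# in pupillometric effort studies. Traces are invented: the
# "search-like" trial ramps up towards response, while the
# "counting-like" trial rises early and stays up.
fs = 60                      # Hz, hypothetical eye-tracker rate
t = np.arange(0, 5, 1 / fs)  # 5 s trial
rng = np.random.default_rng(0)

search = 3.0 + 0.1 * t + rng.normal(0, 0.01, t.size)
counting = 3.0 + 0.4 * (1 - np.exp(-t / 0.5)) + rng.normal(0, 0.01, t.size)

def dilation(trace, t, baseline_end=0.5):
    """Subtract the mean of a pre-trial baseline window from the trace."""
    return trace - trace[t < baseline_end].mean()

search_d = dilation(search, t)
counting_d = dilation(counting, t)

# Early-window effort: the counting-like trace dilates sooner.
early = (t >= 0.5) & (t <= 1.5)
print(counting_d[early].mean() > search_d[early].mean())
```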


Current Biology | 2000

The human visual system is optimised for processing the spatial information in natural visual images

C.A. Párraga; Tom Troscianko; David J. Tolhurst

A fundamental tenet of visual science is that the detailed properties of visual systems are not capricious accidents, but are closely matched by evolution and neonatal experience to the environments and lifestyles in which those visual systems must work. This has been shown most convincingly for fish and insects. For mammalian vision, however, this tenet is based more upon theoretical arguments than upon direct observations. Here, we describe experiments that require human observers to discriminate between pictures of slightly different faces or objects. These are produced by a morphing technique that allows small, quantifiable changes to be made in the stimulus images. The independent variable is designed to give increasing deviation from natural visual scenes, and is a measure of the Fourier composition of the image (its second-order statistics). Performance in these tests was best when the pictures had natural second-order spatial statistics, and degraded when the images were made less natural. Furthermore, performance can be explained with a simple model of contrast coding, based upon the properties of simple cells in the mammalian visual cortex. The findings thus provide direct empirical support for the notion that human spatial vision is optimised to the second-order statistics of the optical environment.
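
The "second-order statistics" manipulated here are the image's Fourier amplitude spectrum; natural scenes typically show amplitude falling roughly as 1/f with spatial frequency. A sketch of measuring that property, on a synthetic image constructed to have 1/f statistics (the image and all parameters are illustrative, not the study's stimuli):

```python
import numpy as np

# Sketch: estimate the log-log slope of an image's radially averaged
# Fourier amplitude spectrum. Natural images typically show
# amplitude ~ 1/f, i.e. a slope near -1.
rng = np.random.default_rng(1)

def amplitude_slope(image):
    """Log-log slope of radially averaged amplitude vs frequency."""
    n = image.shape[0]
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(image)))
    fy, fx = np.indices((n, n)) - n // 2
    radius = np.hypot(fx, fy).astype(int)
    freqs = np.arange(1, n // 2)
    radial = np.array([spectrum[radius == r].mean() for r in freqs])
    slope, _ = np.polyfit(np.log(freqs), np.log(radial), 1)
    return slope

# Build a synthetic "natural-like" image with a 1/f amplitude spectrum.
n = 128
fy, fx = np.indices((n, n)) - n // 2
f = np.hypot(fx, fy)
f[n // 2, n // 2] = 1.0  # avoid divide-by-zero at DC
phase = rng.uniform(0, 2 * np.pi, (n, n))
image = np.real(np.fft.ifft2(np.fft.ifftshift(np.exp(1j * phase) / f)))

slope = amplitude_slope(image)
print(slope)  # close to -1 for 1/f statistics
```

Morphing the spectrum away from this slope, as the study's independent variable does, makes the image statistics progressively less natural.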


Philosophical Transactions of the Royal Society B | 2009

Camouflage and visual perception

Tom Troscianko; Christopher P. Benton; P. George Lovell; David J. Tolhurst; Zygmunt Pizlo

How does an animal conceal itself from visual detection by other animals? This review paper seeks to identify general principles that may apply in this broad area. It considers mechanisms of visual encoding, of grouping and object encoding, and of search. In most cases, the evidence base comes from studies of humans or species whose vision approximates to that of humans. The effort is hampered by a relatively sparse literature on visual function in natural environments and with complex foraging tasks. However, some general constraints emerge as being potentially powerful principles in understanding concealment—a ‘constraint’ here means a set of simplifying assumptions. Strategies that disrupt the unambiguous encoding of discontinuities of intensity (edges), and of other key visual attributes, such as motion, are key here. Similar strategies may also defeat grouping and object-encoding mechanisms. Finally, the paper considers how we may understand the processes of search for complex targets in complex scenes. The aim is to provide a number of pointers towards issues that may assist in understanding camouflage and concealment, particularly with reference to how visual systems can detect the shape of complex, concealed objects.


Pattern Recognition | 1997

Interpreting image databases by region classification

Neill W. Campbell; W.P.J. Mackeown; Barry T. Thomas; Tom Troscianko

This paper addresses the automatic interpretation of images of outdoor scenes. The method allows instances of objects from a number of generic classes to be identified: vegetation, buildings, vehicles, roads, etc., thereby enabling image databases to be queried on scene content. The feature set is based, in part, on psychophysical principles and includes measures of colour, texture and shape. Using a large database of ground-truth labelled images, a neural network is trained as a pattern classifier. The method is demonstrated on a large test set to provide highly accurate image interpretations, with over 90% of the image area labelled correctly.


Eurographics Symposium on Rendering Techniques | 2000

Comparing Real & Synthetic Scenes using Human Judgements of Lightness

Ann McNamara; Alan Chalmers; Tom Troscianko; Iain D. Gilchrist

Increased application of computer graphics in areas which demand high levels of realism has made it necessary to examine the manner in which images are evaluated and validated. In this paper, we explore the need for including the human observer in any process which attempts to quantify the level of realism achieved by the rendering process, from measurement to display. We introduce a framework for measuring the perceptual equivalence (from a lightness perception point of view) between a real scene and a computer simulation of the same scene. Because this framework is based on psychophysical experiments, results are produced through study of vision from a human rather than a machine vision point of view. This framework can then be used to evaluate, validate and compare rendering techniques.


Journal of the Optical Society of America A: Optics, Image Science, and Vision | 1988

Why do isoluminant stimuli appear slower?

Tom Troscianko; Manfred Fahle

There is ample evidence that the perception of movement, both real and apparent, is substantially impaired at isoluminance. Models of movement perception require spatial and temporal information about the stimulus. We ask whether changes at isoluminance result from a spatial or a temporal error or uncertainty. Reaction times to three kinds of stimulus were measured: (a) temporal stimuli, such as the onset of a square in a known location; (b) spatial stimuli, a vernier displacement of two squares; and (c) spatiotemporal stimuli, moving squares either starting or stopping. The results suggest that there is relatively little effect of isoluminance on purely temporal tasks (a). Longer reaction times were, however, obtained for detecting vernier offset (b). The reaction times to moving stimuli (c) were also slower at isoluminance to an extent that implies that perceived velocity at isoluminance is approximately 30% less than that seen at 8% contrast. The slowing of reaction times at isoluminance could be mimicked by adding random positional jitter to a nonisoluminant moving stimulus and also by presenting a low-contrast monochromatic stimulus. A simple explanation of the data is given in terms of a motion-detecting unit coupled to a temporal integrator. It is shown how such a unit can encode perceived velocity. The results of these experiments suggest that the neural coding of isoluminant stimuli is similar to that of low-contrast luminance stimuli and therefore that isoluminance may not be an effective method to find out whether specific visual mechanisms are color-blind.


Spring Conference on Computer Graphics | 2005

Visual attention for efficient high-fidelity graphics

Veronica Sundstedt; Kurt Debattista; Peter William Longhurst; Alan Chalmers; Tom Troscianko

High-fidelity rendering of complex scenes at interactive rates is one of the primary goals of computer graphics. Since high-fidelity rendering is computationally expensive, perceptual strategies such as visual attention have been explored to achieve this goal. In this paper we investigate how two models of human visual attention can be exploited in a selective rendering system. We examine their effects both individually, and in combination, through psychophysical experiments to measure savings in computation time while preserving the perceived visual quality for a task-related scene. We adapt the lighting simulation system Radiance to support selective rendering, by introducing a selective guidance system which can exploit attentional processes using an importance map. Our experiments demonstrate that viewers performing a visual task within the environment consistently fail to notice the difference between high quality and selectively rendered images, computed in a significantly reduced time.
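
The core idea of selective rendering is that an importance map decides where the sample budget is spent. The sketch below allocates per-pixel samples in proportion to a map, with a quality floor elsewhere; the map, budget, and function name are invented for illustration and this is not Radiance's actual mechanism.

```python
import numpy as np

# Sketch of importance-driven sample allocation for selective
# rendering: spend the ray budget where the importance map says
# viewers will attend, with a minimum quality floor elsewhere.

def allocate_samples(importance, budget, min_samples=1):
    """Distribute `budget` samples proportionally to importance,
    guaranteeing at least `min_samples` per pixel. Any rounding
    leftover is simply unspent."""
    importance = np.asarray(importance, dtype=float)
    floor_total = min_samples * importance.size
    weights = importance / importance.sum()
    extra = np.floor(weights * (budget - floor_total)).astype(int)
    return min_samples + extra

# Hypothetical 4x4 importance map: task-salient region in one corner.
importance = np.ones((4, 4))
importance[:2, :2] = 10.0
samples = allocate_samples(importance, budget=1000)

print(samples[:2, :2].mean() > samples[2:, 2:].mean())  # salient gets more
```

In a full system the map would combine bottom-up saliency with task relevance, and the renderer would trace `samples[y, x]` rays per pixel.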


International Journal of Neural Systems | 1997

Automatic segmentation and classification of outdoor images using neural networks

Neill W. Campbell; Barry T. Thomas; Tom Troscianko

The paper describes how neural networks may be used to segment and label objects in images. A self-organising feature map is used for the segmentation phase, and we quantify the quality of the segmentations produced as well as the contribution made by colour and texture features. A multi-layer perceptron is trained to label the regions produced by the segmentation process. It is shown that 91.1% of the image area is correctly classified into one of eleven categories, which include cars, houses, fences, roads, vegetation and sky.
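
The labelling stage is a multi-layer perceptron mapping per-region feature vectors to class labels. A minimal sketch of that idea follows; the features are synthetic stand-ins for the paper's colour/texture measures, only two toy classes are used instead of eleven, and the architecture and training settings are invented.

```python
import numpy as np

# Minimal sketch of the labelling stage: a small multi-layer
# perceptron classifying image regions from feature vectors.
# Synthetic two-class toy data ("vegetation" vs "road").
rng = np.random.default_rng(0)

# Synthetic region features: (mean green chroma, texture energy).
veg = rng.normal([0.8, 0.7], 0.1, (100, 2))
road = rng.normal([0.2, 0.2], 0.1, (100, 2))
X = np.vstack([veg, road])
y = np.array([1] * 100 + [0] * 100)

# One hidden layer, logistic output, plain full-batch gradient descent.
W1 = rng.normal(0, 0.5, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, 8); b2 = 0.0
sigmoid = lambda z: 1 / (1 + np.exp(-z))

for _ in range(2000):
    h = np.tanh(X @ W1 + b1)            # hidden activations
    p = sigmoid(h @ W2 + b2)            # P(class == 1)
    grad_out = p - y                    # dLoss/dlogit (cross-entropy)
    W2 -= 0.1 * h.T @ grad_out / len(y)
    b2 -= 0.1 * grad_out.mean()
    grad_h = np.outer(grad_out, W2) * (1 - h ** 2)  # backprop to hidden
    W1 -= 0.1 * X.T @ grad_h / len(y)
    b1 -= 0.1 * grad_h.mean(axis=0)

accuracy = ((p > 0.5) == y).mean()
print(accuracy)  # high on this well-separated toy problem
```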

Collaboration


Dive into Tom Troscianko's collaborations.

Top co-authors:

Jan Noyes (University of Bristol)

Michelle To (University of Cambridge)