Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Alasdair Clarke is active.

Publication


Featured research published by Alasdair Clarke.


European Conference on Computer Vision | 2014

Training Object Class Detectors from Eye Tracking Data

Dim P. Papadopoulos; Alasdair Clarke; Frank Keller; Vittorio Ferrari

Training an object class detector typically requires a large set of images annotated with bounding-boxes, which is expensive and time-consuming to create. We propose a novel approach to annotating object locations which can substantially reduce annotation time. We first track the eye movements of annotators instructed to find the object, and then propose a technique for deriving object bounding-boxes from these fixations. To validate our idea, we collected eye tracking data for the trainval part of 10 object classes of Pascal VOC 2012 (6,270 images, 5 observers). Our technique correctly produces bounding-boxes in 50% of the images, while reducing the total annotation time by a factor of 6.8× compared to drawing bounding-boxes. Any standard object class detector can be trained on the bounding-boxes predicted by our model. Our large-scale eye tracking dataset is available at groups.inf.ed.ac.uk/calvin/eyetrackdataset/.
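
The paper's actual pipeline derives boxes with a trained model over fixation data; purely as a sketch of the basic idea of turning fixations into a box, here is a minimal, hypothetical percentile-trimming baseline (the function name, percentile cut-offs, and padding fraction are illustrative assumptions, not the authors' method):

```python
import numpy as np

def bbox_from_fixations(fixations, lo=5, hi=95, pad=0.1):
    """Estimate an object bounding-box from fixation coordinates.

    fixations: (n, 2) array of (x, y) gaze positions on the image.
    Percentile trimming discards stray fixations that landed off the
    object; `pad` then expands the box by a fraction of its size.
    """
    fixations = np.asarray(fixations, dtype=float)
    x_min, x_max = np.percentile(fixations[:, 0], [lo, hi])
    y_min, y_max = np.percentile(fixations[:, 1], [lo, hi])
    w, h = x_max - x_min, y_max - y_min
    return (x_min - pad * w, y_min - pad * h,
            x_max + pad * w, y_max + pad * h)

# Five fixations: four on an object near (320, 240), one stray.
print(bbox_from_fixations([(310, 235), (325, 250), (318, 242),
                           (330, 238), (500, 60)]))
```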


Journal of the Optical Society of America A: Optics, Image Science, and Vision | 2010

Measuring perceived differences in surface texture due to changes in higher order statistics

Khemraj Emrith; Mike J. Chantler; Patrick R. Green; Laurence T. Maloney; Alasdair Clarke

We investigate the ability of humans to perceive changes in the appearance of images of surface texture caused by variation in their higher-order statistics. We incrementally randomize the images' phase spectra while holding their first- and second-order statistics constant, ensuring that any change in appearance is due solely to changes in third- and higher-order statistics. Stimuli comprise both natural and synthetically generated naturalistic images, with the latter used to prevent observers from making pixel-wise comparisons. A difference scaling method is used to derive perceptual scales for each observer, which show a sigmoidal relationship with the degree of randomization. Observers were maximally sensitive to changes within the 20%–60% randomization range. To account for this behavior, we propose a biologically plausible model that computes the variance of local measurements of phase congruency.
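
The phase-randomization manipulation can be sketched with a discrete Fourier transform: keep the amplitude spectrum (and hence the second-order statistics) fixed and blend the original phase with a random phase field. This is a simplified sketch under stated assumptions, not the published procedure; in particular the blending rule is assumed, and taking the real part of the inverse transform stands in for a more careful treatment of conjugate symmetry.

```python
import numpy as np

def randomize_phase(img, alpha, rng=None):
    """Randomize an image's phase spectrum by a fraction alpha in [0, 1].

    The amplitude spectrum -- and hence the power spectrum and
    second-order statistics -- is left untouched, so changes in
    appearance are driven by higher-order statistics only.
    """
    rng = np.random.default_rng() if rng is None else rng
    F = np.fft.fft2(img)
    amplitude, phase = np.abs(F), np.angle(F)
    random_phase = rng.uniform(-np.pi, np.pi, size=img.shape)
    # Assumed blending rule; the published procedure may differ.
    new_phase = (1.0 - alpha) * phase + alpha * random_phase
    out = np.fft.ifft2(amplitude * np.exp(1j * new_phase))
    # Taking the real part is a simplification: a blended phase field
    # is no longer conjugate-symmetric, so the inverse FFT is complex.
    return np.real(out)
```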


Frontiers in Psychology | 2013

Where's Wally: the influence of visual salience on referring expression generation

Alasdair Clarke; Micha Elsner; Hannah Rohde

Referring expression generation (REG) presents the converse problem to visual search: given a scene and a specified target, how does one generate a description which would allow somebody else to quickly and accurately locate the target? Previous work in psycholinguistics and natural language processing has failed to find an important and integrated role for vision in this task. That previous work, which relies largely on simple scenes, tends to treat vision as a pre-process for extracting feature categories that are relevant to disambiguation. However, the visual search literature suggests that some descriptions are better than others at enabling listeners to search efficiently within complex stimuli. This paper presents a study testing whether participants are sensitive to visual features that allow them to compose such "good" descriptions. Our results show that visual properties (salience, clutter, area, and distance) influence REG for targets embedded in images from the Where's Wally? books. Referring expressions for large targets are shorter than those for smaller targets, and expressions about targets in highly cluttered scenes use more words. We also find that participants are more likely to mention non-target landmarks that are large, salient, and in close proximity to the target. These findings identify a key role for visual salience in language production decisions and highlight the importance of scene complexity for REG.


The Journal of Neuroscience | 2016

Representation of Maximally Regular Textures in Human Visual Cortex

Peter Köhler; Alasdair Clarke; Alexandra Yakovleva; Yanxi Liu; Anthony M. Norcia

Naturalistic textures with an intermediate degree of statistical regularity can capture key structural features of natural images (Freeman and Simoncelli, 2011). V2 and later visual areas are sensitive to these features, while primary visual cortex is not (Freeman et al., 2013). Here we expand on this work by investigating a class of textures that have maximal formal regularity, the 17 crystallographic wallpaper groups (Fedorov, 1891). We used texture stimuli from four of the groups that differ in the maximum order of rotation symmetry they contain, and measured neural responses in human participants using functional MRI and high-density EEG. We found that cortical area V3 has a parametric representation of the rotation symmetries in the textures that is not present in either V1 or V2, the first discovery of a stimulus property that differentiates processing in V3 from that of lower-level areas. Parametric responses were also seen in higher-order ventral stream areas V4, VO1, and lateral occipital complex (LOC), but not in dorsal stream areas. The parametric response pattern was replicated in the EEG data, and source localization indicated that responses in V3 and V4 lead responses in LOC, which is consistent with a feedforward mechanism. Finally, we presented our stimuli to four well-developed feedforward models and found that none of them were able to account for our results. Our results highlight structural regularity as an important stimulus dimension for distinguishing the early stages of visual processing, and suggest a previously unrecognized role for V3 in the visual form-processing hierarchy.

SIGNIFICANCE STATEMENT: Hierarchical processing is a fundamental organizing principle in visual neuroscience, with each successive processing stage being sensitive to increasingly complex stimulus properties. Here, we probe the encoding hierarchy in human visual cortex using a class of visual textures, wallpaper patterns, that are maximally regular. Through a combination of fMRI and EEG source imaging, we find specific responses to texture regularity that depend parametrically on the maximum order of rotation symmetry in the textures. These parametric responses are seen in several areas of the ventral visual processing stream, as well as in area V3, but not in V1 or V2. This is the first demonstration of a stimulus property that differentiates processing in V3 from that of lower-level visual areas.
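
As a rough illustration of what "maximum order of rotation symmetry" means for these stimuli, the sketch below builds a periodic texture with 4-fold rotation symmetry by averaging a random patch with its quarter-turn rotations. This is a toy construction under stated assumptions, not the wallpaper-group stimulus generation used in the paper:

```python
import numpy as np

def p4_texture(tile_size=64, reps=4, rng=None):
    """Toy periodic texture with 4-fold rotation symmetry.

    Averaging a random patch with its 90-degree rotations makes the
    tile invariant under quarter-turn rotation about its centre;
    tiling it yields a periodic pattern whose maximum order of
    rotation symmetry is 4 (the quantity the paper varies).
    """
    rng = np.random.default_rng() if rng is None else rng
    patch = rng.random((tile_size, tile_size))
    tile = sum(np.rot90(patch, k) for k in range(4)) / 4.0
    return np.tile(tile, (reps, reps))
```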


Vision Research | 2014

Deriving an appropriate baseline for describing fixation behaviour

Alasdair Clarke; Benjamin W. Tatler

Humans display image-independent viewing biases when inspecting complex scenes. One of the strongest such biases is the central tendency in scene viewing: observers favour making fixations towards the centre of an image, irrespective of its content. Characterising these biases accurately is important for three reasons: (1) they provide a necessary baseline for quantifying the association between visual features in scenes and fixation selection; (2) they provide a benchmark for evaluating models of fixation behaviour when viewing scenes; and (3) they can be included as a component of generative models of eye guidance. In the present study we compare four commonly used approaches to describing image-independent biases and report their ability to describe observed data and correctly classify fixations across 10 eye movement datasets. We propose an anisotropic Gaussian function that can serve as an effective and appropriate baseline for describing image-independent biases without the need to fit functions to individual datasets or subjects.
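
A minimal sketch of such an anisotropic Gaussian baseline, assuming the bias is a probability map peaking at the image centre with different horizontal and vertical spreads; the sigma values below are illustrative placeholders, not the parameters reported in the paper:

```python
import numpy as np

def central_bias_map(width, height, sigma_x=0.25, sigma_y=0.15):
    """Anisotropic Gaussian baseline for image-independent fixation bias.

    Returns a (height, width) probability map peaking at the image
    centre. sigma_x and sigma_y are fractions of image width and
    height; sigma_x > sigma_y reflects the typically broader
    horizontal spread of fixations. Values here are illustrative.
    """
    x = (np.arange(width) - width / 2.0) / width
    y = (np.arange(height) - height / 2.0) / height
    xx, yy = np.meshgrid(x, y)
    bias = np.exp(-0.5 * ((xx / sigma_x) ** 2 + (yy / sigma_y) ** 2))
    return bias / bias.sum()  # normalise so the map sums to 1
```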


Frontiers in Psychology | 2013

The impact of attentional, linguistic, and visual features during object naming

Alasdair Clarke; Moreno I. Coco; Frank Keller

Object detection and identification are fundamental to human vision, and there is mounting evidence that objects guide the allocation of visual attention. However, the role of objects in tasks involving multiple modalities is less clear. To address this question, we investigate object naming, a task in which participants have to verbally identify objects they see in photorealistic scenes. We report an eye-tracking study that investigates which features (attentional, visual, and linguistic) influence object naming. We find that the amount of visual attention directed toward an object, its position and saliency, along with linguistic factors such as word frequency, animacy, and semantic proximity, significantly influence whether the object will be named or not. We then ask how features from different modalities are combined during naming, and find significant interactions between saliency and position, saliency and linguistic features, and attention and position. We conclude that when the cognitive system performs tasks such as object naming, it uses input from one modality to constrain or enhance the processing of other modalities, rather than processing each input modality independently.


British Machine Vision Conference | 2011

Perceptual Similarity: A Texture Challenge

Alasdair Clarke; Fraser Halley; Andrew J. Newell; Lewis D. Griffin; Mike J. Chantler

Texture classification and segmentation have been extensively researched over the last thirty years. Early on, the Brodatz album [1] quickly became the de facto standard, in which a texture class comprised a set of non-overlapping sub-images cropped from a single photograph. Later, as the focus shifted to investigating illumination- and pose-invariant algorithms, the CUReT database [3] became popular, and the texture class became the set of photographs of a single physical sample captured under a variety of imaging conditions. While extremely successful algorithms have been developed to address classification problems based on these databases, the challenging problem of measuring perceived inter-class texture similarity has rarely been discussed. This paper makes use of a new texture collection [4]. It comprises 334 texture samples, including examples of embossed vinyl, woven wall coverings, carpets, rugs, window blinds, soft fabrics, building materials, product packaging, etc. Additionally, an associated perceptual similarity matrix is provided. This was obtained from a grouping experiment using 30 observers. The similarity score S(I_i, I_j) for each texture pair was calculated simply by dividing the number of observers that grouped the pair into the same sub-set by the number of observers that had the opportunity to do so. A dissimilarity matrix was then defined as dsim(I_i, I_j) = 1 − S(I_i, I_j). Hence dsim(I_i, I_i) = 0 for all images I_i, and dsim(I_i, I_j) = 1 if none of the participants grouped images I_i together with I_j.
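
The similarity computation can be written down directly. The sketch below is one plausible reading, assuming that "had the opportunity to do so" means both images appeared in that observer's session; the data structures and function name are illustrative:

```python
import numpy as np

def dissimilarity_matrix(sessions, n_images):
    """Perceptual dissimilarity from free-grouping data.

    sessions: one entry per observer, each a list of sets of image
    indices (the sub-sets that observer formed).
    S(i, j) = (#observers grouping i and j together) /
              (#observers who saw both i and j);  dsim = 1 - S.
    """
    together = np.zeros((n_images, n_images))
    possible = np.zeros((n_images, n_images))
    for session in sessions:
        seen = sorted(set().union(*session))
        for i in seen:
            for j in seen:
                possible[i, j] += 1
        for subset in session:
            for i in subset:
                for j in subset:
                    together[i, j] += 1
    with np.errstate(invalid="ignore", divide="ignore"):
        S = np.where(possible > 0, together / possible, 0.0)
    dsim = 1.0 - S
    np.fill_diagonal(dsim, 0.0)
    return dsim
```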


Journal of Vision | 2009

Modeling visual search on a rough surface

Alasdair Clarke; Mike J. Chantler; Patrick R. Green

The LNL (linear, non-linear, linear) model has previously been applied successfully to the problem of texture segmentation. In this study we investigate the extent to which a simple LNL model can simulate human performance in a search task involving a target on a textured surface. Two different classes of surface are considered: 1/f^β noise and near-regular textures. We find that in both cases the search performance of the model does not differ significantly from that of human observers, over a wide range of task difficulties.
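
A generic LNL channel can be sketched as an oriented linear filter, a pointwise non-linearity, and a linear pooling stage. The sketch below is one conventional instantiation (Gabor filtering, squaring, Gaussian pooling) with illustrative parameters, not the specific model fitted in the paper:

```python
import numpy as np
from scipy import ndimage

def lnl_channel(img, freq=0.1, theta=0.0, pool_sigma=8.0):
    """One channel of a linear / non-linear / linear (LNL) model.

    Stage 1 (linear):     oriented Gabor filtering.
    Stage 2 (non-linear): pointwise squaring (energy).
    Stage 3 (linear):     Gaussian pooling of the rectified output.
    All parameter values are illustrative.
    """
    half = int(2.0 / freq)
    ys, xs = np.mgrid[-half:half + 1, -half:half + 1]
    x_rot = xs * np.cos(theta) + ys * np.sin(theta)
    envelope = np.exp(-2.0 * (freq ** 2) * (xs ** 2 + ys ** 2))
    gabor = envelope * np.cos(2.0 * np.pi * freq * x_rot)
    linear = ndimage.convolve(img, gabor, mode="reflect")
    energy = linear ** 2
    return ndimage.gaussian_filter(energy, pool_sigma)
```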


Vision Research | 2008

Visual search for a target against a 1/f^β continuous textured background

Alasdair Clarke; Patrick R. Green; Mike J. Chantler; Khemraj Emrith

We present synthetic surface textures as a novel class of stimuli for use in visual search experiments. Surface textures have certain advantages over both the arrays of abstract discrete items commonly used in search studies and photographs of natural scenes. In this study we investigate how changing the properties of the surface and target influence the difficulty of a search task. We present a comparison with Itti and Koch's saliency model and find that it fails to model human behaviour on these surfaces. In particular, it does not respond to changes in orientation in the same manner as human observers.
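
A 1/f^β noise texture of the kind used as stimuli can be synthesized in the Fourier domain by imposing a power-law amplitude spectrum on random phases. This is a generic construction, not necessarily the authors' exact synthesis procedure:

```python
import numpy as np

def one_over_f_texture(size=256, beta=1.8, rng=None):
    """Synthesize a 1/f^beta noise texture of shape (size, size).

    The amplitude spectrum falls off as f^(-beta/2), so the power
    spectrum falls off as f^(-beta); phases are drawn at random.
    Larger beta gives a smoother-looking surface.
    """
    rng = np.random.default_rng() if rng is None else rng
    fx, fy = np.meshgrid(np.fft.fftfreq(size), np.fft.fftfreq(size))
    f = np.hypot(fx, fy)
    f[0, 0] = f[0, 1]  # avoid division by zero at the DC component
    phases = rng.uniform(-np.pi, np.pi, f.shape)
    spectrum = f ** (-beta / 2.0) * np.exp(1j * phases)
    img = np.real(np.fft.ifft2(spectrum))
    return (img - img.min()) / (img.max() - img.min())  # scale to [0, 1]
```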


Symmetry | 2011

Similar symmetries: the role of wallpaper groups in perceptual texture similarity

Alasdair Clarke; Patrick R. Green; Fraser Halley; Mike J. Chantler

Periodic patterns and symmetries are striking visual properties that have been used decoratively around the world throughout human history. Periodic patterns can be mathematically classified into one of 17 different wallpaper groups, and while computational models have been developed which can extract an image's symmetry group, very little work has been done on how humans perceive these patterns. This study presents the results from a grouping experiment using stimuli from the different wallpaper groups. We find that while different images from the same wallpaper group are perceived as similar to one another, not all groups have the same degree of self-similarity. The similarity relationships between wallpaper groups appear to be dominated by rotations.

Collaboration


Dive into Alasdair Clarke's collaboration.

Top Co-Authors

Hannah Rohde

University of Edinburgh


Frank Keller

University of Edinburgh


Aoife Mahon

University of Aberdeen
