Publication


Featured research published by Geoffrey M. Underwood.


European Journal of Cognitive Psychology | 2006

Eye movements during scene inspection: A test of the saliency map hypothesis.

Geoffrey M. Underwood; Tom Foulsham; Editha van Loon; Louise Humphreys; Jackie Bloyce

What attracts attention when we inspect a scene? Two experiments recorded eye movements while viewers inspected pictures of natural office scenes in which two objects of interest were placed. One object had low contour density and uniform colouring (a piece of fruit), relative to another that was visually complex (for example, coffee mugs and commercial packages). In each picture the visually complex object had the highest visual saliency according to the Itti and Koch algorithm. The two experiments varied the task under which the pictures were inspected, to determine whether visual saliency is invariably dominant in determining the pattern of fixations, or whether the purpose of inspection can provide a cognitive override that renders saliency secondary. In the first experiment viewers inspected the scene in preparation for a memory task, and the more complex objects were potent in attracting early fixations, in support of a saliency map model of scene inspection. In the second experiment viewers were set the task of detecting the presence of a low saliency target, and the effect of a high saliency distractor was negligible, supporting a model in which the saliency map can be built with cognitive influences that override low-level visual features.


international conference on human-computer interaction | 1995

Learning graphical programming: An evaluation of KidSim™

David J. Gilmore; Karen Pheasey; Jean Underwood; Geoffrey M. Underwood

This paper presents part of an evaluation of a new children's programming environment, developed by Apple Computer Inc. for 10–13 year old children. We studied 56 children, generally working in groups of 2–3, using KidSim™ for between 2 and 12 hours over a period of 2 days to 3 weeks. The results show that children of this age can readily learn to master the programming environment, and that they greatly enjoy using the system; indeed, in most cases it clearly fired their imaginations. However, questions remain about the level of programming abstraction that they were able to understand.


Eye Movements: A Window on Mind and Brain | 2007

Congruency, saliency and gist in the inspection of objects in natural scenes

Geoffrey M. Underwood; Louise Humphreys; Eleanor Cross

Early studies of the inspection of scenes suggested that eye fixations are attracted to objects that are incongruent with the gist of a picture, whereas more recent studies have questioned this conclusion. The two experiments presented in the chapter investigate the potency of incongruent objects in attracting eye fixations during the inspection of pictures of real-world scenes. Pictures sometimes contained an object that violated the gist, such as a cow grazing on a ski slope, and the question asked was whether fixations were attracted to these objects earlier than when they appeared in congruent contexts. In both experiments earlier fixation of incongruent objects was recorded, suggesting a role for peripheral vision in the early comprehension of the gist of a scene and in the detection of anomalies.


Cognitive Computation | 2009

Cognitive Processes in Eye Guidance: Algorithms for Attention in Image Processing

Geoffrey M. Underwood

When inspecting an image for the first time, how does the viewer decide where to look next? The saliency map hypothesis proposes that viewers initially analyse the image for variations in low-level visual features including intensity, colour, and edge orientation, and that their eyes are guided towards the most salient region. The saliency of objects in scenes may provide an explanation of why some experiments find that incongruent objects attract attention whilst other studies do not find this effect. Experiments that have monitored eye movements during scene inspection have found some support for the saliency map hypothesis, particularly when pictures are inspected in anticipation of a memory test. Under some circumstances the hypothesis fails to account for inspection patterns. When scenes are inspected to check the presence or absence of a named object, or when two images are compared to determine whether they are identical, or when the viewer has specialised domain knowledge of the scene depicted, then saliency has little influence. This paper evaluates the saliency map hypothesis of scene perception using evidence of eye movements made when images are first inspected, and concludes that visual saliency can be used by viewers, but that its use is both task-dependent and knowledge-dependent.
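
The computation referred to above can be sketched in a few lines. The following is a heavily simplified, illustrative take on an Itti and Koch style saliency map, not the published implementation: intensity, colour-opponency and edge maps are passed through a centre-surround (difference-of-Gaussians) filter, normalised, averaged, and the peak of the combined map is read off as the model's candidate for the first fixation. All function names, weights and filter scales are assumptions made for the example.

import numpy as np
from scipy.ndimage import gaussian_filter, sobel

def centre_surround(channel, sigma_centre=2, sigma_surround=8):
    # Approximate centre-surround contrast as a difference of Gaussians
    return np.abs(gaussian_filter(channel, sigma_centre) - gaussian_filter(channel, sigma_surround))

def saliency_map(rgb):
    # Simplified bottom-up saliency from intensity, colour opponency and edge energy
    rgb = rgb.astype(float)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    intensity = (r + g + b) / 3.0
    red_green = np.abs(r - g)                 # crude red-green opponency
    blue_yellow = np.abs(b - (r + g) / 2.0)   # crude blue-yellow opponency
    edges = np.hypot(sobel(intensity, axis=0), sobel(intensity, axis=1))
    maps = [centre_surround(m) for m in (intensity, red_green, blue_yellow, edges)]
    maps = [m / (m.max() + 1e-9) for m in maps]   # normalise each conspicuity map
    return sum(maps) / len(maps)                  # equal-weight combination

def predicted_first_fixation(rgb):
    # (row, column) of the most salient pixel, the model's guess at the first fixation
    s = saliency_map(rgb)
    return np.unravel_index(np.argmax(s), s.shape)

In practice the model's prediction would be compared against recorded fixation positions; as the abstract notes, that comparison favours the model mainly when pictures are inspected for a later memory test.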


USAB '08 Proceedings of the 4th Symposium of the Workgroup Human-Computer Interaction and Usability Engineering of the Austrian Computer Society on HCI and Usability for Education and Work | 2008

Knowledge-Based Patterns of Remembering: Eye Movement Scanpaths Reflect Domain Experience

Geoffrey M. Underwood; Katherine Humphrey; Tom Foulsham

How does knowledge of a domain influence the way in which we inspect artefacts from within that domain? Eye fixation scanpaths were recorded as trained individuals looked at images from within their own domain or from another domain. Sequences of fixations indicated differences in the inspection patterns of the two groups, with knowledge reflected in lower reliance on low-level visual features. Scanpaths were observed during first and second viewings of pictures and found to be reliably similar, and this relationship held in a second experiment when the second viewing was performed one week later. Eye fixation scanpaths thus indicate the viewer's knowledge of the domain of study.
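
Scanpath similarity of the kind reported above is commonly quantified with a string-edit measure: each fixation is assigned to a labelled grid cell, the two viewings become character strings, and the normalised Levenshtein distance between them gives a similarity score. The sketch below is a generic version of that idea; the grid size, labelling scheme and normalisation are illustrative assumptions rather than the exact procedure used in the paper.

def gridded_string(fixations, width, height, n=5):
    # Quantise (x, y) fixations into an n x n grid and label each cell with a character
    labels = []
    for x, y in fixations:
        col = min(int(x / width * n), n - 1)
        row = min(int(y / height * n), n - 1)
        labels.append(chr(ord('A') + row * n + col))
    return ''.join(labels)

def levenshtein(a, b):
    # Classic edit distance between two fixation strings
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1, curr[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = curr
    return prev[-1]

def scanpath_similarity(first_viewing, second_viewing, width, height):
    # 1.0 means identical fixation sequences, 0.0 means maximally different
    s1 = gridded_string(first_viewing, width, height)
    s2 = gridded_string(second_viewing, width, height)
    if not s1 and not s2:
        return 1.0
    return 1.0 - levenshtein(s1, s2) / max(len(s1), len(s2))

Comparing a viewer's first-viewing and second-viewing similarity scores for own-domain versus other-domain images is the kind of analysis the abstract describes.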


European Journal of Cognitive Psychology | 2010

Enhanced Memory for Emotional Pictures: A Product of Increased Attention to Affective Stimuli?

Louise Humphreys; Geoffrey M. Underwood; Peter Chapman

The current experiment addressed the question of whether enhanced memory for emotional pictures is due to increased attention to affective stimuli. Participants viewed pairs of pictures (emotional-neutral or neutral-neutral) whilst their eye movements were recorded; participants had to decide which picture out of each pair they preferred. There was increased attention to positive pictures and decreased attention to negative images during picture viewing. Despite this, when a recognition test was given 1 week later, memory enhancements were found for negative pictures only. Moreover, although there was a general correlation between total inspection time and memory performance, this relationship was reliable only for neutral pictures, and not for emotional images. The results suggest that memory advantages for emotional pictures can occur without increased attention to such images.


Cognitive Computation | 2011

If Visual Saliency Predicts Search, Then Why? Evidence from Normal and Gaze-Contingent Search Tasks in Natural Scenes

Tom Foulsham; Geoffrey M. Underwood

The Itti and Koch (Vision Research 40: 1489–1506, 2000) saliency map model has inspired a wealth of research testing the claim that bottom-up saliency determines the placement of eye fixations in natural scenes. Although saliency seems to correlate with (although not necessarily cause) fixation in free-viewing or encoding tasks, it has been suggested that visual saliency can be overridden in a search task, with saccades being planned on the basis of target features, rather than being captured by saliency. Here, we find that target regions of a scene that are salient according to this model are found more quickly than control regions (Experiment 1). However, this does not seem to be altered by filtering features in the periphery using a gaze-contingent display (Experiment 2), and a deeper analysis of the eye movements made suggests that the saliency effect is instead due to the meaning of the scene regions. Experiment 3 supports this interpretation, showing that scene inversion reduces the saliency effect. These results suggest that saliency effects on search may have nothing to do with bottom-up saccade guidance.


Neural Networks | 2011

2011 Special Issue: Modeling eye movements in visual agnosia with a saliency map approach: Bottom-up guidance or top-down strategy?

Tom Foulsham; Jason J. S. Barton; Alan Kingstone; Richard Dewhurst; Geoffrey M. Underwood

Two recent papers (Foulsham, Barton, Kingstone, Dewhurst, & Underwood, 2009; Mannan, Kennard, & Husain, 2009) report that neuropsychological patients with a profound object recognition problem (visual agnosic subjects) show differences from healthy observers in the way their eye movements are controlled when looking at images. The interpretation of these papers is that eye movements can be modeled as the selection of points on a saliency map, and that agnosic subjects show an increased reliance on visual saliency, i.e., brightness and contrast in low-level stimulus features. Here we review this approach and present new data from our own experiments with an agnosic patient that quantifies the relationship between saliency and fixation location. In addition, we consider whether the perceptual difficulties of individual patients might be modeled by selectively weighting the different features involved in a saliency map. Our data indicate that saliency is not always a good predictor of fixation in agnosia: even for our agnosic subject, as for normal observers, the saliency-fixation relationship varied as a function of the task. This means that top-down processes still have a significant effect on the earliest stages of scanning in the setting of visual agnosia, indicating severe limitations for the saliency map model. Top-down, active strategies, which are the hallmark of our human visual system, play a vital role in eye movement control, whether we know what we are looking at or not.
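
The relationship between saliency and fixation location mentioned above is conventionally quantified with a signal-detection analysis: saliency values sampled at fixated locations are compared with values at control locations (for example, fixations borrowed from other images), and the separation is summarised as an ROC area. The sketch below assumes a precomputed saliency map and pixel-coordinate fixations; it illustrates that style of analysis rather than the authors' exact procedure.

import numpy as np

def saliency_at(saliency, fixations):
    # Sample the saliency map at integer (x, y) fixation locations
    return np.array([saliency[int(y), int(x)] for x, y in fixations])

def fixation_auc(saliency, real_fixations, control_fixations):
    # Area under the ROC curve for discriminating fixated from control locations;
    # 0.5 means saliency carries no information about where the observer looked
    fixated = saliency_at(saliency, real_fixations)
    control = saliency_at(saliency, control_fixations)
    greater = (fixated[:, None] > control[None, :]).mean()
    ties = (fixated[:, None] == control[None, :]).mean()
    return greater + 0.5 * ties

Task-dependence of the kind reported here would show up as different AUC values for the same observer under different viewing instructions.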


Computers in Education | 1996

Collaboration and discourse while programming the KidSim microworld simulation

Geoffrey M. Underwood; Jean Underwood; Karen Pheasey; David J. Gilmore

How does group discussion predict performance while small groups of children work at a programming exercise? This question was approached using 11–14 year old children being introduced to a programming environment (KidSim) that employs graphical re-write rules and programming by direct demonstration. KidSim enables identification of the separate stages of rule writing and testing by the non-group observer. After an introduction to KidSim the groups were set an exercise in which a number of rules were to be written. The children were allowed as long as necessary to complete the exercise, and all groups succeeded. The time allocated to different components of the exercise was related to discussions that occurred during performance. Writing was associated with comments that suggested actions to be taken, and testing was associated with comments that offered opinions, evaluations, and analyses.
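
For readers unfamiliar with the environment: a KidSim graphical rewrite rule pairs a "before" picture of a small grid neighbourhood with an "after" picture, and the simulation fires the rule wherever the "before" pattern matches the world. The snippet below is only a schematic, hypothetical rendering of that idea in code; KidSim itself is a visual, direct-manipulation system with no textual API.

def apply_rule_once(grid, rule):
    # Apply a before/after rewrite rule at the first matching position (one simulation tick)
    before, after = rule
    h, w = len(before), len(before[0])
    cells = [list(row) for row in grid]
    for r in range(len(cells) - h + 1):
        for c in range(len(cells[0]) - w + 1):
            window = tuple(tuple(cells[r + i][c + j] for j in range(w)) for i in range(h))
            if window == before:
                for i in range(h):
                    for j in range(w):
                        cells[r + i][c + j] = after[i][j]
                return [tuple(row) for row in cells]
    return grid  # no match, the world is unchanged

# Example: a character 'C' standing left of empty ground '.' moves one cell right per tick.
move_right = ((('C', '.'),), (('.', 'C'),))
world = [('C', '.', '.')]
world = apply_rule_once(world, move_right)   # [('.', 'C', '.')]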


Cognitive Computation | 2011

See what I'm saying? Expertise and verbalisation in perception and imagery of complex scenes

Katherine Humphrey; Geoffrey M. Underwood

How does describing a previously viewed picture affect our memory for it? Does verbalisation affect our eye movements even when the picture has disappeared? When viewing a photograph, the sequences of eye movements we make (‘scanpaths’) are influenced by both bottom-up visual saliency and top-down cognitive knowledge. Recognition memory is enhanced and the similarity of scanpaths at encoding and recognition is greater for domain-specific pictures. A similarity in scanpaths is also observed during imagery but to a greatly reduced degree. This study explored whether scanpath similarity could be improved by verbalising one’s memory of the picture and whether the previously observed domain-specific advantage was still present when no bottom-up information was available. Specialists and controls were shown a set of photographs, and after each one had to either visualise it or describe it from memory. The stimuli were complex scenes, half of which contained a domain-specific object. Recognition accuracy was increased by post-stimulus verbalisation, and specialists demonstrated an advantage for stimuli that contained domain-relevant information. Saliency influenced both verbal feedback and eye movements but was moderated by domain expertise. Scanpaths were more similar when pictures were described compared to when imagined, and specialists produced more similar scanpaths when describing domain-specific pictures, compared to control pictures and control participants.

Collaboration


Dive into Geoffrey M. Underwood's collaborations.

Top Co-Authors

Karen Pheasey

University of Nottingham

Eleanor Cross

University of Nottingham


Emma Templeman

University of Nottingham


Jackie Bloyce

University of Nottingham
