Publications


Featured research published by Anne P. Hillstrom.


Computers in Human Behavior | 2006

Factors that guide or disrupt attentive visual processing

Anne P. Hillstrom; Yu-Chin Chai

A review of basic research findings about what distracts visual attention is presented within the framework of designing visual interfaces for computer applications. The factors discussed include the distinctiveness of stimuli, the motivation of the user, memory for previous encounters with similar displays, and the perceptual organization of displays.


Quarterly Journal of Experimental Psychology | 2017

Tracking the truth: The effect of face familiarity on eye fixations during deception

Ailsa E. Millen; Lorraine Hope; Anne P. Hillstrom; Aldert Vrij

In forensic investigations, suspects sometimes conceal recognition of a familiar person to protect co-conspirators or hide knowledge of a victim. The current experiment sought to determine whether eye fixations could be used to identify memory of known persons when lying about recognition of faces. Participants’ eye movements were monitored whilst they lied and told the truth about recognition of faces that varied in familiarity (newly learned, famous celebrities, personally known). Memory detection by eye movements during recognition of personally familiar and famous celebrity faces was negligibly affected by lying, thereby demonstrating that detection of memory during lies is influenced by the prior learning of the face. By contrast, eye movements did not reveal lies robustly for newly learned faces. These findings support the use of eye movements as markers of memory during concealed recognition but also suggest caution when familiarity is only a consequence of one brief exposure.


Journal of Experimental Psychology: Applied | 2013

The effect of transparency on recognition of overlapping objects.

Anne P. Hillstrom; Hannah Wakefield; Helen Scholey

Are overlapping objects easier to recognize when the objects are transparent or opaque? It is important to know whether the transparency of X-ray images of luggage contributes to the difficulty in searching those images for targets. Transparency provides extra information about objects that would normally be occluded but creates potentially ambiguous depth relations at the region of overlap. Two experiments investigated the threshold durations at which adult participants could accurately name pairs of overlapping objects that were opaque or transparent. In Experiment 1, the transparent displays included monocular cues to relative depth. Recognition of the back object was possible at shorter durations for transparent displays than for opaque displays. In Experiment 2, the transparent displays had no monocular depth cues. There was no difference in the duration at which the back object was recognized across transparent and opaque displays. The results of the two experiments suggest that transparent displays, even though less familiar than opaque displays, do not make object recognition more difficult, and possibly show a benefit. These findings call into question the importance of edge junctions in object recognition.
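
The threshold-duration measure can be illustrated with a short analysis sketch. The code below is not the authors' analysis; it assumes hypothetical accuracy-by-duration data and fits a logistic psychometric function with SciPy to estimate an (arbitrarily defined) 75%-correct naming threshold.

    # Illustrative only: estimate a threshold presentation duration from
    # hypothetical naming-accuracy data by fitting a logistic psychometric function.
    import numpy as np
    from scipy.optimize import curve_fit

    def logistic(duration, midpoint, slope):
        """Probability of a correct naming response as a function of duration (ms)."""
        return 1.0 / (1.0 + np.exp(-slope * (duration - midpoint)))

    # Hypothetical data: presentation durations (ms) and mean naming accuracy.
    durations = np.array([17, 33, 50, 67, 83, 100, 133, 167], dtype=float)
    accuracy = np.array([0.05, 0.15, 0.35, 0.55, 0.70, 0.82, 0.93, 0.97])

    # Fit the psychometric function; p0 gives rough starting values for the fit.
    (midpoint, slope), _ = curve_fit(logistic, durations, accuracy, p0=[60.0, 0.05])

    # Define the threshold (arbitrarily) as the 75%-correct duration.
    threshold_75 = midpoint + np.log(0.75 / 0.25) / slope
    print(f"Estimated 75%-correct threshold: {threshold_75:.1f} ms")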


Philosophical Transactions of the Royal Society B | 2017

Cat and mouse search: the influence of scene and object analysis on eye movements when targets change locations during search

Anne P. Hillstrom; Joice Segabinazi; Hayward J. Godwin; Simon P. Liversedge; Valerie Benson

We explored the influence of early scene analysis and visible object characteristics on eye movements when searching for objects in photographs of scenes. On each trial, participants were shown sequentially either a scene preview or a uniform grey screen (250 ms), a visual mask, the name of the target and the scene, now including the target at a likely location. During the participants' first saccade during search, the target location was changed to: (i) a different likely location, (ii) an unlikely but possible location or (iii) a very implausible location. The results showed that the first saccade landed more often on the likely location in which the target re-appeared than on unlikely or implausible locations, and overall the first saccade landed nearer the first target location with a preview than without. Hence, rapid scene analysis influenced initial eye movement planning, but availability of the target rapidly modified that plan. After the target moved, it was found more quickly when it appeared in a likely location than when it appeared in an unlikely or implausible location. The findings show that both scene gist and object properties are extracted rapidly, and are used in conjunction to guide saccadic eye movements during visual search. This article is part of the themed issue ‘Auditory and visual scene analysis’.
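
As a rough illustration of a landing-position analysis of the kind described above (not the authors' code; region coordinates and trials are invented), each first-saccade endpoint can be classified against regions of interest for the likely, unlikely and implausible locations and tallied by preview condition:

    # Illustrative sketch: classify first-saccade landing positions against
    # rectangular regions of interest (ROIs) and tally them by preview condition.
    # All coordinates and trial data are hypothetical.
    from collections import Counter

    ROIS = {  # (x_min, y_min, x_max, y_max) in screen pixels
        "likely": (400, 300, 560, 420),
        "unlikely": (100, 500, 260, 620),
        "implausible": (700, 80, 860, 200),
    }

    def classify_landing(x, y):
        """Return the name of the ROI containing the landing point, or 'elsewhere'."""
        for name, (x0, y0, x1, y1) in ROIS.items():
            if x0 <= x <= x1 and y0 <= y <= y1:
                return name
        return "elsewhere"

    # Hypothetical trials: (had_preview, landing_x, landing_y)
    trials = [(True, 450, 350), (True, 150, 560), (False, 720, 150), (False, 500, 400)]

    counts = {True: Counter(), False: Counter()}
    for had_preview, x, y in trials:
        counts[had_preview][classify_landing(x, y)] += 1

    for had_preview, tally in counts.items():
        total = sum(tally.values())
        label = "preview" if had_preview else "no preview"
        print(label, {roi: n / total for roi, n in tally.items()})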


Archive | 2017

Program and Data for 'Cat and Mouse Search' paper

Anne P. Hillstrom; Hayward J. Godwin; Simon P. Liversedge; Valerie Benson; Joice Segabinazi

Hillstrom, Anne P., Segabinazi, Joice D., Godwin, Hayward J., Liversedge, Simon P. and Benson, Valerie (2017) Cat and mouse search: the influence of scene and object analysis on eye movements when targets change locations during search. Philosophical Transactions of the Royal Society B: Biological Sciences, 372 (1714), 1-9. (doi:10.1098/rstb.2016.0106). (PMID:28044017). The program was written using Experiment Builder, software from SR Research Ltd.


Behavioral and Brain Sciences | 2017

The FVF framework and target prevalence effects

Tamaryn Menneer; Hayward J. Godwin; Simon P. Liversedge; Anne P. Hillstrom; Valerie Benson; Nick Donnelly

The Functional Visual Field (FVF) offers explanatory power. To us, it relates to existing literature on the flexibility of attentional focus in visual search and reading (Eriksen & St. James 1986; McConkie & Rayner 1975). The target article promotes reflection on existing findings. Here we consider the FVF as a mechanism in the Prevalence Effect (PE) in visual search.


PLOS ONE | 2013

Oculomotor examination of the weapon focus effect: does a gun automatically engage visual attention?

Heather D. Flowe; Lorraine Hope; Anne P. Hillstrom

Background: A person is less likely to be accurately remembered if they appear in a visual scene with a gun, a result that has been termed the weapon focus effect (WFE). Explanations of the WFE argue that weapons engage attention because they are unusual and/or threatening, which causes encoding deficits for the other items in the visual scene. Previous WFE research has always embedded the weapon and nonweapon objects within a larger context that provides information about an actor's intention to use the object. As such, it is currently unknown whether a gun automatically engages attention to a greater extent than other objects independent of the context in which it is presented. Method: Reflexive responding to a gun compared to other objects was examined in two experiments. Experiment 1 employed a prosaccade gap-overlap paradigm, whereby participants looked toward a peripheral target, and Experiment 2 employed an antisaccade gap-overlap paradigm, whereby participants looked away from a peripheral target. In both experiments, the peripheral target was a gun or a nonthreatening object (i.e., a tomato or pocket watch). We also controlled how unexpected the targets were and compared saccadic reaction times across types of objects. Results: A gun was not found to differentially engage attention compared to the unexpected object (i.e., a pocket watch). Some evidence was found (Experiment 2) that both the gun and the unexpected object engaged attention to a greater extent compared with the expected object (i.e., a tomato). Conclusion: An image of a gun did not engage attention to a larger extent than images of other types of objects (i.e., a pocket watch or tomato). The results suggest that context may be an important determinant of the WFE. The extent to which an object is threatening may depend on the larger context in which it is presented.
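
The gap-overlap comparison comes down to summarising saccadic reaction times (SRTs) by task and peripheral object type. The sketch below is purely illustrative, with invented SRT values rather than the study's data:

    # Illustrative sketch (not the study's analysis): summarise hypothetical
    # saccadic reaction times (SRTs, in ms) by task and peripheral object type.
    from statistics import mean

    # Each record: (task, peripheral object, SRT in ms) -- values are invented.
    trials = [
        ("prosaccade", "gun", 182), ("prosaccade", "tomato", 175),
        ("prosaccade", "pocket watch", 186), ("antisaccade", "gun", 251),
        ("antisaccade", "tomato", 238), ("antisaccade", "pocket watch", 249),
    ]

    # Group SRTs by (task, object) and report condition means.
    summary = {}
    for task, obj, srt in trials:
        summary.setdefault((task, obj), []).append(srt)

    for (task, obj), srts in sorted(summary.items()):
        print(f"{task:12s} {obj:13s} mean SRT = {mean(srts):.0f} ms (n = {len(srts)})")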


Journal of Vision | 2010

Using gaze measures to diagnose what guides search in complex displays

Anne P. Hillstrom; Tamaryn Menneer; Nick Donnelly; Mel Krokos

GIS maps are one kind of complex display in which people search for targets. Recent studies have shown that the choice of colour-scales when displaying these maps has important implications for people's strategies in searching these displays (Donnelly, Cave, Welland & Menneer, 2006). The current study follows up on this research. Observers searched for multiple targets in each display. Two targets were red and two were blue, and targets were not very salient. Observers searched until all targets were found. This often took several seconds and many fixations. The order in which observers found targets suggested that they were more reliant on search for particular colours under some colour-scales than under others. We present a number of oculomotor measures used to explore how search was guided in the displays: the degree to which fixations clustered around targets, the image characteristics of regions of the display that were fixated, and the goodness of fit of Itti & Koch saliency maps to fixation distributions, where the features used to compute saliency were varied. The goal was to see which measures would best pick up on differences in what guided search through complex displays.
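
The abstract does not specify which goodness-of-fit statistic was used for the saliency maps; one common choice is normalised scanpath saliency (NSS), shown here only as a generic sketch with randomly generated data:

    # Generic sketch of normalised scanpath saliency (NSS): z-score a saliency
    # map and average its values at the fixated pixels. Higher scores mean the
    # map predicts fixation locations better than chance. Data here are random.
    import numpy as np

    rng = np.random.default_rng(0)
    saliency_map = rng.random((600, 800))            # hypothetical saliency map
    fixations = [(120, 340), (300, 410), (95, 220)]  # hypothetical (row, col) fixations

    def nss(saliency, fixation_points):
        z = (saliency - saliency.mean()) / saliency.std()
        return float(np.mean([z[r, c] for r, c in fixation_points]))

    print(f"NSS = {nss(saliency_map, fixations):.3f}")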


Psychonomic Bulletin & Review | 2012

The effect of the first glimpse at a scene on eye movements during search

Anne P. Hillstrom; Helen Scholey; Simon P. Liversedge; Valerie Benson


Archive | 2008

Applying psychological science to the CCTV review process: A review of cognitive and ergonomic literature.

Anne P. Hillstrom; Lorraine Hope; Claire Nee

Collaboration


Dive into Anne P. Hillstrom's collaborations.

Top Co-Authors

Lorraine Hope, University of Portsmouth
Tamaryn Menneer, University of Southampton
Valerie Benson, University of Southampton
Nick Donnelly, University of Southampton
D.J. Taunton, University of Southampton
Helen Scholey, University of Portsmouth