Stephen R. H. Langton
University of Stirling
Publication
Featured research published by Stephen R. H. Langton.
Trends in Cognitive Sciences | 2000
Stephen R. H. Langton; Roger Watt; Vicki Bruce
The face communicates an impressive amount of visual information. We use it to identify its owner, to tell how they are feeling, and to help us understand what they are saying. Models of face processing have considered how we extract such meaning from the face but have ignored another important signal: eye gaze. In this article we begin by reviewing evidence from recent neurophysiological studies suggesting that the eyes constitute a special stimulus in at least two senses. First, the structure of the eyes is such that it provides us with a particularly powerful signal to the direction of another person's gaze, and second, we may have evolved neural mechanisms devoted to gaze processing. As a result, gaze direction is analysed rapidly and automatically, and is able to trigger reflexive shifts of an observer's visual attention. However, understanding where another individual is directing their attention involves more than simply analysing their gaze direction. We go on to describe research with adult participants, children, and non-human primates suggesting that other cues, such as head orientation and pointing gestures, make significant contributions to the computation of another's direction of attention.
Visual Cognition | 1999
Stephen R. H. Langton; Vicki Bruce
Four experiments investigate the hypothesis that cues to the direction of another's social attention produce a reflexive orienting of an observer's visual attention. Participants were asked to make a simple detection response to a target letter which could appear at one of four locations on a visual display. Before the presentation of the target, one of these possible locations was cued by the orientation of a digitized head stimulus, which appeared at fixation in the centre of the display. Uninformative and to-be-ignored cueing stimuli produced faster target detection latencies at cued relative to uncued locations, but only when the cues appeared 100 msec before the onset of the target (Experiments 1 and 2). The effect was uninfluenced by the introduction of a to-be-attended and relatively informative cue (Experiment 3), but was disrupted by the inversion of the head cues (Experiment 4). It is argued that these findings are consistent with the operation of a reflexive, stimulus-driven or exogenous orient...
Journal of Experimental Psychology: Applied | 1997
Gwyneth Doherty-Sneddon; Anne H. Anderson; Claire O'Malley; Stephen R. H. Langton; Simon Garrod; Vicki Bruce
This article examined communication and task performance in face-to-face, copresent, and video-mediated communication (VMC). Study 1 showed that when participants in a collaborative problem-solving task could both see and hear each other, the structure of their dialogues differed compared with dialogues obtained when they could only hear each other. The audio-only conversations had more words, and these extra utterances often provided and elicited the verbal feedback that visual signals can deliver when available. Study 2, however, showed that high-quality VMC did not appear to deliver the same benefits as face-to-face, copresent interaction. It appears that novelty, attenuation, and remoteness all may have contributed to the effects found; these are factors that should be considered by designers of remote video-conferencing systems.
Quarterly Journal of Experimental Psychology | 2000
Stephen R. H. Langton
Three experiments are reported that investigate the hypothesis that head orientation and gaze direction interact in the processing of another individual's direction of social attention. A Stroop-type interference paradigm was adopted, in which gaze and head cues were placed into conflict. In separate blocks of trials, participants were asked to make speeded keypress responses contingent on either the direction of gaze or the orientation of the head displayed in a digitized photograph of a male face. In Experiments 1 and 2, head and gaze cues showed symmetrical interference effects. Compared with congruent arrangements, incongruent head cues slowed responses to gaze cues, and incongruent gaze cues slowed responses to head cues, suggesting that head and gaze are mutually influential in the analysis of social attention direction. This mutuality was also evident in a cross-modal version of the task (Experiment 3), where participants responded to spoken directional words whilst ignoring the head/gaze images. It is argued that these interference effects arise from the independent influences of gaze and head orientation on decisions concerning social attention direction.
Cognition | 2008
Stephen R. H. Langton; Anna S. Law; A. Mike Burton; Stefan R. Schweinberger
We report three experiments that investigate whether faces are capable of capturing attention when in competition with other non-face objects. In Experiment 1a, participants took longer to decide that an array of objects contained a butterfly target when a face appeared as one of the distracting items than when the face did not appear in the array. This irrelevant face effect was eliminated when the items in the arrays were inverted in Experiment 1b, ruling out an explanation based on low-level, image-based properties of the faces. Experiment 2 replicated and extended the results of Experiment 1a. Irrelevant faces once again interfered with search for butterflies but, when the roles of faces and butterflies were reversed, irrelevant butterflies no longer interfered with search for faces. This suggests that the irrelevant face effect is unlikely to have been caused by the relative novelty of the faces, or to have arisen because butterflies and faces were the only animate items in the arrays. We conclude that these experiments offer evidence of a stimulus-driven capture of attention by faces.
Attention Perception & Psychophysics | 2004
Stephen R. H. Langton; Helen Honeyman; Emma Tessler
We report seven experiments that investigate the influence that head orientation exerts on the perception of eye-gaze direction. In each of these experiments, participants were asked to decide whether the eyes in a brief, masked presentation were looking directly at them or were averted. In each case, the eyes could be presented alone or in the context of congruent or incongruent stimuli. In Experiment 1A, the congruent and incongruent stimuli were provided by the orientation of face features and head outline. Discrimination of gaze direction was found to be better when face and gaze were congruent than in both of the other conditions, an effect that was not eliminated by inversion of the stimuli (Experiment 1B). In Experiment 2A, the internal face features were removed, but the outline of the head profile was found to produce an identical pattern of effects on gaze discrimination, effects that were again insensitive to inversion (Experiment 2B) and which persisted when lateral displacement of the eyes was controlled (Experiment 2C). Finally, in Experiment 3A, nose angle was found to influence participants' ability to discriminate direct gaze from averted gaze, but here the effect was eliminated by inversion of the stimuli (Experiment 3B). We concluded that an image-based mechanism is responsible for the influence of head profile on gaze perception, whereas the analysis of nose angle involves the configural processing of face features.
Visual Cognition | 2008
Markus Bindemann; A. Mike Burton; Stephen R. H. Langton
Previous research has demonstrated an interaction between eye gaze and selected facial emotional expressions, whereby the perception of anger and happiness is impaired when the eyes are horizontally averted within a face, but the perception of fear and sadness is enhanced under the same conditions. The current study reexamined these claims over six experiments. In the first three experiments, the categorization of happy and sad expressions (Experiments 1 and 2) and angry and fearful expressions (Experiment 3) was impaired when eye gaze was averted, in comparison with direct-gaze conditions. Experiment 4 replicated these findings in a rating task that combined all four expressions within the same design. Experiments 5 and 6 then showed that previous findings, that the perception of selected expressions is enhanced under averted gaze, are stimulus- and task-bound. The results are discussed in relation to research on facial expression processing and visual attention.
Journal of Vision | 2007
Markus Bindemann; A. Mike Burton; Stephen R. H. Langton; Stefan R. Schweinberger; Martin J. Doherty
Humans attend to faces. This study examines the extent to which attention biases to faces are under top-down control. In a visual cueing paradigm, observers responded faster to a target probe appearing in the location of a face cue than in that of a competing object cue (Experiments 1a and 2a). This effect could be reversed when faces were negatively predictive of the likely target location, making it beneficial to attend to the object cues (Experiments 1b and 2b). It was easier still to strategically shift attention to predictive face cues (Experiment 2c), indicating that the endogenous allocation of attention was augmented here by an additional effect. However, faces merely delayed the voluntary deployment of attention to object cues; they could not prevent it, even at short cue-target intervals. This finding suggests that attention biases for faces can be rapidly countered by an observer's endogenous control.
Perspectives on Psychological Science | 2014
V. K. Alogna; M. K. Attaya; Philip Aucoin; Štěpán Bahník; S. Birch; Angela R Birt; Brian H. Bornstein; Samantha Bouwmeester; Maria A. Brandimonte; Charity Brown; K. Buswell; Curt A. Carlson; Maria A. Carlson; S. Chu; A. Cislak; M. Colarusso; Melissa F. Colloff; Kimberly S. Dellapaolera; Jean-François Delvenne; A. Di Domenico; Aaron Drummond; Gerald Echterhoff; John E. Edlund; Casey Eggleston; B. Fairfield; G. Franco; Fiona Gabbert; B. W. Gamblin; Maryanne Garry; R. Gentry
Trying to remember something now typically improves your ability to remember it later. However, after watching a video of a simulated bank robbery, participants who verbally described the robber were 25% worse at identifying the robber in a lineup than were participants who instead listed U.S. states and capitals—this has been termed the “verbal overshadowing” effect (Schooler & Engstler-Schooler, 1990). More recent studies suggested that this effect might be substantially smaller than first reported. Given uncertainty about the effect size, the influence of this finding in the memory literature, and its practical importance for police procedures, we conducted two collections of preregistered direct replications (RRR1 and RRR2) that differed only in the order of the description task and a filler task. In RRR1, when the description task immediately followed the robbery, participants who provided a description were 4% less likely to select the robber than were those in the control condition. In RRR2, when the description was delayed by 20 min, they were 16% less likely to select the robber. These findings reveal a robust verbal overshadowing effect that is strongly influenced by the relative timing of the tasks. The discussion considers further implications of these replications for our understanding of verbal overshadowing.
Memory & Cognition | 2007
Claus-Christian Carbon; Tilo Strobach; Stephen R. H. Langton; Géza Harsányi; Helmut Leder; Gyula Kovács
A central problem of face identification is forming stable representations from entities that vary, both rigidly and non-rigidly, over time, under different viewing conditions, and with altering appearances. Three experiments investigated the underlying mechanism, which appears to be more flexible than has often been supposed. The experiments used highly familiar faces that were first inspected as configurally manipulated versions. When participants had to select the veridical version (known from TV/media/movies) out of a series of gradually altered versions, their selections were biased toward the previously inspected manipulated versions. This adaptation effect (the face identity aftereffect; Leopold, Rhodes, Müller, & Jeffery, 2005) was demonstrated even for a delay of 24 h between inspection and test phases. Moreover, the inspection of a specific image version of a famous person not only changed the veridicality decision for the same image, but also transferred to other images of that person. Thus, this adaptation effect is apparently not based on simple pictorial grounds, but appears to have a structural basis. Importantly, as indicated by Experiment 3, the adaptation effect was based not on a simple averaging mechanism or an episodic memory effect, but on identity-specific information.