
Publication


Featured research published by A. Mike Burton.


Vision Research | 2001

A principal component analysis of facial expressions.

Andrew J. Calder; A. Mike Burton; Paul Miller; Andrew W. Young; Shigeru Akamatsu

Pictures of facial expressions from the Ekman and Friesen set (Ekman, P., Friesen, W. V., (1976). Pictures of facial affect. Palo Alto, California: Consulting Psychologists Press) were submitted to a principal component analysis (PCA) of their pixel intensities. The output of the PCA was submitted to a series of linear discriminant analyses which revealed three principal findings: (1) a PCA-based system can support facial expression recognition, (2) continuous two-dimensional models of emotion (e.g. Russell, J. A. (1980). A circumplex model of affect. Journal of Personality and Social Psychology, 39, 1161-1178) are reflected in the statistical structure of the Ekman and Friesen facial expressions, and (3) components for coding facial expression information are largely different to components for facial identity information. The implications for models of face processing are discussed.
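The PCA-then-discriminant pipeline the abstract describes can be sketched on synthetic data. Everything here is an illustrative assumption, not the paper's materials: the tiny "images", the three expression classes, the component count, and the nearest-class-mean classifier (a simplification of the full linear discriminant analyses the authors ran).

```python
import numpy as np

rng = np.random.default_rng(0)
n_per_class, h, w = 20, 16, 16          # hypothetical tiny face images
labels = np.repeat(np.arange(3), n_per_class)   # 3 assumed expression classes

# Synthetic pixel intensities: each class gets a distinct mean image.
images = rng.normal(size=(len(labels), h * w)) + labels[:, None] * 2.0

# Step 1: PCA of pixel intensities via SVD of the mean-centred data.
centred = images - images.mean(axis=0)
_, _, vt = np.linalg.svd(centred, full_matrices=False)
scores = centred @ vt[:10].T            # scores on the first 10 components

# Step 2: a minimal discriminant on the PCA output: classify each image
# by its nearest class mean in component space.
means = np.stack([scores[labels == c].mean(axis=0) for c in range(3)])
pred = np.argmin(((scores[:, None, :] - means) ** 2).sum(-1), axis=1)
accuracy = (pred == labels).mean()
print(f"training accuracy: {accuracy:.2f}")
```

The point of the sketch is only the two-stage structure — compress pixels with PCA, then discriminate in component space — which is the finding (1) the abstract reports.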


Perception | 1993

Sex Discrimination: How Do We Tell the Difference between Male and Female Faces?

Vicki Bruce; A. Mike Burton; Elias Hanna; Pat Healey; Oli Mason; Anne M. Coombes; Rick Fright; Alf D. Linney

People are remarkably accurate (approaching ceiling) at deciding whether faces are male or female, even when cues from hairstyle, makeup, and facial hair are minimised. Experiments designed to explore the perceptual basis of our ability to categorise the sex of faces are reported. Subjects were considerably less accurate when asked to judge the sex of three-dimensional (3-D) representations of faces obtained by laser-scanning, compared with a condition where photographs were taken with hair concealed and eyes closed. This suggests that cues from features such as eyebrows, and skin texture, play an important role in decision-making. Performance with the laser-scanned heads remained quite high with 3/4-view faces, where the 3-D shape of the face should be easiest to see, suggesting that the 3-D structure of the face is a further source of information contributing to the classification of its sex. Performance at judging the sex from photographs (with hair concealed) was disrupted if the photographs were inverted, which implies that the superficial cues contributing to the decision are not processed in a purely ‘local’ way. Performance was also disrupted if the faces were shown in photographic negatives, which is consistent with the use of 3-D information, since negation probably operates by disrupting the computation of shape from shading. In 3-D, the ‘average’ male face differs from the ‘average’ female face by having a more protuberant nose/brow and more prominent chin/jaw. The effects of manipulating the shapes of the noses and chins of the laser-scanned heads were assessed and significant effects of such manipulations on the apparent masculinity or femininity of the heads were revealed. It appears that our ability to make this most basic of facial categorisations may be multiply determined by a combination of 2-D, 3-D, and textural cues and their interrelationships.


Perception | 1993

What's the Difference between Men and Women? Evidence from Facial Measurement

A. Mike Burton; Vicki Bruce; Neal Dench

Human subjects are able to identify the sex of faces with very high accuracy. Using photographs of adults in which hair was concealed by a swimming cap, subjects performed with 96% accuracy. Previous work has identified a number of dimensions on which the faces of men and women differ. An attempt to combine these dimensions into a single function to classify male and female faces reliably is described. Photographs were taken of 91 male and 88 female faces in full face and profile. These were measured in several ways: (i) simple distances between key points in the pictures; (ii) ratios and angles formed between key points in the pictures; (iii) three-dimensional (3-D) distances derived by combination of full-face and profile photographs. Discriminant function analysis showed that the best discriminators were derived from simple distance measurements in the full face (85% accuracy with 12 variables) and 3-D distances (85% accuracy with 6 variables). Combining measures taken from the picture plane with those derived in 3-D produced a discriminator approaching human performance (94% accuracy with 16 variables). Performance of the discriminant function was compared with that of human perceivers and found to be correlated, but far from perfectly. The difficulty of deriving a reliable function to distinguish between the sexes is discussed with reference to the development of automatic face-processing programs in machine vision. It is argued that such systems will need to incorporate an understanding of the stimuli if they are to be effective.
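The discriminant function analysis reported above can be illustrated with Fisher's linear discriminant on synthetic "measurement" vectors. The dimensionality, sample sizes, and the assumed mean difference between male and female measurements are hypothetical stand-ins, not the paper's 2-D and 3-D facial measurements.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical measurement vectors (stand-ins for distances, ratios,
# and 3-D distances between key facial points).
n, d = 90, 16
male = rng.normal(loc=0.0, size=(n, d))
female = rng.normal(loc=0.8, size=(n, d))   # assumed mean difference

X = np.vstack([male, female])
y = np.array([0] * n + [1] * n)

# Fisher's linear discriminant: w = S_w^{-1} (mu_female - mu_male),
# with the decision threshold at the midpoint of the projected means.
mu0, mu1 = male.mean(axis=0), female.mean(axis=0)
s_w = np.cov(male, rowvar=False) + np.cov(female, rowvar=False)
w = np.linalg.solve(s_w, mu1 - mu0)
threshold = w @ (mu0 + mu1) / 2

pred = (X @ w > threshold).astype(int)
accuracy = (pred == y).mean()
print(f"classification accuracy: {accuracy:.2f}")
```

As in the paper, a single linear function combines many weak measurement cues; accuracy then depends on how many informative variables enter the function.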


Cognitive Science | 1999

From Pixels to People: A Model of Familiar Face Recognition

A. Mike Burton; Vicki Bruce; Peter J. B. Hancock

Research in face recognition has largely been divided between those projects concerned with front-end image processing and those projects concerned with memory for familiar people. These perceptual and cognitive programmes of research have proceeded in parallel, with only limited mutual influence. In this paper we present a model of human face recognition which combines both a perceptual and a cognitive component. The perceptual front-end is based on principal components analysis of face images, and the cognitive back-end is based on a simple interactive activation and competition architecture. We demonstrate that this model has a much wider predictive range than either perceptual or cognitive models alone, and we show that this type of combination is necessary in order to analyse some important effects in human face recognition. In sum, the model takes varying images of “known” faces and delivers information about these people.


Memory & Cognition | 2006

Unfamiliar faces are not faces: Evidence from a matching task

Ahmed M. Megreya; A. Mike Burton

It is difficult to match two images of the same unfamiliar face, even under good conditions. Here, we show that there are large individual differences in unfamiliar face matching. Initially, we tried to predict these using tests of visual short-term memory, cognitive style, and perceptual speed. Moderate correlations were produced by various components of these tests. In three other experiments, we found very strong correlations between face matching and inverted face matching on the same test. Finally, we examined potential associations between familiar and unfamiliar face processing. Strong correlations were found between familiar and unfamiliar face processing, but only when the familiar faces were inverted. We conclude that unfamiliar faces are processed for identity in a qualitatively different way than are familiar faces.


Behavior Research Methods | 2010

The Glasgow Face Matching Test

A. Mike Burton; David White; Allan McNeill

We describe a new test for unfamiliar face matching, the Glasgow Face Matching Test (GFMT). Viewers are shown pairs of faces, photographed in full-face view but with different cameras, and are asked to make same/different judgments. The full version of the test comprises 168 face pairs, and we also describe a shortened version with 40 pairs. We provide normative data for these tests derived from large subject samples. We also describe associations between the GFMT and other tests of matching and memory. The new test correlates moderately with face memory but more strongly with object matching, a result that is consistent with previous research highlighting a link between object and face matching, specific to unfamiliar faces. The test is available free for scientific use.


Memory & Cognition | 1996

Face processing: Human perception and principal components analysis

Peter J. B. Hancock; A. Mike Burton; Vicki Bruce

Principal components analysis (PCA) of face images is here related to subjects’ performance on the same images. In two experiments subjects were shown a set of faces and asked to rate them for distinctiveness. They were subsequently shown a superset of faces and asked to identify those that had appeared originally. Replicating previous work, we found that hits and false positives (FPs) did not correlate: Those faces easy to identify as being “seen” were unrelated to those faces easy to reject as being “unseen.” PCA was performed on three data sets: (1) face images with eye position standardized, (2) face images morphed to a standard template to remove shape information, and (3) the shape information from faces only. Analyses based on PCA of shape-free faces gave high predictions of FPs, whereas shape information itself contributed only to hits. Furthermore, whereas FPs were generally predictable from components early in the PCA, hits appeared to be accounted for by later components. We conclude that shape and “texture” (the image-based information remaining after morphing) may be used separately by the human face processing system, and that PCA of images offers a useful tool for understanding this system.


Cognition | 2011

Variability in photos of the same face

Rob Jenkins; David White; Xandra Van Montfort; A. Mike Burton

Psychological studies of face recognition have typically ignored within-person variation in appearance, instead emphasising differences between individuals. Studies typically assume that a photograph adequately captures a person's appearance, and for that reason most studies use just one, or a small number of photos per person. Here we show that photographs are not consistent indicators of facial appearance because they are blind to within-person variability. Crucially, this within-person variability is often very large compared to the differences between people. To investigate variability in photos of the same face, we collected images from the internet to sample a realistic range for each individual. In Experiments 1 and 2, unfamiliar viewers perceived images of the same person as being different individuals, while familiar viewers perfectly identified the same photos. In Experiment 3, multiple photographs of any individual formed a continuum of good to bad likeness, which was highly sensitive to familiarity. Finally, in Experiment 4, we found that within-person variability exceeded between-person variability in attractiveness. These observations are critical to our understanding of face processing, because they suggest that a key component of face processing has been ignored. As well as its theoretical significance, this scale of variability has important practical implications. For example, our findings suggest that face photographs are unsuitable as proof of identity.


Cognitive Psychology | 2005

Robust representations for face recognition: the power of averages.

A. Mike Burton; Rob Jenkins; Peter J. B. Hancock; David White

We are able to recognise familiar faces easily across large variations in image quality, though our ability to match unfamiliar faces is strikingly poor. Here we ask how the representation of a face changes as we become familiar with it. We use a simple image-averaging technique to derive abstract representations of known faces. Using Principal Components Analysis, we show that computational systems based on these averages consistently outperform systems based on collections of instances. Furthermore, the quality of the average improves as more images are used to derive it. These simulations are carried out with famous faces, over which we had no control of superficial image characteristics. We then present data from three experiments demonstrating that image averaging can also improve recognition by human observers. Finally, we describe how PCA on image averages appears to preserve identity-specific face information, while eliminating non-diagnostic pictorial information. We therefore suggest that this is a good candidate for a robust face representation.
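The image-averaging idea can be sketched with synthetic arrays. The additive model here — a fixed "identity" image plus per-photo noise standing in for lighting, pose, and expression — and all sizes and noise levels are illustrative assumptions, not the authors' famous-face stimuli.

```python
import numpy as np

rng = np.random.default_rng(2)
h, w = 16, 16

# Hypothetical model: every photo of a person is their stable "identity"
# image plus photo-specific variation (lighting, pose, expression).
identity = rng.normal(size=(h, w))

def photo():
    return identity + rng.normal(scale=1.0, size=(h, w))

probe = photo()   # a new, unseen photo of the same person

# Match the probe against an average of n previous photos: as n grows,
# photo-specific noise cancels and the average approaches the identity.
errors = {}
for n in (1, 2, 8, 32):
    average = np.mean([photo() for _ in range(n)], axis=0)
    errors[n] = np.abs(probe - average).mean()
    print(f"n={n:2d}: mean absolute error vs probe = {errors[n]:.3f}")
```

The error against the 32-photo average is reliably smaller than against a single instance, which is the paper's core claim: averages keep identity-specific information while washing out non-diagnostic pictorial variation.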


Neuropsychologia | 2002

Human brain potential correlates of repetition priming in face and name recognition

Stefan R. Schweinberger; Esther C. Pickering; A. Mike Burton; Jürgen M. Kaufmann

We investigated repetition priming in the recognition of famous people by recording event-related brain potentials (ERPs) and reaction times (RTs). Participants performed speeded two-choice responses depending on whether or not a stimulus showed a famous person. In Experiment 1, a facilitation was found in RTs to famous (but not to unfamiliar) faces when primed by the same face shown in an earlier priming phase of the experiment. In ERPs, an influence of repetition priming was observed neither for the N170 nor for a temporal N250 component which in previous studies had been shown to be sensitive to immediate face repetitions. ERPs to primed unfamiliar faces were more negative over right occipitotemporal areas than those to unprimed faces, but this effect was specific for repetitions of the same image, consistent with recent findings. In contrast, ERPs to primed familiar faces were more positive than those to unprimed faces at parietal sites from 500-600 ms after face onset, and these priming effects were comparable regardless of whether the same or a different image of the celebrity had served as prime. In Experiment 2, similar results were found for name recognition: a facilitation in RTs to primed familiar but not unfamiliar names, and a parietal positivity to primed names around 500-600 ms. ERP repetition effects showed comparable topographies for faces and names, consistent with the idea of a common underlying source. With reference to current models of face recognition, we suggest that these ERP repetition effects for familiar stimuli reflect a change in post-perceptual representations for people, rather than a neural correlate of recognition at a perceptual level.

Collaboration


Dive into A. Mike Burton's collaborations.

Top Co-Authors

David White
University of New South Wales

Peter J. B. Hancock
University of Central Lancashire

David Robertson
University of Strathclyde