Nichola Burton
University of Western Australia
Publications
Featured research published by Nichola Burton.
Journal of Vision | 2015
Nichola Burton; Linda Jeffery; Andrew J. Calder; Gillian Rhodes
Facial expression is theorized to be visually represented in a multidimensional expression space, relative to a norm. This norm-based coding is typically argued to be implemented by a two-pool opponent coding system. However, the evidence supporting the opponent coding of expression cannot rule out the presence of a third channel tuned to the center of each coded dimension. Here we used a paradigm not previously applied to facial expression to determine whether a central-channel model is necessary to explain expression coding. Participants identified expressions taken from a fear/antifear trajectory, first at baseline and then in two adaptation conditions. In one condition, participants adapted to the expression at the center of the trajectory. In the other condition, participants adapted to alternating images from the two ends of the trajectory. The range of expressions that participants perceived as lying at the center of the trajectory narrowed in both conditions, a pattern that is not predicted by the central-channel model but can be explained by the opponent-coding model. Adaptation to the center of the trajectory also increased identification of both fear and antifear, which may indicate a functional benefit for adaptive coding of facial expression.
Cognition | 2015
Gillian Rhodes; Stephen Pond; Nichola Burton; Nadine Kloth; Linda Jeffery; Jason Bell; Louise Ewing; Andrew J. Calder; Romina Palermo
Traditional models of face perception emphasize distinct routes for processing face identity and expression. These models have been highly influential in guiding neural and behavioural research on the mechanisms of face perception. However, it is becoming clear that specialised brain areas for coding identity and expression may respond to both attributes and that identity and expression perception can interact. Here we use perceptual aftereffects to demonstrate the existence of dimensions in perceptual face space that code both identity and expression, further challenging the traditional view. Specifically, we find a significant positive association between face identity aftereffects and expression aftereffects, which dissociates from other face (gaze) and non-face (tilt) aftereffects. Importantly, individual variation in the adaptive calibration of these common dimensions significantly predicts ability to recognize both identity and expression. These results highlight the role of common dimensions in our ability to recognize identity and expression, and show why the high-level visual processing of these attributes is not entirely distinct.
PLOS ONE | 2014
Frances Caulfield; Louise Ewing; Nichola Burton; Eleni Avard; Gillian Rhodes
Appearance-based trustworthiness inferences may reflect the misinterpretation of emotional expression cues. Children and adults typically perceive faces that look happy to be relatively trustworthy and those that look angry to be relatively untrustworthy. Given reports of atypical expression perception in children with Autism Spectrum Disorder (ASD), the current study aimed to determine whether the modulation of trustworthiness judgments by emotional expression cues in children with ASD is also atypical. Cognitively-able children with and without ASD, aged 6–12 years, rated the trustworthiness of faces showing happy, angry and neutral expressions. Trust judgments in children with ASD were significantly modulated by overt happy and angry expressions, like those of typically-developing children. Furthermore, subtle emotion cues in neutral faces also influenced trust ratings of the children in both groups. These findings support a powerful influence of emotion cues on perceived trustworthiness, which even extends to children with social cognitive impairments.
Journal of Experimental Psychology: Human Perception and Performance | 2013
Nichola Burton; Linda Jeffery; Andy Skinner; Christopher P. Benton; Gillian Rhodes
Children are less skilled than adults at making judgments about facial expression. This could be because they have not yet developed adult-like mechanisms for visually representing faces. Adults are thought to represent faces in a multidimensional face-space, and have been shown to code the expression of a face relative to the norm or average face in face-space. Norm-based coding is economical and adaptive, and may be what makes adults more sensitive to facial expression than children. This study investigated the coding system that children use to represent facial expression. An adaptation aftereffect paradigm was used to test 24 adults and 18 children (9 years 2 months to 9 years 11 months old). Participants adapted to weak and strong antiexpressions. They then judged the expression of an average test face. Adaptation created aftereffects that made the test face look like the expression opposite that of the adaptor. Consistent with the predictions of norm-based but not exemplar-based coding, aftereffects were larger for strong than weak adaptors for both age groups. Results indicate that, like adults, children's coding of facial expressions is norm-based.
Journal of Vision | 2016
Nichola Burton; Linda Jeffery; Jack Bonner; Gillian Rhodes
Adaptation to facial expressions produces aftereffects that bias perception of subsequent expressions away from the adaptor. Studying the temporal dynamics of an aftereffect can help us to understand the neural processes that underlie perception, and how they change with experience. Little is known about the temporal dynamics of the expression aftereffect. We conducted two experiments to measure the timecourse of this aftereffect. In Experiment 1 we examined how the size of the aftereffect varies with changes in the duration of the adaptor and test stimuli. We found that the expression aftereffect follows the classic timecourse pattern of logarithmic build-up and exponential decay that has been demonstrated for many lower level aftereffects, as well as for facial identity and figural face aftereffects. This classic timecourse pattern suggests that the adaptive calibration mechanisms of facial expression are similar to those of lower level visual stimuli, and is consistent with a perceptual locus for the adaptation aftereffect. We also found that aftereffects could be generated by as little as 1 s of adaptation, and in some conditions lasted for as long as 3200 ms. We extended this last finding in Experiment 2, exploring the longevity of the expression aftereffect by adding a stimulus-free gap of varying duration between adaptation and test. We found that significant expression aftereffects were still present 32 s after adaptation. The persistence of the expression aftereffect suggests that it may have a considerable impact on day-to-day expression perception.
Journal of Experimental Psychology: Human Perception and Performance | 2017
Gillian Rhodes; Stephen Pond; Linda Jeffery; Christopher P. Benton; Andy Skinner; Nichola Burton
We used aftereffects to investigate the coding mechanisms underlying perception of facial expression. Recent evidence that some dimensions are common to the coding of both expression and identity suggests that the same type of coding system could be used for both attributes. Identity is adaptively opponent coded by pairs of neural populations tuned to opposite extremes of relevant dimensions. Therefore, the authors hypothesized that expression would also be opponent coded. An important line of support for opponent coding is that aftereffects increase with adaptor extremity (distance from an average test face) over the full natural range of possible faces. Previous studies have reported that expression aftereffects increase with adaptor extremity. Critically, however, they did not establish the extent of the natural range and so have not ruled out a decrease within that range that could indicate narrowband, multichannel coding. Here the authors show that expression aftereffects, like identity aftereffects, increase linearly over the full natural range of possible faces and remain high even for impossibly distorted adaptors. These results suggest that facial expression, like face identity, is opponent coded.
Journal of Experimental Psychology: Human Perception and Performance | 2017
Jemma R. Collova; Nadine Kloth; Kate Crookes; Nichola Burton; Cynthia Y. H. Chan; Janet Hui-wen Hsiao; Gillian Rhodes
The well-known other-race effect in face recognition has been widely studied, both for its theoretical insights into the nature of face expertise and because of its social and forensic importance. Here we demonstrate an other-race effect for the perception of a simple visual signal provided by the eyes, namely gaze direction. In Study 1, Caucasian and Asian participants living in Australia both showed greater perceptual sensitivity to detect direct gaze in own-race than other-race faces. In Study 2, Asian (Chinese) participants living in Australia and Asian (Chinese) participants living in Hong Kong both showed this other-race effect, but Caucasian participants did not. Despite this inconsistency, meta-analysis revealed a significant other-race effect when results for all 5 participant groups from corresponding conditions in the 2 studies were combined. These results demonstrate a new other-race effect for the perception of the simple, but socially potent, cue of direct gaze. When identical morphed-race eyes were inserted into the faces, removing race-specific eye cues, no other-race effect was found (with 1 exception). Thus, the balance of evidence implicated perceptual expertise, rather than social motivation, in the other-race effect for detecting direct gaze.
Journal of Experimental Psychology: Human Perception and Performance | 2017
Romina Palermo; Linda Jeffery; Jessica Lewandowsky; Chiara Fiorentini; Jessica Irons; Amy Dawel; Nichola Burton; Elinor McKone; Gillian Rhodes
There are large, reliable individual differences in the recognition of facial expressions of emotion across the general population. The sources of this variation are not yet known. We investigated the contribution of a key face perception mechanism, adaptive coding, which calibrates perception to optimize discrimination within the current perceptual “diet.” We expected that a facial expression system that readily recalibrates might boost sensitivity to variation among facial expressions, thereby enhancing recognition ability. We measured adaptive coding strength with an established facial expression aftereffect task and measured facial expression recognition ability with 3 tasks optimized for the assessment of individual differences. As expected, expression recognition ability was positively associated with the strength of facial expression aftereffects. We also asked whether individual variation in affective factors might contribute to expression recognition ability, given that clinical levels of such traits have previously been linked to ability. Expression recognition ability was negatively associated with self-reported anxiety but not with depression, mood, or degree of autism-like or empathetic traits. Finally, we showed that the perceptual factor of adaptive coding contributes to variation in expression recognition ability independently of affective factors.
British Journal of Psychology | 2018
Bianca Thorup; Kate Crookes; Paul Chang; Nichola Burton; Stephen Pond; Tze Kwan Li; Janet Hui-wen Hsiao; Gillian Rhodes
People are better at recognizing own-race than other-race faces. This other-race effect has been argued to be the result of perceptual expertise, whereby face-specific perceptual mechanisms are tuned through experience. We designed new tasks to determine whether other-race effects extend to categorizing faces by national origin. We began by selecting sets of face stimuli for these tasks that are typical in appearance for each of six nations (three Caucasian, three Asian) according to people from those nations (Study 1). Caucasian and Asian participants then categorized these faces by national origin (Study 2). Own-race faces were categorized more accurately than other-race faces. In contrast, Asian American participants, with more extensive other-race experience than the first Asian group, categorized other-race faces better than own-race faces, demonstrating a reversal of the other-race effect. Therefore, other-race effects extend to the ability to categorize faces by national origin, but only if participants have greater perceptual experience with own-race than other-race faces. Study 3 ruled out non-perceptual accounts by showing that Caucasian and Asian faces were sorted more accurately by own-race than other-race participants, even in a sorting task without any explicit labelling required. Together, our results demonstrate a new other-race effect in sensitivity to national origin of faces that is linked to perceptual expertise.
Journal of Experimental Psychology: Human Perception and Performance | 2017
Linda Jeffery; Nichola Burton; Stephen Pond; Colin W. G. Clifford; Gillian Rhodes
Face identity can be represented in a multidimensional space centered on the average. It has been argued that the average acts as a perceptual norm, with the norm coded implicitly by balanced activation in pairs of channels that respond to opposite extremes of face dimensions (two-channel model). In Experiment 1 we used face identity aftereffects to distinguish this model from a narrow-band multichannel model with no norm. We show that as adaptors become more extreme, aftereffects initially increase sharply and then plateau. Crucially, there is no decrease, ruling out narrow-band multichannel coding but remaining consistent with a two-channel norm-based model. However, these results leave open the possibility that there may be a third channel, tuned explicitly to the norm (three-channel model). In Experiment 2 we show that alternating adaptation widens the range identified as the average whereas adaptation to the average narrows the range, consistent with the three-channel model. Explicit modeling confirmed the three-channel model as the best fit for the combined data from both experiments. However, a two-channel model with decision criteria allowed to vary between adapting conditions also provided a very good fit. These results support opponent, norm-based coding of face identity with additional explicit coding of the norm.