Publication


Featured research published by Anthony P. Atkinson.


Perception | 2004

Emotion perception from dynamic and static body expressions in point-light and full-light displays.

Anthony P. Atkinson; Winand H. Dittrich; Andrew J. Gemmell; Andrew W. Young

Research on emotion recognition has been dominated by studies of photographs of facial expressions. A full understanding of emotion perception and its neural substrate will require investigations that employ dynamic displays and means of expression other than the face. Our aims were: (i) to develop a set of dynamic and static whole-body expressions of basic emotions for systematic investigations of clinical populations, and for use in functional-imaging studies; (ii) to assess forced-choice emotion-classification performance with these stimuli relative to the results of previous studies; and (iii) to test the hypotheses that more exaggerated whole-body movements would produce (a) more accurate emotion classification and (b) higher ratings of emotional intensity. Ten actors portrayed 5 emotions (anger, disgust, fear, happiness, and sadness) at 3 levels of exaggeration, with their faces covered. Two identical sets of 150 emotion portrayals (full-light and point-light) were created from the same digital footage, along with corresponding static images of the ‘peak’ of each emotion portrayal. Recognition tasks confirmed previous findings that basic emotions are readily identifiable from body movements, even when static form information is minimised by use of point-light displays, and that static displays of these expressions, both full-light and point-light, can convey identifiable emotions, though rather less efficiently than dynamic displays. Recognition success differed for individual emotions, corroborating earlier results about the importance of distinguishing differences in movement characteristics for different emotional expressions. The patterns of misclassifications were in keeping with earlier findings on emotional clustering. Exaggeration of body movement (a) enhanced recognition accuracy, especially for the dynamic point-light displays, but notably not for sadness, and (b) produced higher emotional-intensity ratings, regardless of lighting condition, for movies but to a lesser extent for stills, indicating that intensity judgments of body gestures rely more on movement (or form-from-movement) than on static form information.


Cognition | 1993

What's lost in inverted faces?

Gillian Rhodes; Susan Brake; Anthony P. Atkinson

Disproportionate inversion decrements for recognizing faces and other homogeneous stimuli are often interpreted as evidence that experts use relational features to recognize stimuli that share a configuration. However, it has never directly been shown that inversion disrupts the coding of relational features more than isolated features. Here we report three studies that compare inversion decrements for detecting changes that span the isolated-relational features continuum. Relatively large inversion decrements occurred for relational features (Thatcher illusion changes, internal feature spacing), with smaller decrements for isolated features (presence/absence of facial hair or glasses). The one discrepancy was a relatively large inversion decrement for detecting changes to the eyes and mouth, which we had classified as an isolated feature change. However, this decrement disappeared when the features were presented out of the face context (Experiments 2 and 3), suggesting that it occurs because subjects spontaneously code relations between the features and the rest of the face. Although the results support the interpretation of disproportionate inversion effects as evidence of relational coding, the difficulty of classifying changes as isolated or relational highlights an undesirable ambiguity in the isolated-relational feature distinction. We therefore consider alternative construals of the configural coding notion.


Emotion | 2002

The eyebrow frown: a salient social signal.

Jason Tipples; Anthony P. Atkinson; Andrew W. Young

Seven experiments investigated the finding that threatening schematic faces are detected more quickly than nonthreatening faces. Threatening faces with v-shaped eyebrows (angry and scheming expressions) were detected more quickly than nonthreatening faces with inverted v-shaped eyebrows (happy and sad expressions). In contrast to the hypothesis that these effects were due to perceptual features unrelated to the face, no advantage was found for v-shaped eyebrows presented in a nonfacelike object. Furthermore, the addition of internal facial features (the eyes, or the nose and mouth) was necessary to produce the detection advantage for faces with v-shaped eyebrows. Overall, the results are interpreted as showing that the v-shaped eyebrow configuration affords easy detection, but only when other internal facial features are present.


Philosophical Transactions of the Royal Society B | 2011

The neuropsychology of face perception: beyond simple dissociations and functional selectivity

Anthony P. Atkinson; Ralph Adolphs

Face processing relies on a distributed, patchy network of cortical regions in the temporal and frontal lobes that respond disproportionately to face stimuli, other cortical regions that are not even primarily visual (such as somatosensory cortex), and subcortical structures such as the amygdala. Higher-level face perception abilities, such as judging identity, emotion and trustworthiness, appear to rely on an intact face-processing network that includes the occipital face area (OFA), whereas lower-level face categorization abilities, such as discriminating faces from objects, can be achieved without OFA, perhaps via the direct connections to the fusiform face area (FFA) from several extrastriate cortical areas. Some lesion, transcranial magnetic stimulation (TMS) and functional magnetic resonance imaging (fMRI) findings argue against a strict feed-forward hierarchical model of face perception, in which the OFA is the principal and common source of input for other visual and non-visual cortical regions involved in face perception, including the FFA, face-selective superior temporal sulcus and somatosensory cortex. Instead, these findings point to a more interactive model in which higher-level face perception abilities depend on the interplay between several functionally and anatomically distinct neural regions. Furthermore, the nature of these interactions may depend on the particular demands of the task. We review the lesion and TMS literature on this topic and highlight the dynamic and distributed nature of face processing.


Cognition | 2007

Evidence for Distinct Contributions of Form and Motion Information to the Recognition of Emotions from Body Gestures.

Anthony P. Atkinson; Mary Tunstall; Winand H. Dittrich

The importance of kinematics in emotion perception from body movement has been widely demonstrated. Evidence also suggests that the perception of biological motion relies to some extent on information about spatial and spatiotemporal form, yet the contribution of such form-related cues to emotion perception remains unclear. This study reports, for the first time, the relative effects on emotion recognition of inverting and motion-reversing patch-light compared to fully illuminated displays of whole-body emotion gestures. Inverting the gesture movies or playing them backwards significantly impaired emotion classification accuracy, but did so more for patch-light displays than for identical but fully illuminated movement sequences. This result suggests that inversion impairs the processing of form information related to the configuration of body parts, and reversal impairs the sequencing of form changes, more than these manipulations impair the processing of kinematic cues. This effect was strongest for inversion, suggesting an important role for configural information in emotion recognition. Nevertheless, even in combination these stimulus manipulations did not abolish above chance recognition of any of the emotions, suggesting that kinematics help distinguish emotions expressed by body gestures. Disproportionate impairments in recognition accuracy were observed for fear and disgust under inversion, and for fear under motion reversal, suggesting a greater role for form-related cues in the perception of these emotions.


Neuropsychologia | 2009

Impaired recognition of emotions from body movements is associated with elevated motion coherence thresholds in autism spectrum disorders.

Anthony P. Atkinson

Recent research has confirmed that individuals with autism spectrum disorder (ASD) have difficulties in recognizing emotions from body movements. Difficulties in perceiving coherent motion are also common in ASD. Yet it is unknown whether these two impairments are related. Thirteen adults with ASD and 16 age- and IQ-matched typically developing (TD) adults classified basic emotions from point-light and full-light displays of body movements and discriminated the direction of coherent motion in random-dot kinematograms. The ASD group was reliably less accurate in classifying emotions regardless of stimulus display type, and in perceiving coherent motion. As predicted, ASD individuals with higher motion coherence thresholds were less accurate in classifying emotions from body movements, especially in the point-light displays; this relationship was not evident for the TD group. The results are discussed in relation to recent models of biological motion processing and known abnormalities in the neural substrates of motion and social perception in ASD.


Social Cognitive and Affective Neuroscience | 2007

Emotional modulation of body-selective visual areas

Marius V. Peelen; Anthony P. Atkinson; Frédéric Andersson; Patrik Vuilleumier

Emotionally expressive faces have been shown to modulate activation in visual cortex, including face-selective regions in ventral temporal lobe. Here, we tested whether emotionally expressive bodies similarly modulate activation in body-selective regions. We show that dynamic displays of bodies with various emotional expressions, versus neutral bodies, produce significant activation in two distinct body-selective visual areas, the extrastriate body area and the fusiform body area. Multi-voxel pattern analysis showed that the strength of this emotional modulation was related, on a voxel-by-voxel basis, to the degree of body selectivity, while there was no relation with the degree of selectivity for faces. Across subjects, amygdala responses to emotional bodies positively correlated with the modulation of body-selective areas. Together, these results suggest that emotional cues from body movements produce topographically selective influences on category-specific populations of neurons in visual cortex, and these increases may implicate discrete modulatory projections from the amygdala.


Trends in Cognitive Sciences | 2000

Consciousness: mapping the theoretical landscape

Anthony P. Atkinson; Michael S. C. Thomas; Axel Cleeremans

What makes us conscious? Many theories that attempt to answer this question have appeared recently in the context of widespread interest in consciousness in the cognitive neurosciences. Most of these proposals are formulated in terms of the information processing conducted by the brain. In this overview, we survey and contrast these models. We first delineate several notions of consciousness, addressing what it is that the various models are attempting to explain. Next, we describe a conceptual landscape that addresses how the theories attempt to explain consciousness. We then situate each of several representative models in this landscape and indicate which aspect of consciousness they try to explain. We conclude that the search for the neural correlates of consciousness would be usefully complemented by a search for the computational correlates of consciousness.


Attention Perception & Psychophysics | 2005

Asymmetric interference between sex and emotion in face perception

Anthony P. Atkinson; Jason Tipples; D. Michael Burt; Andrew W. Young

Previous research with speeded-response interference tasks modeled on the Garner paradigm has demonstrated that task-irrelevant variations in either emotional expression or facial speech do not interfere with identity judgments, but irrelevant variations in identity do interfere with expression and facial speech judgments. Sex, like identity, is a relatively invariant aspect of faces. Drawing on a recent model of face processing according to which invariant and changeable aspects of faces are represented in separate neurological systems, we predicted asymmetric interference between sex and emotion classification. The results of Experiment 1, in which the Garner paradigm was employed, confirmed this prediction: Emotion classifications were influenced by the sex of the faces, but sex classifications remained relatively unaffected by facial expression. A second experiment, in which the difficulty of the tasks was equated, corroborated these findings, indicating that differences in processing speed cannot account for the asymmetric relationship between facial emotion and sex processing. A third experiment revealed the same pattern of asymmetric interference through the use of a variant of the Simon paradigm. To the extent that Garner interference and Simon interference indicate interactions at perceptual and response-selection stages of processing, respectively, a challenge for face processing models is to show how the same asymmetric pattern of interference could occur at these different stages. The implications of these findings for the functional independence of the different components of face processing are discussed.


Emotion Review | 2009

Neuroscientific Evidence for Simulation and Shared Substrates in Emotion Recognition: Beyond Faces

Andrea S. Heberlein; Anthony P. Atkinson

According to simulation or shared-substrates models of emotion recognition, our ability to recognize the emotions expressed by other individuals relies, at least in part, on processes that internally simulate the same emotional state in ourselves. The term “emotional expressions” is nearly synonymous, in many people's minds, with facial expressions of emotion. However, vocal prosody and whole-body cues also convey emotional information. What is the relationship between these various channels of emotional communication? We first briefly review simulation models of emotion recognition, and then discuss neuroscientific evidence related to these models, including studies using facial expressions, whole-body cues, and vocal prosody. We conclude by discussing these data in the context of simulation and shared-substrates models of emotion recognition.

Collaboration


Dive into Anthony P. Atkinson's collaborations.

Top Co-Authors

Winand H. Dittrich
University of Hertfordshire

Ralph Adolphs
California Institute of Technology