Publications


Featured research published by Richard J. Harris.


Neuropsychologia | 1995

Event-related potentials in cross-modal divided attention in autism

Kristina T. Ciesielski; Jeanne E. Knight; Ronald J. Prince; Richard J. Harris; Stanley D. Handmaker

The behavior and event-related potentials (ERPs) of high-functioning subjects with autism (Autism group) were contrasted with the results of normal controls (Control group) during a focused visual attention task, a focused auditory attention task, and a visual/auditory divided attention task. For the Autism group, detecting targets in the cross-modal divided attention condition was more difficult (longer RTs, a lower percentage of correct detections) than attending to a single modality. However, both the Autism and Control groups performed all tasks above chance level. The slow negative wave (SNW) was the only negative component that reflected the Focused vs. Divided task effect in Controls, being largest to stimuli in single-channel focused attention, intermediate when attention was divided between targets of two modalities, and smallest to unattended stimuli. Task effects were more evident in the positive peaks for the Autism group. No significant divided attention task effect was noted for the P3b, although it was larger for attended than for ignored stimuli, had normal morphology, and was only slightly reduced in amplitude in the Autism group compared with the Control group. The failure of the Autism group to modulate the slow negative wave across the Focused/Divided/Ignored conditions in a normal manner, the relatively normal morphology despite the reduced amplitude of the P3b and other positive components, and the high level of correct target detections are discussed in the context of a selective inhibition deficit and an alternative mechanism of selective attention in autism.


Proceedings of the National Academy of Sciences of the United States of America | 2012

Morphing between expressions dissociates continuous from categorical representations of facial expression in the human brain

Richard J. Harris; Andrew W. Young; Timothy J. Andrews

Whether the brain represents facial expressions as perceptual continua or as emotion categories remains controversial. Here, we measured the neural response to morphed images to directly address how facial expressions of emotion are represented in the brain. We found that face-selective regions in the posterior superior temporal sulcus and the amygdala responded selectively to changes in facial expression, independent of changes in identity. We then asked whether the responses in these regions reflected categorical or continuous neural representations of facial expression. Participants viewed images from continua generated by morphing between faces posing different expressions such that the expression could be the same, could involve a physical change but convey the same emotion, or could differ by the same physical amount but be perceived as two different emotions. We found that the posterior superior temporal sulcus was equally sensitive to all changes in facial expression, consistent with a continuous representation. In contrast, the amygdala was only sensitive to changes in expression that altered the perceived emotion, demonstrating a more categorical representation. These results offer a resolution to the controversy about how facial expression is processed in the brain by showing that both continuous and categorical representations underlie our ability to extract this important social cue.
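
The morph continua described above can be illustrated with a minimal sketch. The snippet below builds a pixel-wise linear morph between two expression images; the file names are hypothetical placeholders, and the published stimuli were presumably created with dedicated face-morphing software that warps facial shape as well as texture, so this is a simplification rather than the authors' procedure.

```python
# Minimal sketch of a morph continuum between two expression images,
# implemented as a pixel-wise linear blend. The real stimuli would also warp
# facial shape; file names here are hypothetical placeholders.
import numpy as np
from PIL import Image

def morph_continuum(img_a_path, img_b_path, n_steps=5):
    """Return n_steps images spanning img_a (0% morph) to img_b (100% morph)."""
    a = np.asarray(Image.open(img_a_path).convert("L"), dtype=float)
    b = np.asarray(Image.open(img_b_path).convert("L"), dtype=float)
    assert a.shape == b.shape, "images must be aligned and the same size"
    weights = np.linspace(0.0, 1.0, n_steps)  # e.g. 0, 25, 50, 75, 100%
    return [Image.fromarray(((1 - w) * a + w * b).astype(np.uint8)) for w in weights]

# Example with hypothetical files, e.g. a fearful and a happy pose:
# continuum = morph_continuum("face_fear.png", "face_happy.png", n_steps=5)
```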


Cerebral Cortex | 2014

Neural Responses to Expression and Gaze in the Posterior Superior Temporal Sulcus Interact with Facial Identity

Heidi A. Baseler; Richard J. Harris; Andrew W. Young; Timothy J. Andrews

Neural models of human face perception propose parallel pathways. One pathway (including posterior superior temporal sulcus, pSTS) is responsible for processing changeable aspects of faces such as gaze and expression, and the other pathway (including the fusiform face area, FFA) is responsible for relatively invariant aspects such as identity. However, to be socially meaningful, changes in expression and gaze must be tracked across an individual face. Our aim was to investigate how this is achieved. Using functional magnetic resonance imaging, we found a region in pSTS that responded more to sequences of faces varying in gaze and expression in which the identity was constant compared with sequences in which the identity varied. To determine whether this preferential response to same identity faces was due to the processing of identity in the pSTS or was a result of interactions between pSTS and other regions thought to code face identity, we measured the functional connectivity between face-selective regions. We found increased functional connectivity between the pSTS and FFA when participants viewed same identity faces compared with different identity faces. Together, these results suggest that distinct neural pathways involved in expression and identity interact to process the changeable features of the face in a socially meaningful way.
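
The functional connectivity measure referred to above can be sketched as the correlation between region-of-interest time series, compared across conditions. The snippet below uses synthetic placeholder signals standing in for the pSTS and FFA responses; it illustrates the logic of the comparison only and is not the analysis pipeline used in the paper.

```python
# Minimal sketch of a functional connectivity comparison: the Pearson
# correlation between two ROI time series, computed separately for two
# conditions. The signals below are synthetic placeholders for the mean
# BOLD time series extracted from pSTS and FFA.
import numpy as np

def connectivity(ts_a, ts_b):
    """Pearson correlation between two 1-D time series."""
    return np.corrcoef(ts_a, ts_b)[0, 1]

rng = np.random.default_rng(0)
n_vols = 200  # number of fMRI volumes (hypothetical)

psts_same, ffa_same = rng.standard_normal((2, n_vols))   # same-identity blocks
psts_diff, ffa_diff = rng.standard_normal((2, n_vols))   # different-identity blocks

print(f"same identity:      r = {connectivity(psts_same, ffa_same):.2f}")
print(f"different identity: r = {connectivity(psts_diff, ffa_diff):.2f}")
```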


Neuropsychologia | 2014

Dynamic stimuli demonstrate a categorical representation of facial expression in the amygdala.

Richard J. Harris; Andrew W. Young; Timothy J. Andrews

Face-selective regions in the amygdala and posterior superior temporal sulcus (pSTS) are strongly implicated in the processing of transient facial signals, such as expression. Here, we measured neural responses in participants while they viewed dynamic changes in facial expression. Our aim was to explore how facial expression is represented in different face-selective regions. Short movies were generated by morphing between faces posing a neutral expression and a prototypical expression of a basic emotion (anger, disgust, fear, happiness or sadness). These dynamic stimuli were presented in a block design with the following four stimulus conditions: (1) same-expression change, same identity; (2) same-expression change, different identity; (3) different-expression change, same identity; and (4) different-expression change, different identity. Thus, within a same-expression change condition the movies showed the same change in expression, whereas in the different-expression change conditions each movie showed a different change in expression. Facial identity remained constant during each movie, but in the different-identity conditions the facial identity varied between movies within a block. The amygdala, but not the posterior STS, showed a greater response to blocks in which each movie morphed from neutral to a different emotion category than to blocks in which each movie morphed to the same emotion category. Neural adaptation in the amygdala was not affected by changes in facial identity. These results are consistent with a role of the amygdala in the category-based representation of facial expressions of emotion.
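
The factorial structure of the four stimulus conditions listed above can be written out programmatically. The sketch below simply enumerates the 2 (expression change: same vs. different) x 2 (identity: same vs. different) blocks; the labels come from the abstract, not from the actual stimulus files.

```python
# Minimal sketch of the 2 x 2 factorial block structure described above:
# expression change (same vs. different across movies in a block) crossed
# with identity (same vs. different across movies in a block).
from itertools import product

conditions = [
    {"expression_change": expr, "identity": ident}
    for expr, ident in product(["same", "different"], repeat=2)
]

for i, cond in enumerate(conditions, start=1):
    print(f"Condition {i}: {cond['expression_change']}-expression change, "
          f"{cond['identity']}-identity")
```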


NeuroImage | 2014

Brain regions involved in processing facial identity and expression are differentially selective for surface and edge information

Richard J. Harris; Andrew W. Young; Timothy J. Andrews

Although different brain regions are widely considered to be involved in the recognition of facial identity and expression, it remains unclear how these regions process different properties of the visual image. Here, we ask how surface-based reflectance information and edge-based shape cues contribute to the perception and neural representation of facial identity and expression. Contrast-reversal was used to generate images in which normal contrast relationships across the surface of the image were disrupted, but edge information was preserved. In a behavioural experiment, contrast-reversal significantly attenuated judgements of facial identity, but only had a marginal effect on judgements of expression. An fMR-adaptation paradigm was then used to ask how brain regions involved in the processing of identity and expression responded to blocks comprising all normal, all contrast-reversed, or a mixture of normal and contrast-reversed faces. Adaptation in the posterior superior temporal sulcus – a region directly linked with processing facial expression – was relatively unaffected by mixing normal with contrast-reversed faces. In contrast, the response of the fusiform face area – a region linked with processing facial identity – was significantly affected by contrast-reversal. These results offer a new perspective on the reasons underlying the neural segregation of facial identity and expression in which brain regions involved in processing invariant aspects of faces, such as identity, are very sensitive to surface-based cues, whereas regions involved in processing changes in faces, such as expression, are relatively dependent on edge-based cues.
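
Contrast reversal, as used above to dissociate surface from edge information, can be sketched with a simple intensity inversion. The snippet below inverts a grayscale face image so that light regions become dark and vice versa, preserving edge locations while disrupting normal surface contrast relationships; the file name is a hypothetical placeholder, and the published stimuli may have involved additional steps such as luminance matching.

```python
# Minimal sketch of contrast reversal: invert the intensities of a grayscale
# face image so that light regions become dark and vice versa. Edge locations
# are preserved while normal surface contrast relationships are disrupted.
# The file name is a hypothetical placeholder.
import numpy as np
from PIL import Image

def contrast_reverse(path):
    gray = np.asarray(Image.open(path).convert("L"))  # 8-bit grayscale
    return Image.fromarray(255 - gray)                # photographic-negative inversion

# Example with a hypothetical file:
# reversed_face = contrast_reverse("face_01.png")
```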


Cerebral Cortex | 2016

Distinct but Overlapping Patterns of Response to Words and Faces in the Fusiform Gyrus

Richard J. Harris; Grace E. Rice; Andrew W. Young; Timothy J. Andrews

Converging evidence suggests that the fusiform gyrus is involved in the processing of both faces and words. We used fMRI to investigate the extent to which the representation of words and faces in this region of the brain is based on a common neural representation. In Experiment 1, a univariate analysis revealed regions in the fusiform gyrus that were only selective for faces and other regions that were only selective for words. However, we also found regions that showed both word-selective and face-selective responses, particularly in the left hemisphere. We then used a multivariate analysis to measure the pattern of response to faces and words. Despite the overlap in regional responses, we found distinct patterns of response to both faces and words in the left and right fusiform gyrus. In Experiment 2, fMR adaptation was used to determine whether information about familiar faces and names is integrated in the fusiform gyrus. Distinct regions of the fusiform gyrus showed adaptation to either familiar faces or familiar names. However, there was no adaptation to sequences of faces and names with the same identity. Taken together, these results provide evidence for distinct, but overlapping, neural representations for words and faces in the fusiform gyrus.
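
The multivariate analysis mentioned above can be illustrated with a split-half pattern-correlation sketch: distinct representations are indicated when within-category correlations (faces with faces, words with words) exceed between-category correlations. The voxel patterns below are synthetic placeholders, so this shows the logic of the comparison rather than the study's actual analysis.

```python
# Minimal sketch of a split-half pattern-correlation analysis. Distinct
# representations are indicated when within-category correlations exceed
# between-category correlations. Voxel patterns are synthetic placeholders
# built so that each category has a reliable pattern plus noise.
import numpy as np

rng = np.random.default_rng(1)
n_voxels = 500

def noisy(pattern):
    return pattern + 0.5 * rng.standard_normal(n_voxels)

faces_pattern = rng.standard_normal(n_voxels)
words_pattern = rng.standard_normal(n_voxels)
faces_half1, faces_half2 = noisy(faces_pattern), noisy(faces_pattern)
words_half1, words_half2 = noisy(words_pattern), noisy(words_pattern)

def r(a, b):
    return np.corrcoef(a, b)[0, 1]

within = (r(faces_half1, faces_half2) + r(words_half1, words_half2)) / 2
between = (r(faces_half1, words_half2) + r(words_half1, faces_half2)) / 2
print(f"within-category r = {within:.2f}, between-category r = {between:.2f}")
```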


Cortex | 2016

An image-invariant neural response to familiar faces in the human medial temporal lobe

Katja Weibert; Richard J. Harris; Alexandra Mitchell; Hollie Byrne; Andrew W. Young; Timothy J. Andrews

The ability to recognise familiar faces with ease across different viewing conditions contrasts with the inherent difficulty in the perception of unfamiliar faces across similar image manipulations. Models of face processing suggest that this difference is based on the neural representation for familiar faces being more invariant to changes in the image than it is for unfamiliar faces. Here, we used an fMR-adaptation paradigm to investigate neural correlates of image-invariant face recognition in face-selective regions of the human brain. Participants viewed faces presented in a blocked design. Each block contained different images of the same identity or different images from different identities. Faces in each block were either familiar or unfamiliar to the participants. First, we defined face-selective regions by comparing the response to faces with the response to scenes and scrambled faces. Next, we asked whether any of these face-selective regions showed image-invariant adaptation to the identity of a face. The core face-selective regions showed image-invariant adaptation to familiar and unfamiliar faces. However, there was no difference in the adaptation to familiar compared to unfamiliar faces. In contrast, image-invariant adaptation for familiar faces, but not for unfamiliar faces, was found in face-selective regions of the medial temporal lobe (MTL). Taken together, our results suggest that the marked differences in the perception of familiar and unfamiliar faces may depend critically on neural processes in the medial temporal lobe.
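
The fMR-adaptation logic used above can be expressed as a simple contrast: image-invariant coding of identity is inferred when the response to blocks of different identities exceeds the response to blocks containing different images of the same identity. The numbers in the sketch below are hypothetical and are not data from the paper.

```python
# Minimal sketch of the fMR-adaptation contrast: a positive difference between
# the response to different-identity blocks and same-identity blocks indicates
# release from adaptation, i.e. image-invariant coding of identity. The
# block-averaged responses below are hypothetical numbers, not data from the paper.
def adaptation_effect(resp_different_identity, resp_same_identity):
    return resp_different_identity - resp_same_identity

familiar = adaptation_effect(resp_different_identity=1.20, resp_same_identity=0.90)
unfamiliar = adaptation_effect(resp_different_identity=1.05, resp_same_identity=1.02)
print(f"familiar faces:   {familiar:+.2f}")
print(f"unfamiliar faces: {unfamiliar:+.2f}")
```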


F1000Research | 2015

An image-invariant representation of familiar faces in the human medial temporal lobe

Kay Weibert; Richard J. Harris; Alexandra Mitchell; Hollie Byrne; Timothy J. Andrews


Journal of Vision | 2013

Invariance to linear but not non-linear changes in the spatial configuration of faces in human visual cortex

Timothy J. Andrews; Heidi A. Baseler; Richard J. Harris; Rob Jenkins; A. Mike Burton; Andrew W. Young


Journal of Vision | 2013

Contrast negation reveals a dissociation in the neural representations underlying the perception of facial identity and expression

Richard J. Harris; Andrew W. Young; Timothy J. Andrews
