Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Nicholas Furl is active.

Publication


Featured research published by Nicholas Furl.


Brain | 2009

Voxel-based morphometry reveals reduced grey matter volume in the temporal cortex of developmental prosopagnosics

Lúcia Garrido; Nicholas Furl; Bogdan Draganski; Nikolaus Weiskopf; John M. Stevens; Geoffrey Tan; Jon Driver; R. J. Dolan; Bradley Duchaine

Individuals with developmental prosopagnosia exhibit severe and lasting difficulties in recognizing faces despite the absence of apparent brain abnormalities. We used voxel-based morphometry to investigate whether developmental prosopagnosics show subtle neuroanatomical differences from controls. An analysis based on segmentation of T1-weighted images from 17 developmental prosopagnosics and 18 matched controls revealed that they had reduced grey matter volume in the right anterior inferior temporal lobe and in the superior temporal sulcus/middle temporal gyrus bilaterally. In addition, a voxel-based morphometry analysis based on the segmentation of magnetization transfer parameter maps showed that developmental prosopagnosics also had reduced grey matter volume in the right middle fusiform gyrus and the inferior temporal gyrus. Multiple regression analyses relating three distinct behavioural component scores, derived from a principal component analysis, to grey matter volume revealed an association between a component related to facial identity and grey matter volume in the left superior temporal sulcus/middle temporal gyrus plus the right middle fusiform gyrus/inferior temporal gyrus. Grey matter volume in the lateral occipital cortex was associated with component scores related to object recognition tasks. Our results demonstrate that developmental prosopagnosics have reduced grey matter volume in several regions known to respond selectively to faces and provide new evidence that integrity of these areas relates to face recognition ability.
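
The component-score analysis above pairs a principal component decomposition of the behavioral battery with a voxel-wise multiple regression on grey matter volume. A minimal sketch of that two-step analysis, using entirely synthetic data and hypothetical dimensions (35 participants, 6 behavioral tasks, 3 retained components), not the study's actual pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical behavioral battery: 35 participants x 6 tasks
scores = rng.normal(size=(35, 6))

# Principal component analysis via SVD on standardized scores
z = (scores - scores.mean(axis=0)) / scores.std(axis=0)
u, s, vt = np.linalg.svd(z, full_matrices=False)
components = u[:, :3] * s[:3]  # per-participant scores on 3 components

# Voxel-wise step: relate component scores to grey matter volume at one
# voxel with a multiple regression (intercept plus three regressors)
gm_volume = rng.normal(size=35)  # hypothetical grey matter values
design = np.column_stack([np.ones(35), components])
beta, *_ = np.linalg.lstsq(design, gm_volume, rcond=None)
```

In the full analysis this regression would be repeated at every voxel, with significance assessed across the brain.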


Cognitive Science | 2002

Face recognition algorithms and the other‐race effect: computational mechanisms for a developmental contact hypothesis

Nicholas Furl; P. Jonathon Phillips; Alice J. O'Toole

People recognize faces of their own race more accurately than faces of other races. The “contact” hypothesis suggests that this “other-race effect” occurs as a result of the greater experience we have with own- versus other-race faces. The computational mechanisms that may underlie different versions of the contact hypothesis were explored in this study. We replicated the other-race effect with human participants and evaluated four classes of computational face recognition algorithms for the presence of an other-race effect. Consistent with the predictions of a developmental contact hypothesis, “experience-based models” demonstrated an other-race effect only when the representational system was developed through experience that warped the perceptual space in a way that was sensitive to the overall structure of the model’s experience with faces of different races. When the model’s representation relied on a feature set optimized to encode the information in the learned faces, experience-based algorithms recognized minority-race faces more accurately than majority-race faces. The results suggest a developmental learning process that warps the perceptual space to enhance the encoding of distinctions relevant for own-race faces. This feature space limits the quality of face representations for other-race faces.
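
The warped-space intuition behind "experience-based models" can be illustrated with a PCA face space: a feature space learned from a training set dominated by one group encodes faces from that group with less loss. Everything below is synthetic and the dimensions are arbitrary; this is an illustration of the intuition, not a reimplementation of the paper's models:

```python
import numpy as np

rng = np.random.default_rng(1)
dim, k = 50, 5  # arbitrary image dimension and subspace size

# Two hypothetical face populations varying along different subspaces
basis_a = rng.normal(size=(dim, k))
basis_b = rng.normal(size=(dim, k))
faces_a = rng.normal(size=(200, k)) @ basis_a.T  # "own-race" faces
faces_b = rng.normal(size=(200, k)) @ basis_b.T  # "other-race" faces

# "Experience": learn k perceptual axes from a sample dominated by group A
train = np.vstack([faces_a[:180], faces_b[:20]])
mu = train.mean(axis=0)
_, _, vt = np.linalg.svd(train - mu, full_matrices=False)
axes = vt[:k]

def reconstruction_error(x):
    # How much of a face the learned space fails to encode
    xc = x - mu
    proj = (xc @ axes.T) @ axes
    return float(np.mean(np.sum((xc - proj) ** 2, axis=1)))

# The learned space encodes the experienced group with less loss
err_a = reconstruction_error(faces_a)
err_b = reconstruction_error(faces_b)
```

Because the learned axes mostly span group A's subspace, group B's faces lose more information under projection, mirroring poorer other-race representations.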


Journal of Cognitive Neuroscience | 2011

Fusiform gyrus face selectivity relates to individual differences in facial recognition ability

Nicholas Furl; Lúcia Garrido; R. J. Dolan; Jon Driver; Bradley Duchaine

Regions of the occipital and temporal lobes, including a region in the fusiform gyrus (FG), have been proposed to constitute a “core” visual representation system for faces, in part because they show face selectivity and face repetition suppression. But recent fMRI studies of developmental prosopagnosics (DPs) raise questions about whether these measures relate to face processing skills. Although DPs manifest deficient face processing, most studies to date have not shown unequivocal reductions of functional responses in the proposed core regions. We scanned 15 DPs and 15 non-DP control participants with fMRI while employing factor analysis to derive behavioral components related to face identification or other processes. Repetition suppression specific to facial identities in FG or to expression in FG and STS did not show compelling relationships with face identification ability. However, we identified robust relationships between face selectivity and face identification ability in FG across our sample for several convergent measures, including voxel-wise statistical parametric mapping, peak face selectivity in individually defined “fusiform face areas” (FFAs), and anatomical extents (cluster sizes) of those FFAs. None of these measures showed associations with behavioral expression or object recognition ability. As a group, DPs had reduced face-selective responses in bilateral FFA when compared with non-DPs. Individual DPs were also more likely than non-DPs to lack expected face-selective activity in core regions. These findings associate individual differences in face processing ability with selectivity in core face processing regions. This confirms that face selectivity can provide a valid marker for neural mechanisms that contribute to face identification ability.


Journal of Neurophysiology | 2010

Rewarding feedback after correct visual discriminations has both general and specific influences on visual cortex.

Rimona S. Weil; Nicholas Furl; Christian C. Ruff; Mkael Symmonds; Guillaume Flandin; R. J. Dolan; Jon Driver; Geraint Rees

Reward can influence visual performance, but the neural basis of this effect remains poorly understood. Here we used functional magnetic resonance imaging to investigate how rewarding feedback affected activity in distinct areas of human visual cortex, separating rewarding feedback events after correct performance from preceding visual events. Participants discriminated oriented gratings in either hemifield, receiving auditory feedback at trial end that signaled financial reward after correct performance. Greater rewards improved performance for all but the most difficult trials. Rewarding feedback increased blood-oxygen-level-dependent (BOLD) signals in striatum and orbitofrontal cortex. It also increased BOLD signals in visual areas beyond retinotopic cortex, but not in primary visual cortex representing the judged stimuli. These modulations were seen at a time point in which no visual stimuli were presented or expected, demonstrating a novel type of activity change in visual cortex that cannot reflect modulation of response to incoming or anticipated visual stimuli. Rewarded trials led on the next trial to improved performance and enhanced visual activity contralateral to the judged stimulus, for retinotopic representations of the judged visual stimuli in V1. Our findings distinguish general effects in nonretinotopic visual cortex when receiving rewarding feedback after correct performance from consequences of reward for spatially specific responses in V1.


Proceedings of the National Academy of Sciences of the United States of America | 2007

Experience-dependent coding of facial expression in superior temporal sulcus

Nicholas Furl; Nicola J. van Rijsbergen; Alessandro Treves; K. J. Friston; R. J. Dolan

Sensory information from the external world is inherently ambiguous, necessitating prior experience as a constraint on perception. Prolonged experience (adaptation) induces perception of ambiguous morph faces as a category different from the adapted category, suggesting sensitivity in underlying neural codes to differences between input and recent experience. Using magnetoencephalography, we investigated the neural dynamics of such experience-dependent visual coding by focusing on the timing of responses to morphs after facial expression adaptation. We show that evoked fields arising from the superior temporal sulcus (STS) reflect the degree to which a morph and adapted expression deviate. Furthermore, adaptation effects within STS predict the magnitude of behavioral aftereffects. These findings show that the STS codes expressions relative to recent experience rather than absolutely and may bias perception of expressions. One potential neural mechanism for the late timing of both effects appeals to hierarchical models that ascribe a central role to backward connections in mediating predictive codes.


The Journal of Neuroscience | 2011

Parietal cortex and insula relate to evidence seeking relevant to reward-related decisions

Nicholas Furl; Bruno B. Averbeck

Decisions are most effective after collecting sufficient evidence to accurately predict rewarding outcomes. We investigated whether human participants optimally seek evidence and we characterized the brain areas associated with their evidence seeking. Participants viewed sequences of bead colors drawn from hidden urns and attempted to infer the majority bead color in each urn. When viewing each bead color, participants chose either to seek more evidence about the urn by drawing another bead (draw choices) or to infer the urn contents (urn choices). We then compared their evidence seeking against that predicted by a Bayesian ideal observer model. By this standard, participants sampled less evidence than optimal. Also, when faced with urns that had bead color splits closer to chance (60/40 versus 80/20) or potential monetary losses, participants increased their evidence seeking, but they showed less increase than predicted by the ideal observer model. Functional magnetic resonance imaging showed that urn choices evoked larger hemodynamic responses than draw choices in the insula, striatum, anterior cingulate, and parietal cortex. These parietal responses were greater for participants who sought more evidence on average and for participants who increased their evidence seeking more when draws came from 60/40 urns. The parietal cortex and insula were associated with potential monetary loss. Insula responses also showed modulation with estimates of the expected gains of urn choices. Our findings show that participants sought less evidence than predicted by an ideal observer model and their evidence-seeking behavior may relate to responses in the insula and parietal cortex.
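
A Bayesian ideal observer for this beads task computes a posterior over the urn's majority color and weighs the expected reward of committing now against drawing again at a cost. The sketch below uses a myopic one-step lookahead rather than the full dynamic program over all remaining draws, and the reward and cost values are hypothetical:

```python
# Beads task ideal observer (myopic one-step version).
# q is the urn's majority-color proportion (e.g., 0.6 or 0.8).

def posterior_majority_blue(n_blue, n_total, q=0.8):
    # Posterior that the urn is majority-blue, from a 50/50 prior
    like_blue = q ** n_blue * (1 - q) ** (n_total - n_blue)
    like_pink = (1 - q) ** n_blue * q ** (n_total - n_blue)
    return like_blue / (like_blue + like_pink)

def should_draw(n_blue, n_total, q=0.8, reward=10.0, cost=0.25):
    p = posterior_majority_blue(n_blue, n_total, q)
    value_now = reward * max(p, 1 - p)  # commit to the likelier urn

    # One more costly draw: average over the predicted next bead color
    p_next_blue = p * q + (1 - p) * (1 - q)
    def value_after(nb, nt):
        pn = posterior_majority_blue(nb, nt, q)
        return reward * max(pn, 1 - pn)
    value_draw = (p_next_blue * value_after(n_blue + 1, n_total + 1)
                  + (1 - p_next_blue) * value_after(n_blue, n_total + 1)
                  - cost)
    return value_draw > value_now
```

With no beads seen the model keeps drawing; once the posterior is near-certain, another costly draw is no longer worth it. Lowering q toward 0.6 prolongs drawing, matching the 60/40-versus-80/20 manipulation.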


The Journal of Neuroscience | 2013

Top-down control of visual responses to fear by the amygdala.

Nicholas Furl; Richard N. Henson; K. J. Friston; Andrew J. Calder

The visual cortex is sensitive to emotional stimuli. This sensitivity is typically assumed to arise when the amygdala modulates visual cortex via backwards connections. Using human fMRI, we compared dynamic causal connectivity models of sensitivity to fearful faces. This model comparison tested whether the amygdala modulates distinct cortical areas, depending on dynamic or static face presentation. The ventral temporal fusiform face area showed sensitivity to fearful expressions in static faces. However, for dynamic faces, we found fear sensitivity in dorsal motion-sensitive areas within hMT+/V5 and superior temporal sulcus. The model with the greatest evidence included connections modulated by dynamic and static fear from amygdala to dorsal and ventral temporal areas, respectively. According to this functional architecture, the amygdala could enhance encoding of fearful expression movements from video and the form of fearful expressions from static images. The amygdala may therefore optimize visual encoding of socially charged and salient information.


NeuroImage | 2011

Neural prediction of higher-order auditory sequence statistics

Nicholas Furl; Sukhbinder Kumar; Kai Alter; Simon J. Durrant; John Shawe-Taylor; Timothy D. Griffiths

During auditory perception, we are required to abstract information from complex temporal sequences such as those in music and speech. Here, we investigated how higher-order statistics modulate the neural responses to sound sequences, hypothesizing that these modulations are associated with higher levels of the peri-Sylvian auditory hierarchy. We devised second-order Markov sequences of pure tones with uniform first-order transition probabilities. Participants learned to discriminate these sequences from random ones. Magnetoencephalography was used to identify evoked fields in which second-order transition probabilities were encoded. We show that improbable tones evoked heightened neural responses after 200 ms post-tone onset during exposure at the learning stage or around 150 ms during the subsequent test stage, originating near the right temporoparietal junction. These signal changes reflected higher-order statistical learning, which can contribute to the perception of natural sounds with hierarchical structures. We propose that our results reflect hierarchical predictive representations, which can contribute to the experiences of speech and music.
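
A second-order Markov tone sequence of the kind described can be sketched as follows. This toy version draws the transition table at random and does not enforce the paper's constraint that first-order transition probabilities be exactly uniform; the surprisal of each tone given its two-tone context stands in for the improbable tones that evoked heightened responses:

```python
import numpy as np

rng = np.random.default_rng(2)
tones = 4  # hypothetical alphabet of pure-tone frequencies

# Second-order transition table: P(next tone | two preceding tones).
# Peaked rows make some tones improbable in context.
p2 = rng.dirichlet(np.ones(tones) * 0.3, size=(tones, tones))

def generate(length):
    seq = list(rng.integers(0, tones, size=2))
    for _ in range(length - 2):
        seq.append(int(rng.choice(tones, p=p2[seq[-2], seq[-1]])))
    return seq

def surprisal(seq):
    # -log P(tone | context): high for contextually improbable tones
    return [-np.log(p2[seq[i - 2], seq[i - 1]][seq[i]])
            for i in range(2, len(seq))]
```

A listener who has learned the table should find high-surprisal tones unexpected, which is the statistic hypothesized to drive the evoked-field modulations.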


NeuroImage | 2007

Face adaptation aftereffects reveal anterior medial temporal cortex role in high level category representation

Nicholas Furl; Nicola J. van Rijsbergen; Alessandro Treves; R. J. Dolan

Previous studies have shown reductions of the functional magnetic resonance imaging (fMRI) signal in response to repetition of specific visual stimuli. We examined how adaptation affects the neural responses associated with categorization behavior, using face adaptation aftereffects. Adaptation to a given facial category biases categorization towards non-adapted facial categories in response to presentation of ambiguous morphs. We explored a hypothesis, posed by recent psychophysical studies, that these adaptation-induced categorizations are mediated by activity in relatively advanced stages within the occipitotemporal visual processing stream. Replicating these studies, we find that adaptation to a facial expression heightens perception of non-adapted expressions. Using comparable behavioral methods, we also show that adaptation to a specific identity heightens perception of a second identity in morph faces. We show both expression and identity effects to be associated with heightened anterior medial temporal lobe activity, specifically when perceiving the non-adapted category. These regions, incorporating bilateral anterior ventral rhinal cortices, perirhinal cortex and left anterior hippocampus are regions previously implicated in high-level visual perception. These categorization effects were not evident in fusiform or occipital gyri, although activity in these regions was reduced to repeated faces. The findings suggest that adaptation-induced perception is mediated by activity in regions downstream to those showing reductions due to stimulus repetition.


The Journal of Neuroscience | 2012

Dynamic and Static Facial Expressions Decoded from Motion-Sensitive Areas in the Macaque Monkey

Nicholas Furl; Fadila Hadj-Bouziane; Ning Liu; Bruno B. Averbeck; Leslie G. Ungerleider

Humans adeptly use visual motion to recognize socially relevant facial information. The macaque provides a model visual system for studying neural coding of expression movements, as its superior temporal sulcus (STS) possesses brain areas selective for faces and areas sensitive to visual motion. We used functional magnetic resonance imaging and facial stimuli to localize motion-sensitive areas [motion in faces (Mf) areas], which responded more to dynamic faces compared with static faces, and face-selective areas, which responded selectively to faces compared with objects and places. Using multivariate analysis, we found that information about both dynamic and static facial expressions could be robustly decoded from Mf areas. By contrast, face-selective areas exhibited relatively less facial expression information. Classifiers trained with expressions from one motion type (dynamic or static) showed poor generalization to the other motion type, suggesting that Mf areas employ separate and nonconfusable neural codes for dynamic and static presentations of the same expressions. We also show that some of the motion sensitivity elicited by facial stimuli was not specific to faces but could also be elicited by moving dots, particularly in fundus of the superior temporal and middle superior temporal polysensory/lower superior temporal areas, confirming their already well established low-level motion sensitivity. A different pattern was found in anterior STS, which responded more to dynamic than to static faces but was not sensitive to dot motion. Overall, we show that emotional expressions are mostly represented outside of face-selective cortex, in areas sensitive to motion. These regions may play a fundamental role in enhancing recognition of facial expression despite the complex stimulus changes associated with motion.
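
The poor cross-decoding between dynamic and static expressions can be illustrated with a toy multivariate analysis: if the two presentation modes use unrelated voxel codes, a classifier trained on one generalizes poorly to the other. The sketch below uses synthetic response patterns and a simple nearest-centroid classifier, not the study's actual data or decoder:

```python
import numpy as np

rng = np.random.default_rng(3)
voxels, n_per, exprs = 30, 40, 3  # hypothetical dimensions

def patterns(means):
    # Noisy single-trial response patterns around each class mean
    return np.vstack([m + rng.normal(scale=2.0, size=(n_per, voxels))
                      for m in means])

# Dynamic and static presentations use unrelated voxel codes
dyn_means = rng.normal(size=(exprs, voxels))
sta_means = rng.normal(size=(exprs, voxels))
labels = np.repeat(np.arange(exprs), n_per)
dyn_train, dyn_test = patterns(dyn_means), patterns(dyn_means)
sta_test = patterns(sta_means)

def nearest_centroid_accuracy(train, train_y, test, test_y):
    centroids = np.vstack([train[train_y == c].mean(axis=0)
                           for c in range(exprs)])
    d = ((test[:, None, :] - centroids[None]) ** 2).sum(axis=-1)
    return float((d.argmin(axis=1) == test_y).mean())

within = nearest_centroid_accuracy(dyn_train, labels, dyn_test, labels)
across = nearest_centroid_accuracy(dyn_train, labels, sta_test, labels)
```

Within-mode decoding is well above chance while cross-mode decoding falls toward chance, the signature of separate, nonconfusable codes.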

Collaboration


Dive into Nicholas Furl's collaborations.

Top Co-Authors

R. J. Dolan (University College London)
Jon Driver (Wellcome Trust Centre for Neuroimaging)
Bruno B. Averbeck (National Institutes of Health)
K. J. Friston (University College London)
Lúcia Garrido (Brunel University London)
Alessandro Treves (International School for Advanced Studies)
Andrew J. Calder (Cognition and Brain Sciences Unit)
Gabi Teodoru (University College London)