Fraser W. Smith
University of East Anglia
Publications
Featured research published by Fraser W. Smith.
Current Biology | 2015
Lars Muckli; Federico De Martino; Luca Vizioli; Lucy S. Petro; Fraser W. Smith; Kamil Ugurbil; Rainer Goebel; Essa Yacoub
Neuronal cortical circuitry comprises feedforward, lateral, and feedback projections, each of which terminates in distinct cortical layers [1–3]. In sensory systems, feedforward processing transmits signals from the external world into the cortex, whereas feedback pathways signal the brain’s inference of the world [4–11]. However, the integration of feedforward, lateral, and feedback inputs within each cortical area impedes the investigation of feedback, and to date, no technique has isolated the feedback of visual scene information in distinct layers of healthy human cortex. We masked feedforward input to a region of V1 cortex and studied the remaining internal processing. Using high-resolution functional brain imaging (0.8 mm³) and multivoxel pattern information techniques, we demonstrate that during normal visual stimulation scene information peaks in mid-layers. Conversely, we found that contextual feedback information peaks in outer, superficial layers. Further, we found that shifting the position of the visual scene surrounding the mask parametrically modulates feedback in superficial layers of V1. Our results reveal the layered cortical organization of external versus internal visual processing streams during perception in healthy human subjects. We provide empirical support for theoretical feedback models such as predictive coding [10, 12] and coherent infomax [13] and reveal the potential of high-resolution fMRI to access internal processing in sub-millimeter human cortex.
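As a rough illustration of the multivoxel pattern decoding described above, the sketch below trains a linear classifier on voxel patterns from each cortical depth and compares cross-validated accuracies. The arrays are hypothetical stand-ins for extracted fMRI data; this is not the authors' pipeline.

```python
# Minimal sketch of layer-wise multivoxel pattern decoding (not the
# published pipeline): train a linear SVM on voxel patterns from each
# cortical depth and compare cross-validated decoding accuracy.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_voxels = 120, 200
labels = rng.integers(0, 2, n_trials)  # two scene conditions (placeholder)

for layer in ("deep", "middle", "superficial"):
    # Stand-in for voxel patterns extracted at this cortical depth.
    patterns = rng.standard_normal((n_trials, n_voxels))
    acc = cross_val_score(SVC(kernel="linear"), patterns, labels, cv=5)
    print(f"{layer:12s} decoding accuracy: {acc.mean():.2f}")
```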
Psychological Science | 2009
Fraser W. Smith; Philippe G. Schyns
It is well established that animal communication signals have adapted to the evolutionary pressures of their environment. For example, the low-frequency vocalizations of the elephant are tailored to long-range communications, whereas the high-frequency trills of birds are adapted to their more localized acoustic niche. Like the voice, the human face transmits social signals about the internal emotional state of the transmitter. Here, we address two main issues: First, we characterized the spectral composition of the facial features signaling each of the six universal expressions of emotion (happiness, sadness, fear, disgust, anger, and surprise). From these analyses, we then predicted and tested the effectiveness of the transmission of emotion signals over different viewing distances. We reveal a gradient of recognition over viewing distances constraining the relative adaptive usefulness of facial expressions of emotion (distal expressions are good signals over a wide range of viewing distances; proximal expressions are suited to closer-range communication).
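The link between viewing distance and spectral content can be made concrete: as a face recedes it subtends a smaller visual angle, so fine spatial frequencies fall beyond the resolution limit of the visual system, a loss that low-pass filtering approximates. The image, the linear scaling rule, and all parameters below are illustrative assumptions, not the authors' method.

```python
# Illustrative sketch: approximate the spatial-frequency loss at greater
# viewing distances by low-pass filtering the face image.
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(1)
face = rng.random((256, 256))  # stand-in for a face image

def simulate_distance(img, distance_m, base_distance_m=1.0):
    """Blur the image to mimic the frequencies a face transmits
    when viewed from distance_m (crude linear scaling, an assumption)."""
    sigma = distance_m / base_distance_m
    return gaussian_filter(img, sigma=sigma)

far_face = simulate_distance(face, distance_m=8.0)
```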
Current Biology | 2014
Petra Vetter; Fraser W. Smith; Lars Muckli
Human early visual cortex was traditionally thought to process simple visual features such as orientation, contrast, and spatial frequency via feedforward input from the lateral geniculate nucleus (e.g., [1]). However, the role of nonretinal influence on early visual cortex is so far insufficiently investigated despite much evidence that feedback connections greatly outnumber feedforward connections [2–5]. Here, we explored in five fMRI experiments how information originating from audition and imagery affects the brain activity patterns in early visual cortex in the absence of any feedforward visual stimulation. We show that category-specific information from both complex natural sounds and imagery can be read out from early visual cortex activity in blindfolded participants. The coding of nonretinal information in the activity patterns of early visual cortex is common across actual auditory perception and imagery and may be mediated by higher-level multisensory areas. Furthermore, this coding is robust to mild manipulations of attention and working memory but affected by orthogonal, cognitively demanding visuospatial processing. Crucially, the information fed down to early visual cortex is category specific and generalizes to sound exemplars of the same category, providing evidence for abstract information feedback rather than precise pictorial feedback. Our results suggest that early visual cortex receives nonretinal input from other brain areas when it is generated by auditory perception and/or imagery, and this input carries common abstract information. Our findings are compatible with feedback of predictive information to the earliest visual input level (e.g., [6]), in line with predictive coding models [7–10].
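The key inference step, that feedback is category specific rather than exemplar specific, rests on cross-exemplar generalization: a classifier trained on V1 patterns evoked by one set of sound exemplars is tested on patterns from unseen exemplars of the same categories. A minimal sketch with hypothetical data:

```python
# Cross-exemplar generalization sketch (placeholder data, not the
# published analysis): above-chance transfer to unseen exemplars of the
# same categories would imply abstract, category-level feedback.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(2)
n_per_set, n_voxels = 60, 150
train_X = rng.standard_normal((n_per_set, n_voxels))  # exemplar set A
test_X = rng.standard_normal((n_per_set, n_voxels))   # unseen exemplar set B
train_y = rng.integers(0, 3, n_per_set)               # three sound categories
test_y = rng.integers(0, 3, n_per_set)

clf = SVC(kernel="linear").fit(train_X, train_y)
print("cross-exemplar accuracy:", clf.score(test_X, test_y))
```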
Proceedings of the National Academy of Sciences of the United States of America | 2010
Fraser W. Smith; Lars Muckli
Even within the early sensory areas, the majority of the input to any given cortical neuron comes from other cortical neurons. To extend our knowledge of the contextual information that is transmitted by such lateral and feedback connections, we investigated how visually nonstimulated regions in primary visual cortex (V1) and visual area V2 are influenced by the surrounding context. We used functional magnetic resonance imaging (fMRI) and pattern-classification methods to show that the cortical representation of a nonstimulated quarter-field carries information that can discriminate the surrounding visual context. We show further that the activity patterns in these regions are significantly related to those observed with feed-forward stimulation and that these effects are driven primarily by V1. These results thus demonstrate that visual context strongly influences early visual areas even in the absence of differential feed-forward thalamic stimulation.
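One simple way to quantify the reported relation between feedback and feedforward patterns is pattern similarity; the sketch below is one plausible reading of that analysis, not necessarily the published one, and the vectors are placeholders.

```python
# Pattern-similarity sketch: patterns in the non-stimulated region should
# correlate more with feedforward patterns from the *same* surround
# context than with those from a different context.
import numpy as np

rng = np.random.default_rng(3)
n_voxels = 100
fb_ctx1 = rng.standard_normal(n_voxels)  # occluded region, context 1
ff_ctx1 = rng.standard_normal(n_voxels)  # stimulated region, context 1
ff_ctx2 = rng.standard_normal(n_voxels)  # stimulated region, context 2

same = np.corrcoef(fb_ctx1, ff_ctx1)[0, 1]
diff = np.corrcoef(fb_ctx1, ff_ctx2)[0, 1]
print(f"same-context r = {same:.2f}, different-context r = {diff:.2f}")
```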
Cerebral Cortex | 2015
Fraser W. Smith; Melvyn A. Goodale
Neurons, even in the earliest sensory areas of cortex, are subject to a great deal of contextual influence from both within and across modality connections. In the present work, we investigated whether the earliest regions of somatosensory cortex (S1 and S2) would contain content-specific information about visual object categories. We reasoned that this might be possible due to the associations formed through experience that link different sensory aspects of a given object. Participants were presented with visual images of different object categories in 2 fMRI experiments. Multivariate pattern analysis revealed reliable decoding of familiar visual object category in bilateral S1 (i.e., postcentral gyri) and right S2. We further show that this decoding is observed for familiar but not unfamiliar visual objects in S1. In addition, whole-brain searchlight decoding analyses revealed several areas in the parietal lobe that could mediate the observed context effects between vision and somatosensation. These results demonstrate that even the first cortical stages of somatosensory processing carry information about the category of visually presented familiar objects.
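A whole-brain searchlight, as used in the final analysis above, repeats a small local decoding analysis at every location to map where information resides. The toy version below slides a cubic neighbourhood over a made-up voxel grid; real pipelines use spherical searchlights within a brain mask.

```python
# Toy searchlight sketch (not the published pipeline): record
# cross-validated decoding accuracy at every grid location.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)
grid = rng.standard_normal((80, 10, 10, 10))  # trials x (x, y, z), placeholder
labels = rng.integers(0, 2, 80)
radius = 1
acc_map = np.zeros(grid.shape[1:])

for x in range(radius, 10 - radius):
    for y in range(radius, 10 - radius):
        for z in range(radius, 10 - radius):
            # Flatten the local neighbourhood into a feature vector per trial.
            sphere = grid[:, x-radius:x+radius+1,
                             y-radius:y+radius+1,
                             z-radius:z+radius+1].reshape(80, -1)
            acc_map[x, y, z] = cross_val_score(
                SVC(kernel="linear"), sphere, labels, cv=5).mean()
```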
NeuroImage | 2008
Fraser W. Smith; Lars Muckli; David Brennan; Cyril Pernet; Marie L. Smith; Pascal Belin; Frédéric Gosselin; Donald M. Hadley; Jonathan Cavanagh; Philippe G. Schyns
Reverse correlation methods have been widely used in neuroscience for many years and have recently been applied to study the sensitivity of human brain signals (EEG, MEG) to complex visual stimuli. Here we employ one such method, Bubbles (Gosselin, F., Schyns, P.G., 2001. Bubbles: A technique to reveal the use of information in recognition tasks. Vis. Res. 41, 2261-2271), in conjunction with fMRI in the context of a 3AFC facial expression categorization task. We highlight the regions of the brain showing significant sensitivity with respect to the critical visual information required to perform the categorization judgments. Moreover, we reveal the actual subset of visual information which modulates BOLD sensitivity within each such brain region. Finally, we demonstrate the potential of analyzing brain function in terms of the information states of different brain regions. Thus, we can now analyze human brain function in terms of the specific visual information that different brain regions process.
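The core of the Bubbles method is to reveal random parts of a stimulus through Gaussian apertures on each trial and then relate behavioural or BOLD responses to which regions happened to be revealed. A minimal sketch of the mask-generation step, with made-up parameters:

```python
# Bubbles mask generation (Gosselin & Schyns, 2001), simplified:
# sample random Gaussian apertures and reveal only those parts of
# the stimulus on each trial.
import numpy as np

rng = np.random.default_rng(5)
size, n_bubbles, sigma = 256, 12, 10.0
face = rng.random((size, size))  # stand-in for a face image

yy, xx = np.mgrid[0:size, 0:size]
mask = np.zeros((size, size))
for cx, cy in rng.integers(0, size, (n_bubbles, 2)):
    mask += np.exp(-((xx - cx) ** 2 + (yy - cy) ** 2) / (2 * sigma ** 2))
mask = np.clip(mask, 0, 1)

stimulus = face * mask  # sparsely revealed face shown on this trial
# Reverse correlation then weights each trial's mask by the response it
# produced and averages across trials to locate diagnostic information.
```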
European Journal of Neuroscience | 2013
Lucy S. Petro; Fraser W. Smith; Philippe G. Schyns; Lars Muckli
Higher visual areas in the occipitotemporal cortex contain discrete regions for face processing, but it remains unclear if V1 is modulated by top‐down influences during face discrimination, and if this is widespread throughout V1 or localized to retinotopic regions processing task‐relevant facial features. Employing functional magnetic resonance imaging (fMRI), we mapped the cortical representation of two feature locations that modulate higher visual areas during categorical judgements – the eyes and mouth. Subjects were presented with happy and fearful faces, and we measured the fMRI signal of V1 regions processing the eyes and mouth whilst subjects engaged in gender and expression categorization tasks. In a univariate analysis, we used a region‐of‐interest‐based general linear model approach to reveal changes in activation within these regions as a function of task. We then trained a linear pattern classifier to classify facial expression or gender on the basis of V1 data from ‘eye’ and ‘mouth’ regions, and from the remaining non‐diagnostic V1 region. Using multivariate techniques, we show that V1 activity discriminates face categories both in local ‘diagnostic’ and widespread ‘non‐diagnostic’ cortical subregions. This indicates that V1 might receive the processed outcome of complex facial feature analysis from other cortical (i.e. fusiform face area, occipital face area) or subcortical areas (amygdala).
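The univariate arm of this analysis is an ordinary least-squares GLM fit within each region of interest. The sketch below shows the basic computation with placeholder regressors; real designs convolve condition onsets with a haemodynamic response function.

```python
# Simplified ROI-based GLM (illustrative, not a full fMRI package):
# regress each voxel's time course on task regressors and compare betas.
import numpy as np

rng = np.random.default_rng(6)
n_scans, n_voxels = 200, 50
design = np.column_stack([
    rng.random(n_scans),   # regressor: expression task (placeholder)
    rng.random(n_scans),   # regressor: gender task (placeholder)
    np.ones(n_scans),      # constant term
])
roi_ts = rng.standard_normal((n_scans, n_voxels))  # 'eye' region time courses

betas, *_ = np.linalg.lstsq(design, roi_ts, rcond=None)
task_effect = betas[0] - betas[1]  # expression minus gender, per voxel
```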
Behavioral and Brain Sciences | 2013
Lars Muckli; Lucy S. Petro; Fraser W. Smith
Clark offers a powerful description of the brain as a prediction machine, one that represents progress on two distinct levels. First, on an abstract conceptual level, it provides a unifying framework for perception, action, and cognition (including subdivisions such as attention, expectation, and imagination). Second, on a concrete descriptive level, hierarchical prediction offers a way to test and constrain the conceptual elements and mechanisms of predictive coding models (estimation of predictions, prediction errors, and internal models).
PLOS ONE | 2018
Fraser W. Smith; Stephanie Rossit
Facial expressions of emotion are signals of high biological value. Whilst recognition of facial expressions has been much studied in central vision, the ability to perceive these signals in peripheral vision has only seen limited research to date, despite the potential adaptive advantages of such perception. In the present experiment, we investigate facial expression recognition and detection performance for each of the basic emotions (plus neutral) at up to 30 degrees of eccentricity. We demonstrate, as expected, a decrease in recognition and detection performance with increasing eccentricity, with happiness and surprise being the best-recognized expressions in peripheral vision. In detection, however, while happiness and surprise are still well detected, fear is also well detected. We show that fear is better detected than recognized. Our results demonstrate that task constraints shape the perception of expression in peripheral vision and provide novel evidence that detection and recognition rely on partially separate underlying mechanisms, with the latter more dependent on the higher spatial-frequency content of the face stimulus.
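The abstract does not specify its performance measures, but a standard way to compare detection across expressions is signal-detection sensitivity, d'. A purely illustrative computation with made-up rates:

```python
# Signal-detection sensitivity d' = z(hits) - z(false alarms);
# the hit and false-alarm rates here are invented for illustration.
from scipy.stats import norm

def d_prime(hit_rate, fa_rate):
    """Sensitivity independent of response bias."""
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

print(f"fear at 30 deg: d' = {d_prime(0.85, 0.10):.2f}")
```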
Journal of Vision | 2011
Petra Vetter; Fraser W. Smith; Lars Muckli