Frances Crabbe
University of Glasgow
Publications
Featured research published by Frances Crabbe.
NeuroImage | 2015
Cyril Pernet; Philip McAleer; Marianne Latinus; Krzysztof J. Gorgolewski; Ian Charest; Patricia E. G. Bestelmeyer; Rebecca Watson; David Fleming; Frances Crabbe; Mitchell Valdés-Sosa; Pascal Belin
fMRI studies increasingly examine functions and properties of non-primary areas of human auditory cortex. However, there is currently no standardized localization procedure to reliably identify specific areas across individuals, comparable to the standard ‘localizers’ available in the visual domain. Here we present an fMRI ‘voice localizer’ scan allowing rapid and reliable localization of the voice-sensitive ‘temporal voice areas’ (TVA) of human auditory cortex. We describe results obtained using this standardized localizer scan in a large cohort of normal adult subjects. Most participants (94%) showed bilateral patches of significantly greater response to vocal than to non-vocal sounds along the superior temporal sulcus/gyrus (STS/STG). Individual activation patterns, although reproducible, showed high inter-individual variability in precise anatomical location. Cluster analysis of individual peaks from the large cohort highlighted three bilateral clusters of voice sensitivity, or “voice patches”, along posterior (TVAp), mid (TVAm), and anterior (TVAa) STS/STG, respectively. A series of extra-temporal areas, including bilateral inferior prefrontal cortex and the amygdalae, showed small but reliable voice sensitivity as part of a large-scale cerebral voice network. Stimuli for the voice localizer scan and probabilistic maps in MNI space are available for download.
Cerebral Cortex | 2011
Marianne Latinus; Frances Crabbe; Pascal Belin
Temporal voice areas showing greater activity for vocal than for non-vocal sounds have been identified along the superior temporal sulcus (STS); additional voice-sensitive areas have been described in the frontal and parietal lobes. Yet the role of voice-sensitive regions in representing voice identity remains unclear. Using a functional magnetic resonance adaptation design, we aimed to disentangle acoustic-based from identity-based representations of voices. Sixteen participants were scanned while listening to pairs of voices drawn from morphed continua between two initially unfamiliar voices, before and after a voice learning phase. In a given pair, the first and second stimuli could be identical or acoustically different and, at the second session, perceptually similar or different. At both sessions, the right mid-STS/superior temporal gyrus (STG) and superior temporal pole (sTP) showed sensitivity to acoustical changes. Critically, voice learning induced changes in the acoustical processing of voices in the inferior frontal cortices (IFCs). At the second session only, the right IFC and left cingulate gyrus showed sensitivity to changes in perceived identity. The processing of voice identity thus appears to be subserved by a large network of brain areas ranging from the sTP, involved in an acoustic-based representation of unfamiliar voices, to areas along the convexity of the IFC for identity-related processing of familiar voices.
The Journal of Neuroscience | 2014
Rebecca Watson; Marianne Latinus; Takao Noguchi; Oliver Garrod; Frances Crabbe; Pascal Belin
The integration of emotional information from the face and voice of other persons is known to be mediated by a number of “multisensory” cerebral regions, such as the right posterior superior temporal sulcus (pSTS). However, it is not fully clear whether multimodal integration in these regions is attributable to interleaved populations of unisensory neurons responding to the face or the voice, or rather to multimodal neurons receiving input from the two modalities. Here, we examine this question using functional magnetic resonance adaptation and dynamic audiovisual stimuli in which emotional information was manipulated parametrically and independently in the face and voice via morphing between angry and happy expressions. Healthy human adult subjects were scanned while performing a happy/angry emotion categorization task on a series of such stimuli included in a fast event-related, continuous carryover design. Subjects integrated both face and voice information when categorizing emotion, although face information was weighted more heavily, and showed behavioral adaptation effects both within and across modality. Adaptation also occurred at the neural level: in addition to modality-specific adaptation in visual and auditory cortices, we observed for the first time a crossmodal adaptation effect. Specifically, the fMRI signal in the right pSTS was reduced in response to a stimulus in which the facial emotion was similar to the vocal emotion of the preceding stimulus. These results suggest that the integration of emotional information from face and voice in the pSTS involves a detectable proportion of bimodal neurons that combine inputs from visual and auditory cortices.
Cortex | 2014
Rebecca Watson; Marianne Latinus; Ian Charest; Frances Crabbe; Pascal Belin
A functional role for the superior temporal sulcus (STS) has been suggested by a number of studies, including those investigating face perception, voice perception, and face–voice integration. However, the nature of the STS preference for these ‘social stimuli’ remains unclear, as does the location within the STS of specific types of information processing. The aim of this study was to directly examine properties of the STS in terms of selective response to social stimuli. We used functional magnetic resonance imaging (fMRI) to scan participants whilst they were presented with auditory, visual, or audiovisual stimuli of people or objects, with the intention of localising areas preferring both faces and voices (i.e., ‘people-selective’ regions) and audiovisual regions specifically involved in integrating person-related information. Results highlighted a ‘people-selective, heteromodal’ region in the trunk of the right STS that was activated by both faces and voices, and a restricted portion of the right posterior STS (pSTS) with an integrative preference for information from people, as compared with objects. These results point towards a dedicated role for the STS as a ‘social-information processing’ centre.
Cerebral Cortex | 2013
Ian Charest; Cyril Pernet; Marianne Latinus; Frances Crabbe; Pascal Belin
Normal listeners effortlessly determine a person's gender by voice, but the cerebral mechanisms underlying this ability remain unclear. Here, we demonstrate two stages of cerebral processing during voice gender categorization. Using voice morphing along with an adaptation-optimized functional magnetic resonance imaging design, we found that secondary auditory cortex, including the anterior part of the temporal voice areas in the right hemisphere, responded primarily to the acoustical distance from the previously heard stimulus. In contrast, a network of bilateral regions involving inferior prefrontal and anterior and posterior cingulate cortex reflected perceived stimulus ambiguity. These findings suggest that voice gender recognition involves neuronal populations along the auditory ventral stream responsible for auditory feature extraction, working in tandem with the prefrontal cortex in voice gender perception.
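To make the adaptation-to-acoustic-distance idea concrete, here is a minimal, hypothetical sketch of a trial-by-trial acoustic-distance predictor along a morph continuum; it is not the authors' actual analysis pipeline, which the abstract does not describe.

```python
# Hypothetical illustration: a trial-by-trial "acoustic distance" predictor
# along a gender-morph continuum (0.0 = fully male, 1.0 = fully female).
# NOT the authors' analysis pipeline; it only makes the idea of adaptation
# to the distance from the previously heard stimulus concrete.
import numpy as np

def acoustic_distance_regressor(morph_levels):
    """Distance of each stimulus from the previously heard one; the first
    trial has no predecessor, so its value is set to zero."""
    levels = np.asarray(morph_levels, dtype=float)
    distances = np.abs(np.diff(levels))
    return np.concatenate(([0.0], distances))

if __name__ == "__main__":
    print(acoustic_distance_regressor([0.1, 0.9, 0.85, 0.2]))
    # -> [0.   0.8  0.05 0.65]
```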
PLOS ONE | 2011
Karin Petrini; Frances Crabbe; Carol Sheridan; Frank E. Pollick
In humans, emotions from music serve important communicative roles. Despite a growing interest in the neural basis of music perception, action, and emotion, the majority of previous studies in this area have focused on the auditory aspects of music performances. Here we investigate how the brain processes the emotions elicited by audiovisual music performances. We used event-related functional magnetic resonance imaging, and in Experiment 1 we defined the areas responding to audiovisual (musicians' movements with music), visual (musicians' movements only), and auditory (music only) emotional displays. Subsequently, a region-of-interest analysis was performed to examine whether any of the areas detected in Experiment 1 showed greater activation for emotionally mismatching performances (combining the musicians' movements with mismatching emotional sound) than for emotionally matching performances (combining the musicians' movements with matching emotional sound), as presented in Experiment 2 to the same participants. The insula and the left thalamus were found to respond consistently to visual, auditory, and audiovisual emotional information and to show increased activation for emotionally mismatching displays in comparison with emotionally matching displays. In contrast, the right thalamus was found to respond to audiovisual emotional displays and to show similar activation for emotionally matching and mismatching displays. These results suggest that the insula and left thalamus play an active role in detecting emotional correspondence between auditory and visual information during music performances, whereas the right thalamus plays a different role.
Frontiers in Human Neuroscience | 2013
Rebecca Watson; Marianne Latinus; Takao Noguchi; Oliver Garrod; Frances Crabbe; Pascal Belin
In the everyday environment, affective information is conveyed by both the face and the voice. Studies have demonstrated that a concurrently presented voice can alter the way an emotional facial expression is perceived, and vice versa, leading to emotional conflict when the information in the two modalities is mismatched. Additionally, evidence suggests that incongruence of emotional valence activates cerebral networks involved in conflict monitoring and resolution. However, it is currently unclear whether this is due to task difficulty, in that incongruent stimuli are harder to categorize, or simply to the detection of mismatching information in the two modalities. The aim of the present fMRI study was to examine the neurophysiological correlates of processing incongruent emotional information, independent of task difficulty. Subjects were scanned while judging the emotion of face-voice affective stimuli. Both the face and the voice were parametrically morphed between anger and happiness and then paired in all audiovisual combinations, resulting in stimuli each defined by two separate values: the degree of incongruence between the face and voice, and the degree of clarity of the combined face-voice information. Because of the specific morphing procedure used, we hypothesized that the clarity value, rather than the incongruence value, would better reflect task difficulty. Behavioral data revealed that participants integrated face and voice affective information, and that the clarity value, as opposed to the incongruence value, correlated with categorization difficulty. Cerebrally, incongruence was associated with activity in the superior temporal region, an effect that emerged after task difficulty had been accounted for. Overall, our results suggest that activation in the superior temporal region in response to incongruent information cannot be explained simply by task difficulty, and may instead reflect the detection of mismatching information between the two modalities.
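As a purely illustrative sketch, a stimulus pair's incongruence and clarity values could be derived from its face and voice morph levels roughly as follows; these formulas are assumptions made for illustration, since the abstract does not give the study's exact definitions.

```python
# Hypothetical sketch only: deriving "incongruence" and "clarity" values from
# face and voice morph levels (0.0 = fully angry, 1.0 = fully happy). The
# study's actual definitions are not given in the abstract.

def incongruence(face_morph: float, voice_morph: float) -> float:
    """Mismatch between the emotions conveyed by the face and the voice."""
    return abs(face_morph - voice_morph)

def clarity(face_morph: float, voice_morph: float) -> float:
    """Distance of the combined face-voice signal from the ambiguous
    midpoint (0.5), rescaled to 0..1; higher values should be easier
    to categorize as happy or angry."""
    combined = (face_morph + voice_morph) / 2.0
    return abs(combined - 0.5) * 2.0

if __name__ == "__main__":
    # A mostly happy face paired with a mostly angry voice:
    print(incongruence(0.75, 0.25))  # 0.5 -> strongly mismatched
    print(clarity(0.75, 0.25))       # 0.0 -> maximally ambiguous overall
```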
BMJ | 2015
Joanna M. Wardlaw; H. Davies; T.C. Booth; Graeme Laurie; A. Compston; C. Freeman; Martin O. Leach; Adam D. Waldman; D.J. Lomas; Klaus Kessler; Frances Crabbe; Alan Jackson
Incidental findings of imaging research studies can turn healthy individuals into anxious patients, while putting an extra burden on primary care. J. M. Wardlaw and colleagues argue that doctors should ensure that the personal, ethical, healthcare, and cost implications of these common findings are managed proportionately, sensitively, and economically.
Cognitive, Affective, & Behavioral Neuroscience | 2014
Phil McAleer; Frank E. Pollick; Scott A. Love; Frances Crabbe; Jeffrey M. Zacks
It has been proposed that we make sense of the movements of others by observing fluctuations in the kinematic properties of their actions. At the neural level, activity in the human motion complex (hMT+) and posterior superior temporal sulcus (pSTS) has been implicated in this relationship. However, previous neuroimaging studies have largely utilized brief, diminished stimuli, and the role of relevant kinematic parameters in the processing of human action remains unclear. We addressed this issue by showing extended-duration natural displays of an actor engaged in two common activities to 12 participants in an fMRI study under passive viewing conditions. Our region-of-interest analysis focused on three neural areas (hMT+, pSTS, and the fusiform face area) and was accompanied by a whole-brain analysis. The kinematic properties of the actor, particularly the speed of body-part motion and the distance between body parts, were related to activity in hMT+ and pSTS. Whole-brain exploratory analyses revealed additional areas in posterior cortex, frontal cortex, and the cerebellum whose activity was related to these features. These results indicate that the kinematic properties of people's movements are continually monitored during everyday activity as a step toward determining actions and intent.
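For illustration only, the two kinematic properties named above, body-part speed and inter-part distance, could be computed from position traces along the following lines; the feature definitions here are assumptions, not the authors' actual motion-tracking pipeline.

```python
# Hypothetical sketch of the kinematic features mentioned in the abstract
# (speed of body-part motion, distance between body parts), computed from
# 2D position traces; not the authors' actual processing pipeline.
import numpy as np

def part_speed(positions, dt):
    """Frame-to-frame speed of one body part, given an (n_frames, 2) array
    of x/y positions and the frame interval dt in seconds."""
    displacements = np.linalg.norm(np.diff(positions, axis=0), axis=1)
    return displacements / dt

def part_distance(part_a, part_b):
    """Frame-by-frame Euclidean distance between two body parts."""
    return np.linalg.norm(part_a - part_b, axis=1)

if __name__ == "__main__":
    hand = np.array([[0.00, 0.00], [0.10, 0.00], [0.30, 0.10]])
    head = np.array([[0.00, 1.00], [0.00, 1.00], [0.05, 1.00]])
    print(part_speed(hand, dt=1 / 30))   # speed between consecutive frames
    print(part_distance(hand, head))     # hand-head distance per frame
```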
Frontiers in Human Neuroscience | 2014
Paddy D. Ross; Beatrice de Gelder; Frances Crabbe; Marie-Hélène Grosbras
Our ability to read other people's non-verbal signals is refined throughout childhood and adolescence. How this is paralleled by brain development has been investigated mainly with regard to face perception, showing a protracted functional development of the face-selective visual cortical areas. In view of the importance of whole-body expressions in interpersonal communication, it is also important to understand the development of the brain areas sensitive to these social signals. Here we used functional magnetic resonance imaging (fMRI) to compare brain activity in a group of 24 children (aged 6–11) and 26 adults while they passively watched short videos of body or object movements. We observed activity in similar regions in both groups, namely the extrastriate body area (EBA), fusiform body area (FBA), posterior superior temporal sulcus (pSTS), amygdala, and premotor regions. Adults showed additional activity in the inferior frontal gyrus (IFG). Within the main body-selective regions (EBA, FBA, and pSTS), the strength and spatial extent of the fMRI signal change were larger in adults than in children. Multivariate Bayesian (MVB) analysis showed that the spatial pattern of neural representation within those regions did not change with age. Our results indicate, for the first time, that body perception, like face perception, continues to mature through the second decade of life.