Patricia E. G. Bestelmeyer
Bangor University
Publications
Featured research published by Patricia E. G. Bestelmeyer.
Psychiatry Research: Neuroimaging | 2009
Patricia E. G. Bestelmeyer; Louise H. Phillips; Caroline Crombie; Philip J. Benson; David St Clair
It has been proposed that psychophysiological abnormalities in schizophrenia, such as decreased amplitude of the evoked potential component P300, may be genetically influenced. Studies of the heritability of the P300 have used different and typically more complex tasks than those used in clinical studies of schizophrenia. Here we present data on P300 parameters from the same set of auditory and visual tasks in samples of twins and of patients with schizophrenia or bipolar disorder, in order to examine the P300 as a possible endophenotype. Evidence from the twin study indicated that the auditory, but not visual, P300 amplitude is genetically influenced at centro-parietal sites. Similarly, auditory and, to a lesser extent, visual P300 amplitudes were decreased in schizophrenia and bipolar disorder. Results indicate that the auditory P300 may serve as an endophenotype for schizophrenia. However, given that schizophrenia and bipolar disorder patients could not be distinguished on these measures at midline sites, it appears that the P300 may be a marker for functional psychosis in general rather than being specific to schizophrenia.
Cognition | 2010
Patricia E. G. Bestelmeyer; Julien Rouger; Lisa M. DeBruine; Pascal Belin
Previous research has demonstrated perceptual aftereffects for emotionally expressive faces, but the extent to which they can also be obtained in a different modality is unknown. In two experiments we show for the first time that adaptation to affective, non-linguistic vocalisations elicits significant auditory aftereffects. Adaptation to angry vocalisations caused voices drawn from an anger-fear morphed continuum to be perceived as less angry and more fearful, while adaptation to fearful vocalisations elicited opposite aftereffects (Experiment 1). We then tested the link between these aftereffects and the underlying acoustics by using caricatured adaptors. Although caricatures exaggerated the acoustical and affective properties of the vocalisations, the caricatured adaptors resulted in aftereffects which were comparable to those obtained with natural vocalisations (Experiment 2). Our findings suggest that these aftereffects cannot be solely explained by low-level adaptation to acoustical characteristics of the adaptors but are likely to depend on higher-level adaptation of neural representations of vocal affect.
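The anger–fear continua described above rest on weighted blending between two endpoint stimuli. As a rough illustration only: real vocal morphing is done with dedicated tools operating on extracted acoustic parameters (pitch, spectral envelope, timing), not on raw waveforms, so the linear interpolation below is merely a stand-in for the weighting scheme, with all array names hypothetical.

```python
import numpy as np

def continuum(a: np.ndarray, b: np.ndarray, steps: int) -> np.ndarray:
    """Return `steps` stimuli blending endpoint a (e.g. anger) into b (e.g. fear)."""
    weights = np.linspace(0.0, 1.0, steps)
    # Each morph is a convex combination of the two endpoints.
    return np.array([(1 - w) * a + w * b for w in weights])

# Toy 8-dimensional "acoustic parameter" vectors standing in for real stimuli.
anger = np.zeros(8)
fear = np.ones(8)
morphs = continuum(anger, fear, 5)  # 5 stimuli from 100% anger to 100% fear
```

Weights outside the [0, 1] range extrapolate past an endpoint rather than interpolating between the two, which is the idea behind the caricatured adaptors used in Experiment 2.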
NeuroImage | 2015
Cyril Pernet; Philip McAleer; Marianne Latinus; Krzysztof J. Gorgolewski; Ian Charest; Patricia E. G. Bestelmeyer; Rebecca Watson; David Fleming; Frances Crabbe; Mitchell Valdés-Sosa; Pascal Belin
fMRI studies increasingly examine functions and properties of non-primary areas of human auditory cortex. However, there is currently no standardized localization procedure to reliably identify specific areas across individuals, such as the standard ‘localizers’ available in the visual domain. Here we present an fMRI ‘voice localizer’ scan allowing rapid and reliable localization of the voice-sensitive ‘temporal voice areas’ (TVA) of human auditory cortex. We describe results obtained using this standardized localizer scan in a large cohort of normal adult subjects. Most participants (94%) showed bilateral patches of significantly greater response to vocal than non-vocal sounds along the superior temporal sulcus/gyrus (STS/STG). Individual activation patterns, although reproducible, showed high inter-individual variability in precise anatomical location. Cluster analysis of individual peaks from the large cohort highlighted three bilateral clusters of voice sensitivity, or “voice patches”, along posterior (TVAp), mid (TVAm) and anterior (TVAa) STS/STG, respectively. A series of extra-temporal areas, including bilateral inferior prefrontal cortex and amygdalae, showed small but reliable voice sensitivity as part of a large-scale cerebral voice network. Stimuli for the voice localizer scan and probabilistic maps in MNI space are available for download.
Current Biology | 2013
Marianne Latinus; Philip McAleer; Patricia E. G. Bestelmeyer; Pascal Belin
Summary Listeners exploit small interindividual variations around a generic acoustical structure to discriminate and identify individuals from their voice—a key requirement for social interactions. The human brain contains temporal voice areas (TVA) [1] involved in an acoustic-based representation of voice identity [2–6], but the underlying coding mechanisms remain unknown. Indirect evidence suggests that identity representation in these areas could rely on a norm-based coding mechanism [4, 7–11]. Here, we show by using fMRI that voice identity is coded in the TVA as a function of acoustical distance to two internal voice prototypes (one male, one female)—approximated here by averaging a large number of same-gender voices by using morphing [12]. Voices more distant from their prototype are perceived as more distinctive and elicit greater neuronal activity in voice-sensitive cortex than closer voices—a phenomenon not merely explained by neuronal adaptation [13, 14]. Moreover, explicit manipulations of distance-to-mean by morphing voices toward (or away from) their prototype elicit reduced (or enhanced) neuronal activity. These results indicate that voice-sensitive cortex integrates relevant acoustical features into a complex representation referenced to idealized male and female voice prototypes. More generally, they shed light on remarkable similarities in cerebral representations of facial and vocal identity.
The Journal of Neuroscience | 2014
Patricia E. G. Bestelmeyer; Pierre Maurage; Julien Rouger; Marianne Latinus; Pascal Belin
The human voice carries speech as well as important nonlinguistic signals that influence our social interactions. Among these cues that impact our behavior and communication with other people is the perceived emotional state of the speaker. A theoretical framework for the neural processing stages of emotional prosody has suggested that auditory emotion is perceived in multiple steps (Schirmer and Kotz, 2006) involving low-level auditory analysis and integration of the acoustic information followed by higher-level cognition. Empirical evidence for this multistep processing chain, however, is still sparse. We examined this question using functional magnetic resonance imaging and a continuous carry-over design (Aguirre, 2007) to measure brain activity while volunteers listened to non-speech-affective vocalizations morphed on a continuum between anger and fear. Analyses dissociated neuronal adaptation effects induced by similarity in perceived emotional content between consecutive stimuli from those induced by their acoustic similarity. We found that bilateral voice-sensitive auditory regions as well as right amygdala coded the physical difference between consecutive stimuli. In contrast, activity in bilateral anterior insulae, medial superior frontal cortex, precuneus, and subcortical regions such as bilateral hippocampi depended predominantly on the perceptual difference between morphs. Our results suggest that the processing of vocal affect recognition is a multistep process involving largely distinct neural networks. Amygdala and auditory areas predominantly code emotion-related acoustic information while more anterior insular and prefrontal regions respond to the abstract, cognitive representation of vocal affect.
Visual Cognition | 2010
Patricia E. G. Bestelmeyer; Benedict C. Jones; Lisa M. DeBruine; Anthony C. Little; Lisa L. M. Welling
Bruce and Young (1986) proposed that functionally different aspects of faces (e.g., sex, identity, and expression) are processed independently. Although interdependent processing of identity and expression and of identity and sex have been demonstrated previously, evidence for interdependent processing of sex and expression is equivocal. Using a visual adaptation paradigm, we show that expression aftereffects can be simultaneously induced in different directions along anger–fear continua for male and female faces (Experiment 1) and for East Asian and Black African faces (Experiment 2). These findings for sex- and race-contingent expression aftereffects suggest that processing of expression is interdependent with processing of sex and race and are therefore problematic for models of face perception that have emphasized independent processing of functionally different aspects of faces. By contrast, our findings are consistent with models of face processing that propose that invariant physical aspects of faces and changeable social cues can be processed interdependently.
Current Biology | 2011
Patricia E. G. Bestelmeyer; Pascal Belin; Marie-Hélène Grosbras
Summary Functional magnetic resonance imaging (fMRI) research has revealed bilateral cortical regions along the upper banks of the superior temporal sulci (STS) that respond preferentially to voices compared to non-vocal, environmental sounds [1,2]. This sensitivity is particularly pronounced in the right hemisphere. Voice perception models imply that these regions, referred to as the temporal voice areas (TVAs), could correspond to a first stage of voice-specific processing in auditory cortex [3,4], after which different types of vocal information are processed in interacting but partially independent functional pathways. However, clear causal evidence for this claim is missing. Here we provide the first direct link between TVA activity and voice detection ability using repetitive transcranial magnetic stimulation (rTMS). Voice/non-voice discrimination ability was impaired when rTMS was targeted at the right TVA compared with a control site. In contrast, a lower-level loudness judgement task was not differentially affected by site of stimulation. Results imply that neuronal computations in the right TVA are necessary for the distinction between human voices and other, non-vocal sounds.
Neuropsychologia | 2004
Patricia E. G. Bestelmeyer; David P. Carey
A Posner-like paradigm was employed to investigate the effects of valid and invalid cueing of each hand on reaction time, movement time and peak velocity in an aiming task. Given claims of left hemisphere superiority in movement selection and inhibition (and the privileged within-hemisphere access of the right hand to such systems), it was hypothesised that invalidly cueing the left hand (i.e. right-hand movement precued, left-handed movement required by a go signal) would result in increased reaction time relative to invalid right-hand cueing. The hypothesis was not confirmed as reaction times of both hands were slowed equivalently by invalid cueing. Nevertheless, it was found that the movement duration of the left hand was increased substantially by invalid cueing, while the right hand was unaffected on this measure, suggesting a possible intentional rather than attentional difference between the two hands. These results are discussed in terms of a possible asymmetry of intentional processes related to hand movement and the right-hand advantage in movement duration.
NeuroImage: Clinical | 2013
Pierre Maurage; Patricia E. G. Bestelmeyer; Julien Rouger; Ian Charest; Pascal Belin
Binge drinking is now considered a central public health issue and is associated with emotional and interpersonal problems, but the neural implications of these deficits remain unexplored. The present study aimed at offering the first insights into the effects of binge drinking on the neural processing of vocal affect. On the basis of an alcohol-consumption screening phase (204 students), 24 young adults (12 binge drinkers and 12 matched controls, mean age: 23.8 years) were selected and performed an emotional categorisation task on morphed vocal stimuli (drawn from a morphed fear–anger continuum) during fMRI scanning. In comparison to controls, binge drinkers presented (1) worse behavioural performance in emotional affect categorisation; (2) reduced activation of bilateral superior temporal gyrus; and (3) increased activation of right middle frontal gyrus. These results constitute the first evidence of altered cerebral processing of emotional stimuli in binge drinking and confirm that binge drinking leads to marked cerebral changes, which has important implications for research and clinical practice.
Psychiatry Research: Neuroimaging | 2012
Patricia E. G. Bestelmeyer
Amplitude reduction of the P300 event-related potential has long been suggested as a marker for schizophrenia. However, recent research has shown that this reduction in the P300 amplitude is not specific to schizophrenia as it can also be observed in related illnesses such as bipolar disorder. Due to this lack of specificity the P300 elicited using traditional oddball paradigms may be a less valuable endophenotypic marker. The current study employed a cognitively demanding three-stimulus oddball paradigm to elicit the P300 to visual target and distracting stimuli. Patients with schizophrenia showed amplitude reductions of P300 components to targets, distractors and frequent stimuli. The P300 in patients with bipolar disorder was not significantly different from either group. The pattern of results may further the understanding of the nature of the impairment in schizophrenia.