Zarinah K. Agnew
University College London
Publications
Featured research published by Zarinah K. Agnew.
Cerebral Cortex | 2010
Rebecca Elliott; Zarinah K. Agnew; J.F.W. Deakin
Functional imaging studies have revealed roles for orbitofrontal cortex (OFC) in reward processing and decision making. In many situations, rewards signal that the current behavior should be maintained, whereas punishments cue a change in behavior. Thus, hedonic responses to reinforcers are conflated with their function as behavioral cues. In an attempt to disambiguate these functions, we performed a functional magnetic resonance imaging study of a 2-choice decision-making task. After each trial, subjects were rewarded or punished and independently provided with a cue to maintain or change behavior. We identified key regions of OFC involved in these processes. An anterior medial focus responded to reward, whereas bilateral lateral foci responded to punishment. The right-sided lateral region that responded to punishment also responded to cues for behavior change (shift), whereas a more ventral and anterior bilateral region responded to cues for behavioral maintenance (stay). The right-sided stay region responded specifically when stay cues were combined with punishment. These results support the view that OFC codes both hedonic responses to reinforcers and their behavioral consequences. Punishments and shift cues are associated with the same right lateral OFC focus, suggesting a fundamental connection between emotive response to negative reinforcement and use of negative information to cue behavioral change.
Journal of Cognitive Neuroscience | 2012
Carolyn McGettigan; Samuel Evans; Stuart Rosen; Zarinah K. Agnew; Poonam Shah; Sophie K. Scott
The question of hemispheric lateralization of neural processes is pertinent to a range of subdisciplines of cognitive neuroscience. Language is often assumed to be left-lateralized in the human brain, but there has been a long-running debate about the underlying reasons for this. We addressed this problem with fMRI by identifying the neural responses to amplitude and spectral modulations in speech, and how these interact with speech intelligibility, to test previous claims for hemispheric asymmetries in acoustic and linguistic processes in speech perception. We used both univariate and multivariate analyses of the data, which enabled us both to identify the networks involved in processing these acoustic and linguistic factors and to test the significance of any apparent hemispheric asymmetries. We demonstrate bilateral activation of superior temporal cortex in response to speech-derived acoustic modulations in the absence of intelligibility. However, in a contrast of amplitude-modulated and spectrally modulated conditions that differed only in their intelligibility (where one was partially intelligible and the other unintelligible), we show a left-dominant pattern of activation in STS, inferior frontal cortex, and insula. Crucially, multivariate pattern analysis showed that there were significant differences between the left and the right hemispheres only in the processing of intelligible speech. This result shows that the left-hemisphere dominance in linguistic processing does not arise from low-level, speech-derived acoustic factors, and that multivariate pattern analysis provides a method for unbiased testing of hemispheric asymmetries in processing.
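To make the logic of the multivariate test concrete, here is a minimal sketch in Python (NumPy/scikit-learn) of how cross-validated decoding accuracy can be compared across two hemispheric ROIs. All data are synthetic and all names and effect sizes are illustrative placeholders, not the study's materials or analysis pipeline:

```python
# Minimal MVPA sketch (synthetic data): compare how well a linear
# classifier decodes intelligible vs. unintelligible trials from
# voxel patterns in a left vs. a right temporal ROI.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_voxels = 80, 200
labels = np.repeat([0, 1], n_trials // 2)  # 0 = unintelligible, 1 = intelligible

def simulate_roi(effect_size):
    """Trial-by-voxel patterns with a condition effect of the given size."""
    patterns = rng.normal(size=(n_trials, n_voxels))
    patterns[labels == 1, : n_voxels // 4] += effect_size  # condition signal
    return patterns

rois = {"left STS (hypothetical)": simulate_roi(0.5),
        "right STS (hypothetical)": simulate_roi(0.1)}

for name, patterns in rois.items():
    accuracy = cross_val_score(LinearSVC(), patterns, labels, cv=5).mean()
    print(f"{name}: mean cross-validated accuracy = {accuracy:.2f}")
```

The point of this construction is that hemispheric claims rest on comparing decoding accuracies between ROIs, rather than on comparing raw activation levels, which is what makes the asymmetry test unbiased.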
PLOS ONE | 2012
Zarinah K. Agnew; Richard Wise; Robert Leech
Mirror neurons are single cells found in macaque premotor and parietal cortices that are active during action execution and observation. In non-human primates, mirror neurons have only been found in relation to object-directed movements or communicative gestures, as non-object-directed actions of the upper limb are not well characterized in non-human primates. Mirror neurons provide important evidence for motor simulation theories of cognition, sometimes referred to as the direct matching hypothesis, which propose that observed actions are mapped onto associated motor schemata in a direct and automatic manner. This study, for the first time, directly compares mirror responses, defined as the overlap between action execution and observation, during object-directed and meaningless non-object-directed actions. We present functional MRI data that demonstrate a clear dissociation between object-directed and non-object-directed actions within the human mirror system. A premotor and parietal network was preferentially active during object-directed actions, whether observed or executed. Moreover, we report spatially correlated activity across multiple voxels for observation and execution of an object-directed action. In contrast to predictions made by motor simulation theory, no similar activity was observed for non-object-directed actions. These data demonstrate that object-directed and meaningless non-object-directed actions are subserved by different neuronal networks and that the human mirror response is significantly greater for object-directed actions. These data have important implications for understanding the human mirror system and for simulation theories of motor cognition. Subsequent theories of motor simulation must account for these differences, possibly by acknowledging the role of experience in modulating the mirror response.
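The "spatially correlated activity across multiple voxels" comparison can be illustrated with a small sketch: treat each voxel's contrast estimate as one observation and correlate the execution map with the observation map within a single ROI. Everything below is synthetic and illustrative, not the paper's data or code:

```python
# Sketch of the spatial-correlation logic (synthetic data): per-voxel
# contrast estimates for executed and observed actions share a common
# "mirror" component, so the two maps correlate across voxels.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(1)
n_voxels = 500

shared = rng.normal(size=n_voxels)                     # common "mirror" component
execute_map = shared + rng.normal(scale=0.7, size=n_voxels)
observe_map = shared + rng.normal(scale=0.7, size=n_voxels)

r, p = pearsonr(execute_map, observe_map)
print(f"execution vs. observation: r = {r:.2f}, p = {p:.2g}")
```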
The Journal of Neuroscience | 2008
Zarinah K. Agnew; Richard Wise
There is common neural activity in parietal and premotor cortex when executing and observing goal-directed movements: the “mirror” response. In addition, active and passive limb movements cause overlapping activity in premotor and somatosensory cortex. This association of motor and sensory activity cannot ascribe agency, the ability to discriminate between self- and non-self-generated events. This requires that some signals accompanying self-initiated limb movement dissociate from those evoked by observing the action of another or by movement imposed on oneself by external force. We demonstrated associated activity within the medial parietal operculum in response to feedforward visual or somatosensory information accompanying observed and imposed finger movements. In contrast, the response to motor and somatosensory information during self-initiated finger movements and observed movements resulted in activity localized to the lateral parietal operculum. This ascribes separate functions to medial and lateral second-order somatosensory cortex, anatomically dissociating the agent and the mirror response, and demonstrating how executed and observed events are distinguished despite common activity in widespread sensorimotor cortices.
NeuroImage | 2013
Zarinah K. Agnew; Carolyn McGettigan; B. Banks; Sophie K. Scott
Production of actions is highly dependent on concurrent sensory information. In speech production, for example, movement of the articulators is guided by both auditory and somatosensory input. It has been demonstrated in non-human primates that self-produced vocalizations and those of others are differentially processed in the temporal cortex. The aim of the current study was to investigate how auditory and motor responses differ for self-produced and externally produced speech. Using functional neuroimaging, subjects were asked to produce sentences aloud, to silently mouth while listening to a different speaker producing the same sentence, to passively listen to sentences being read aloud, or to read sentences silently. We show that separate regions of the superior temporal cortex display distinct response profiles to speaking aloud, mouthing while listening, and passive listening. Responses in anterior superior temporal cortices in both hemispheres are greater for passive listening compared with both mouthing while listening and speaking aloud. This is the first demonstration that articulation, whether or not it has auditory consequences, modulates responses of the dorsolateral temporal cortex. In contrast, posterior regions of the superior temporal cortex are recruited during both articulation conditions. In dorsal regions of the posterior superior temporal gyrus, responses to mouthing while listening and speaking aloud were equivalent, and in more ventral posterior superior temporal sulcus, responses were greater for speaking aloud compared with mouthing while listening. These data demonstrate an anterior–posterior division of superior temporal regions, where anterior fields are suppressed during motor output, potentially for the purpose of enhanced detection of the speech of others. We suggest posterior fields are engaged in auditory processing for the guidance of articulation by auditory information.
Proceedings of the National Academy of Sciences of the United States of America | 2010
Carolyn McGettigan; Zarinah K. Agnew; Sophie K. Scott
A recent PNAS article by Yuen et al. (1) presented evidence for articulation-specific effects of auditory speech on subsequent production of spoken syllables (Experiment 1). This was interpreted as evidence that motor routines are “automatically” and “involuntarily” activated by heard speech, and the authors concluded that their paradigm provides “a behavioral diagnostic for the activation of articulatory information in speech perception.” Electropalatography is an appealing approach to investigating the nature of motor involvement in speech perception. The authors demonstrated that an incongruent alveolar /t/ sound led to a greater proportion of alveolar tongue contact in the initial phase of subsequent production of /s/ and /k/ sounds, relative to a congruent condition. However, the claim that hearing speech evokes automatic motor activation is undermined by the inclusion of an overt phoneme monitoring task on the auditory speech. In Experiment 1, participants heard a distractor syllable, then received a written cue to produce a rhyming target aloud, after which they were visually prompted to indicate whether or not a specific phoneme had occurred in the original distractor. Phoneme monitoring requires segmentation of heard speech into its constituent elements, and it has long been argued that such tasks engage neural systems beyond those associated with normal speech comprehension (2, 3). For example, illiterate individuals can understand speech but cannot perform simple phoneme segmentation (4). Consistent with this perspective, a recent study showed that repetitive transcranial magnetic stimulation (TMS) applied to premotor cortex has a detrimental effect on performance of a task requiring phoneme segmentation, but no effect on tasks relying on simpler acoustic processing (e.g., identification of isolated phonemes) (5).
Cerebral Cortex | 2015
César F. Lima; Nadine Lavan; Samuel Evans; Zarinah K. Agnew; Andrea R. Halpern; Pradheep Shanmugalingam; Sophie Meekings; Dana Boebinger; Markus Ostarek; Carolyn McGettigan; Jane E. Warren; Sophie K. Scott
Humans can generate mental auditory images of voices or songs, sometimes perceiving them almost as vividly as perceptual experiences. The functional networks supporting auditory imagery have been described, but less is known about the systems associated with interindividual differences in auditory imagery. Combining voxel-based morphometry and fMRI, we examined the structural basis of interindividual differences in how auditory images are subjectively perceived, and explored associations between auditory imagery, sensory-based processing, and visual imagery. Vividness of auditory imagery correlated with gray matter volume in the supplementary motor area (SMA), parietal cortex, medial superior frontal gyrus, and middle frontal gyrus. An analysis of functional responses to different types of human vocalizations revealed that the SMA and parietal sites that predict imagery are also modulated by sound type. Using representational similarity analysis, we found that higher representational specificity of heard sounds in SMA predicts vividness of imagery, indicating a mechanistic link between sensory- and imagery-based processing in sensorimotor cortex. Vividness of imagery in the visual domain also correlated with SMA structure, and with auditory imagery scores. Altogether, these findings provide evidence for a signature of imagery in brain structure, and highlight a common role of perceptual–motor interactions for processing heard and internally generated auditory information.
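As a rough illustration of the representational similarity analysis described here, the sketch below builds a condition-by-condition representational dissimilarity matrix (RDM) per subject, summarizes it as a specificity score, and correlates that score with vividness across subjects. The data are synthetic, and the construction tying vividness to pattern distinctness exists purely so the demonstration has an effect to find:

```python
# RSA sketch (synthetic data): higher representational specificity
# (more distinct condition patterns in an ROI) should track higher
# self-reported imagery vividness across subjects.
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.stats import spearmanr

rng = np.random.default_rng(2)
n_subjects, n_conditions, n_voxels = 20, 6, 100
vividness = rng.uniform(1, 5, size=n_subjects)  # hypothetical ratings

specificity = []
for s in range(n_subjects):
    shared = rng.normal(size=n_voxels)                  # common to all sounds
    unique = rng.normal(size=(n_conditions, n_voxels))  # sound-specific parts
    # Built so that more vivid imagers have more distinct patterns.
    patterns = shared + 0.3 * vividness[s] * unique
    rdm = squareform(pdist(patterns, metric="correlation"))  # condition RDM
    # Specificity = mean dissimilarity between distinct conditions.
    specificity.append(rdm[np.triu_indices(n_conditions, k=1)].mean())

rho, p = spearmanr(specificity, vividness)
print(f"specificity vs. vividness: rho = {rho:.2f}, p = {p:.2g}")
```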
Journal of Cognitive Neuroscience | 2011
Zarinah K. Agnew; Carolyn McGettigan; Sophie K. Scott
Several perspectives on speech perception posit a central role for the representation of articulations in speech comprehension, supported by evidence for premotor activation when participants listen to speech. However, no experiments have directly tested whether motor responses mirror the profile of selective auditory cortical responses to native speech sounds or whether motor and auditory areas respond in different ways to sounds. We used fMRI to investigate cortical responses to speech and nonspeech mouth (ingressive click) sounds. Speech sounds activated bilateral superior temporal gyri more than other sounds, a profile not seen in motor and premotor cortices. These results suggest that there are qualitative differences in the ways that temporal and motor areas are activated by speech and click sounds: anterior temporal lobe areas are sensitive to the acoustic or phonetic properties of speech, whereas motor areas may show more generalized responses to the acoustic stimuli.
Journal of Cognitive Neuroscience | 2016
Samuel Evans; Carolyn McGettigan; Zarinah K. Agnew; Stuart Rosen; Sophie K. Scott
Spoken conversations typically take place in noisy environments, and different kinds of masking sounds place differing demands on cognitive resources. Previous studies, examining the modulation of neural activity associated with the properties of competing sounds, have shown that additional speech streams engage the superior temporal gyrus. However, the absence of a condition in which target speech was heard without additional masking made it difficult to identify brain networks specific to masking and to ascertain the extent to which competing speech was processed equivalently to target speech. In this study, we scanned young, healthy adults with continuous fMRI while they listened to stories masked by sounds that differed in their similarity to speech. We show that auditory attention and control networks are activated during attentive listening to masked speech in the absence of an overt behavioral task. We demonstrate that competing speech is processed predominantly in the left hemisphere within the same pathway as target speech, but is not treated equivalently within that stream, and that individuals who perform better on speech-in-noise tasks activate the left mid-posterior superior temporal gyrus more. Finally, we identify neural responses associated with the onset of sounds in the auditory environment; activity was found within right-lateralized frontal regions, consistent with a phasic alerting response. Taken together, these results provide a comprehensive account of the neural processes involved in listening in noise.
Journal of Cognitive Neuroscience | 2014
Jeong S. Kyong; Sophie K. Scott; Stuart Rosen; Timothy Howe; Zarinah K. Agnew; Carolyn McGettigan
The melodic contour of speech forms an important perceptual aspect of tonal and nontonal languages and an important limiting factor on the intelligibility of speech heard through a cochlear implant. Previous work exploring the neural correlates of speech comprehension identified a left-dominant pathway in the temporal lobes supporting the extraction of an intelligible linguistic message, whereas the right anterior temporal lobe showed an overall preference for signals clearly conveying dynamic pitch information [Johnsrude, I. S., Penhune, V. B., & Zatorre, R. J. Functional specificity in the right human auditory cortex for perceiving pitch direction. Brain, 123, 155–163, 2000; Scott, S. K., Blank, C. C., Rosen, S., & Wise, R. J. Identification of a pathway for intelligible speech in the left temporal lobe. Brain, 123, 2400–2406, 2000]. The current study combined modulations of overall intelligibility (through vocoding and spectral inversion) with a manipulation of pitch contour (normal vs. falling) to investigate the processing of spoken sentences in functional MRI. Our overall findings replicate and extend those of Scott et al. [Scott, S. K., Blank, C. C., Rosen, S., & Wise, R. J. Identification of a pathway for intelligible speech in the left temporal lobe. Brain, 123, 2400–2406, 2000]: greater sentence intelligibility was predominantly associated with increased activity in the left STS, and the greatest response to normal sentence melody was found in the right superior temporal gyrus. These data suggest a spatial distinction between brain areas associated with intelligibility and those involved in the processing of dynamic pitch information in speech. By including a set of complexity-matched unintelligible conditions created by spectral inversion, this is additionally the first study to report a fully factorial exploration of spectrotemporal complexity and spectral inversion as they relate to the neural processing of speech intelligibility. Perhaps surprisingly, there was little evidence for an interaction between the two factors; we discuss the implications for the processing of sound and speech in the dorsolateral temporal lobes.
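For readers unfamiliar with the intelligibility manipulations named above, here is a minimal sketch of noise vocoding in Python (NumPy/SciPy). The channel edges, filter order, and toy input signal are illustrative assumptions, not the study's parameters; spectral inversion, the other manipulation, instead flips the spectrum about a center frequency and is not shown here:

```python
# Noise-vocoding sketch: split a signal into frequency bands, extract
# each band's amplitude envelope, and re-impose it on band-limited noise.
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def noise_vocode(signal, fs, band_edges):
    """Return a noise-vocoded version of a 1-D float signal."""
    rng = np.random.default_rng(0)
    carrier = rng.normal(size=signal.shape)  # broadband noise carrier
    out = np.zeros_like(signal)
    for lo, hi in zip(band_edges[:-1], band_edges[1:]):
        sos = butter(4, [lo, hi], btype="band", fs=fs, output="sos")
        envelope = np.abs(hilbert(sosfiltfilt(sos, signal)))  # band envelope
        out += envelope * sosfiltfilt(sos, carrier)           # modulate noise
    return out

fs = 16000
t = np.arange(fs) / fs  # 1 s toy stand-in for a speech recording
speech = np.sin(2 * np.pi * 300 * t) * (0.5 + 0.5 * np.sin(2 * np.pi * 4 * t))
vocoded = noise_vocode(speech, fs, band_edges=[100, 400, 1000, 2500, 6000])
```

Fewer, wider bands degrade spectral detail while preserving temporal envelopes, which is why intelligibility can be varied parametrically with the number of channels.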