Publication


Featured research published by Mairéad MacSweeney.


Cognitive Brain Research | 2001

Cortical substrates for the perception of face actions: an fMRI study of the specificity of activation for seen speech and for meaningless lower-face acts (gurning)

Ruth Campbell; Mairéad MacSweeney; S. Surguladze; Gemma A. Calvert; Philip McGuire; John Suckling; Michael Brammer; Anthony S. David

Can the cortical substrates for the perception of face actions be distinguished when the superficial visual qualities of these actions are very similar? Two fMRI experiments are reported. Compared with watching the face at rest, observing silent speech was associated with bilateral activation in a number of temporal cortical regions, including the superior temporal sulcus (STS). Watching face movements of similar extent and duration, but which could not be construed as speech (gurning; Experiment 1b), was not associated with activation of superior temporal cortex to the same extent, especially in the left hemisphere. Instead, the peak focus of the largest cluster of activation was in the posterior part of the inferior temporal gyrus (right, BA 37). Observing silent speech, but not gurning faces, was also associated with bilateral activation of inferior frontal cortex (BA 44 and 45). In a second study, speechreading and observing gurning faces were compared within a single experiment, using stimuli which comprised the speaker's face and torso (and hence a much smaller image of the speaker's face and facial actions). There was again differential engagement of superior temporal cortex, which followed the pattern of Experiment 1. These findings suggest that the superior temporal gyrus and neighbouring regions are activated bilaterally when subjects view face actions, at different scales, that can be interpreted as speech. This circuitry is not accessed to the same extent by visually similar, but linguistically meaningless, actions. However, some temporal regions, such as the posterior part of the right superior temporal sulcus, appear to be common sites for processing both seen speech and gurns.


Journal of Magnetic Resonance Imaging | 2002

Acoustic Noise and Functional Magnetic Resonance Imaging: Current Strategies and Future Prospects

Edson Amaro; Steve C.R. Williams; Sukhi Shergill; Cynthia H.Y. Fu; Mairéad MacSweeney; Marco Picchioni; Michael Brammer; Philip McGuire

Functional magnetic resonance imaging (fMRI) has become the method of choice for studying the neural correlates of cognitive tasks. Nevertheless, the scanner produces acoustic noise during the image acquisition process, which is a problem for studies of the auditory pathway and of language generally. The scanner acoustic noise not only produces activation in brain regions involved in auditory processing, but also interferes with the stimulus presentation. Several strategies can be used to address this problem, including modifications of hardware and software. Although reduction of the source of the acoustic noise would be ideal, it would require substantial hardware modifications to the current base of installed MRI systems. Therefore, the most common strategy employed to minimize the problem involves software modifications. In this work we consider three main types of acquisition: compressed, partially silent, and silent. For each implementation, paradigms using block and event-related designs are assessed. We also provide new data, using a silent event-related (SER) design, which demonstrate a higher blood oxygen level-dependent (BOLD) response to a simple auditory cue when compared to a conventional image acquisition. J. Magn. Reson. Imaging 2002;16:497–510.
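To make the "silent" (sparse) acquisition idea concrete, the sketch below schedules each volume so that it is acquired at the expected peak of the BOLD response to a stimulus presented in the quiet gap between scans. The peak latency, acquisition time, trial spacing, and function names are illustrative assumptions, not parameters taken from the paper.

```python
# Minimal sketch of a sparse ("silent") event-related acquisition schedule.
# Assumed values (not from the paper): BOLD response peaks ~5 s after
# stimulus onset, one volume takes 2 s to acquire, trials are 16 s apart.

BOLD_PEAK_DELAY_S = 5.0    # assumed haemodynamic peak latency
ACQUISITION_TIME_S = 2.0   # assumed time to acquire one volume
TRIAL_INTERVAL_S = 16.0    # assumed inter-trial interval

def acquisition_schedule(n_trials):
    """Return (stimulus_onset, scan_onset, scan_end) triples, in seconds,
    so that each scan starts at the estimated BOLD peak and the stimulus
    itself is presented while the scanner is quiet."""
    schedule = []
    for trial in range(n_trials):
        stimulus_onset = trial * TRIAL_INTERVAL_S
        scan_onset = stimulus_onset + BOLD_PEAK_DELAY_S
        schedule.append((stimulus_onset, scan_onset, scan_onset + ACQUISITION_TIME_S))
    return schedule

if __name__ == "__main__":
    for stim, start, end in acquisition_schedule(4):
        print(f"stimulus at {stim:5.1f} s -> volume acquired {start:5.1f}-{end:5.1f} s")
```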


Trends in Cognitive Sciences | 2008

The signing brain: the neurobiology of sign language

Mairéad MacSweeney; Cheryl M. Capek; Ruth Campbell; Bencie Woll

Most of our knowledge about the neurobiological bases of language comes from studies of spoken languages. By studying signed languages, we can determine whether what we have learnt so far is characteristic of language per se or whether it is specific to languages that are spoken and heard. Overwhelmingly, lesion and neuroimaging studies indicate that the neural systems supporting signed and spoken language are very similar: both involve a predominantly left-lateralised perisylvian network. Recent studies have also highlighted processing differences between languages in these different modalities. These studies provide rich insights into language and communication processes in deaf and hearing people.


Neuroreport | 2000

Silent speechreading in the absence of scanner noise: an event-related fMRI study.

Mairéad MacSweeney; Edson Amaro; Gemma A. Calvert; Ruth Campbell; Anthony S. David; Philip McGuire; Steven Williams; Bencie Woll; Michael Brammer

In a previous study we used functional magnetic resonance imaging (fMRI) to demonstrate activation in auditory cortex during silent speechreading. Since image acquisition during fMRI generates acoustic noise, this pattern of activation could have reflected an interaction between background scanner noise and the visual lip-read stimuli. In this study we employed an event-related fMRI design which allowed us to measure activation during speechreading in the absence of acoustic scanner noise. In the experimental condition, hearing subjects were required to speechread random numbers from a silent speaker. In the control condition subjects watched a static image of the same speaker with mouth closed and were required to subvocally count an intermittent visual cue. A single volume of images was collected to coincide with the estimated peak of the blood oxygen level dependent (BOLD) response to these stimuli across multiple baseline and experimental trials. Silent speechreading led to greater activation in lateral temporal cortex relative to the control condition. This indicates that activation of auditory areas during silent speechreading is not a function of acoustic scanner noise and confirms that silent speechreading engages similar regions of auditory cortex to those engaged when listening to speech.
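The "estimated peak of the BOLD response" used to time the single-volume acquisition is typically derived from a canonical haemodynamic response function. The abstract does not specify the model used, so the double-gamma function below is only a commonly used stand-in for locating that peak.

```python
# Illustrative estimate of BOLD peak latency from a canonical double-gamma
# haemodynamic response function (an assumption; the study's actual model
# is not given in the abstract).
import numpy as np
from scipy.stats import gamma

t = np.arange(0, 30, 0.1)                       # seconds after stimulus onset
hrf = gamma.pdf(t, 6) - gamma.pdf(t, 16) / 6.0  # common two-gamma approximation
hrf /= hrf.max()

peak_latency = t[np.argmax(hrf)]
print(f"estimated BOLD peak ~{peak_latency:.1f} s after stimulus onset")
```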


NeuroImage | 2004

Dissociating linguistic and nonlinguistic gestural communication in the brain

Mairéad MacSweeney; Ruth Campbell; Bencie Woll; Vincent Giampietro; Anthony S. David; Philip McGuire; Gemma A. Calvert; Michael Brammer

Gestures of the face, arms, and hands are components of signed languages used by Deaf people. Signaling codes, such as the racecourse betting code known as Tic Tac, are also made up of such gestures. Tic Tac lacks the phonological structure of British Sign Language (BSL) but is similar in terms of its visual and articulatory components. Using fMRI, we compared the neural correlates of viewing a gestural language (BSL) and a manual-brachial code (Tic Tac) relative to a low-level baseline task. We compared three groups: Deaf native signers, hearing native signers, and hearing nonsigners. None of the participants had any knowledge of Tic Tac. All three groups activated an extensive frontal-posterior network in response to both types of stimuli. Superior temporal cortex, including the planum temporale, was activated bilaterally in response to both types of gesture in all groups, irrespective of hearing status. The engagement of these traditionally auditory processing regions was greater in Deaf than hearing participants. These data suggest that the planum temporale may be responsive to visual movement in both deaf and hearing people, yet when hearing is absent early in development, the visual processing role of this region is enhanced. Greater activation for BSL than Tic Tac was observed in signers, but not in nonsigners, in the left posterior superior temporal sulcus and gyrus, extending into the supramarginal gyrus. This suggests that the left posterior perisylvian cortex is of fundamental importance to language processing, regardless of the modality in which it is conveyed.


NeuroImage | 2008

Phonological processing in deaf signers and the impact of age of first language acquisition.

Mairéad MacSweeney; Dafydd Waters; Michael Brammer; Bencie Woll; Usha Goswami

Just as words can rhyme, the signs of a signed language can share structural properties, such as location. Linguistic description at this level is termed phonology. We report that a left-lateralised fronto-parietal network is engaged during phonological similarity judgements made in both English (rhyme) and British Sign Language (BSL; location). Since these languages operate in different modalities, these data suggest that the neural network supporting phonological processing is, to some extent, supramodal. Activation within this network was however modulated by language (BSL/English), hearing status (deaf/hearing), and age of BSL acquisition (native/non-native). The influence of language and hearing status suggests an important role for the posterior portion of the left inferior frontal gyrus in speech-based phonological processing in deaf people. This, we suggest, is due to increased reliance on the articulatory component of speech when the auditory component is absent. With regard to age of first language acquisition, non-native signers activated the left inferior frontal gyrus more than native signers during the BSL task, and also during the task performed in English, which both groups acquired late. This is the first neuroimaging demonstration that age of first language acquisition has implications not only for the neural systems supporting the first language, but also for networks supporting languages learned subsequently.


Journal of Cognitive Neuroscience | 2002

Neural Correlates of British Sign Language Comprehension: Spatial Processing Demands of Topographic Language

Mairéad MacSweeney; Bencie Woll; Ruth Campbell; Gemma A. Calvert; Philip McGuire; Anthony S. David; Andrew Simmons; Michael Brammer

In all signed languages used by deaf people, signs are executed in sign space in front of the body. Some signed sentences use this space to map detailed real-world spatial relationships directly. Such sentences can be considered to exploit sign space topographically. Using functional magnetic resonance imaging, we explored the extent to which increasing the topographic processing demands of signed sentences was reflected in the differential recruitment of brain regions in deaf and hearing native signers of British Sign Language (BSL). When BSL signers performed a sentence anomaly judgement task, the occipito-temporal junction was activated bilaterally to a greater extent for topographic than nontopographic processing. The differential role of movement in the processing of the two sentence types may account for this finding. In addition, enhanced activation was observed in the left inferior and superior parietal lobules during processing of topographic BSL sentences. We argue that the left parietal lobe is specifically involved in processing the precise configuration and location of hands in space to represent objects, agents, and actions. Importantly, no differences in these regions were observed when hearing people heard and saw English translations of these sentences. Despite the high degree of similarity in the neural systems underlying signed and spoken languages, exploring the linguistic features which are unique to each of these broadens our understanding of the systems involved in language comprehension.


Neuropsychologia | 2008

Cortical circuits for silent speechreading in deaf and hearing people

Cheryl M. Capek; Mairéad MacSweeney; Bencie Woll; Dafydd Waters; Philip McGuire; Anthony S. David; Michael Brammer; Ruth Campbell

This fMRI study explored the functional neural organisation of seen speech in congenitally deaf native signers and hearing non-signers. Both groups showed extensive activation in perisylvian regions for speechreading words compared to viewing the model at rest. In contrast to earlier findings, activation in left middle and posterior portions of superior temporal cortex, including regions within the lateral sulcus and the superior and middle temporal gyri, was greater for deaf than hearing participants. This activation pattern survived covarying for speechreading skill, which was better in deaf than hearing participants. Furthermore, correlational analysis showed that regions of activation related to speechreading skill varied with the hearing status of the observers. Deaf participants showed a positive correlation between speechreading skill and activation in the middle/posterior superior temporal cortex. In hearing participants, however, more posterior and inferior temporal activation (including fusiform and lingual gyri) was positively correlated with speechreading skill. Together, these findings indicate that activation in the left superior temporal regions for silent speechreading can be modulated by both hearing status and speechreading skill.
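The correlational analysis described above relates each participant's speechreading skill to a regional activation estimate across participants. A toy sketch of that kind of across-subject correlation is shown below; the variable names and data are invented for illustration and do not come from the study.

```python
# Toy across-participant correlation between a speechreading skill score
# and a regional activation (beta) estimate. Data are fabricated purely
# for illustration; nothing here comes from the study.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
n_participants = 12
speechreading_score = rng.normal(50, 10, size=n_participants)
activation_beta = 0.02 * speechreading_score + rng.normal(0, 0.2, size=n_participants)

r, p = pearsonr(speechreading_score, activation_beta)
print(f"r = {r:.2f}, p = {p:.3f}")
```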


Frontiers in Psychology | 2011

A generative model of speech production in Broca's and Wernicke's areas

Cathy J. Price; Jenny Crinion; Mairéad MacSweeney

Speech production involves the generation of an auditory signal from the articulators and vocal tract. When the intended auditory signal does not match the produced sounds, subsequent articulatory commands can be adjusted to reduce the difference between the intended and produced sounds. This requires an internal model of the intended speech output that can be compared to the produced speech. The aim of this functional imaging study was to identify brain activation related to the internal model of speech production after activation related to vocalization, auditory feedback, and movement in the articulators had been controlled. There were four conditions: silent articulation of speech, non-speech mouth movements, finger tapping, and visual fixation. In the speech conditions, participants produced the mouth movements associated with the words “one” and “three.” We eliminated auditory feedback from the spoken output by instructing participants to articulate these words without producing any sound. The non-speech mouth movement conditions involved lip pursing and tongue protrusions to control for movement in the articulators. The main difference between our speech and non-speech mouth movement conditions is that prior experience producing speech sounds leads to the automatic and covert generation of auditory and phonological associations that may play a role in predicting auditory feedback. We found that, relative to non-speech mouth movements, silent speech activated Broca’s area in the left dorsal pars opercularis and Wernicke’s area in the left posterior superior temporal sulcus. We discuss these results in the context of a generative model of speech production and propose that Broca’s and Wernicke’s areas may be involved in predicting the speech output that follows articulation. These predictions could provide a mechanism by which rapid movement of the articulators is precisely matched to the intended speech outputs during future articulations.
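The argument above rests on a comparator: the intended (internally modelled) speech output is compared with the produced output, and the mismatch drives adjustment of subsequent articulatory commands. The sketch below is a deliberately abstract illustration of that loop, not a model proposed in the paper; the toy "vocal tract" mapping and learning rate are assumptions.

```python
# Abstract comparator-loop sketch (illustrative only; not the paper's model):
# the intended auditory output is compared with the produced output, and the
# mismatch is used to adjust subsequent articulatory commands.
import numpy as np

def vocal_tract(command):
    """Toy plant: maps an articulatory command to produced sound features,
    with an assumed gain error and bias."""
    return 0.7 * command + 0.05

intended = np.array([1.0, 0.5, 0.8])   # internal model of the intended output
command = intended.copy()              # initial articulatory command
learning_rate = 0.5

for _ in range(30):
    produced = vocal_tract(command)
    error = intended - produced                   # intended vs produced mismatch
    command = command + learning_rate * error     # adjust future commands

print("residual mismatch:", np.round(intended - vocal_tract(command), 4))
```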


Proceedings of the Royal Society of London B: Biological Sciences | 2001

Dispersed activation in the left temporal cortex for speech-reading in congenitally deaf people

Mairéad MacSweeney; Ruth Campbell; Gemma A. Calvert; Philip McGuire; Anthony S. David; John Suckling; C Andrew; Bencie Woll; Michael Brammer

Does the lateral temporal cortex require acoustic exposure in order to become specialized for speech processing? Six hearing participants and six congenitally deaf participants, all with spoken English as their first language, were scanned using functional magnetic resonance imaging while performing a simple speechreading task. Focal activation of the left lateral temporal cortex was significantly reduced in the deaf group compared with the hearing group. Activation within this region was present in individual deaf participants, but varied in location from person to person. Early acoustic experience may be required for regions within the left temporal cortex to develop into a coherent network with subareas devoted to specific speech analysis functions.

Collaboration


Dive into Mairéad MacSweeney's collaborations.

Top Co-Authors

Ruth Campbell, University College London

Bencie Woll, University College London

Dafydd Waters, University College London

Cheryl M. Capek, University College London