
Publication


Featured research published by Dafydd Waters.


NeuroImage | 2008

Phonological processing in deaf signers and the impact of age of first language acquisition.

Mairéad MacSweeney; Dafydd Waters; Michael Brammer; Bencie Woll; Usha Goswami

Just as words can rhyme, the signs of a signed language can share structural properties, such as location. Linguistic description at this level is termed phonology. We report that a left-lateralised fronto-parietal network is engaged during phonological similarity judgements made in both English (rhyme) and British Sign Language (BSL; location). Since these languages operate in different modalities, these data suggest that the neural network supporting phonological processing is, to some extent, supramodal. Activation within this network was, however, modulated by language (BSL/English), hearing status (deaf/hearing), and age of BSL acquisition (native/non-native). The influence of language and hearing status suggests an important role for the posterior portion of the left inferior frontal gyrus in speech-based phonological processing in deaf people. This, we suggest, is due to increased reliance on the articulatory component of speech when the auditory component is absent. With regard to age of first language acquisition, non-native signers activated the left inferior frontal gyrus more than native signers during the BSL task, and also during the task performed in English, which both groups acquired late. This is the first neuroimaging demonstration that age of first language acquisition has implications not only for the neural systems supporting the first language, but also for networks supporting languages learned subsequently.


Neuropsychologia | 2008

Cortical circuits for silent speechreading in deaf and hearing people

Cheryl M. Capek; Mairéad MacSweeney; Bencie Woll; Dafydd Waters; Philip McGuire; Anthony S. David; Michael Brammer; Ruth Campbell

This fMRI study explored the functional neural organisation of seen speech in congenitally deaf native signers and hearing non-signers. Both groups showed extensive activation in perisylvian regions for speechreading words compared to viewing the model at rest. In contrast to earlier findings, activation in left middle and posterior portions of superior temporal cortex, including regions within the lateral sulcus and the superior and middle temporal gyri, was greater for deaf than hearing participants. This activation pattern survived covarying for speechreading skill, which was better in deaf than hearing participants. Furthermore, correlational analysis showed that regions of activation related to speechreading skill varied with the hearing status of the observers. Deaf participants showed a positive correlation between speechreading skill and activation in the middle/posterior superior temporal cortex. In hearing participants, however, more posterior and inferior temporal activation (including fusiform and lingual gyri) was positively correlated with speechreading skill. Together, these findings indicate that activation in the left superior temporal regions for silent speechreading can be modulated by both hearing status and speechreading skill.


Brain | 2009

Enhanced activation of the left inferior frontal gyrus in deaf and dyslexic adults during rhyming

Mairéad MacSweeney; Michael Brammer; Dafydd Waters; Usha Goswami

Hearing developmental dyslexics and profoundly deaf individuals both have difficulties processing the internal structure of words (phonological processing) and learning to read. In hearing non-impaired readers, the development of phonological representations depends on audition. In hearing dyslexics, many argue, auditory processes may be impaired. In congenitally profoundly deaf individuals, auditory speech processing is essentially absent. Two separate literatures have previously reported enhanced activation in the left inferior frontal gyrus in both deaf and dyslexic adults when contrasted with hearing non-dyslexics during reading or phonological tasks. Here, we used a rhyme judgement task to compare adults from these two special populations to a hearing non-dyslexic control group. All groups were matched on non-verbal intelligence quotient, reading age and rhyme performance. Picture stimuli were used since these require participants to generate their own phonological representations, rather than have them partially provided via text. By testing well-matched groups of participants on the same task, we aimed to establish whether previous literatures reporting differences between individuals with and without phonological processing difficulties have identified the same regions of differential activation in these two distinct populations. The data indicate greater activation in the deaf and dyslexic groups than in the hearing non-dyslexic group across a large portion of the left inferior frontal gyrus. This includes the pars triangularis, extending superiorly into the middle frontal gyrus and posteriorly to include the pars opercularis, and the junction with the ventral precentral gyrus. Within the left inferior frontal gyrus, there was variability between the two groups with phonological processing difficulties. The superior posterior tip of the left pars opercularis, extending into the precentral gyrus, was activated to a greater extent by deaf than dyslexic participants, whereas the superior posterior portion of the pars triangularis, extending into the ventral pars opercularis, was activated to a greater extent by dyslexic than deaf participants. Whether these regions play differing roles in compensating for poor phonological processing is not clear. However, we argue that our main finding of greater inferior frontal gyrus activation in both groups with phonological processing difficulties in contrast to controls suggests greater reliance on the articulatory component of speech during phonological processing when auditory processes are absent (deaf group) or impaired (dyslexic group). Thus, the brain appears to develop a similar solution to a processing problem that has different antecedents in these two populations.


Journal of Cognitive Neuroscience | 2008

Hand and mouth: Cortical correlates of lexical processing in British Sign Language and speechreading English

Cheryl M. Capek; Dafydd Waters; Bencie Woll; Mairéad MacSweeney; Michael Brammer; Philip McGuire; Anthony S. David; Ruth Campbell

Spoken languages use one set of articulators, the vocal tract, whereas signed languages use multiple articulators, including both manual and facial actions. How sensitive are the cortical circuits for language processing to the particular articulators that are observed? This question can only be addressed with participants who use both speech and a signed language. In this study, we used functional magnetic resonance imaging to compare speechreading and sign processing in deaf native signers of British Sign Language (BSL) who were also proficient speechreaders. The following questions were addressed: To what extent do these different language types rely on a common brain network? To what extent do the patterns of activation differ? How are these networks affected by the articulators that languages use? Common perisylvian regions were activated both for speechreading English words and for BSL signs. Distinctive activation was also observed reflecting the language form. Speechreading elicited greater activation in the left mid-superior temporal cortex than BSL, whereas BSL processing generated greater activation at the temporo-parieto-occipital junction in both hemispheres. We probed this distinction further within BSL, where manual signs can be accompanied by different types of mouth action. BSL signs with speech-like mouth actions showed greater superior temporal activation, whereas signs made with non-speech-like mouth actions showed more activation in posterior and inferior temporal regions. Distinct regions within the temporal cortex are not only differentially sensitive to perception of the distinctive articulators for speech and for sign but also show sensitivity to the different articulators within the (signed) language.


NeuroImage | 2007

Fingerspelling, signed language, text and picture processing in deaf native signers: The role of the mid-fusiform gyrus

Dafydd Waters; Ruth Campbell; Cheryl M. Capek; Bencie Woll; Anthony S. David; Philip McGuire; Michael Brammer; Mairéad MacSweeney

In fingerspelling, different hand configurations are used to represent the different letters of the alphabet. Signers use this method of representing written language to fill lexical gaps in a signed language. Using fMRI, we compared cortical networks supporting the perception of fingerspelled, signed, written, and pictorial stimuli in deaf native signers of British Sign Language (BSL). In order to examine the effects of linguistic knowledge, hearing participants who knew neither fingerspelling nor a signed language were also tested. All input forms activated a left fronto-temporal network, including portions of left inferior temporal and mid-fusiform gyri, in both groups. To examine the extent to which activation in this region was influenced by orthographic structure, two contrasts of orthographic and non-orthographic stimuli were made: one using static stimuli (text vs. pictures), the other using dynamic stimuli (fingerspelling vs. signed language). Greater activation in left and right inferior temporal and mid-fusiform gyri was found for pictures than text in both deaf and hearing groups. In the fingerspelling vs. signed language contrast, a significant interaction indicated locations within the left and right mid-fusiform gyri. This showed greater activation for fingerspelling than signed language in deaf but not hearing participants. These results are discussed in light of recent proposals that the mid-fusiform gyrus may act as an integration region, mediating between visual input and higher-order stimulus properties.


Brain and Language | 2010

Superior temporal activation as a function of linguistic knowledge: Insights from deaf native signers who speechread.

Cheryl M. Capek; Bencie Woll; Mairéad MacSweeney; Dafydd Waters; Philip McGuire; Anthony S. David; Michael Brammer; Ruth Campbell

Studies of spoken and signed language processing reliably show involvement of the posterior superior temporal cortex. This region is also reliably activated by observation of meaningless oral and manual actions. In this study we directly compared the extent to which activation in posterior superior temporal cortex is modulated by linguistic knowledge irrespective of differences in language form. We used a novel cross-linguistic approach in two groups of volunteers who differed in their language experience. Using fMRI, we compared deaf native signers of British Sign Language (BSL), who were also proficient speechreaders of English (i.e., two languages), with hearing people who could speechread English but knew no BSL (i.e., one language). Both groups were presented with BSL signs and silently spoken English words, and were required to respond to a signed or spoken target. The interaction of group and condition revealed activation in the superior temporal cortex, bilaterally, focused in the posterior superior temporal gyri (pSTG, BA 42/22). In hearing people, these regions were activated more by speech than by sign, but in deaf respondents they showed similar levels of activation for both language forms, suggesting that posterior superior temporal regions are highly sensitive to language knowledge irrespective of the mode of delivery of the stimulus material.


Brain and Language | 2013

Motor excitability during visual perception of known and unknown spoken languages.

Swathi Swaminathan; Mairéad MacSweeney; Rowan Boyles; Dafydd Waters; Kate E. Watkins; Riikka Möttönen

Highlights
• Native and non-native English speakers can visually discriminate English from an unknown language.
• Viewing known speech excites the articulatory motor cortex more than unknown speech.
• Viewing known speech excites the articulatory motor cortex more than non-speech mouth movements.
• Motor excitability is high during observation of a face not speaking.
• Motor excitability does not differ between native and non-native speakers.


Brain and Language | 2015

Identification of the regions involved in phonological assembly using a novel paradigm.

Tae Twomey; Dafydd Waters; Cathy J. Price; Ferath Kherif; Bencie Woll; Mairéad MacSweeney

Highlights
• Sequential delivery of letters in words encourages the use of phonological assembly.
• Greater activation in left SMG, POp and precentral gyrus during sequential delivery.
• Activation for ‘phonological assembly’ not confounded with stimulus properties.
• Activation for ‘phonological assembly’ not wholly attributable to processing load.


The Journal of Neuroscience | 2017

How auditory experience differentially influences the function of left and right superior temporal cortices

Tae Twomey; Dafydd Waters; Cathy J. Price; Samuel Evans; Mairéad MacSweeney

To investigate how hearing status, sign language experience, and task demands influence functional responses in the human superior temporal cortices (STC), we collected fMRI data from deaf and hearing participants (male and female) who acquired sign language either early or late in life. Our stimuli in all tasks were pictures of objects. We varied the linguistic and visuospatial processing demands in three different tasks that involved decisions about (1) the sublexical (phonological) structure of the British Sign Language (BSL) signs for the objects, (2) the semantic category of the objects, and (3) the physical features of the objects. Neuroimaging data revealed that in participants who were deaf from birth, STC showed increased activation during visual processing tasks. Importantly, this differed across hemispheres. Right STC was consistently activated regardless of the task, whereas left STC was sensitive to task demands. Significant activation was detected in the left STC only for the BSL phonological task. This task, we argue, placed greater demands on visuospatial processing than the other two tasks. In hearing signers, enhanced activation was absent in both left and right STC during all three tasks. Lateralization analyses demonstrated that the effect of deafness was more task-dependent in the left than the right STC, whereas it was more task-independent in the right than the left STC. These findings indicate how the absence of auditory input from birth leads to dissociable and altered functions of left and right STC in deaf participants.

SIGNIFICANCE STATEMENT: Those born deaf can offer unique insights into neuroplasticity, in particular in regions of superior temporal cortex (STC) that primarily respond to auditory input in hearing people. Here we demonstrate that in those deaf from birth the left and the right STC have altered and dissociable functions. The right STC was activated regardless of demands on visual processing. In contrast, the left STC was sensitive to the demands of visuospatial processing. Furthermore, hearing signers, with the same sign language experience as the deaf participants, did not activate the STCs. Our data advance current understanding of neural plasticity by determining the differential effects that hearing status and task demands can have on left and right STC function.
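The abstract does not spell out how the lateralization analyses were computed. For readers unfamiliar with the approach, a standard laterality index over left-hemisphere (L) and right-hemisphere (R) activation measures, offered here only as an illustrative sketch rather than the paper's exact formulation, is:

\[ \mathrm{LI} = \frac{L - R}{L + R}, \qquad -1 \le \mathrm{LI} \le 1, \]

where positive values indicate left-lateralized responses and negative values right-lateralized responses.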


NeuroImage | 2008

Corrigendum to “Fingerspelling, signed language, text and picture processing in deaf native signers: The role of the mid-fusiform gyrus” [NeuroImage 35 (2007) 1287–1302]

Dafydd Waters; Ruth Campbell; Cheryl M. Capek; Bencie Woll; Anthony S. David; Philip McGuire; Michael Brammer; Mairéad MacSweeney

We recently noticed an error in the Methods section of this paper. Voxel size in Talairach space was reported as 3×3×3 mm. It was actually 3.3×3.3×3.3 mm. Due to this error, and to a systematic error in calculating metric volume, activation volumes were reported incorrectly throughout the paper. These errors have no impact, however, on the arguments presented. In the Results section, and in Tables 3–8, activation sizes are given as both number of activated voxels and metric volume. In all cases, the figure reported for the number of voxels is correct, whereas the volume reported is incorrect. The following formula may be used to calculate the correct volume for these activations:
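The formula itself does not survive in this listing. Given the corrected voxel dimensions of 3.3 × 3.3 × 3.3 mm and the statement that the reported voxel counts are correct, the corrected volumes presumably follow:

\[ \text{volume (mm}^3) = n_{\text{voxels}} \times 3.3^3 \approx n_{\text{voxels}} \times 35.94, \]

where \(n_{\text{voxels}}\) is the reported number of activated voxels.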

Collaboration


Dive into Dafydd Waters's collaborations.

Top Co-Authors

Bencie Woll, University College London
Ruth Campbell, University College London
Cheryl M. Capek, University College London
Usha Goswami, University of Cambridge