Publication


Featured research published by Nienke van Atteveldt.


Neuron | 2004

Integration of letters and speech sounds in the human brain

Nienke van Atteveldt; Elia Formisano; Rainer Goebel; Leo Blomert

Most people acquire literacy skills with remarkable ease, even though the human brain is not evolutionarily adapted to this relatively new cultural phenomenon. Associations between letters and speech sounds form the basis of reading in alphabetic scripts. We investigated the functional neuroanatomy of the integration of letters and speech sounds using functional magnetic resonance imaging (fMRI). Letters and speech sounds were presented unimodally and bimodally in congruent or incongruent combinations. Analysis of single-subject data and group data aligned on the basis of individual cortical anatomy revealed that letters and speech sounds are integrated in heteromodal superior temporal cortex. Interestingly, responses to speech sounds in a modality-specific region of the early auditory cortex were modified by simultaneously presented letters. These results suggest that efficient processing of culturally defined associations between letters and speech sounds relies on neural mechanisms similar to those naturally evolved for integrating audiovisual speech.


Current Biology | 2009

Reduced neural integration of letters and speech sounds links phonological and reading deficits in adult dyslexia

Vera C Blau; Nienke van Atteveldt; Michel Ekkebus; Rainer Goebel; Leo Blomert

Developmental dyslexia is a specific reading and spelling deficit affecting 4% to 10% of the population. Advances in understanding its origin support a core deficit in phonological processing characterized by difficulties in segmenting spoken words into their minimally discernable speech segments (speech sounds, or phonemes) and underactivation of left superior temporal cortex. A suggested but unproven hypothesis is that this phonological deficit impairs the ability to map speech sounds onto their homologous visual letters, which in turn prevents the attainment of fluent reading levels. The present functional magnetic resonance imaging (fMRI) study investigated the neural processing of letters and speech sounds in unisensory (visual, auditory) and multisensory (audiovisual congruent, audiovisual incongruent) conditions as a function of reading ability. Our data reveal that adult dyslexic readers underactivate superior temporal cortex for the integration of letters and speech sounds. This reduced audiovisual integration is directly associated with a more fundamental deficit in auditory processing of speech sounds, which in turn predicts performance on phonological tasks. The data provide a neurofunctional account of developmental dyslexia, in which phonological processing deficits are linked to reading failure through a deficit in neural integration of letters and speech sounds.


Brain | 2010

Deviant processing of letters and speech sounds as proximate cause of reading failure: a functional magnetic resonance imaging study of dyslexic children

Vera C Blau; Joel Reithler; Nienke van Atteveldt; Jochen Seitz; Patty Gerretsen; Rainer Goebel; Leo Blomert

Learning to associate auditory information of speech sounds with visual information of letters is a first and critical step for becoming a skilled reader in alphabetic languages. Nevertheless, it remains largely unknown which brain areas subserve the learning and automation of such associations. Here, we employ functional magnetic resonance imaging to study letter-speech sound integration in children with and without developmental dyslexia. The results demonstrate that dyslexic children show reduced neural integration of letters and speech sounds in the planum temporale/Heschl sulcus and the superior temporal sulcus. While cortical responses to speech sounds in fluent readers were modulated by letter-speech sound congruency with strong suppression effects for incongruent letters, no such modulation was observed in the dyslexic readers. Whole-brain analyses of unisensory visual and auditory group differences additionally revealed reduced unisensory responses to letters in the fusiform gyrus in dyslexic children, as well as reduced activity for processing speech sounds in the anterior superior temporal gyrus, planum temporale/Heschl sulcus and superior temporal sulcus. Importantly, the neural integration of letters and speech sounds in the planum temporale/Heschl sulcus and the neural response to letters in the fusiform gyrus explained almost 40% of the variance in individual reading performance. These findings indicate that an interrelated network of visual, auditory and heteromodal brain areas contributes to the skilled use of letter-speech sound associations necessary for learning to read. By extending similar findings in adults, the data furthermore argue against the notion that reduced neural integration of letters and speech sounds in dyslexia reflects the consequence of a lifetime of reading struggle.
Instead, they support the view that letter-speech sound integration is an emergent property of learning to read that develops inadequately in dyslexic readers, presumably as a result of a deviant interactive specialization of neural systems for processing auditory and visual linguistic inputs.


NeuroImage | 2007

Top–down task effects overrule automatic multisensory responses to letter–sound pairs in auditory association cortex

Nienke van Atteveldt; Elia Formisano; Rainer Goebel; Leo Blomert

In alphabetic scripts, letters and speech sounds are the basic elements of correspondence between spoken and written language. In two previous fMRI studies, we showed that the response to speech sounds in the auditory association cortex was enhanced by congruent letters and suppressed by incongruent letters. Interestingly, temporal synchrony was critical for this congruency effect to occur. We interpreted these results as a neural correlate of letter-sound integration, driven by the learned congruency of letter-sound pairs. The present event-related fMRI study was designed to address two questions that could not directly be addressed in the previous studies, due to their passive nature and blocked design. Specifically: (1) to examine whether the enhancement/suppression of auditory cortex are truly multisensory integration effects or can be explained by different attention levels during congruent/incongruent blocks, and (2) to examine the effect of top-down task demands on the neural integration of letter-sound pairs. Firstly, we replicated the previous results with random stimulus presentation, which rules out an explanation of the congruency effect in auditory cortex solely in terms of attention. Secondly, we showed that the effects of congruency and temporal asynchrony in the auditory association cortex were absent during active matching. This indicates that multisensory responses in the auditory association cortex heavily depend on task demands. Without task instructions, the auditory cortex is modulated to favor the processing of congruent and synchronous information. This modulation is overruled during explicit matching when all audiovisual stimuli are equally relevant, independent of congruency and temporal relation.


Journal of Cognitive Neuroscience | 2009

The long road to automation: Neurocognitive development of letter-speech sound processing

Dries Froyen; Milene Bonte; Nienke van Atteveldt; Leo Blomert

In transparent alphabetic languages, the expected standard for complete acquisition of letter–speech sound associations is within one year of reading instruction. The neural mechanisms underlying the acquisition of letter–speech sound associations have, however, hardly been investigated. The present article describes an ERP study with beginner and advanced readers in which the influence of letters on speech sound processing is investigated by comparing the MMN to speech sounds presented in isolation with the MMN to speech sounds accompanied by letters. Furthermore, SOA between letter and speech sound presentation was manipulated in order to investigate the development of the temporal window of integration for letter–speech sound processing. Beginner readers, despite one year of reading instruction, showed no early letter–speech sound integration, that is, no influence of the letter on the evocation of the MMN to the speech sound. Only later in the difference wave, at 650 msec, was an influence of the letter on speech sound processing revealed. Advanced readers, with 4 years of reading instruction, showed early and automatic letter–speech sound processing as revealed by an enhancement of the MMN amplitude, however, at a different temporal window of integration in comparison with experienced adult readers. The present results indicate a transition from mere association in beginner readers to more automatic, but still not “adult-like,” integration in advanced readers. In contrast to general assumptions, the present study provides evidence for an extended development of letter–speech sound integration.


Neuroscience Letters | 2008

Cross-modal enhancement of the MMN to speech-sounds indicates early and automatic integration of letters and speech-sounds

Dries Froyen; Nienke van Atteveldt; Milene Bonte; Leo Blomert

Recently brain imaging evidence indicated that letter/speech-sound integration, necessary for establishing fluent reading, takes place in auditory association areas and that the integration is influenced by stimulus onset asynchrony (SOA) between the letter and the speech-sound. In the present study, we used a specific ERP measure known for its automatic character, the mismatch negativity (MMN), to investigate the time course and automaticity of letter/speech-sound integration. We studied the effect of visual letters and SOA on the MMN elicited by a deviant speech-sound. We found a clear enhancement of the MMN by simultaneously presenting a letter, but without changing the auditory stimulation. This enhancement diminishes linearly with increasing SOA. These results suggest that letters and speech-sounds are processed as compound stimuli early and automatically in the auditory association cortex of fluent readers and that this processing is strongly dependent on timing.


Frontiers in Integrative Neuroscience | 2010

Exploring the Role of Low Level Visual Processing in Letter-Speech Sound Integration: A Visual MMN Study

Dries Froyen; Nienke van Atteveldt; Leo Blomert

In contrast with for example audiovisual speech, the relation between visual and auditory properties of letters and speech sounds is artificial and learned only by explicit instruction. The arbitrariness of the audiovisual link together with the widespread usage of letter–speech sound pairs in alphabetic languages makes those audiovisual objects a unique subject for crossmodal research. Brain imaging evidence has indicated that heteromodal areas in superior temporal, as well as modality-specific auditory cortex are involved in letter–speech sound processing. The role of low level visual areas, however, remains unclear. In this study the visual counterpart of the auditory mismatch negativity (MMN) is used to investigate the influences of speech sounds on letter processing. Letter and non-letter deviants were infrequently presented in a train of standard letters, either in isolation or simultaneously with speech sounds. Although previous findings showed that letters systematically modulate speech sound processing (reflected by auditory MMN amplitude modulation), the reverse does not seem to hold: our results did not show evidence for an automatic influence of speech sounds on letter processing (no visual MMN amplitude modulation). This apparent asymmetric recruitment of low level sensory cortices during letter–speech sound processing, contrasts with the symmetric involvement of these cortices in audiovisual speech processing, and is possibly due to the arbitrary nature of the link between letters and speech sounds.


Experimental Brain Research | 2009

Multisensory functional magnetic resonance imaging: a future perspective

Rainer Goebel; Nienke van Atteveldt

Advances in functional magnetic resonance imaging (fMRI) technology and analytic tools provide a powerful approach to unravel how the human brain combines the different sensory systems. In this perspective, we outline promising future directions of fMRI to make optimal use of its strengths in multisensory research, and to meet its weaker sides by combining it with other imaging modalities and computational modeling.


European Journal of Neuroscience | 2008

Task‐irrelevant visual letters interact with the processing of speech sounds in heteromodal and unimodal cortex

Vera C Blau; Nienke van Atteveldt; Elia Formisano; Rainer Goebel; Leo Blomert

Letters and speech sounds are the basic units of correspondence between spoken and written language. Associating auditory information of speech sounds with visual information of letters is critical for learning to read; however, the neural mechanisms underlying this association remain poorly understood. The present functional magnetic resonance imaging study investigates the automaticity and behavioral relevance of integrating letters and speech sounds. Within a unimodal auditory identification task, speech sounds were presented in isolation (unimodally) or bimodally in congruent and incongruent combinations with visual letters. Furthermore, the quality of the visual letters was manipulated parametrically. Our analyses revealed that the presentation of congruent visual letters led to a behavioral improvement in identifying speech sounds, which was paralleled by a similar modulation of cortical responses in the left superior temporal sulcus. Under low visual noise, cortical responses in superior temporal and occipito‐temporal cortex were further modulated by the congruency between auditory and visual stimuli. These cross‐modal modulations of performance and cortical responses during a unimodal auditory task (speech identification) indicate the existence of a strong and automatic functional coupling between processing of letters (orthography) and speech (phonology) in the literate adult brain.


BMC Neuroscience | 2010

fMR-adaptation indicates selectivity to audiovisual content congruency in distributed clusters in human superior temporal cortex

Nienke van Atteveldt; Vera C Blau; Leo Blomert; Rainer Goebel

Background: Efficient multisensory integration is of vital importance for adequate interaction with the environment. In addition to basic binding cues like temporal and spatial coherence, meaningful multisensory information is also bound together by content-based associations. Many functional Magnetic Resonance Imaging (fMRI) studies propose the (posterior) superior temporal cortex (STC) as the key structure for integrating meaningful multisensory information. However, a still unanswered question is how superior temporal cortex encodes content-based associations, especially in light of inconsistent results from studies comparing brain activation to semantically matching (congruent) versus nonmatching (incongruent) multisensory inputs. Here, we used fMR-adaptation (fMR-A) in order to circumvent potential problems with standard fMRI approaches, including spatial averaging and amplitude saturation confounds. We presented repetitions of audiovisual stimuli (letter-speech sound pairs) and manipulated the associative relation between the auditory and visual inputs (congruent/incongruent pairs). We predicted that if multisensory neuronal populations exist in STC and encode audiovisual content relatedness, adaptation should be affected by the manipulated audiovisual relation.

Results: The results revealed an occipital-temporal network that adapted independently of the audiovisual relation. Interestingly, several smaller clusters distributed over superior temporal cortex within that network adapted more strongly to congruent than to incongruent audiovisual repetitions, indicating sensitivity to content congruency.

Conclusions: These results suggest that the revealed clusters contain multisensory neuronal populations that encode content relatedness by selectively responding to congruent audiovisual inputs, since unisensory neuronal populations are assumed to be insensitive to the audiovisual relation. These findings extend our previously revealed mechanism for the integration of letters and speech sounds and demonstrate that fMR-A is sensitive to multisensory congruency effects that may not be revealed in BOLD amplitude per se.

Collaboration


Dive into Nienke van Atteveldt's collaborations.

Top Co-Authors


Elana Zion-Golumbic

Hebrew University of Jerusalem
