Publications


Featured research published by David Poeppel.


Nature Reviews Neuroscience | 2007

The cortical organization of speech processing

Gregory Hickok; David Poeppel

Despite decades of research, the functional neuroanatomy of speech processing has been difficult to characterize. A major impediment to progress may have been the failure to consider task effects when mapping speech-related processing systems. We outline a dual-stream model of speech processing that remedies this situation. In this model, a ventral stream processes speech signals for comprehension, and a dorsal stream maps acoustic speech signals to frontal lobe articulatory networks. The model assumes that the ventral stream is largely bilaterally organized — although there are important computational differences between the left- and right-hemisphere systems — and that the dorsal stream is strongly left-hemisphere dominant.


Nature Reviews Neuroscience | 2008

A cortical network for semantics: (de)constructing the N400.

Ellen F. Lau; Colin Phillips; David Poeppel

Measuring event-related potentials (ERPs) has been fundamental to our understanding of how language is encoded in the brain. One particular ERP response, the N400 response, has been especially influential as an index of lexical and semantic processing. However, there remains a lack of consensus on the interpretation of this component. Resolving this issue has important consequences for neural models of language comprehension. Here we show that evidence bearing on where the N400 response is generated provides key insights into what it reflects. A neuroanatomical model of semantic processing is used as a guide to interpret the pattern of activated regions in functional MRI, magnetoencephalography and intracranial recordings that are associated with contextual semantic manipulations that lead to N400 effects.


Neuron | 2007

Phase Patterns of Neuronal Responses Reliably Discriminate Speech in Human Auditory Cortex

Huan Luo; David Poeppel

How natural speech is represented in the auditory cortex constitutes a major challenge for cognitive neuroscience. Although many single-unit and neuroimaging studies have yielded valuable insights about the processing of speech and matched complex sounds, the mechanisms underlying the analysis of speech dynamics in human auditory cortex remain largely unknown. Here, we show that the phase pattern of theta band (4-8 Hz) responses recorded from human auditory cortex with magnetoencephalography (MEG) reliably tracks and discriminates spoken sentences and that this discrimination ability is correlated with speech intelligibility. The findings suggest that an approximately 200 ms temporal window (period of theta oscillation) segments the incoming speech signal, resetting and sliding to track speech dynamics. This hypothesized mechanism for cortical speech analysis is based on the stimulus-induced modulation of inherent cortical rhythms and provides further evidence implicating the syllable as a computational primitive for the representation of spoken language.


Nature Neuroscience | 2005

Hierarchical and asymmetric temporal sensitivity in human auditory cortices

Anthony Boemio; Stephen J. Fromm; Allen R. Braun; David Poeppel

Lateralization of function in auditory cortex has remained a persistent puzzle. Previous studies using signals with differing spectrotemporal characteristics support a model in which the left hemisphere is more sensitive to temporal and the right more sensitive to spectral stimulus attributes. Here we use single-trial sparse-acquisition fMRI and a stimulus with parametrically varying segmental structure affecting primarily temporal properties. We show that both left and right auditory cortices are remarkably sensitive to temporal structure. Crucially, beyond bilateral sensitivity to timing information, we uncover two functionally significant interactions. First, local spectrotemporal signal structure is differentially processed in the superior temporal gyrus. Second, lateralized responses emerge in the higher-order superior temporal sulcus, where more slowly modulated signals preferentially drive the right hemisphere. The data support a model in which sounds are analyzed on two distinct timescales, 25–50 ms and 200–300 ms.


Language | 1993

The full competence hypothesis of clause structure in early German

David Poeppel; Kenneth Wexler

We argue that young German children have the major functional sentential heads, in particular the inflectional and complementizer systems. The major empirical basis is natural production data from a 25-month-old child. We perform quantitative analyses which show that the full complement of functional categories is available to the child, and that what crucially distinguishes the child's grammar from the adult's is the use of infinitives in matrix clauses. The evidence we consider includes the child's knowledge of finiteness and verb placement, agreement, head movement, and permissible word-order variations. We examine several accounts which presuppose a degenerate grammar or which deviate from the standard analysis of German and conclude that they provide a less adequate explanation of the acquisition facts.


Philosophical Transactions of the Royal Society B | 2008

Speech perception at the interface of neurobiology and linguistics

David Poeppel; William J. Idsardi; Virginie van Wassenhove

Speech perception consists of a set of computations that take continuously varying acoustic waveforms as input and generate discrete representations that make contact with the lexical representations stored in long-term memory as output. Because the perceptual objects that are recognized by the speech perception system enter into subsequent linguistic computation, the format that is used for lexical representation and processing fundamentally constrains the speech perceptual processes. Consequently, theories of speech perception must, at some level, be tightly linked to theories of lexical representation. Minimally, speech perception must yield representations that smoothly and rapidly interface with stored lexical items. Adopting the perspective of Marr, we argue and provide neurobiological and psychophysical evidence for the following research programme. First, at the implementational level, speech perception is a multi-time resolution process, with perceptual analyses occurring concurrently on at least two time scales (approx. 20–80 ms, approx. 150–300 ms), commensurate with (sub)segmental and syllabic analyses, respectively. Second, at the algorithmic level, we suggest that perception proceeds on the basis of internal forward models, or uses an ‘analysis-by-synthesis’ approach. Third, at the computational level (in the sense of Marr), the theory of lexical representation that we adopt is principally informed by phonological research and assumes that words are represented in the mental lexicon in terms of sequences of discrete segments composed of distinctive features. One important goal of the research programme is to develop linking hypotheses between putative neurobiological primitives (e.g. temporal primitives) and those primitives derived from linguistic inquiry, to arrive ultimately at a biologically sensible and theoretically satisfying model of representation and computation in speech.


Neuron | 2007

Endogenous cortical rhythms determine cerebral specialization for speech perception and production.

Anne-Lise Giraud; Andreas Kleinschmidt; David Poeppel; Torben E. Lund; Richard S. J. Frackowiak; Helmut Laufs

Across multiple timescales, acoustic regularities of speech match rhythmic properties of both the auditory and motor systems. Syllabic rate corresponds to natural jaw-associated oscillatory rhythms, and phonemic length could reflect endogenous oscillatory auditory cortical properties. Hemispheric lateralization for speech could result from an asymmetry of cortical tuning, with left and right auditory areas differentially sensitive to spectro-temporal features of speech. Using simultaneous electroencephalographic (EEG) and functional magnetic resonance imaging (fMRI) recordings from humans, we show that spontaneous EEG power variations within the gamma range (phonemic rate) correlate best with left auditory cortical synaptic activity, while fluctuations within the theta range correlate best with that in the right. Power fluctuations in both ranges correlate with activity in the mouth premotor region, indicating coupling between temporal properties of speech perception and production. These data show that endogenous cortical rhythms provide temporal and spatial constraints on the neuronal mechanisms underlying speech perception and production.


Journal of Cognitive Neuroscience | 2000

Auditory Cortex Accesses Phonological Categories: An MEG Mismatch Study

Colin Phillips; Thomas Pellathy; Alec Marantz; Elron Yellin; Kenneth Wexler; David Poeppel; Martha McGinnis; Timothy P.L. Roberts

The studies presented here use an adapted oddball paradigm to show evidence that representations of discrete phonological categories are available to the human auditory cortex. Brain activity was recorded using a 37-channel biomagnetometer while eight subjects listened passively to synthetic speech sounds. In the phonological condition, which contrasted stimuli from an acoustic /d/-/t/ continuum, a magnetic mismatch field (MMF) was elicited in a sequence of stimuli in which phonological categories occurred in a many-to-one ratio, but no acoustic many-to-one ratio was present. In order to isolate the contribution of phonological categories to the MMF responses, the acoustic parameter of voice onset time, which distinguished standard and deviant stimuli, was also varied within the standard and deviant categories. No MMF was elicited in the acoustic condition, in which the acoustic distribution of stimuli was identical to the first experiment, but the many-to-one distribution of phonological categories was removed. The design of these studies makes it possible to demonstrate the all-or-nothing property of phonological category membership. This approach contrasts with a number of previous studies of phonetic perception using the mismatch paradigm, which have demonstrated the graded property of enhanced acoustic discrimination at or near phonetic category boundaries.


Brain and Language | 1996

A Critical Review of PET Studies of Phonological Processing

David Poeppel

The use of positron emission tomography to identify sensory and motor systems in humans in vivo has been very successful. In contrast, studies of cognitive processes have not always generated results that can be reliably interpreted. A meta-analysis of five positron emission tomography studies designed to engage phonological processing (Petersen, Fox, Posner, Mintun, & Raichle, 1989; Zatorre, Evans, Meyer, & Gjedde, 1992; Sergent, Zuck, Levesque, & MacDonald, 1992; Demonet, Chollet, Ramsay, Cardebat, Nespoulous, Wise, & Frackowiak, 1992; and Paulesu, Frith, & Frackowiak, 1993) reveals that the results do not converge as expected: very similar experiments designed to isolate the same language processes show activation in nonoverlapping cortical areas. Although these PET studies confirm the importance of left perisylvian cortex, the experiments implicate distinct, nonoverlapping perisylvian areas. Because of the divergence of results, it is premature to attribute certain language processes or the elementary computations underlying the construction of the relevant linguistic representations to specific cerebral regions on the basis of positron emission tomographic results. It is argued that this sparse-overlap result is due (1) to insufficiently detailed task decomposition and task-control matching, (2) to insufficient contact with cognitive psychology, psycholinguistics, and linguistic theory, and (3) to some inherent problems in using subtractive PET methodology to study the neural representation and processing of language.


Nature Neuroscience | 2016

Cortical tracking of hierarchical linguistic structures in connected speech

Nai Ding; Lucia Melloni; Hang Zhang; Xing Tian; David Poeppel

The most critical attribute of human language is its unbounded combinatorial nature: smaller elements can be combined into larger structures on the basis of a grammatical system, resulting in a hierarchy of linguistic units, such as words, phrases and sentences. Mentally parsing and representing such structures, however, poses challenges for speech comprehension. In speech, hierarchical linguistic structures do not have boundaries that are clearly defined by acoustic cues and must therefore be internally and incrementally constructed during comprehension. We found that, during listening to connected speech, cortical activity of different timescales concurrently tracked the time course of abstract linguistic structures at different hierarchical levels, such as words, phrases and sentences. Notably, the neural tracking of hierarchical linguistic structures was dissociated from the encoding of acoustic cues and from the predictability of incoming words. Our results indicate that a hierarchy of neural processing timescales underlies grammar-based internal construction of hierarchical linguistic structure.

Collaboration


Top co-authors of David Poeppel:

Kensuke Sekihara (Tokyo Metropolitan University)
Howard A. Rowley (University of Wisconsin-Madison)
Gregory Hickok (University of California)
Maria Chait (University College London)