Jonathan E. Peelle
Washington University in St. Louis
Publications
Featured research published by Jonathan E. Peelle.
Frontiers in Psychology | 2012
Jonathan E. Peelle; Matthew H. Davis
A key feature of speech is the quasi-regular rhythmic information contained in its slow amplitude modulations. In this article we review the information conveyed by speech rhythm, and the role of ongoing brain oscillations in listeners’ processing of this content. Our starting point is the fact that speech is inherently temporal, and that rhythmic information conveyed by the amplitude envelope contains important markers for place and manner of articulation, segmental information, and speech rate. Behavioral studies demonstrate that amplitude envelope information is relied upon by listeners and plays a key role in speech intelligibility. Extending behavioral findings, data from neuroimaging – particularly electroencephalography (EEG) and magnetoencephalography (MEG) – point to phase locking by ongoing cortical oscillations to low-frequency information (~4–8 Hz) in the speech envelope. This phase modulation effectively encodes a prediction of when important events (such as stressed syllables) are likely to occur, and acts to increase sensitivity to these relevant acoustic cues. We suggest a framework through which such neural entrainment to speech rhythm can explain effects of speech rate on word and segment perception (i.e., that the perception of phonemes and words in connected speech is influenced by preceding speech rate). Neuroanatomically, acoustic amplitude modulations are processed largely bilaterally in auditory cortex, with intelligible speech resulting in differential recruitment of left-hemisphere regions. Notable among these is lateral anterior temporal cortex, which we propose functions in a domain-general fashion to support ongoing memory and integration of meaningful input. Together, the reviewed evidence suggests that low-frequency oscillations in the acoustic speech signal form the foundation of a rhythmic hierarchy supporting spoken language, mirrored by phase-locked oscillations in the human brain.
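To make the amplitude-envelope measure concrete, the following minimal Python sketch (assuming SciPy and a mono WAV file; the 8 Hz cutoff and filter order are illustrative choices, not values taken from the article) extracts a broadband envelope via the Hilbert transform and then low-pass filters it to isolate the slow, roughly syllable-rate modulations discussed above.

    import numpy as np
    from scipy.io import wavfile
    from scipy.signal import hilbert, butter, filtfilt

    def speech_envelope(wav_path, lowpass_hz=8.0, order=4):
        """Return the low-frequency amplitude envelope of a speech recording.

        The broadband envelope is taken as the magnitude of the analytic
        signal (Hilbert transform), then low-pass filtered so that only the
        slow (roughly syllable-rate) modulations remain.
        """
        fs, signal = wavfile.read(wav_path)          # sampling rate, samples
        signal = signal.astype(np.float64)
        if signal.ndim > 1:                          # collapse stereo to mono
            signal = signal.mean(axis=1)

        envelope = np.abs(hilbert(signal))           # broadband amplitude envelope

        # Low-pass filter to keep only the slow (< ~8 Hz) modulations.
        b, a = butter(order, lowpass_hz / (fs / 2.0), btype="low")
        slow_envelope = filtfilt(b, a, envelope)
        return fs, slow_envelope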
The Journal of Neuroscience | 2011
Jonathan E. Peelle; Vanessa Troiani; Murray Grossman; Arthur Wingfield
Hearing loss is one of the most common complaints in adults over the age of 60 and a major contributor to difficulties in speech comprehension. To examine the effects of hearing ability on the neural processes supporting spoken language processing in humans, we used functional magnetic resonance imaging to monitor brain activity while older adults with age-normal hearing listened to sentences that varied in their linguistic demands. Individual differences in hearing ability predicted the degree of language-driven neural recruitment during auditory sentence comprehension in bilateral superior temporal gyri (including primary auditory cortex), thalamus, and brainstem. In a second experiment, we examined the relationship of hearing ability to cortical structural integrity using voxel-based morphometry, demonstrating a significant linear relationship between hearing ability and gray matter volume in primary auditory cortex. Together, these results suggest that even moderate declines in peripheral auditory acuity lead to a systematic downregulation of neural activity during the processing of higher-level aspects of speech, and may also contribute to loss of gray matter volume in primary auditory cortex. More generally, these findings support a resource-allocation framework in which individual differences in sensory ability help define the degree to which brain regions are recruited in service of a particular task.
The Journal of Neuroscience | 2012
Conor Wild; Afiqah Yusuf; Daryl E. Wilson; Jonathan E. Peelle; Matthew H. Davis; Ingrid S. Johnsrude
The conditions of everyday life are such that people often hear speech that has been degraded (e.g., by background noise or electronic transmission), or listen while distracted by other tasks. However, it remains unclear what role attention plays in processing speech that is difficult to understand. In the current study, we used functional magnetic resonance imaging to assess the degree to which spoken sentences were processed under distraction, and whether this depended on the acoustic quality (intelligibility) of the speech. On every trial, adult human participants attended to one of three simultaneously presented stimuli: a sentence (at one of four acoustic clarity levels), an auditory distracter, or a visual distracter. A postscan recognition test showed that clear speech was processed even when not attended, but that attention greatly enhanced the processing of degraded speech. Furthermore, speech-sensitive cortex could be parcellated according to how speech-evoked responses were modulated by attention. Responses in auditory cortex and areas along the superior temporal sulcus (STS) took the same form regardless of attention, although responses to distorted speech in portions of both posterior and anterior STS were enhanced under directed attention. In contrast, frontal regions, including left inferior frontal gyrus, were only engaged when listeners were attending to speech, and these regions exhibited elevated responses to degraded, compared with clear, speech. We suggest this response is a neural marker of effortful listening. Together, our results suggest that attention enhances the processing of degraded speech by engaging higher-order mechanisms that modulate perceptual auditory processing.
NeuroImage | 2012
Jonathan E. Peelle; Rhodri Cusack; Richard N. Henson
Results from studies that have examined age-related changes in gray matter based on structural MRI scans have not always been consistent. Reasons for this variability likely include small or unevenly-distributed samples, different methods for tissue class segmentation and spatial normalization, and the use of different statistical models. Particularly relevant to the latter is the method of adjusting for global (total) gray matter when making inferences about regionally-specific changes. In the current study, we use voxel-based morphometry (VBM) to explore the impact of these methodological choices in assessing age-related changes in gray matter volume in a sample of 420 adults evenly distributed between the ages of 18 and 77 years. At a broad level, we replicate previous findings, showing age-related gray matter decline in nearly all parts of the brain, with particularly rapid decline in inferior regions of frontal cortex (e.g., insula and left inferior frontal gyrus) and the central sulcus. Segmentation was improved by increasing the number of tissue classes and using less age-biased templates, and registration was improved by using a diffeomorphic flow-based algorithm (DARTEL) rather than a “constrained warp” approach. Importantly, different approaches to adjusting for global effects – not adjusting, Local Covariation, Global Scaling, and Local Scaling – significantly affected regionally-specific estimates of age-related decline, as demonstrated by ranking age effects across anatomical ROIs. Split-half cross-validation showed that, on average, Local Covariation explained a greater proportion of age-related variance across these ROIs than did Global Scaling. Nonetheless, the appropriate choice for global adjustment depends on one’s assumptions and specific research questions. More generally, these results emphasize the importance of being explicit about the assumptions underlying key methodological choices made in VBM analyses and the inferences that follow.
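To illustrate the distinction between two of the global-adjustment approaches named above, the following schematic Python sketch (hypothetical function and variable names; assuming NumPy and statsmodels) contrasts Global Scaling, in which regional gray matter is divided by total gray matter before regressing on age, with Local Covariation, in which total gray matter enters the regression as a covariate. It is a sketch of the two model forms, not the analysis code used in the study.

    import numpy as np
    import statsmodels.api as sm

    def age_effect_global_scaling(regional_gm, total_gm, age):
        """Global Scaling: scale regional volume by total GM, then regress on age."""
        scaled = regional_gm / total_gm
        X = sm.add_constant(age)
        return sm.OLS(scaled, X).fit().params[1]       # slope of age

    def age_effect_local_covariation(regional_gm, total_gm, age):
        """Local Covariation: total GM enters the model as a covariate (ANCOVA)."""
        X = sm.add_constant(np.column_stack([age, total_gm]))
        return sm.OLS(regional_gm, X).fit().params[1]  # age slope, adjusted for total GM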
Frontiers in Human Neuroscience | 2010
Jonathan E. Peelle; Ingrid S. Johnsrude; Matthew H. Davis
The anatomical connectivity of the primate auditory system suggests that sound perception involves several hierarchical stages of analysis (Kaas et al., 1999), raising the question of how the processes required for human speech comprehension might map onto such a system. One intriguing possibility is that earlier areas of auditory cortex respond to acoustic differences in speech stimuli, but that later areas are insensitive to such features. Providing a consistent neural response to speech content despite variation in the acoustic signal is a critical feature of “higher level” speech processing regions because it indicates they respond to categorical speech information, such as phonemes and words, rather than idiosyncratic acoustic tokens. In a recent fMRI study, Okada et al. (2010) used multi-voxel pattern analysis (MVPA) to investigate neural responses to spoken sentences in canonical auditory cortex (i.e., superior temporal cortex), using a design modeled after Scott et al. (2000). Okada et al. (2010) used a factorial design that crossed speech clarity (clear speech vs. intelligible noise vocoded speech) with frequency order (normal vs. spectrally rotated). Noise vocoding reduces the amount of spectral detail in the speech signal but faithfully preserves temporal information. Depending on the reduction in spectral resolution (i.e., the number of bands used in vocoding), noise vocoded speech can be highly intelligible, especially following training. By contrast, spectral rotation of the speech signal renders it almost entirely unintelligible without any change in overall level of spectral detail. Thus, the clear and vocoded sentences used by Okada et al. (2010) provided two physically dissimilar presentations of intelligible speech that the authors could use to identify acoustically insensitive neural responses; spectrally rotated stimuli allowed the authors to look for response changes due to intelligibility, independent of reductions in spectral detail.
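For readers unfamiliar with noise vocoding, the sketch below (assuming SciPy; the channel count and band edges are arbitrary illustrative values, not those used by Okada et al., 2010) shows the core manipulation: speech is split into a small number of frequency bands, each band's amplitude envelope is used to modulate band-limited noise, and the bands are summed, preserving temporal information while discarding fine spectral detail.

    import numpy as np
    from scipy.signal import butter, filtfilt, hilbert

    def noise_vocode(signal, fs, band_edges=(100, 500, 1500, 4000)):
        """Crude noise vocoder: replace the fine spectral structure in each band
        with envelope-modulated noise, keeping the slow temporal envelope."""
        rng = np.random.default_rng(0)
        vocoded = np.zeros_like(signal, dtype=np.float64)
        for lo, hi in zip(band_edges[:-1], band_edges[1:]):
            b, a = butter(3, [lo / (fs / 2), hi / (fs / 2)], btype="bandpass")
            band = filtfilt(b, a, signal)                              # band-limited speech
            envelope = np.abs(hilbert(band))                           # band envelope
            noise = filtfilt(b, a, rng.standard_normal(len(signal)))   # band-limited noise
            vocoded += envelope * noise                                # modulate noise by envelope
        return vocoded / (np.max(np.abs(vocoded)) + 1e-12)            # normalize amplitude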
NeuroImage | 2013
Michael F. Bonner; Jonathan E. Peelle; Philip A. Cook; Murray Grossman
Concepts bind together the features commonly associated with objects and events to form networks in long-term semantic memory. These conceptual networks are the basis of human knowledge and underlie perception, imagination, and the ability to communicate about experiences and the contents of the environment. Although it is often assumed that this distributed semantic information is integrated in higher-level heteromodal association cortices, open questions remain about the role and anatomic basis of heteromodal representations in semantic memory. Here we used combined neuroimaging evidence from functional magnetic resonance imaging (fMRI) and diffusion tensor imaging (DTI) to characterize the cortical networks underlying concept representation. Using a lexical decision task, we examined the processing of concepts in four semantic categories that varied on their sensory-motor feature associations (sight, sound, manipulation, and abstract). We found that the angular gyrus was activated across all categories regardless of their modality-specific feature associations, consistent with a heteromodal account for the angular gyrus. Exploratory analyses suggested that categories with weighted sensory-motor features additionally recruited modality-specific association cortices. Furthermore, DTI tractography identified white matter tracts connecting these regions of modality-specific functional activation with the angular gyrus. These findings are consistent with a distributed semantic network that includes a heteromodal, integrative component in the angular gyrus in combination with sensory-motor feature representations in modality-specific association cortices.
Aging Neuropsychology and Cognition | 2003
Arthur Wingfield; Jonathan E. Peelle; Murray Grossman
An experiment is reported in which young and older adults heard short English sentences that differed in syntactic complexity and speech rate. The syntactic contrast pitted center-embedded sentences with a subject-relative clause against sentences with center-embedded object-relative clauses. Speech rate was varied using computer time-compression of the speech signal. Both young and older adults showed poorer comprehension accuracy for the more complex object-relative clause sentences than subject-relative sentences, with an age difference appearing only when sentences were presented at a very rapid rate. In contrast to accuracy scores, older adults took longer than the young adults to give their comprehension responses at all speech rates tested, with this age difference amplified by both speech rate and syntactic complexity.
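As an illustration of the kind of time-compression manipulation described here, the minimal Python sketch below (assuming the librosa library; the compression rate is an arbitrary example, not one of the rates used in the experiment) speeds speech up without altering its pitch using a phase-vocoder time stretch.

    import librosa

    def time_compress(wav_path, rate=1.5):
        """Time-compress speech without altering pitch (phase-vocoder stretch).
        rate > 1 speeds the speech up; e.g. rate=1.5 plays it 50% faster."""
        y, sr = librosa.load(wav_path, sr=None)        # keep the original sampling rate
        return librosa.effects.time_stretch(y, rate=rate), sr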
The Journal of Neuroscience | 2015
Amy R. Price; Michael F. Bonner; Jonathan E. Peelle; Murray Grossman
Human thought and language rely on the brain’s ability to combine conceptual information. This fundamental process supports the construction of complex concepts from basic constituents. For example, both “jacket” and “plaid” can be represented as individual concepts, but they can also be integrated to form the more complex representation “plaid jacket.” Although this process is central to the expression and comprehension of language, little is known about its neural basis. Here we present evidence for a neuroanatomic model of conceptual combination from three experiments. We predicted that the highly integrative region of heteromodal association cortex in the angular gyrus would be critical for conceptual combination, given its anatomic connectivity and its strong association with semantic memory in functional neuroimaging studies. Consistent with this hypothesis, we found that the process of combining concepts to form meaningful representations specifically modulates neural activity in the angular gyrus of healthy adults, independent of the modality of the semantic content integrated. We also found that individual differences in the structure of the angular gyrus in healthy adults are related to variability in behavioral performance on the conceptual combination task. Finally, in a group of patients with neurodegenerative disease, we found that the degree of atrophy in the angular gyrus is specifically related to impaired performance on combinatorial processing. These converging anatomic findings are consistent with a critical role for the angular gyrus in conceptual combination.
NeuroImage | 2010
Jonathan E. Peelle; Rowena J. Eason; Sebastian Schmitter; Christian Schwarzbauer; Matthew H. Davis
Echoplanar MRI is associated with significant acoustic noise, which can interfere with the presentation of auditory stimuli, create a more challenging listening environment, and increase discomfort felt by participants. Here we investigate a scanning sequence that significantly reduces the amplitude of acoustic noise associated with echoplanar imaging (EPI). This is accomplished using a constant phase encoding gradient and a sinusoidal readout echo train to produce a narrow-band acoustic frequency spectrum, which is adapted to the scanner’s frequency response function by choosing an optimum gradient switching frequency. To evaluate the effect of these nonstandard parameters we conducted a speech experiment comparing four different EPI sequences: Quiet, Sparse, Standard, and Matched Standard (using the same readout duration as Quiet). For each sequence participants listened to sentences and signal-correlated noise (SCN), which provides an unintelligible amplitude-matched control condition. We used BOLD sensitivity maps to quantify sensitivity loss caused by the longer EPI readout duration used in the Quiet and Matched Standard EPI sequences. We found that the Quiet sequence provided more robust activation for SCN in primary auditory areas and comparable activation in frontal and temporal regions for Sentences > SCN, but less sentence-related activity in inferotemporal cortex. The increased listening effort associated with the louder Standard sequence relative to the Quiet sequence resulted in increased activation in the left temporal and inferior parietal cortices. Together, these results suggest that the Quiet sequence is suitable, and perhaps preferable, for many auditory studies. However, its applicability depends on the specific brain regions of interest.
Frontiers in Human Neuroscience | 2012
Jonathan E. Peelle
A recurring question in neuroimaging studies of spoken language is whether speech is processed largely bilaterally, or whether the left hemisphere plays a more dominant role (cf. Hickok and Poeppel, 2007; Rauschecker and Scott, 2009). Although questions regarding underlying mechanisms are certainly of interest, the discussion unfortunately gets sidetracked by the imprecise use of the word “speech”: by being more explicit about the type of cognitive and linguistic processing to which we are referring, it may be possible to reconcile many of the disagreements present in the literature.