Patrick C. M. Wong
The Chinese University of Hong Kong
Publications
Featured research published by Patrick C. M. Wong.
Journal of Cognitive Neuroscience | 2008
Judy H. Song; Erika Skoe; Patrick C. M. Wong; Nina Kraus
Peripheral and central structures along the auditory pathway contribute to speech processing and learning. However, because speech requires the use of functionally and acoustically complex sounds, which places high sensory and cognitive demands on the listener, long-term exposure to and experience with these sounds are often attributed to the neocortex, with little emphasis placed on subcortical structures. The present study examines changes in the auditory brainstem, specifically the frequency-following response (FFR), as native English-speaking adults learn to incorporate foreign speech sounds (lexical pitch patterns) in word identification. The FFR presumably originates from the auditory midbrain and can be elicited preattentively. We measured FFRs to the trained pitch patterns before and after training, and measures of pitch tracking were then derived from the FFR signals. We found increased pitch-tracking accuracy after training, including a decrease in the number of pitch-tracking errors and a refinement in the energy devoted to encoding pitch. Most interestingly, this change in pitch-tracking accuracy occurred only for the most acoustically complex pitch contour (the dipping contour), which is also the least familiar to our English-speaking subjects. These results not only demonstrate the contribution of the brainstem to language learning and its plasticity in adulthood but also the specificity of this contribution (i.e., changes in encoding occur only for specific, least-familiar stimuli, not all stimuli). Our findings complement existing data showing cortical changes after second-language learning and are consistent with models suggesting that brainstem changes resulting from perceptual learning are most apparent when acuity in encoding is most needed.
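Pitch-tracking measures like those derived from the FFR can be approximated with a short-time autocorrelation pitch tracker. The sketch below is a minimal illustration only, assuming NumPy; the frame sizes, sampling rate, and the synthetic "dipping" contour are invented for the example and are not the study's actual stimuli or analysis parameters.

```python
import numpy as np

def autocorr_f0(frame, fs, fmin=80.0, fmax=400.0):
    """Estimate F0 of one frame by picking the autocorrelation peak
    within the plausible pitch-lag range."""
    frame = frame - frame.mean()
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lo, hi = int(fs / fmax), int(fs / fmin)
    lag = lo + np.argmax(ac[lo:hi])
    return fs / lag

def pitch_track(signal, fs, frame_ms=40, hop_ms=10):
    """Frame-by-frame F0 estimates (a crude stand-in for FFR pitch tracking)."""
    n, h = int(fs * frame_ms / 1000), int(fs * hop_ms / 1000)
    return np.array([autocorr_f0(signal[i:i + n], fs)
                     for i in range(0, len(signal) - n, h)])

# Synthetic "dipping" contour: F0 falls from 120 Hz to 90 Hz and rises back.
fs, dur = 16000, 0.5
t = np.arange(int(fs * dur)) / fs
f0_true = 105 + 15 * np.cos(2 * np.pi * t / dur)
phase = 2 * np.pi * np.cumsum(f0_true) / fs
signal = np.sin(phase)

est = pitch_track(signal, fs)
# Tracking error: mean absolute deviation between the extracted track and
# the known contour, sampled at frame centers.
hop, frame = int(fs * 0.010), int(fs * 0.040)
centers = np.arange(len(est)) * hop + frame // 2
err = np.mean(np.abs(est - f0_true[centers]))
print(f"mean absolute pitch-tracking error: {err:.1f} Hz")
```

A smaller error here plays the role of "increased pitch-tracking accuracy"; in the study itself the comparison is between brainstem responses recorded before and after training, not between synthetic signals.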
Neuropsychologia | 2009
Patrick C. M. Wong; James Xumin Jin; Geshri M. Gunasekera; Rebekah Abel; Edward R. Lee; Sumitrajit Dhar
Spoken language processing in noisy environments, a hallmark of the human brain, is subject to age-related decline, even when peripheral hearing is relatively intact. The present study examines the cortical hemodynamics (measured by fMRI) associated with such processing in the aging brain. Younger and older subjects identified single words in quiet and in two multi-talker babble noise conditions (SNR 20 and −5 dB). Behaviorally, older and younger subjects did not differ significantly in the first two conditions, but older adults performed less accurately in the SNR −5 dB condition. The fMRI results showed reduced activation in the auditory cortex but increased activation in working memory- and attention-related cortical areas (prefrontal and precuneus regions) in older subjects, especially in the SNR −5 dB condition. Increased activity in general cognitive regions was positively correlated with behavioral performance in older listeners, suggestive of a compensatory strategy. Furthermore, inter-regional correlations revealed that while younger subjects showed a more streamlined cortical network of auditory regions in response to spoken word processing in noise, older subjects showed a more diffuse network involving frontal and ventral brain regions. These results are consistent with the decline-compensation hypothesis and suggest its applicability to the auditory domain.
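The two babble conditions differ only in how the noise is scaled relative to the speech. A minimal sketch of setting a target SNR in dB, assuming NumPy; the "speech" and "babble" signals here are synthetic placeholders, not the study's word and multi-talker stimuli:

```python
import numpy as np

def mix_at_snr(speech, noise, snr_db):
    """Scale `noise` so the speech-to-noise power ratio equals `snr_db`
    (in dB), then return the mixture."""
    p_speech = np.mean(speech ** 2)
    p_noise = np.mean(noise ** 2)
    # Required noise power: P_speech / 10^(SNR/10)
    scale = np.sqrt(p_speech / (p_noise * 10 ** (snr_db / 10)))
    return speech + scale * noise

rng = np.random.default_rng(0)
speech = np.sin(2 * np.pi * 220 * np.arange(16000) / 16000)  # stand-in "word"
babble = rng.standard_normal(16000)                          # stand-in noise

easy = mix_at_snr(speech, babble, 20.0)   # speech dominates
hard = mix_at_snr(speech, babble, -5.0)   # noise dominates
```

At +20 dB the speech carries 100× the noise power; at −5 dB the noise carries roughly 3× the speech power, which is why the harder condition separates the age groups behaviorally.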
Human Brain Mapping | 2007
Patrick C. M. Wong; Tyler K. Perrachione; Todd B. Parrish
A remarkable characteristic of the human nervous system is its ability to learn to integrate novel (foreign) complex sounds into words. However, the neural changes involved in how adults learn to integrate novel sounds into words, and the associated individual differences, are largely unknown. Unlike English, most languages of the world use pitch patterns to mark individual word meaning. We report a study assessing the neural correlates of learning to use such pitch patterns in words by English-speaking adults who had no previous exposure to this usage. Before and after training, subjects discriminated pitch patterns of the words they learned while blood oxygenation levels were measured using fMRI. Subjects who mastered the learning program showed increased activation in the left posterior superior temporal region after training, while subjects who plateaued at lower levels showed increased activation in the right superior temporal region and right inferior frontal gyrus, which are associated with nonlinguistic pitch processing, and in prefrontal and medial frontal areas, which are associated with increased working memory and attentional effort. Furthermore, we found brain activation differences between the two subject groups even before training, including in the superior temporal region. These results demonstrate an association between the range of neural changes and the degree of language learning, specifically implicating the physiologic contribution of the left dorsal auditory cortex in learning success.
The Journal of Neuroscience | 2004
Patrick C. M. Wong; Lawrence M. Parsons; Michael J. Martinez; Randy L. Diehl
Auditory pitch patterns are significant ecological features to which nervous systems have exquisitely adapted. Pitch patterns are found embedded in many contexts, enabling different information-processing goals. Do the psychological functions of pitch patterns determine the neural mechanisms supporting their perception, or do all pitch patterns, regardless of function, engage the same mechanisms? This issue is pursued in the present study by using ¹⁵O-water positron emission tomography to study brain activations while two subject groups discriminate pitch patterns in their respective native languages, one of which is a tonal language and the other of which is not. In a tonal language, pitch patterns signal lexical meaning. Native Mandarin-speaking and English-speaking listeners discriminated pitch patterns embedded in Mandarin and English words and also passively listened to the same stimuli. When Mandarin listeners discriminated pitch embedded in Mandarin lexical tones, the left anterior insular cortex was most active. When they discriminated pitch patterns embedded in English words, the homologous area in the right hemisphere was activated, as it was in English-speaking listeners discriminating pitch patterns embedded in either Mandarin or English words. These results support the view that neural responses to physical acoustic stimuli depend on the function of those stimuli and implicate the anterior insular cortex in auditory processing, with the left insular cortex especially responsive to linguistic stimuli.
Human Brain Mapping | 2009
Elizabeth Hellmuth Margulis; Lauren M. Mlsna; Ajith K. Uppunda; Todd B. Parrish; Patrick C. M. Wong
To appropriately adapt to constant sensory stimulation, neurons in the auditory system are tuned to various acoustic characteristics, such as center frequencies, frequency modulations, and their combinations, particularly those combinations that carry species-specific communicative functions. The present study asks whether such tunings extend beyond acoustic and communicative functions to auditory self-relevance and expertise. More specifically, we examined the role of the listening biography (an individual's long-term experience with a particular type of auditory input) in perceptual-neural plasticity. Two groups of expert instrumentalists (violinists and flutists) listened to matched musical excerpts played on the two instruments (J.S. Bach Partitas for solo violin and flute) while their cerebral hemodynamic responses were measured using fMRI. Our experimental design allowed for a comprehensive investigation of the neurophysiology (cerebral hemodynamic responses as measured by fMRI) of auditory expertise (i.e., when violinists listened to violin music and when flutists listened to flute music) and nonexpertise (i.e., when subjects listened to music played on the other instrument). We found an extensive cerebral network of expertise, implicating increased sensitivity to musical syntax (BA 44), timbre (auditory association cortex), and sound-motor interactions (precentral gyrus) when listening to music played on the instrument of expertise (the instrument for which subjects had a unique listening biography). These findings highlight auditory self-relevance and expertise as a mechanism of perceptual-neural plasticity and implicate neural tuning that includes and extends beyond acoustic and communication-relevant structures.
Brain Research Bulletin | 2002
Patrick C. M. Wong
Pitch is used to signal different aspects of language, such as speaker identity, intonation, emphatic stress, and word identity (as signaled by lexical tones). This article reviews research investigating hemispheric specialization for these pitch patterns in the context of two competing hypotheses. The functional hypothesis states that pitch patterns are lateralized to different hemispheres of the brain depending on their functions: those that carry a greater linguistic load (e.g., lexical tones) are lateralized to the left hemisphere, while those that carry a lesser linguistic load (e.g., intonation patterns signaling affective moods) are lateralized to the right hemisphere. The alternative, acoustic hypothesis states that all pitch patterns, regardless of their functions, are lateralized to one hemisphere (the right hemisphere in particular). Although most researchers support the functional hypothesis, a comprehensive review, which includes lesion, dichotic-listening, and functional imaging studies of different types of pitch patterns, does not support this view. Moreover, little evidence exists for the alternative hypothesis. Possible methodological problems of these studies, alternative hypotheses, and considerations for future research are noted.
The Journal of Neuroscience | 2011
Francis C. K. Wong; Bharath Chandrasekaran; Kyla Garibaldi; Patrick C. M. Wong
According to the dual-stream model of auditory language processing, the dorsal stream is responsible for mapping sound to articulation, and the ventral stream plays the role of mapping sound to meaning. Most researchers agree that the arcuate fasciculus (AF) is the neuroanatomical correlate of the dorsal stream; however, less is known about what constitutes the ventral one. Nevertheless, two hypotheses exist: one suggests that the segment of the AF that terminates in the middle temporal gyrus corresponds to the ventral stream, and the other suggests that it is the extreme capsule that underlies this sound-to-meaning pathway. The goal of this study was to evaluate these two competing hypotheses. We trained participants with a sound-to-word learning paradigm in which they learned to use a foreign phonetic contrast for signaling word meaning. Using diffusion tensor imaging, a brain-imaging tool for investigating white matter connectivity in humans, we found that fractional anisotropy in the left parietal-temporal region positively correlated with performance in sound-to-word learning. In addition, fiber tracking revealed a ventral pathway, composed of the extreme capsule and the inferior longitudinal fasciculus, that mediated auditory comprehension. Our findings provide converging evidence supporting the importance of the ventral stream, an extreme capsule system, in the frontal-temporal language network. Implications for current models of speech processing are also discussed.
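The reported link between fractional anisotropy and learning success is, at its core, a brain-behavior correlation across participants. A minimal Pearson-correlation sketch, assuming NumPy; the per-participant FA and accuracy values below are invented purely for illustration and bear no relation to the study's data:

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation between two equal-length 1-D arrays."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xz = (x - x.mean()) / x.std()
    yz = (y - y.mean()) / y.std()
    return float(np.mean(xz * yz))

# Hypothetical per-participant values: FA in a left parietal-temporal ROI
# and sound-to-word learning accuracy (both fabricated for the example).
fa = [0.41, 0.44, 0.39, 0.47, 0.43, 0.50, 0.38, 0.46]
accuracy = [0.55, 0.62, 0.48, 0.74, 0.60, 0.81, 0.45, 0.70]
r = pearson_r(fa, accuracy)
print(f"r = {r:.2f}")
```

A positive r of this kind is what "fractional anisotropy ... positively correlated with performance" amounts to; the study would additionally assess significance and control for confounds, which this sketch omits.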
Journal of the Acoustical Society of America | 2010
Bharath Chandrasekaran; Padma D. Sampath; Patrick C. M. Wong
Speech sound patterns can be discerned using multiple acoustic cues. The relative weighting of these cues is known to be language-specific. Speech-sound training in adults induces changes in cue weighting such that relevant acoustic cues are emphasized. The current study examined the extent to which individual variability in cue weighting contributes to differential success in learning to use foreign sound patterns. Sixteen English-speaking adult participants underwent a sound-to-meaning training paradigm during which they learned to incorporate Mandarin linguistic pitch contours into words. In addition to cognitive tests, measures of pitch pattern discrimination and identification were collected from all participants. Reaction time data from the discrimination task were subjected to three-way multidimensional scaling to extract the dimensions underlying tone perception. Two dimensions, relating to pitch height and pitch direction, were found to underlie the non-native tone space. Good learners attended more to pitch direction than poor learners did, both before and after training. Training increased the ability to identify and label pitch direction. The results demonstrate that variability in the ability to successfully learn to use pitch in lexical contexts can be explained by pre-training differences in cue weighting.
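Multidimensional scaling embeds a matrix of perceptual dissimilarities (here, derived from reaction times) into a low-dimensional "tone space". The self-contained NumPy sketch below implements classical (Torgerson) MDS as an illustration; the dissimilarity matrix is fabricated, and the study used a three-way (individual-differences) variant rather than this simple two-way version:

```python
import numpy as np

def classical_mds(d, k=2):
    """Classical (Torgerson) MDS: embed a symmetric dissimilarity
    matrix `d` into `k` dimensions via double-centering."""
    d = np.asarray(d, float)
    n = d.shape[0]
    j = np.eye(n) - np.ones((n, n)) / n        # centering matrix
    b = -0.5 * j @ (d ** 2) @ j                # double-centered Gram matrix
    vals, vecs = np.linalg.eigh(b)
    order = np.argsort(vals)[::-1][:k]         # largest eigenvalues first
    return vecs[:, order] * np.sqrt(np.maximum(vals[order], 0))

# Fabricated dissimilarities among four pitch contours
# (level, rising, dipping, falling); larger = perceived as more different.
d = np.array([[0.0, 0.9, 0.7, 0.8],
              [0.9, 0.0, 0.6, 1.0],
              [0.7, 0.6, 0.0, 0.5],
              [0.8, 1.0, 0.5, 0.0]])
coords = classical_mds(d, k=2)
print(coords.shape)   # one 2-D point per tone
```

Interpreting the recovered axes as "pitch height" and "pitch direction" is an empirical step the study performs by relating the dimensions to the stimuli; the algorithm itself only produces unlabeled coordinates.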
Neuropsychologia | 2007
Tyler K. Perrachione; Patrick C. M. Wong
Brain imaging studies of voice perception often contrast activation from vocal and verbal tasks to identify regions uniquely involved in processing voice. However, such a strategy precludes detection of the functional relationship between speech and voice perception. In a pair of experiments involving identifying voices from native- and foreign-language speech, we show that, even after repeated exposure to the same foreign-language speakers, accurate talker identification is in large part dependent on linguistic proficiency. These results suggest that a strong integration between the brain regions implicated in voice perception and speech perception accounts for the accurate identification of talkers.
Journal of Cognitive Neuroscience | 2004
Patrick C. M. Wong; Howard C. Nusbaum; Steven L. Small
To recognize phonemes across variation in talkers, listeners can use information about vocal characteristics, a process referred to as talker normalization. The present study investigates the cortical mechanisms underlying talker normalization using fMRI. Listeners recognized target words presented either in a spoken list produced by a single talker or in a mix of different talkers. Both conditions activated an extensive cortical network. However, recognizing words in the mixed-talker condition, relative to the blocked-talker condition, activated middle/superior temporal and superior parietal regions to a greater degree. This temporal-parietal network is possibly associated with selectively attending to and processing the spectral and spatial acoustic cues required for recognizing speech in a mixed-talker condition.