Hwee Ling Lee
Max Planck Society
Publications
Featured research published by Hwee Ling Lee.
NeuroImage | 2001
Michael Wei-Liang Chee; Nicholas Hsueh Hsien Hon; Hwee Ling Lee; Chun Siong Soon
The effect of relative language proficiency on the spatial distribution and magnitude of BOLD signal change was evaluated by studying two groups of right-handed English-Mandarin bilingual participants with contrasting language proficiencies as they made semantic judgments with words and characters. Greater language proficiency corresponded to shorter response times and greater accuracy in the semantic judgment task. Within the left prefrontal and parietal regions, the change in BOLD signal was smaller in a participant's more proficient language. The least proficient performance was associated with right, in addition to left, inferior frontal activation. The results highlight the importance of taking into consideration the nature of the task and relative language proficiency when drawing inferences from functional imaging studies of bilinguals.
The Journal of Neuroscience | 2007
Hwee Ling Lee; Joseph T. Devlin; Clare Shakeshaft; Lauren Stewart; Amanda Brennan; Jen Glensman; Katherine Pitcher; Jenny Crinion; Andrea Mechelli; Richard S. J. Frackowiak; David W. Green; Cathy J. Price
A surprising discovery in recent years is that the structure of the adult human brain changes when a new cognitive or motor skill is learned. This effect is seen as a change in local gray or white matter density that correlates with behavioral measures. Critically, however, the cognitive and anatomical mechanisms underlying these learning-related structural brain changes remain unknown. Here, we combined brain imaging, detailed behavioral analyses, and white matter tractography in English-speaking monolingual adolescents to show that a critical linguistic prerequisite (namely, knowledge of vocabulary) is proportionately related to relative gray matter density in bilateral posterior supramarginal gyri. The effect was specific to the number of words learned, regardless of verbal fluency or other cognitive abilities. The identified region was found to have direct connections to other inferior parietal areas that separately process either the sounds of words or their meanings, suggesting that the posterior supramarginal gyrus plays a role in linking the basic components of vocabulary knowledge. Together, these analyses highlight the cognitive and anatomical mechanisms that mediate an essential language skill.
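The specificity claim here (vocabulary relates to gray matter density over and above other cognitive abilities) amounts to a partial correlation. Below is a minimal sketch of that idea using simulated data; the variable names, subject count, and values are hypothetical and not taken from the study.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 32  # hypothetical number of adolescent participants

# Simulated per-subject measures (all values invented for illustration).
vocabulary = rng.normal(100, 15, n)                    # words known
other_cognition = rng.normal(0, 1, (n, 2))             # e.g. verbal fluency, IQ
gm_density = 0.02 * vocabulary + rng.normal(0, 1, n)   # gray matter density in a ROI

def residualize(y, covariates):
    """Remove the least-squares fit of the covariates (plus intercept) from y."""
    X = np.column_stack([np.ones(len(y)), covariates])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return y - X @ beta

# Partial correlation: correlate the parts of vocabulary and gray matter
# density that are not explained by the other cognitive measures.
vocab_res = residualize(vocabulary, other_cognition)
gm_res = residualize(gm_density, other_cognition)
partial_r = np.corrcoef(vocab_res, gm_res)[0, 1]
print(f"partial correlation (vocabulary ~ GM density | covariates): {partial_r:.2f}")
```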
Cerebral Cortex | 2012
O Parker Jones; David W. Green; Alice Grogan; Christos Pliatsikas; K Filippopolitis; N Ali; Hwee Ling Lee; S. Ramsden; K Gazarian; Susan Prejawa; Mohamed L. Seghier; Cathy J. Price
Using functional magnetic resonance imaging, we found that when bilinguals named pictures or read words aloud, in their native or nonnative language, activation was higher relative to monolinguals in 5 left hemisphere regions: dorsal precentral gyrus, pars triangularis, pars opercularis, superior temporal gyrus, and planum temporale. We further demonstrate that these areas are sensitive to increasing demands on speech production in monolinguals. This suggests that the advantage of being bilingual comes at the expense of increased work in brain areas that support monolingual word processing. By comparing the effect of bilingualism across a range of tasks, we argue that activation is higher in bilinguals compared with monolinguals because word retrieval is more demanding; articulation of each word is less rehearsed; and speech output needs careful monitoring to avoid errors when competition for word selection occurs between, as well as within, languages.
Journal of Cognitive Neuroscience | 2003
Michael Wei-Liang Chee; Chun Siong Soon; Hwee Ling Lee
The effect of word repetition within and across languages was studied in English-Chinese bilinguals who read rapidly presented word pairs in a block design and an event-related fMRI study. Relatively less increase in MR signal was observed when the second word in a pair was identical in meaning to the first. This occurred in the English-only and mixed-languages conditions. Repetition-induced reductions in BOLD signal change were found in the left lateral prefrontal and lateral temporal regions in both types of conditions in the block experiment, suggesting that processing in these regions is sensitive to semantic features present in words and characters, and that part of the semantic neuronal networks serving English and Chinese is shared. In addition, these regions showed greater absolute signal change in the mixed-languages trials relative to the English-only trials. These findings were mostly replicated in an event-related experiment. Together, the experiments suggest that while the networks for Chinese and English word processing have shared components, there are also components that may be language specific.
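As a rough illustration of how such a repetition-induced reduction in BOLD signal might be quantified, the sketch below compares per-subject signal change for novel versus semantically repeated word pairs in one region with a paired t-test. The subject count and values are simulated, not the study's data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_subjects = 20  # hypothetical

# Simulated percent BOLD signal change in a left prefrontal ROI.
novel_pairs = rng.normal(0.60, 0.15, n_subjects)
repeated_pairs = novel_pairs - rng.normal(0.15, 0.05, n_subjects)  # suppression

# Repetition suppression = novel minus repeated, tested within subjects.
t, p = stats.ttest_rel(novel_pairs, repeated_pairs)
print(f"mean suppression: {np.mean(novel_pairs - repeated_pairs):.3f} %signal")
print(f"paired t({n_subjects - 1}) = {t:.2f}, p = {p:.2g}")
```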
Brain | 2010
Wei Hu; Hwee Ling Lee; Qiang Zhang; Tao Liu; Li Bo Geng; Mohamed L. Seghier; Clare Shakeshaft; Tae Twomey; David W. Green; Yi Ming Yang; Cathy J. Price
Previous neuroimaging studies have suggested that developmental dyslexia has a different neural basis in Chinese and English populations because of known differences in the processing demands of the Chinese and English writing systems. Here, using functional magnetic resonance imaging, we provide the first direct statistically based investigation into how the effect of dyslexia on brain activation is influenced by the Chinese and English writing systems. Brain activation for semantic decisions on written words was compared in English dyslexics, Chinese dyslexics, English normal readers and Chinese normal readers, while controlling for all other experimental parameters. By investigating the effects of dyslexia and language in one study, we show common activation in Chinese and English dyslexics despite different activation in Chinese versus English normal readers. The effect of dyslexia in both languages was observed as less than normal activation in the left angular gyrus and in left middle frontal, posterior temporal and occipitotemporal regions. Differences in Chinese and English normal reading were observed as increased activation for Chinese relative to English in the left inferior frontal sulcus; and increased activation for English relative to Chinese in the left posterior superior temporal sulcus. These cultural differences were not observed in dyslexics who activated both left inferior frontal sulcus and left posterior superior temporal sulcus, consistent with the use of culturally independent strategies when reading is less efficient. By dissociating the effect of dyslexia from differences in Chinese and English normal reading, our results reconcile brain activation results with a substantial body of behavioural studies showing commonalities in the cognitive manifestation of dyslexia in Chinese and English populations. They also demonstrate the influence of cognitive ability and learning environment on a common neural system for reading.
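The group comparison described here has a 2 x 2 structure (normal vs. dyslexic reader, Chinese vs. English). A toy sketch of the corresponding contrasts on regional activation means is below; the cell values are invented purely for illustration.

```python
import numpy as np

# Hypothetical mean activation (arbitrary units) in one region,
# rows = reader group, columns = language.
#                 Chinese  English
means = np.array([[1.00,    1.05],   # normal readers
                  [0.60,    0.62]])  # dyslexic readers

# Main effect of dyslexia: average over languages, normal minus dyslexic.
effect_of_dyslexia = means[0].mean() - means[1].mean()

# Group-by-language interaction: does the language difference depend on group?
interaction = (means[0, 0] - means[0, 1]) - (means[1, 0] - means[1, 1])

print(f"main effect of dyslexia (normal - dyslexic): {effect_of_dyslexia:.3f}")
print(f"group x language interaction: {interaction:.3f}")
```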
Proceedings of the National Academy of Sciences of the United States of America | 2011
Hwee Ling Lee; Uta Noppeney
Practicing a musical instrument is a rich multisensory experience involving the integration of visual, auditory, and tactile inputs with motor responses. This combined psychophysics–fMRI study used the musicians brain to investigate how sensory-motor experience molds temporal binding of auditory and visual signals. Behaviorally, musicians exhibited a narrower temporal integration window than nonmusicians for music but not for speech. At the neural level, musicians showed increased audiovisual asynchrony responses and effective connectivity selectively for music in a superior temporal sulcus-premotor-cerebellar circuitry. Critically, the premotor asynchrony effects predicted musicians’ perceptual sensitivity to audiovisual asynchrony. Our results suggest that piano practicing fine tunes an internal forward model mapping from action plans of piano playing onto visible finger movements and sounds. This internal forward model furnishes more precise estimates of the relative audiovisual timings and hence, stronger prediction error signals specifically for asynchronous music in a premotor-cerebellar circuitry. Our findings show intimate links between action production and audiovisual temporal binding in perception.
NeuroImage | 2008
Mohamed L. Seghier; Hwee Ling Lee; Thomas M. Schofield; Caroline Ellis; Cathy J. Price
Cognitive models of reading predict that high-frequency regular words can be read in more than one way. We investigated this hypothesis using functional MRI and covariance analysis in 43 healthy skilled readers. Our results dissociated two sets of regions that were differentially engaged across subjects who were reading the same familiar words. Some subjects showed more activation in left inferior frontal and anterior occipito-temporal regions, while other subjects showed more activation in right inferior parietal and left posterior occipito-temporal regions. To explore the behavioural correlates of these systems, we measured the difference between reading speed for irregularly spelled words relative to pseudowords outside the scanner in fifteen of our subjects and correlated this measure with fMRI activation for reading familiar words. The faster the lexical reading, the greater the activation in left posterior occipito-temporal and right inferior parietal regions. Conversely, the slower the lexical reading, the greater the activation in left anterior occipito-temporal and left ventral inferior frontal regions. Thus, the double dissociation in irregular and pseudoword reading behaviour predicted the double dissociation in neuronal activation for reading familiar words. We discuss the implications of these results, which may be important for understanding how reading is learnt in childhood or re-learnt following brain damage in adulthood.
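A minimal sketch of the across-subject correlation described here: relate each reader's behavioural difference score (irregular-word versus pseudoword reading speed) to activation for familiar-word reading in regions expected to dissociate. Subject count, region names, and values are simulated assumptions, not the study's data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n = 15  # hypothetical: subjects with both behavioural and fMRI measures

# Behavioural measure: reading-speed advantage for irregular words over
# pseudowords (ms); larger values = faster "lexical" reading.
lexical_advantage = rng.normal(0, 50, n)

# Simulated activation for reading familiar words in two regions that are
# expected to dissociate (values made up for illustration).
left_post_occipitotemporal = 0.004 * lexical_advantage + rng.normal(0, 0.2, n)
left_ant_occipitotemporal = -0.004 * lexical_advantage + rng.normal(0, 0.2, n)

for name, roi in [("left posterior OT", left_post_occipitotemporal),
                  ("left anterior OT", left_ant_occipitotemporal)]:
    r, p = stats.pearsonr(lexical_advantage, roi)
    print(f"{name}: r = {r:+.2f}, p = {p:.2g}")
```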
Frontiers in Psychology | 2014
Hwee Ling Lee; Uta Noppeney
This psychophysics study used musicians as a model to investigate whether musical expertise shapes the temporal integration window for audiovisual speech, sinewave speech, or music. Musicians and non-musicians judged the audiovisual synchrony of speech, sinewave analogs of speech, and music stimuli at 13 audiovisual stimulus onset asynchronies (±360, ±300, ±240, ±180, ±120, ±60, and 0 ms). Further, we manipulated the duration of the stimuli by presenting sentences/melodies or syllables/tones. Critically, musicians relative to non-musicians exhibited significantly narrower temporal integration windows for both music and sinewave speech. Further, the temporal integration window for music decreased with the amount of music practice, but not with age of acquisition. In other words, the more musicians practiced piano in the past 3 years, the more sensitive they became to the temporal misalignment of visual and auditory signals. Collectively, our findings demonstrate that music practicing fine-tunes the audiovisual temporal integration window to various extents depending on the stimulus class. While the effect of piano practicing was most pronounced for music, it also generalized to other stimulus classes such as sinewave speech and, to a marginally significant degree, to natural speech.
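One common way to estimate a temporal integration window from such a task is to fit a Gaussian to the proportion of "synchronous" judgments across the 13 stimulus onset asynchronies and take its width as the window estimate. A sketch under simulated data (the response proportions below are invented, not the study's measurements):

```python
import numpy as np
from scipy.optimize import curve_fit

# The 13 audiovisual onset asynchronies used in the task (ms).
soas = np.array([-360, -300, -240, -180, -120, -60, 0,
                 60, 120, 180, 240, 300, 360], dtype=float)

# Hypothetical proportion of "synchronous" responses at each SOA
# (made up for illustration; a real fit would use observed judgments).
p_sync = np.array([0.05, 0.10, 0.20, 0.45, 0.75, 0.95, 0.98,
                   0.96, 0.80, 0.55, 0.30, 0.15, 0.08])

def gaussian(soa, amplitude, mu, sigma):
    """Gaussian synchrony-judgment curve centred near physical synchrony."""
    return amplitude * np.exp(-0.5 * ((soa - mu) / sigma) ** 2)

params, _ = curve_fit(gaussian, soas, p_sync, p0=[1.0, 0.0, 150.0])
amplitude, mu, sigma = params

# Full width at half maximum as one summary of the temporal integration window.
fwhm = 2.0 * np.sqrt(2.0 * np.log(2.0)) * sigma
print(f"centre = {mu:.0f} ms, width (FWHM) = {fwhm:.0f} ms")
```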
Journal of Cognitive Neuroscience | 2010
Marinella Cappelletti; Hwee Ling Lee; Elliot Freeman; Cathy J. Price
Neuropsychological and functional imaging studies have associated the conceptual processing of numbers with bilateral parietal regions (including the intraparietal sulcus). However, the processes driving these effects remain unclear because both left and right posterior parietal regions are activated by many other conceptual, perceptual, attention, and response-selection processes. To dissociate parietal activation that is number-selective from parietal activation related to other stimulus or response-selection processes, we used fMRI to compare numbers and object names during exactly the same conceptual and perceptual tasks while factoring out activations correlating with response times. We found that right parietal activation was higher for conceptual decisions on numbers relative to the same tasks on object names, even when response time effects were fully factored out. In contrast, the left parietal regions activated by numbers were equally involved in conceptual processing of object names. We suggest that left parietal activation for numbers reflects a range of processes, including the retrieval of learnt facts that are also involved in conceptual decisions on object names. In contrast, number selectivity in right parietal cortex reflects processes that are more involved in conceptual decisions on numbers than object names. Our results generate a new set of hypotheses that have implications for the design of future behavioral and functional imaging studies of patients with left and right parietal damage.
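One way to read "factoring out activations correlating with response times" is a covariate-adjusted comparison: model regional activation as a function of condition (numbers vs. object names) plus response time, and inspect the condition effect once RT is accounted for. A hypothetical sketch with simulated values and assumed variable names:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 40  # hypothetical: 20 number trials + 20 object-name trials

condition = np.repeat([1.0, 0.0], n // 2)   # 1 = numbers, 0 = object names
rt = rng.normal(700, 80, n)                 # response times (ms)
# Simulated right parietal activation: depends on both condition and RT.
activation = 0.3 * condition + 0.002 * rt + rng.normal(0, 0.1, n)

# Linear model: activation ~ intercept + condition + RT.
X = np.column_stack([np.ones(n), condition, rt])
beta, *_ = np.linalg.lstsq(X, activation, rcond=None)

print(f"condition effect (numbers - object names), RT held constant: {beta[1]:.3f}")
print(f"RT slope: {beta[2]:.4f} per ms")
```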
The Journal of Neuroscience | 2011
Hwee Ling Lee; Uta Noppeney
Face-to-face communication challenges the human brain to integrate information from auditory and visual senses with linguistic representations. Yet the roles of bottom-up physical input (spectrotemporal structure) and top-down linguistic constraints in shaping the neural mechanisms specialized for integrating audiovisual speech signals are currently unknown. Participants were presented with speech and sinewave speech analogs in visual, auditory, and audiovisual modalities. Before the fMRI study, they were trained to perceive physically identical sinewave speech analogs as speech (SWS-S) or nonspeech (SWS-N). Comparing audiovisual integration (interactions) of speech, SWS-S, and SWS-N revealed a posterior–anterior processing gradient within the left superior temporal sulcus/gyrus (STS/STG): bilateral posterior STS/STG integrated audiovisual inputs regardless of spectrotemporal structure or speech percept; in left mid-STS, the integration profile was primarily determined by the spectrotemporal structure of the signals; more anterior STS regions discarded spectrotemporal structure and integrated audiovisual signals constrained by stimulus intelligibility and the availability of linguistic representations. In addition to this “ventral” processing stream, a “dorsal” circuitry encompassing posterior STS/STG and left inferior frontal gyrus differentially integrated audiovisual speech and SWS signals. Indeed, dynamic causal modeling and Bayesian model comparison provided strong evidence for a parallel processing structure encompassing a ventral and a dorsal stream, with speech intelligibility training enhancing the connectivity between posterior and anterior STS/STG. In conclusion, audiovisual speech comprehension emerges in an interactive process, with the integration of auditory and visual signals being progressively constrained by stimulus intelligibility along the STS and by spectrotemporal structure in a dorsal fronto-temporal circuitry.
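The final analysis step mentioned here, Bayesian comparison of dynamic causal models, ultimately reduces to comparing model evidences. A toy sketch: given hypothetical log-evidence approximations (e.g. variational free energies) for two candidate connectivity models, compute the log Bayes factor and posterior model probabilities under a flat model prior. The model labels and numbers below are illustrative only.

```python
import numpy as np

# Hypothetical log-evidence (free-energy) approximations for two candidate
# connectivity models, summed over subjects (fixed-effects comparison).
log_evidence = {"parallel ventral+dorsal": -1250.0,
                "serial ventral only": -1262.0}

models = list(log_evidence)
F = np.array([log_evidence[m] for m in models])

# Log Bayes factor of model 1 over model 2.
log_bf = F[0] - F[1]

# Posterior model probabilities under a flat prior over models.
posterior = np.exp(F - F.max())
posterior /= posterior.sum()

print(f"log Bayes factor ({models[0]} vs {models[1]}): {log_bf:.1f}")
for m, p in zip(models, posterior):
    print(f"p({m} | data) = {p:.4f}")
```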