Li Hsieh
Wayne State University
Publication
Featured research published by Li Hsieh.
Journal of Cognitive Neuroscience | 2000
Jack Gandour; Donald Wong; Li Hsieh; Bret Weinzapfel; Diana Van Lancker; Gary D. Hutchins
In studies of pitch processing, a fundamental question is whether shared neural mechanisms at higher cortical levels are engaged for pitch perception of linguistic and nonlinguistic auditory stimuli. Positron emission tomography (PET) was used in a crosslinguistic study to compare pitch processing in native speakers of two tone languages (that is, languages in which variations in pitch patterns are used to distinguish lexical meaning), Chinese and Thai, with that of English, a nontone language. Five subjects from each language group were scanned under three active tasks (tone, pitch, and consonant) that required focused-attention, speeded-response, auditory discrimination judgments, and one passive baseline (silence). Subjects were instructed to judge pitch patterns of Thai lexical tones in the tone condition; pitch patterns of nonspeech stimuli in the pitch condition; and syllable-initial consonants in the consonant condition. Analysis was carried out by paired-image subtraction. When comparing the tone to the pitch task, only the Thai group showed significant activation in the left frontal operculum. Activation of the left frontal operculum in the Thai group suggests that phonological processing of suprasegmental as well as segmental units occurs in the vicinity of Broca's area. Baseline subtractions showed significant activation in the anterior insular region for the English and Chinese groups, but not Thai, providing further support for the possible existence of two parallel, separate pathways projecting from the temporo-parietal to the frontal language area. More generally, these differential patterns of brain activation across language groups and tasks support the view that pitch patterns are processed at higher cortical levels in a top-down manner according to their linguistic function in a particular language.
Brain and Language | 2001
Li Hsieh; Jack Gandour; Donald Wong; Gary D. Hutchins
A crosslinguistic, positron emission tomography (PET) study was conducted to determine the influence of linguistic experience on the perception of segmental (consonants and vowels) and suprasegmental (tones) information. Chinese and English subjects (10 per group) were presented binaurally with lists consisting of five Chinese monosyllabic morphemes (speech) or low-pass-filtered versions of the same stimuli (nonspeech). The first and last items were targeted for comparison; the time interval between target tones was filled with irrelevant distractor tones. A speeded-response, selective attention paradigm required subjects to make discrimination judgments of the target items while ignoring intervening distractor tones. PET scans were acquired for five tasks presented twice: one passive listening to pitch (nonspeech) and four active (speech = consonant, vowel, and tone; nonspeech = pitch). Significant regional changes in blood flow were identified from comparisons of group-averaged images of active tasks relative to passive listening. Chinese subjects show increased activity in left premotor cortex, pars opercularis, and pars triangularis across the four tasks. English subjects, on the other hand, show increased activity in left inferior frontal gyrus regions only in the vowel task and in right inferior frontal gyrus regions in the pitch task. Findings suggest that functional circuits engaged in speech perception depend on linguistic experience. All linguistic information signaled by prosodic cues engages left-hemisphere mechanisms. Storage and executive processes of working memory that are implicated in phonological processing are mediated in discrete regions of the left frontal lobe.
Journal of Child Language | 1999
Li Hsieh; Laurence B. Leonard; L. Lori Swanson
Grammatical inflections such as the English plural noun -s and third person singular verb -s are acquired at different points in time by young children. This finding is typically attributed to factors such as relative semantic salience or the distinction between lexical and functional categories. In this study input frequency, sentence position, and duration were examined as possible contributing factors. In both conversations with and stories aimed at young children, noun plural inflections were found to be more frequent than third singular verb inflections, especially in sentence-final position. Analysis of the speech of four mothers reading stories to their two-year-old children confirmed that duration differences also exist in the input. Because fricatives were lengthened in sentence-final position and plural nouns were much more likely to appear in these positions than were third singular verb forms, plural nouns were significantly longer than third singular inflections on average. The possible implications of these findings for language learnability theories and accounts of grammatical deficits in specific language impairment are discussed.
Brain Research | 2009
Li Hsieh; Richard A. Young; Susan M. Bowyer; John E. Moran; Richard J. Genik; Christopher C. Green; Yow Ren Chiang; Ya Ju Yu; Chia Cheng Liao; Sean Seaman
This neuroimaging study investigated the neural mechanisms of the effect of conversation on visual event detection during a driving-like scenario. The static load paradigm, established as predictive of visual reaction time in on-road driving, measured reaction times to visual events while subjects watched a real-world driving video. Behavioral testing with twenty-eight healthy volunteers determined the reaction time effects from overt and covert conversation tasks in this paradigm. Overt and covert conversation gave rise to longer visual event reaction times in the surrogate driving paradigm compared to driving with no conversation, with negligible effect on miss rates. The covert conversation task was then undertaken by ten right-handed healthy adults in a 4-Tesla fMRI magnet. We identified a frontal-parietal network that maintained event detection performance during the conversation task while watching the driving video. Increased brain activations for conversation vs. no conversation during such simulated driving were found not only in language regions (Broca's and Wernicke's areas), but also in specific regions in bilateral inferior frontal gyrus, bilateral anterior insula and orbitofrontal cortex, bilateral lateral prefrontal cortex (right middle frontal gyrus and left frontal eye field), supplementary motor cortex, anterior and posterior cingulate gyrus, right superior parietal lobe, right intraparietal sulcus, right precuneus, and right cuneus. We propose an Asynchrony Model in which the frontal regions have a top-down influence on the synchrony of neural processes within the superior parietal lobe and extrastriate visual cortex, which in turn modulate the reaction time to visual events during conversation while driving.
SAE Transactions | 2005
Richard A. Young; Li Hsieh; Francis X. Graydon; Richard Genik II; Mark D. Benton; Christopher C. Green; Susan M. Bowyer; John E. Moran; Norman Tepley
How do in-vehicle telematics devices influence mind-on-the-drive? We determined the spatio-temporal properties of the brain mechanisms during a simple visual event detection and motor response in a validated driving-like protocol. We used the safe and non-invasive brain imaging methods of functional magnetic resonance imaging (fMRI) and magnetoencephalography (MEG) to locate the essential activated brain structures and their corresponding temporal dynamics. This study sets the foundation for determining the fundamental brain mechanisms by which secondary tasks (such as cell phone use) may affect the responses to visual events in a laboratory setting. Improved knowledge of the brain mechanisms underlying selective attention in such driving-like situations may give rise to methods for improving mind-on-the-drive.
SAE 2015 World Congress & Exhibition | 2015
Li Hsieh; Sean Seaman; Richard A. Young
As advanced electronic technology continues to be integrated into in-vehicle and portable devices, it is important to understand how drivers handle multitasking in order to maintain safe driving while reducing driver distraction. NHTSA has made driver distraction mitigation a major initiative. Currently, several types of Detection Response Tasks (DRTs) for assessing selective attention by detecting and responding to visual or tactile events while driving have been under development by an ISO WG8 DRT group. Among these DRTs, the tactile version (TDRT) is considered a sensitive surrogate measure for driver attention without visual-manual interference in driving, according to the ISO DRT Draft Standard. In our previous study of cognitive demand, results showed that the TDRT was the only surrogate DRT with acute sensitivity to a cognitive demand increase in an auditory-vocal task (i.e., an n-Back verbal working memory task), while also showing specificity: it responded only to increased cognitive demand, not to the increased physical demand of a visual-manual task (i.e., the Surrogate Reference Task, or SuRT). Similar findings in both simulated and on-road driving confirmed that the TDRT is a sensitive, specific, and reliable surrogate test for measuring the effects of secondary tasks on driver attention. The current paper further investigated eye glance patterns and subjective ratings, and their relationship with DRT metrics, allowing a more comprehensive understanding of the attentional effect of secondary tasks on driver performance.
Automotive User Interfaces and Interactive Vehicular Applications | 2010
Li Hsieh; Sean Seaman; Richard A. Young
Evoked Response Potential (ERP) and functional Magnetic Resonance Imaging (fMRI) recordings in this study shed light on the underlying neural mechanisms for higher cognitive processes and attention allocation during multitasking of cell phone conversations and driving. Behavioral results indicated that hands-free cellular phone conversations caused statistically significant but small reaction time effects for visual event detection during simulated and on-road driving. The validated Static Load driving paradigm gives rise to high correlations of red-light reaction times between laboratory and on-road settings. Both ERP and fMRI findings suggested that cognitive distractions are correlated with increased cognitive load and attentional distribution. The novel contribution of this ERP and fMRI study is that adding an angry emotional valence to the speech increased the alertness level, resulting in reduced driver distraction, likely via increases in right frontoparietal networks and dampened or desynchronized left frontal activity.
Brain and Language | 2003
Jack Gandour; Mario Dzemidzic; Donald Wong; Mark J. Lowe; Yunxia Tong; Li Hsieh; Nakarin Satthamnuwong; Joseph T. Lurito
Brain Research | 2009
Susan M. Bowyer; Li Hsieh; John E. Moran; Richard A. Young; Arun Manoharan; Chia cheng Jason Liao; Kiran Malladi; Ya Ju Yu; Yow Ren Chiang; Norman Tepley
Transportation Research Part F: Traffic Psychology and Behaviour | 2004
Francis X. Graydon; Richard A. Young; Mark D. Benton; Richard J. Genik; Stefan Posse; Li Hsieh; Christopher C. Green