Publications


Featured research published by Chien-Te Wu.


Neuroscience Letters | 2002

Gender differences in neural correlates of recognition of happy and sad faces in humans assessed by functional magnetic resonance imaging

Tatia M. C. Lee; Ho Ling Liu; Rumjahn Hoosain; Wan Ting Liao; Chien-Te Wu; Kenneth S.L. Yuen; Chetwyn C.H. Chan; Peter T. Fox; Jia Hong Gao

To examine the effect of gender on the volume and pattern of brain activation during the viewing of alternating sets of faces depicting happy or sad expressions, 24 volunteers, 12 men and 12 women, participated in this functional magnetic resonance imaging study. The experimental stimuli were 12 photographs of Japanese adults selected from Matsumoto and Ekman's Pictures of Facial Affect. Four of these pictures depicted happy facial emotions, four sad, and four neutral. Half of the photographs were of men and the other half were of women. Consistent with previous findings, distinct sets of neural correlates for processing happy and sad facial emotions were noted. Furthermore, it was observed that male and female subjects used a rather different set of neural correlates when processing faces showing either happy or sad expressions. This was more noticeable when they were processing faces portraying sad emotions than happy emotions. Our findings provide some preliminary support for the speculation that the two genders may be associated with different areas of brain activation during emotion recognition of happy or sad facial expressions. This suggests that the generalizability of findings in regard to neural correlates of facial emotion recognition should consider the gender of the subjects.


Brain Research | 2007

The neural circuitry underlying the executive control of auditory spatial attention

Chien-Te Wu; Daniel H. Weissman; Kenneth C. Roberts; Marty G. Woldorff

Although a fronto-parietal network has consistently been implicated in the control of visual spatial attention, the network that guides spatial attention in the auditory domain is not yet clearly understood. To investigate this issue, we measured brain activity using functional magnetic resonance imaging while participants performed a cued auditory spatial attention task. We found that cued orienting of auditory spatial attention activated a medial-superior distributed fronto-parietal network. In addition, we found cue-triggered increases of activity in the auditory sensory cortex prior to the occurrence of an auditory target, suggesting that auditory attentional control operates in part by biasing processing in sensory cortex in favor of expected target stimuli. Finally, an exploratory cross-study comparison further indicated several common frontal and parietal regions as being involved in the control of both visual and auditory spatial attention. Thus, the present findings not only reveal the network of brain areas underlying endogenous spatial orienting in the auditory modality, but also suggest that the control of spatial attention in different sensory modalities is enabled in part by some common, supramodal neural mechanisms.


Current Biology | 2009

The Temporal Interplay between Conscious and Unconscious Perceptual Streams

Chien-Te Wu; Niko Busch; Michèle Fabre-Thorpe; Rufin VanRullen

An optimal correspondence of temporal information between the physical world and our perceptual world is important for survival. In the current study, we demonstrate a novel temporal illusion in which the cause of a perceptual event is perceived after the event itself. We used a paradigm referred to as motion-induced blindness (MIB), in which a static visual target presented on a constantly rotating background disappears and reappears from awareness periodically, with the dynamic characteristics of bistable perception. A sudden stimulus onset (e.g., a flash) presented during a period of perceptual suppression (i.e., during MIB) is known to trigger the almost instantaneous reappearance of the suppressed target. Surprisingly, however, we report here that although the sudden flash is the cause of the static target's reappearance (the corresponding effect), it is systematically perceived as occurring after this reappearance. Further investigation revealed that this illusory temporal reversal is caused by an approximately 100 ms advantage for the unconscious representation of the perceptually suppressed target to access consciousness, as compared to the newly presented flash. This new temporal illusion therefore reveals the normally hidden delays in bringing new visual events to awareness.


Frontiers in Human Neuroscience | 2008

Face processing is gated by visual spatial attention

Roy E. Crist; Chien-Te Wu; Chris Karp; Marty G. Woldorff

Human perception of faces is widely believed to rely on automatic processing by a domain-specific, modular component of the visual system. Scalp-recorded event-related potential (ERP) recordings indicate that faces receive special stimulus processing at around 170 ms poststimulus onset, in that faces evoke an enhanced occipital negative wave, known as the N170, relative to the activity elicited by other visual objects. As predicted by modular accounts of face processing, this early face-specific N170 enhancement has been reported to be largely immune to the influence of endogenous processes such as task strategy or attention. However, most studies examining the influence of attention on face processing have focused on non-spatial attention, such as object-based attention, which tends to have longer-latency effects. In contrast, numerous studies have demonstrated that visual spatial attention can modulate the processing of visual stimuli as early as 80 ms poststimulus, substantially earlier than the N170. These temporal characteristics raise the question of whether this initial face-specific processing is immune to the influence of spatial attention. This question was addressed in a dual-visual-stream ERP study in which the influence of spatial attention on the face-specific N170 could be directly examined. As expected, early visual sensory responses to all stimuli presented in an attended location were larger than responses evoked by those same stimuli when presented in an unattended location. More importantly, a significant face-specific N170 effect was elicited by faces that appeared in an attended location, but not in an unattended one. In summary, early face-specific processing is not automatic, but rather, like that of other objects, strongly depends on endogenous factors such as the allocation of spatial attention.
Moreover, these findings underscore the extensive influence that top-down attention exercises over the processing of visual stimuli, including those of high natural salience.


Journal of Cognitive Neuroscience | 2015

At 120 msec you can spot the animal but you don't yet know it's a dog

Chien-Te Wu; Sébastien M. Crouzet; Simon J. Thorpe; Michèle Fabre-Thorpe

Earlier studies suggested that the visual system processes information at the basic level (e.g., dog) faster than at the subordinate (e.g., Dalmatian) or superordinate (e.g., animals) levels. However, the advantage of the basic category over the superordinate category in object recognition has been challenged recently, and the hierarchical nature of visual categorization is now a matter of debate. To address this issue, we used a forced-choice saccadic task in which a target and a distractor image were displayed simultaneously on each trial and participants had to saccade as fast as possible toward the image containing animal targets based on different categorization levels. This protocol enabled us to investigate the first 100–120 msec of visual object categorization, a previously unexplored temporal window. The first result is a surprising stability of the saccade latency (median RT ∼155 msec) regardless of the animal target category and the dissimilarity of target and distractor image sets. Accuracy was high (around 80% correct) for categorization tasks that can be solved at the superordinate level but dropped to almost chance levels for basic level categorization. At the basic level, the highest accuracy (62%) was obtained when distractors were restricted to another dissimilar basic category. Computational simulations based on the saliency map model showed that the results could not be predicted by pure bottom–up saliency differences between images. Our results support a model of visual recognition in which the visual system can rapidly access relatively coarse visual representations that provide information at the superordinate level of an object, but where additional visual analysis is required to allow more detailed categorization at the basic level.


Sensors | 2014

Emotion Recognition from Single-Trial EEG Based on Kernel Fisher’s Emotion Pattern and Imbalanced Quasiconformal Kernel Support Vector Machine

Yi-Hung Liu; Chien-Te Wu; Wei-Teng Cheng; Yu-Tsung Hsiao; Po-Ming Chen; Jyh-Tong Teng

Electroencephalogram-based emotion recognition (EEG-ER) has received increasing attention in the fields of health care, affective computing, and brain-computer interface (BCI). However, satisfactory ER performance within a bi-dimensional and non-discrete emotional space using single-trial EEG data remains a challenging task. To address this issue, we propose a three-layer scheme for single-trial EEG-ER. In the first layer, a set of spectral powers of different EEG frequency bands are extracted from multi-channel single-trial EEG signals. In the second layer, the kernel Fisher's discriminant analysis method is applied to further extract features with better discrimination ability from the EEG spectral powers. The feature vector produced by layer 2 is called a kernel Fisher's emotion pattern (KFEP), and is sent into layer 3 for further classification where the proposed imbalanced quasiconformal kernel support vector machine (IQK-SVM) serves as the emotion classifier. The outputs of the three-layer EEG-ER system include labels of emotional valence and arousal. Furthermore, to collect effective training and testing datasets for the current EEG-ER system, we also use an emotion-induction paradigm in which a set of pictures selected from the International Affective Picture System (IAPS) are employed as emotion induction stimuli. The performance of the proposed three-layer solution is compared with that of other EEG spectral power-based features and emotion classifiers. Results on 10 healthy participants indicate that the proposed KFEP feature performs better than other spectral power features, and IQK-SVM outperforms traditional SVM in terms of the EEG-ER accuracy. Our findings also show that the proposed EEG-ER scheme achieves the highest classification accuracies of valence (82.68%) and arousal (84.79%) among all testing methods.
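The first layer of such a scheme, extracting per-channel spectral band powers from a single EEG trial, can be sketched as below. This is a minimal illustration, not the authors' implementation: the band edges, sampling rate, and channel count are assumptions, and the kernel Fisher's discriminant (layer 2) and IQK-SVM (layer 3) stages are omitted.

```python
import numpy as np

# Illustrative EEG frequency bands (edges in Hz are assumptions).
BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30), "gamma": (30, 45)}

def band_powers(trial, fs):
    """trial: (n_channels, n_samples) array of one EEG trial.
    Returns a (n_channels * n_bands,) spectral-power feature vector."""
    n = trial.shape[1]
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    psd = np.abs(np.fft.rfft(trial, axis=1)) ** 2 / n  # periodogram per channel
    feats = []
    for lo, hi in BANDS.values():
        mask = (freqs >= lo) & (freqs < hi)
        feats.append(psd[:, mask].sum(axis=1))  # power in band, per channel
    return np.concatenate(feats)

# Example: a 14-channel, 2-second trial at 128 Hz of synthetic noise.
rng = np.random.default_rng(0)
x = rng.standard_normal((14, 256))
v = band_powers(x, fs=128)
print(v.shape)  # (56,) = 14 channels x 4 bands
```

The resulting vector would feed the second layer; any discriminant or classifier stage would be trained on such vectors across trials.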


Psychological Medicine | 2016

Deficits in executive functions among youths with autism spectrum disorders: an age-stratified analysis

Shiow-Yi Chen; Yi-Ling Chien; Chien-Te Wu; Chi-Yung Shang; Yu-Yu Wu; Susan Shur-Fen Gau

Background: Impaired executive function (EF) is suggested to be one of the core features of autism spectrum disorders (ASD); however, little is known about whether the extent to which EF is worse in ASD than in typically developing (TD) controls is age-dependent. We used age-stratified analyses to address this issue.

Method: We assessed 111 youths with ASD (aged 12.5 ± 2.8 years, 94.6% male) and 114 age- and sex-matched TD controls with the Digit Span and four EF tasks of the Cambridge Neuropsychological Test Automated Battery (CANTAB): Spatial Span (SSP), Spatial Working Memory (SWM), Stockings of Cambridge (SOC), and the Intradimensional/Extradimensional Shift Test (I/ED).

Results: Compared to TD controls, youths with ASD performed more poorly on the Digit Span, SWM, SOC, and I/ED tasks. Performance on all tasks improved with age in both groups. Because of significant age × group interactions in visuospatial planning (SOC) and set-shifting (I/ED), age-stratified analyses were conducted; these showed that poorer performance on these two tasks in ASD relative to TD controls was found only in the child group (aged 8–12 years), not the adolescent group (aged 13–18 years). By contrast, youths with ASD had impaired working memory regardless of age. The increase in the group difference in visuospatial planning (SOC) with increasing task demands differed between the two age groups, but there was no age-moderating effect on spatial working memory.

Conclusions: Our findings support deficits in visuospatial working memory and planning in youths with ASD; however, worse performance in set-shifting may be demonstrated only in children with ASD.


Journal of Vision | 2011

Sandwich masking eliminates both visual awareness of faces and face-specific brain activity through a feedforward mechanism

Joseph A. Harris; Chien-Te Wu; Marty G. Woldorff

It is generally agreed that considerable amounts of low-level sensory processing of visual stimuli can occur without conscious awareness. On the other hand, the degree of higher level visual processing that occurs in the absence of awareness is as yet unclear. Here, event-related potential (ERP) measures of brain activity were recorded during a sandwich-masking paradigm, a commonly used approach for attenuating conscious awareness of visual stimulus content. In particular, the present study used a combination of ERP activation contrasts to track both early sensory-processing ERP components and face-specific N170 ERP activations, in trials with versus without awareness. The electrophysiological measures revealed that the sandwich masking abolished the early face-specific N170 neural response (peaking at ~170 ms post-stimulus), an effect that paralleled the abolition of awareness of face versus non-face image content. Furthermore, however, the masking appeared to strongly attenuate earlier feedforward visual sensory-processing signals. This early attenuation presumably resulted in insufficient information being fed into the higher level visual system pathways specific to object category processing, thus leading to unawareness of the visual object content. These results support a coupling of visual awareness and neural indices of face processing, while also demonstrating an early low-level mechanism of interference in sandwich masking.


International Conference of the IEEE Engineering in Medicine and Biology Society | 2013

Single-trial EEG-based emotion recognition using kernel Eigen-emotion pattern and adaptive support vector machine

Yi-Hung Liu; Chien-Te Wu; Yung-Hwa Kao; Ya-Ting Chen

Single-trial electroencephalography (EEG)-based emotion recognition enables us to perform fast and direct assessments of human emotional states. However, previous works suggest that a great improvement on the classification accuracy of valence and arousal levels is still needed. To address this, we propose a novel emotional EEG feature extraction method: kernel Eigen-emotion pattern (KEEP). An adaptive SVM is also proposed to deal with the problem of learning from imbalanced emotional EEG data sets. In this study, a set of pictures from IAPS are used for emotion induction. Results based on seven participants show that KEEP gives much better classification results than the widely-used EEG frequency band power features. Also, the adaptive SVM greatly improves classification performance of commonly-adopted SVM classifier. Combined use of KEEP and adaptive SVM can achieve high average valence and arousal classification rates of 73.42% and 73.57%. The highest classification rates for valence and arousal are 80% and 79%, respectively. The results are very promising.
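The eigen-decomposition idea behind such a kernel "Eigen-emotion" feature can be illustrated with a plain kernel-PCA sketch over trial feature vectors. This is a generic sketch under assumed parameters (RBF kernel, gamma, component count), not the KEEP method itself, and the adaptive SVM stage is not shown.

```python
import numpy as np

def rbf_kernel(X, gamma=0.1):
    """Pairwise RBF (Gaussian) kernel matrix for rows of X."""
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

def kernel_pca(X, n_components=3, gamma=0.1):
    """Project the n training vectors in X onto the leading eigenvectors
    of the centered kernel matrix (standard kernel PCA)."""
    n = X.shape[0]
    K = rbf_kernel(X, gamma)
    one = np.full((n, n), 1.0 / n)
    Kc = K - one @ K - K @ one + one @ K @ one  # center in feature space
    evals, evecs = np.linalg.eigh(Kc)           # ascending eigenvalues
    idx = np.argsort(evals)[::-1][:n_components]
    # Normalize eigenvectors so projected features have unit-eigenvalue scale.
    alphas = evecs[:, idx] / np.sqrt(np.maximum(evals[idx], 1e-12))
    return Kc @ alphas  # (n, n_components) projected training features

# Example: 20 hypothetical trial feature vectors of dimension 8.
X = np.random.default_rng(0).standard_normal((20, 8))
Z = kernel_pca(X)
print(Z.shape)  # (20, 3)
```

In a KEEP-like pipeline, the projected features would then be passed to a classifier trained with class-imbalance handling, per the abstract.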


Sensors | 2017

Major Depression Detection from EEG Signals Using Kernel Eigen-Filter-Bank Common Spatial Patterns

Shih-Cheng Liao; Chien-Te Wu; Hao-Chuan Huang; Wei-Teng Cheng; Yi-Hung Liu

Major depressive disorder (MDD) has become a leading contributor to the global burden of disease; however, there are currently no reliable biological markers or physiological measurements for efficiently and effectively dissecting the heterogeneity of MDD. Here we propose a novel method based on scalp electroencephalography (EEG) signals and a robust spectral-spatial EEG feature extractor called kernel eigen-filter-bank common spatial pattern (KEFB-CSP). The KEFB-CSP first filters the multi-channel raw EEG signals into a set of frequency sub-bands covering the range from theta to gamma bands, then spatially transforms the EEG signals of each sub-band from the original sensor space to a new space where the new signals (i.e., CSPs) are optimal for the classification between MDD and healthy controls, and finally applies the kernel principal component analysis (kernel PCA) to transform the vector containing the CSPs from all frequency sub-bands to a lower-dimensional feature vector called KEFB-CSP. Twelve patients with MDD and twelve healthy controls participated in this study, and from each participant we collected 54 resting-state EEGs of 6 s length (5 min and 24 s in total). Our results show that the proposed KEFB-CSP outperforms other EEG features including the powers of EEG frequency bands, and fractal dimension, which had been widely applied in previous EEG-based depression detection studies. The results also reveal that the 8 electrodes from the temporal areas gave higher accuracies than other scalp areas. The KEFB-CSP was able to achieve an average EEG classification accuracy of 81.23% in single-trial analysis when only the 8-electrode EEGs of the temporal area and a support vector machine (SVM) classifier were used. We also designed a voting-based leave-one-participant-out procedure to test the participant-independent individual classification accuracy. 
The voting-based results show that a mean classification accuracy of about 80% can be achieved by the KEFB-CSP feature and the SVM classifier with only several trials, and this level of accuracy seems to become stable as more trials (i.e., ≥7 trials) are used. These findings therefore suggest that the proposed method has great potential for developing an efficient (requiring only a few 6-s EEG signals from the 8 electrodes over the temporal areas) and effective (~80% classification accuracy) EEG-based brain-computer interface (BCI) system which may, in the future, help psychiatrists provide individualized and effective treatments for MDD patients.
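The common-spatial-pattern building block of KEFB-CSP can be sketched for a single frequency sub-band as follows, using the standard whitening-plus-diagonalization formulation. The filter-bank sweep over theta-to-gamma sub-bands and the kernel-PCA stage are omitted, and all dimensions, trial counts, and the synthetic data are illustrative assumptions.

```python
import numpy as np

def csp_filters(trials_a, trials_b, n_pairs=2):
    """trials_*: lists of (n_channels, n_samples) arrays for two classes.
    Returns (2*n_pairs, n_channels) spatial filters that maximize the
    variance of one class while minimizing the other's."""
    def avg_cov(trials):
        covs = [x @ x.T / np.trace(x @ x.T) for x in trials]  # trace-normalized
        return np.mean(covs, axis=0)
    ca, cb = avg_cov(trials_a), avg_cov(trials_b)
    # Whiten the composite covariance, then diagonalize class A in that space.
    evals, evecs = np.linalg.eigh(ca + cb)
    whiten = evecs @ np.diag(evals ** -0.5) @ evecs.T
    d, u = np.linalg.eigh(whiten @ ca @ whiten.T)
    u = u[:, np.argsort(d)]
    # Keep filters from both ends of the eigenvalue spectrum.
    picks = list(range(n_pairs)) + list(range(-n_pairs, 0))
    return u[:, picks].T @ whiten

def csp_features(trial, filters):
    """Log-normalized variances of the spatially filtered trial."""
    var = (filters @ trial).var(axis=1)
    return np.log(var / var.sum())

# Synthetic example: class A has extra power on channel 0, class B on channel 1.
rng = np.random.default_rng(1)
boost_a = np.diag([3.0, 1, 1, 1, 1, 1, 1, 1])
boost_b = np.diag([1.0, 3, 1, 1, 1, 1, 1, 1])
trials_a = [boost_a @ rng.standard_normal((8, 200)) for _ in range(10)]
trials_b = [boost_b @ rng.standard_normal((8, 200)) for _ in range(10)]
W = csp_filters(trials_a, trials_b)
print(W.shape)  # (4, 8)
```

In a KEFB-CSP-style pipeline, such features would be computed per sub-band, concatenated, and reduced with kernel PCA before classification.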

Collaboration


Top co-authors of Chien-Te Wu:

Rufin VanRullen, Centre national de la recherche scientifique
Ching-Lin Hsieh, National Taiwan University
Yi-Hung Liu, Chung Yuan Christian University
Gong-Hong Lin, National Taiwan University
Hsin-Mei Sun, National Taiwan University
Wei-Teng Cheng, Chung Yuan Christian University
Ya-Ting Chen, National Taiwan University