Yuan-Pin Lin
University of California, San Diego
Publications
Featured research published by Yuan-Pin Lin.
IEEE Transactions on Biomedical Engineering | 2010
Yuan-Pin Lin; Chi-Hong Wang; Tzyy-Ping Jung; Tien-Lin Wu; Shyh-Kang Jeng; Jeng-Ren Duann; Jyh-Horng Chen
Ongoing brain activity can be recorded as an electroencephalogram (EEG) to discover links between emotional states and brain activity. This study applied machine-learning algorithms to categorize EEG dynamics according to subjects' self-reported emotional states during music listening. A framework was proposed to optimize EEG-based emotion recognition by systematically 1) seeking emotion-specific EEG features and 2) exploring the efficacy of the classifiers. A support vector machine was employed to classify four emotional states (joy, anger, sadness, and pleasure) and obtained an average classification accuracy of 82.29% ± 3.06% across 26 subjects. Further, this study identified 30 subject-independent features that were most relevant to emotional processing across subjects and explored the feasibility of using fewer electrodes to characterize EEG dynamics during music listening. The identified features were derived primarily from electrodes near the frontal and parietal lobes, consistent with many findings in the literature. This study might lead to a practical system for noninvasive assessment of emotional states in everyday or clinical applications.
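To give a concrete feel for the classification step described above, the following minimal Python sketch trains a support vector machine on placeholder band-power features. The feature dimensions, labels, and SVM settings are illustrative assumptions, not the paper's actual pipeline.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_trials, n_features = 130, 60               # placeholder: trials x spectral features
X = rng.normal(size=(n_trials, n_features))  # stand-in for EEG band-power features
y = rng.integers(0, 4, size=n_trials)        # 4 classes: joy, anger, sadness, pleasure

# RBF-kernel SVM with feature standardization, evaluated by 5-fold cross-validation
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
print(f"mean CV accuracy: {cross_val_score(clf, X, y, cv=5).mean():.2%}")
```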
Multimedia Signal Processing | 2008
Yuan-Pin Lin; Chi-Hong Wang; Tien-Lin Wu; Shyh-Kang Jeng; Jyh-Horng Chen
An approach to recognize emotion responses during multimedia presentation using electroencephalogram (EEG) signals is proposed. The association between EEG signals and music-induced emotion responses was investigated with respect to three factors: 1) the types of features, 2) the temporal resolution of the features, and 3) the components of the EEG. The results showed that the spectral power asymmetry index of the EEG was a sensitive marker of brain activation related to emotion responses, especially for the low-frequency delta, theta, and alpha components. In addition, a maximum classification accuracy of about 92.73% was obtained using a support vector machine (SVM) with 60 features derived from all EEG components at a feature temporal resolution of one second. These findings provide key clues for developing EEG-inspired multimedia applications in which multimedia content can be offered interactively according to users' immediate feedback.
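A spectral power asymmetry index of the kind mentioned above is commonly computed from the band power of a symmetric electrode pair. The sketch below shows one common normalized-difference form; the exact formula used in the paper, as well as the electrode pair, sampling rate, and toy signals here, are assumptions for illustration.

```python
import numpy as np
from scipy.signal import welch

fs = 256                                       # assumed sampling rate (Hz)
rng = np.random.default_rng(1)
t = np.arange(fs * 4) / fs                     # 4-s toy window
left = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.normal(size=t.size)        # e.g., F3
right = 0.6 * np.sin(2 * np.pi * 10 * t) + 0.5 * rng.normal(size=t.size) # e.g., F4

def band_power(x, lo, hi):
    """Integrate the Welch power spectral density over [lo, hi] Hz."""
    f, pxx = welch(x, fs=fs, nperseg=fs)
    mask = (f >= lo) & (f <= hi)
    return np.trapz(pxx[mask], f[mask])

p_l, p_r = band_power(left, 8, 13), band_power(right, 8, 13)  # alpha-band power
asym = (p_l - p_r) / (p_l + p_r)               # normalized asymmetry for this pair
print(f"alpha asymmetry index: {asym:.3f}")
```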
IEEE Region 10 Conference | 2007
Yuan-Pin Lin; Chi-Hong Wang; Tien-Lin Wu; Shyh-Kang Jeng; Jyh-Horng Chen
In this study, an electroencephalography (EEG)-based emotion classification algorithm was investigated. Several excerpts of emotional music were used as stimuli to elicit emotion-specific EEG signals. Hemispheric asymmetry indices of alpha power were extracted as feature vectors to train a multilayer perceptron (MLP) classifier on four target emotion categories: joy, anger, sadness, and pleasure. The results demonstrated an average classification accuracy of 69.69% across five subjects for the four emotion categories, well above the chance level of 25%.
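As a hedged sketch of the classification step, the snippet below trains a small multilayer perceptron on placeholder alpha-asymmetry features for four emotion classes. The network architecture, feature count, and data are illustrative stand-ins, not the study's configuration.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(2)
X = rng.normal(size=(120, 12))    # placeholder alpha-asymmetry features per trial
y = rng.integers(0, 4, size=120)  # joy / anger / sadness / pleasure labels

# Small MLP; hidden-layer size and iteration budget are illustrative choices
mlp = make_pipeline(StandardScaler(),
                    MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000,
                                  random_state=0))
print(cross_val_score(mlp, X, y, cv=5).mean())
```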
Neuroreport | 2010
Yuan-Pin Lin; Jeng-Ren Duann; Jyh-Horng Chen; Tzyy-Ping Jung
This study explores the electroencephalographic (EEG) correlates of emotional experience during music listening. Independent component analysis and analysis of variance were used to separate statistically independent spectral changes of the EEG in response to music-induced emotional processes. An independent brain process with an equivalent dipole located in the fronto-central region exhibited distinct δ-band and θ-band power changes associated with self-reported emotional states. Specifically, emotional valence was associated with δ-power decreases and θ-power increases in the fronto-central area, whereas emotional arousal was accompanied by increases in both δ and θ power. The resultant emotion-related component activations, which were less interfered with by activities from other brain processes, complement previous EEG studies of emotion perception in music.
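To illustrate the decomposition step in spirit, the sketch below separates a toy multichannel recording into independent components and computes per-component δ- and θ-band power. The study itself used ICA within a full EEG analysis pipeline; FastICA, the channel count, and the random data here are stand-ins.

```python
import numpy as np
from scipy.signal import welch
from sklearn.decomposition import FastICA

rng = np.random.default_rng(3)
fs, n_ch = 256, 30
eeg = rng.normal(size=(fs * 60, n_ch))   # toy 60-s, 30-channel recording

ica = FastICA(n_components=15, random_state=0)
sources = ica.fit_transform(eeg)         # component activations (samples x comps)

# Per-component delta (1-4 Hz) and theta (4-8 Hz) power, the bands tracked above
f, pxx = welch(sources, fs=fs, nperseg=fs, axis=0)
delta = pxx[(f >= 1) & (f < 4)].sum(axis=0)
theta = pxx[(f >= 4) & (f < 8)].sum(axis=0)
print(delta.shape, theta.shape)          # one value per component
```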
Frontiers in Neuroscience | 2014
Yuan-Pin Lin; Yi-Hsuan Yang; Tzyy-Ping Jung
Electroencephalography (EEG)-based emotion classification during music listening has gained increasing attention owing to promising applications such as musical affective brain-computer interfaces (ABCIs), neuromarketing, music therapy, and implicit multimedia tagging and triggering. However, music is an ecologically valid yet complex stimulus that conveys emotion to listeners through compositions of musical elements, and distinguishing emotions from EEG signals alone remains challenging. This study assessed the applicability of a multimodal approach that leverages EEG dynamics and the acoustic characteristics of musical content to classify emotional valence and arousal. To this end, machine-learning methods were adopted to systematically elucidate the roles of the EEG and music modalities in emotion modeling. The empirical results suggested that when whole-head EEG signals were available, including the musical content did not improve classification performance: the accuracy of 74–76% obtained using the EEG modality alone was statistically comparable to that of the multimodal approach. However, when EEG dynamics were available from only a small set of electrodes (the likely case in real-life applications), the music modality played a complementary role, augmenting the EEG results from around 61% to 67% in valence classification and from around 58% to 67% in arousal classification. Musical timbre appeared to replace less-discriminative EEG features, improving both valence and arousal classification, whereas musical loudness contributed specifically to arousal classification. The present study not only provides principles for constructing an EEG-based multimodal approach, but also reveals fundamental insights into the interplay of brain activity and musical content in emotion modeling.
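One simple way to combine the two modalities, shown in the sketch below, is early (feature-level) fusion: concatenating EEG features with acoustic descriptors before classification. The abstract does not specify the paper's exact fusion scheme, so this form, along with the feature dimensions and toy data, is an assumption for illustration.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(4)
n_trials = 100
eeg_feats = rng.normal(size=(n_trials, 20))    # e.g., band power from few electrodes
music_feats = rng.normal(size=(n_trials, 10))  # e.g., timbre/loudness descriptors
y = rng.integers(0, 2, size=n_trials)          # binary valence labels

fused = np.hstack([eeg_feats, music_feats])    # simple feature-level fusion
clf = make_pipeline(StandardScaler(), SVC())
print("EEG only :", cross_val_score(clf, eeg_feats, y, cv=5).mean())
print("EEG+music:", cross_val_score(clf, fused, y, cv=5).mean())
```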
International Conference of the IEEE Engineering in Medicine and Biology Society | 2013
Yuan-Pin Lin; Yijun Wang; Tzyy-Ping Jung
Recently, translating steady-state visual-evoked potential (SSVEP)-based brain-computer interfaces (BCIs) from laboratory settings to real-life applications has gained increasing attention. This study systematically tested the signal quality of SSVEPs acquired by a mobile electroencephalogram (EEG) system, featuring dry electrodes and wireless telemetry, under challenging recording conditions (e.g., walking). The empirical results demonstrated the robustness of canonical correlation analysis (CCA) to movement artifacts in SSVEP detection. This demonstration considerably improves the practicality of mobile and wireless BCI systems for users actively behaving in, and interacting with, their environments.
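As a concrete illustration of standard CCA-based SSVEP detection (not the study's exact code), the sketch below correlates a toy multichannel EEG segment with sine/cosine references at each candidate flicker frequency and picks the best match. The sampling rate, segment duration, channel count, and harmonic count are assumed values.

```python
import numpy as np
from sklearn.cross_decomposition import CCA

fs, dur = 250, 2.0
t = np.arange(int(fs * dur)) / fs

def references(freq, n_harmonics=2):
    """Sine/cosine reference signals at freq and its harmonics."""
    refs = []
    for h in range(1, n_harmonics + 1):
        refs += [np.sin(2 * np.pi * h * freq * t), np.cos(2 * np.pi * h * freq * t)]
    return np.column_stack(refs)

def cca_score(eeg, refs):
    """Largest canonical correlation between the EEG segment and references."""
    u, v = CCA(n_components=1).fit_transform(eeg, refs)
    return np.corrcoef(u[:, 0], v[:, 0])[0, 1]

rng = np.random.default_rng(5)
eeg = np.column_stack([np.sin(2 * np.pi * 12 * t)] * 4) + rng.normal(size=(t.size, 4))
scores = {f: cca_score(eeg, references(f)) for f in (9, 10, 11, 12, 13)}
print(max(scores, key=scores.get))       # reports 12 for this toy signal
```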
Frontiers in Human Neuroscience | 2014
Yuan-Pin Lin; Yijun Wang; Chun-Shu Wei; Tzyy-Ping Jung
Recent advances in mobile electroencephalogram (EEG) systems, featuring non-prep dry electrodes and wireless telemetry, have enabled and promoted applications of mobile brain-computer interfaces (BCIs) in daily life. Because the brain may behave differently when people are actively situated in ecologically valid environments rather than in highly controlled laboratory settings, it remains unclear how well current laboratory-oriented BCI demonstrations can be translated into operational BCIs for users making naturalistic movements. Understanding the inherent links between natural human behaviors and brain activities is the key to ensuring the applicability and stability of mobile BCIs. This study assessed the quality of steady-state visual-evoked potentials (SSVEPs), one of the most promising paradigms for functioning BCI systems, recorded by a mobile EEG system under challenging recording conditions such as walking. To systematically explore the effects of walking locomotion on SSVEPs, subjects were instructed to stand or walk on a treadmill running at speeds of 1, 2, and 3 miles per hour (MPH) while perceiving visual flickers (11 and 12 Hz). The empirical results showed that SSVEP amplitude tended to deteriorate when subjects switched from standing to walking. This SSVEP suppression could be attributed to the walking locomotion and led to distinctly deteriorated SSVEP detectability from standing (84.87 ± 13.55%) to walking (1 MPH: 83.03 ± 13.24%, 2 MPH: 79.47 ± 13.53%, and 3 MPH: 75.26 ± 17.89%). These findings not only demonstrate the applicability and limitations of SSVEPs recorded from freely behaving humans in realistic environments, but also provide useful methods and techniques for translating BCI technology from laboratory demonstrations to practical applications.
Journal of Neuroengineering and Rehabilitation | 2014
Yuan-Pin Lin; Jeng-Ren Duann; Wenfeng Feng; Jyh-Horng Chen; Tzyy-Ping Jung
Background: Music conveys emotion by manipulating musical structures, particularly musical mode and tempo. The neural correlates of musical mode and tempo perception revealed by electroencephalography (EEG) have not been adequately addressed in the literature.
Methods: This study used independent component analysis (ICA) to systematically assess the spatio-spectral EEG dynamics associated with changes in musical mode and tempo.
Results: Compared with minor-mode music, major-mode music augmented delta-band activity over the right sensorimotor cortex, suppressed theta activity over the superior parietal cortex, and moderately suppressed beta activity over the medial frontal cortex, whereas fast-tempo music engaged significant alpha suppression over the right sensorimotor cortex.
Conclusion: The resultant EEG brain sources were comparable with those reported in previous studies using other neuroimaging modalities, such as functional magnetic resonance imaging (fMRI) and positron emission tomography (PET). In conjunction with advances in dry and mobile EEG technology, these results might facilitate the translation of laboratory-oriented research into real-life applications for music therapy, training, and entertainment in naturalistic environments.
International Conference of the IEEE Engineering in Medicine and Biology Society | 2009
Yuan-Pin Lin; Tzyy-Ping Jung; Jyh-Horng Chen
This study explores the electroencephalographic (EEG) correlates of emotions during music listening. Principal component analysis (PCA) is used to correlate EEG features with complex music appreciation. This study also applies machine-learning algorithms to demonstrate the feasibility of classifying EEG dynamics into four subjectively reported emotional states. The high classification accuracy (81.58 ± 3.74%) demonstrates the feasibility of using EEG features to assess the emotional states of human subjects. Further, the spatial and spectral patterns of the EEG most relevant to emotions appear reproducible across subjects.
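For illustration only, the snippet below projects a placeholder matrix of high-dimensional EEG spectral features onto a few principal components, the kind of reduced representation that could then be related to emotion ratings. The feature matrix and its dimensions are assumptions, not the study's data.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(6)
X = rng.normal(size=(200, 160))   # placeholder: trials x (electrode x band) features
pca = PCA(n_components=10)
Z = pca.fit_transform(X)          # low-dimensional component scores per trial
print(pca.explained_variance_ratio_.round(3))
```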
International IEEE/EMBS Conference on Neural Engineering | 2013
Chun-Shu Wei; Yuan-Pin Lin; Yijun Wang; Yu-Te Wang; Tzyy-Ping Jung
Steady-state visual evoked potentials (SSVEPs) are electroencephalogram (EEG) activity elicited by periodic visual flickers. Frequency-coded SSVEPs have been commonly adopted for functioning brain-computer interfaces (BCIs). To date, canonical correlation analysis (CCA), a multivariate statistical method, is considered the state-of-the-art approach for robustly detecting SSVEPs. However, the spectra of EEG signals often follow a 1/f power-law distribution across frequencies, which inherently limits CCA's efficiency in discriminating between high-frequency SSVEPs and low-frequency background EEG activity. This study proposes a new SSVEP detection method, differential canonical correlation analysis (dCCA), which combines CCA with a notch-filtering procedure to alleviate this frequency-dependent bias. The proposed dCCA approach significantly outperformed the standard CCA approach by around 6% in classifying SSVEPs at five frequencies (9–13 Hz). This study could promote the development of high-performance SSVEP-based BCI systems.
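The abstract describes dCCA only at a high level, so the sketch below encodes one plausible reading rather than the authors' implementation: each candidate frequency is scored by the raw CCA correlation minus the correlation after that frequency has been notch-filtered out (an estimate of the 1/f background at that frequency). The notch parameters, reference design, and toy data are all assumptions.

```python
import numpy as np
from scipy.signal import iirnotch, filtfilt
from sklearn.cross_decomposition import CCA

fs = 250
t = np.arange(fs * 2) / fs

def max_corr(eeg, freq):
    """Largest canonical correlation with sine/cosine references at freq."""
    refs = np.column_stack([np.sin(2 * np.pi * freq * t),
                            np.cos(2 * np.pi * freq * t)])
    u, v = CCA(n_components=1).fit_transform(eeg, refs)
    return np.corrcoef(u[:, 0], v[:, 0])[0, 1]

def dcca_score(eeg, freq):
    """Raw CCA score minus the score after notching freq out (background)."""
    b, a = iirnotch(w0=freq, Q=30, fs=fs)
    return max_corr(eeg, freq) - max_corr(filtfilt(b, a, eeg, axis=0), freq)

rng = np.random.default_rng(7)
eeg = np.column_stack([np.sin(2 * np.pi * 11 * t)] * 4) + rng.normal(size=(t.size, 4))
print(max((9, 10, 11, 12, 13), key=lambda f: dcca_score(eeg, f)))
```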