Publication


Featured research published by Ying Zeng.


BioMed Research International | 2015

Automatic Artifact Removal from Electroencephalogram Data Based on A Priori Artifact Information.

Chi Zhang; Li Tong; Ying Zeng; Jingfang Jiang; Haibing Bu; Bin Yan; Jianxin Li

Electroencephalogram (EEG) is susceptible to various nonneural physiological artifacts. Automatic artifact removal from EEG data remains a key challenge for extracting relevant information from brain activities. To adapt to variable subjects and EEG acquisition environments, this paper presents an automatic online artifact removal method based on a priori artifact information. The combination of discrete wavelet transform and independent component analysis (ICA), wavelet-ICA, was utilized to separate artifact components. The artifact components were then automatically identified using a priori artifact information, which was acquired in advance. Subsequently, signal reconstruction without artifact components was performed to obtain artifact-free signals. The results showed that, with this automatic online artifact removal method, classification accuracy improved with statistical significance in both experiments, namely motor imagery and emotion recognition.
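The wavelet-ICA separation and reconstruction steps described above can be sketched roughly as follows, assuming PyWavelets and scikit-learn's FastICA; the wavelet choice and the absence of coefficient thresholding are illustrative simplifications, not the authors' exact configuration.

```python
# Illustrative wavelet-ICA sketch (not the authors' exact pipeline).
import numpy as np
import pywt
from sklearn.decomposition import FastICA

def wavelet_ica_components(eeg, wavelet="db4", level=4):
    """eeg: (n_channels, n_samples). Returns independent components and the ICA model."""
    transformed = []
    for ch in eeg:
        coeffs = pywt.wavedec(ch, wavelet, level=level)
        # A real pipeline would shrink or threshold selected wavelet bands here.
        transformed.append(pywt.waverec(coeffs, wavelet)[: ch.shape[0]])
    transformed = np.asarray(transformed)

    ica = FastICA(n_components=eeg.shape[0], random_state=0)
    sources = ica.fit_transform(transformed.T).T   # (n_components, n_samples)
    return sources, ica

def reconstruct_without(sources, ica, artifact_idx):
    """Zero the identified artifact components and project back to channel space."""
    cleaned = sources.copy()
    cleaned[list(artifact_idx)] = 0.0
    return ica.inverse_transform(cleaned.T).T      # (n_channels, n_samples)
```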


BioMed Research International | 2017

Emotion Recognition from EEG Signals Using Multidimensional Information in EMD Domain

Ning Zhuang; Ying Zeng; Li Tong; Chi Zhang; Hanming Zhang; Bin Yan

This paper introduces a method for feature extraction and emotion recognition based on empirical mode decomposition (EMD). Using EMD, EEG signals are decomposed into Intrinsic Mode Functions (IMFs) automatically. Multidimensional information from the IMFs is used as features: the first difference of the time series, the first difference of the phase, and the normalized energy. The performance of the proposed method is verified on a publicly available emotional database. The results show that the three features are effective for emotion recognition. The role of each IMF is examined, and we find that the high-frequency component IMF1 has a significant effect on detecting different emotional states. The informative electrodes based on the EMD strategy are analyzed. In addition, the classification accuracy of the proposed method is compared with several classical techniques, including fractal dimension (FD), sample entropy, differential entropy, and discrete wavelet transform (DWT). Experimental results on the DEAP dataset demonstrate that our method can improve emotion recognition performance.
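A minimal sketch of the three per-IMF features named above, assuming the PyEMD package and a Hilbert transform for the instantaneous phase; the preprocessing and normalization details in the paper may differ.

```python
# Per-IMF features: first difference of the time series, first difference of
# the instantaneous phase, and normalized energy. A sketch, not the paper's code.
import numpy as np
from PyEMD import EMD
from scipy.signal import hilbert

def emd_features(signal):
    imfs = EMD().emd(signal)                        # (n_imfs, n_samples)
    energies = np.sum(imfs ** 2, axis=1)
    feats = []
    for imf, energy in zip(imfs, energies):
        d_time = np.mean(np.abs(np.diff(imf)))      # first difference of the time series
        phase = np.unwrap(np.angle(hilbert(imf)))
        d_phase = np.mean(np.abs(np.diff(phase)))   # first difference of the phase
        feats.append([d_time, d_phase, energy / energies.sum()])  # normalized energy
    return np.asarray(feats)                        # (n_imfs, 3)
```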


Computational and Mathematical Methods in Medicine | 2013

Principal feature analysis: a multivariate feature selection method for fMRI data.

Lijun Wang; Yu Lei; Ying Zeng; Li Tong; Bin Yan

Brain decoding with functional magnetic resonance imaging (fMRI) requires analysis of complex, multivariate data. Multivoxel pattern analysis (MVPA) has been widely used in recent years. MVPA treats the activation of multiple voxels from fMRI data as a pattern and decodes brain states using pattern classification methods. Feature selection is a critical step in MVPA because it determines which features are included in the classification analysis of fMRI data, thereby improving the performance of the classifier. Features can be selected by limiting the analysis to specific anatomical regions or by computing univariate (voxel-wise) or multivariate statistics. However, these methods either discard some informative features or select features with redundant information. This paper introduces principal feature analysis as a novel multivariate feature selection method for fMRI data processing. This multivariate approach aims to remove features with redundant information, thereby selecting fewer features while retaining the most information.
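Principal feature analysis is commonly formulated as PCA over the feature loadings followed by clustering of the per-feature loading vectors, keeping one representative feature per cluster; the sketch below follows that common formulation with illustrative parameter values, not necessarily the paper's settings.

```python
# Sketch of principal feature analysis for selecting representative voxels.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

def principal_feature_analysis(X, n_components=10, n_select=20):
    """X: (n_samples, n_voxels). Returns indices of the selected voxels."""
    pca = PCA(n_components=n_components).fit(X)
    loadings = pca.components_.T                     # one row of loadings per voxel
    km = KMeans(n_clusters=n_select, n_init=10, random_state=0).fit(loadings)
    selected = []
    for c in range(n_select):
        members = np.where(km.labels_ == c)[0]
        dists = np.linalg.norm(loadings[members] - km.cluster_centers_[c], axis=1)
        selected.append(members[np.argmin(dists)])   # voxel closest to the cluster centre
    return np.array(selected)
```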


International IEEE/EMBS Conference on Neural Engineering | 2015

Prior artifact information based automatic artifact removal from EEG data

Chi Zhang; Haibing Bu; Ying Zeng; Jingfang Jiang; Bin Yan; Jianxin Li

Electroencephalogram (EEG) is susceptible to various non-neural physiological artifacts. Automatic artifact removal from EEG remains a great challenge for extracting relevant information from brain activities. To adapt to variable subjects and EEG acquisition environments, this paper presents a novel automatic artifact removal method based on prior artifact information. First, the wavelet-ICA algorithm, which combines discrete wavelet transform (DWT) and independent component analysis (ICA), is utilized to separate artifact components. The artifact components are then automatically identified using the prior artifact information, which is acquired in advance. Subsequently, signal reconstruction is performed without the identified artifact components to obtain artifact-free signals. Finally, the method is validated by improvements in classification accuracy in a motor imagery experiment.
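The identification step against prior artifact information can be sketched as a template match between the separated components and previously recorded artifact components; the absolute-correlation measure and the 0.7 threshold below are illustrative assumptions.

```python
# Flag components that correlate strongly with stored artifact templates.
import numpy as np

def identify_artifact_components(sources, artifact_templates, threshold=0.7):
    """sources: (n_components, n_samples); artifact_templates: (n_templates, n_samples)."""
    flagged = []
    for i, component in enumerate(sources):
        corrs = [abs(np.corrcoef(component, t)[0, 1]) for t in artifact_templates]
        if max(corrs) > threshold:
            flagged.append(i)
    return flagged
```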


International IEEE/EMBS Conference on Neural Engineering | 2017

Real-time EEG-based person authentication system using face rapid serial visual presentation

Qunjian Wu; Ying Zeng; Zhimin Lin; Xiaojuan Wang; Bin Yan

As a new biometric, the electroencephalogram (EEG) signal has the advantages of invisibility, non-clonability, and non-coercion compared to traditional biometrics. However, real-time operation and stability remain difficult for current EEG-based person authentication systems. In this paper, we design a real-time and stable person authentication system using EEG signals elicited by self- and non-self-face rapid serial visual presentation (RSVP). A convolutional neural network (CNN) is applied to extract features specific to different individuals. Mean accuracies of 85.03% and 91.27% are achieved with login times of 3 seconds and 6 seconds, respectively, which illustrates the precision and real-time performance of the system.
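A minimal PyTorch sketch of a CNN for single-trial RSVP epochs is shown below; the layer sizes, kernel shapes, and epoch dimensions are illustrative assumptions rather than the architecture reported in the paper.

```python
# Toy CNN for RSVP EEG epochs shaped (batch, 1, n_channels, n_samples).
import torch
import torch.nn as nn

class RSVPNet(nn.Module):
    def __init__(self, n_channels=64, n_samples=250):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=(1, 11), padding=(0, 5)),  # temporal filtering
            nn.BatchNorm2d(16),
            nn.ELU(),
            nn.Conv2d(16, 32, kernel_size=(n_channels, 1)),         # spatial filtering
            nn.BatchNorm2d(32),
            nn.ELU(),
            nn.AvgPool2d(kernel_size=(1, 4)),
        )
        self.classifier = nn.Linear(32 * (n_samples // 4), 2)       # self vs. non-self face

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

# logits = RSVPNet()(torch.randn(8, 1, 64, 250))   # example batch of eight epochs
```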


Sensors | 2018

An EEG-Based Person Authentication System with Open-Set Capability Combining Eye Blinking Signals

Qunjian Wu; Ying Zeng; Chi Zhang; Li Tong; Bin Yan

The electroencephalogram (EEG) signal represents a subject's specific brain activity patterns and is considered an ideal biometric given its superior resistance to forgery. However, the accuracy and stability of current EEG-based person authentication systems are still unsatisfactory in practical applications. In this paper, a multi-task EEG-based person authentication system combining eye blinking is proposed, which achieves high precision and robustness. Firstly, we design a novel EEG-based biometric evoked paradigm using self- or non-self-face rapid serial visual presentation (RSVP). The designed paradigm obtains a distinct and stable biometric trait from EEG at a lower time cost. Secondly, event-related potential (ERP) features and morphological features are extracted from the EEG signals and eye blinking signals, respectively. Thirdly, a convolutional neural network and a back-propagation neural network are separately designed to estimate scores for the EEG features and the eye blinking features. Finally, a score fusion technique based on the least squares method is proposed to obtain the final estimation score. The performance of the multi-task authentication system improves significantly compared to the system using EEG only, with average accuracy increasing from 92.4% to 97.6%. Moreover, open-set authentication tests with additional imposters and permanence tests for users are conducted to simulate practical scenarios, which had not been employed in previous EEG-based person authentication systems. A mean false acceptance rate (FAR) of 3.90% and a mean false rejection rate (FRR) of 3.87% are achieved in the open-set authentication tests and permanence tests, respectively, which illustrates the open-set authentication and permanence capability of our system.
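The least-squares score fusion step can be sketched as fitting linear weights that map the two per-trial scores to the authentication label on held-out data and then applying them at test time; the exact target encoding and any regularization used in the paper are not reproduced here.

```python
# Least-squares fusion of EEG-branch and eye-blink-branch scores (a sketch).
import numpy as np

def fit_fusion_weights(eeg_scores, blink_scores, labels):
    """1-D score arrays and binary labels over validation trials."""
    A = np.column_stack([eeg_scores, blink_scores, np.ones_like(eeg_scores)])
    weights, *_ = np.linalg.lstsq(A, labels, rcond=None)
    return weights

def fused_score(eeg_score, blink_score, weights):
    return weights[0] * eeg_score + weights[1] * blink_score + weights[2]
```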


BioMed Research International | 2017

Multirapid Serial Visual Presentation Framework for EEG-Based Target Detection

Zhimin Lin; Ying Zeng; Hui Gao; Li Tong; Chi Zhang; Xiaojuan Wang; Qunjian Wu; Bin Yan

Target image detection based on a rapid serial visual presentation (RSVP) paradigm is a typical brain-computer interface system with various applications, such as image retrieval. In an RSVP paradigm, a P300 component is detected to determine target images. This strategy requires high-precision single-trial P300 detection methods. However, the performance of single-trial detection methods is lower than that of multitrial P300 detection methods. Image retrieval based on multitrial P300 is a new research direction. In this paper, we propose a triple-RSVP paradigm in which three images are presented simultaneously and a target image appears three times, so that multitrial P300 classification methods can be used to improve detection accuracy. In this study, these mechanisms were extended and validated, and the characteristics of the multi-RSVP framework were further explored. Two different P300 detection algorithms were also utilized in multi-RSVP to demonstrate that the scheme is universally applicable. Results revealed that the detection accuracy of the multi-RSVP paradigm was higher than that of the standard RSVP paradigm. The results validate the effectiveness of the proposed method, which offers a new approach to EEG-based target detection.
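One simple way to exploit the three presentations of each target, sketched below under the assumption that repeated epochs are averaged before scoring, is to feed the averaged epoch to any single-trial P300 scorer; the paper's actual multitrial classifiers may combine evidence differently.

```python
# Multitrial P300 scoring by averaging the repeated presentations of each image.
import numpy as np

def multitrial_p300_score(epochs, score_single_trial):
    """epochs: (n_repetitions, n_channels, n_samples) for one candidate image."""
    return score_single_trial(epochs.mean(axis=0))   # averaging raises the SNR

def detect_target(candidate_epochs, score_single_trial):
    """candidate_epochs: dict mapping image id -> (n_reps, n_channels, n_samples)."""
    scores = {img: multitrial_p300_score(e, score_single_trial)
              for img, e in candidate_epochs.items()}
    return max(scores, key=scores.get)               # strongest P300 evidence wins
```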


International Conference on Software Engineering, Artificial Intelligence, Networking and Parallel/Distributed Computing | 2016

Single-trial ERP detecting for emotion recognition

Jingfang Jiang; Ying Zeng; Li Tong; Chi Zhang; Bin Yan

Emotion recognition, as an important part of human-computer interaction, has been extensively researched. Various studies have verified the relationship between emotion and event-related potentials (ERPs). In this paper, a new methodology for emotion recognition is investigated by detecting single-trial ERPs related to specific levels of emotion. First, a spatial filter is constructed to estimate the ERP components. Then the most discriminative spatial and temporal features of the entire ERP waveform are extracted with linear discriminant analysis. The performance of this method is tested by classifying emotional valence into three levels, extremely negative, moderately negative, and neutral, with a support vector machine (SVM). The results show that the proposed method is effective.
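The classification stage can be sketched with scikit-learn as linear discriminant analysis acting as a feature extractor feeding an SVM; the spatial-filtering step and the exact feature layout from the paper are omitted here.

```python
# LDA-based feature extraction followed by an SVM over three valence levels.
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def valence_classifier():
    # LDA projects flattened ERP features onto at most (n_classes - 1) axes;
    # the SVM then separates the three valence levels in that space.
    return make_pipeline(LinearDiscriminantAnalysis(), SVC(kernel="rbf"))

# X: (n_trials, n_features) flattened single-trial ERP features; y: labels in {0, 1, 2}
# print(cross_val_score(valence_classifier(), X, y, cv=5).mean())
```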


Bio-medical Materials and Engineering | 2014

Sparse models for visual image reconstruction from fMRI activity.

Linyuan Wang; Li Tong; Bin Yan; Yu Lei; Lijun Wang; Ying Zeng; Guoen Hu

A suitable statistical model is essential for constraint-free visual image reconstruction, which may otherwise overfit the training data and generalize poorly. In this study, we investigate the sparsity of the distributed patterns of visual representation and introduce a suitable sparse model for the visual image reconstruction experiment. We use elastic net (EN) regularization to model the sparsity of the distributed patterns for local decoder training. We also investigate the relationship between the sparsity of the visual representation and sparse models with different parameters. Our experimental results demonstrate that the sparsity needed by visual reconstruction models differs from the sparsest one, and the l2-norm regularization introduced in the EN model improves not only the robustness of the model but also the generalization performance of the learning results. We therefore conclude that a sparse learning model for visual image reconstruction should reflect the sparsity of visual perceptual experience and yield a solution with high but not the highest sparsity, together with some robustness.
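A minimal scikit-learn sketch of an elastic net local decoder is given below; the regularization strengths are illustrative, and the actual image bases and cross-validation scheme of the paper are not reproduced.

```python
# Elastic net local decoder: the l1 term enforces voxel sparsity, the l2 term
# keeps the solution from being maximally sparse (robustness/generalization).
import numpy as np
from sklearn.linear_model import ElasticNet

def fit_local_decoder(X, y, alpha=0.1, l1_ratio=0.5):
    """X: (n_samples, n_voxels) fMRI activity; y: response of one local image basis."""
    model = ElasticNet(alpha=alpha, l1_ratio=l1_ratio, max_iter=10000).fit(X, y)
    sparsity = float(np.mean(model.coef_ == 0))      # fraction of voxels dropped
    return model, sparsity
```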


Sensors | 2018

Investigating Patterns for Self-Induced Emotion Recognition from EEG Signals

Ning Zhuang; Ying Zeng; Kai Yang; Chi Zhang; Li Tong; Bin Yan

Most current approaches to emotion recognition are based on neural signals elicited by affective materials such as images, sounds, and videos. However, the application of neural patterns to the recognition of self-induced emotions remains uninvestigated. In this study, we inferred the patterns and neural signatures of self-induced emotions from electroencephalogram (EEG) signals. The EEG signals of 30 participants were recorded while they watched 18 Chinese movie clips intended to elicit six discrete emotions: joy, neutrality, sadness, disgust, anger, and fear. After watching each movie clip, the participants were asked to self-induce emotions by recalling a specific scene from the movie. We analyzed the important features, electrode distribution, and average neural patterns of the different self-induced emotions. Results demonstrated that features related to the high-frequency rhythms of EEG signals, from electrodes distributed in the bilateral temporal, prefrontal, and occipital lobes, perform best in discriminating emotions. Moreover, the six discrete categories of self-induced emotion exhibit specific neural patterns and brain topography distributions. We achieved an average accuracy of 87.36% in discriminating positive from negative self-induced emotions and 54.52% in classifying emotions into six discrete categories. Our research will help promote the development of comprehensive endogenous emotion recognition methods.
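The kind of feature highlighted by these results can be sketched as band power in a high-frequency range computed per channel; the gamma band limits and sampling rate below are illustrative assumptions.

```python
# Per-channel high-frequency band power via Welch's method.
import numpy as np
from scipy.signal import welch

def band_power_features(eeg, fs=250.0, band=(30.0, 45.0)):
    """eeg: (n_channels, n_samples). Returns one band-power value per channel."""
    freqs, psd = welch(eeg, fs=fs, nperseg=int(fs * 2), axis=-1)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return np.trapz(psd[:, mask], freqs[mask], axis=-1)
```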
