Network


Latest external collaborations at the country level. Dive into details by clicking on the dots.

Hotspot


Dive into the research topics where Bao-Liang Lu is active.

Publication


Featured research published by Bao-Liang Lu.


IEEE Transactions on Neural Networks | 1999

Task decomposition and module combination based on class relations: a modular neural network for pattern classification

Bao-Liang Lu; Masami Ito

In this paper, we propose a new method for decomposing pattern classification problems based on the class relations among training data. Using this method, we can divide a K-class classification problem into a series of K(K-1)/2 two-class problems. These two-class problems discriminate class Ci from class Cj for i = 1, ..., K-1 and j = i+1, ..., K, while the training data belonging to the other K-2 classes are ignored. If the two-class problem of discriminating class Ci from class Cj is still hard to learn, we can break it down further into a set of two-class subproblems as small as we expect. Since each of the two-class problems can be treated as a completely separate classification problem within the proposed learning framework, all of the two-class problems can be learned in parallel. We also propose two module combination principles which give practical guidelines for integrating individual trained network modules. After learning each of the two-class problems with a network module, we can easily integrate all of the trained modules into a min-max modular (M3) network according to the module combination principles and obtain a solution to the original problem. Consequently, a large-scale and complex K-class classification problem can be solved effortlessly and efficiently by learning a series of smaller and simpler two-class problems in parallel.
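
The pairwise decomposition and MIN-rule combination described in the abstract can be sketched as follows. This is an illustrative toy, not the authors' code: the per-pair "network module" is replaced by a hypothetical least-squares linear discriminant, and the data are synthetic Gaussian clusters.

```python
import numpy as np
from itertools import combinations

def train_pairwise_modules(X, y, classes):
    """Train one module per class pair (Ci vs Cj); the other K-2
    classes are ignored. A least-squares linear discriminant stands
    in for any trainable network module."""
    modules = {}
    for i, j in combinations(classes, 2):
        mask = (y == i) | (y == j)
        Xi = np.hstack([X[mask], np.ones((mask.sum(), 1))])
        t = np.where(y[mask] == i, 1.0, -1.0)   # +1 for Ci, -1 for Cj
        w, *_ = np.linalg.lstsq(Xi, t, rcond=None)
        modules[(i, j)] = w
    return modules

def m3_predict(modules, X, classes):
    """MIN combination: the score for class i is the minimum output
    over all modules involving class i; predict the argmax score."""
    Xb = np.hstack([X, np.ones((len(X), 1))])
    scores = np.full((len(X), len(classes)), np.inf)
    for (i, j), w in modules.items():
        out = Xb @ w                            # >0 favors Ci, <0 favors Cj
        scores[:, classes.index(i)] = np.minimum(scores[:, classes.index(i)], out)
        scores[:, classes.index(j)] = np.minimum(scores[:, classes.index(j)], -out)
    return np.array(classes)[np.argmax(scores, axis=1)]

rng = np.random.default_rng(0)
centers = {0: (0, 0), 1: (4, 0), 2: (2, 4)}
X = np.vstack([rng.normal(c, 0.5, size=(30, 2)) for c in centers.values()])
y = np.repeat([0, 1, 2], 30)
mods = train_pairwise_modules(X, y, [0, 1, 2])   # 3 = K(K-1)/2 modules for K=3
pred = m3_predict(mods, X, [0, 1, 2])
print((pred == y).mean())
```

Because each pair's module is trained independently, the training loop parallelizes trivially, which is the point of the decomposition.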


International Conference on Acoustics, Speech, and Signal Processing | 2007

Person-Specific SIFT Features for Face Recognition

Jun Luo; Yong Ma; Erina Takikawa; Shihong Lao; Masato Kawade; Bao-Liang Lu

Scale invariant feature transform (SIFT), proposed by Lowe, has been widely and successfully applied to object detection and recognition. However, the representational ability of SIFT features in face recognition has rarely been investigated systematically. In this paper, we propose to use person-specific SIFT features and a simple non-statistical matching strategy combining local and global similarity on keypoint clusters to solve face recognition problems. Large-scale experiments on the FERET and CAS-PEAL face databases using only one training sample per person have been carried out to compare the method with non-person-specific features such as the Gabor wavelet feature and the local binary pattern feature. The experimental results demonstrate the robustness of SIFT features to expression, accessory, and pose variations.
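
The matching side of such a pipeline can be sketched with nearest-neighbor descriptor matching under Lowe's ratio test. This is a minimal numpy-only illustration: `desc_a` and `desc_b` are random stand-ins for real SIFT descriptors (which would come from an extractor such as OpenCV's), and the cluster-based local/global similarity of the paper is not reproduced here.

```python
import numpy as np

def match_descriptors(desc_a, desc_b, ratio=0.8):
    """Return index pairs (i, j) where desc_a[i]'s nearest neighbor
    desc_b[j] is sufficiently closer than its second-nearest
    (Lowe's ratio test)."""
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)
        j, k = np.argsort(dists)[:2]
        if dists[j] < ratio * dists[k]:
            matches.append((int(i), int(j)))
    return matches

rng = np.random.default_rng(1)
desc_b = rng.random((20, 128))                          # gallery descriptors
desc_a = desc_b[:10] + rng.normal(0, 0.01, (10, 128))   # noisy probe copies
matches = match_descriptors(desc_a, desc_b)
print(matches)
```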


Neurocomputing | 2014

Emotional state classification from EEG data using machine learning approach

Xiao-Wei Wang; Dan Nie; Bao-Liang Lu

Recently, emotion classification from EEG data has attracted much attention with the rapid development of dry electrode techniques, machine learning algorithms, and various real-world applications of brain-computer interfaces for normal people. Until now, however, researchers have had little understanding of the relationship between different emotional states and various EEG features. To improve the accuracy of EEG-based emotion classification and visualize the changes of emotional states over time, this paper systematically compares three kinds of existing EEG features for emotion classification, introduces an efficient feature smoothing method for removing noise unrelated to the emotion task, and proposes a simple approach to tracking the trajectory of emotion changes with manifold learning. To examine the effectiveness of these methods, we design a movie induction experiment that spontaneously leads subjects to real emotional states and collect an EEG data set of six subjects. From experimental results on our EEG data set, we found that (a) the power spectrum feature is superior to the other two kinds of features; (b) a linear-dynamic-system-based feature smoothing method can significantly improve emotion classification accuracy; and (c) the trajectory of emotion changes can be visualized by reducing subject-independent features with manifold learning.
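
The feature smoothing idea can be sketched as follows. Note this stand-in uses a simple exponential moving average rather than the paper's fitted linear dynamic system (Kalman-style) model, and the "feature" sequence is a synthetic noisy trend, not real EEG band power.

```python
import numpy as np

def smooth_features(feats, alpha=0.2):
    """Exponentially smooth each feature dimension over time.
    feats: (T, D) array of per-window features."""
    out = np.empty_like(feats, dtype=float)
    out[0] = feats[0]
    for t in range(1, len(feats)):
        out[t] = alpha * feats[t] + (1 - alpha) * out[t - 1]
    return out

rng = np.random.default_rng(2)
t = np.linspace(0, 1, 200)
clean = np.sin(2 * np.pi * t)[:, None]            # slow task-related trend
noisy = clean + rng.normal(0, 0.5, clean.shape)   # task-unrelated noise
smoothed = smooth_features(noisy)
# smoothing should bring the sequence closer to the underlying trend
print(np.abs(noisy - clean).mean(), np.abs(smoothed - clean).mean())
```

The design intuition matches the abstract: emotional state changes slowly relative to window-to-window feature noise, so temporal smoothing removes variance unrelated to the emotion task.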


International IEEE/EMBS Conference on Neural Engineering | 2011

EEG-based emotion recognition during watching movies

Dan Nie; Xiao-Wei Wang; Li-Chen Shi; Bao-Liang Lu

This study aims at finding the relationship between EEG signals and human emotions. EEG signals are used to classify two kinds of emotions, positive and negative. First, we extracted features from the original EEG data and used a linear dynamic system approach to smooth these features. An average test accuracy of 87.53% was obtained by using all of the features together with a support vector machine. Next, we reduced the dimension of the features through correlation coefficients. The top 100 and top 50 subject-independent features yielded average test accuracies of 89.22% and 84.94%, respectively. Finally, a manifold model was applied to find the trajectory of emotion changes.
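
The correlation-based ranking step can be sketched as below. Data and informative-feature positions are synthetic stand-ins, not the authors' EEG features.

```python
import numpy as np

def top_k_by_correlation(X, y, k):
    """Score each feature by |Pearson correlation| with the binary
    label and keep the k best. X: (N, D), y: (N,) in {0, 1}."""
    yc = y - y.mean()
    Xc = X - X.mean(axis=0)
    corr = (Xc * yc[:, None]).sum(0) / (
        np.linalg.norm(Xc, axis=0) * np.linalg.norm(yc) + 1e-12)
    return np.argsort(-np.abs(corr))[:k]

rng = np.random.default_rng(3)
y = rng.integers(0, 2, 300)
X = rng.normal(0, 1, (300, 20))
X[:, 5] += 2.0 * y                    # feature 5 carries the label
X[:, 12] -= 1.5 * y                   # feature 12 does too
sel = top_k_by_correlation(X, y, 2)
print(sorted(sel.tolist()))           # recovers the two informative features
```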


International Symposium on Neural Networks | 2006

Multi-view gender classification using local binary patterns and support vector machines

Hui-Cheng Lian; Bao-Liang Lu

In this paper, we present a novel approach to multi-view gender classification considering both shape and texture information to represent the facial image. The face area is divided into small regions, from which local binary pattern (LBP) histograms are extracted and concatenated into a single vector efficiently representing the facial image. The classification is performed using support vector machines (SVMs), which have been shown to be superior to traditional pattern classifiers for gender classification. The experiments clearly show the superiority of the proposed method over support gray faces on the CAS-PEAL face database, where a highest correct classification rate of 96.75% is obtained. In addition, the simplicity of the proposed method leads to very fast feature extraction, and the regional histograms and global description of the face allow for multi-view gender classification.
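
The regional-LBP representation can be sketched as follows: the basic 8-neighbor LBP operator followed by per-region histograms. This is an illustrative numpy implementation on a random stand-in image; feeding the concatenated vector to an SVM is assumed, not shown.

```python
import numpy as np

def lbp_image(img):
    """Basic 3x3 LBP: threshold the 8 neighbors against the center
    pixel and pack the comparison bits into a code in [0, 255]."""
    h, w = img.shape
    codes = np.zeros((h - 2, w - 2), dtype=np.uint8)
    center = img[1:-1, 1:-1]
    offsets = [(-1,-1), (-1,0), (-1,1), (0,1), (1,1), (1,0), (1,-1), (0,-1)]
    for bit, (dy, dx) in enumerate(offsets):
        neigh = img[1+dy:h-1+dy, 1+dx:w-1+dx]
        codes |= (neigh >= center).astype(np.uint8) << bit
    return codes

def regional_histograms(img, grid=(4, 4)):
    """Split the LBP code image into a grid of regions and
    concatenate the per-region 256-bin histograms."""
    codes = lbp_image(img)
    gh, gw = grid
    rh, rw = codes.shape[0] // gh, codes.shape[1] // gw
    hists = [np.bincount(codes[r*rh:(r+1)*rh, c*rw:(c+1)*rw].ravel(),
                         minlength=256)
             for r in range(gh) for c in range(gw)]
    return np.concatenate(hists)

rng = np.random.default_rng(4)
face = rng.integers(0, 256, (66, 66))       # stand-in for a face crop
vec = regional_histograms(face)
print(vec.shape)                            # (4096,) = 4*4 regions * 256 bins
```

Keeping per-region histograms (rather than one global histogram) is what preserves the spatial layout of micro-patterns across the face.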


International Conference of the IEEE Engineering in Medicine and Biology Society | 2009

Emotion classification based on gamma-band EEG

Mu Li; Bao-Liang Lu

In this paper, we use EEG signals to classify two emotions, happiness and sadness. These emotions are evoked by showing subjects pictures of smiling and crying facial expressions. We propose a frequency-band searching method to choose an optimal band into which the recorded EEG signal is filtered. We use common spatial patterns (CSP) and a linear SVM to classify these two emotions. To investigate the time resolution of classification, we explore two kinds of trials with lengths of 3 s and 1 s. Classification accuracies of 93.5% ± 6.7% and 93.0% ± 6.2% are achieved on 10 subjects for 3 s trials and 1 s trials, respectively. Our experimental results indicate that the gamma band (roughly 30–100 Hz) is suitable for EEG-based emotion classification.
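
A minimal sketch of the CSP step: whiten the composite covariance of the two classes, eigendecompose one class's whitened covariance, and take the extreme eigenvectors as spatial filters whose log-variance features separate the classes. The trials here are synthetic stand-ins for band-filtered EEG, not the authors' data.

```python
import numpy as np

def csp_filters(trials_a, trials_b, n_pairs=1):
    """trials_*: lists of (channels, samples) arrays; returns
    (2*n_pairs, channels) spatial filters."""
    cov = lambda ts: sum(t @ t.T / np.trace(t @ t.T) for t in ts) / len(ts)
    Ca, Cb = cov(trials_a), cov(trials_b)
    d, U = np.linalg.eigh(Ca + Cb)           # whiten composite covariance
    P = U @ np.diag(d ** -0.5) @ U.T
    w, V = np.linalg.eigh(P @ Ca @ P.T)      # class-a cov in whitened space
    order = np.argsort(w)
    picks = np.r_[order[:n_pairs], order[-n_pairs:]]  # extreme eigenvectors
    return V[:, picks].T @ P

rng = np.random.default_rng(5)
def make_trial(stds):                        # 3-channel trial, 500 samples
    return np.diag(stds) @ rng.normal(0, 1, (3, 500))

A = [make_trial([2.0, 1.0, 1.0]) for _ in range(20)]  # class A: channel 0 strong
B = [make_trial([1.0, 2.0, 1.0]) for _ in range(20)]  # class B: channel 1 strong
W = csp_filters(A, B)
feat = lambda t: np.log(np.var(W @ t, axis=1))        # log-variance features
fa = np.mean([feat(t) for t in A], axis=0)
fb = np.mean([feat(t) for t in B], axis=0)
print(fa - fb)   # the two filters deviate in opposite directions
```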


International Conference on Neural Information Processing | 2011

EEG-based emotion recognition using frequency domain features and support vector machines

Xiao-Wei Wang; Dan Nie; Bao-Liang Lu

Information about the emotional state of users has become more and more important in human-machine interaction and brain-computer interfaces. This paper introduces an emotion recognition system based on electroencephalogram (EEG) signals. Experiments using movie elicitation are designed to acquire subjects' EEG signals for classifying four emotional states: joy, relaxation, sadness, and fear. After pre-processing the EEG signals, we investigate various kinds of EEG features to build an emotion recognition system. To evaluate classification performance, the k-nearest neighbor (kNN) algorithm, multilayer perceptrons, and support vector machines are used as classifiers. Further, a minimum-redundancy-maximum-relevance method is used for extracting common critical features across subjects. Experimental results indicate that an average test accuracy of 66.51% for classifying the four emotional states can be obtained by using frequency domain features and support vector machines.
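
The selection criterion can be sketched greedily: add the feature with the best relevance-minus-redundancy score. Note this stand-in uses absolute Pearson correlation in place of the mutual information of the actual mRMR criterion, and the data are synthetic.

```python
import numpy as np

def mrmr_like_select(X, y, k):
    """Greedy mRMR-style selection with |correlation| as a stand-in
    for mutual information: maximize relevance to the label minus
    mean redundancy with already-selected features."""
    def corr(a, b):
        a, b = a - a.mean(), b - b.mean()
        return abs(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)
    D = X.shape[1]
    relevance = np.array([corr(X[:, j], y) for j in range(D)])
    selected = [int(np.argmax(relevance))]
    while len(selected) < k:
        best, best_score = None, -np.inf
        for j in range(D):
            if j in selected:
                continue
            redundancy = np.mean([corr(X[:, j], X[:, s]) for s in selected])
            if relevance[j] - redundancy > best_score:
                best, best_score = j, relevance[j] - redundancy
        selected.append(best)
    return selected

rng = np.random.default_rng(6)
y = rng.integers(0, 2, 400).astype(float)
X = rng.normal(0, 1, (400, 8))
X[:, 0] += 2 * y                              # informative
X[:, 1] = X[:, 0] + rng.normal(0, 0.1, 400)   # informative but redundant with 0
X[:, 2] -= 1.5 * y                            # informative, non-redundant
picked = mrmr_like_select(X, y, 2)
print(picked)    # skips the redundant copy in favor of feature 2
```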


European Conference on Computer Vision | 2010

Max-margin dictionary learning for multiclass image categorization

Xiao-Chen Lian; Zhiwei Li; Bao-Liang Lu; Lei Zhang

Visual dictionary learning and base (binary) classifier training are two basic problems in the currently most popular image categorization framework, which is based on bag-of-visual-terms (BOV) models and multiclass SVM classifiers. In this paper, we study new algorithms to improve the performance of this framework from both aspects. Typically, SVM classifiers are trained with the dictionary fixed, and as a result the traditional loss function can only be minimized with respect to the hyperplane parameters (w and b). We propose a novel loss function for a binary classifier, which links the hinge-loss term with dictionary learning. By doing so, we can further optimize the loss function with respect to the dictionary parameters. Thus, this framework is able to further increase the margins of the binary classifiers and consequently decrease the error bound of the aggregated classifier. On two benchmark datasets, Graz [1] and the fifteen-scene-category dataset [2], our method significantly outperforms state-of-the-art approaches.


IEEE Transactions on Autonomous Mental Development | 2015

Investigating Critical Frequency Bands and Channels for EEG-Based Emotion Recognition with Deep Neural Networks

Wei-Long Zheng; Bao-Liang Lu

To investigate critical frequency bands and channels, this paper introduces deep belief networks (DBNs) to construct EEG-based emotion recognition models for three emotions: positive, neutral, and negative. We develop an EEG dataset acquired from 15 subjects. Each subject performs the experiments twice at an interval of a few days. DBNs are trained with differential entropy features extracted from multichannel EEG data. We examine the weights of the trained DBNs and investigate the critical frequency bands and channels. Four different profiles of 4, 6, 9, and 12 channels are selected. The recognition accuracies of these four profiles are relatively stable, with a best accuracy of 86.65%, which is even better than that of the original 62 channels. The critical frequency bands and channels determined by the weights of the trained DBNs are consistent with existing observations. In addition, our experimental results show that neural signatures associated with different emotions do exist and that they share commonality across sessions and individuals. We compare the performance of deep models with shallow models. The average accuracies of DBN, SVM, LR, and KNN are 86.08%, 83.99%, 82.70%, and 72.60%, respectively.
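
The differential entropy (DE) feature mentioned above has a simple closed form when a band-filtered EEG segment is modeled as Gaussian: DE = 0.5 * ln(2*pi*e*sigma^2), i.e. a log-variance up to constants. A minimal sketch, with the band-pass filtering step assumed to have happened already:

```python
import numpy as np

def differential_entropy(segment):
    """DE of a band-filtered segment under a Gaussian assumption:
    0.5 * ln(2*pi*e*var). segment: 1-D array of samples."""
    return 0.5 * np.log(2 * np.pi * np.e * np.var(segment))

rng = np.random.default_rng(7)
low = rng.normal(0, 1.0, 2000)    # low-amplitude band activity
high = rng.normal(0, 3.0, 2000)   # high-amplitude band activity
de_low, de_high = differential_entropy(low), differential_entropy(high)
print(de_low, de_high)            # DE grows with the band's variance
```

In a full pipeline, one such DE value per channel and per frequency band forms the feature vector fed to the DBN.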


International Journal of Neural Systems | 2007

Multi-View Gender Classification Using Multi-Resolution Local Binary Patterns and Support Vector Machines

Hui-Cheng Lian; Bao-Liang Lu

In this paper, we present a novel method for multi-view gender classification considering both shape and texture information to represent facial images. The face area is divided into small regions, from which local binary pattern (LBP) histograms are extracted and concatenated into a single vector efficiently representing a facial image. Following the idea of the local binary pattern, we propose a new feature extraction approach called multi-resolution LBP, which can retain both fine and coarse local micro-patterns as well as the spatial information of facial images. The classification tasks in this work are performed by support vector machines (SVMs). The experiments clearly show the superiority of the proposed method over both support gray faces and support Gabor faces on the CAS-PEAL face database. A higher correct classification rate of 96.56% and a higher cross-validation average accuracy of 95.78% have been obtained. In addition, the simplicity of the proposed method leads to very fast feature extraction, and the regional histograms and fine-to-coarse description of facial images allow for multi-view gender classification.

Collaboration


Dive into Bao-Liang Lu's collaborations.

Top Co-Authors

Hai Zhao (Shanghai Jiao Tong University)
Wei-Long Zheng (Shanghai Jiao Tong University)
Yong Peng (Shanghai Jiao Tong University)
Masao Utiyama (National Institute of Information and Communications Technology)
Li-Chen Shi (Shanghai Jiao Tong University)
Xiaolin Wang (Shanghai Jiao Tong University)
James Tin-Yau Kwok (Hong Kong University of Science and Technology)
Jia-Yi Zhu (Shanghai Jiao Tong University)
Yang Yang (Shanghai Jiao Tong University)
Michinori Ichikawa (RIKEN Brain Science Institute)