Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Zhen-Tao Liu is active.

Publication


Featured research published by Zhen-Tao Liu.


Algorithms | 2017

An Improved Brain-Inspired Emotional Learning Algorithm for Fast Classification

Ying Mei; Guanzheng Tan; Zhen-Tao Liu

Classification is an important task in machine intelligence. The artificial neural network (ANN) is widely used for classification; however, traditional ANNs train slowly and struggle to meet real-time requirements in large-scale applications. In this paper, an improved brain-inspired emotional learning (BEL) algorithm is proposed for fast classification. The BEL algorithm mimics the high-speed emotional learning mechanism of the mammalian brain and offers fast learning and low computational complexity. To improve the classification accuracy of BEL, a genetic algorithm (GA) is adopted to optimally tune the weights and biases of the amygdala and orbitofrontal cortex components of the BEL neural network. The combined algorithm, named GA-BEL, was tested on eight University of California, Irvine (UCI) datasets and two well-known facial expression databases (Japanese Female Facial Expression and Cohn–Kanade). Comparative experiments indicate that the proposed GA-BEL is more accurate than the original BEL algorithm and much faster than traditional algorithms.
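
The GA-BEL idea above can be sketched compactly. The following is a minimal illustration, not the paper's implementation: it assumes a linear simplification in which the BEL output is the amygdala excitation minus the orbitofrontal inhibition, E = A - O, and a plain mutation-only GA tunes the packed weight vector for binary classification accuracy. All names and hyperparameters here are hypothetical.

```python
# Minimal sketch of GA-tuned brain-inspired emotional learning (GA-BEL).
# Hypothetical simplification: the BEL output is E = A - O, with the
# amygdala term A and orbitofrontal term O linear in the input s.
import numpy as np

rng = np.random.default_rng(0)

def bel_output(genome, X):
    """Amygdala weights v and orbitofrontal weights w packed in one genome."""
    n = X.shape[1]
    v, w = genome[:n], genome[n:2 * n]
    A = X @ v          # amygdala excitation
    O = X @ w          # orbitofrontal inhibition
    return A - O       # emotional output used as a decision score

def fitness(genome, X, y):
    """Classification accuracy of thresholding the BEL output at zero."""
    pred = (bel_output(genome, X) > 0).astype(int)
    return (pred == y).mean()

def ga_bel(X, y, pop=40, gens=100, sigma=0.1):
    n = 2 * X.shape[1]
    population = rng.normal(size=(pop, n))
    for _ in range(gens):
        scores = np.array([fitness(g, X, y) for g in population])
        parents = population[np.argsort(scores)[-pop // 2:]]          # selection
        children = parents[rng.integers(len(parents), size=pop - len(parents))]
        children = children + rng.normal(scale=sigma, size=children.shape)  # mutation
        population = np.vstack([parents, children])
    return population[np.argmax([fitness(g, X, y) for g in population])]

# Toy binary problem to exercise the loop.
X = rng.normal(size=(200, 5))
y = (X[:, 0] - X[:, 1] > 0).astype(int)
genome = ga_bel(X, y)
print("training accuracy:", fitness(genome, X, y))
```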


Neurocomputing | 2018

Speech emotion recognition based on feature selection and extreme learning machine decision tree

Zhen-Tao Liu; Min Wu; Weihua Cao; Jun-Wei Mao; Jian-Ping Xu; Guanzheng Tan

Feature selection is a crucial step in developing a system for identifying emotions in speech. Recently, the interaction among features generated from the same audio source has rarely been considered, which can produce redundant features and increase computational cost. To solve this problem, a feature selection method based on correlation analysis and the Fisher criterion is proposed, which removes redundant features that are closely correlated with one another. To further improve recognition performance on the selected feature subset, an emotion recognition method based on an extreme learning machine (ELM) decision tree is proposed, constructed according to the degree of confusion among the basic emotions. A speech emotion recognition framework is presented, and classification experiments with the proposed method are performed on the Chinese speech database from the Institute of Automation of the Chinese Academy of Sciences (CASIA). The experimental results show that the proposal achieves an average recognition rate of 89.6%. The method quickly and efficiently discriminates the emotional states of different speakers from speech, which would make speaker-independent human-computer/robot interaction feasible in the future.
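
The two ideas in this abstract, Fisher-criterion ranking followed by correlation-based redundancy removal, can be sketched as follows. This is an illustrative reconstruction, not the paper's exact procedure; the correlation threshold and helper names are assumptions.

```python
# Sketch: rank features by the Fisher criterion, then drop any feature that
# correlates strongly with an already-selected one. Illustrative only.
import numpy as np

def fisher_score(X, y):
    """Between-class variance over within-class variance, per feature."""
    classes = np.unique(y)
    overall_mean = X.mean(axis=0)
    num = np.zeros(X.shape[1])
    den = np.zeros(X.shape[1])
    for c in classes:
        Xc = X[y == c]
        num += len(Xc) * (Xc.mean(axis=0) - overall_mean) ** 2
        den += len(Xc) * Xc.var(axis=0)
    return num / (den + 1e-12)

def select_features(X, y, corr_thresh=0.9):
    order = np.argsort(fisher_score(X, y))[::-1]          # best Fisher score first
    corr = np.abs(np.corrcoef(X, rowvar=False))
    kept = []
    for j in order:
        if all(corr[j, k] < corr_thresh for k in kept):   # skip redundant features
            kept.append(j)
    return kept

# Toy demo: feature 3 is a near-copy of feature 0, so one of them is dropped.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 6))
X[:, 3] = X[:, 0] + 0.01 * rng.normal(size=100)
y = (X[:, 0] > 0).astype(int)
print(select_features(X, y))
```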


IEEE/CAA Journal of Automatica Sinica | 2017

A facial expression emotion recognition based human-robot interaction system

Zhen-Tao Liu; Min Wu; Weihua Cao; Luefeng Chen; Jian-Ping Xu; Ri Zhang; Mengtian Zhou; Jun-Wei Mao

A facial expression emotion recognition based human-robot interaction (FEER-HRI) system is proposed, for which a four-layer system framework is designed. The FEER-HRI system enables robots not only to recognize human emotions but also to generate facial expressions that adapt to them. A facial emotion recognition method based on 2D-Gabor features, the uniform local binary pattern (LBP) operator, and a multiclass extreme learning machine (ELM) classifier is presented and applied to real-time facial expression recognition on robots. The robots' facial expressions are represented by simple cartoon symbols shown on an onboard LED screen, which humans can easily understand. Four scenarios, i.e., guiding, entertainment, home service, and scene simulation, are performed in the human-robot interaction experiment, in which smooth communication is achieved through recognition of human facial expressions and generation of robot facial expressions within 2 seconds. Prospective applications of the FEER-HRI system include home service, smart homes, safe driving, and so on.
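
The classifier stage named in this abstract, a multiclass ELM, has a well-known closed-form training rule: project inputs through a fixed random hidden layer and solve the output weights by least squares. Below is a minimal sketch assuming the 2D-Gabor/uniform-LBP stage has already produced a feature matrix; the class and parameter names are illustrative.

```python
# Minimal multiclass extreme learning machine (ELM): random hidden layer,
# output weights fit in closed form by least squares on one-hot targets.
import numpy as np

class ELM:
    def __init__(self, n_hidden=200, seed=0):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def _hidden(self, X):
        return np.tanh(X @ self.W + self.b)   # random nonlinear projection

    def fit(self, X, y):
        self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))
        self.b = self.rng.normal(size=self.n_hidden)
        self.classes = np.unique(y)
        T = (y[:, None] == self.classes[None, :]).astype(float)   # one-hot targets
        H = self._hidden(X)
        self.beta, *_ = np.linalg.lstsq(H, T, rcond=None)  # closed-form output weights
        return self

    def predict(self, X):
        scores = self._hidden(X) @ self.beta
        return self.classes[np.argmax(scores, axis=1)]

# Toy demo in place of LBP/Gabor features.
rng = np.random.default_rng(1)
X = rng.normal(size=(300, 10))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
model = ELM(n_hidden=100).fit(X, y)
print("accuracy:", (model.predict(X) == y).mean())
```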


Neurocomputing | 2018

Speech emotion recognition based on an improved brain emotion learning model

Zhen-Tao Liu; Qiao Xie; Min Wu; Weihua Cao; Ying Mei; Jun-Wei Mao

Human-robot emotional interaction has developed rapidly in recent years, and speech emotion recognition plays a significant role in it. In this paper, a speech emotion recognition method based on an improved brain emotional learning (BEL) model is proposed, inspired by the emotional processing mechanism of the limbic system in the brain. The reinforcement learning rule of the BEL model, however, gives it poor adaptability and limits its performance. To address this, a genetic algorithm (GA) is employed to update the weights of the BEL model. The proposal is tested on the CASIA Chinese emotion corpus, the SAVEE emotion corpus, and the FAU Aibo dataset, from which MFCC-related features and their first-order delta coefficients are extracted. In addition, the proposal is tested on the INTERSPEECH 2009 standard feature set, with three dimensionality reduction methods, linear discriminant analysis (LDA), principal component analysis (PCA), and PCA+LDA, used to reduce the dimension of the feature set. The experimental results show that the proposed method obtains average recognition accuracies of 90.28% (CASIA), 76.40% (SAVEE), and 71.05% (FAU Aibo) for speaker-dependent (SD) speech emotion recognition, and highest average accuracies of 38.55% (CASIA), 44.18% (SAVEE), and 64.60% (FAU Aibo) for speaker-independent (SI) speech emotion recognition, which shows that the proposal is feasible for speech emotion recognition.
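
The feature pipeline described here, MFCCs with first-order deltas followed by PCA+LDA dimensionality reduction, can be sketched with librosa and scikit-learn. This is a plausible reconstruction, not the paper's configuration; the frame statistics, component counts, and synthetic data are assumptions.

```python
# Sketch: MFCCs plus first-order deltas per utterance, then PCA followed by
# LDA to reduce the feature dimension. Configuration is illustrative.
import numpy as np
import librosa
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.pipeline import make_pipeline

def mfcc_with_deltas(path, n_mfcc=13):
    """One utterance-level feature vector: mean MFCCs and mean deltas."""
    signal, sr = librosa.load(path, sr=None)
    mfcc = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=n_mfcc)
    delta = librosa.feature.delta(mfcc)      # 1st-order delta coefficients
    feats = np.vstack([mfcc, delta])         # shape (2 * n_mfcc, frames)
    return feats.mean(axis=1)

# PCA first removes collinearity; LDA then projects onto class-discriminant
# axes (at most n_classes - 1 of them). Synthetic data stands in for the
# utterance feature matrix built with mfcc_with_deltas().
rng = np.random.default_rng(0)
X = rng.normal(size=(120, 26))               # 120 utterances, 26-dim features
y = rng.integers(0, 6, size=120)             # six basic emotion labels
reducer = make_pipeline(PCA(n_components=20), LinearDiscriminantAnalysis())
Z = reducer.fit_transform(X, y)
print(Z.shape)                               # (120, 5)
```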


IEEE International Conference on Fuzzy Systems | 2011

Multimodal gesture recognition based on Choquet integral

Kaoru Hirota; Hai An Vu; Phuc Quang Le; Chastine Fatichah; Zhen-Tao Liu; Yongkang Tang; Martin Leonard Tangel; Z. Mu; Bo Sun; Fei Yan; Daisuke Masano; Oohan Thet; Masashi Yamaguchi; Fangyan Dong; Yoichi Yamazaki

A multimodal gesture recognition method based on the Choquet integral is proposed, fusing information from a camera and 3D accelerometer data. By calculating the optimal fuzzy measures for the camera recognition module and the accelerometer recognition module, the proposal achieves an average recognition rate of 92.7% for 8 types of gestures, improving the recognition rate by approximately 20% compared to either module alone. The proposed method aims to realize casual communication from humans to robots by integrating nonverbal gesture messages with verbal messages.
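
The fusion step named in this abstract, the Choquet integral with respect to a fuzzy measure, can be sketched for the two modules involved. The measure values below are illustrative placeholders; the paper computes optimal fuzzy measures for the camera and accelerometer modules.

```python
# Sketch: Choquet integral fusing two recognition modules' confidences.
import numpy as np

def choquet(scores, measure):
    """Choquet integral of per-source scores w.r.t. a fuzzy measure.

    scores:  dict source -> confidence in [0, 1]
    measure: dict frozenset of sources -> measure value, with g(all) = 1.
    """
    items = sorted(scores.items(), key=lambda kv: kv[1])   # ascending scores
    total, prev = 0.0, 0.0
    for i, (src, val) in enumerate(items):
        remaining = frozenset(s for s, _ in items[i:])     # sources with score >= val
        total += (val - prev) * measure[remaining]
        prev = val
    return total

# Illustrative fuzzy measure for camera and accelerometer modules; the
# singleton values need not sum to 1 (the measure can be non-additive).
g = {
    frozenset({"camera"}): 0.6,
    frozenset({"accel"}): 0.5,
    frozenset({"camera", "accel"}): 1.0,
}

# Fuse the two modules' confidences for one gesture class; in a full system
# the class with the highest fused score would be chosen.
fused = choquet({"camera": 0.8, "accel": 0.6}, g)
print(fused)   # 0.6 * 1.0 + (0.8 - 0.6) * 0.6 = 0.72
```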


Journal of Advanced Computational Intelligence and Intelligent Informatics | 2013

Concept of Fuzzy Atmosfield for Representing Communication Atmosphere and its Application to Humans-Robots Interaction

Zhen-Tao Liu; Min Wu; Dan-Yun Li; Luefeng Chen; Fangyan Dong; Yoichi Yamazaki; Kaoru Hirota


Congress on Evolutionary Computation | 2010

Gesture recognition using combination of acceleration sensor and images for casual communication between robots and humans

Yoichi Yamazaki; Hai An Vu; Phuc Quang Le; Zhen-Tao Liu; Chastine Fatichah; Mian Dai; Hitoho Oikawa; Daisuke Masano; Oohan Thet; Yongkang Tang; Nobuaki Nagashima; Martin Leonard Tangel; Fangyan Dong; Kaoru Hirota


Chinese Control Conference | 2016

A multimodal emotional communication based humans-robots interaction system

Zhen-Tao Liu; Fang-Fang Pan; Min Wu; Weihua Cao; Luefeng Chen; Jian-Ping Xu; Ri Zhang; Mengtian Zhou


International Journal of Social Robotics | 2015

Emotion-Age-Gender-Nationality Based Intention Understanding in Human–Robot Interaction Using Two-Layer Fuzzy Support Vector Regression

Luefeng Chen; Zhen-Tao Liu; Min Wu; Min Ding; Fangyan Dong; Kaoru Hirota


Journal on Multimodal User Interfaces | 2014

Multi-robot behavior adaptation to local and global communication atmosphere in humans-robots interaction

Luefeng Chen; Zhen-Tao Liu; Min Wu; Fangyan Dong; Yoichi Yamazaki; Kaoru Hirota

Collaboration


Dive into Zhen-Tao Liu's collaborations.

Top Co-Authors

Min Wu, China University of Geosciences
Luefeng Chen, China University of Geosciences
Weihua Cao, China University of Geosciences
Kaoru Hirota, Tokyo Institute of Technology
Fangyan Dong, Tokyo Institute of Technology
Dan-Yun Li, Central South University
Jinhua She, China University of Geosciences
Man Hao, China University of Geosciences
Jian-Ping Xu, China University of Geosciences