Meng-Ju Han
National Chiao Tung University
Publications
Featured research published by Meng-Ju Han.
IEEE Transactions on Systems, Man, and Cybernetics | 2013
Meng-Ju Han; Chia-How Lin; Kai-Tai Song
This paper presents a method of mood transition design of a robot for autonomous emotional interaction with humans. A 2-D emotional model is proposed to combine robot emotion, mood, and personality in order to generate emotional expressions. In this design, the robot personality is programmed by adjusting the factors of the five-factor model proposed by psychologists. From the Big Five personality traits, the influence factors of robot mood transition are determined. Furthermore, a method to fuse basic robotic emotional behaviors is proposed in order to manifest robotic emotional states via continuous facial expressions. An artificial face on a screen is a way to provide a robot with a humanlike appearance, which might be useful for human-robot interaction. An artificial face simulator has been implemented to show the effectiveness of the proposed methods. Questionnaire surveys have been carried out to evaluate the effectiveness of the proposed method by observing robotic responses to a user's emotional expressions. Preliminary experimental results on a robotic head show that the proposed mood state transition scheme appropriately responds to a user's emotional changes in a continuous manner.
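As a rough illustration only, the sketch below shows one way a 2-D mood state could drift continuously toward a recognized user emotion at rates derived from Big Five traits. The update rule, the trait-to-rate mapping, and all names here are assumptions made for illustration, not the paper's actual model.

```python
import numpy as np

# Hypothetical sketch: a 2-D mood state (valence, arousal) drifts toward
# the emotion recognized from the user, at rates derived from Big Five
# traits. The mapping below is an illustrative assumption, not the
# paper's influence-factor design.

BIG_FIVE = {"openness": 0.6, "conscientiousness": 0.5,
            "extraversion": 0.8, "agreeableness": 0.7, "neuroticism": 0.3}

def transition_rate(traits):
    # Assumed mapping: extraverted/agreeable personalities react faster,
    # neurotic ones swing harder on the arousal axis.
    valence_rate = 0.5 * traits["extraversion"] + 0.5 * traits["agreeableness"]
    arousal_rate = 0.5 + 0.5 * traits["neuroticism"]
    return np.array([valence_rate, arousal_rate])

def update_mood(mood, user_emotion, traits, dt=0.1):
    """One continuous mood-transition step toward the perceived user emotion."""
    rate = transition_rate(traits)
    return mood + dt * rate * (user_emotion - mood)

mood = np.array([0.0, 0.0])            # neutral mood
happy_user = np.array([0.9, 0.6])      # detected user emotion in the 2-D space
for _ in range(50):
    mood = update_mood(mood, happy_user, BIG_FIVE)
print(mood)  # drifts smoothly toward the user's emotional state
```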
Journal of Computers | 2008
Meng-Ju Han; Jing-Huai Hsu; Kai-Tai Song; Fuh-Yu Chang
Emotion recognition has become a popular area in human-robot interaction research. Through recognizing facial expressions, a robot can interact with a person in a more friendly manner. In this paper, we propose a bimodal emotion recognition system that combines image and speech signals. A novel probabilistic strategy has been studied for a support vector machine (SVM)-based classification design to statistically assign information-fusion weights to the two feature modalities. The fusion weights are determined by the distance between the test data and the classification hyperplane and by the standard deviation of the training samples. In the subsequent bimodal SVM classification, the recognition result of the modality with the higher weight is selected. The complete procedure has been implemented in a DSP-based embedded system to recognize five facial expressions on-line in real time. The experimental results show that an average recognition rate of 86.9% is achieved, a 5% improvement compared to using only image information.
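A minimal sketch of the fusion idea, assuming scikit-learn's SVC: each modality's reliability weight is the test sample's distance to that modality's hyperplane, normalized by the mean training-sample distance, and the higher-weighted modality's result is selected. The toy data and the exact weight formula are assumptions, not the paper's implementation.

```python
import numpy as np
from sklearn.svm import SVC

def reliability_weight(clf, X_train, x_test):
    """Distance of x_test to the hyperplane, normalized by the mean
    training-sample distance (cf. the paper's reliability measure)."""
    train_dist = np.abs(clf.decision_function(X_train)).mean()
    test_dist = np.abs(clf.decision_function(x_test.reshape(1, -1)))[0]
    return test_dist / train_dist

# Two independent classifiers, one per modality (toy data for shape only).
rng = np.random.default_rng(0)
X_img, X_spk = rng.normal(size=(100, 20)), rng.normal(size=(100, 12))
y = rng.integers(0, 2, size=100)
img_clf = SVC(kernel="rbf").fit(X_img, y)
spk_clf = SVC(kernel="rbf").fit(X_spk, y)

def fuse(x_img, x_spk):
    w_img = reliability_weight(img_clf, X_img, x_img)
    w_spk = reliability_weight(spk_clf, X_spk, x_spk)
    # Select the result of the modality with the higher reliability weight.
    if w_img >= w_spk:
        return img_clf.predict(x_img.reshape(1, -1))[0]
    return spk_clf.predict(x_spk.reshape(1, -1))[0]

print(fuse(X_img[0], X_spk[0]))
```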
Systems, Man and Cybernetics | 2007
Meng-Ju Han; Jing-Huai Hsu; Kai-Tai Song; Fuh-Yu Chang
Emotion recognition has become an important research area for advanced human-robot interaction. Through recognizing facial expressions, a robot can interact with a person in a more friendly manner. In this paper, we propose a bimodal emotion recognition system that combines image and speech information. A novel information fusion strategy is proposed to assign proper weights to the two feature modalities based on their recognition reliability. The fusion weights are determined by the distance between the test data and the classification hyperplane and by the standard deviation of the training samples. After normalization using the mean distance between the training samples and the hyperplane, the fusion weight is set to represent the classification reliability of each individual modality. In the subsequent bimodal SVM classification, the recognition result of the modality with the higher weight is selected. The complete procedure has been implemented in a DSP-based system to recognize five facial expressions on-line in real time. The experimental results show that a recognition rate of 86.9% is achieved, an improvement of 5% compared with using only image information.
Intelligent Service Robotics | 2010
Kai-Tai Song; Meng-Ju Han; Jung-Wei Hong
In order to serve people and support them in daily life, a domestic or service robot needs to accommodate itself to various individuals. Emotional and intelligent human–robot interaction plays an important role for a robot to gain the attention of its users. Facial expression recognition is a key factor in interactive robotic applications. In this paper, an image-based facial expression recognition system that adapts online to a new face is proposed. The main idea of the proposed learning algorithm is to adjust the parameters of the support vector machine (SVM) hyperplane for learning the facial expressions of a new face. After mapping the input space to a Gaussian-kernel space, support vector pursuit learning (SVPL) is employed to retrain the hyperplane in the new feature space. To expedite the retraining step, we propose to retrain a new SVM classifier using only the samples classified incorrectly in the previous iteration, in combination with critical historical sets. After adjusting the hyperplane parameters, the new classifier recognizes previously unrecognizable facial data more effectively. Experiments using an embedded imaging system show that the proposed system recognizes new facial datasets with a recognition rate of 92.7%. Furthermore, it also maintains a satisfactory recognition rate of 82.6% on old facial samples.
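Support vector pursuit learning is not a standard library routine; the sketch below only loosely approximates the retraining loop using scikit-learn, treating the previous model's support vectors as the "critical historical set" and refitting on them plus newly misclassified samples. It is an assumption-laden stand-in, not the authors' SVPL implementation.

```python
import numpy as np
from sklearn.svm import SVC

def adapt_to_new_face(old_clf, X_old, y_old, X_new, y_new):
    # Support vectors of the old hyperplane serve as the retained history
    # (an approximation of the paper's critical historical sets).
    sv_idx = old_clf.support_
    X_hist, y_hist = X_old[sv_idx], y_old[sv_idx]

    # Only samples the old classifier gets wrong drive the retraining.
    wrong = old_clf.predict(X_new) != y_new
    X_retrain = np.vstack([X_hist, X_new[wrong]])
    y_retrain = np.concatenate([y_hist, y_new[wrong]])

    # Refit in the same Gaussian-kernel feature space.
    return SVC(kernel="rbf", gamma="scale").fit(X_retrain, y_retrain)

rng = np.random.default_rng(1)
X_old, y_old = rng.normal(size=(200, 30)), rng.integers(0, 5, 200)
X_new, y_new = rng.normal(loc=0.5, size=(40, 30)), rng.integers(0, 5, 40)
clf = SVC(kernel="rbf").fit(X_old, y_old)
clf = adapt_to_new_face(clf, X_old, y_old, X_new, y_new)
```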
Systems, Man and Cybernetics | 2009
Meng-Ju Han; Chia-How Lin; Kai-Tai Song
This paper presents a novel design of autonomous robotic facial expression generation using a fuzzy-neuro network. The emotional characteristics of a robot can be specified in this design by assigning weights in the fuzzy-neuro network. The robotic face generates appropriate expressions in response to the image recognition results of a user's emotional state and the assigned robotic emotional characteristics. In this study, two contrasting emotional characteristics, optimistic and pessimistic, have been designed and tested to verify their responses to the recognized user emotional states (neutral, happiness, sadness and anger). Computer simulations of a robotic face show that the proposed facial expression generation system effectively responds to a user's emotional states in a human-like manner by fusing the outputs of four facial expressions (boredom, happiness, sadness and surprise).
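For illustration only, a sketch of the output-fusion step: recognized-user-state probabilities pass through a characteristic-specific weight matrix to mix the four basic expressions. The weight values below are invented; in the paper they come from the fuzzy-neuro network.

```python
import numpy as np

USER_STATES = ["neutral", "happiness", "sadness", "anger"]
ROBOT_EXPR = ["boredom", "happiness", "sadness", "surprise"]

# Rows: user states, columns: robot expressions (hypothetical values).
OPTIMISTIC = np.array([[0.2, 0.6, 0.0, 0.2],
                       [0.0, 0.9, 0.0, 0.1],
                       [0.0, 0.3, 0.4, 0.3],
                       [0.1, 0.2, 0.3, 0.4]])

PESSIMISTIC = np.array([[0.6, 0.1, 0.3, 0.0],
                        [0.2, 0.4, 0.2, 0.2],
                        [0.1, 0.0, 0.8, 0.1],
                        [0.1, 0.0, 0.7, 0.2]])

def expression_mix(user_probs, character=OPTIMISTIC):
    """Fuse the four basic expressions given recognized-state probabilities."""
    mix = user_probs @ character
    return dict(zip(ROBOT_EXPR, mix / mix.sum()))

# Recognizer output: mostly 'sadness' detected on the user's face.
print(expression_mix(np.array([0.1, 0.0, 0.8, 0.1])))
```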
Journal of The Chinese Institute of Engineers | 2014
Kai-Tai Song; Meng-Ju Han; Shih-Chieh Wang
By recognizing sensory information, through touch, vision, or voice sensory modalities, a robot can interact with people in a more intelligent manner. In human–robot interaction (HRI), emotion recognition has been a popular research topic in recent years. This paper proposes a method for emotion recognition, using a speech signal to recognize several basic human emotional states, for application in an entertainment robot. The proposed method uses voice signal processing and classification. Firstly, end-point detection and frame setting are accomplished in the pre-processing stage. Then, the statistical features of the energy contour are computed. Fisher’s linear discriminant analysis (FLDA) is used to enhance the recognition rate. In the final stage, a support vector machine (SVM) is used to complete the emotional state classification. In order to determine the effectiveness of emotional HRI, an embedded system was constructed and integrated with a self-built entertainment robot. The experimental results for the entertainment robot show that the robot interacts with a person in a responsive manner. The average recognition rate for five emotional states is 73.8% using the database constructed in the authors’ lab.
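A hedged sketch of the described pipeline (energy-contour statistics, then FLDA, then SVM), assuming scikit-learn; the paper's end-point detection stage and exact feature set are not reproduced, and the chosen statistics are assumptions.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

def energy_contour_features(frames):
    """Statistical features of the short-time energy contour of one utterance.
    `frames` is (n_frames, frame_len); these particular statistics are assumed."""
    energy = np.sum(frames ** 2, axis=1)
    return np.array([energy.mean(), energy.std(), energy.max(),
                     energy.min(), np.median(energy)])

rng = np.random.default_rng(2)
# Toy corpus: 100 utterances of 80 frames, labels for five emotional states.
X = np.array([energy_contour_features(rng.normal(size=(80, 256)))
              for _ in range(100)])
y = rng.integers(0, 5, size=100)

# FLDA transforms the features before the SVM classifier, mirroring the
# FLDA -> SVM stages described in the abstract.
model = make_pipeline(LinearDiscriminantAnalysis(n_components=4),
                      SVC(kernel="rbf"))
model.fit(X, y)
print(model.predict(X[:3]))
```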
Computational Intelligence in Robotics and Automation | 2007
Jung-Wei Hong; Meng-Ju Han; Kai-Tai Song; Fuh-Yu Chang
The capability of robotic emotion recognition is an important factor for human-robot interaction. In order for a robot to function in daily living environments, an emotion recognition system needs to accommodate itself to various persons. In this paper, an emotion recognition system that can adapt to new facial data is proposed. The main idea of the proposed learning algorithm is to adjust the parameters of the SVM hyperplane for learning the emotional expressions of a new face. After mapping the input space to a Gaussian-kernel space, support vector pursuit learning (SVPL) is applied to retrain the hyperplane in the new feature space. To expedite the retraining procedure, only the samples classified incorrectly in the previous iteration are combined with critical historical sets to retrain a new SVM classifier. After adjusting the hyperplane parameters, the new classifier recognizes previously misclassified facial data. Experimental results show that the proposed system recognizes new facial data at high recognition rates after fast retraining of the hyperplane. Moreover, the proposed method also maintains a satisfactory recognition rate on old facial samples.
International Conference on System Science and Engineering | 2010
Yi-Wen Chen; Meng-Ju Han; Kai-Tai Song; Yu-Lun Ho
Image recognition of a user plays an important role in designing intelligent and interactive behaviors for a domestic or service robot. In this paper, an image-based age-group classification method is proposed to estimate three levels of age groups, namely child, adult and the elderly. After face detection in the acquired image frame, the facial area is extracted and 52 feature points are located using the Lucas-Kanade image alignment method. These feature points and the corresponding facial area are used to build an active appearance model (AAM). After facial image warping, the texture features are sent to a support vector machine (SVM) to estimate the age-group level. In the experimental results, the average recognition rate of the proposed method is 87%. This improves the robot's capability to interact with users in a friendly manner.
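Only the final classification stage is simple enough to sketch; the toy version below assumes scikit-learn and stubs out face detection, the 52-point alignment, and AAM warping, which have no one-line library equivalents.

```python
import numpy as np
from sklearn.svm import SVC

def aam_texture_features(face_image):
    # Placeholder for the AAM shape-free texture vector described above;
    # a real system would warp the face to a mean shape first.
    return face_image.flatten()[:500]

rng = np.random.default_rng(3)
faces = rng.random(size=(90, 32, 32))   # toy stand-ins for warped face patches
X = np.array([aam_texture_features(f) for f in faces])
y = rng.integers(0, 3, size=90)         # 0: child, 1: adult, 2: elderly

age_clf = SVC(kernel="rbf").fit(X, y)
print(age_clf.predict(X[:5]))
```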
Advanced Robotics and its Social Impacts | 2009
Kai-Tai Song; Shih-Chieh Wang; Meng-Ju Han; Ching-Yi Kuo
In this paper, a pose-variant face recognition system is presented for the study of human-robot interaction design. An iterative fitting algorithm is proposed to extract feature-point positions based on an active appearance model (AAM). Compared with the traditional Lucas-Kanade algorithm, the proposed iterative algorithm improves the capability of correct convergence when larger variations in head posture occur. After obtaining the locations of the feature points, the dimensionality of the texture model is reduced and the result is sent to a back-propagation neural network (BPNN) to recognize family members. The proposed pose-variant face recognition system has been implemented on an embedded image processing system and integrated with a pet robot. Experimental results show that the robot can interact with a person in a responsive manner. Tested with the UMIST database and a database built in the lab, the proposed method achieved average recognition rates of 91.0% and 95.6%, respectively.
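A rough sketch of the dimension-reduction and BPNN stages, assuming scikit-learn, with PCA standing in for the paper's unspecified reduction method and MLPClassifier for the back-propagation network; the iterative AAM fitting is omitted.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(4)
X = rng.random(size=(120, 800))      # toy AAM texture vectors
y = rng.integers(0, 4, size=120)     # four hypothetical family members

# Reduce the texture-model dimensionality, then classify with a
# back-propagation network, mirroring the stages in the abstract.
recognizer = make_pipeline(PCA(n_components=40),
                           MLPClassifier(hidden_layer_sizes=(32,),
                                         max_iter=500))
recognizer.fit(X, y)
print(recognizer.predict(X[:3]))
```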
Key Engineering Materials | 2008
Kai-Tai Song; Meng-Ju Han; Fuh-Yu Chang; S.H. Chang
The capability of recognizing human facial expressions plays an important role in advanced human-robot interaction development. Through recognizing facial expressions, a robot can interact with a user in a more natural and friendly manner. In this paper, we propose a facial expression recognition system based on an embedded image processing platform to classify different facial expressions on-line in real time. A low-cost embedded vision system has been designed and realized for robotic applications using a CMOS image sensor and a digital signal processor (DSP). The current design acquires thirty 640x480 image frames per second (30 fps). The proposed emotion recognition algorithm has been successfully implemented on this real-time vision system. Experimental results on a pet robot show that the robot can interact with a person in a responsive manner. The developed image processing platform accelerates the recognition speed to 25 recognitions per second with an average on-line recognition rate of 74.4% for five facial expressions.