P. Suja
Amrita Vishwa Vidyapeetham
Publications
Featured research published by P. Suja.
First International Symposium on Signal Processing and Intelligent Recognition Systems - SIRS 2014 | 2014
P. Suja; Shikha Tripathi; J. Deepthy
An emotion recognition system identifies expressions in facial images and classifies them into one of the six basic emotions. Feature extraction and classification are the two main steps in such a system. In this paper, two feature extraction approaches, the cropped face and the whole face, are implemented separately on images taken from the Cohn-Kanade (CK) and JAFFE databases. Transform techniques, namely the Dual-Tree Complex Wavelet Transform (DT-CWT) and the Gabor Wavelet Transform, are used to form the feature vectors, with a Neural Network (NN) and K-Nearest Neighbor (KNN) as the classifiers. These methods are combined in all possible combinations with the two approaches and the two databases to explore their efficiency. The overall average accuracy is 93% for NN and 80% for KNN. The results are compared with those in the literature and prove more efficient. They suggest that the cropped face approach gives better results than the whole face approach, and that DT-CWT outperforms the Gabor wavelet technique for both classifiers.
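The abstract does not give the filter-bank settings, so the following is a minimal sketch of the Gabor-wavelet branch of such a pipeline, with illustrative kernel size, scales and orientations and scikit-learn's KNN as the classifier; gabor_features and its mean/std pooling scheme are assumptions, not the paper's exact method.

```python
import cv2
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def gabor_features(gray_face, ksize=31, scales=(4.0, 8.0), n_orient=4):
    """Filter a grayscale face crop with a small Gabor bank and pool the
    responses into a compact 1-D feature vector (mean/std per filter)."""
    feats = []
    for sigma in scales:
        for k in range(n_orient):
            theta = k * np.pi / n_orient
            kern = cv2.getGaborKernel((ksize, ksize), sigma, theta,
                                      lambd=10.0, gamma=0.5, psi=0.0)
            resp = cv2.filter2D(gray_face.astype(np.float32), cv2.CV_32F, kern)
            feats.extend([resp.mean(), resp.std()])
    return np.array(feats)

# X_train, y_train would hold feature vectors and emotion labels computed from
# CK or JAFFE face images (cropped-face or whole-face, as in the paper).
# knn = KNeighborsClassifier(n_neighbors=3).fit(X_train, y_train)
# prediction = knn.predict([gabor_features(test_face)])
```

The DT-CWT branch would be analogous, swapping the Gabor bank for the subband magnitudes of a dual-tree complex wavelet decomposition.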
International Conference on Signal Processing | 2016
Suchitra; P. Suja; Shikha Tripathi
Human-machine interaction is growing in demand, and machines need to understand human gestures and emotions. If a machine can identify human emotions, it can understand human behavior better and thus improve task efficiency. Emotions can be understood from textual, vocal, verbal and facial expressions, and facial expressions play a big role in judging a person's emotions. Limited work has been done on real-time emotion recognition from facial images. In this paper, we propose a method for real-time emotion recognition from facial images. The proposed method has three steps: face detection using a Haar cascade, feature extraction using the Active Shape Model (ASM), from which 26 facial points are extracted, and an AdaBoost classifier for classifying five emotions: anger, disgust, happiness, neutral and surprise. The novelty of our method lies in implementing emotion recognition in real time on a Raspberry Pi II, on which an average accuracy of 94% is achieved. Mounted on a mobile robot, the Raspberry Pi II can recognize emotions dynamically in real time in social/service environments where emotion recognition plays a major role.
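As a rough sketch of the front end of this pipeline, the loop below runs OpenCV's Haar-cascade face detector on a live camera feed (as available on a Raspberry Pi); the ASM landmark fit and the trained AdaBoost emotion classifier are left as hypothetical placeholders (asm_fit, adaboost), since their details are not given in the abstract.

```python
import cv2

# Bundled frontal-face Haar cascade shipped with the opencv-python package.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture(0)  # default camera device
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        face = gray[y:y + h, x:x + w]
        # landmarks = asm_fit(face)          # hypothetical ASM fit -> 26 points
        # emotion = adaboost.predict([...])  # hypothetical trained classifier
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("faces", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```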
International Conference on Contemporary Computing | 2015
P. Suja; Kalyan Kumar V P; Shikha Tripathi
Emotions are characterized as responses to a person's internal and external events. Emotion recognition from facial expressions in videos plays a vital role in human-computer interaction, where dynamic changes in facial movements need to be detected quickly. In this work, we propose a simple geometry-based method for recognizing the six basic emotions in video sequences from the BU-4DFE database. We choose optimal feature points from among the 83 feature points provided with BU-4DFE. A video expressing an emotion has frames containing the neutral, onset, apex and offset phases of that emotion, and we dynamically identify the most expressive (apex) frame. The Euclidean distances between corresponding feature points in the neutral and apex frames are computed to form the feature vector. The feature vectors for all emotions and subjects are given to a Neural Network (NN) and a Support Vector Machine (SVM) with different kernels for classification, and the accuracies obtained by NN and SVM are compared. Our method is simple, uses only two frames, and yields good accuracy on BU-4DFE. Very complex algorithms exist in the literature for this database, and our simple method gives comparable results. It can be applied to real-time implementation and kinesics in future.
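A minimal sketch of the two-frame geometric feature construction described above, assuming the tracked landmarks arrive as a (frames x points x 3) array with the first frame neutral; choosing the apex as the frame of maximum total landmark displacement is an illustrative heuristic, not necessarily the paper's rule.

```python
import numpy as np

def apex_distance_features(landmarks):
    """landmarks: array of shape (n_frames, n_points, 3) for one video."""
    neutral = landmarks[0]
    # Per-frame, per-point Euclidean displacement from the neutral frame.
    disp = np.linalg.norm(landmarks - neutral, axis=2)  # (n_frames, n_points)
    apex_idx = int(disp.sum(axis=1).argmax())           # most expressive frame
    return disp[apex_idx], apex_idx                     # feature vector, apex id
```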
SIRS | 2016
V. P. Kalyan Kumar; P. Suja; Shikha Tripathi
Emotions are important for understanding human behavior. Modalities for emotion recognition include text, speech, and facial expression or gesture. Emotion recognition from facial expressions in video plays a vital role in human-computer interaction, where the facial feature movements that convey an emotion need to be recognized quickly. In this work, we propose a novel geometry-based method for recognizing the six basic emotions in 4D video sequences from the BU-4DFE database. We select key facial points from among the 83 feature points provided with BU-4DFE. A video expressing an emotion has frames containing the neutral, onset, apex and offset phases of that emotion, and we identify the apex frame of a video sequence automatically. The Euclidean distances between corresponding feature points in the neutral and apex frames are computed to form the feature vector. The feature vectors for all emotions and subjects are given to Random Forests and a Support Vector Machine (SVM) for classification, and the accuracies of the two classifiers are compared. Our method is simple, uses only two frames, and yields good accuracy on BU-4DFE. Using the computed distance vectors, we determine the optimal number of key facial points for the best recognition rate. Our method gives better results than the literature and can be applied to real-time implementation with the SVM classifier and to kinesics in future.
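The classifier comparison could be sketched as below with scikit-learn, using cross-validated accuracy on the distance feature vectors; the hyperparameters and fold count are illustrative assumptions, not the paper's settings.

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

def compare_classifiers(X, y):
    """X: (n_samples, n_features) distance vectors; y: emotion labels."""
    models = [("Random Forest", RandomForestClassifier(n_estimators=200)),
              ("SVM (RBF)", SVC(kernel="rbf", C=10.0, gamma="scale"))]
    for name, clf in models:
        scores = cross_val_score(clf, X, y, cv=10)  # 10-fold accuracy
        print(f"{name}: {scores.mean():.3f} +/- {scores.std():.3f}")
```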
IEEE India Conference | 2015
P. Suja; D. KrishnaSri; Shikha Tripathi
Information about a person's emotional state can be inferred from facial expressions. Emotion recognition has become an active research area in recent years in fields such as Human Robot Interaction (HRI), medicine and intelligent vehicles. The challenges of recognizing emotion in images with pose variations motivate researchers to explore further. In this paper, we propose a method based on geometric features, considering images at 7 yaw angles (-45°, -30°, -15°, 0°, +15°, +30°, +45°) from the BU-3DFE database. Most reported work considers only positive yaw angles; in this work, we include both positive and negative yaw angles. In the proposed method, feature extraction is carried out by concatenating distance and angle vectors between the feature points, and classification is performed using a neural network. The results for images with pose variations are encouraging and comparable with literature in which both pitch and yaw angles were considered. With our method, non-frontal views achieve accuracy similar to the frontal view, making it pose invariant. The method may be extended to combined pitch and yaw angles in future.
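The distance-and-angle feature vector might look like the sketch below, which takes all landmark pairs for illustration; the paper's actual choice of point pairs is not specified in the abstract.

```python
from itertools import combinations

import numpy as np

def distance_angle_vector(points):
    """points: (n_points, 2) facial feature point coordinates."""
    dists, angles = [], []
    for i, j in combinations(range(len(points)), 2):
        d = points[j] - points[i]
        dists.append(np.linalg.norm(d))        # pairwise distance
        angles.append(np.arctan2(d[1], d[0]))  # segment orientation
    return np.concatenate([dists, angles])     # concatenated feature vector
```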
Advances in Signal Processing and Intelligent Recognition Systems | 2016
D. KrishnaSri; P. Suja; Shikha Tripathi
Over the last decade, emotion recognition has gained prominence for its applications in Human Robot Interaction (HRI), intelligent vehicles, patient health monitoring, etc. The challenges of emotion recognition from non-frontal images motivate researchers to explore further. In this paper, we propose a method based on geometric features, considering 4 yaw angles (0°, +15°, +30°, +45°) from the BU-3DFE database. The novelty of our work lies in identifying the most appropriate set of feature points and in forming the feature vector using two different approaches. A neural network is used for classification. Among the six basic emotions, four (anger, happiness, sadness and surprise) are considered. The results are encouraging. The method may be extended to combinations of pitch and yaw angles in future.
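Since the abstract does not state how the most appropriate feature points are identified, the snippet below is only an illustrative stand-in: it ranks per-point features by an ANOVA F-score and keeps the top k.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif

def select_key_points(X, y, k=20):
    """X: (n_samples, n_points) per-point geometric features; y: labels."""
    selector = SelectKBest(f_classif, k=k).fit(X, y)
    kept = np.flatnonzero(selector.get_support())  # indices of retained points
    return selector.transform(X), kept
```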
International Journal of Advanced Intelligence Paradigms | 2015
P. Suja; Shikha Tripathi
Facial expressions are non-verbal signs that play an important role in interpersonal communication. There are six basic, universally accepted emotions: happiness, surprise, anger, sadness, fear and disgust. An emotion recognition system recognizes expressions in facial images or videos and classifies them into one of these six emotions. Spatial domain methods are more popular in the literature than transform domain methods. In this paper, two feature extraction approaches, the cropped face and the whole face, are implemented separately on images taken from the Cohn-Kanade (CK) and JAFFE databases. Classification is performed using K-nearest neighbour and neural network classifiers, and the results are compared and analysed. The results suggest that transform domain techniques yield better accuracy than spatial domain techniques, and that the cropped face approach outperforms the whole face approach on both databases for a few of the feature extraction methods. Such systems find application in human-computer interaction and the entertainment industry, and could be used for clinical diagnosis.
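The KNN-versus-neural-network comparison could be mocked up as follows with scikit-learn; k, the hidden-layer size and the fold count are assumptions for illustration.

```python
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier

def knn_vs_nn(X, y):
    """Compare the paper's two classifier families on the same features."""
    models = [("KNN", KNeighborsClassifier(n_neighbors=3)),
              ("NN", MLPClassifier(hidden_layer_sizes=(64,), max_iter=2000))]
    for name, clf in models:
        print(name, cross_val_score(clf, X, y, cv=5).mean())
```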
Soft Computing | 2016
P. Suja; Sherin Mariam Thomas; Shikha Tripathi; V. K. Madan
Facial expressions are one of the most powerful and immediate means for human beings to communicate emotion. Recognizing human emotion has a wide range of applications in humanoid robots, the animation industry, psychology, forensic analysis, medical aid, the automotive industry, etc. This work focuses on emotion recognition under various illumination conditions using images from the CMU Multi-PIE database, which provides five basic expressions (neutral, happiness, anger, disgust and surprise) with varying pose and illumination. Experiments were conducted on images with varying illumination, first without pre-processing and then with a proposed ratio-based pre-processing method, followed by feature extraction and classification. The Dual-Tree Complex Wavelet Transform (DT-CWT) was applied to form the feature vectors, with K-Nearest Neighbour (KNN) as the classifier. The results show that pre-processed images give better results than the original images. We conclude that varying illumination affects emotion recognition and that the pre-processing algorithm improves recognition accuracy. Future work may take a broader perspective, using body language and speech data for emotion recognition.
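The paper's ratio-based pre-processing is not spelled out in the abstract; a common ratio-style illumination normalization, dividing each pixel by a Gaussian-smoothed illumination estimate, is sketched below as a plausible stand-in rather than the authors' method.

```python
import cv2
import numpy as np

def ratio_illumination_normalize(gray, sigma=15.0):
    """Divide a grayscale face by a smoothed estimate of its illumination."""
    img = gray.astype(np.float32) + 1.0           # avoid division by zero
    illum = cv2.GaussianBlur(img, (0, 0), sigma)  # low-frequency illumination
    ratio = img / illum                           # reflectance-like ratio image
    out = cv2.normalize(ratio, None, 0, 255, cv2.NORM_MINMAX)
    return out.astype(np.uint8)
```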
Intelligent Human Computer Interaction | 2016
S Sai Prathusha; P. Suja; Shikha Tripathi; R Louis
In this paper, we propose and compare three methods for recognizing emotions from facial expressions in 4D videos. In the first two methods, the 3D faces are re-sampled using curves to extract feature information: parallel curves in one method and radial curves in the other. Facial movement is measured through these curves using two frames, the neutral frame and the peak frame. A deformation matrix is formed by computing point-to-point distances between corresponding curves in the neutral and peak frames; this matrix is used to create the feature vector for classification with a Support Vector Machine (SVM). The third method extracts feature information using surface normals. Surface normals are computed at every point on the frame, and the deformation matrix is formed from the Euclidean distances between corresponding normals in the neutral and peak frames; this matrix likewise yields the feature vector for SVM classification. The proposed methods are analysed and show improvement over the existing literature.
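A minimal sketch of the surface-normal variant, assuming registered meshes that share vertex ordering and triangulation (as in fitted BU-4DFE-style data); the area-weighted normal estimation here is a standard scheme, not necessarily the paper's.

```python
import numpy as np

def vertex_normals(verts, faces):
    """verts: (n, 3) vertex positions; faces: (m, 3) vertex indices."""
    normals = np.zeros_like(verts)
    tri = verts[faces]                                          # (m, 3, 3)
    fn = np.cross(tri[:, 1] - tri[:, 0], tri[:, 2] - tri[:, 0])
    for k in range(3):                    # accumulate face normals on vertices
        np.add.at(normals, faces[:, k], fn)
    return normals / (np.linalg.norm(normals, axis=1, keepdims=True) + 1e-12)

def normal_deformation(neutral_verts, peak_verts, faces):
    """Per-vertex Euclidean distance between corresponding unit normals."""
    n0 = vertex_normals(neutral_verts, faces)
    n1 = vertex_normals(peak_verts, faces)
    return np.linalg.norm(n1 - n0, axis=1)   # feature vector for the SVM
```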
2017 International Conference On Smart Technologies For Smart Nation (SmartTechCon) | 2017
Gowri Patil; P. Suja