Ruchir Srivastava
National University of Singapore
Publication
Featured research published by Ruchir Srivastava.
IEEE Region 10 Conference | 2009
Ruchir Srivastava; Sujoy Roy
In this paper, we propose an approach for Facial Expression Recognition (FER) on 3D facial models using the spatial displacement (or residue) of facial points. It is shown that the residues of facial points carry more information about facial expressions than the static expressive face itself. This approach overcomes some of the inherent defects of taking the expressive face alone as the basic model to work on. With the proposed approach, we obtain an average recognition rate of 91.7% over six expressions, and the expression of surprise is recognized at a rate of 100%.
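The residue feature lends itself to a compact illustration. The sketch below is not the paper's code; it simply computes the per-point 3D displacement from the neutral to the expressive face, with the 83-landmark count an assumption borrowed from common 3D face datasets.

```python
import numpy as np

def residue_features(neutral_pts: np.ndarray, expressive_pts: np.ndarray) -> np.ndarray:
    """Residue = per-point 3D displacement from the neutral face to the expressive face.

    Both inputs are (n_points, 3) arrays of 3D facial landmark coordinates;
    the output is the flattened displacement vector used as the feature."""
    assert neutral_pts.shape == expressive_pts.shape
    return (expressive_pts - neutral_pts).ravel()

# Toy usage with 83 hypothetical landmarks (the landmark count is an assumption).
neutral = np.random.rand(83, 3)
expressive = neutral + 0.01 * np.random.randn(83, 3)   # small expressive displacement
features = residue_features(neutral, expressive)       # shape: (249,)
```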
Face and Gesture 2011 | 2011
Ruchir Srivastava; Sujoy Roy; Shuicheng Yan; Terence Sim
This paper details the method and experiments behind our submission to the FERA 2011 facial expression recognition benchmarking evaluation. The benchmarking task involves recognizing 5 emotion classes in videos. Our method is a fusion of the decisions of two FER approaches based on two different feature representations, namely motion information from facial regions and displacement information from facial feature points. The main observation motivating this approach is that different feature representations are discriminative for different facial expressions, so fusing them allows the two to complement each other and improve recognition performance. Experiments were conducted on the GEMEP-FERA data set provided by the organizers.
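As a rough illustration of decision-level fusion, the sketch below combines the per-class scores of two hypothetical classifiers with a weighted sum; the weighting scheme is illustrative and not taken from the paper, and the class names are the five GEMEP-FERA emotion labels.

```python
import numpy as np

# The five emotion classes of the GEMEP-FERA emotion challenge.
EMOTIONS = ["anger", "fear", "joy", "relief", "sadness"]

def fuse_decisions(scores_motion, scores_points, w=0.5):
    """Weighted late fusion of two classifiers' per-class scores.

    scores_motion, scores_points: length-5 arrays of class scores;
    w: weight given to the motion-based classifier (illustrative value)."""
    fused = w * np.asarray(scores_motion) + (1.0 - w) * np.asarray(scores_points)
    return EMOTIONS[int(np.argmax(fused))]

# Example: the motion-based classifier favors 'joy', the point-based one 'relief'.
print(fuse_decisions([0.1, 0.1, 0.6, 0.1, 0.1], [0.1, 0.1, 0.3, 0.4, 0.1]))  # -> joy
```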
Conference on Multimedia Modeling | 2011
Ruchir Srivastava; Sujoy Roy; Shuicheng Yan; Terence Sim
Approaches for emotion recognition in movie scenes using high-level features consider the emotion of only a single actor. The contribution of this paper is to analyze the use of emotional information from multiple actors present in the scene instead of just one. A bimodal approach is proposed for fusing emotional cues from different actors using two different fusion methods. Emotional cues are obtained from facial expressions and dialogs. Experimental observations show that the emotions of other actors do not necessarily provide helpful information about the emotion of the scene, and recognition accuracy is better when only the speaker's emotions are considered.
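A hedged sketch of the kind of bimodal, multi-actor pooling described above; the equal-weight averaging is an assumption for illustration, not the paper's exact fusion rule.

```python
import numpy as np

def scene_emotion(face_scores, dialog_scores, speaker_idx, speaker_only=True):
    """Combine per-actor emotion scores from two modalities and pool over actors.

    face_scores, dialog_scores: (n_actors, n_emotions) arrays of per-actor scores;
    speaker_only=True keeps only the speaking actor (the setting the paper found
    to work better); False averages over every actor in the scene."""
    bimodal = 0.5 * np.asarray(face_scores) + 0.5 * np.asarray(dialog_scores)
    pooled = bimodal[speaker_idx] if speaker_only else bimodal.mean(axis=0)
    return int(np.argmax(pooled))   # index of the predicted scene emotion
```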
Multimedia Tools and Applications | 2014
Ruchir Srivastava; Sujoy Roy
This paper presents an approach to recognizing facial expressions of different intensities using the 3D flow of facial points. 3D flow is the geometrical displacement (in 3D) of a facial point from its position in a neutral face to its position in the expressive face. Experiments are performed on 3D face models from the BU-3DFE database. Four different intensities of expression are used to analyze the relevance of expression intensity to the task of Facial Expression Recognition (FER). It was observed that high-intensity expressions are easier to recognize and that there is a need to develop algorithms for recognizing low-intensity facial expressions. The proposed features outperform differences of facial distances and 2D optical flow. The performances of two classifiers, SVM and LDA, are compared, with SVM performing better. Feature selection did not prove useful.
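For readers who want to reproduce the classifier comparison on their own features, a minimal scikit-learn sketch is given below. The feature matrix is a random placeholder standing in for pre-extracted 3D flow vectors, and the kernel choice and sample counts are assumptions, not the paper's settings.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# Random placeholders standing in for pre-extracted 3D flow vectors and the
# six expression labels; in practice these would come from the BU-3DFE models.
X = np.random.rand(600, 249)          # e.g. 83 facial points x 3D displacement
y = np.random.randint(0, 6, 600)      # six expression classes

for name, clf in [("SVM", SVC(kernel="rbf")),          # kernel choice is an assumption
                  ("LDA", LinearDiscriminantAnalysis())]:
    acc = cross_val_score(clf, X, y, cv=5).mean()
    print(f"{name}: mean 5-fold accuracy = {acc:.3f}")
```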
Advances in Multimedia | 2012
Ruchir Srivastava; Sujoy Roy; Tan Dat Nguyen; Shuicheng Yan
Recommendation systems require effort from the user to elicit their preferences for the items to be recommended. The contribution of this paper is in eliminating such effort by automatically assessing users' personality and using the personality scores to recommend music tracks to them. Automatic personality assessment is performed by automatically answering a personality questionnaire based on the users' audiovisual recordings. Traditionally, the answers to the questionnaire are combined into personality scores using a set of rules specific to that questionnaire. As a second contribution, an approach is proposed to automatically predict personality scores from the answers to a questionnaire when the rules for combining the answers may not be known. Promising results on a dataset of 50 movie characters support the proposed approaches.
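The second contribution, learning to map questionnaire answers to personality scores when the scoring rules are unknown, can be read as a multi-output regression problem. A minimal sketch under that assumption follows; the ridge regressor, the 44-item questionnaire, and the five-trait output are all illustrative choices, not the paper's.

```python
import numpy as np
from sklearn.linear_model import Ridge

# Toy stand-ins: 50 subjects (the abstract mentions 50 movie characters),
# a hypothetical 44-item questionnaire answered on a 1-5 scale, and five
# personality trait scores per subject (the trait count is an assumption).
answers = np.random.randint(1, 6, size=(50, 44)).astype(float)
scores = np.random.rand(50, 5)

# Learn the answer -> score mapping directly, without knowing the scoring rules.
model = Ridge(alpha=1.0).fit(answers, scores)
predicted = model.predict(answers[:1])   # predicted trait scores for one subject
```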
International Conference on Multimedia and Expo | 2010
Ruchir Srivastava; Sujoy Roy; Terence Sim
Facial Expression Recognition (FER) has mostly been done on frontal or near-frontal faces. However, most faces in real life are non-frontal. This paper deals with in-plane rotation of faces in image sequences and considers the six universal facial expressions. The proposed approach does not need to rotate the image to the frontal position; FER by rotating images to frontal is sensitive to the determination of the rotation angle and can introduce errors in the tracking of facial points. Directions of motion of Facial Feature Points (FFPs) are used for feature extraction. In training for the six expressions, Gaussian Mixture Models are fit to the distributions of angles representing these directions of motion, and these models are then used to classify test sequences with an SVM. Gaussian Mixture Modeling is experimentally found to be robust to errors in the positions of FFPs. For dimensionality reduction, feature selection is performed using the Fisher ratio test.
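One possible reading of the training pipeline is sketched below with toy data: fit one Gaussian Mixture Model per expression to the angles of FFP motion directions, then use the per-model log-likelihoods as inputs to a downstream SVM. The component count and feature layout are assumptions, not the authors' implementation.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

EXPRESSIONS = range(6)   # the six universal expressions

def motion_angles(start_pts, end_pts):
    """Angles (radians) of the displacement vectors of facial feature points."""
    d = np.asarray(end_pts) - np.asarray(start_pts)      # (n_points, 2)
    return np.arctan2(d[:, 1], d[:, 0])

# Training: fit one GMM per expression to pooled angle samples (toy data here;
# the number of mixture components is an assumption).
gmms = {
    e: GaussianMixture(n_components=3, random_state=0).fit(
        np.random.uniform(-np.pi, np.pi, size=(200, 1)))
    for e in EXPRESSIONS
}

def likelihood_features(angles):
    """Per-expression average log-likelihood of the observed motion angles,
    usable as a 6-D feature vector for an SVM classifier."""
    a = np.asarray(angles).reshape(-1, 1)
    return np.array([gmms[e].score(a) for e in EXPRESSIONS])
```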
Archive | 2011
P. Krishnan; S. Dam Roy; Ruchir Srivastava; A. Anand; S. Murugesan; M. Kaliyamoorthy; N. Vikas; R. Soundararajan
ACM Multimedia | 2012
Ruchir Srivastava; Jiashi Feng; Sujoy Roy; Shuicheng Yan; Terence Sim
Archive | 2011
Ruchir Srivastava; Shuicheng Yan; Terence Sim; Surendra Ranganath
Proceedings of SPIE, the International Society for Optical Engineering | 2010
Ruchir Srivastava; Terence Sim; Shuicheng Yan; Surendra Ranganath