Aruna Chakraborty
St. Thomas' College of Engineering and Technology
Publications
Featured research published by Aruna Chakraborty.
systems, man and cybernetics | 2013
Anisha Halder; Amit Konar; Rajshree Mandal; Aruna Chakraborty; Pavel Bhowmik; Nikhil R. Pal; Atulya K. Nagar
Facial expressions of a person representing similar emotion are not always unique. Naturally, the facial features of a subject taken from different instances of the same emotion show wide variations. In the presence of two or more facial features, the variation of the attributes together makes the emotion recognition problem more complicated. This variation is the main source of uncertainty in the emotion recognition problem, which has been addressed here in two steps using type-2 fuzzy sets. First, a type-2 fuzzy face space is constructed with the background knowledge of facial features of different subjects for different emotions. Second, the emotion of an unknown facial expression is determined based on the consensus of the measured facial features with the fuzzy face space. Both interval and general type-2 fuzzy sets (GT2FS) have been used separately to model the fuzzy face space. The interval type-2 fuzzy set (IT2FS) involves primary membership functions for m facial features obtained from n subjects, each having l instances of facial expressions for a given emotion. The GT2FS, in addition to the primary membership functions above, also involves secondary memberships for each primary membership curve, obtained here by formulating and solving an optimization problem. The optimization problem attempts to minimize the difference between two decoded signals: the first is the type-1 defuzzification of the average primary membership functions obtained from the n subjects, while the second is the type-2 defuzzified signal for a given primary membership function with the secondary memberships as unknowns. The uncertainty management policy adopted using GT2FS has resulted in a classification accuracy of 98.333%, in comparison to 91.667% obtained by its interval type-2 counterpart. A small improvement (approximately 2.5%) in classification accuracy by IT2FS has been attained by pre-processing measurements using the well-known interval approach.
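The interval type-2 face space described above can be sketched minimally: fit one type-1 Gaussian membership function per subject for each feature, take the min/max over subjects as the interval (primary) membership, and score an unknown expression by the consensus of the interval midpoints across features. All numbers, feature names, and the Gaussian shape below are illustrative assumptions, not the paper's learned model.

```python
# Minimal sketch of an interval type-2 fuzzy face space (illustrative only).
import numpy as np

def gaussian_mf(x, mean, sigma):
    """Type-1 Gaussian membership value of measurement x."""
    return np.exp(-0.5 * ((x - mean) / sigma) ** 2)

def it2_membership(x, means, sigmas):
    """Interval [lower, upper] membership of x over n subjects' MFs."""
    mu = np.array([gaussian_mf(x, m, s) for m, s in zip(means, sigmas)])
    return mu.min(), mu.max()

def consensus(features, emotion_model):
    """Average interval midpoint across the m features of one emotion."""
    mids = []
    for x, (means, sigmas) in zip(features, emotion_model):
        lo, hi = it2_membership(x, means, sigmas)
        mids.append(0.5 * (lo + hi))
    return float(np.mean(mids))

# Toy model for one emotion: two features, three subjects each
# (hypothetical values standing in for, e.g., mouth- and eye-opening).
model = [([0.40, 0.50, 0.45], [0.10, 0.10, 0.12]),
         ([0.70, 0.65, 0.72], [0.08, 0.10, 0.09])]
score = consensus([0.46, 0.69], model)
```

In a full classifier one such score would be computed per emotion class and the unknown expression assigned to the class with the highest consensus.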
PerMIn'12 Proceedings of the First Indo-Japan conference on Perception and Machine Intelligence | 2012
Amit Konar; Aruna Chakraborty; Anisha Halder; Rajshree Mandal; Ramadoss Janarthanan
The paper proposes a new approach to emotion recognition from the facial expression of a subject by constructing an interval type-2 fuzzy model. An interval type-2 fuzzy face-space is first constructed with the background knowledge of facial features of different subjects for different emotions. The fuzzy face-space thus created comprises primary membership distributions for m facial features, obtained from n subjects, each having l instances of facial expression for a given emotion. Second, the emotion of an unknown facial expression is determined based on the consensus of the measured facial features with the fuzzy face-space. The classification accuracy of the proposed method is as high as 88.66%.
nature and biologically inspired computing | 2009
Sauvik Das; Anisha Halder; Pavel Bhowmik; Aruna Chakraborty; Amit Konar; Ramadoss Janarthanan
international symposium on neural networks | 2014
Reshma Kar; Amit Konar; Aruna Chakraborty; Atulya K. Nagar
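The primary membership distributions of the interval type-2 face-space from the PerMIn'12 abstract can be sketched as follows: each of the n subjects contributes l instances of a feature, one type-1 Gaussian membership function is fitted per subject, and the footprint of uncertainty is the min/max envelope over subjects. The subject counts, instance values, and Gaussian fit are all illustrative assumptions.

```python
# Building one feature's footprint of uncertainty (FOU) from n subjects,
# each with l instances of the feature (synthetic, illustrative data).
import numpy as np

def subject_mf(instances, grid):
    """Type-1 Gaussian MF fitted to one subject's l instances of a feature."""
    mean, sigma = np.mean(instances), np.std(instances) + 1e-9
    return np.exp(-0.5 * ((grid - mean) / sigma) ** 2)

def feature_fou(per_subject_instances, grid):
    """Lower/upper membership envelope (FOU) over the n subjects' MFs."""
    mfs = np.array([subject_mf(inst, grid) for inst in per_subject_instances])
    return mfs.min(axis=0), mfs.max(axis=0)

grid = np.linspace(0.0, 1.0, 101)
# n = 3 subjects, l = 4 instances each of one hypothetical facial feature.
subjects = [[0.30, 0.35, 0.32, 0.28],
            [0.40, 0.38, 0.42, 0.41],
            [0.33, 0.36, 0.31, 0.34]]
lower, upper = feature_fou(subjects, grid)
```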
ieee international conference on fuzzy systems | 2012
Anisha Halder; Pratyusha Rakshit; Sumantra Chakraborty; Amit Konar; Aruna Chakraborty; Eunjin Kim; Atulya K. Nagar
The paper provides a novel approach to emotion recognition from the facial expression and voice of subjects. The subjects are asked to manifest their emotional exposure in both facial expression and voice while uttering a given sentence. Facial features, including mouth-opening, eye-opening, and eyebrow-constriction, and voice features, including the first three formants F1, F2, and F3, the respective powers at those formants, and pitch, are extracted for 7 different emotional expressions of each subject. A linear Support Vector Machine classifier is used to classify the extracted feature vectors into different emotion classes. Sensitivity of the classifier to Gaussian noise is studied, and experimental results confirm that a recognition accuracy of up to 95% is maintained even when the mean and standard deviation of the noise are as high as 5% and 20%, respectively, over the individual features. A further analysis to identify the importance of individual features reveals that mouth-opening and eye-opening are primary features, in the absence of which classification accuracy falls off by a large margin of more than 22%.
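The noise-sensitivity study above can be illustrated with a minimal linear SVM trained by sub-gradient descent on the hinge loss: train on clean feature vectors, then measure accuracy after corrupting the test features with Gaussian noise. The two synthetic 2-D classes, the learning-rate/regularisation settings, and the noise level are all illustrative assumptions, not the paper's data or classifier configuration.

```python
# Linear SVM (hinge loss, sub-gradient descent) + Gaussian-noise robustness.
import numpy as np

rng = np.random.default_rng(0)

def train_linear_svm(X, y, lam=0.01, epochs=200, lr=0.01):
    """Sub-gradient descent on the regularised hinge loss; y in {-1, +1}."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            if yi * (xi @ w + b) < 1.0:      # margin violated: hinge gradient
                w += lr * (yi * xi - lam * w)
                b += lr * yi
            else:                            # only the regulariser pulls on w
                w -= lr * lam * w
    return w, b

# Two synthetic, well-separated "emotion" classes in a 2-D feature space
# standing in for the paper's facial and voice features.
X = np.vstack([rng.normal(0.0, 0.3, (50, 2)), rng.normal(2.0, 0.3, (50, 2))])
y = np.array([-1] * 50 + [1] * 50)
w, b = train_linear_svm(X, y)

def accuracy(X, y, w, b, noise_sd=0.0):
    """Accuracy after adding Gaussian measurement noise to the features."""
    Xn = X + rng.normal(0.0, noise_sd, X.shape)
    return float(np.mean(np.sign(Xn @ w + b) == y))

clean = accuracy(X, y, w, b)
noisy = accuracy(X, y, w, b, noise_sd=0.2)
```

Comparing `clean` and `noisy` for increasing `noise_sd` reproduces the shape of the sensitivity analysis, though not the paper's actual numbers.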
ieee international conference on fuzzy systems | 2011
Rajshree Mandal; Anisha Halder; Pavel Bhowmik; Amit Konar; Aruna Chakraborty; Atulya K. Nagar
Neuroscientists usually determine similarity between EEG electrode signals by a measure of pairwise linear dependence among them. However, recent research indicates the drawbacks of analyzing the pairwise dependence of signals instead of their simultaneous joint interdependence. To overcome this problem we propose a novel similarity measure known as probabilistic relative correlation. Our approach is unique because our similarity measure allows the electrodes to have probabilistic similarity values and recognizes emotion-dependent structures even from mismatched sequences of correlation. We further validate the proposed similarity measure by testing it on the well-known emotion recognition problem. Our experiments have noteworthy implications for realizing the neural signatures of discrete emotions and allow a better understanding of the neurological pathways associated with different emotional states. To identify the most active neurological pathways in the brain during an emotion, we adapt the minimal spanning tree algorithm.
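The pipeline's final step can be sketched as follows. The paper's probabilistic relative correlation is not reproduced here; a plain Pearson correlation stands in for the similarity measure, the distance between electrodes is taken as one minus the absolute correlation (an illustrative choice), and a minimal Prim's algorithm extracts the spanning tree used to trace the most strongly coupled pathways.

```python
# Pairwise correlation between electrodes + minimal spanning tree (sketch).
import numpy as np

rng = np.random.default_rng(1)
eeg = rng.normal(size=(5, 500))          # 5 electrodes x 500 samples (synthetic)
eeg[1] += 0.8 * eeg[0]                   # make electrodes 0 and 1 co-vary

corr = np.corrcoef(eeg)                  # pairwise similarity (stand-in measure)
dist = 1.0 - np.abs(corr)                # similar electrodes = short edges

def prim_mst(d):
    """Edges of a minimum spanning tree of the complete graph with weights d."""
    n = d.shape[0]
    in_tree, edges = {0}, []
    while len(in_tree) < n:
        best = min(((d[i, j], i, j) for i in in_tree for j in range(n)
                    if j not in in_tree), key=lambda t: t[0])
        edges.append((best[1], best[2]))
        in_tree.add(best[2])
    return edges

tree = prim_mst(dist)                    # strongly coupled electrodes join first
```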
Computers in healthcare | 2010
Pavel Bhowmik; Sauvik Das; Amit Konar; D. Nandi; Aruna Chakraborty
The essence of the paper is to reduce uncertainty in interval type-2 fuzzy sets and to demonstrate the merit of uncertainty reduction in a pattern classification problem. The area under the footprint of uncertainty has been used as the measure of uncertainty, and a mathematical approach to reducing this area has been proposed. Experiments have been designed to compare the relative performance of classical interval type-2 fuzzy sets with their revised counterpart in emotion recognition from facial expression. Statistical tests favor the proposed results of uncertainty reduction. When applied to the emotion recognition problem, the proposed uncertainty reduction scheme yields a gain of approximately 6% in classification accuracy over one published work.
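The uncertainty measure named above is straightforward to compute numerically: the area under the footprint of uncertainty is the integral of the gap between the upper and lower membership functions. The Gaussian membership shapes and widths below are illustrative stand-ins for the paper's learned curves, and the "reduced" upper bound merely mimics the effect of the proposed reduction.

```python
# Area under the footprint of uncertainty (FOU), before and after reduction.
import numpy as np

grid = np.linspace(0.0, 1.0, 1001)

def gaussian(x, mean, sigma):
    return np.exp(-0.5 * ((x - mean) / sigma) ** 2)

def fou_area(upper, lower, grid):
    """Trapezoidal integral of the FOU width (upper MF minus lower MF)."""
    width = upper - lower
    return float(np.sum((width[:-1] + width[1:]) * np.diff(grid)) / 2.0)

upper = gaussian(grid, 0.5, 0.20)          # wide outer membership bound
lower = gaussian(grid, 0.5, 0.10)          # narrow inner membership bound
reduced_upper = gaussian(grid, 0.5, 0.15)  # hypothetical post-reduction bound

a_before = fou_area(upper, lower, grid)
a_after = fou_area(reduced_upper, lower, grid)
```

A smaller `a_after` than `a_before` is exactly the sense in which the revised set is "less uncertain."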
swarm evolutionary and memetic computing | 2013
Reshma Kar; Aruna Chakraborty; Amit Konar; Ramadoss Janarthanan
Manifestation of a given emotion in facial expression is not always unique, as the facial attributes in different instances of similar emotional experiences may vary widely. When a number of facial attributes are used to recognize the emotion of a subject, the variation of the individual attributes together makes the problem more complicated. This variation is the main source of uncertainty in the emotion recognition problem, which has been addressed here in two steps using type-2 fuzzy sets. First, a type-2 fuzzy face-space is constructed with the background knowledge of facial features of different subjects for different emotions. Second, the emotion of the unknown subject is determined based on the consensus of the measured facial features with the fuzzy face-space. The face-space comprises both primary and secondary membership distributions. The primary membership distributions here have been constructed based on the highest frequency of occurrence of the individual attributes. Naturally, the membership values of an attribute at all points except that of highest frequency of occurrence suffer from inaccuracy, which has been taken care of by the secondary memberships. An algorithm for evaluating the secondary membership distribution from its type-2 primary counterpart has been proposed. The uncertainty management policy adopted using the general type-2 fuzzy set has a classification accuracy of 96.67%, in comparison to 88.67% obtained by its interval type-2 counterpart.
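The frequency-based construction can be sketched briefly: the primary membership of an attribute value is taken as its normalised frequency of occurrence, so the modal value gets membership 1, and a secondary grade expresses how much each primary value is trusted. The exponential secondary grade below is my own illustrative choice, not the paper's evaluation algorithm.

```python
# Frequency-based primary membership + an illustrative secondary grade.
import numpy as np

rng = np.random.default_rng(2)
samples = rng.normal(0.5, 0.1, 1000)      # instances of one facial attribute

counts, edges = np.histogram(samples, bins=20, range=(0.0, 1.0))
primary = counts / counts.max()           # membership 1 at the modal bin

centers = 0.5 * (edges[:-1] + edges[1:])
mode = centers[np.argmax(counts)]
# Secondary membership: confidence in each primary value, highest at the
# mode and decaying with distance from it (hypothetical decay rate).
secondary = np.exp(-10.0 * np.abs(centers - mode))
```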
trans. computational science | 2015
Reshma Kar; Amit Konar; Aruna Chakraborty
The paper proposes an alternative approach to emotion recognition from stimulated EEG signals using a Duffing oscillator. Reported works on emotion clustering generally employ the principles of supervised learning; unfortunately, because of noisy and limited feature sets, the classification problem often suffers from high inaccuracy. This has been overcome in this paper by submitting the EEG signals directly to a Duffing oscillator, where the phase portraits constructed from its time response show structural similarity for stimuli that excite similar emotions. The accuracy of clustering was experimentally validated even with injection of Gaussian noise over the EEG signal down to a signal-to-noise ratio of 25 dB. The results of clustering at low signal-to-noise ratio confirm the robustness of the proposed scheme.
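Feeding a signal to a Duffing oscillator and collecting the phase portrait can be sketched as below. A synthetic sine wave stands in for the EEG channel, the double-well parameters (delta, alpha, beta, gamma) are illustrative textbook values rather than the paper's settings, and the drive is held constant within each Runge-Kutta step for simplicity.

```python
# Driving a Duffing oscillator with an input signal; phase portrait (x, v).
import numpy as np

def duffing_response(signal, dt=0.01, delta=0.3, alpha=-1.0, beta=1.0, gamma=0.5):
    """Integrate x'' + delta*x' + alpha*x + beta*x**3 = gamma*s(t) with a
    4th-order Runge-Kutta step; returns the phase-portrait coordinates."""
    def f(state, drive):
        x, v = state
        return np.array([v, -delta * v - alpha * x - beta * x ** 3 + gamma * drive])

    state = np.array([0.1, 0.0])
    xs, vs = [], []
    for s in signal:                      # drive held constant over each step
        k1 = f(state, s)
        k2 = f(state + 0.5 * dt * k1, s)
        k3 = f(state + 0.5 * dt * k2, s)
        k4 = f(state + dt * k3, s)
        state = state + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0
        xs.append(state[0])
        vs.append(state[1])
    return np.array(xs), np.array(vs)

t = np.arange(0.0, 50.0, 0.01)
x, v = duffing_response(np.sin(1.2 * t))  # sine stands in for an EEG channel
```

Plotting `x` against `v` gives the phase portrait whose structure the paper compares across emotion-excitatory stimuli.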
systems, man and cybernetics | 2009
Aruna Chakraborty; Pavel Bhowmik; Swagatam Das; Anisha Halder; Amit Konar; Atulya K. Nagar
Gestures have been called a "leaky" source of emotional information. They are also easy to capture from a distance with ordinary cameras, which makes them an important clue to the emotional state of a person. In this paper we recognize the emotions of a person by analyzing gestural information alone. Subjects are initially trained by a professional actor to perform emotionally expressive gestures, and the same actor trained the system to recognize the emotional context of gestures. Finally, the gestural performances of the subjects are evaluated by the system to identify the class of emotion indicated. Our system yields an accuracy of 94.4% with a training set of only one gesture per emotion, and it is also computationally efficient. Analyzing emotions from gestures alone is a significant step towards reducing the cost of emotion recognition, and the system may also be used for general gesture recognition. We have proposed new features and a new classifying approach using fuzzy sets, achieving state-of-the-art accuracy with minimal complexity: each motion trajectory along each axis generates only 4 displacement features, each axis generates a trajectory, and only 6 joint trajectories among all joint trajectories are compared. These 6 motion trajectories are selected by maximum motion, as maximally moving regions carry the most information about a gesture. The experiments have been performed on data obtained from Microsoft Kinect sensors. Training and testing were independent of subject gender.
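The feature-extraction scheme described above can be sketched as follows: per-axis joint trajectories are ranked by total motion, the six most active are kept, and each contributes 4 displacement features. The specific four features (net displacement, path length, largest step, mean step) are my illustrative choice, and the Kinect data are replaced by synthetic random walks.

```python
# Selecting the 6 maximally moving trajectories and extracting 4
# displacement features from each (synthetic stand-in for Kinect data).
import numpy as np

rng = np.random.default_rng(3)
# 20 joints x 3 axes x 60 frames of synthetic joint positions.
traj = rng.normal(0.0, 0.01, (20, 3, 60)).cumsum(axis=2)
traj = traj.reshape(-1, 60)              # one trajectory per joint-axis

motion = np.abs(np.diff(traj, axis=1)).sum(axis=1)
top6 = traj[np.argsort(motion)[-6:]]     # six maximally moving trajectories

def displacement_features(t):
    """Four hypothetical displacement features of one trajectory."""
    steps = np.diff(t)
    return np.array([t[-1] - t[0],           # net displacement
                     np.abs(steps).sum(),    # total path length
                     np.abs(steps).max(),    # largest single step
                     np.abs(steps).mean()])  # mean step size

features = np.concatenate([displacement_features(t) for t in top6])
```

The resulting 24-dimensional vector (6 trajectories x 4 features) would then feed the fuzzy classifier.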