Publications


Featured research published by Rupayan Chakraborty.


International Conference on Multimedia and Expo | 2016

Improved speech emotion recognition using error correcting codes

Rupayan Chakraborty; Sunil Kumar Kopparapu

We propose the use of error correcting codes (ECC) in a multi-class audio emotion recognition scenario to improve the emotion recognition accuracy on spoken speech. In this paper, we visualize the emotion recognition system as a noisy communication channel, thus motivating the use of ECC in the emotion recognition process. We assume the emotion recognition process consists of an audio feature extraction module followed by an artificial neural network (ANN) that classifies the emotion, represented as a binary string. The noisy communication channel, in our formulation, is the insufficiently learnt ANN classifier, which in turn produces erroneous (binary string) emotion classifications. In our system, we use ECC to encode the binary string representing the emotion class using a Block Coder (BC). We show through rigorous experimentation on the Emo-DB database that the use of ECC improves the recognition accuracy of the emotion classification system by 4.6% to 9.35% in comparison with the baseline ANN-based emotion classification system.
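The encode-and-decode idea described above corresponds closely to error-correcting output codes (ECOC). Below is a minimal sketch of that general technique using scikit-learn's OutputCodeClassifier around a small ANN on synthetic stand-in features; the paper's block coder, acoustic front end and Emo-DB data are not reproduced, so all names and numbers here are illustrative assumptions.

```python
# Minimal sketch of the ECC idea, assuming scikit-learn's error-correcting
# output codes (OutputCodeClassifier) in place of the paper's Block Coder,
# and random numbers in place of acoustic features from Emo-DB.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.multiclass import OutputCodeClassifier
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 40))      # placeholder acoustic feature vectors
y = rng.integers(0, 7, size=500)    # seven emotion classes, as in Emo-DB

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Each emotion class is mapped to a binary codeword; one binary ANN is
# trained per codeword bit, and the predicted bit string is decoded to
# the nearest valid codeword, correcting some classifier errors.
ecc_ann = OutputCodeClassifier(
    estimator=MLPClassifier(hidden_layer_sizes=(32,), max_iter=500),
    code_size=2.0,                  # longer codewords add redundancy
    random_state=0,
)
ecc_ann.fit(X_tr, y_tr)
print("ECC-ANN accuracy:", ecc_ann.score(X_te, y_te))
```

With real Emo-DB features the numbers would of course differ; the sketch only shows how codeword encoding and decoding wrap a base ANN classifier.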


Procedia Computer Science | 2016

Knowledge-based Framework for Intelligent Emotion Recognition in Spontaneous Speech

Rupayan Chakraborty; Meghna Pandharipande; Sunil Kumar Kopparapu

Automatic speech emotion recognition plays an important role in intelligent human-computer interaction. Identifying emotion in natural, day-to-day, spontaneous conversational speech is difficult because the emotions expressed by the speaker are often not as prominent as in acted speech. In this paper, we propose a novel spontaneous speech emotion recognition framework that makes use of the available knowledge. The framework is motivated by the observation that there is significant disagreement amongst human annotators when they annotate spontaneous speech; the disagreement largely reduces when they are provided with additional knowledge related to the conversation. The proposed framework uses contexts derived from the linguistic content and knowledge of the time lapse of the spoken utterances within an audio call to reliably recognize the current emotion of the speaker in spontaneous audio conversations. Our experimental results demonstrate a significant improvement in the performance of spontaneous speech emotion recognition using the proposed framework.
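The abstract does not spell out how the knowledge is combined with the acoustic evidence, so the following is only an illustrative sketch under assumed details: an acoustic emotion posterior is fused with a hypothetical prior derived from the call's linguistic context and elapsed time.

```python
# Illustrative sketch (not the authors' implementation): fuse an acoustic
# classifier's emotion posterior with a knowledge-derived prior computed,
# hypothetically, from linguistic context and time elapsed in the call.
import numpy as np

EMOTIONS = ["neutral", "happy", "angry", "sad"]

def fuse(acoustic_posterior, context_prior, alpha=0.6):
    """Log-linear fusion; alpha weighs acoustic evidence against knowledge."""
    fused = alpha * np.log(acoustic_posterior) + (1 - alpha) * np.log(context_prior)
    fused = np.exp(fused - fused.max())
    return fused / fused.sum()

# The acoustic model alone is ambiguous between "angry" and "sad" ...
acoustic = np.array([0.10, 0.05, 0.45, 0.40])
# ... but the call context (long hold time, complaint keywords) favours "angry".
context = np.array([0.15, 0.05, 0.60, 0.20])

posterior = fuse(acoustic, context)
print(EMOTIONS[int(np.argmax(posterior))], posterior.round(3))
```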


IEEE Region 10 Conference | 2015

Event based emotion recognition for realistic non-acted speech

Rupayan Chakraborty; Meghna Pandharipande; Sunil Kumar Kopparapu

Estimating emotion from speech is an active and ongoing area of research; however, most of the literature addresses acted speech rather than natural, day-to-day conversational speech. Identifying emotion from the latter is difficult because the emotion expressed by non-actors is not necessarily prominent. In this paper we validate the hypothesis, which is based on the observation that human annotators show larger inter- and intra-person variation when annotating emotions expressed in realistic speech than in acted speech. We propose a method to recognize emotions using the knowledge of events in an interactive voice response setup. The main contribution of the paper is the use of event-based knowledge to enhance the identification of emotions in real, natural speech.


Archive | 2017

Do You Mean What You Say? Recognizing Emotions in Spontaneous Speech

Rupayan Chakraborty; Meghna Pandharipande; Sunil Kumar Kopparapu

Emotions, when explicitly demonstrated by an actor, are easy for a machine to recognize by analyzing their speech. However, in the case of day-to-day, naturally spoken spontaneous speech, it is not easy for machines to identify the expressed emotion, even though the speaker's emotions are embedded in their speech. One of the main reasons for this is that people, especially non-actors, do not explicitly demonstrate their emotion when they speak, making it difficult to recognize the emotion embedded in their spoken speech. In this paper, based on some of our previously published work (for example, Chakraborty et al. in Proceedings of the 20th International Conference KES-2016 96:587–596, 2016 [1], Chakraborty et al. in TENCON 2015 - 2015 IEEE Region 10 Conference 1–5, 2015 [2], Chakraborty et al. in PACLIC, 2016 [3], Pandharipande and Kopparapu in TENCON 2015 - 2015 IEEE Region 10 Conference 1–4, 2015 [4], Kopparapu in Non-Linguistic Analysis of Call Center Conversations, 2014 [5], Pandharipande and Kopparapu in ECTI Trans Comput Inf Technol 7(2):146–155, 2013 [6], Chakraborty and Kopparapu in 2016 IEEE International Conference on Multimedia and Expo Workshops, 1–6, 2016 [7]), we identify the challenges in recognizing emotions in spontaneous speech and suggest a framework that can assist in determining the emotions expressed in spontaneous speech.


International Conference on Pattern Recognition | 2016

Spontaneous speech emotion recognition using prior knowledge

Rupayan Chakraborty; Meghna Pandharipande; Sunil Kumar Kopparapu

Automatic and spontaneous speech emotion recognition is an important part of a human-computer interactive system. However, emotion identification in spontaneous speech is difficult because the emotions expressed by the speaker are often not as prominent as in acted speech. In this paper, we propose a spontaneous speech emotion recognition framework that makes use of the associated knowledge. The framework is motivated by the observation that there is significant disagreement amongst human annotators when they annotate spontaneous speech; the disagreement largely reduces when they are provided with additional knowledge related to the conversation. The proposed framework uses contexts derived from the linguistic content and knowledge of the time lapse of the spoken utterances within an audio call to reliably recognize the current emotion of the speaker in spontaneous audio conversations. Our experimental results demonstrate a significant improvement in the performance of spontaneous speech emotion recognition using the proposed framework.


International Joint Conference on Artificial Intelligence | 2018

A Novel Data Representation for Effective Learning in Class Imbalanced Scenarios

Sri Harsha Dumpala; Rupayan Chakraborty; Sunil Kumar Kopparapu

Class imbalance refers to the scenario where certain classes are highly under-represented compared to others in terms of the availability of training data. This situation hinders the applicability of conventional machine learning algorithms to most classification problems in which class imbalance is prominent. Most existing methods addressing class imbalance rely either on sampling techniques or on cost-sensitive learning methods, thus inheriting their shortcomings. In this paper, we introduce a novel approach, different from sampling- or cost-sensitive-learning-based techniques, to address the class imbalance problem, in which two samples are simultaneously considered to train the classifier. Further, we propose a mechanism to use a single base classifier, instead of an ensemble of classifiers, to obtain the output label of the test sample using a majority voting method. Experimental results on several benchmark datasets clearly indicate the usefulness of the proposed approach over existing state-of-the-art techniques.
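A rough sketch of the two-samples-at-a-time idea follows; the pairing strategy, pair-label encoding, and base classifier are assumptions made for illustration, since the abstract does not fix them.

```python
# Hedged sketch of pairwise training and majority-vote prediction for an
# imbalanced problem. The pairing and label scheme here are illustrative
# assumptions, not the paper's exact formulation.
import numpy as np
from collections import Counter
from sklearn.linear_model import LogisticRegression

def make_pairs(X, y, rng):
    """Concatenate feature vectors of random sample pairs; the pair label
    records the ordered pair of class labels."""
    idx = rng.permutation(len(X))
    return np.hstack([X, X[idx]]), np.array([f"{a}|{b}" for a, b in zip(y, y[idx])])

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 10))
y = rng.choice([0, 1], size=300, p=[0.9, 0.1])        # class 1 is heavily under-represented

Xp, yp = make_pairs(X, y, rng)
clf = LogisticRegression(max_iter=1000).fit(Xp, yp)   # a single base classifier

def predict(x, X_ref):
    """Pair the test sample with reference samples, then majority-vote on the
    first component of the predicted pair labels."""
    pairs = np.hstack([np.tile(x, (len(X_ref), 1)), X_ref])
    votes = [label.split("|")[0] for label in clf.predict(pairs)]
    return Counter(votes).most_common(1)[0][0]

print(predict(X[0], X[:20]))
```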


Systems, Man and Cybernetics | 2016

Validating “Is ECC-ANN combination equivalent to DNN?” for speech emotion recognition

Rupayan Chakraborty; Sunil Kumar Kopparapu

We propose the use of error correcting codes (ECC) in a multiclass audio emotion recognition problem to improve the emotion recognition accuracy. We visualize the emotion recognition system as a noisy communication channel, thus motivating the use of ECC. We assume the emotion recognition process consists of an audio feature extractor followed by an artificial neural network (ANN) for emotion classification. In our formulation, the noise in the communication channel is a result of the insufficiently learnt ANN classifier, which leads to erroneous emotion classification. We first show that the ECC-ANN combination performs better than the ANN classifier alone, justifying its use. We further conjecture that the ECC in the ECC-ANN combination can be visualized as part of a Deep Neural Network (DNN) in which the intelligence is under control. We show through rigorous experimentation on the Emo-DB database that the ECC-ANN combination is equivalent to a DNN in terms of the improved recognition accuracy over an ANN. Our experimental results show that both ECC-ANN and DNN give a minimum absolute improvement of around 13.75%.
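As a companion to the ECC sketch earlier, the comparison in this abstract can be mimicked, very loosely, by pitting an ECOC-wrapped shallow ANN against a deeper network on the same stand-in features; the architectures, data, and the reported 13.75% figure are not reproduced, so this is purely an assumed-setup sketch.

```python
# Assumed-setup sketch: ECOC around a shallow ANN versus a deeper MLP on
# the same synthetic features; neither matches the paper's configuration.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.multiclass import OutputCodeClassifier
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)
X = rng.normal(size=(600, 40))
y = rng.integers(0, 7, size=600)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=1)

ecc_ann = OutputCodeClassifier(
    MLPClassifier(hidden_layer_sizes=(32,), max_iter=500),
    code_size=2.0,
    random_state=1,
).fit(X_tr, y_tr)
dnn = MLPClassifier(hidden_layer_sizes=(64, 64, 32), max_iter=500).fit(X_tr, y_tr)

print("ECC-ANN accuracy:", ecc_ann.score(X_te, y_te))
print("DNN accuracy    :", dnn.score(X_te, y_te))
```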


Pacific Asia Conference on Language, Information and Computation | 2016

Mining Call Center Conversations Exhibiting Similar Affective States

Rupayan Chakraborty; Meghna Pandharipande; Sunil Kumar Kopparapu


National Conference on Artificial Intelligence | 2018

Knowledge-Driven Feed-Forward Neural Network for Audio Affective Content Analysis

Sri Harsha Dumpala; Rupayan Chakraborty; Sunil Kumar Kopparapu


arXiv: Learning | 2017

k-FFNN: A priori knowledge infused Feed-forward Neural Networks

Sri Harsha Dumpala; Rupayan Chakraborty; Sunil Kumar Kopparapu
