Meghna Pandharipande
Harvard University
Publications
Featured research published by Meghna Pandharipande.
International Journal of Digital Multimedia Broadcasting | 2010
Hiranmay Ghosh; Sunil Kumar Kopparapu; Tanushyam Chattopadhyay; Ashish Khare; Sujal Subhash Wattamwar; Amarendra Gorai; Meghna Pandharipande
The problems associated with the automatic analysis of news telecasts are more severe in a country like India, where there are many national and regional language channels besides the English ones. In this paper, we present a framework for multimodal analysis of multilingual news telecasts, which can be augmented with tools and techniques for specific news analytics tasks. Further, we focus on a set of techniques for automatically indexing news stories based on keywords of contemporary and domain interest spotted in the speech as well as in the visuals. English keywords are derived from RSS feeds and converted to their Indian language equivalents for detection in speech and on ticker texts. Restricting the keyword list to a manageable number results in a drastic improvement in indexing performance. We present illustrative examples and detailed experimental results to substantiate our claim.
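The paper's central claim, that a short, curated keyword list drives indexing accuracy, can be pictured with a minimal sketch. The story segments, spotted terms, and keyword list below are hypothetical placeholders; in the actual system the keywords come from RSS feeds, are translated into Indian language equivalents, and are spotted in the audio and ticker text rather than looked up in ready-made transcripts.

```python
# Hypothetical sketch: index news stories against a small RSS-derived
# keyword list. Each story carries the set of terms that keyword spotting
# (speech and ticker-text OCR) reported for it.
from collections import defaultdict

def build_index(stories, keywords):
    """Map each keyword to the list of stories in which it was spotted."""
    keyset = {k.lower() for k in keywords}
    index = defaultdict(list)
    for story_id, spotted_terms in stories:
        for term in spotted_terms:
            if term.lower() in keyset:
                index[term.lower()].append(story_id)
    return dict(index)

# Toy usage with made-up stories and spotted terms.
stories = [
    ("story-01", {"election", "budget"}),
    ("story-02", {"cricket", "election"}),
]
print(build_index(stories, ["election", "budget"]))
# e.g. {'election': ['story-01', 'story-02'], 'budget': ['story-01']}
```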
Procedia Computer Science | 2016
Rupayan Chakraborty; Meghna Pandharipande; Sunil Kumar Kopparapu
Automatic speech emotion recognition plays an important role in intelligent human-computer interaction. Identifying emotion in natural, day-to-day, spontaneous conversational speech is difficult because, most often, the emotions expressed by the speaker are not as prominent as in acted speech. In this paper, we propose a novel spontaneous speech emotion recognition framework that makes use of the available knowledge. The framework is motivated by the observation that there is significant disagreement amongst human annotators when they annotate spontaneous speech; the disagreement is largely reduced when they are provided with additional knowledge related to the conversation. The proposed framework uses the context (derived from the linguistic content) and knowledge of the time lapse of the spoken utterances within an audio call to reliably recognize the current emotion of the speaker in spontaneous audio conversations. Our experimental results demonstrate a significant improvement in the performance of spontaneous speech emotion recognition using the proposed framework.
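One way to read the proposed framework is as a re-weighting of the acoustic classifier's emotion scores by a prior derived from the conversational context and the time lapse of the utterance within the call. The sketch below illustrates that reading only; the emotion classes, scores, and priors are invented, and the paper's actual formulation may differ.

```python
# Hypothetical sketch: combine per-class acoustic emotion scores with a
# knowledge-derived prior (context + time lapse within the call), then
# renormalize so the fused scores form a distribution.
EMOTIONS = ["neutral", "happy", "angry", "sad"]  # assumed class set

def fuse(acoustic_posterior, context_prior):
    scores = {e: acoustic_posterior[e] * context_prior[e] for e in EMOTIONS}
    total = sum(scores.values())
    return {e: s / total for e, s in scores.items()}

# Toy example: late in a complaint call the knowledge favours 'angry',
# which resolves an acoustically ambiguous utterance.
acoustic = {"neutral": 0.35, "happy": 0.15, "angry": 0.30, "sad": 0.20}
prior    = {"neutral": 0.20, "happy": 0.10, "angry": 0.50, "sad": 0.20}
fused = fuse(acoustic, prior)
print(max(fused, key=fused.get))  # -> 'angry'
```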
IEEE Region 10 Conference | 2015
Rupayan Chakraborty; Meghna Pandharipande; Sunil Kumar Kopparapu
Estimating emotion from speech is an active and ongoing area of research; however, most of the literature addresses acted speech rather than natural, day-to-day conversational speech. Identifying emotion in the latter is difficult because the emotions expressed by non-actors are not necessarily prominent. In this paper, we validate this hypothesis, which is based on the observation that human annotators show large inter- and intra-person variation when annotating emotions expressed in realistic speech as compared to acted speech. We propose a method to recognize emotions using knowledge of the events in an interactive voice response setup. The main contribution of the paper is the use of event-based knowledge to enhance the identification of emotions in real, natural speech.
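The event-based knowledge can be pictured as a lookup from interactive voice response events to emotion priors that bias the recognizer before the acoustic evidence is applied. The events and prior values below are invented for illustration and are not taken from the paper.

```python
# Hypothetical sketch: bias emotion recognition with the most recent IVR
# event. An utterance after a frustrating event (a long hold, a repeated
# menu) starts from a prior tilted towards negative emotions.
EVENT_PRIORS = {
    "call_start":    {"neutral": 0.6, "happy": 0.2, "angry": 0.1, "sad": 0.1},
    "long_hold":     {"neutral": 0.3, "happy": 0.1, "angry": 0.4, "sad": 0.2},
    "repeated_menu": {"neutral": 0.2, "happy": 0.1, "angry": 0.5, "sad": 0.2},
}

def prior_for(event):
    """Emotion prior for the most recent IVR event (default: call start)."""
    return EVENT_PRIORS.get(event, EVENT_PRIORS["call_start"])

print(prior_for("long_hold"))  # prior shifted towards 'angry'
```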
Archive | 2017
Rupayan Chakraborty; Meghna Pandharipande; Sunil Kumar Kopparapu
Emotions, when explicitly demonstrated by an actor, are easy for a machine to recognize from the speech. However, in the case of day-to-day, naturally spoken spontaneous speech, it is not easy for machines to identify the expressed emotion even though the emotions of the speaker are embedded in their speech. One of the main reasons for this is that people, especially non-actors, do not explicitly demonstrate their emotions when they speak, making it difficult to recognize the emotion embedded in their speech. In this paper, based on some of our previously published work (e.g., Chakraborty et al., Proceedings of the 20th International Conference KES-2016, 96:587–596, 2016 [1]; Chakraborty et al., TENCON 2015 - 2015 IEEE Region 10 Conference, 1–5, 2015 [2]; Chakraborty et al., PACLIC, 2016 [3]; Pandharipande and Kopparapu, TENCON 2015 - 2015 IEEE Region 10 Conference, 1–4, 2015 [4]; Kopparapu, Non-Linguistic Analysis of Call Center Conversations, 2014 [5]; Pandharipande and Kopparapu, ECTI Trans Comput Inf Technol 7(2):146–155, 2013 [6]; Chakraborty and Kopparapu, 2016 IEEE International Conference on Multimedia and Expo Workshops, 1–6, 2016 [7]), we identify the challenges in recognizing emotions in spontaneous speech and suggest a framework that can assist in determining the emotions expressed in spontaneous speech.
International Conference on Pattern Recognition | 2016
Rupayan Chakraborty; Meghna Pandharipande; Sunil Kumar Kopparapu
Automatic and spontaneous speech emotion recognition is an important part of a human-computer interactive system. However, emotion identification in spontaneous speech is difficult because, most often, the emotions expressed by the speaker are not as prominent as in acted speech. In this paper, we propose a spontaneous speech emotion recognition framework that makes use of the associated knowledge. The framework is motivated by the observation that there is significant disagreement amongst human annotators when they annotate spontaneous speech; the disagreement is largely reduced when they are provided with additional knowledge related to the conversation. The proposed framework uses the context (derived from the linguistic content) and knowledge of the time lapse of the spoken utterances within an audio call to reliably recognize the current emotion of the speaker in spontaneous audio conversations. Our experimental results demonstrate a significant improvement in the performance of spontaneous speech emotion recognition using the proposed framework.
International Journal of Mobile Human Computer Interaction | 2013
Ahmed Imran; Meghna Pandharipande; Sunil Kumar Kopparapu
There has been an increase in spoken interaction between people from different geographies or different cultural backgrounds, most prominently in the call center scenario. Noticeably, conversations are often ineffective when two people from different cultures converse in a language common to them. One of the main drivers of this ineffectiveness is the way the conversation is spoken, rather than what is being spoken. Speaking rate is a critical factor affecting the intelligibility and comprehension of speech. In this paper, we present SpeakRite, a real-time mobile application that assists and guides a person to converse at the right speed by analyzing their speech. As its main function, SpeakRite analyzes the speaking rate during a telephone conversation and provides real-time feedback to help the speaker modify their speaking rate. Additionally, it also provides an offline analysis of the speaking-rate variations in a recorded call. The authors discuss a real-time implementation for monitoring speaking rate on a mobile phone.
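A rough illustration of the feedback loop: estimate the speaking rate over a trailing window and compare it against a comfortable band. The word timestamps, window size, and thresholds below are assumptions made for the sketch; SpeakRite itself estimates the rate from the live speech signal on the phone.

```python
# Hypothetical sketch: sliding-window speaking-rate feedback. We assume
# word-level timestamps (seconds from call start) are available, purely
# to keep the illustration short.
TARGET_WPM = (120, 160)  # assumed comfortable band, words per minute

def speaking_rate(word_times, window_s=10.0, now=None):
    """Words per minute over the trailing window ending at `now`."""
    now = max(word_times) if now is None else now
    recent = [t for t in word_times if now - window_s <= t <= now]
    return len(recent) * 60.0 / window_s

def feedback(wpm, band=TARGET_WPM):
    lo, hi = band
    return "speed up" if wpm < lo else "slow down" if wpm > hi else "ok"

# Toy usage: 30 words in the last 10 seconds -> 180 wpm -> "slow down".
times = [i / 3.0 for i in range(30)]  # one word every ~0.33 s
wpm = speaking_rate(times, window_s=10.0, now=10.0)
print(round(wpm), feedback(wpm))
```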
European Workshop on Visual Information Processing | 2010
Hiranmay Ghosh; Ashish Khare; Amarendra Gorai; Sunil Kumar Kopparapu; Meghna Pandharipande
Indexing news video streams with semantic keywords is of interest to agencies that regularly monitor many news channels. In this paper, we describe a new method for indexing news video in different languages for which adequate language tools are not available. Our approach combines multimodal inputs, namely audio and visual, and spots a handful of keywords with higher reliability, as compared to creating complete transcripts. We conduct a set of experiments to establish the performance improvement despite the lower reliability of the language tools.
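The multimodal combination can be sketched as a fusion of per-modality detection confidences; the noisy-OR rule and the scores below are illustrative assumptions, not the paper's exact formulation.

```python
# Hypothetical sketch: fuse keyword-detection confidences from the audio
# stream (spoken-keyword spotting) and the visual stream (ticker OCR)
# with a noisy-OR rule: the keyword counts as present if either modality
# detected it, treating the two detectors as independent.
def fuse_detections(p_audio, p_visual):
    return 1.0 - (1.0 - p_audio) * (1.0 - p_visual)

def spotted(p_audio, p_visual, threshold=0.6):
    return fuse_detections(p_audio, p_visual) >= threshold

# A weak audio hit backed by a moderate OCR hit crosses the threshold,
# even though neither modality is confident on its own.
print(fuse_detections(0.4, 0.5))  # 0.7
print(spotted(0.4, 0.5))          # True
```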
MediaEval | 2015
Rupayan Chakraborty; Avinash Kumar Maurya; Meghna Pandharipande; Ehtesham Hassan; Hiranmay Ghosh; Sunil Kumar Kopparapu
Pacific Asia Conference on Language, Information and Computation | 2016
Rupayan Chakraborty; Meghna Pandharipande; Sunil Kumar Kopparapu