Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Prashanth Vijayaraghavan is active.

Publication


Featured research published by Prashanth Vijayaraghavan.


North American Chapter of the Association for Computational Linguistics | 2016

DeepStance at SemEval-2016 Task 6: Detecting Stance in Tweets Using Character and Word-Level CNNs

Prashanth Vijayaraghavan; Ivan Sysoev; Soroush Vosoughi; Deb Roy

This paper describes our approach for the Detecting Stance in Tweets task (SemEval-2016 Task 6). We utilized recent advances in short text categorization using deep learning to create word-level and character-level models. The choice between word-level and character-level models in each particular case was informed by validation performance. Our final system is a combination of classifiers using word-level or character-level models. We also employed novel data augmentation techniques to expand and diversify our training dataset, thus making our system more robust. Our system achieved a macro-average precision, recall, and F1-score of 0.67, 0.61, and 0.635, respectively.
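As an illustration of the character-level input such models consume, here is a minimal encoding sketch; the alphabet and the 140-character cap below are illustrative assumptions, not the paper's exact configuration:

```python
# Sketch: one-hot character encoding of the kind a character-level CNN takes
# as input. ALPHABET and MAX_LEN are assumptions for illustration only.
ALPHABET = "abcdefghijklmnopqrstuvwxyz0123456789 #@"
CHAR_TO_IDX = {c: i for i, c in enumerate(ALPHABET)}
MAX_LEN = 140  # classic tweet length cap

def encode_tweet(text):
    """Return a MAX_LEN x len(ALPHABET) one-hot matrix (list of lists)."""
    matrix = [[0] * len(ALPHABET) for _ in range(MAX_LEN)]
    for pos, ch in enumerate(text.lower()[:MAX_LEN]):
        idx = CHAR_TO_IDX.get(ch)
        if idx is not None:          # characters outside the alphabet stay all-zero
            matrix[pos][idx] = 1
    return matrix

m = encode_tweet("Climate change is real #SemEval")
```

A convolutional stack would then slide filters over the 140-position axis of this matrix, so the model sees spelling variants and hashtags that a word-level vocabulary would miss.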


Meeting of the Association for Computational Linguistics | 2017

Twitter Demographic Classification Using Deep Multi-modal Multi-task Learning

Prashanth Vijayaraghavan; Soroush Vosoughi; Deb Roy

Twitter should be an ideal place to get a fresh read on how different issues are playing with the public, one that’s potentially more reflective of democracy in this new media age than traditional polls. Pollsters typically ask people a fixed set of questions, while in social media people use their own voices to speak about whatever is on their minds. However, the demographic distribution of users on Twitter is not representative of the general population. In this paper, we present a demographic classifier for gender, age, political orientation, and location on Twitter. We collected and curated a robust Twitter demographic dataset for this task. Our classifier uses a deep multi-modal multi-task learning architecture to achieve state-of-the-art performance, with F1-scores of 0.89, 0.82, 0.86, and 0.68 for gender, age, political orientation, and location, respectively.
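The multi-task idea behind such an architecture can be sketched as hard parameter sharing: one shared encoder feeding a separate output head per demographic attribute. The dimensions, the linear encoder, and the random weights below are assumptions for illustration, not the paper's actual model:

```python
import numpy as np

# Minimal hard-parameter-sharing sketch: a shared encoder with one softmax
# head per task. All sizes and weights here are illustrative assumptions.
rng = np.random.default_rng(0)
D_IN, D_SHARED = 16, 8
TASKS = {"gender": 2, "age": 4, "politics": 2, "location": 5}

W_shared = rng.normal(size=(D_IN, D_SHARED))
heads = {t: rng.normal(size=(D_SHARED, k)) for t, k in TASKS.items()}

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def predict(x):
    h = np.tanh(x @ W_shared)                 # representation shared by all tasks
    return {t: softmax(h @ W) for t, W in heads.items()}

out = predict(rng.normal(size=D_IN))          # one probability vector per task
```

Because every head backpropagates through the same encoder during training, signal from one attribute (e.g. political orientation) can regularize the representation used for the others.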


Virtual Reality Software and Technology | 2017

Auris: creating affective virtual spaces from music

Misha Sra; Pattie Maes; Prashanth Vijayaraghavan; Deb Roy

Affective virtual spaces are of interest in many virtual reality applications such as education, wellbeing, rehabilitation, and entertainment. In this paper we present Auris, a system that attempts to generate affective virtual environments from music. We use music as input because it inherently encodes emotions that listeners readily recognize and respond to. Creating virtual environments is a time-consuming and labor-intensive task involving various skills like design, 3D modeling, texturing, animation, and coding. Auris makes this easier by automating the virtual world generation task using the mood and content extracted from a song's audio and lyrics, respectively. Our user study results indicate that virtual spaces created by Auris successfully convey the mood of the songs used to create them and achieve high presence scores, with the potential to provide novel experiences of listening to music.


Intelligent User Interfaces | 2017

TweetVista: An AI-Powered Interactive Tool for Exploring Conversations on Twitter

Prashanth Vijayaraghavan; Soroush Vosoughi; Ann Yuan; Deb Roy

We present TweetVista, an intelligent, interactive web-based tool for mapping and exploring the conversation landscapes on Twitter. Given a dataset of tweets, the tool uses deep neural networks and a scalable clustering algorithm to map out coherent conversation clusters, and an interactive visualization engine then lets users explore those clusters. We ran three case studies using datasets about the 2016 US presidential election and the summer 2016 Orlando shooting. Despite the enormous size of these datasets, TweetVista enabled users to quickly and clearly make sense of the various conversation topics within them.
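The clustering step can be sketched in miniature: embed each tweet as a vector, then group the vectors. The 2-D random "embeddings" and plain k-means below are illustrative assumptions; the actual system uses deep-network embeddings and a scalable clustering algorithm:

```python
import numpy as np

# Toy sketch of clustering tweet embeddings with k-means. The synthetic 2-D
# points stand in for real tweet embeddings; everything here is illustrative.
rng = np.random.default_rng(1)
points = np.vstack([rng.normal(0, 0.3, (20, 2)),      # one conversation cluster
                    rng.normal(3, 0.3, (20, 2))])     # a second, well-separated one

def kmeans(X, k, iters=10):
    centers = X[:: len(X) // k][:k].copy()            # simple deterministic init
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
        centers = np.array([X[labels == j].mean(axis=0) for j in range(k)])
    return labels, centers

labels, centers = kmeans(points, 2)
```

Each resulting cluster would then be surfaced in the visualization engine as one coherent conversation topic.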


Computer Vision and Pattern Recognition | 2017

DeepSpace: Mood-Based Image Texture Generation for Virtual Reality from Music

Misha Sra; Prashanth Vijayaraghavan; Ognjen Rudovic; Pattie Maes; Deb Roy

Affective virtual spaces are of interest for many VR applications in areas of wellbeing, art, education, and entertainment. Creating content for virtual environments is a laborious task involving multiple skills like 3D modeling, texturing, animation, lighting, and programming. One way to facilitate content creation is to automate sub-processes like assignment of textures and materials within virtual environments. To this end, we introduce the DeepSpace approach that automatically creates and applies image textures to objects in procedurally created 3D scenes. The main novelty of our DeepSpace approach is that it uses music to automatically create kaleidoscopic textures for virtual environments designed to elicit emotional responses in users. Specifically, DeepSpace exploits the modeling power of deep neural networks, which have shown great performance in image generation tasks, to achieve mood-based image generation. Our study results indicate the virtual environments created by DeepSpace elicit positive emotions and achieve high presence scores.
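As a flavor of mood-conditioned generation, a tiny sketch that maps a (valence, arousal) mood estimate to a color palette of the kind a texture generator could consume; the mapping below is an assumption for illustration, not DeepSpace's actual method:

```python
import colorsys

# Illustrative only: turn a mood estimate into a small RGB palette.
# The valence->hue and arousal->saturation mapping is an assumption.
def mood_palette(valence, arousal, n=4):
    """valence and arousal in [0, 1]; returns n RGB tuples with components in [0, 1]."""
    base_hue = 0.6 - 0.6 * valence        # low valence -> blue-ish, high -> warm
    saturation = 0.4 + 0.6 * arousal      # calm -> muted, intense -> vivid
    return [colorsys.hsv_to_rgb((base_hue + i / (3 * n)) % 1.0, saturation, 0.9)
            for i in range(n)]

palette = mood_palette(valence=0.9, arousal=0.8)   # an upbeat, energetic song
```

In a full pipeline, a learned image-generation model conditioned on such mood signals would produce the kaleidoscopic textures applied to the procedurally created 3D scenes.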


International Conference on Weblogs and Social Media | 2016

Automatic Detection and Categorization of Election-Related Tweets

Prashanth Vijayaraghavan; Soroush Vosoughi; Deb Roy


Empirical Methods in Natural Language Processing | 2018

Learning Personas from Dialogue with Attentive Memory Networks

Eric Chu; Prashanth Vijayaraghavan; Deb Roy


International Conference on Weblogs and Social Media | 2017

Mapping Twitter Conversation Landscapes

Soroush Vosoughi; Prashanth Vijayaraghavan; Ann Yuan; Deb Roy


International ACM SIGIR Conference on Research and Development in Information Retrieval | 2016

Tweet2Vec: Learning Tweet Embeddings Using Character-level CNN-LSTM Encoder-Decoder

Soroush Vosoughi; Prashanth Vijayaraghavan; Deb Roy

Collaboration


Dive into Prashanth Vijayaraghavan's collaborations.

Top Co-Authors

Deb Roy
Massachusetts Institute of Technology

Soroush Vosoughi
Massachusetts Institute of Technology

Ann Yuan
Massachusetts Institute of Technology

Misha Sra
Massachusetts Institute of Technology

Pattie Maes
Massachusetts Institute of Technology

Ivan Sysoev
Massachusetts Institute of Technology

Ognjen Rudovic
Massachusetts Institute of Technology

Eric Chu
University of Amsterdam