Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Aasish Pappu is active.

Publication


Featured research published by Aasish Pappu.


linguistic annotation workshop | 2017

Finding Good Conversations Online: The Yahoo News Annotated Comments Corpus.

Courtney Napoles; Joel R. Tetreault; Aasish Pappu; Enrica Rosato; Brian Provenzale

This work presents a dataset and annotation scheme for the new task of identifying “good” conversations that occur online, which we call ERICs: Engaging, Respectful, and/or Informative Conversations. We develop a taxonomy to reflect features of entire threads and individual comments which we believe contribute to identifying ERICs; code a novel dataset of Yahoo News comment threads (2.4k threads and 10k comments) and 1k threads from the Internet Argument Corpus; and analyze the features characteristic of ERICs. This is one of the largest annotated corpora of online human dialogues, with the most detailed set of annotations. It will be valuable for identifying ERICs and other aspects of argumentation, dialogue, and discourse.


annual meeting of the special interest group on discourse and dialogue | 2014

Knowledge Acquisition Strategies for Goal-Oriented Dialog Systems

Aasish Pappu; Alexander I. Rudnicky

Many goal-oriented dialog agents are expected to identify slot-value pairs in a spoken query, then perform lookup in a knowledge base to complete the task. When the agent encounters unknown slot values, it may ask the user to repeat or reformulate the query. But a robust agent can proactively seek new knowledge from the user, to help reduce subsequent task failures. In this paper, we propose knowledge acquisition strategies for a dialog agent and show their effectiveness. The acquired knowledge is shown to subsequently contribute to task completion.
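A minimal Python sketch of the general pattern this abstract describes: when a query contains a slot value missing from the knowledge base, the agent asks the user about it and stores the answer for later queries. The slot names, knowledge-base layout, and ask_user helper are illustrative assumptions, not the paper's actual strategies or data.

```python
# Hedged sketch: a toy goal-oriented agent that acquires unknown slot values
# from the user instead of simply failing. Slot names, the knowledge-base
# format, and ask_user() are illustrative assumptions, not the paper's API.

knowledge_base = {
    "restaurant": {"luigi's": {"cuisine": "italian", "area": "downtown"}},
}

def ask_user(prompt: str) -> str:
    """Stand-in for the dialog system's clarification turn."""
    return input(prompt + " ")

def handle_query(slot_type: str, value: str) -> dict:
    entries = knowledge_base.setdefault(slot_type, {})
    if value in entries:
        return entries[value]                      # normal lookup succeeds
    # Unknown value: proactively acquire knowledge rather than just re-asking.
    cuisine = ask_user(f"I don't know {value!r}. What cuisine does it serve?")
    area = ask_user(f"And which area is {value!r} in?")
    entries[value] = {"cuisine": cuisine, "area": area}
    return entries[value]                          # future queries now succeed

if __name__ == "__main__":
    print(handle_query("restaurant", "luigi's"))   # found in the KB
    print(handle_query("restaurant", "new cafe"))  # triggers acquisition
```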


Archive | 2016

Investigating Critical Speech Recognition Errors in Spoken Short Messages

Aasish Pappu; Teruhisa Misu; Rakesh Gupta

Understanding dictated short messages requires the system to perform speech recognition on the user's speech, a process that is prone to errors. If the system can automatically detect the presence of an error, it can use dialog to clarify or correct its transcript. In this work, we present an analysis of the types of errors a recognition system makes and propose a method to detect the critical ones. In particular, we distinguish between simple errors and critical errors, in which the meaning of the transcript differs from what the user dictated. We show that our method outperforms standard baseline techniques by 2% absolute F-score.
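As a rough illustration of framing critical-error detection as binary classification, the hedged sketch below trains a classifier on a toy feature set (ASR confidence, hypothesis length, language-model score) and reports F-score; the features and data are invented placeholders, not the paper's.

```python
# Hedged sketch of critical-error detection as binary classification.
# The features and the tiny toy dataset are illustrative; the paper's
# actual feature set and data are not reproduced here.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score

# Each row: [mean word confidence, hypothesis length, LM score]; label 1 = critical error.
X = np.array([[0.95, 6, -12.0], [0.40, 5, -35.0], [0.88, 8, -15.0],
              [0.35, 7, -40.0], [0.92, 4, -10.0], [0.30, 6, -38.0]])
y = np.array([0, 1, 0, 1, 0, 1])

clf = LogisticRegression().fit(X, y)
pred = clf.predict(X)
print("F-score on toy data:", f1_score(y, pred))
```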


parallel computing in electrical engineering | 2006

Hybrid Approach for Parallelization of Sequential Code with Function Level and Block Level Parallelization

K. Ashwin Kumar; Aasish Pappu; K. Sarath Kumar; Sudip Sanyal

Automatic parallelization of sequential code means finding parallel segments in the code and executing those segments in parallel by sending them to different computers in a grid. Parallel segments can be found through block-level, instruction-level, or function-level analysis. A block is any contiguous part of the code that performs a particular task. This paper describes a hybrid approach that combines block-level analysis with function-level analysis for parallelization of sequential code, and illustrates its advantages over block-level and function-level parallelization performed independently. In this approach, segments of code are identified as basic blocks, which are then analyzed and classified as parallelizable or dependent. Loops, which are also identified as blocks, are parallelized using existing loop parallelization techniques. This information is used to automatically execute the set of independent blocks on different nodes of the grid using the Message Passing Interface (MPI). The system inserts MPI library calls at appropriate positions in the source code to carry out the automatic parallelization and execution of the program.
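A minimal mpi4py sketch of the final step described above, assuming the dependence analysis has already produced a set of independent blocks; the block representation and the work functions are illustrative stand-ins, not code generated by the described system.

```python
# Hedged sketch: executing independent code blocks on different MPI ranks.
# Assumes the dependence analysis has already produced a list of independent
# blocks; each block is modeled here as a simple callable for illustration.
# Run with e.g.: mpiexec -n 4 python parallel_blocks.py
from mpi4py import MPI

def block_a(): return sum(range(1_000))
def block_b(): return max(range(1_000))
def block_c(): return len("independent block")
def block_d(): return 42

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

blocks = [block_a, block_b, block_c, block_d]
# Round-robin assignment of independent blocks to ranks.
my_results = [blk() for i, blk in enumerate(blocks) if i % size == rank]

gathered = comm.gather(my_results, root=0)   # collect partial results at rank 0
if rank == 0:
    print("results:", [r for part in gathered for r in part])
```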


meeting of the association for computational linguistics | 2017

DocTag2Vec: An Embedding Based Multi-label Learning Approach for Document Tagging.

Sheng Chen; Akshay Soni; Aasish Pappu; Yashar Mehdad

Tagging news articles or blog posts with relevant tags from a collection of predefined ones is coined as document tagging in this work. Accurate tagging of articles can benefit several downstream applications such as recommendation and search. In this work, we propose a novel yet simple approach called DocTag2Vec to accomplish this task. We substantially extend Word2Vec and Doc2Vec, two popular models for learning distributed representations of words and documents. In DocTag2Vec, we simultaneously learn the representation of words, documents, and tags in a joint vector space during training, and employ a simple k-nearest neighbor search to predict tags for unseen documents. In contrast to previous multi-label learning methods, DocTag2Vec deals directly with raw text instead of provided feature vectors and, in addition, enjoys advantages such as learning tag representations and the ability to handle newly created tags. To demonstrate the effectiveness of our approach, we conduct experiments on several datasets and show promising results against state-of-the-art methods.
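A small numpy sketch of the prediction step described here: given word and tag vectors assumed to have been trained in a joint space (random placeholders below), a new document is embedded and its tags predicted by k-nearest-neighbor search over the tag vectors. The training procedure itself is not reproduced.

```python
# Hedged sketch of DocTag2Vec-style tag prediction via k-NN in a joint space.
# The vectors below are random placeholders standing in for jointly trained
# word/document/tag embeddings; only the k-NN prediction step is illustrated.
import numpy as np

rng = np.random.default_rng(0)
dim, k = 50, 3
tag_names = ["politics", "sports", "finance", "tech", "health"]
tag_vecs = rng.normal(size=(len(tag_names), dim))       # trained tag embeddings
word_vecs = {w: rng.normal(size=dim) for w in ["election", "vote", "senate"]}

def embed_document(tokens):
    """Toy inference: average the (jointly trained) word vectors."""
    vecs = [word_vecs[t] for t in tokens if t in word_vecs]
    return np.mean(vecs, axis=0)

def predict_tags(tokens, k=k):
    doc = embed_document(tokens)
    # Cosine similarity between the document and every tag vector.
    sims = tag_vecs @ doc / (np.linalg.norm(tag_vecs, axis=1) * np.linalg.norm(doc))
    return [tag_names[i] for i in np.argsort(-sims)[:k]]

print(predict_tags(["election", "senate", "debate"]))
```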


international conference on human computer interaction | 2013

Situated multiparty interaction between humans and agents

Aasish Pappu; Ming Sun; Seshadri Sridharan; Alexander I. Rudnicky

A social agent such as a receptionist or an escort robot encounters challenges when communicating with people in open areas. The agent must know not to react to distracting acoustic and visual events, and it needs to appropriately handle situations that include multiple humans, focusing on active interlocutors and shifting attention based on the context. We describe a multiparty interaction agent that helps multiple users arrange a common activity. In a user study, we found that the agent can discriminate well between active and inactive interlocutors by using skeletal and azimuth information. Participants found the addressee much clearer when an animated talking head was used.
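A hedged Python sketch of one way the active/inactive discrimination could combine azimuth and skeleton cues; the thresholds and the rule itself are illustrative assumptions, not the paper's model.

```python
# Hedged sketch of active-interlocutor detection from azimuth and skeleton
# cues. Thresholds and the feature combination are illustrative assumptions;
# the paper's actual model is not reproduced.
from dataclasses import dataclass

@dataclass
class Person:
    angle_deg: float       # angular position from skeleton tracking
    distance_m: float      # distance to the agent from skeleton tracking

def is_active(person: Person, sound_azimuth_deg: float,
              max_angle_gap: float = 15.0, max_distance: float = 2.5) -> bool:
    """Treat a person as an active interlocutor when the sound source's
    azimuth roughly matches their tracked position and they stand close by."""
    facing = abs(person.angle_deg - sound_azimuth_deg) <= max_angle_gap
    nearby = person.distance_m <= max_distance
    return facing and nearby

people = [Person(angle_deg=-30.0, distance_m=1.2), Person(angle_deg=40.0, distance_m=3.5)]
print([is_active(p, sound_azimuth_deg=-25.0) for p in people])   # [True, False]
```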


intelligent user interfaces | 2013

Deploying speech interfaces to the masses

Aasish Pappu; Alexander I. Rudnicky

Speech systems are typically deployed either over phones, e.g. IVR agents, or on embodied agents, e.g. domestic robots. Most of these systems are limited to a particular platform, i.e., only accessible by phone or in situated interactions, which limits scalability and the potential domain of operation. Our goal is to make speech interfaces more widely available, and we propose a new approach for deploying such interfaces on the internet along with traditional platforms. In this work, we describe a lightweight speech interface architecture built on top of Freeswitch, an open-source softswitch platform. A softswitch lets us give users access over several types of channels (phone, VOIP, etc.) and support multiple users at the same time. We demonstrate two dialog applications developed using this approach: 1) Virtual Chauffeur, a voice-based virtual driving experience, and 2) Talkie, a speech-based chat bot.
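A hedged Python sketch of the channel-agnostic idea, in which one dialog handler serves sessions regardless of whether they arrive by phone, VOIP, or the web; the channel names and session objects are invented for illustration, and this is not the Freeswitch API used in the paper.

```python
# Hedged sketch: one dialog policy serving sessions from multiple channels.
# Channel names and session objects are illustrative; this is not the
# Freeswitch/softswitch interface the paper builds on.
import queue
import threading

class DialogSession:
    def __init__(self, channel: str, user_id: str):
        self.channel, self.user_id = channel, user_id

    def handle(self, utterance: str) -> str:
        # One dialog policy shared by every channel type.
        return f"[{self.channel}] heard from {self.user_id}: {utterance}"

incoming: "queue.Queue[tuple[DialogSession, str]]" = queue.Queue()

def worker():
    while True:
        session, utterance = incoming.get()
        print(session.handle(utterance))
        incoming.task_done()

threading.Thread(target=worker, daemon=True).start()

# Simultaneous users on different channels share the same dialog logic.
incoming.put((DialogSession("phone", "alice"), "drive me downtown"))
incoming.put((DialogSession("voip", "bob"), "tell me a joke"))
incoming.join()
```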


annual meeting of the special interest group on discourse and dialogue | 2015

The Cohort and Speechify Libraries for Rapid Construction of Speech Enabled Applications for Android

Tejaswi Kasturi; Haojian Jin; Aasish Pappu; Sungjin Lee; Beverley Harrison; Ramana Murthy; Amanda Stent

Despite the prevalence of libraries that provide speech recognition and text-to-speech synthesis “in the cloud”, it remains difficult for developers to create user-friendly, consistent spoken language interfaces to their mobile applications. In this paper, we present the Speechify / Cohort libraries for rapid speech enabling of Android applications. The Speechify library wraps several publicly available speech recognition and synthesis APIs, incorporates state-of-the-art voice activity detection and simple, flexible hybrid speech recognition, and allows developers to experiment with different modes of user interaction. The Cohort library, built on a stripped-down version of OpenDial, facilitates flexible interaction between and within “Speechified” mobile applications.
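A hedged sketch, in Python rather than the libraries' Android/Java code, of the wrapping pattern described here: several recognition backends behind one interface with a simple hybrid fallback. The backend classes are illustrative placeholders, not the APIs Speechify actually wraps.

```python
# Hedged sketch of wrapping multiple speech recognition backends behind one
# interface with a hybrid fallback. The backends are toy placeholders, not
# the actual APIs wrapped by the Speechify library.
from abc import ABC, abstractmethod
from typing import Optional

class Recognizer(ABC):
    @abstractmethod
    def recognize(self, audio: bytes) -> Optional[str]: ...

class EmbeddedRecognizer(Recognizer):
    def recognize(self, audio: bytes) -> Optional[str]:
        return "call mom" if audio else None          # toy on-device result

class CloudRecognizer(Recognizer):
    def recognize(self, audio: bytes) -> Optional[str]:
        return "call mom and dad" if audio else None  # toy cloud result

class HybridRecognizer(Recognizer):
    """Try the fast embedded engine first, fall back to the cloud engine."""
    def __init__(self, primary: Recognizer, fallback: Recognizer):
        self.primary, self.fallback = primary, fallback

    def recognize(self, audio: bytes) -> Optional[str]:
        return self.primary.recognize(audio) or self.fallback.recognize(audio)

asr = HybridRecognizer(EmbeddedRecognizer(), CloudRecognizer())
print(asr.recognize(b"\x00\x01"))   # -> "call mom"
```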


conference of the european chapter of the association for computational linguistics | 2014

Conversational Strategies for Robustly Managing Dialog in Public Spaces

Aasish Pappu; Ming Sun; Seshadri Sridharan; Alexander I. Rudnicky



annual meeting of the special interest group on discourse and dialogue | 2013

Predicting Tasks in Goal-Oriented Spoken Dialog Systems using Semantic Knowledge Bases

Aasish Pappu; Alexander I. Rudnicky


Collaboration


Dive into Aasish Pappu's collaborations.

Top Co-Authors

Akshay Soni (University of Minnesota)

Benjamin Frisch (Carnegie Mellon University)

Matthew Marge (Carnegie Mellon University)

Ming Sun (Carnegie Mellon University)

Peng Li (Carnegie Mellon University)