
Publication


Featured research published by Karthik Narasimhan.


Empirical Methods in Natural Language Processing (EMNLP) | 2015

Language Understanding for Text-based Games using Deep Reinforcement Learning

Karthik Narasimhan; Tejas D. Kulkarni; Regina Barzilay

In this paper, we consider the task of learning control policies for text-based games. In these games, all interactions in the virtual world are through text and the underlying state is not observed. The resulting language barrier makes such environments challenging for automatic game players. We employ a deep reinforcement learning framework to jointly learn state representations and action policies using game rewards as feedback. This framework enables us to map text descriptions into vector representations that capture the semantics of the game states. We evaluate our approach on two game worlds, comparing against baselines using bag-of-words and bag-of-bigrams for state representations. Our algorithm outperforms the baselines on both worlds, demonstrating the importance of learning expressive representations.
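The joint representation-and-policy idea can be sketched in miniature. This is not the paper's deep network: the sketch below hashes a bag-of-words state description into a fixed vector and learns linear per-action Q-values with TD updates; the action set, game text, and all names are illustrative.

```python
import random

# Hashed bag-of-words state encoding plus linear per-action Q-values.
# A toy stand-in for the paper's learned representations; names are ours.

DIM = 64
ACTIONS = ["go north", "take lamp"]

def encode(text):
    """Map a textual state description to a fixed-size hashed count vector."""
    v = [0.0] * DIM
    for w in text.lower().split():
        v[hash(w) % DIM] += 1.0
    return v

class LinearQ:
    def __init__(self, lr=0.05, gamma=0.9):
        self.w = {a: [0.0] * DIM for a in ACTIONS}
        self.lr, self.gamma = lr, gamma

    def q(self, s, a):
        return sum(wi * si for wi, si in zip(self.w[a], s))

    def act(self, s, eps=0.1):
        """Epsilon-greedy action choice over the learned Q-values."""
        if random.random() < eps:
            return random.choice(ACTIONS)
        return max(ACTIONS, key=lambda a: self.q(s, a))

    def update(self, s, a, r, s_next):
        """One TD(0) step toward r + gamma * max_b Q(s', b)."""
        target = r + self.gamma * max(self.q(s_next, b) for b in ACTIONS)
        err = target - self.q(s, a)
        self.w[a] = [wi + self.lr * err * si for wi, si in zip(self.w[a], s)]

agent = LinearQ()
s = encode("You are in a dark room. A lamp sits on the table.")
for _ in range(20):                 # in this toy state only "take lamp" pays off
    for a in ACTIONS:
        agent.update(s, a, 1.0 if a == "take lamp" else 0.0, s)
print(agent.act(s, eps=0.0))        # greedy action after training: take lamp
```

The hashing trick keeps the state vector fixed-size for arbitrary game text; the paper's contribution is replacing this fixed encoding with representations learned jointly with the policy.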


Empirical Methods in Natural Language Processing (EMNLP) | 2016

Improving Information Extraction by Acquiring External Evidence with Reinforcement Learning

Karthik Narasimhan; Adam Yala; Regina Barzilay

Most successful information extraction systems operate with access to a large collection of documents. In this work, we explore the task of acquiring and incorporating external evidence to improve extraction accuracy in domains where the amount of training data is scarce. This process entails issuing search queries, extraction from new sources and reconciliation of extracted values, which are repeated until sufficient evidence is collected. We approach the problem using a reinforcement learning framework where our model learns to select optimal actions based on contextual information. We employ a deep Q-network, trained to optimize a reward function that reflects extraction accuracy while penalizing extra effort. Our experiments on two databases -- of shooting incidents, and food adulteration cases -- demonstrate that our system significantly outperforms traditional extractors and a competitive meta-classifier baseline.
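The reward described above, extraction accuracy traded off against query effort, is easy to make concrete. The sketch below uses our own toy slot schema and gold values for a shooting-incident record; none of these names come from the paper's databases.

```python
# Reward for one external-evidence step: accuracy gain minus query cost.
# GOLD and the slot names are hypothetical illustrations.

GOLD = {"shooter_name": "john doe", "num_victims": "3", "city": "austin"}

def accuracy(extracted):
    """Fraction of slots whose current value matches the gold value."""
    hits = sum(extracted.get(k) == v for k, v in GOLD.items())
    return hits / len(GOLD)

def step_reward(before, after, query_penalty=0.1):
    """Accuracy improvement from one search-and-reconcile step, minus its cost."""
    return accuracy(after) - accuracy(before) - query_penalty

state = {"shooter_name": None, "num_victims": "3", "city": "dallas"}
new_state = {"shooter_name": "john doe", "num_victims": "3", "city": "austin"}
r = step_reward(state, new_state)
print(round(r, 3))  # two of three slots corrected: +2/3 accuracy, -0.1 cost
```

The penalty term is what makes the learned policy stop issuing queries once further evidence no longer improves the extraction.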


International Conference on Systems | 2016

sk_p: a neural program corrector for MOOCs

Yewen Pu; Karthik Narasimhan; Armando Solar-Lezama; Regina Barzilay

We present a novel technique for automatic program correction in MOOCs, capable of fixing both syntactic and semantic errors without manual, problem-specific correction strategies. Given an incorrect student program, it generates candidate programs from a distribution of likely corrections, and checks each candidate for correctness against a test suite. The key observation is that in MOOCs many programs share similar code fragments, and the seq2seq neural network model, used in the natural-language-processing task of machine translation, can be modified and trained to recover these fragments. Experiments show our scheme can correct 29% of all incorrect submissions and outperforms a state-of-the-art approach that requires manual, problem-specific correction strategies.
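The generate-and-check loop is the part that can be sketched without the neural model. Below, a hard-coded candidate list stands in for the seq2seq distribution over likely corrections; `double` is a hypothetical student function, and the test-suite check mirrors the correctness criterion described above.

```python
# Generate-and-check program correction: try candidates in order of
# likelihood and accept the first one that passes the test suite.

def passes(program_src, test_suite):
    """Run a candidate program against every (input, expected) pair."""
    env = {}
    try:
        exec(program_src, env)            # define the candidate function
        f = env["double"]                 # hypothetical assignment: double(x)
        return all(f(x) == y for x, y in test_suite)
    except Exception:                     # syntax or runtime error: reject
        return False

def correct(candidates, test_suite):
    """Return the first candidate that passes, or None if all fail."""
    for src in candidates:                # ordered from most to least likely
        if passes(src, test_suite):
            return src
    return None

suite = [(0, 0), (2, 4), (5, 10)]
candidates = [
    "def double(x): return x + 2",        # plausible but wrong correction
    "def double(x): return x * 2",        # correct correction
]
fixed = correct(candidates, suite)
print(fixed)  # def double(x): return x * 2
```

Because every candidate is validated against the tests, the generator only needs to put the right fix somewhere near the top of its ranking, not first.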


International Joint Conference on Natural Language Processing | 2015

Machine Comprehension with Discourse Relations

Karthik Narasimhan; Regina Barzilay

This paper proposes a novel approach for incorporating discourse information into machine comprehension applications. Traditionally, such information is computed using off-the-shelf discourse analyzers. This design provides limited opportunities for guiding the discourse parser based on the requirements of the target task. In contrast, our model induces relations between sentences while optimizing a task-specific objective. This approach enables the model to benefit from discourse information without relying on explicit annotations of discourse structure during training. The model jointly identifies relevant sentences, establishes relations between them and predicts an answer. We implement this idea in a discriminative framework with hidden variables that capture relevant sentences and relations unobserved during training. Our experiments demonstrate that the discourse-aware model outperforms state-of-the-art machine comprehension systems.


Empirical Methods in Natural Language Processing (EMNLP) | 2014

Morphological Segmentation for Keyword Spotting

Karthik Narasimhan; Damianos Karakos; Richard M. Schwartz; Stavros Tsakalidis; Regina Barzilay

We explore the impact of morphological segmentation on keyword spotting (KWS). Despite potential benefits, state-of-the-art KWS systems do not use morphological information. In this paper, we augment a state-of-the-art KWS system with sub-word units derived from supervised and unsupervised morphological segmentations, and compare with phonetic and syllabic segmentations. Our experiments demonstrate that morphemes improve overall performance of KWS systems. Syllabic units, however, rival the performance of morphological units when used in KWS. By combining morphological, phonetic and syllabic segmentations, we demonstrate substantial performance gains.
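The sub-word decomposition at the heart of this approach can be sketched with a greedy longest-match segmenter. The morpheme lexicon below is a toy illustration of ours, not output from the paper's supervised or unsupervised segmenters.

```python
# Greedy longest-match segmentation of a keyword into morpheme-like units,
# falling back to single characters for unknown material.

MORPHEMES = {"un", "break", "able", "walk", "ing", "re", "play", "ed"}

def segment(word):
    """Split a word into known morphemes, preferring the longest match."""
    units, i = [], 0
    while i < len(word):
        for j in range(len(word), i, -1):      # try the longest piece first
            if word[i:j] in MORPHEMES:
                units.append(word[i:j])
                i = j
                break
        else:                                   # no morpheme matches here:
            units.append(word[i])               # back off to a character
            i += 1
    return units

print(segment("unbreakable"))   # ['un', 'break', 'able']
print(segment("replaying"))     # ['re', 'play', 'ing']
```

Indexing these smaller units is what lets a KWS system hit keywords whose full-word forms never occurred in the training audio.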


Empirical Methods in Natural Language Processing (EMNLP) | 2016

Neural Generation of Regular Expressions from Natural Language with Minimal Domain Knowledge

Nicholas Locascio; Karthik Narasimhan; Eduardo DeLeon; Nate Kushman; Regina Barzilay

This paper explores the task of translating natural language queries into regular expressions which embody their meaning. In contrast to prior work, the proposed neural model does not utilize domain-specific crafting, learning to translate directly from a parallel corpus. To fully explore the potential of neural models, we propose a methodology for collecting a large corpus of regular expressions paired with natural language descriptions. Our resulting model achieves a performance gain of 19.6% over previous state-of-the-art models.
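Whether a predicted regex truly embodies the meaning of a query is a semantic-equivalence question; a cheap proxy, sketched here with a toy query ("lines containing a number") and our own example strings, is agreement on positive and negative examples.

```python
import re

# Example-based check of a predicted regex: accept every positive string
# and reject every negative one. The query and examples are toy values.

def agrees(pattern, positives, negatives):
    """True iff the regex matches all positives and no negatives."""
    rx = re.compile(pattern)
    return (all(rx.search(s) for s in positives)
            and not any(rx.search(s) for s in negatives))

predicted = r"[0-9]+"
ok = agrees(predicted, ["room 101", "42"], ["no digits here"])
print(ok)  # True
```

Example-based checks can over-accept (two regexes may agree on the examples yet differ elsewhere), which is why exact semantic equivalence is the stricter evaluation.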


Meeting of the Association for Computational Linguistics (ACL) | 2016

Nonparametric Spherical Topic Modeling with Word Embeddings

Kayhan N. Batmanghelich; Ardavan Saeedi; Karthik Narasimhan; Samuel J. Gershman

Traditional topic models do not account for semantic regularities in language. Recent distributional representations of words exhibit semantic consistency over directional metrics such as cosine similarity. However, neither categorical nor Gaussian observational distributions used in existing topic models are appropriate to leverage such correlations. In this paper, we propose to use the von Mises-Fisher distribution to model the density of words over a unit sphere. Such a representation is well-suited for directional data. We use a Hierarchical Dirichlet Process for our base topic model and propose an efficient inference algorithm based on Stochastic Variational Inference. This model enables us to naturally exploit the semantic structures of word embeddings while flexibly discovering the number of topics. Experiments demonstrate that our method outperforms competitive approaches in terms of topic coherence on two different text corpora while offering efficient inference.
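The directional intuition above can be illustrated in a few lines. This is not the paper's HDP with stochastic variational inference: with a shared concentration kappa, the unnormalized von Mises-Fisher log-density kappa * (mu . x) makes assigning a unit word vector to a topic equivalent to maximizing cosine similarity with the topic's mean direction. The two-dimensional "embeddings" below are toy values.

```python
import math

# vMF-style topic scoring on the unit sphere: score a word vector against
# each topic's mean direction; with equal kappa this is cosine similarity.

def unit(v):
    """Project a vector onto the unit sphere."""
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

def vmf_score(mu, x, kappa=10.0):
    """Unnormalized vMF log-density: kappa times the dot product mu . x."""
    return kappa * sum(m * xi for m, xi in zip(mu, x))

topics = {"sports": unit([1.0, 0.2]), "finance": unit([0.1, 1.0])}
word_vec = unit([0.9, 0.3])   # toy embedding, closer to the sports direction
best = max(topics, key=lambda t: vmf_score(topics[t], word_vec))
print(best)  # sports
```

The paper's model additionally infers the topic directions, concentrations, and the number of topics; this sketch only shows why the vMF density is the natural likelihood for cosine-consistent embeddings.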


International Conference on Acoustics, Speech, and Signal Processing (ICASSP) | 2017

Constructing sub-word units for spoken term detection

Charl Johannes van Heerden; Damianos Karakos; Karthik Narasimhan; Marelie H. Davel; Richard M. Schwartz

Spoken term detection, especially of out-of-vocabulary (OOV) keywords, benefits from the use of sub-word systems. We experiment with different language-independent approaches to sub-word unit generation, generating both syllable-like and morpheme-like units, and demonstrate how the performance of syllable-like units can be improved by artificially increasing the number of unique units. The effect of unit choice is empirically evaluated using the eight languages from the 2016 IARPA BABEL evaluation.


Neural Information Processing Systems (NIPS) | 2016

Hierarchical Deep Reinforcement Learning: Integrating Temporal Abstraction and Intrinsic Motivation

Tejas D. Kulkarni; Karthik Narasimhan; Ardavan Saeedi; Joshua B. Tenenbaum


Transactions of the Association for Computational Linguistics | 2015

An Unsupervised Method for Uncovering Morphological Chains

Karthik Narasimhan; Regina Barzilay; Tommi S. Jaakkola

Collaboration


Dive into Karthik Narasimhan's collaborations.

Top Co-Authors

Regina Barzilay, Massachusetts Institute of Technology
Tommi S. Jaakkola, Massachusetts Institute of Technology
Ardavan Saeedi, Massachusetts Institute of Technology
Tejas D. Kulkarni, Massachusetts Institute of Technology
Adam Yala, Massachusetts Institute of Technology
Armando Solar-Lezama, Massachusetts Institute of Technology
Jiaming Luo, Massachusetts Institute of Technology
Jonathan H. Huggins, Massachusetts Institute of Technology