Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Arvind Neelakantan is active.

Publication


Featured research published by Arvind Neelakantan.


empirical methods in natural language processing | 2014

Efficient Non-parametric Estimation of Multiple Embeddings per Word in Vector Space

Arvind Neelakantan; Jeevan Shankar; Alexandre Passos; Andrew McCallum

There is rising interest in vector-space word embeddings and their use in NLP, especially given recent methods for their fast estimation at very large scale. Nearly all this work, however, assumes a single vector per word type—ignoring polysemy and thus jeopardizing their usefulness for downstream tasks. We present an extension to the Skip-gram model that efficiently learns multiple embeddings per word type. It differs from recent related work by jointly performing word sense discrimination and embedding learning, by non-parametrically estimating the number of senses per word type, and by its efficiency and scalability. We present new state-of-the-art results in the word similarity in context task and demonstrate its scalability by training with one machine on a corpus of nearly 1 billion tokens in less than 6 hours.
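
The non-parametric step can be pictured as online clustering of context vectors: each occurrence of a word joins its most similar sense cluster, and a new sense is created when no cluster is similar enough. Below is a minimal sketch of that allocation step in NumPy, assuming cosine similarity and an illustrative threshold `lambda_`; the skip-gram gradient update itself (applied only to the selected sense's embedding) is omitted.

```python
import numpy as np

class NonParametricSenses:
    """Minimal sketch of non-parametric sense allocation: each word
    keeps a list of sense clusters over context vectors, and a new
    sense is spawned when the current context is dissimilar from all
    existing clusters. Threshold and similarity are illustrative."""

    def __init__(self, dim, lambda_=0.5):
        self.dim = dim
        self.lambda_ = lambda_          # minimum similarity to reuse a sense
        self.centroids = {}             # word -> list of context centroids
        self.counts = {}                # word -> list of cluster sizes

    def assign_sense(self, word, context_vec):
        """Return the sense index for `word` in this context,
        creating a new sense if no cluster is similar enough."""
        cents = self.centroids.setdefault(word, [])
        cnts = self.counts.setdefault(word, [])
        if cents:
            sims = [self._cos(context_vec, c) for c in cents]
            best = int(np.argmax(sims))
            if sims[best] >= self.lambda_:
                # fold this context into the matched cluster's running mean
                cnts[best] += 1
                cents[best] += (context_vec - cents[best]) / cnts[best]
                return best
        # context unlike every existing sense: spawn a new one
        cents.append(context_vec.astype(float).copy())
        cnts.append(1)
        return len(cents) - 1

    @staticmethod
    def _cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

senses = NonParametricSenses(dim=4)
ctx = np.array([1.0, 0.0, 0.0, 0.0])
print(senses.assign_sense("bank", ctx))    # 0: first sense created
print(senses.assign_sense("bank", -ctx))   # 1: opposite context -> new sense
```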


international joint conference on natural language processing | 2015

Compositional Vector Space Models for Knowledge Base Completion

Arvind Neelakantan; Benjamin Roth; Andrew McCallum

Knowledge base (KB) completion adds new facts to a KB by making inferences from existing facts, for example by inferring with high likelihood nationality(X,Y) from bornIn(X,Y). Most previous methods infer simple one-hop relational synonyms like this, or use as evidence a multi-hop relational path treated as an atomic feature, like bornIn(X,Z) → containedIn(Z,Y). This paper presents an approach that reasons about conjunctions of multi-hop relations non-atomically, composing the implications of a path using a recurrent neural network (RNN) that takes as inputs vector embeddings of the binary relations in the path. Not only does this allow us to generalize to paths unseen at training time, but also, with a single high-capacity RNN, to predict new relation types not seen when the compositional model was trained (zero-shot learning). We assemble a new dataset of over 52M relational triples, and show that our method improves over a traditional classifier by 11%, and over a method leveraging pre-trained embeddings by 7%.
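
A minimal sketch of the composition step, assuming a plain tanh RNN over the relation embeddings of a path and a dot-product score against the target relation's embedding; the paper's exact parameterization, dataset, and training objective are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 50                                  # embedding / hidden size (illustrative)
rel_emb = {r: rng.normal(scale=0.1, size=D)
           for r in ["bornIn", "containedIn", "nationality"]}

W_h = rng.normal(scale=0.1, size=(D, D))  # recurrent weights
W_r = rng.normal(scale=0.1, size=(D, D))  # input (relation) weights

def compose_path(path):
    """Fold the relation embeddings along a path into a single vector."""
    h = np.zeros(D)
    for rel in path:
        h = np.tanh(W_h @ h + W_r @ rel_emb[rel])
    return h

# score how strongly the path implies the target relation
path_vec = compose_path(["bornIn", "containedIn"])
print("score vs. nationality:", path_vec @ rel_emb["nationality"])
```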


north american chapter of the association for computational linguistics | 2015

Inferring Missing Entity Type Instances for Knowledge Base Completion: New Dataset and Methods

Arvind Neelakantan; Ming-Wei Chang

Most previous work on knowledge base (KB) completion has focused on the problem of relation extraction. In this work, we focus on the task of inferring missing entity type instances in a KB, a fundamental task for KB completion that has received little attention. Due to the novelty of this task, we construct a large-scale dataset and design an automatic evaluation methodology. Our knowledge base completion method uses information within the existing KB and external information from Wikipedia. We show that methods trained with a global objective that considers unobserved cells from both the entity and the type side give consistently higher-quality predictions than baseline methods. We also perform a manual evaluation on a small subset of the data to verify the effectiveness of our knowledge base completion methods and the correctness of our proposed automatic evaluation method.
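
A toy illustration of the global-objective idea: score entity-type cells with a dot product of embeddings and, for each observed cell, apply a margin-based update that samples unobserved negative cells on both the type side and the entity side. All sizes and the scoring function are illustrative assumptions, not the paper's exact model.

```python
import numpy as np

rng = np.random.default_rng(1)
D, n_entities, n_types = 20, 100, 15             # illustrative sizes
E = rng.normal(scale=0.1, size=(n_entities, D))  # entity embeddings
T = rng.normal(scale=0.1, size=(n_types, D))     # type embeddings

def score(e, t):
    """Plausibility that entity e has type t (dot product of embeddings)."""
    return E[e] @ T[t]

def global_ranking_step(e, t, lr=0.1):
    """One SGD step on a margin loss for the observed cell (e, t),
    sampling unobserved negatives on BOTH the type side and the
    entity side, echoing the 'global objective' idea.
    (Negative sampling may occasionally draw a true cell; ignored.)"""
    negatives = [(e, rng.integers(n_types)),      # corrupt the type
                 (rng.integers(n_entities), t)]   # corrupt the entity
    for ne, nt in negatives:
        if 1.0 - score(e, t) + score(ne, nt) > 0:  # margin violated
            e_pos, t_pos = E[e].copy(), T[t].copy()
            e_neg, t_neg = E[ne].copy(), T[nt].copy()
            E[e] += lr * t_pos                     # pull observed pair together
            T[t] += lr * e_pos
            E[ne] -= lr * t_neg                    # push sampled pair apart
            T[nt] -= lr * e_neg

global_ranking_step(e=0, t=3)
print("score(entity 0, type 3):", score(0, 3))
```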


association for information science and technology | 2016

Predicting the impact of scientific concepts using full-text features

Kathy McKeown; Hal Daumé; Snigdha Chaturvedi; John Paparrizos; Kapil Thadani; Pablo Barrio; Or Biran; Suvarna Bothe; Michael Collins; Kenneth R. Fleischmann; Luis Gravano; Rahul Jha; Ben King; Kevin McInerney; Taesun Moon; Arvind Neelakantan; Diarmuid O'Seaghdha; Dragomir R. Radev; Clay Templeton; Simone Teufel

New scientific concepts, interpreted broadly, are continuously introduced in the literature, but relatively few concepts have a long-term impact on society. The identification of such concepts is a challenging prediction task that would help multiple parties—including researchers and the general public—focus their attention within the vast scientific literature. In this paper we present a system that predicts the future impact of a scientific concept, represented as a technical term, based on the information available from recently published research articles. We analyze the usefulness of rich features derived from the full text of the articles through a variety of approaches, including rhetorical sentence analysis, information extraction, and time-series analysis. The results from two large-scale experiments with 3.8 million full-text articles and 48 million metadata records support the conclusion that full-text features are significantly more useful for prediction than metadata-only features and that the most accurate predictions result from combining the metadata and full-text features. Surprisingly, these results hold even when the metadata features are available for a much larger number of documents than are available for the full-text features.
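
As a rough illustration of the feature-combination finding, the sketch below concatenates a metadata feature block with a full-text feature block and trains a single classifier on the combined representation. The feature values here are synthetic placeholders, not the paper's actual features.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n = 500
meta = rng.normal(size=(n, 4))      # metadata block, e.g. venue/year stats
fulltext = rng.normal(size=(n, 8))  # full-text block, e.g. rhetorical and
                                    # time-series features of the term
y = rng.integers(0, 2, size=n)      # 1 = concept went on to have impact

X = np.hstack([meta, fulltext])     # combined feature representation
clf = LogisticRegression(max_iter=1000).fit(X, y)
print("training accuracy:", clf.score(X, y))
```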


conference of the european chapter of the association for computational linguistics | 2014

Learning Dictionaries for Named Entity Recognition using Minimal Supervision

Arvind Neelakantan; Michael Collins

This paper describes an approach for automatic construction of dictionaries for Named Entity Recognition (NER) using large amounts of unlabeled data and a few seed examples. We use Canonical Correlation Analysis (CCA) to obtain lower-dimensional embeddings (representations) for candidate phrases and classify these phrases using a small number of labeled examples. Our method achieves 16.5% and 11.3% F-1 score improvements over co-training on disease and virus NER, respectively. We also show that adding candidate phrase embeddings as features in a sequence tagger gives better performance than using word embeddings.
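
The pipeline can be sketched as: build two views of each candidate phrase, project them into a shared low-dimensional space with CCA, then train a classifier on a handful of seed labels. The sketch below uses scikit-learn with random stand-ins for the two views; the real system derives these views from unlabeled text.

```python
import numpy as np
from sklearn.cross_decomposition import CCA
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
n, k = 1000, 10
view_spelling = rng.normal(size=(n, 50))  # stand-in: character n-gram counts
view_context = rng.normal(size=(n, 80))   # stand-in: surrounding-word counts

# project the two views of each candidate phrase into a shared space
cca = CCA(n_components=k, max_iter=500)
phrase_emb, _ = cca.fit_transform(view_spelling, view_context)

# classify phrase embeddings from a handful of labeled seed examples
seed_idx = np.arange(20)
seed_labels = np.array([0, 1] * 10)       # 1 = entity, 0 = not (toy seeds)
clf = LogisticRegression().fit(phrase_emb[seed_idx], seed_labels)
is_entity = clf.predict(phrase_emb)       # dictionary = phrases labeled 1
print("dictionary size:", int(is_entity.sum()))
```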


north american chapter of the association for computational linguistics | 2016

Incorporating Selectional Preferences in Multi-hop Relation Extraction

Rajarshi Das; Arvind Neelakantan; David Belanger; Andrew McCallum

Relation extraction is one of the core challenges in automated knowledge base construction. One line of work performs multi-hop reasoning on the paths connecting an entity pair to infer new relations. While these methods have been successfully applied to knowledge base completion, they do not utilize entity or entity type information to make predictions. In this work, we incorporate selectional preferences, i.e., the constraints that relations place on the allowed entity types of candidate entities, into multi-hop relation extraction by including entity type information. We achieve a 17.67% (relative) improvement in MAP score on a relation extraction task compared to a method that does not use entity type information.
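
One simple way to realize this, sketched below, is to extend the path-RNN sketch above by folding the entity-type embeddings of the pair's endpoints into the same recurrence, so the score can reflect which argument types a relation allows; the exact wiring used in the paper may differ.

```python
import numpy as np

rng = np.random.default_rng(4)
D = 50                                  # illustrative dimensionality
rel_emb = {r: rng.normal(scale=0.1, size=D)
           for r in ["bornIn", "containedIn", "nationality"]}
type_emb = {t: rng.normal(scale=0.1, size=D)
            for t in ["person", "city", "country"]}

W_h = rng.normal(scale=0.1, size=(D, D))  # recurrent weights
W_x = rng.normal(scale=0.1, size=(D, D))  # input weights

def compose(path, src_type, dst_type):
    """Compose a relation path with the endpoint entity types folded
    into the same recurrence, so the resulting score can capture
    selectional preferences (types a relation allows as arguments)."""
    h = np.zeros(D)
    inputs = [type_emb[src_type], *[rel_emb[r] for r in path], type_emb[dst_type]]
    for x in inputs:
        h = np.tanh(W_h @ h + W_x @ x)
    return h

vec = compose(["bornIn", "containedIn"], "person", "country")
print("score vs. nationality:", vec @ rel_emb["nationality"])
```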


international conference on learning representations | 2016

Neural Programmer: Inducing Latent Programs with Gradient Descent

Arvind Neelakantan; Quoc V. Le; Ilya Sutskever


arXiv: Machine Learning | 2017

Adding Gradient Noise Improves Learning for Very Deep Networks

Arvind Neelakantan; Luke Vilnis; Quoc V. Le; Lukasz Kaiser; Karol Kurach; Ilya Sutskever; James Martens
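
No abstract accompanies this entry, but the technique is compact: perturb every gradient with Gaussian noise whose variance is annealed over training. A minimal sketch, using the decay schedule the paper reports (sigma_t^2 = eta / (1 + t)^gamma with gamma = 0.55):

```python
import numpy as np

def noisy_gradient(grad, step, eta=0.3, gamma=0.55, rng=None):
    """Perturb a gradient with annealed Gaussian noise before the
    parameter update. Noise is drawn from N(0, sigma_t^2) with
    sigma_t^2 = eta / (1 + t)^gamma; the paper reports eta in
    {0.01, 0.3, 1.0} and gamma = 0.55 as effective settings."""
    rng = rng or np.random.default_rng()
    sigma = np.sqrt(eta / (1.0 + step) ** gamma)
    return grad + rng.normal(scale=sigma, size=np.shape(grad))

# inside a plain SGD loop this would be used as (illustrative):
#   params -= lr * noisy_gradient(grad, step=t)
```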


international conference on learning representations | 2017

Learning a Natural Language Interface with Neural Programmer

Arvind Neelakantan; Quoc V. Le; Martín Abadi; Andrew McCallum; Dario Amodei


conference of the european chapter of the association for computational linguistics | 2017

Chains of Reasoning over Entities, Relations, and Text using Recurrent Neural Networks

Rajarshi Das; Arvind Neelakantan; David Belanger; Andrew McCallum

Collaboration


Dive into Arvind Neelakantan's collaborations.

Top Co-Authors

Andrew McCallum (University of Massachusetts Amherst)
David Belanger (University of Massachusetts Amherst)
Rajarshi Das (University of Massachusetts Amherst)
Alexandre Passos (University of Massachusetts Amherst)
Ashish Vaswani (University of Southern California)
Aurko Roy (Georgia Institute of Technology)