Publications


Featured research published by Sumit Chopra.


Empirical Methods in Natural Language Processing (EMNLP) | 2015

A Neural Attention Model for Abstractive Sentence Summarization

Alexander M. Rush; Sumit Chopra; Jason Weston

Summarization based on text extraction is inherently limited, but generation-style abstractive methods have proven challenging to build. In this work, we propose a fully data-driven approach to abstractive sentence summarization. Our method utilizes a local attention-based model that generates each word of the summary conditioned on the input sentence. While the model is structurally simple, it can easily be trained end-to-end and scales to a large amount of training data. The model shows significant performance gains on the DUC-2004 shared task compared with several strong baselines.
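The attention step described above can be sketched numerically. This is a minimal illustration with toy dimensions and random, untrained weights, not the paper's code: at each decoding step, attention weights over the input words form a context vector, from which a distribution over the next summary word is scored.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sizes and random weights; the paper's trained model is far larger.
vocab, d = 50, 8
E = rng.normal(size=(vocab, d))      # input-side word embeddings
U = rng.normal(size=(d, d))          # maps decoder state into attention space
W_out = rng.normal(size=(vocab, d))  # output vocabulary projection

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def next_word_dist(input_ids, dec_state):
    """One decoding step: attend over the input words, form a context
    vector, and score every vocabulary word from it."""
    X = E[input_ids]                      # (n, d) input-sentence embeddings
    alpha = softmax(X @ (U @ dec_state))  # attention weights over input words
    ctx = alpha @ X                       # attention-weighted context vector
    return softmax(W_out @ ctx)           # distribution over the next word

p = next_word_dist([3, 17, 42, 9], dec_state=rng.normal(size=8))
```

Training end-to-end would fit `E`, `U`, and `W_out` so that the predicted distributions match reference summaries.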


Empirical Methods in Natural Language Processing (EMNLP) | 2014

Question Answering with Subgraph Embeddings

Antoine Bordes; Sumit Chopra; Jason Weston

This paper presents a system which learns to answer questions on a broad range of topics from a knowledge base using few hand-crafted features. Our model learns low-dimensional embeddings of words and knowledge base constituents; these representations are used to score natural language questions against candidate answers. Training our system using pairs of questions and structured representations of their answers, and pairs of question paraphrases, yields competitive results on a recent benchmark from the literature.
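The scoring idea above can be sketched as follows. This is a hypothetical toy setup, not the paper's system: question words and knowledge-base constituents share one low-dimensional embedding space, each side is summed into a single vector, and a dot product scores the pair.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy vocabularies: question words and KB constituents (entities and
# relations) embedded in the same low-dimensional space, untrained.
n_words, n_kb, d = 30, 20, 6
W_words = rng.normal(size=(n_words, d))
W_kb = rng.normal(size=(n_kb, d))

def score(question_ids, answer_ids):
    """Score a question against a candidate answer: sum the embeddings
    on each side and take their dot product. Training would push scores
    of correct (question, answer) pairs above incorrect ones."""
    q = W_words[question_ids].sum(axis=0)  # bag-of-words question vector
    a = W_kb[answer_ids].sum(axis=0)       # sum over answer-subgraph items
    return float(q @ a)

s = score([1, 4, 7], [2, 5])
```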


North American Chapter of the Association for Computational Linguistics (NAACL) | 2016

Abstractive Sentence Summarization with Attentive Recurrent Neural Networks

Sumit Chopra; Michael Auli; Alexander M. Rush

Abstractive Sentence Summarization generates a shorter version of a given sentence while attempting to preserve its meaning. We introduce a conditional recurrent neural network (RNN) which generates a summary of an input sentence. The conditioning is provided by a novel convolutional attention-based encoder which ensures that the decoder focuses on the appropriate input words at each step of generation. Our model relies only on learned features and is easy to train in an end-to-end fashion on large data sets. Our experiments show that the model significantly outperforms the recently proposed state-of-the-art method on the Gigaword corpus while performing competitively on the DUC-2004 shared task.
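The convolutional part of the encoder can be illustrated in isolation. This is a toy sketch with a random, untrained kernel, not the paper's architecture: each word's embedding is replaced by a convolution over its k-word neighbourhood, so that attention computed on the result sees local context rather than isolated words.

```python
import numpy as np

rng = np.random.default_rng(2)

d, k = 6, 3                     # embedding size, convolution width (toy)
K = rng.normal(size=(k, d, d))  # convolution kernel, one matrix per offset

def conv_features(X):
    """Replace each input word's embedding with a width-k convolution
    over its neighbourhood (zero-padded at the sentence boundaries)."""
    n, pad = X.shape[0], k // 2
    Xp = np.vstack([np.zeros((pad, d)), X, np.zeros((pad, d))])
    out = np.zeros_like(X)
    for i in range(n):
        out[i] = sum(Xp[i + j] @ K[j] for j in range(k))
    return out

F = conv_features(rng.normal(size=(5, 6)))
```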


Empirical Methods in Natural Language Processing (EMNLP) | 2014

#TagSpace: Semantic Embeddings from Hashtags

Jason Weston; Sumit Chopra; Keith Adams

We describe a convolutional neural network that learns feature representations for short textual posts using hashtags as a supervised signal. The proposed approach is trained on up to 5.5 billion words predicting 100,000 possible hashtags. As well as strong performance on the hashtag prediction task itself, we show that its learned representation of text (ignoring the hashtag labels) is useful for other tasks as well. To that end, we present results on a document recommendation task, where it also outperforms a number of baselines.
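Using hashtags as a supervised signal typically means a ranking objective over hashtag embeddings. The sketch below is an illustrative hinge-loss formulation with toy, untrained embeddings, not the paper's exact objective: the correct hashtag's similarity to the post must exceed a sampled negative's by a margin.

```python
import numpy as np

rng = np.random.default_rng(3)

n_tags, d = 10, 5
T = rng.normal(size=(n_tags, d))  # hashtag embeddings (toy, untrained)

def margin_rank_loss(post_vec, pos_tag, neg_tag, margin=1.0):
    """Hinge loss that pushes the correct hashtag's similarity to the
    post above a sampled negative's by a margin; summed over sampled
    negatives, this gives a ranking-style training objective."""
    pos = float(post_vec @ T[pos_tag])
    neg = float(post_vec @ T[neg_tag])
    return max(0.0, margin - pos + neg)

loss = margin_rank_loss(rng.normal(size=5), pos_tag=2, neg_tag=7)
```

With a zero post vector both similarities are zero, so the loss equals the margin; a well-trained post representation drives the loss toward zero.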


Archive | 2008

Machine Learning and the Spatial Structure of House Prices and Housing Returns

Andrew Caplin; Sumit Chopra; John Leahy; Yann LeCun; Trivikraman Thampy

Economists do not have reliable measures of current house values, let alone housing returns. This ignorance underlies the illiquidity of mortgage-backed securities, which in turn feeds back to deepen the sub-prime crisis. Using a massive new data tape of housing transactions in L.A., we demonstrate systematic patterns in the error associated with using the ubiquitous repeat sales methodology to understand house values. In all periods, the resulting indices under-predict sales prices of less expensive homes, and over-predict prices of more expensive homes. The recent period has produced errors that are not only unprecedentedly large in absolute value, but highly systematic: after a few years in which the indices under-predicted prices, they now significantly over-predict them. We introduce new machine learning techniques from computer science to correct for prediction errors that have geographic origins. The results are striking. Accounting for geography significantly reduces the extent of the prediction error, removes many of the systematic patterns, and results in far less deterioration in model performance in the recent period.
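One simple way to exploit geographic structure in prediction errors (an illustration only, not the paper's model) is a nearest-neighbour correction: estimate the index's likely error at a location from the residuals of nearby past sales, then subtract that estimate from the index prediction. All data below are synthetic.

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic data: coordinates of past sales and the index's signed
# prediction error (residual) observed on each of them.
coords = rng.uniform(size=(100, 2))
residuals = np.sin(coords[:, 0] * 6.0) + 0.1 * rng.normal(size=100)

def local_error_estimate(query, k=5):
    """Estimate the index's likely error at a location as the mean
    residual of the k nearest past sales; subtracting this from the
    index prediction removes geographically systematic error."""
    dists = np.linalg.norm(coords - query, axis=1)
    nearest = np.argsort(dists)[:k]
    return float(residuals[nearest].mean())

est = local_error_estimate(np.array([0.5, 0.5]))
```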


Computer Vision and Pattern Recognition (CVPR) | 2005

Learning a similarity metric discriminatively, with application to face verification

Sumit Chopra; Raia Hadsell; Yann LeCun


Neural Information Processing Systems (NIPS) | 2006

Efficient Learning of Sparse Representations with an Energy-Based Model

Marc'Aurelio Ranzato; Christopher S. Poultney; Sumit Chopra; Yann LeCun


Computer Vision and Pattern Recognition (CVPR) | 2006

Dimensionality Reduction by Learning an Invariant Mapping

Raia Hadsell; Sumit Chopra; Yann LeCun


International Conference on Learning Representations (ICLR) | 2015

Memory Networks

Jason Weston; Sumit Chopra; Antoine Bordes


International Conference on Learning Representations (ICLR) | 2016

Towards AI-Complete Question Answering: A Set of Prerequisite Toy Tasks

Jason Weston; Antoine Bordes; Sumit Chopra; Alexander M. Rush; Bart van Merriënboer; Armand Joulin; Tomas Mikolov

Collaboration


Dive into Sumit Chopra's collaborations.

Top Co-Authors

Arthur Szlam

City College of New York
