
Publication


Featured research published by Samuel R. Bowman.


Empirical Methods in Natural Language Processing | 2015

A large annotated corpus for learning natural language inference

Samuel R. Bowman; Gabor Angeli; Christopher Potts; Christopher D. Manning

Understanding entailment and contradiction is fundamental to understanding natural language, and inference about entailment and contradiction is a valuable testing ground for the development of semantic representations. However, machine learning research in this area has been dramatically limited by the lack of large-scale resources. To address this, we introduce the Stanford Natural Language Inference corpus, a new, freely available collection of labeled sentence pairs, written by humans doing a novel grounded task based on image captioning. At 570K pairs, it is two orders of magnitude larger than all other resources of its type. This increase in scale allows lexicalized classifiers to outperform some sophisticated existing entailment models, and it allows a neural network-based model to perform competitively on natural language inference benchmarks for the first time.
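
As a quick illustration of how the corpus is typically used, the sketch below loads a slice of SNLI and fits a simple lexicalized bag-of-words classifier, in the spirit of the baseline families the paper compares against. It assumes the Hugging Face `datasets` mirror of the corpus (dataset name `snli`, fields `premise`, `hypothesis`, `label`); those names come from that mirror, not from the paper itself.

```python
# Minimal sketch: load a slice of SNLI and fit a lexicalized bag-of-words baseline.
# Assumes the Hugging Face `datasets` mirror of the corpus.
from datasets import load_dataset
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

snli = load_dataset("snli", split="train[:20000]")      # premise / hypothesis / label
snli = snli.filter(lambda ex: ex["label"] != -1)        # drop pairs with no gold label

# Lexicalized features: unigrams and bigrams over the concatenated sentence pair.
texts = [p + " ||| " + h for p, h in zip(snli["premise"], snli["hypothesis"])]
vectorizer = CountVectorizer(ngram_range=(1, 2), min_df=2)
X = vectorizer.fit_transform(texts)
y = snli["label"]                                       # 0=entailment, 1=neutral, 2=contradiction

clf = LogisticRegression(max_iter=1000).fit(X, y)
print("train accuracy:", clf.score(X, y))
```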


Conference on Computational Natural Language Learning | 2016

Generating Sentences from a Continuous Space

Samuel R. Bowman; Luke Vilnis; Oriol Vinyals; Andrew M. Dai; Rafal Jozefowicz; Samy Bengio

The standard recurrent neural network language model (RNNLM) generates sentences one word at a time and does not work from an explicit global sentence representation. In this work, we introduce and study an RNN-based variational autoencoder generative model that incorporates distributed latent representations of entire sentences. This factorization allows it to explicitly model holistic properties of sentences such as style, topic, and high-level syntactic features. Samples from the prior over these sentence representations remarkably produce diverse and well-formed sentences through simple deterministic decoding. By examining paths through this latent space, we are able to generate coherent novel sentences that interpolate between known sentences. We present techniques for solving the difficult learning problem presented by this model, demonstrate its effectiveness in imputing missing words, explore many interesting properties of the model's latent sentence space, and present negative results on the use of the model in language modeling.
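
The sketch below is a minimal, illustrative PyTorch version of the core idea: an RNN encoder maps a sentence to a Gaussian posterior over a latent vector z, an RNN decoder is conditioned on z, and the loss combines reconstruction with a KL term (annealed in the paper to avoid posterior collapse). All dimensions, and the way z is fed to the decoder, are assumptions rather than the paper's exact configuration.

```python
# Minimal sentence-VAE sketch (hypothetical dimensions and vocabulary).
import torch
import torch.nn as nn

class SentenceVAE(nn.Module):
    def __init__(self, vocab_size=10000, embed_dim=128, hidden_dim=256, latent_dim=32):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.encoder = nn.GRU(embed_dim, hidden_dim, batch_first=True)
        self.to_mu = nn.Linear(hidden_dim, latent_dim)
        self.to_logvar = nn.Linear(hidden_dim, latent_dim)
        self.latent_to_hidden = nn.Linear(latent_dim, hidden_dim)
        self.decoder = nn.GRU(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def encode(self, tokens):
        _, h = self.encoder(self.embed(tokens))      # h: (1, batch, hidden)
        h = h.squeeze(0)
        return self.to_mu(h), self.to_logvar(h)

    def reparameterize(self, mu, logvar):
        return mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)

    def decode(self, z, tokens):
        h0 = self.latent_to_hidden(z).unsqueeze(0)   # condition the decoder on z
        out, _ = self.decoder(self.embed(tokens), h0)
        return self.out(out)                         # per-step vocabulary logits

def vae_loss(logits, targets, mu, logvar, kl_weight=1.0):
    # Reconstruction term plus KL divergence to the standard-normal prior;
    # the paper anneals kl_weight from 0 toward 1 during training.
    rec = nn.functional.cross_entropy(logits.transpose(1, 2), targets)
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    return rec + kl_weight * kl

# Interpolation between two sentences, as in the paper's qualitative studies:
# z = (1 - t) * z_a + t * z_b for t in [0, 1], then decode each z greedily.
```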


North American Chapter of the Association for Computational Linguistics | 2018

A Broad-Coverage Challenge Corpus for Sentence Understanding through Inference

Adina Williams; Nikita Nangia; Samuel R. Bowman

This paper introduces the Multi-Genre Natural Language Inference (MultiNLI) corpus, a dataset designed for use in the development and evaluation of machine learning models for sentence understanding. In addition to being one of the largest corpora available for the task of NLI, at 433k examples, this corpus improves upon available resources in its coverage: it offers data from ten distinct genres of written and spoken English--making it possible to evaluate systems on nearly the full complexity of the language--and it offers an explicit setting for the evaluation of cross-genre domain adaptation.
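
A small sketch of the matched/mismatched evaluation setup the corpus enables, assuming the Hugging Face `datasets` mirror (`multi_nli`, with `validation_matched` and `validation_mismatched` splits and a `genre` field); the split and field names are conventions of that mirror, not of the paper.

```python
# Minimal sketch of the cross-genre evaluation setup, using the Hugging Face mirror.
from datasets import load_dataset

mnli = load_dataset("multi_nli")
train = mnli["train"]                       # genres seen in training
matched = mnli["validation_matched"]        # dev examples from the same genres
mismatched = mnli["validation_mismatched"]  # dev examples from held-out genres

print(sorted(set(train["genre"])))          # e.g. fiction, government, slate, telephone, travel
# The gap between matched and mismatched accuracy measures how well a model
# adapts across genres rather than exploiting genre-specific cues.
```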


Meeting of the Association for Computational Linguistics | 2016

A Fast Unified Model for Parsing and Sentence Understanding

Samuel R. Bowman; Jon Gauthier; Abhinav Rastogi; Raghav Gupta; Christopher D. Manning; Christopher Potts

Tree-structured neural networks exploit valuable syntactic parse information as they interpret the meanings of sentences. However, they suffer from two key technical problems that make them slow and unwieldy for large-scale NLP tasks: they usually operate on parsed sentences and they do not directly support batched computation. We address these issues by introducing the Stack-augmented Parser-Interpreter Neural Network (SPINN), which combines parsing and interpretation within a single tree-sequence hybrid model by integrating tree-structured sentence interpretation into the linear sequential structure of a shift-reduce parser. Our model supports batched computation for a speedup of up to 25× over other tree-structured models, and its integrated parser can operate on unparsed data with little loss in accuracy. We evaluate it on the Stanford NLI entailment task and show that it significantly outperforms other sentence-encoding models.
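
The core shift-reduce idea can be illustrated with a deliberately simplified, unbatched sketch: SHIFT pushes a word vector onto a stack, and REDUCE pops the top two vectors and composes them into a phrase vector. The actual model uses a TreeLSTM-style composition function, a tracking LSTM, and a thin-stack representation to obtain the batched speedups described above; the composition layer below is only a stand-in.

```python
# Simplified, unbatched shift-reduce composition in the spirit of SPINN.
import torch
import torch.nn as nn

SHIFT, REDUCE = 0, 1

class Composer(nn.Module):
    def __init__(self, dim=64):
        super().__init__()
        self.linear = nn.Linear(2 * dim, dim)

    def forward(self, left, right):
        # Stand-in composition: a single tanh layer over the two children.
        return torch.tanh(self.linear(torch.cat([left, right], dim=-1)))

def encode(word_vectors, transitions, composer):
    """word_vectors: list of (dim,) tensors; transitions: list of SHIFT/REDUCE ops."""
    stack, buffer = [], list(word_vectors)
    for op in transitions:
        if op == SHIFT:
            stack.append(buffer.pop(0))
        else:  # REDUCE: pop the top two vectors, push their composition
            right, left = stack.pop(), stack.pop()
            stack.append(composer(left, right))
    return stack[-1]  # vector for the whole sentence

# Example tree "( the ( cat sat ) )" -> SHIFT SHIFT SHIFT REDUCE REDUCE
composer = Composer(dim=64)
words = [torch.randn(64) for _ in range(3)]
sentence_vec = encode(words, [SHIFT, SHIFT, SHIFT, REDUCE, REDUCE], composer)
```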


Meeting of the Association for Computational Linguistics | 2017

Detecting and Explaining Crisis.

Robert Morris; Samuel R. Bowman

Individuals on social media may reveal themselves to be in various states of crisis (e.g. suicide, self-harm, abuse, or eating disorders). Detecting crisis from social media text automatically and accurately can have profound consequences. However, detecting a general state of crisis without explaining why has limited applications. An explanation in this context is a coherent, concise subset of the text that rationalizes the crisis detection. We explore several methods to detect and explain crisis using a combination of neural and non-neural techniques. We evaluate these techniques on a unique data set obtained from Koko, an anonymous emotional support network available through various messaging applications. We annotate a small subset of the samples labeled with crisis with corresponding explanations. Our best technique significantly outperforms the baseline for detection and explanation.
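
To make the detect-then-explain framing concrete, here is a hedged toy sketch (not one of the paper's models): a linear classifier scores a message for crisis, and the contiguous span of words contributing most to that score is surfaced as a candidate explanation. The training data and window size below are purely illustrative.

```python
# Toy detect-then-explain sketch: classify, then surface the highest-weight span.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

texts = ["I feel great today", "I can't cope and I want to hurt myself"]  # toy data
labels = [0, 1]                                                            # 1 = crisis

vec = CountVectorizer()
X = vec.fit_transform(texts)
clf = LogisticRegression().fit(X, labels)

def explain(text, window=4):
    """Return the `window`-word span with the largest summed crisis weight."""
    words = text.split()
    weights = dict(zip(vec.get_feature_names_out(), clf.coef_[0]))
    scores = [weights.get(w.lower(), 0.0) for w in words]
    best = max(range(max(1, len(words) - window + 1)),
               key=lambda i: sum(scores[i:i + window]))
    return " ".join(words[best:best + window])

print(explain("I can't cope and I want to hurt myself"))
```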


Meeting of the Association for Computational Linguistics | 2017

Sequential Attention: A Context-Aware Alignment Function for Machine Reading.

Sebastian Brarda; Philip Yeres; Samuel R. Bowman

In this paper we propose a neural network model with a novel Sequential Attention layer that extends soft attention by assigning weights to words in an input sequence in a way that takes into account not just how well that word matches a query, but how well surrounding words match. We evaluate this approach on the task of reading comprehension (on the Who did What and CNN datasets) and show that it dramatically improves a strong baseline--the Stanford Reader--and is competitive with the state of the art.
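
A rough PyTorch sketch of the idea: compute a raw match score per token, then run a bidirectional RNN over those scores so each token's final attention weight also reflects how well its neighbours match the query. Layer choices and dimensions here are illustrative assumptions, not the paper's exact architecture.

```python
# Illustrative context-aware attention: raw per-token scores are smoothed by an RNN
# before the softmax, so weights depend on how well surrounding words match too.
import torch
import torch.nn as nn

class SequentialAttention(nn.Module):
    def __init__(self, dim=128, score_hidden=64):
        super().__init__()
        self.scorer = nn.Bilinear(dim, dim, 1)             # raw token-query match score
        self.smoother = nn.GRU(1, score_hidden, batch_first=True, bidirectional=True)
        self.to_weight = nn.Linear(2 * score_hidden, 1)

    def forward(self, tokens, query):
        # tokens: (batch, seq, dim), query: (batch, dim)
        batch, seq, dim = tokens.shape
        q = query.unsqueeze(1).expand(batch, seq, dim)
        raw = self.scorer(tokens.reshape(-1, dim), q.reshape(-1, dim)).view(batch, seq, 1)
        smoothed, _ = self.smoother(raw)                   # scores see their context
        weights = torch.softmax(self.to_weight(smoothed).squeeze(-1), dim=1)
        return torch.einsum("bs,bsd->bd", weights, tokens) # attended passage summary

attn = SequentialAttention()
out = attn(torch.randn(2, 20, 128), torch.randn(2, 128))
print(out.shape)  # torch.Size([2, 128])
```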


arXiv: Computation and Language | 2015

Recursive Neural Networks Can Learn Logical Semantics

Samuel R. Bowman; Christopher Potts; Christopher D. Manning


Language Resources and Evaluation | 2014

A Gold Standard Dependency Corpus for English

Natalia Silveira; Timothy Dozat; Marie-Catherine de Marneffe; Samuel R. Bowman; Miriam Connor; John Bauer; Christopher D. Manning


arXiv: Computation and Language | 2013

Can recursive neural tensor networks learn logical reasoning?

Samuel R. Bowman


Archive | 2014

Recursive Neural Networks for Learning Logical Semantics.

Samuel R. Bowman; Christopher Potts; Christopher D. Manning

Collaboration


Dive into Samuel R. Bowman's collaborations.

Top Co-Authors

Guillaume Lample (Carnegie Mellon University)