
Publication


Featured research published by Philip Bachman.


Meeting of the Association for Computational Linguistics | 2017

NewsQA: A Machine Comprehension Dataset

Adam Trischler; Tong Wang; Xingdi Yuan; Justin Harris; Alessandro Sordoni; Philip Bachman; Kaheer Suleman

We present NewsQA, a challenging machine comprehension dataset of over 100,000 human-generated question-answer pairs. Crowdworkers supply questions and answers based on a set of over 10,000 news articles from CNN, with answers consisting of spans of text in the articles. We collect this dataset through a four-stage process designed to solicit exploratory questions that require reasoning. Analysis confirms that NewsQA demands abilities beyond simple word matching and recognizing textual entailment. We measure human performance on the dataset and compare it to several strong neural models. The performance gap between humans and machines (13.3% F1) indicates that significant progress can be made on NewsQA through future research. The dataset is freely available online.
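The 13.3% human-machine gap above is reported in token-level F1, the standard span-overlap metric for extractive question answering. A minimal sketch of how such a score is computed (the function name and tokenization are illustrative, not the dataset's official scorer):

```python
from collections import Counter

def token_f1(prediction: str, reference: str) -> float:
    """Token-overlap F1 between a predicted answer span and a reference span."""
    pred_tokens = prediction.lower().split()
    ref_tokens = reference.lower().split()
    # multiset intersection counts each shared token at most min(#pred, #ref) times
    common = Counter(pred_tokens) & Counter(ref_tokens)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(ref_tokens)
    return 2 * precision * recall / (precision + recall)

print(round(token_f1("the White House", "White House spokesman"), 2))  # 0.67
```

Because answers are text spans, partial credit for overlapping but non-identical spans is exactly what this metric provides.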


Empirical Methods in Natural Language Processing | 2016

Natural Language Comprehension with the EpiReader

Adam Trischler; Zheng Ye; Xingdi Yuan; Philip Bachman; Alessandro Sordoni; Kaheer Suleman

We present the EpiReader, a novel model for machine comprehension of text. Machine comprehension of unstructured, real-world text is a major research goal for natural language processing. Current tests of machine comprehension pose questions whose answers can be inferred from some supporting text, and evaluate a model's response to the questions. The EpiReader is an end-to-end neural model comprising two components: the first component proposes a small set of candidate answers after comparing a question to its supporting text, and the second component formulates hypotheses using the proposed candidates and the question, then reranks the hypotheses based on their estimated concordance with the supporting text. We present experiments demonstrating that the EpiReader sets a new state-of-the-art on the CNN and Children's Book Test machine comprehension benchmarks, outperforming previous neural models by a significant margin.
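The two-stage structure described in the abstract can be sketched as a generic propose-then-rerank pipeline. The `@blank` cloze placeholder and both callback signatures are illustrative assumptions, not the paper's actual neural components:

```python
def propose_then_rerank(question, passage, propose, score, k=5):
    """Two-stage answer selection in the spirit of the abstract: a cheap
    proposer shortlists k candidates, then a scorer reranks the hypotheses
    formed by filling each candidate into the question."""
    candidates = propose(question, passage, k)           # stage 1: shortlist
    hypotheses = [(c, score(question.replace("@blank", c), passage))
                  for c in candidates]                   # stage 2: rerank
    return max(hypotheses, key=lambda h: h[1])[0]

# toy demo: propose the first k passage words, score by substring match
passage = "the cat sat on the mat"
print(propose_then_rerank("the @blank sat", passage,
                          propose=lambda q, p, k: p.split()[:k],
                          score=lambda h, p: float(h in p)))
```

In the paper both stages are neural networks trained end-to-end; the skeleton only shows the data flow between them.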


Meeting of the Association for Computational Linguistics | 2016

A Parallel-Hierarchical Model for Machine Comprehension on Sparse Data

Adam Trischler; Zheng Ye; Xingdi Yuan; Jing He; Philip Bachman

Understanding unstructured text is a major goal within natural language processing. Comprehension tests pose questions based on short text passages to evaluate such understanding. In this work, we investigate machine comprehension on the challenging MCTest benchmark. Partly because of its limited size, prior work on MCTest has focused mainly on engineering better features. We tackle the dataset with a neural approach, harnessing simple neural networks arranged in a parallel hierarchy. The parallel hierarchy enables our model to compare the passage, question, and answer from a variety of trainable perspectives, as opposed to using a manually designed, rigid feature set. Perspectives range from the word level to sentence fragments to sequences of sentences; the networks operate only on word-embedding representations of text. When trained with a methodology designed to help cope with limited training data, our Parallel-Hierarchical model sets a new state of the art for MCTest, outperforming previous feature-engineered approaches slightly and previous neural approaches by a significant margin (over 15% absolute).


Meeting of the Association for Computational Linguistics | 2017

Machine Comprehension by Text-to-Text Neural Question Generation

Xingdi Yuan; Tong Wang; Caglar Gulcehre; Alessandro Sordoni; Philip Bachman; Saizheng Zhang; Sandeep Subramanian; Adam Trischler

We propose a recurrent neural model that generates natural-language questions from documents, conditioned on answers. We show how to train the model using a combination of supervised and reinforcement learning. After teacher forcing for standard maximum likelihood training, we fine-tune the model using policy gradient techniques to maximize several rewards that measure question quality. Most notably, one of these rewards is the performance of a question-answering system. We motivate question generation as a means to improve the performance of question answering systems. Our model is trained and evaluated on the recent question-answering dataset SQuAD.
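The policy-gradient fine-tuning step described above can be illustrated with a single REINFORCE update on a toy categorical policy. The function name, the flat logit parameterization, and the caller-supplied scalar reward are all simplifications; in the paper the policy is a recurrent generator and one reward is a downstream QA system's score:

```python
import math, random

def reinforce_step(logits, reward, baseline, lr=0.1, rng=random):
    """One REINFORCE update: sample an action from softmax(logits), then
    nudge its log-probability in proportion to (reward - baseline)."""
    exps = [math.exp(l) for l in logits]
    z = sum(exps)
    probs = [e / z for e in exps]
    # sample an action from the policy distribution
    r, action, acc = rng.random(), len(probs) - 1, 0.0
    for i, p in enumerate(probs):
        acc += p
        if r <= acc:
            action = i
            break
    advantage = reward - baseline
    # gradient of log pi(action) w.r.t. the logits is (one_hot(action) - probs)
    new_logits = [l + lr * advantage * ((1.0 if i == action else 0.0) - probs[i])
                  for i, l in enumerate(logits)]
    return action, new_logits
```

A positive advantage raises the sampled action's logit and lowers the others, which is exactly the mechanism that lets a non-differentiable reward (such as a QA system's answer accuracy) shape the generator after teacher forcing.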


European Conference on Machine Learning | 2013

Greedy Confidence Pursuit: A Pragmatic Approach to Multi-bandit Optimization

Philip Bachman; Doina Precup

We address the practical problem of maximizing the number of high-confidence results produced among multiple experiments sharing an exhaustible pool of resources. We formalize this problem in the framework of bandit optimization as follows: given a set of multiple multi-armed bandits and a budget on the total number of trials allocated among them, select the top-m arms (with high confidence) for as many of the bandits as possible. To solve this problem, which we call greedy confidence pursuit, we develop a method based on posterior sampling. We show empirically that our method outperforms existing methods for top-m selection in single bandits, which has been studied previously, and improves on baseline methods for the full greedy confidence pursuit problem, which has not been studied previously.
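The posterior-sampling idea behind the method can be sketched for a single Bernoulli bandit: keep a Beta posterior per arm, sample from the posteriors, and pull among the sampled top-m. This is a generic Thompson-sampling sketch under those assumptions, not the authors' exact allocation rule across multiple bandits:

```python
import random

def topm_posterior_sampling(true_means, m, budget, seed=0):
    """Identify the top-m arms of a Bernoulli bandit by posterior sampling."""
    rng = random.Random(seed)
    n = len(true_means)
    wins = [1] * n    # Beta(1, 1) uniform prior per arm
    losses = [1] * n
    for _ in range(budget):
        # draw one plausible mean per arm from its Beta posterior
        samples = [rng.betavariate(wins[i], losses[i]) for i in range(n)]
        top = sorted(range(n), key=lambda i: samples[i], reverse=True)[:m]
        arm = rng.choice(top)               # pull one of the sampled top-m arms
        if rng.random() < true_means[arm]:  # simulate the Bernoulli outcome
            wins[arm] += 1
        else:
            losses[arm] += 1
    post_means = [wins[i] / (wins[i] + losses[i]) for i in range(n)]
    return sorted(range(n), key=lambda i: post_means[i], reverse=True)[:m]

print(topm_posterior_sampling([0.9, 0.8, 0.3, 0.2], m=2, budget=2000))
```

Sampling from the posterior (rather than ranking by point estimates) is what keeps trials flowing to arms whose membership in the top-m is still uncertain, which matches the paper's emphasis on spending a shared budget where confidence is lacking.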


National Conference on Artificial Intelligence | 2018

Deep Reinforcement Learning that Matters

Peter Henderson; Riashat Islam; Joelle Pineau; David Meger; Doina Precup; Philip Bachman


Neural Information Processing Systems | 2014

Learning with Pseudo-Ensembles

Philip Bachman; Ouais Alsharif; Doina Precup


International Conference on Learning Representations | 2017

Calibrating Energy-based Generative Adversarial Networks

Zihang Dai; Amjad Almahairi; Philip Bachman; Eduard H. Hovy; Aaron C. Courville


Neural Information Processing Systems | 2016

An Architecture for Deep, Hierarchical Generative Models

Philip Bachman


Neural Information Processing Systems | 2015

Data Generation as Sequential Decision Making

Philip Bachman; Doina Precup

Collaboration


Dive into Philip Bachman's collaborations.

Top Co-Authors
