
Publication


Featured research published by Will Monroe.


Empirical Methods in Natural Language Processing | 2016

Deep Reinforcement Learning for Dialogue Generation

Jiwei Li; Will Monroe; Alan Ritter; Daniel Jurafsky; Michel Galley; Jianfeng Gao

Recent neural models of dialogue generation offer great promise for generating responses for conversational agents, but tend to be shortsighted, predicting utterances one at a time while ignoring their influence on future outcomes. Modeling the future direction of a dialogue is crucial to generating coherent, interesting dialogues, a need which led traditional NLP models of dialogue to draw on reinforcement learning. In this paper, we show how to integrate these goals, applying deep reinforcement learning to model future reward in chatbot dialogue. The model simulates dialogues between two virtual agents, using policy gradient methods to reward sequences that display three useful conversational properties: informativity (non-repetitive turns), coherence, and ease of answering (related to forward-looking function). We evaluate our model on diversity and length, as well as with human judges, showing that the proposed algorithm generates more interactive responses and manages to foster a more sustained conversation in dialogue simulation. This work marks a first step towards learning a neural conversational model based on the long-term success of dialogues.
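
The reward design is the core of this approach: each generated turn is scored for ease of answering, information flow, and semantic coherence, and the combined score drives a policy-gradient (REINFORCE) update. The sketch below illustrates one way such a reward could be assembled; the helper functions (seq_logprob, embed_turn), the dull-response list, and the weights are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

# Hypothetical list of stock dull responses; penalising turns that make
# them likely encourages utterances that are easy to answer.
DULL_RESPONSES = ["i don't know what you're talking about", "i have no idea"]

def cosine(u, v):
    """Cosine similarity between two vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-8))

def turn_reward(turn, prev_turn_same_agent, prev_turn_other_agent,
                seq_logprob, embed_turn, weights=(0.25, 0.25, 0.5)):
    """Combine three conversational rewards into one scalar.

    Assumed interfaces:
      seq_logprob(source, target) -> length-normalised log p(target | source)
      embed_turn(turn)            -> vector representation of a turn
    """
    # 1) Ease of answering: penalise turns that make dull replies probable.
    r_ease = -np.mean([seq_logprob(turn, dull) for dull in DULL_RESPONSES])

    # 2) Information flow: penalise repeating the agent's own previous turn.
    sim = max(cosine(embed_turn(turn), embed_turn(prev_turn_same_agent)), 1e-8)
    r_flow = -np.log(sim)

    # 3) Semantic coherence: keep the turn predictable from (and predictive
    #    of) the other agent's previous turn.
    r_coherence = (seq_logprob(prev_turn_other_agent, turn)
                   + seq_logprob(turn, prev_turn_other_agent))

    w1, w2, w3 = weights
    return w1 * r_ease + w2 * r_flow + w3 * r_coherence

def reinforce_step(params, grad_logprob, reward, baseline, lr=1e-3):
    """REINFORCE: move parameters along the gradient of the sampled
    sequence's log-probability, scaled by (reward - baseline)."""
    return params + lr * (reward - baseline) * grad_logprob
```

In practice the log-probabilities and embeddings would come from the underlying seq2seq models trained in the dialogue simulation; the sketch only shows how the scalar reward is shaped before the policy-gradient update.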


Empirical Methods in Natural Language Processing | 2017

Adversarial Learning for Neural Dialogue Generation

Jiwei Li; Will Monroe; Tianlin Shi; Sébastien Jean; Alan Ritter; Daniel Jurafsky

We apply adversarial training to open-domain dialogue generation, training a system to produce sequences that are indistinguishable from human-generated dialogue utterances. We cast the task as a reinforcement learning problem where we jointly train two systems: a generative model to produce response sequences, and a discriminator, analogous to the human evaluator in the Turing test, to distinguish between the human-generated dialogues and the machine-generated ones. In this generative adversarial network approach, the outputs from the discriminator are used to encourage the system towards more human-like dialogue. Further, we investigate models for adversarial evaluation that use success in fooling an adversary as a dialogue evaluation metric, while avoiding a number of potential pitfalls. Experimental results on several metrics, including adversarial evaluation, demonstrate that the adversarially-trained system generates higher-quality responses than previous baselines.
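
The training loop alternates between updating the discriminator on human versus machine responses and rewarding the generator with the discriminator's score. A minimal sketch of that loop follows, assuming stand-in generator and discriminator objects with hypothetical sample / train_batch / prob_human / reinforce_update methods; it is an outline of the idea rather than the paper's code.

```python
def adversarial_step(generator, discriminator, histories, human_responses, lr=1e-3):
    """One alternating update of the discriminator and the generator.

    Assumed (hypothetical) interfaces:
      generator.sample(history) -> response string
      generator.reinforce_update(history, response, advantage, lr) -> None
      discriminator.train_batch(positives, negatives) -> None
      discriminator.prob_human(history, response) -> float in [0, 1]
    """
    # 1) Sample machine responses from the current generator policy.
    machine_responses = [generator.sample(h) for h in histories]

    # 2) Update the discriminator to separate human from machine responses.
    discriminator.train_batch(
        positives=list(zip(histories, human_responses)),
        negatives=list(zip(histories, machine_responses)),
    )

    # 3) Use the discriminator's belief that a response is human-generated
    #    as the reward for a policy-gradient update of the generator.
    baseline = 0.5  # constant baseline, for illustration only
    for history, response in zip(histories, machine_responses):
        reward = discriminator.prob_human(history, response)
        generator.reinforce_update(history, response, reward - baseline, lr=lr)
```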


Meeting of the Association for Computational Linguistics | 2014

Word Segmentation of Informal Arabic with Domain Adaptation

Will Monroe; Spence Green; Christopher D. Manning

Segmentation of clitics has been shown to improve accuracy on a variety of Arabic NLP tasks. However, state-of-the-art Arabic word segmenters are either limited to formal Modern Standard Arabic, performing poorly on Arabic text featuring dialectal vocabulary and grammar, or rely on linguistic knowledge that is hand-tuned for each dialect. We extend an existing MSA segmenter with a simple domain adaptation technique and new features in order to segment informal and dialectal Arabic text. Experiments show that our system outperforms existing systems on newswire, broadcast news and Egyptian dialect, improving segmentation F1 score on a recently released Egyptian Arabic corpus to 95.1%, compared to 90.8% for another segmenter designed specifically for Egyptian Arabic.
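
The abstract mentions a simple domain adaptation technique; one widely used "simple" option is Daumé III (2007)-style feature augmentation, in which every feature is duplicated into a shared copy and a domain-specific copy so the learner can share statistics across domains while still fitting domain-particular behaviour. The sketch below shows that idea for hypothetical character-level segmentation features; it illustrates the general technique, not necessarily the segmenter's exact feature set.

```python
def char_features(sentence, i):
    """Hypothetical character-level features for a segmentation model."""
    return {
        f"char={sentence[i]}": 1.0,
        f"prev={sentence[i - 1] if i > 0 else '<s>'}": 1.0,
        f"next={sentence[i + 1] if i + 1 < len(sentence) else '</s>'}": 1.0,
    }

def augmented_features(sentence, i, domain):
    """Copy each feature into a shared and a domain-specific namespace."""
    base = char_features(sentence, i)
    feats = {}
    for name, value in base.items():
        feats[f"general::{name}"] = value   # shared across all domains
        feats[f"{domain}::{name}"] = value  # e.g. domain = "msa" or "egyptian"
    return feats

# Example: augmented features for the third character of an Egyptian Arabic
# sentence would carry both "general::" and "egyptian::" copies:
# feats = augmented_features(sentence, 2, domain="egyptian")
```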


International Joint Conference on Natural Language Processing | 2015

Text to 3D Scene Generation with Rich Lexical Grounding

Angel X. Chang; Will Monroe; Manolis Savva; Christopher Potts; Christopher D. Manning

The ability to map descriptions of scenes to 3D geometric representations has many applications in areas such as art, education, and robotics. However, prior work on the text to 3D scene generation task has used manually specified object categories and language that identifies them. We introduce a dataset of 3D scenes annotated with natural language descriptions and learn from this data how to ground textual descriptions to physical objects. Our method successfully grounds a variety of lexical terms to concrete referents, and we show quantitatively that our method improves 3D scene generation over previous work using purely rule-based methods. We evaluate the fidelity and plausibility of 3D scenes generated with our grounding approach through human judgments. To ease evaluation on this task, we also introduce an automated metric that strongly correlates with human judgments.
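
Grounding lexical terms to physical objects can be illustrated with a simple co-occurrence model: count how often each description word appears in scenes containing each object category, then rank categories for a new term. The sketch below is an assumption-laden illustration of this idea (counting scheme and smoothing included), not the learning procedure used in the paper.

```python
from collections import Counter, defaultdict

def learn_grounding(annotated_scenes):
    """annotated_scenes: iterable of (description string, set of object categories)."""
    cooc = defaultdict(Counter)   # word -> category -> co-occurrence count
    cat_totals = Counter()        # category -> number of scenes containing it
    for description, categories in annotated_scenes:
        words = set(description.lower().split())
        for category in categories:
            cat_totals[category] += 1
            for word in words:
                cooc[word][category] += 1

    def ground(word, alpha=0.1):
        """Rank object categories by a smoothed association score with `word`."""
        counts = cooc.get(word.lower(), Counter())
        scores = {category: (counts[category] + alpha) / (cat_totals[category] + alpha)
                  for category in cat_totals}
        return sorted(scores, key=scores.get, reverse=True)

    return ground

# Toy usage:
# ground = learn_grounding([("a round table with a small lamp", {"table", "lamp"}),
#                           ("a wooden desk and an office chair", {"desk", "chair"})])
# ground("lamp")  # categories ranked by association with the word "lamp"
```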


arXiv: Computation and Language | 2016

Understanding Neural Networks through Representation Erasure.

Jiwei Li; Will Monroe; Daniel Jurafsky


arXiv: Computation and Language | 2016

A Simple, Fast Diverse Decoding Algorithm for Neural Generation.

Jiwei Li; Will Monroe; Daniel Jurafsky


arXiv: Computation and Language | 2015

Learning in the Rational Speech Acts Model.

Will Monroe; Christopher Potts


arXiv: Computation and Language | 2017

Learning to Decode for Future Success.

Jiwei Li; Will Monroe; Daniel Jurafsky


Transactions of the Association for Computational Linguistics | 2017

Colors in Context: A Pragmatic Neural Model for Grounded Language Understanding

Will Monroe; Robert X. D. Hawkins; Noah D. Goodman; Christopher Potts


Empirical Methods in Natural Language Processing | 2016

Learning to Generate Compositional Color Descriptions.

Will Monroe; Noah D. Goodman; Christopher Potts
