
Publication


Featured research published by Eunsol Choi.


Meeting of the Association for Computational Linguistics | 2017

TriviaQA: A Large Scale Distantly Supervised Challenge Dataset for Reading Comprehension

Mandar Joshi; Eunsol Choi; Daniel S. Weld; Luke Zettlemoyer

We present TriviaQA, a challenging reading comprehension dataset containing over 650K question-answer-evidence triples. TriviaQA includes 95K question-answer pairs authored by trivia enthusiasts and independently gathered evidence documents, six per question on average, that provide high-quality distant supervision for answering the questions. We show that, in comparison to other recently introduced large-scale datasets, TriviaQA (1) has relatively complex, compositional questions, (2) has considerable syntactic and lexical variability between questions and corresponding answer-evidence sentences, and (3) requires more cross-sentence reasoning to find answers. We also present two baseline algorithms: a feature-based classifier and a state-of-the-art neural network that performs well on the SQuAD reading comprehension task. Neither approach comes close to human performance (23% and 40% vs. 80%), suggesting that TriviaQA is a challenging testbed worth significant future study. Data and code are available at this http URL.
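
To make the distant-supervision setup concrete, here is a minimal sketch of how one might pair questions with supporting documents. The record layout and field names below are an illustrative simplification, not the dataset's actual schema.

```python
# Hypothetical, simplified record for one TriviaQA-style example;
# the real dataset's field names and nesting differ.
example = {
    "question": "Which US president was nicknamed 'Old Hickory'?",
    "answer": {"value": "Andrew Jackson", "aliases": ["Jackson, Andrew"]},
    # Independently gathered evidence documents (six per question on average).
    "evidence": [
        "Andrew Jackson was the seventh president of the United States ...",
        "... his toughness earned him the nickname 'Old Hickory' ...",
    ],
}

def distant_supervision_pairs(record):
    """Yield (question, document) pairs whose document contains the answer.

    Because evidence was gathered independently of the questions, a document
    mentioning the answer string is only *distant* supervision: it probably,
    but not certainly, supports the answer.
    """
    answers = [record["answer"]["value"]] + record["answer"]["aliases"]
    for doc in record["evidence"]:
        if any(a.lower() in doc.lower() for a in answers):
            yield record["question"], doc

for question, doc in distant_supervision_pairs(example):
    print(question, "->", doc[:50])
```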


Conference on Computational Natural Language Learning | 2017

Zero-Shot Relation Extraction via Reading Comprehension

Omer Levy; Minjoon Seo; Eunsol Choi; Luke Zettlemoyer

We show that relation extraction can be reduced to answering simple reading comprehension questions, by associating one or more natural-language questions with each relation slot. This reduction has several advantages: we can (1) learn relation-extraction models by extending recent neural reading-comprehension techniques, (2) build very large training sets for those models by combining relation-specific crowd-sourced questions with distant supervision, and even (3) do zero-shot learning by extracting new relation types that are only specified at test time, for which we have no labeled training examples. Experiments on a Wikipedia slot-filling task demonstrate that the approach can generalize to new questions for known relation types with high accuracy, and that zero-shot generalization to unseen relation types is possible, at lower accuracy levels, setting the bar for future work on this task.
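
The reduction itself is simple enough to sketch in a few lines. The templates below and the `qa_model` interface are illustrative assumptions, not the paper's exact resources.

```python
# One or more question templates per relation slot; "X" stands for the
# entity whose slot value we want to extract.
TEMPLATES = {
    "educated_at": ["Where did X study?", "Which university did X attend?"],
    "occupation":  ["What is X's job?", "What does X do for a living?"],
}

def extract(relation, entity, context, qa_model):
    """Return the relation's slot filler from `context`, or None.

    qa_model(question, context) is assumed to return an answer span or
    None when no answer is present. Being able to answer "no relation"
    is what makes zero-shot extraction possible for relation types that
    were never seen in training: only their templates are needed.
    """
    for template in TEMPLATES[relation]:
        question = template.replace("X", entity)
        answer = qa_model(question, context)
        if answer is not None:
            return answer
    return None

# Toy usage with a trivial "QA model" that knows exactly one fact:
toy_qa = lambda q, ctx: "Cornell" if "study" in q and "Cornell" in ctx else None
print(extract("educated_at", "Ada", "Ada went to study at Cornell.", toy_qa))
```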


Meeting of the Association for Computational Linguistics | 2016

Document-level Sentiment Inference with Social, Faction, and Discourse Context

Eunsol Choi; Hannah Rashkin; Luke Zettlemoyer; Yejin Choi

We present a new approach for document-level sentiment inference, where the goal is to predict directed opinions (who feels positively or negatively towards whom) for all entities mentioned in a text. To encourage more complete and consistent predictions, we introduce an integer linear program (ILP) that jointly models (1) sentence- and discourse-level sentiment cues, (2) factual evidence about entity factions, and (3) global constraints based on social science theories such as homophily, social balance, and reciprocity. Together, these cues allow for rich inference across groups of entities, including, for example, that CEOs and the companies they lead are likely to have similar sentiment towards others. We evaluate performance on new, densely labeled data that provides supervision for all pairs, complementing previous work that only labeled pairs mentioned in the same sentence. Experiments demonstrate that the global model outperforms sentence-level baselines by providing more coherent predictions across sets of related entities.
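
A minimal sketch of a joint sentiment ILP in this spirit, using the PuLP library. The entities, cue scores, and the choice to encode reciprocity and faction homophily as hard constraints are all simplifying assumptions; the paper's full model is richer and uses soft constraints.

```python
from itertools import permutations
from pulp import LpProblem, LpMaximize, LpVariable, lpSum

entities = ["ceo", "company", "critic"]
faction = {"ceo": 0, "company": 0, "critic": 1}  # assumed faction labels

# Local sentence-level cue scores in [-1, 1]: >0 suggests a positive
# directed opinion from source to target, <0 a negative one.
cue = {("critic", "company"): -0.8, ("ceo", "company"): 0.6}

pairs = list(permutations(entities, 2))
pos = {p: LpVariable(f"pos_{p[0]}_{p[1]}", cat="Binary") for p in pairs}
neg = {p: LpVariable(f"neg_{p[0]}_{p[1]}", cat="Binary") for p in pairs}

prob = LpProblem("document_sentiment", LpMaximize)
# Objective: agree with local cues; the small abstention penalty keeps
# pairs with no evidence unlabeled rather than arbitrarily labeled.
prob += lpSum(cue.get(p, 0.0) * (pos[p] - neg[p]) - 0.01 * (pos[p] + neg[p])
              for p in pairs)

for p in pairs:
    prob += pos[p] + neg[p] <= 1          # at most one polarity per pair
    prob += pos[p] == pos[(p[1], p[0])]   # reciprocity (hard, for brevity)
    prob += neg[p] == neg[(p[1], p[0])]
    if faction[p[0]] == faction[p[1]]:
        prob += neg[p] == 0               # homophily: no intra-faction negativity

prob.solve()
for p in pairs:
    if pos[p].value() == 1 or neg[p].value() == 1:
        print(p, "+" if pos[p].value() == 1 else "-")
```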


Meeting of the Association for Computational Linguistics | 2017

Coarse-to-Fine Question Answering for Long Documents

Eunsol Choi; Daniel Hewlett; Jakob Uszkoreit; Illia Polosukhin; Alexandre Lacoste; Jonathan Berant

We present a framework for question answering that can efficiently scale to longer documents while maintaining or even improving the performance of state-of-the-art models. While most successful approaches for reading comprehension rely on recurrent neural networks (RNNs), running them over long documents is prohibitively slow because it is difficult to parallelize over sequences. Inspired by how people first skim a document, identify relevant parts, and carefully read those parts to produce an answer, we combine a coarse, fast model for selecting relevant sentences with a more expensive RNN for producing the answer from those sentences. We treat sentence selection as a latent variable trained jointly from the answer alone using reinforcement learning. Experiments demonstrate state-of-the-art performance on a challenging subset of the WikiReading dataset and on a new dataset, while speeding up the model by 3.5x-6.7x.
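
A minimal sketch of this coarse-to-fine pipeline. The word-overlap scorer and the toy reader are illustrative stand-ins: the paper learns the sentence selector jointly with the answer via reinforcement learning rather than using a fixed heuristic.

```python
import re

def tokens(text):
    return set(re.findall(r"[a-z]+", text.lower()))

def coarse_scores(question, sentences):
    """Fast, parallelizable relevance scores: word overlap with the question."""
    q = tokens(question)
    return [len(q & tokens(s)) for s in sentences]

def answer(question, document, reader, k=3):
    sentences = document.split(". ")
    scores = coarse_scores(question, sentences)
    # Only the k highest-scoring sentences reach the expensive reader, so
    # its cost no longer grows with the full document length.
    top_k = sorted(range(len(sentences)), key=lambda i: -scores[i])[:k]
    selected = ". ".join(sentences[i] for i in sorted(top_k))
    return reader(question, selected)

# Toy usage; a real system would pass a trained RNN reader instead.
doc = ("The capital of France is Paris. The Seine flows through it. "
       "Berlin is the capital of Germany.")
print(answer("What is the capital of France?", doc, lambda q, ctx: ctx, k=1))
# During training, sentence selection is treated as a latent variable:
# sample a selection, run the reader, and use answer correctness as a
# REINFORCE reward for the selector's parameters.
```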


Meeting of the Association for Computational Linguistics | 2012

Hedge Detection as a Lens on Framing in the GMO Debates: A Position Paper

Eunsol Choi; Chenhao Tan; Lillian Lee; Cristian Danescu-Niculescu-Mizil; Jennifer Spindel


Empirical Methods in Natural Language Processing | 2017

Truth of Varying Shades: Analyzing Language in Fake News and Political Fact-Checking

Hannah Rashkin; Eunsol Choi; Jin Yea Jang; Svitlana Volkova; Yejin Choi


arXiv: Computation and Language | 2016

Hierarchical Question Answering for Long Documents

Eunsol Choi; Daniel Hewlett; Alexandre Lacoste; Illia Polosukhin; Jakob Uszkoreit; Jonathan Berant


Empirical Methods in Natural Language Processing | 2018

QuAC: Question Answering in Context

Eunsol Choi; He He; Mohit Iyyer; Mark Yatskar; Wen-tau Yih; Yejin Choi; Percy Liang; Luke Zettlemoyer


Meeting of the Association for Computational Linguistics | 2018

Ultra-Fine Entity Typing

Eunsol Choi; Omer Levy; Yejin Choi; Luke Zettlemoyer


Empirical Methods in Natural Language Processing | 2018

Neural Metaphor Detection in Context

Ge Gao; Eunsol Choi; Yejin Choi; Luke Zettlemoyer

Collaboration


Eunsol Choi's most frequent co-authors.

Top Co-Authors

Yejin Choi
University of Washington

Omer Levy
Technion – Israel Institute of Technology

Daniel S. Weld
University of Washington