Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Kenton Lee is active.

Publication


Featured research published by Kenton Lee.


Empirical Methods in Natural Language Processing | 2015

Broad-coverage CCG Semantic Parsing with AMR

Yoav Artzi; Kenton Lee; Luke Zettlemoyer

We propose a grammar induction technique for AMR semantic parsing. While previous grammar induction techniques were designed to re-learn a new parser for each target application, the recently annotated AMR Bank provides a unique opportunity to induce a single model for understanding broad-coverage newswire text and support a wide range of applications. We present a new model that combines CCG parsing to recover compositional aspects of meaning and a factor graph to model non-compositional phenomena, such as anaphoric dependencies. Our approach achieves 66.2 Smatch F1 score on the AMR bank, significantly outperforming the previous state of the art.
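The division of labor between the compositional CCG derivation and the factor graph can be pictured with a toy antecedent-selection step for an anaphoric node. A minimal sketch, assuming invented factors and hand-set weights (the paper learns a joint model; none of the names below come from its implementation):

```python
# Hypothetical sketch: score candidate antecedents for an anaphoric AMR
# node as a sum of factor scores, the way a factor-graph model combines
# evidence. Features and weights are invented placeholders.

def factor_score(mention, antecedent, weights):
    """Sum the weights of the binary factors that fire for this pair."""
    features = {
        "same_concept": mention["concept"] == antecedent["concept"],
        "nearby": abs(mention["pos"] - antecedent["pos"]) <= 3,
    }
    return sum(weights[name] for name, fired in features.items() if fired)

def resolve_anaphora(mention, candidates, weights):
    """Pick the highest-scoring antecedent for a pronoun-like node."""
    return max(candidates, key=lambda c: factor_score(mention, c, weights))

weights = {"same_concept": 2.0, "nearby": 1.0}
mention = {"concept": "person", "pos": 7}
candidates = [{"concept": "person", "pos": 2}, {"concept": "city", "pos": 5}]
print(resolve_anaphora(mention, candidates, weights))  # the "person" at pos 2
```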


North American Chapter of the Association for Computational Linguistics | 2018

Deep Contextualized Word Representations

Matthew E. Peters; Mark Neumann; Mohit Iyyer; Matt Gardner; Christopher G. Clark; Kenton Lee; Luke Zettlemoyer

We introduce a new type of deep contextualized word representation that models both (1) complex characteristics of word use (e.g., syntax and semantics), and (2) how these uses vary across linguistic contexts (i.e., to model polysemy). Our word vectors are learned functions of the internal states of a deep bidirectional language model (biLM), which is pre-trained on a large text corpus. We show that these representations can be easily added to existing models and significantly improve the state of the art across six challenging NLP problems, including question answering, textual entailment and sentiment analysis. We also present an analysis showing that exposing the deep internals of the pre-trained network is crucial, allowing downstream models to mix different types of semi-supervision signals.
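The representation itself is a task-specific weighted sum of the biLM's layer activations for each token. A minimal numpy sketch under assumed shapes; the layer count, dimensionality, and weights are placeholders rather than the released model:

```python
import numpy as np

# One token's contextual vector as a scaled, softmax-weighted sum of the
# pre-trained biLM's per-layer states. All values are illustrative.

num_layers, dim = 3, 4                           # layers (incl. embeddings), hidden size
layer_states = np.random.randn(num_layers, dim)  # the biLM's states for one token

s = np.exp(np.zeros(num_layers))                 # scalar mixing weights, softmax-normalized
s /= s.sum()
gamma = 1.0                                      # task-specific scale

elmo_vector = gamma * (s[:, None] * layer_states).sum(axis=0)
print(elmo_vector.shape)                         # (4,): one vector per token
```

The mixing weights and scale are learned with the downstream task, which is what lets different tasks trade off the more syntactic lower layers against the more semantic upper ones.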


Meeting of the Association for Computational Linguistics | 2014

Context-dependent Semantic Parsing for Time Expressions

Kenton Lee; Yoav Artzi; Jesse Dodge; Luke Zettlemoyer

We present an approach for learning context-dependent semantic parsers to identify and interpret time expressions. We use a Combinatory Categorial Grammar to construct compositional meaning representations, while considering contextual cues, such as the document creation time and the tense of the governing verb, to compute the final time values. Experiments on benchmark datasets show that our approach outperforms previous state-of-the-art systems, with error reductions of 13% to 21% in end-to-end performance.
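As a toy illustration of these cues, consider resolving the bare word "Friday": the document creation time anchors the computation, and the tense of the governing verb decides whether to look backward or forward. The rule below is a hypothetical simplification of what the learned CCG derivations compute:

```python
from datetime import date, timedelta

# Hypothetical, simplified resolver: "arrived on Friday" (past tense) maps
# to the most recent Friday before the document creation time, while
# "arrives on Friday" maps to the next one.

def resolve_friday(doc_creation: date, past_tense: bool) -> date:
    if past_tense:
        back = (doc_creation.weekday() - 4) % 7 or 7   # Friday is weekday 4
        return doc_creation - timedelta(days=back)
    ahead = (4 - doc_creation.weekday()) % 7 or 7
    return doc_creation + timedelta(days=ahead)

created = date(2014, 6, 18)                            # a Wednesday
print(resolve_friday(created, past_tense=True))        # 2014-06-13
print(resolve_friday(created, past_tense=False))       # 2014-06-20
```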


Meeting of the Association for Computational Linguistics | 2017

Deep Semantic Role Labeling: What Works and What’s Next

Luheng He; Kenton Lee; Mike Lewis; Luke Zettlemoyer

We introduce a new deep learning model for semantic role labeling (SRL) that significantly improves the state of the art, along with detailed analyses to reveal its strengths and limitations. We use a deep highway BiLSTM architecture with constrained decoding, while observing a number of recent best practices for initialization and regularization. Our 8-layer ensemble model achieves 83.2 F1 on the CoNLL 2005 test set and 83.4 F1 on CoNLL 2012, roughly a 10% relative error reduction over the previous state of the art. Extensive empirical analysis of these gains shows that (1) deep models excel at recovering long-distance dependencies but can still make surprisingly obvious errors, and (2) there is still room for syntactic parsers to improve these results.
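The highway connections that keep an 8-layer stack trainable interpolate each layer's candidate output with its input through a learned gate. A minimal numpy sketch of the generic highway transform (the paper gates inside the LSTM cell itself; the weights here are random placeholders):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def highway(x, W_h, W_t):
    h = np.tanh(W_h @ x)           # candidate output H(x)
    t = sigmoid(W_t @ x)           # transform gate t(x)
    return t * h + (1.0 - t) * x   # mix of transform and carry-through

dim = 8
x = np.random.randn(dim)
y = highway(x, np.random.randn(dim, dim), np.random.randn(dim, dim))
print(y.shape)                     # (8,): same shape, so layers stack freely
```

When the gate saturates near zero, the layer reduces to the identity, which is what lets gradients pass through many stacked layers during training.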


North American Chapter of the Association for Computational Linguistics | 2016

LSTM CCG Parsing

Mike Lewis; Kenton Lee; Luke Zettlemoyer

We demonstrate that a state-of-the-art parser can be built using only a lexical tagging model and a deterministic grammar, with no explicit model of bi-lexical dependencies. Instead, all dependencies are implicitly encoded in an LSTM supertagger that assigns CCG lexical categories. The parser significantly outperforms all previously published CCG results, supports efficient and optimal A* decoding, and benefits substantially from semi-supervised tri-training. We give a detailed analysis, demonstrating that the parser can recover long-range dependencies with high accuracy and that the semi-supervised learning enables significant accuracy gains. By running the LSTM on a GPU, we are able to parse over 2600 sentences per second while improving state-of-the-art accuracy by 1.1 F1 in domain and up to 4.5 F1 out of domain.
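Because the grammar is deterministic, a parse's score decomposes into per-word supertag log-probabilities, so summing each remaining word's best tag score gives an admissible heuristic for optimal A* decoding. A toy sketch with made-up scores:

```python
# Per-word CCG category log-probabilities (invented for illustration).
tag_logprobs = [
    {"NP": -0.1, "N": -2.5},           # "John"
    {"(S\\NP)/NP": -0.3, "N": -1.9},   # "saw"
    {"NP": -0.2, "N": -2.2},           # "Mary"
]

def heuristic(next_word):
    """Optimistic bound: best possible tag score for each remaining word."""
    return sum(max(scores.values()) for scores in tag_logprobs[next_word:])

partial_score = tag_logprobs[0]["NP"]     # a chart item covering word 0
priority = partial_score + heuristic(1)   # A* priority: inside score + bound
print(priority)                           # -0.6
```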


Empirical Methods in Natural Language Processing | 2017

End-to-end Neural Coreference Resolution

Kenton Lee; Luheng He; Mike Lewis; Luke Zettlemoyer

We introduce the first end-to-end coreference resolution model and show that it significantly outperforms all previous work without using a syntactic parser or hand-engineered mention detector. The key idea is to directly consider all spans in a document as potential mentions and learn distributions over possible antecedents for each. The model computes span embeddings that combine context-dependent boundary representations with a head-finding attention mechanism. It is trained to maximize the marginal likelihood of gold antecedent spans from coreference clusters and is factored to enable aggressive pruning of potential mentions. Experiments demonstrate state-of-the-art performance, with a gain of 1.5 F1 on the OntoNotes benchmark and 3.1 F1 using a 5-model ensemble, despite the fact that this is the first approach to be successfully trained with no external resources.
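The factorization behind the aggressive pruning scores each pair as s_m(i) + s_m(j) + s_a(i, j), so cheap unary mention scores can discard most spans before any pairwise work. A sketch with random linear scorers standing in for the paper's feed-forward networks:

```python
import numpy as np

# Toy span-pair scoring; embeddings and scorers are random placeholders.
rng = np.random.default_rng(0)
dim = 16
spans = rng.standard_normal((4, dim))   # embeddings for 4 candidate spans
w_m = rng.standard_normal(dim)          # unary mention scorer
w_a = rng.standard_normal(2 * dim)      # pairwise antecedent scorer

def mention_score(g):
    return w_m @ g

def pair_score(g_i, g_j):
    return mention_score(g_i) + mention_score(g_j) + w_a @ np.concatenate([g_i, g_j])

i = 3                                   # current span; antecedents must precede it
scores = [0.0] + [pair_score(spans[j], spans[i]) for j in range(i)]
print(int(np.argmax(scores)))           # index 0 is the dummy "no antecedent"
```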


Empirical Methods in Natural Language Processing | 2016

Global Neural CCG Parsing with Optimality Guarantees

Kenton Lee; Mike Lewis; Luke Zettlemoyer

We introduce the first global recursive neural parsing model with optimality guarantees during decoding. To support global features, we give up dynamic programs and instead search directly in the space of all possible subtrees. Although this space is exponentially large in the sentence length, we show it is possible to learn an efficient A* parser. We augment existing parsing models, which have informative bounds on the outside score, with a global model that has loose bounds but only needs to model non-local phenomena. The global model is trained with a new objective that encourages the parser to explore a tiny fraction of the search space. The approach is applied to CCG parsing, improving state-of-the-art accuracy by 0.4 F1. The parser finds the optimal parse for 99.9% of held-out sentences, exploring on average only 190 subtrees.
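The guarantee rests on the standard A* argument: if each item's priority is its score so far plus an optimistic bound on its best completion, the first complete hypothesis popped from the queue is globally optimal. A toy demonstration over a made-up three-step derivation (not real CCG items):

```python
import heapq

# Candidate scores at each of three derivation steps (invented numbers).
step_scores = [
    [-0.2, -1.0],
    [-0.5, -0.4],
    [-0.3, -2.0],
]
# Admissible bound: the best possible score for every remaining step.
bounds = [sum(max(s) for s in step_scores[i:]) for i in range(4)]

heap = [(-(0.0 + bounds[0]), 0.0, 0)]       # (negated priority, score, step)
while heap:
    _, score, step = heapq.heappop(heap)
    if step == len(step_scores):            # first complete item is optimal
        print("optimal score:", score)      # -0.9 = -0.2 + -0.4 + -0.3
        break
    for s in step_scores[step]:
        child = score + s
        heapq.heappush(heap, (-(child + bounds[step + 1]), child, step + 1))
```

The paper's twist is to pair a tight bound from the local parsing model with a looser bound on the global model, keeping the combined heuristic admissible while the search still visits only a tiny number of subtrees.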


Empirical Methods in Natural Language Processing | 2015

Event Detection and Factuality Assessment with Non-Expert Supervision

Kenton Lee; Yoav Artzi; Yejin Choi; Luke Zettlemoyer

Events are communicated in natural language with varying degrees of certainty. For example, if you are “hoping for a raise,” it may be somewhat less likely than if you are “expecting” one. To study these distinctions, we present scalable, high-quality annotation schemes for event detection and fine-grained factuality assessment. We find that non-experts, with very little training, can reliably provide judgments about what events are mentioned and the extent to which the author thinks they actually happened. We also show how such data enables the development of regression models for fine-grained scalar factuality predictions that outperform strong baselines.
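Fine-grained factuality prediction then reduces to regression onto a scalar scale, e.g. from -3 ("certainly did not happen") to +3 ("certainly happened"). A least-squares sketch with invented cue features and training pairs, far simpler than the paper's models:

```python
import numpy as np

cues = ["expect", "hope", "deny", "confirm"]     # hypothetical lexical cues

def featurize(sentence):
    words = sentence.lower().split()
    return np.array([float(c in words) for c in cues] + [1.0])  # flags + bias

train = [                                        # invented (sentence, factuality) pairs
    ("they confirm the raise", 2.8),
    ("they expect a raise", 1.5),
    ("they hope for a raise", 0.6),
    ("they deny the raise happened", -2.4),
]
X = np.stack([featurize(s) for s, _ in train])
y = np.array([v for _, v in train])
w, *_ = np.linalg.lstsq(X, y, rcond=None)        # fit scalar regression weights

print(round(float(featurize("we expect a decision") @ w), 2))  # 1.5
```

Note how "hope" lands below "expect" on the scale, mirroring the raise example in the abstract.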


arXiv: Computation and Language | 2017

Learning Recurrent Span Representations for Extractive Question Answering

Kenton Lee; Tom Kwiatkowski; Ankur Parikh; Dipanjan Das


arXiv: Computation and Language | 2017

Recurrent Additive Networks

Kenton Lee; Omer Levy; Luke Zettlemoyer

Collaboration


Dive into Kenton Lee's collaboration.

Top Co-Authors

Luheng He
University of Washington

Mike Lewis
University of Washington

Yoav Artzi
University of Washington

Chris Dyer
Carnegie Mellon University