Publications


Featured research published by David Burkett.


Empirical Methods in Natural Language Processing | 2008

Two Languages are Better than One (for Syntactic Parsing)

David Burkett; Daniel Klein

We show that jointly parsing a bitext can substantially improve parse quality on both sides. In a maximum entropy bitext parsing model, we define a distribution over source trees, target trees, and node-to-node alignments between them. Features include monolingual parse scores and various measures of syntactic divergence. Using the translated portion of the Chinese treebank, our model is trained iteratively to maximize the marginal likelihood of training tree pairs, with alignments treated as latent variables. The resulting bitext parser outperforms state-of-the-art monolingual parser baselines by 2.5 F1 at predicting English side trees and 1.8 F1 at predicting Chinese side trees (the highest published numbers on these corpora). Moreover, these improved trees yield a 2.4 BLEU increase when used in a downstream MT evaluation.
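The model's core is a standard log-linear (maximum entropy) form: the probability of a (source tree, target tree, alignment) triple is proportional to exp(w · f), where f collects the monolingual parse scores and divergence measures the abstract mentions. Below is a minimal Python sketch of that scoring step; the feature names and numbers are invented for illustration and are not taken from the paper.

import math
from dataclasses import dataclass

@dataclass
class Candidate:
    src_parse_score: float  # monolingual parser log score, source side
    tgt_parse_score: float  # monolingual parser log score, target side
    aligned_nodes: int      # node pairs linked by the alignment
    total_nodes: int        # nodes on both sides combined

def feature_vector(c):
    # Features: each side's parse score plus a crude syntactic-divergence
    # measure (fraction of nodes left unaligned). All hypothetical.
    return {
        "src_parse": c.src_parse_score,
        "tgt_parse": c.tgt_parse_score,
        "divergence": 1.0 - c.aligned_nodes / c.total_nodes,
    }

def maxent_distribution(candidates, weights):
    # p(candidate) is proportional to exp(w . f(candidate)),
    # normalized over the candidate set.
    scores = [
        math.exp(sum(weights[k] * v for k, v in feature_vector(c).items()))
        for c in candidates
    ]
    z = sum(scores)  # partition function
    return [s / z for s in scores]

# Two toy (source tree, target tree, alignment) candidates.
cands = [Candidate(-10.2, -11.5, 18, 24), Candidate(-10.8, -11.1, 22, 24)]
weights = {"src_parse": 1.0, "tgt_parse": 1.0, "divergence": -2.0}
print(maxent_distribution(cands, weights))

Training in the paper maximizes the marginal likelihood of gold tree pairs, summing a distribution of this form over the latent alignments; the sketch shows only the scoring of fixed candidates.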


Meeting of the Association for Computational Linguistics | 2014

Structured Learning for Taxonomy Induction with Belief Propagation

Mohit Bansal; David Burkett; Gerard de Melo; Daniel Klein

We present a structured learning approach to inducing hypernym taxonomies using a probabilistic graphical model formulation. Our model incorporates heterogeneous relational evidence about both hypernymy and siblinghood, captured by semantic features based on patterns and statistics from Web n-grams and Wikipedia abstracts. For efficient inference over taxonomy structures, we use loopy belief propagation along with a directed spanning tree algorithm for the core hypernymy factor. To train the system, we extract sub-structures of WordNet and discriminatively learn to reproduce them, using adaptive subgradient stochastic optimization. On the task of reproducing sub-hierarchies of WordNet, our approach achieves a 51% error reduction over a chance baseline, including a 15% error reduction due to the non-hypernym-factored sibling features. On a comparison setup, we find up to 29% relative error reduction over previous work on ancestor F1.
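Exact inference for the core hypernymy factor is a maximum directed spanning tree (arborescence) problem, solvable with Edmonds' algorithm. The sketch below uses networkx and invented scores (not the paper's features or data) to show just that step; in the full model it runs inside loopy belief propagation alongside the sibling factors.

import networkx as nx

# Hypothetical hypernymy scores: edge (u, v) reads "u is a hypernym of v".
scores = {
    ("animal", "dog"): 2.1,
    ("animal", "cat"): 1.9,
    ("animal", "poodle"): 0.4,
    ("dog", "poodle"): 1.7,
    ("dog", "cat"): 0.2,
    ("cat", "dog"): 0.1,
}

G = nx.DiGraph()
for (u, v), s in scores.items():
    G.add_edge(u, v, weight=s)

# Edmonds' algorithm recovers the highest-scoring directed spanning tree,
# here rooted at "animal": animal->dog, animal->cat, dog->poodle.
taxonomy = nx.maximum_spanning_arborescence(G)
print(sorted(taxonomy.edges()))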


North American Chapter of the Association for Computational Linguistics | 2010

Joint Parsing and Alignment with Weakly Synchronized Grammars

David Burkett; John Blitzer; Daniel Klein


Conference on Computational Natural Language Learning | 2010

Learning Better Monolingual Models with Unannotated Bilingual Text

David Burkett; Slav Petrov; John Blitzer; Daniel Klein


Empirical Methods in Natural Language Processing | 2012

An Empirical Investigation of Statistical Significance in NLP

Taylor Berg-Kirkpatrick; David Burkett; Daniel Klein


Empirical Methods in Natural Language Processing | 2012

Transforming Trees to Improve Syntactic Convergence

David Burkett; Daniel Klein


Conference on Computational Natural Language Learning | 2011

Mention Detection: Heuristics for the OntoNotes annotations

Jonathan K. Kummerfeld; Mohit Bansal; David Burkett; Daniel Klein


Conference of the International Speech Communication Association | 2009

Using syntax in large-scale audio document translation

Jing Zheng; Necip Fazil Ayan; Wen Wang; David Burkett


International Conference on Automated Planning and Scheduling | 2013

Faster optimal planning with partial-order pruning

David Leo Wright Hall; Alon Cohen; David Burkett; Daniel Klein


National Conference on Artificial Intelligence | 2011

Optimal graph search with iterated graph cuts

David Burkett; David Leo Wright Hall; Daniel Klein

Collaboration


Dive into David Burkett's collaborations.

Top Co-Authors

Daniel Klein
University of California

Mohit Bansal
Toyota Technological Institute at Chicago

Alon Cohen
University of California