David Burkett
University of California, Berkeley
Publications
Featured research published by David Burkett.
Empirical Methods in Natural Language Processing | 2008
David Burkett; Daniel Klein
We show that jointly parsing a bitext can substantially improve parse quality on both sides. In a maximum entropy bitext parsing model, we define a distribution over source trees, target trees, and node-to-node alignments between them. Features include monolingual parse scores and various measures of syntactic divergence. Using the translated portion of the Chinese treebank, our model is trained iteratively to maximize the marginal likelihood of training tree pairs, with alignments treated as latent variables. The resulting bitext parser outperforms state-of-the-art monolingual parser baselines by 2.5 F1 at predicting English-side trees and 1.8 F1 at predicting Chinese-side trees (the highest published numbers on these corpora). Moreover, these improved trees yield a 2.4 BLEU increase when used in a downstream MT evaluation.
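The training objective described in this abstract is a latent-variable log-linear model: each candidate (source tree, target tree, alignment) triple is scored by w · f, and the marginal likelihood of a gold tree pair sums over its latent alignments. Below is a minimal sketch of that objective and its gradient, not the paper's implementation; the finite candidate list, `feature_fn`, and the sparse-dict feature representation are illustrative assumptions.

```python
import math

def log_sum_exp(xs):
    m = max(xs)
    return m + math.log(sum(math.exp(x - m) for x in xs))

def score(w, feats):
    # w and feats are sparse {feature_name: value} dicts.
    return sum(w.get(k, 0.0) * v for k, v in feats.items())

def marginal_ll_and_grad(w, candidates, feature_fn, gold_pair):
    """candidates: (src_tree, tgt_tree, alignment) triples; at least one must
    carry the gold tree pair. Alignments are latent, so the objective is
    log sum_a p(gold_src, gold_tgt, a), and its gradient is the usual
    difference of feature expectations (gold-constrained minus unconstrained)."""
    feats = [feature_fn(c) for c in candidates]
    scores = [score(w, f) for f in feats]
    log_Z = log_sum_exp(scores)
    gold = {i for i, c in enumerate(candidates) if (c[0], c[1]) == gold_pair}
    log_Z_gold = log_sum_exp([scores[i] for i in gold])
    grad = {}
    for i, f in enumerate(feats):
        p_full = math.exp(scores[i] - log_Z)        # prob under the full model
        p_gold = math.exp(scores[i] - log_Z_gold) if i in gold else 0.0
        for k, v in f.items():
            grad[k] = grad.get(k, 0.0) + (p_gold - p_full) * v
    return log_Z_gold - log_Z, grad
```

One ascent step is then `w[k] = w.get(k, 0.0) + eta * g` for each feature gradient `g`; iterating such updates over training tree pairs matches the abstract's description of maximizing marginal likelihood with alignments as latent variables.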
Meeting of the Association for Computational Linguistics | 2014
Mohit Bansal; David Burkett; Gerard de Melo; Daniel Klein
We present a structured learning approach to inducing hypernym taxonomies using a probabilistic graphical model formulation. Our model incorporates heterogeneous relational evidence about both hypernymy and siblinghood, captured by semantic features based on patterns and statistics from Web n-grams and Wikipedia abstracts. For efficient inference over taxonomy structures, we use loopy belief propagation along with a directed spanning tree algorithm for the core hypernymy factor. To train the system, we extract sub-structures of WordNet and discriminatively learn to reproduce them, using adaptive subgradient stochastic optimization. On the task of reproducing sub-hierarchies of WordNet, our approach achieves a 51% error reduction over a chance baseline, including a 15% error reduction due to the non-hypernym-factored sibling features. In a comparative evaluation, we find up to a 29% relative error reduction over previous work on ancestor F1.
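The "directed spanning tree algorithm for the core hypernymy factor" computes a maximum spanning arborescence over candidate hypernym arcs; Chu-Liu/Edmonds is the standard algorithm for this, sketched minimally below. This is not the authors' code, and the dense score matrix and demo values are illustrative assumptions (in the paper, arc scores would come from the learned hypernymy features).

```python
def _find_cycle(parent, root):
    """Return one cycle in the greedy parent graph, or None."""
    n = len(parent)
    for start in range(n):
        if start == root:
            continue
        seen, v = set(), start
        while v != root and v not in seen:
            seen.add(v)
            v = parent[v]
        if v == start:                       # walked back to where we began
            cycle, u = [start], parent[start]
            while u != start:
                cycle.append(u)
                u = parent[u]
            return cycle
    return None

def chu_liu_edmonds(score, root=0):
    """Maximum spanning arborescence (Chu-Liu/Edmonds). score[h][c] is the
    weight of arc h -> c; returns parent[c] for the best tree rooted at root.
    Assumes dense finite scores on all arcs that could enter the tree."""
    NEG = float("-inf")
    n = len(score)
    # Greedy step: every non-root node takes its highest-scoring incoming arc.
    parent = [root] * n
    for c in range(n):
        if c != root:
            parent[c] = max((h for h in range(n) if h != c),
                            key=lambda h: score[h][c])
    cycle = _find_cycle(parent, root)
    if cycle is None:
        return parent
    # Contract the cycle into one supernode, rescore its arcs, and recurse.
    cset = set(cycle)
    new_id = {v: i for i, v in enumerate(u for u in range(n) if u not in cset)}
    inv = {i: v for v, i in new_id.items()}
    c_id = len(new_id)                       # index of the supernode
    m = c_id + 1
    new_score = [[NEG] * m for _ in range(m)]
    enter = {}    # new head id -> best original arc (h, c) entering the cycle
    leave = {}    # new child id -> best original head inside the cycle
    for h in range(n):
        for c in range(n):
            if h == c or (h in cset and c in cset):
                continue
            if h not in cset and c not in cset:
                new_score[new_id[h]][new_id[c]] = max(
                    new_score[new_id[h]][new_id[c]], score[h][c])
            elif c in cset:                  # entering arc: it would replace
                w = score[h][c] - score[parent[c]][c]  # one arc of the cycle
                if w > new_score[new_id[h]][c_id]:
                    new_score[new_id[h]][c_id] = w
                    enter[new_id[h]] = (h, c)
            elif score[h][c] > new_score[c_id][new_id[c]]:   # leaving arc
                new_score[c_id][new_id[c]] = score[h][c]
                leave[new_id[c]] = h
    sub = chu_liu_edmonds(new_score, new_id[root])
    # Expand: cycle nodes keep their cycle arcs, except the one node whose
    # incoming cycle arc is replaced by the chosen entering arc.
    result = parent[:]
    for c_new in range(m):
        if c_new == new_id[root]:
            continue
        h_new = sub[c_new]
        if c_new == c_id:
            h_orig, c_orig = enter[h_new]
            result[c_orig] = h_orig
        else:
            result[inv[c_new]] = leave[c_new] if h_new == c_id else inv[h_new]
    return result

# Tiny demo: 4 hypothetical terms with node 0 as the taxonomy root.
NEG = -1e9
arcs = [[NEG, 5,   1,   1],
        [NEG, NEG, 10,  8],
        [NEG, 10,  NEG, 0],
        [NEG, 0,   0,   NEG]]
print(chu_liu_edmonds(arcs))   # -> [0, 0, 1, 1], i.e. 0->1, 1->2, 1->3
```

The greedy-pick / contract / recurse structure is the whole algorithm; the paper combines this arborescence factor with sibling factors via loopy belief propagation, which this sketch does not cover.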
North American Chapter of the Association for Computational Linguistics | 2010
David Burkett; John Blitzer; Daniel Klein
Conference on Computational Natural Language Learning | 2010
David Burkett; Slav Petrov; John Blitzer; Daniel Klein
Empirical Methods in Natural Language Processing | 2012
Taylor Berg-Kirkpatrick; David Burkett; Daniel Klein
Empirical Methods in Natural Language Processing | 2012
David Burkett; Daniel Klein
Conference on Computational Natural Language Learning | 2011
Jonathan K. Kummerfeld; Mohit Bansal; David Burkett; Daniel Klein
Conference of the International Speech Communication Association | 2009
Jing Zheng; Necip Fazil Ayan; Wen Wang; David Burkett
International Conference on Automated Planning and Scheduling | 2013
David Leo Wright Hall; Alon Cohen; David Burkett; Daniel Klein
National Conference on Artificial Intelligence | 2011
David Burkett; David Leo Wright Hall; Daniel Klein