Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Daniel Gildea is active.

Publications


Featured research published by Daniel Gildea.


Computational Linguistics | 2005

The Proposition Bank: An Annotated Corpus of Semantic Roles

Martha Palmer; Daniel Gildea; Paul Kingsbury

The Proposition Bank project takes a practical approach to semantic representation, adding a layer of predicate-argument information, or semantic role labels, to the syntactic structures of the Penn Treebank. The resulting resource can be thought of as shallow, in that it does not represent coreference, quantification, and many other higher-order phenomena, but also broad, in that it covers every instance of every verb in the corpus and allows representative statistics to be calculated. We discuss the criteria used to define the sets of semantic roles used in the annotation process and to analyze the frequency of syntactic/semantic alternations in the corpus. We describe an automatic system for semantic role tagging trained on the corpus and discuss the effect on its performance of various types of information, including a comparison of full syntactic parsing with a flat representation and the contribution of the empty trace categories of the treebank.
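
As a rough illustration of the annotation layer described above, the sketch below models a single predicate with labeled arguments in Python. The role inventory (ARG0, ARG1, ARGM-TMP) follows PropBank conventions; the classes and the example sentence are illustrative, not the corpus's actual file format.

```python
# Illustrative PropBank-style predicate-argument record (not the real format).
from dataclasses import dataclass, field

@dataclass
class Argument:
    role: str             # e.g. "ARG0" (agent-like), "ARG1" (patient-like)
    span: tuple           # (start, end) token offsets, end exclusive

@dataclass
class PredicateAnnotation:
    tokens: list
    predicate_index: int  # position of the verb in tokens
    frameset: str         # verb sense id, e.g. "buy.01"
    arguments: list = field(default_factory=list)

example = PredicateAnnotation(
    tokens="Chuck bought a car from Jerry yesterday".split(),
    predicate_index=1,
    frameset="buy.01",
    arguments=[
        Argument("ARG0", (0, 1)),      # buyer: "Chuck"
        Argument("ARG1", (2, 4)),      # thing bought: "a car"
        Argument("ARG2", (4, 6)),      # seller: "from Jerry"
        Argument("ARGM-TMP", (6, 7)),  # temporal modifier: "yesterday"
    ],
)
print(example.frameset,
      [(a.role, " ".join(example.tokens[a.span[0]:a.span[1]]))
       for a in example.arguments])
```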


Journal of the Acoustical Society of America | 2003

Effects of disfluencies, predictability, and utterance position on word form variation in English conversation

Alan Bell; Daniel Jurafsky; Eric Fosler-Lussier; Cynthia Girand; Michelle L. Gregory; Daniel Gildea

Function words, especially frequently occurring ones such as the, that, and, and of, vary widely in pronunciation. Understanding this variation is essential both for cognitive modeling of lexical production and for computer speech recognition and synthesis. This study investigates which factors affect the forms of function words, especially whether they have a fuller pronunciation (e.g., [ði], [ðæt], [ænd], [ʌv]) or a more reduced or lenited pronunciation (e.g., [ðə], [ðɨt], [n̩], [ə]). It is based on over 8000 occurrences of the ten most frequent English function words in a four-hour sample of conversations from the Switchboard corpus. Ordinary linear and logistic regression models were used to examine variation in the length of the words, in the form of their vowel (basic, full, or reduced), and in whether final obstruents were present. For all these measures, after controlling for segmental context, rate of speech, and other important factors, there are strong independent effects that make high-frequency monosyllabic function words more likely to be longer or to have a fuller form (1) when neighboring disfluencies (such as the filled pauses uh and um) indicate that the speaker was encountering problems in planning the utterance; (2) when the word is unexpected, i.e., less predictable in context; and (3) when the word is either utterance initial or utterance final. Looking at the phenomenon in a different way, frequent function words are more likely to be shorter and to have less-full forms in fluent speech, in predictable positions or multiword collocations, and utterance internally. Also considered are other factors, such as sex (women are more likely to use fuller forms, even after controlling for rate of speech, for example), and some of the differences among the ten function words in their response to these factors.
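
The sketch below illustrates the kind of logistic regression analysis the study describes, predicting a full versus reduced vowel from a few of the reported predictors. The data and coefficients are synthetic and stipulated purely for the example; only the overall modeling setup reflects the paper.

```python
# Illustrative logistic regression over synthetic data (not the study's data).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
near_disfluency = rng.integers(0, 2, n)        # adjacent filled pause (uh/um)?
log_predictability = rng.normal(-3.0, 1.0, n)  # log P(word | context)
utterance_edge = rng.integers(0, 2, n)         # utterance initial or final?
speech_rate = rng.normal(5.0, 1.0, n)          # syllables per second

# Stipulated outcome consistent with the reported direction of effects:
# disfluency, low predictability, and edge position favor fuller forms.
logit = (0.8 * near_disfluency - 0.5 * log_predictability
         + 0.6 * utterance_edge - 0.4 * speech_rate)
full_vowel = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X = np.column_stack([near_disfluency, log_predictability,
                     utterance_edge, speech_rate])
model = LogisticRegression().fit(X, full_vowel)
print(dict(zip(["disfluency", "log_pred", "edge", "rate"], model.coef_[0])))
```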


Meeting of the Association for Computational Linguistics | 2003

Loosely Tree-Based Alignment for Machine Translation

Daniel Gildea

We augment a model of translation based on re-ordering nodes in syntactic trees in order to allow alignments not conforming to the original tree structure, while keeping computational complexity polynomial in the sentence length. This is done by adding a new subtree cloning operation to either tree-to-string or tree-to-tree alignment algorithms.
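
To make the reordering space concrete, the sketch below enumerates the target-side word orders reachable by permuting children at each tree node. The Tree class is illustrative; the paper's model assigns probabilities to permutations, adds the subtree-cloning operation (not shown), and uses dynamic programming rather than enumeration to keep complexity polynomial.

```python
# Illustrative enumeration of node-reordering outcomes (exponential; the
# paper's actual algorithm uses dynamic programming over the tree).
from itertools import permutations

class Tree:
    def __init__(self, label, children=None, word=None):
        self.label = label
        self.children = children or []
        self.word = word

def reorderings(node):
    """Yield every target-side word order reachable by permuting children."""
    if not node.children:
        yield [node.word]
        return
    child_yields = [list(reorderings(c)) for c in node.children]
    for perm in permutations(range(len(node.children))):
        def combine(i):
            # combine one reordering choice per child, in permuted order
            if i == len(perm):
                yield []
                return
            for head in child_yields[perm[i]]:
                for tail in combine(i + 1):
                    yield head + tail
        yield from combine(0)

t = Tree("S", [Tree("NP", word="john"),
               Tree("VP", [Tree("V", word="saw"), Tree("NP", word="mary")])])
for order in reorderings(t):
    print(" ".join(order))
```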


Language and Technology Conference | 2006

Synchronous Binarization for Machine Translation

Hao Zhang; Liang Huang; Daniel Gildea; Kevin Knight

Systems based on synchronous grammars and tree transducers promise to improve the quality of statistical machine translation output, but are often very computationally intensive. The complexity is exponential in the size of individual grammar rules due to arbitrary re-orderings between the two languages, and rules extracted from parallel corpora can be quite large. We devise a linear-time algorithm for factoring syntactic re-orderings by binarizing synchronous rules when possible and show that the resulting rule set significantly improves the speed and accuracy of a state-of-the-art syntax-based machine translation system.
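
The core of the approach is deciding whether a rule's nonterminal permutation can be decomposed into binary pieces. The sketch below is an illustrative reconstruction of a linear-time shift-reduce check in that spirit: merge neighboring nonterminals whenever their target spans are contiguous, and succeed if everything reduces to a single span.

```python
# Illustrative binarizability check for a synchronous rule's permutation.
def binarizable(perm):
    """perm[i] = target position of the i-th source-side nonterminal."""
    stack = []  # stack of (lo, hi) target spans, hi exclusive
    for p in perm:
        stack.append((p, p + 1))
        # merge whenever the top two spans are adjacent on the target side
        while len(stack) >= 2:
            (a_lo, a_hi), (b_lo, b_hi) = stack[-2], stack[-1]
            if a_hi == b_lo or b_hi == a_lo:
                stack[-2:] = [(min(a_lo, b_lo), max(a_hi, b_hi))]
            else:
                break
    return len(stack) == 1

print(binarizable([1, 0, 3, 2]))  # True: binarizable despite reordering
print(binarizable([1, 3, 0, 2]))  # False: the classic non-binarizable case
```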


Empirical Methods in Natural Language Processing | 2003

Identifying semantic roles using Combinatory Categorial Grammar

Daniel Gildea; Julia Hockenmaier

We present a system for automatically identifying PropBank-style semantic roles based on the output of a statistical parser for Combinatory Categorial Grammar. This system performs at least as well as a system based on a traditional Treebank parser, and outperforms it on core argument roles.


Meeting of the Association for Computational Linguistics | 2005

Stochastic Lexicalized Inversion Transduction Grammar for Alignment

Hao Zhang; Daniel Gildea

We present a version of Inversion Transduction Grammar where rule probabilities are lexicalized throughout the synchronous parse tree, along with pruning techniques for efficient training. Alignment results improve over unlexicalized ITG on short sentences for which full EM is feasible, but pruning seems to have a negative impact on longer sentences.
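
The sketch below shows the two ITG composition operations, straight and inverted, on pairs of source and target spans. The encoding is illustrative; the paper's contribution is conditioning the probabilities of these operations on lexical heads and pruning the chart so that EM training stays tractable.

```python
# Illustrative ITG span combination: straight vs. inverted concatenation.
def combine(left, right):
    """Each item pairs a source span with a target span, half-open."""
    (sl, sh), (tl, th) = left
    (sl2, sh2), (tl2, th2) = right
    if sh != sl2:                  # source spans must be adjacent
        return []
    items = []
    if th == tl2:                  # straight: same order on the target side
        items.append(("S", (sl, sh2), (tl, th2)))
    if th2 == tl:                  # inverted: swapped on the target side
        items.append(("I", (sl, sh2), (tl2, th)))
    return items

print(combine(((0, 1), (2, 3)), ((1, 2), (0, 2))))  # [('I', (0, 2), (0, 3))]
```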


Meeting of the Association for Computational Linguistics | 2009

Bayesian Learning of a Tree Substitution Grammar

Matt Post; Daniel Gildea

Tree substitution grammars (TSGs) offer many advantages over context-free grammars (CFGs), but are hard to learn. Past approaches have resorted to heuristics. In this paper, we learn a TSG using Gibbs sampling with a nonparametric prior to control subtree size. The learned grammars perform significantly better than heuristically extracted ones on parsing accuracy.
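
A setting of per-node "substitution point" flags is what the Gibbs sampler manipulates: each flag assignment decomposes a parse tree into TSG elementary trees. The sketch below shows only that decomposition step; the sampling itself, which scores flag flips under the nonparametric prior, is not shown, and the classes are illustrative.

```python
# Illustrative decomposition of a parse tree into TSG elementary trees.
class Node:
    def __init__(self, label, children=(), sub_point=False):
        self.label = label
        self.children = list(children)
        self.sub_point = sub_point   # flag flipped by the Gibbs sampler

def elementary_trees(root):
    """Cut the tree at flagged nodes; return one bracketed fragment per cut."""
    pending = [root]
    fragments = []

    def render(node):
        parts = []
        for child in node.children:
            if not child.children:         # lexical leaf
                parts.append(child.label)
            elif child.sub_point:          # frontier nonterminal: new cut
                parts.append(child.label)
                pending.append(child)
            else:                          # keep growing this fragment
                parts.append(render(child))
        return "(%s %s)" % (node.label, " ".join(parts))

    while pending:
        fragments.append(render(pending.pop()))
    return fragments

# (S (NP Mary) (VP (V saw) (NP John))) with the object NP flagged as a cut
tree = Node("S", [
    Node("NP", [Node("Mary")]),
    Node("VP", [Node("V", [Node("saw")]),
                Node("NP", [Node("John")], sub_point=True)]),
])
print(elementary_trees(tree))
# ['(S (NP Mary) (VP (V saw) NP))', '(NP John)']
```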


Cognitive Science | 2010

Do Grammars Minimize Dependency Length?

Daniel Gildea; David Temperley

A well-established principle of language is that there is a preference for closely related words to be close together in the sentence. This can be expressed as a preference for dependency length minimization (DLM). In this study, we explore quantitatively the degree to which natural languages reflect DLM. We extract the dependencies from natural language text and reorder the words in such a way as to minimize dependency length. Comparing the original text with these optimal linearizations (and also with random linearizations) reveals the degree to which natural language minimizes dependency length. Tests show that English exhibits a strong DLM effect, with dependency length much closer to optimal than to random; the optimal linearizations also have many specific features in common with actual English word order. In German, too, dependency length is significantly less than random, but the effect is much weaker than in English. We conclude by speculating about some possible reasons for this difference between English and German.
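
As a simplified sketch of the optimization step, the code below greedily builds a projective order in which each head's dependents alternate sides, with smaller subtrees placed closer to the head, a strategy in the spirit of the optimal linearizations described above. The function names and tree encoding are illustrative, and this greedy version ignores subtree orientation, so it is not guaranteed to find the true optimum.

```python
# Illustrative greedy linearization and dependency-length computation.
def linearize(head, deps):
    """Order the subtree at head: dependents alternate sides, smallest first."""
    subs = sorted((linearize(d, deps) for d in deps.get(head, [])), key=len)
    left, right = [], [head]
    for i, sub in enumerate(subs):
        if i % 2 == 0:
            right.extend(sub)   # smaller subtrees end up nearer the head
        else:
            left[:0] = sub
    return left + right

def total_length(order, deps):
    """Sum of head-dependent distances under the given word order."""
    pos = {w: i for i, w in enumerate(order)}
    return sum(abs(pos[h] - pos[d]) for h in deps for d in deps[h])

# toy tree: "saw" heads "dog" and "yesterday"; "dog" heads "the" and "big"
deps = {"saw": ["dog", "yesterday"], "dog": ["the", "big"]}
order = linearize("saw", deps)
print(order, total_length(order, deps))
```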


Computational Linguistics | 2009

Binarization of Synchronous Context-Free Grammars

Liang Huang; Hao Zhang; Daniel Gildea; Kevin Knight

Systems based on synchronous grammars and tree transducers promise to improve the quality of statistical machine translation output, but are often very computationally intensive. The complexity is exponential in the size of individual grammar rules due to arbitrary re-orderings between the two languages. We develop a theory of binarization for synchronous context-free grammars and present a linear-time algorithm for binarizing synchronous rules when possible. In our large-scale experiments, we found that almost all rules are binarizable and the resulting binarized rule set significantly improves the speed and accuracy of a state-of-the-art syntax-based machine translation system. We also discuss the more general, and computationally more difficult, problem of finding good parsing strategies for non-binarizable rules, and present an approximate polynomial-time algorithm for this problem.
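
Extending the binarizability check sketched under the 2006 paper above, the illustrative function below also records each contiguous merge as a binary rule over "virtual" nonterminals named by their source spans; the naming scheme and rule encoding are hypothetical.

```python
# Illustrative construction of binary rules from a binarizable permutation.
def binarize(perm):
    stack, rules = [], []
    for i, p in enumerate(perm):
        stack.append(((i, i + 1), (p, p + 1)))   # (source span, target span)
        while len(stack) >= 2:
            (s1, t1), (s2, t2) = stack[-2], stack[-1]
            if t1[1] == t2[0] or t2[1] == t1[0]:
                s = (s1[0], s2[1])
                t = (min(t1[0], t2[0]), max(t1[1], t2[1]))
                order = "straight" if t1[1] == t2[0] else "inverted"
                rules.append((f"V{s}", f"V{s1}", f"V{s2}", order))
                stack[-2:] = [(s, t)]
            else:
                break
    return rules if len(stack) == 1 else None    # None: not binarizable

print(binarize([1, 0, 3, 2]))   # three binary rules
print(binarize([1, 3, 0, 2]))   # None: non-binarizable
```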


Meeting of the Association for Computational Linguistics | 2000

Automatic labeling of semantic roles

Daniel Gildea; Daniel Jurafsky

We present a system for identifying the semantic relationships, or semantic roles, filled by constituents of a sentence within a semantic frame. Various lexical and syntactic features are derived from parse trees and used to derive statistical classifiers from hand-annotated training data.
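
The sketch below illustrates the kind of features the paper describes extracting for each constituent: phrase type, head word, position relative to the predicate, voice, and the parse-tree path. The dictionary encoding and example values are illustrative, not the system's actual data structures.

```python
# Illustrative feature extraction for one candidate constituent.
def role_features(constituent, predicate):
    return {
        "phrase_type": constituent["label"],        # e.g. "NP"
        "head_word": constituent["head"],
        "position": "before" if constituent["start"] < predicate["index"]
                    else "after",
        "voice": predicate["voice"],                # "active" or "passive"
        "path": constituent["path"],                # path through the parse tree
        "predicate": predicate["lemma"],
    }

print(role_features(
    {"label": "NP", "head": "door", "start": 0, "path": "NP↑S↓VP↓VBD"},
    {"index": 2, "voice": "active", "lemma": "open"},
))
```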

Collaboration


Dive into Daniel Gildea's collaborations.

Top Co-Authors

Hao Zhang
University of Rochester

Linfeng Song
Chinese Academy of Sciences

Ding Liu
University of Rochester

Matt Post
Johns Hopkins University