
Publication


Featured research published by Trond Grenager.


Meeting of the Association for Computational Linguistics | 2005

Incorporating Non-local Information into Information Extraction Systems by Gibbs Sampling

Jenny Rose Finkel; Trond Grenager; Christopher D. Manning

Most current statistical natural language processing models use only local features so as to permit dynamic programming in inference, but this makes them unable to fully account for the long distance structure that is prevalent in language use. We show how to solve this dilemma with Gibbs sampling, a simple Monte Carlo method used to perform approximate inference in factored probabilistic models. By using simulated annealing in place of Viterbi decoding in sequence models such as HMMs, CMMs, and CRFs, it is possible to incorporate non-local structure while preserving tractable inference. We use this technique to augment an existing CRF-based information extraction system with long-distance dependency models, enforcing label consistency and extraction template consistency constraints. This technique results in an error reduction of up to 9% over state-of-the-art systems on two established information extraction tasks.
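The core idea of replacing Viterbi decoding with annealed Gibbs sampling can be sketched in a few lines. This is a toy illustration with made-up potentials, not the paper's CRF: `score` here combines a hypothetical per-position observation preference with a small non-local label-consistency bonus of the kind the paper's constraints enforce.

```python
import math
import random

def gibbs_anneal(seq_len, labels, score, n_iters=200, t0=2.0, t_min=0.1):
    # Approximate MAP decoding: resample one position at a time from its
    # conditional distribution, sharpened as the temperature drops.
    random.seed(0)
    state = [random.choice(labels) for _ in range(seq_len)]
    for it in range(n_iters):
        t = max(t_min, t0 * (0.95 ** it))  # geometric cooling schedule
        for i in range(seq_len):
            weights = []
            for y in labels:
                state[i] = y
                weights.append(math.exp(score(state, i) / t))
            r = random.random() * sum(weights)
            for y, w in zip(labels, weights):
                r -= w
                if r <= 0:
                    state[i] = y
                    break
    return state

# Toy scorer: a strong local "emission" preference per position plus a
# weak bonus for agreeing with other positions (non-local consistency).
obs = ["O", "PER", "PER", "O", "PER"]

def score(state, i):
    s = 5.0 if state[i] == obs[i] else 0.0
    s += 0.2 * sum(1 for j in range(len(state)) if j != i and state[j] == state[i])
    return s

result = gibbs_anneal(len(obs), ["O", "PER"], score)
```

Because `score` only ever looks at the current full state, arbitrary long-distance terms can be added without changing the sampler, which is exactly what makes this approach attractive over Viterbi.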


Artificial Intelligence | 2007

If multi-agent learning is the answer, what is the question?

Yoav Shoham; Rob Powers; Trond Grenager

The area of learning in multi-agent systems is today one of the most fertile grounds for interaction between game theory and artificial intelligence. We focus on the foundational questions in this interdisciplinary area, and identify several distinct agendas that ought to, we argue, be separated. The goal of this article is to start a discussion in the research community that will result in firmer foundations for the area.


Language and Technology Conference | 2006

Learning to recognize features of valid textual entailments

Bill MacCartney; Trond Grenager; Marie-Catherine de Marneffe; Daniel M. Cer; Christopher D. Manning

This paper advocates a new architecture for textual inference in which finding a good alignment is separated from evaluating entailment. Current approaches to semantic inference in question answering and textual entailment have approximated the entailment problem as that of computing the best alignment of the hypothesis to the text, using a locally decomposable matching score. We argue that there are significant weaknesses in this approach, including flawed assumptions of monotonicity and locality. Instead we propose a pipelined approach where alignment is followed by a classification step, in which we extract features representing high-level characteristics of the entailment problem, and pass the resulting feature vector to a statistical classifier trained on development data. We report results on data from the 2005 Pascal RTE Challenge which surpass previously reported results for alignment-based systems.
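The pipelined architecture the abstract describes (align first, then classify over high-level features) can be sketched minimally. The feature set and the hand-set linear weights below are hypothetical stand-ins; the paper's actual features and its trained classifier are far richer.

```python
def extract_features(text, hyp, alignment):
    # Features over an alignment of hypothesis tokens to text tokens.
    # `alignment` maps hypothesis token index -> text token index.
    h_toks = hyp.split()
    t_toks = text.split()
    coverage = sum(1 for j in range(len(h_toks)) if j in alignment) / len(h_toks)
    negs = ("not", "no", "never", "n't")
    # One "high-level characteristic": a polarity (negation) mismatch,
    # which alignment score alone cannot capture.
    mismatch = 1.0 if any(w in negs for w in t_toks) != any(w in negs for w in h_toks) else 0.0
    return [coverage, mismatch]

def entails(features, weights=(2.0, -3.0), bias=-1.0):
    # Stand-in for the trained statistical classifier: a fixed linear score.
    return bias + sum(w * f for w, f in zip(weights, features)) > 0

yes = entails(extract_features("the cat sat on the mat", "the cat sat",
                               {0: 0, 1: 1, 2: 2}))
no = entails(extract_features("the cat sat on the mat", "the cat never sat",
                              {0: 0, 1: 1, 3: 2}))
```

The point of the design is visible even at this scale: the second pair aligns almost as well as the first, but the negation feature lets the classifier reject it.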


Meeting of the Association for Computational Linguistics | 2005

Unsupervised Learning of Field Segmentation Models for Information Extraction

Trond Grenager; Daniel Klein; Christopher D. Manning

The applicability of many current information extraction techniques is severely limited by the need for supervised training data. We demonstrate that for certain field structured extraction tasks, such as classified advertisements and bibliographic citations, small amounts of prior knowledge can be used to learn effective models in a primarily unsupervised fashion. Although hidden Markov models (HMMs) provide a suitable generative model for field structured text, general unsupervised HMM learning fails to learn useful structure in either of our domains. However, one can dramatically improve the quality of the learned structure by exploiting simple prior knowledge of the desired solutions. In both domains, we found that unsupervised methods can attain accuracies with 400 unlabeled examples comparable to those attained by supervised methods on 50 labeled examples, and that semi-supervised methods can make good use of small amounts of labeled data.
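One simple form of the "prior knowledge" idea is a transition prior that favors self-loops, encoding the fact that fields in classified ads and citations are contiguous runs of tokens. The sketch below shows that idea as Dirichlet-style smoothing of transition counts; it is an illustration of the principle, not the paper's actual estimation machinery.

```python
def smoothed_transitions(counts, self_bias=5.0):
    # Normalize raw state-transition counts under a prior that favors
    # self-loops, so learned states prefer to emit contiguous spans.
    n = len(counts)
    probs = []
    for i, row in enumerate(counts):
        prior = [self_bias if j == i else 1.0 for j in range(n)]
        total = sum(row) + sum(prior)
        probs.append([(c + p) / total for c, p in zip(row, prior)])
    return probs

# Two states, symmetric raw counts: the prior still pulls probability
# mass onto the diagonal (staying in the same field).
trans = smoothed_transitions([[0, 1], [1, 0]])
```

Without such a bias, unsupervised HMM learning is free to invent states that alternate token by token, which is one way the "useful structure" the abstract mentions fails to emerge.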


Meeting of the Association for Computational Linguistics | 2007

Learning Alignments and Leveraging Natural Logic

Nathanael Chambers; Daniel M. Cer; Trond Grenager; David Leo Wright Hall; Chloé Kiddon; Bill MacCartney; Marie-Catherine de Marneffe; Daniel Ramage; Eric Yeh; Christopher D. Manning

We describe an approach to textual inference that improves alignments at both the typed dependency level and at a deeper semantic level. We present a machine learning approach to alignment scoring, a stochastic search procedure, and a new tool that finds deeper semantic alignments, allowing rapid development of semantic features over the aligned graphs. Further, we describe a complementary semantic component based on natural logic, which shows an added gain of 3.13% accuracy on the RTE3 test set.


Empirical Methods in Natural Language Processing | 2006

Unsupervised Discovery of a Statistical Verb Lexicon

Trond Grenager; Christopher D. Manning

This paper demonstrates how unsupervised techniques can be used to learn models of deep linguistic structure. Determining the semantic roles of a verb's dependents is an important step in natural language understanding. We present a method for learning models of verb argument patterns directly from unannotated text. The learned models are similar to existing verb lexicons such as VerbNet and PropBank, but additionally include statistics about the linkings used by each verb. The method is based on a structured probabilistic model of the domain, and unsupervised learning is performed with the EM algorithm. The learned models can also be used discriminatively as semantic role labelers, and when evaluated relative to the PropBank annotation, the best learned model closes 28% of the gap between an informed baseline and an oracle upper bound.
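A lexicon with per-verb linking statistics can be used for role labeling in a very direct way: pick the most probable linking compatible with the observed syntactic positions and read the roles off it. The linking distribution below is hypothetical, and the real model scores linkings probabilistically rather than by this hard argmax.

```python
def label_roles(observed, linkings):
    # Choose the most probable linking that accounts for every observed
    # syntactic position, then read the role assignments off it.
    compatible = [l for l in linkings if set(observed) <= set(l[1])]
    if not compatible:
        return {}
    prob, mapping = max(compatible, key=lambda l: l[0])
    return {pos: mapping[pos] for pos in observed}

# Hypothetical linking statistics for a ditransitive verb like "give":
# each entry is (probability, {syntactic position -> semantic role}).
linkings = [
    (0.6, {"subj": "ARG0", "obj": "ARG1"}),
    (0.3, {"subj": "ARG0", "obj": "ARG2", "obj2": "ARG1"}),
    (0.1, {"subj": "ARG1"}),
]
roles = label_roles(["subj", "obj"], linkings)
```

The per-verb probabilities are what distinguish such a learned lexicon from a purely symbolic resource: two verbs can license the same linkings yet use them at very different rates.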


Discovery Science | 2001

Computational Revision of Quantitative Scientific Models

Kazumi Saito; Pat Langley; Trond Grenager; Christopher Potter; Alicia Torregrosa; Steven A. Klooster

Research on the computational discovery of numeric equations has focused on constructing laws from scratch, whereas work on theory revision has emphasized qualitative knowledge. In this paper, we describe an approach to improving scientific models that are cast as sets of equations. We review one such model for aspects of the Earth ecosystem, then recount its application to revising parameter values, intrinsic properties, and functional forms, in each case achieving reduction in error on Earth science data while retaining the communicability of the original model. After this, we consider earlier work on computational scientific discovery and theory revision, then close with suggestions for future research on this topic.


Archive | 2003

Multi-Agent Reinforcement Learning: A Critical Survey

Yoav Shoham; Rob Powers; Trond Grenager


Archive | 2007

Techniques for facilitating on-line contextual analysis and advertising

Assaf Henkin; Yoav Shaham; Itai Brickner; Trond Grenager; Daniel Klein


Meeting of the Association for Computational Linguistics | 2007

The Infinite Tree

Jenny Rose Finkel; Trond Grenager; Christopher D. Manning

Collaboration


Dive into Trond Grenager's collaboration.

Top Co-Authors

Daniel Klein

University of California
