
Publication


Featured research published by Adam Pauls.


Computational Intelligence | 2013

Multi-Document Summarization of Evaluative Text

Giuseppe Carenini; Jackie Chi Kit Cheung; Adam Pauls

In many decision-making scenarios, people can benefit from knowing what other people's opinions are. As more and more evaluative documents are posted on the Web, summarizing these useful resources becomes a critical task for many organizations and individuals. This paper presents a framework for summarizing a corpus of evaluative documents about a single entity with a natural language summary. We propose two summarizers: an extractive summarizer and an abstractive one. As an additional contribution, we show how our abstractive summarizer can be modified to generate summaries tailored to a model of the user's preferences that is solidly grounded in decision theory and can be effectively elicited from users. We have tested our framework in three user studies. In the first one, we compared the two summarizers. They performed equally well relative to each other quantitatively, while significantly outperforming a standard baseline approach to multi-document summarization. Trends in the results as well as qualitative comments from participants suggest that the summarizers have different strengths and weaknesses. After this initial user study, we realized that the diversity of opinions expressed in the corpus (i.e., its controversiality) might play a critical role in comparing abstraction versus extraction. To clearly pinpoint the role of controversiality, we ran a second user study in which we controlled for the degree of controversiality of the corpora that were summarized for the participants. The outcome of this study indicates that for evaluative text, abstraction tends to be more effective than extraction, particularly when the corpus is controversial. In the third user study we assessed the effectiveness of our user-tailoring strategy. The results of this experiment confirm that user-tailored summaries are more informative than untailored ones.
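
As a loose illustration of the extractive side (not the paper's actual algorithm), one can greedily select the sentences expressing the strongest evaluations while still covering new product features. The Python sketch below assumes sentences come pre-annotated with (feature, polarity) evaluations; all names and data are invented for illustration:

```python
def extractive_summary(sentences, k=2):
    """Sketch: rank candidate sentences by the total strength of the
    evaluations they express (polarity in [-3, 3]) and greedily keep
    those that cover at least one not-yet-covered feature."""
    covered, summary = set(), []
    ranked = sorted(sentences, key=lambda s: sum(abs(p) for _, p in s[1]), reverse=True)
    for text, evaluations in ranked:
        feats = {f for f, _ in evaluations}
        if feats - covered:  # sentence adds a new feature
            summary.append(text)
            covered |= feats
        if len(summary) == k:
            break
    return summary

reviews = [
    ("Battery life is superb.", [("battery", 3)]),
    ("Battery is great but the screen is dim.", [("battery", 2), ("screen", -2)]),
    ("Nice battery.", [("battery", 1)]),
]
print(extractive_summary(reviews))
```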


Intelligent User Interfaces | 2006

Interactive multimedia summaries of evaluative text

Giuseppe Carenini; Raymond T. Ng; Adam Pauls

We present an interactive multimedia interface for automatically summarizing large corpora of evaluative text (e.g., online product reviews). We rely on existing techniques for extracting knowledge from the corpora but present a novel approach for conveying that knowledge to the user. Our system presents the extracted knowledge in a hierarchical visualization mode as well as in a natural language summary. We propose a method for reasoning about the extracted knowledge so that the natural language summary can include only the most important information from the corpus. Our approach is interactive in that it allows the user to explore the original dataset through intuitive visual and textual methods. Results of a formative evaluation of our interface show general satisfaction among users with our approach.


International Joint Conference on Natural Language Processing | 2009

K-Best A* Parsing

Adam Pauls; Daniel Klein

A* parsing makes 1-best search efficient by suppressing unlikely 1-best items. Existing k-best extraction methods can efficiently search for top derivations, but only after an exhaustive 1-best pass. We present a unified algorithm for k-best A* parsing which preserves the efficiency of k-best extraction while giving the speed-ups of A* methods. Our algorithm produces optimal k-best parses under the same conditions required for optimality in a 1-best A* parser. Empirically, optimal k-best lists can be extracted significantly faster than with other approaches, over a range of grammar types.
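
The agenda-based idea behind k-best A* is easiest to see on an ordinary weighted graph, where it reduces to k-shortest-paths search. Below is a minimal Python sketch of that reduction (the graph, heuristic, and k are toy values for illustration; the paper applies the same single-agenda principle to parse derivations rather than graph paths):

```python
import heapq

def k_best_astar(graph, source, goal, heuristic, k):
    """Pop states from one agenda ordered by cost-so-far plus an admissible
    heuristic; the first k pops of the goal yield the k best paths."""
    agenda = [(heuristic(source), 0.0, source, (source,))]
    results, pops = [], {}
    while agenda and len(results) < k:
        f, g, state, path = heapq.heappop(agenda)
        pops[state] = pops.get(state, 0) + 1
        if state == goal:
            results.append((g, path))
            continue
        if pops[state] > k:  # a state appears in at most k best paths
            continue
        for nbr, w in graph.get(state, []):
            heapq.heappush(agenda, (g + w + heuristic(nbr), g + w, nbr, path + (nbr,)))
    return results

# Toy example: two routes from A to D with different costs.
graph = {"A": [("B", 1.0), ("C", 2.0)], "B": [("D", 2.0)], "C": [("D", 1.5)]}
h = lambda s: 0.0  # trivially admissible heuristic
print(k_best_astar(graph, "A", "D", h, k=2))  # [(3.0, A-B-D), (3.5, A-C-D)]
```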


Empirical Methods in Natural Language Processing | 2009

Consensus Training for Consensus Decoding in Machine Translation

Adam Pauls; John DeNero; Daniel Klein

We propose a novel objective function for discriminatively tuning log-linear machine translation models. Our objective explicitly optimizes the BLEU score of expected n-gram counts, the same quantities that arise in forest-based consensus and minimum Bayes risk decoding methods. Our continuous objective can be optimized using simple gradient ascent. However, computing critical quantities in the gradient necessitates a novel dynamic program, which we also present here. Assuming BLEU as an evaluation measure, our objective function has two principal advantages over standard max-BLEU tuning. First, it specifically optimizes model weights for downstream consensus decoding procedures. An unexpected second benefit is that it reduces overfitting, which can improve test set BLEU scores when using standard Viterbi decoding.
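
The abstract's key quantities can be sketched over a plain n-best list, even though the paper computes them with forest-based dynamic programs. The Python sketch below (hypothetical function names and toy data) forms the posterior over hypotheses from their model scores, accumulates expected n-gram counts, and plugs those soft counts into BLEU:

```python
import math
from collections import Counter

def ngrams(tokens, n):
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def consensus_bleu(nbest, model_scores, reference, max_n=4):
    """Sketch of the consensus objective on an n-best list: BLEU computed
    on posterior-weighted expected n-gram counts rather than on a single
    Viterbi hypothesis."""
    zs = [math.exp(s) for s in model_scores]
    post = [z / sum(zs) for z in zs]  # posterior over hypotheses
    exp_len = sum(p * len(h) for p, h in zip(post, nbest))
    log_bleu = min(0.0, 1.0 - len(reference) / exp_len)  # log brevity penalty
    for n in range(1, max_n + 1):
        expected = Counter()  # expected (soft) n-gram counts
        for p, hyp in zip(post, nbest):
            for g, c in ngrams(hyp, n).items():
                expected[g] += p * c
        ref_counts = ngrams(reference, n)
        clipped = sum(min(c, ref_counts[g]) for g, c in expected.items())
        total = sum(expected.values())
        log_bleu += 0.25 * math.log(max(clipped, 1e-9) / max(total, 1e-9))
    return math.exp(log_bleu)

hyps = [["the", "cat", "sat"], ["a", "cat", "sat"]]
print(consensus_bleu(hyps, [1.0, 0.5], ["the", "cat", "sat"]))
```

Because every step above is smooth in the model scores, the objective admits the simple gradient ascent the abstract mentions.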


North American Chapter of the Association for Computational Linguistics | 2009

Efficient Parsing for Transducer Grammars

John DeNero; Mohit Bansal; Adam Pauls; Daniel Klein

The tree-transducer grammars that arise in current syntactic machine translation systems are large, flat, and highly lexicalized. We address the problem of parsing efficiently with such grammars in three ways. First, we present a pair of grammar transformations that admit an efficient cubic-time CKY-style parsing algorithm despite leaving most of the grammar in n-ary form. Second, we show how the number of intermediate symbols generated by this transformation can be substantially reduced through binarization choices. Finally, we describe a two-pass coarse-to-fine parsing approach that prunes the search space using predictions from a subset of the original grammar. In all, parsing time is reduced by 81%. We also describe a coarse-to-fine pruning scheme for forest-based language model reranking that allows a 100-fold increase in beam size while reducing decoding time. The resulting translations improve by 1.3 BLEU.
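
To give a flavor of the binarization-choice point, here is a minimal Python sketch (not the paper's transformation): intermediate symbols are named by the rule prefix they cover, so flat rules sharing a prefix reuse the same symbol, reducing the number of intermediate symbols introduced.

```python
def binarize(lhs, rhs, symbol_cache):
    """Left-branching binarization of a flat n-ary rule (len(rhs) >= 2).
    Intermediate symbols are keyed by the RHS prefix they cover, so rules
    with a common prefix share them."""
    rules, prev = [], rhs[0]
    for i in range(1, len(rhs)):
        if i == len(rhs) - 1:
            target = lhs
        else:
            prefix = tuple(rhs[: i + 1])
            target = symbol_cache.setdefault(prefix, "@" + "_".join(prefix))
        rules.append((target, (prev, rhs[i])))
        prev = target
    return rules

cache = {}
# Two flat NP rules share the (DT, JJ) prefix, so @DT_JJ is created once.
print(binarize("NP", ("DT", "JJ", "NN"), cache))
print(binarize("NP", ("DT", "JJ", "NNS"), cache))
print(len(cache))  # 1 intermediate symbol instead of 2
```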


North American Chapter of the Association for Computational Linguistics | 2009

Hierarchical Search for Parsing

Adam Pauls; Daniel Klein

Both coarse-to-fine and A* parsing use simple grammars to guide search in complex ones. We compare the two approaches in a common, agenda-based framework, demonstrating the tradeoffs and relative strengths of each method. Overall, coarse-to-fine is much faster for moderate levels of search errors, but below a certain threshold A* is superior. In addition, we present the first experiments on hierarchical A* parsing, in which computation of heuristics is itself guided by meta-heuristics. Multi-level hierarchies are helpful in both approaches, but are more effective in the coarse-to-fine case because of accumulated slack in A* heuristics.
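
A rough sketch of the contrast, with hypothetical names and toy scores: coarse-to-fine makes hard pruning decisions from a coarse pass, whereas A* would instead fold the same coarse scores into agenda priorities as admissible heuristics.

```python
def coarse_to_fine_prune(coarse_chart, projection, threshold):
    """Sketch of coarse-to-fine pruning: a fine-grammar item is allowed only
    if its projection into the coarse grammar scored above a posterior
    threshold in the coarse pass."""
    def allow(fine_item):
        span, fine_label = fine_item
        return coarse_chart.get((span, projection[fine_label]), 0.0) >= threshold
    return allow

# Toy coarse chart: posterior of each coarse label per span after the coarse pass.
coarse_chart = {((0, 2), "NP"): 0.9, ((0, 2), "VP"): 0.01}
projection = {"NP-subj": "NP", "NP-obj": "NP", "VP-fin": "VP"}
allow = coarse_to_fine_prune(coarse_chart, projection, threshold=0.1)
print(allow(((0, 2), "NP-subj")))  # True  -- coarse NP survives
print(allow(((0, 2), "VP-fin")))   # False -- coarse VP pruned
```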


Meeting of the Association for Computational Linguistics | 2009

Asynchronous Binarization for Synchronous Grammars

John DeNero; Adam Pauls; Daniel Klein

Binarization of n-ary rules is critical for the efficiency of syntactic machine translation decoding. Because the target side of a rule will generally reorder the source side, it is complex (and sometimes impossible) to find synchronous rule binarizations. However, we show that synchronous binarizations are not necessary in a two-stage decoder. Instead, the grammar can be binarized one way for the parsing stage, then rebinarized in a different way for the reranking stage. Each individual binarization considers only one monolingual projection of the grammar, entirely avoiding the constraints of synchronous binarization and allowing binarizations that are separately optimized for each stage. Compared to n-ary forest reranking, even simple target-side binarization schemes improve overall decoding accuracy.
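
The central claim, that each decoding stage can binarize its own monolingual projection independently, can be illustrated with a toy synchronous rule. In the Python sketch below, the rule encoding and symbol names are invented for illustration (and the intermediate-symbol names ignore rule identity for brevity); the source projection is binarized for the parsing stage and, separately, the target projection for the reranking stage:

```python
def binarize_left(lhs, rhs):
    """Plain left-branching binarization of one monolingual projection;
    nothing synchronous is required."""
    rules, prev = [], rhs[0]
    for i in range(1, len(rhs)):
        target = lhs if i == len(rhs) - 1 else f"@{lhs}_{i}"
        rules.append((target, (prev, rhs[i])))
        prev = target
    return rules

# A synchronous rule whose linked nonterminal slots A0 and A1 are reordered
# across sides, so no synchronous binarization could split both at once.
lhs, src, tgt = "X", ("A0", "b", "A1"), ("A1", "c", "A0")
print(binarize_left(lhs, src))  # binarization used by the parsing stage
print(binarize_left(lhs, tgt))  # independent binarization for reranking
```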


Meeting of the Association for Computational Linguistics | 2011

Faster and Smaller N-Gram Language Models

Adam Pauls; Daniel Klein


Conference of the European Chapter of the Association for Computational Linguistics | 2006

Multi-Document Summarization of Evaluative Text

Giuseppe Carenini; Raymond T. Ng; Adam Pauls


Empirical Methods in Natural Language Processing | 2012

Syntactic Transfer Using a Bilingual Lexicon

Greg Durrett; Adam Pauls; Daniel Klein

Collaboration


Dive into Adam Pauls's collaborations.

Top Co-Authors

Daniel Klein, University of California
Giuseppe Carenini, University of British Columbia
David Chiang, University of Notre Dame
Kevin Knight, University of Southern California
Raymond T. Ng, University of British Columbia
Ashish Vaswani, University of Southern California
Greg Durrett, University of California