Juri Ganitkevitch
Johns Hopkins University
Publications
Featured research published by Juri Ganitkevitch.
International Joint Conference on Natural Language Processing | 2015
Ellie Pavlick; Pushpendre Rastogi; Juri Ganitkevitch; Benjamin Van Durme; Chris Callison-Burch
We present a new release of the Paraphrase Database. PPDB 2.0 includes a discriminatively re-ranked set of paraphrases that achieves a higher correlation with human judgments than PPDB 1.0's heuristic rankings. Each paraphrase pair in the database now also includes fine-grained entailment relations, word embedding similarities, and style annotations.
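The discriminative re-ranking described above can be sketched as a linear scoring function over per-pair features. This is a minimal illustration, not PPDB 2.0's actual model: the feature names and weights below are made up for the example.

```python
# Hypothetical feature weights, standing in for a learned re-ranking model.
WEIGHTS = {"pivot_prob": 1.0, "embedding_sim": 2.0}

def rerank(pairs, weights=WEIGHTS):
    """Sort (phrase, paraphrase, features) triples by a linear feature score."""
    def score(triple):
        _, _, feats = triple
        return sum(weights.get(name, 0.0) * value for name, value in feats.items())
    return sorted(pairs, key=score, reverse=True)

pairs = [
    ("thrown into jail", "arrested", {"pivot_prob": 0.4, "embedding_sim": 0.5}),
    ("thrown into jail", "imprisoned", {"pivot_prob": 0.3, "embedding_sim": 0.9}),
]
ranked = rerank(pairs)
# "imprisoned" outranks "arrested" here because of its higher embedding similarity.
```

With richer features (entailment relation, style annotations) and weights fit to human judgments, the same scoring scheme yields the improved rankings the abstract describes.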
International Joint Conference on Natural Language Processing | 2015
Ellie Pavlick; Juri Ganitkevitch; Tsz Ping Chan; Xuchen Yao; Benjamin Van Durme; Chris Callison-Burch
The validity of applying paraphrase rules depends on the domain of the text to which they are applied. We develop a novel method for extracting domain-specific paraphrases. We adapt the bilingual pivoting paraphrase method to bias the training data toward our target domain of biology. Our best model achieves higher precision while retaining complete recall, giving a 10% relative improvement in AUC.
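The bilingual pivoting method the abstract adapts scores a paraphrase pair by marginalizing over shared foreign translations: p(e2 | e1) = Σ_f p(e2 | f) · p(f | e1). A minimal sketch, with made-up phrase-translation probabilities for illustration:

```python
from collections import defaultdict

def pivot_paraphrase_scores(e_given_f, f_given_e):
    """Score English paraphrase pairs by marginalizing over foreign
    pivot phrases: p(e2 | e1) = sum_f p(e2 | f) * p(f | e1)."""
    scores = defaultdict(float)
    for e1, pivots in f_given_e.items():
        for f, p_f_e1 in pivots.items():
            for e2, p_e2_f in e_given_f.get(f, {}).items():
                if e2 != e1:  # a phrase is not its own paraphrase
                    scores[(e1, e2)] += p_e2_f * p_f_e1
    return dict(scores)

# Toy translation tables (probabilities invented for the example).
f_given_e = {"thrown into jail": {"festgenommen": 0.6, "inhaftiert": 0.4}}
e_given_f = {
    "festgenommen": {"arrested": 0.7, "thrown into jail": 0.3},
    "inhaftiert": {"imprisoned": 0.8, "thrown into jail": 0.2},
}

scores = pivot_paraphrase_scores(e_given_f, f_given_e)
```

Biasing the bilingual training data toward a target domain, as the paper does, changes these translation tables and hence which paraphrases score highly.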
Conference of the European Chapter of the Association for Computational Linguistics | 2014
Jonathan Weese; Juri Ganitkevitch; Chris Callison-Burch
Paraphrase evaluation is typically done either manually or through indirect, task-based evaluation. We introduce an intrinsic evaluation, PARADIGM, which measures the goodness of paraphrase collections that are represented using synchronous grammars. We formulate two measures that evaluate these paraphrase grammars using gold-standard sentential paraphrases drawn from a monolingual parallel corpus. The first measure calculates how often a paraphrase grammar is able to synchronously parse the sentence pairs in the corpus. The second measure enumerates paraphrase rules from the monolingual parallel corpus and calculates the overlap between this reference paraphrase collection and the paraphrase resource being evaluated. We demonstrate the use of these evaluation metrics on paraphrase collections derived from three different data types: multiple translations of classic French novels, comparable sentence pairs drawn from different newspapers, and bilingual parallel corpora. We show that PARADIGM correlates with human judgments more strongly than BLEU on a task-based evaluation of paraphrase quality.
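The second measure above is a set-overlap comparison between the reference rules enumerated from the corpus and the rules in the collection being evaluated. A hedged sketch of that idea (the rule pairs below are invented; PARADIGM's actual rules are synchronous grammar rules, not bare phrase pairs):

```python
def rule_overlap(reference_rules, candidate_rules):
    """Return precision/recall-style overlap between a reference rule set
    and the paraphrase collection under evaluation."""
    reference = set(reference_rules)
    candidate = set(candidate_rules)
    shared = reference & candidate
    recall = len(shared) / len(reference) if reference else 0.0
    precision = len(shared) / len(candidate) if candidate else 0.0
    return precision, recall

reference = {("thrown into jail", "imprisoned"), ("car", "automobile")}
candidate = {("thrown into jail", "imprisoned"), ("car", "vehicle"), ("big", "large")}
precision, recall = rule_overlap(reference, candidate)
```

Here one of the three candidate rules appears in the reference set (precision 1/3) and one of the two reference rules is recovered (recall 1/2).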
North American Chapter of the Association for Computational Linguistics | 2013
Juri Ganitkevitch; Benjamin Van Durme; Chris Callison-Burch
Meeting of the Association for Computational Linguistics | 2010
Chris Dyer; Adam Lopez; Juri Ganitkevitch; Jonathan Weese; Ferhan Türe; Phil Blunsom; Hendra Setiawan; Vladimir Eidelman; Philip Resnik
Empirical Methods in Natural Language Processing | 2011
Juri Ganitkevitch; Chris Callison-Burch; Courtney Napoles; Benjamin Van Durme
Language Resources and Evaluation | 2014
Juri Ganitkevitch; Chris Callison-Burch
Workshop on Statistical Machine Translation | 2012
Juri Ganitkevitch; Yuan Cao; Jonathan Weese; Matt Post; Chris Callison-Burch
Workshop on Statistical Machine Translation | 2011
Jonathan Weese; Juri Ganitkevitch; Chris Callison-Burch; Matt Post; Adam Lopez
Meeting of the Association for Computational Linguistics | 2011
Courtney Napoles; Chris Callison-Burch; Juri Ganitkevitch; Benjamin Van Durme