Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Michael J. Denkowski is active.

Publication


Featured research published by Michael J. Denkowski.


Workshop on Statistical Machine Translation | 2014

Meteor Universal: Language Specific Translation Evaluation for Any Target Language

Michael J. Denkowski; Alon Lavie

This paper describes Meteor Universal, released for the 2014 ACL Workshop on Statistical Machine Translation. Meteor Universal brings language specific evaluation to previously unsupported target languages by (1) automatically extracting linguistic resources (paraphrase tables and function word lists) from the bitext used to train MT systems and (2) using a universal parameter set learned from pooling human judgments of translation quality from several language directions. Meteor Universal is shown to significantly outperform baseline BLEU on two new languages, Russian (WMT13) and Hindi (WMT14).
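
Although the released Meteor Universal tooling performs resource extraction itself, a minimal sketch of one of the two steps described above, identifying function words by a relative-frequency threshold over the target side of the bitext, is shown below; the function name, threshold value, and toy corpus are illustrative assumptions, not taken from the release.

```python
from collections import Counter

def extract_function_words(target_sentences, rel_freq_threshold=1e-3):
    """Mark words above a relative-frequency threshold as function words.

    A simplified stand-in for Meteor Universal's resource extraction step;
    the threshold value is an illustrative assumption, not a released default.
    """
    counts = Counter()
    total = 0
    for sentence in target_sentences:
        tokens = sentence.lower().split()
        counts.update(tokens)
        total += len(tokens)
    return {word for word, c in counts.items() if c / total >= rel_freq_threshold}

# Toy usage: the most frequent words surface as function-word candidates.
corpus = [
    "the cat sat on the mat",
    "the dog ran to the park",
    "a cat and a dog played in the park",
]
print(sorted(extract_function_words(corpus, rel_freq_threshold=0.05)))
```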


Machine Translation (journal) | 2009

The Meteor metric for automatic evaluation of machine translation

Alon Lavie; Michael J. Denkowski

The Meteor Automatic Metric for Machine Translation evaluation, originally developed and released in 2004, was designed with the explicit goal of producing sentence-level scores which correlate well with human judgments of translation quality. Several key design decisions were incorporated into Meteor in support of this goal. In contrast with IBM’s BLEU, which uses only precision-based features, Meteor uses and emphasizes recall in addition to precision, a property that has been confirmed in several metric evaluations as being critical for high correlation with human judgments. Meteor also addresses the problem of reference translation variability by utilizing flexible word matching, allowing for morphological variants and synonyms to be taken into account as legitimate correspondences. Furthermore, the feature ingredients within Meteor are parameterized, allowing for the tuning of the metric’s free parameters in search of values that result in optimal correlation with human judgments. Optimal parameters can be separately tuned for different types of human judgments and for different languages. We discuss the initial design of the Meteor metric, subsequent improvements, and performance in several independent evaluations in recent years.
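
As a rough sketch of the parameterization described above (following the Meteor 1.x family; the exact formulation has varied across releases), the metric combines unigram precision P and recall R over the flexible matches with a fragmentation penalty based on the number of matched chunks ch and matched unigrams m:

```latex
% Weighted harmonic mean of precision and recall (alpha is a tunable parameter)
F_{mean} = \frac{P \cdot R}{\alpha \cdot P + (1 - \alpha) \cdot R}
% Fragmentation penalty (beta and gamma are tunable parameters)
Pen = \gamma \cdot \left( \frac{ch}{m} \right)^{\beta}
% Final segment-level score
Score = (1 - Pen) \cdot F_{mean}
```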


Conference of the European Chapter of the Association for Computational Linguistics | 2014

Learning from Post-Editing: Online Model Adaptation for Statistical Machine Translation

Michael J. Denkowski; Chris Dyer; Alon Lavie

Using machine translation output as a starting point for human translation has become an increasingly common application of MT. We propose and evaluate three computationally efficient online methods for updating statistical MT systems in a scenario where post-edited MT output is constantly being returned to the system: (1) adding new rules to the translation model from the post-edited content, (2) updating a Bayesian language model of the target language that is used by the MT system, and (3) updating the MT system’s discriminative parameters with a MIRA step. Individually, these techniques can substantially improve MT quality, even over strong baselines. Moreover, we see super-additive improvements when all three techniques are used in tandem.
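
For the third technique, a minimal sketch of a 1-best MIRA-style (passive-aggressive) weight update is given below; the feature representation, cost values, and aggressiveness constant C are illustrative assumptions, not the exact update used in the paper.

```python
def mira_update(weights, feats_oracle, feats_hyp, cost_hyp, cost_oracle=0.0, C=0.01):
    """One 1-best MIRA-style (passive-aggressive) update of sparse weights.

    weights, feats_oracle, feats_hyp: dicts mapping feature name -> value.
    cost_*: task loss of each hypothesis, e.g. 1 - sentence-level BLEU
    against the post-edited translation (an illustrative choice).
    """
    keys = set(feats_oracle) | set(feats_hyp)
    # Feature difference between the oracle and the model-best hypothesis.
    delta = {k: feats_oracle.get(k, 0.0) - feats_hyp.get(k, 0.0) for k in keys}

    # Hinge loss: the model's preference for the hypothesis plus its extra cost.
    margin = -sum(weights.get(k, 0.0) * d for k, d in delta.items())
    loss = margin + (cost_hyp - cost_oracle)
    if loss <= 0.0:
        return weights  # no margin violation, no update

    norm_sq = sum(d * d for d in delta.values())
    if norm_sq == 0.0:
        return weights
    step = min(C, loss / norm_sq)

    for k, d in delta.items():
        weights[k] = weights.get(k, 0.0) + step * d
    return weights

# Toy usage: nudge weights toward the features of the post-edit-like oracle.
w = {"lm": 0.5, "tm": 0.3}
w = mira_update(w, feats_oracle={"lm": 1.2, "tm": 0.9},
                feats_hyp={"lm": 1.0, "tm": 1.1}, cost_hyp=0.4)
print(w)
```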


Conference of the European Chapter of the Association for Computational Linguistics | 2014

Real Time Adaptive Machine Translation for Post-Editing with cdec and TransCenter

Michael J. Denkowski; Alon Lavie; Isabel Lacruz; Chris Dyer

Using machine translation output as a starting point for human translation has recently gained traction in the translation community. This paper describes cdec Realtime, a framework for building adaptive MT systems that learn from post-editor feedback, and TransCenter, a web-based translation interface that connects users to Realtime systems and logs post-editing activity. This combination allows the straightforward deployment of MT systems specifically for post-editing and analysis of human translator productivity when working with these systems. All tools, as well as actual post-editing data collected as part of a validation experiment, are freely available under an open source license.
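
As a schematic of the loop such a setup implements (translate, collect the post-edit, update the models, log the activity), a minimal sketch is shown below; the class and method names are hypothetical, not the actual cdec Realtime or TransCenter APIs.

```python
class AdaptivePostEditingSession:
    """Schematic adaptive post-editing loop (hypothetical API, not cdec Realtime's)."""

    def __init__(self, translate_fn, learn_fn):
        self.translate_fn = translate_fn  # source sentence -> MT hypothesis
        self.learn_fn = learn_fn          # (source, post-edit) -> model update
        self.log = []                     # post-editing activity, TransCenter-style

    def process(self, source, post_edit_fn):
        hypothesis = self.translate_fn(source)  # 1. translate
        post_edit = post_edit_fn(hypothesis)    # 2. human post-edits the output
        self.learn_fn(source, post_edit)        # 3. system learns immediately
        self.log.append({"source": source, "mt": hypothesis, "post_edit": post_edit})
        return post_edit

# Toy usage with stub translation and learning functions.
session = AdaptivePostEditingSession(
    translate_fn=lambda src: src.upper(),  # stand-in MT system
    learn_fn=lambda src, pe: None,         # stand-in model update
)
session.process("un exemple", post_edit_fn=lambda hyp: hyp.lower())
print(session.log)
```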


Workshop on Statistical Machine Translation | 2011

Meteor 1.3: Automatic Metric for Reliable Optimization and Evaluation of Machine Translation Systems

Michael J. Denkowski; Alon Lavie


Workshop on Statistical Machine Translation | 2010

METEOR-NEXT and the METEOR Paraphrase Tables: Improved Evaluation Support for Five Target Languages

Michael J. Denkowski; Alon Lavie


North American Chapter of the Association for Computational Linguistics | 2010

Extending the METEOR Machine Translation Evaluation Metric to the Phrase Level

Michael J. Denkowski; Alon Lavie


North American Chapter of the Association for Computational Linguistics | 2010

Turker-Assisted Paraphrasing for English-Arabic Machine Translation

Michael J. Denkowski; Hassan Al-Haj; Alon Lavie


Archive | 2010

Choosing the Right Evaluation for Machine Translation: an Examination of Annotator and Automatic Metric Performance on Human Judgment Tasks

Michael J. Denkowski; Alon Lavie


Workshop on Statistical Machine Translation | 2012

The CMU-Avenue French-English Translation System

Michael J. Denkowski; Greg Hanneman; Alon Lavie

Collaboration


Dive into Michael J. Denkowski's collaborations.

Top Co-Authors

Alon Lavie (Carnegie Mellon University)
Chris Dyer (Carnegie Mellon University)
Greg Hanneman (Carnegie Mellon University)
Austin Matthews (Carnegie Mellon University)
Hassan Al-Haj (Carnegie Mellon University)
Victor Chahuneau (Carnegie Mellon University)
Waleed Ammar (Carnegie Mellon University)
Wang Ling (Carnegie Mellon University)