Fabienne Braune
University of Stuttgart
Publications
Featured research published by Fabienne Braune.
Proceedings of the First Conference on Machine Translation: Volume 2, Shared Task Papers | 2016
Jan-Thorsten Peter; Tamer Alkhouli; Hermann Ney; Matthias Huck; Fabienne Braune; Alexander M. Fraser; Aleš Tamchyna; Ondrej Bojar; Barry Haddow; Rico Sennrich; Frédéric Blain; Lucia Specia; Jan Niehues; Alex Waibel; Alexandre Allauzen; Lauriane Aufrant; Franck Burlot; Elena Knyazeva; Thomas Lavergne; François Yvon; Marcis Pinnis; Stella Frank
This paper describes the joint submission of the QT21 and HimL projects for the English→Romanian translation task of the ACL 2016 First Conference on Machine Translation (WMT 2016). The submission is a system combination which combines twelve different statistical machine translation systems provided by the different groups (RWTH Aachen University, LMU Munich, Charles University in Prague, University of Edinburgh, University of Sheffield, Karlsruhe Institute of Technology, LIMSI, University of Amsterdam, Tilde). The systems are combined using RWTH’s system combination approach. The final submission shows an improvement of 1.0 BLEU compared to the best single system on newstest2016.
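RWTH's confusion-network-based system combination is not spelled out in this abstract; purely as a toy illustration of combining several systems' outputs, the sketch below picks, per sentence, the hypothesis that agrees most with the other systems' hypotheses (a crude consensus criterion; all data and names are invented).

```python
# Toy sentence-level consensus selection over outputs of several MT systems.
# This is NOT the confusion-network-based RWTH system combination used in the
# paper; it only illustrates the general idea of combining system outputs.

def unigram_f1(hyp, ref):
    """Crude similarity between two token lists (unigram F1)."""
    hyp_set, ref_set = set(hyp), set(ref)
    overlap = len(hyp_set & ref_set)
    if overlap == 0:
        return 0.0
    p = overlap / len(hyp_set)
    r = overlap / len(ref_set)
    return 2 * p * r / (p + r)

def consensus_select(system_outputs):
    """For one source sentence, pick the hypothesis that agrees most
    with the other systems' hypotheses (minimum-Bayes-risk flavour)."""
    best, best_score = None, -1.0
    for i, hyp in enumerate(system_outputs):
        others = [h for j, h in enumerate(system_outputs) if j != i]
        score = sum(unigram_f1(hyp, o) for o in others) / len(others)
        if score > best_score:
            best, best_score = hyp, score
    return best

# Hypothetical outputs of three systems for the same source sentence.
outputs = [
    "acest lucru este un test".split(),
    "acesta este un test".split(),
    "acesta este un examen".split(),
]
print(" ".join(consensus_select(outputs)))
```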
International Joint Conference on Natural Language Processing | 2015
Nina Seemann; Fabienne Braune; Andreas Maletti
We achieve significant improvements in several syntax-based machine translation experiments using a string-to-tree variant of multi bottom-up tree transducers. Our new parameterized rule extraction algorithm extracts string-to-tree rules that can be discontiguous and non-minimal in contrast to existing algorithms for the tree-to-tree setting. The obtained models significantly outperform the string-to-tree component of the Moses framework in a large-scale empirical evaluation on several known translation tasks. Our linguistic analysis reveals the remarkable benefits of discontiguous and non-minimal rules.
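The actual extraction algorithm operates on multi bottom-up tree transducer rules; the following sketch only illustrates, with invented names and an invented example rule, how a string-to-tree rule with a discontiguous target side (several target tree fragments) might be represented as a data structure.

```python
# Illustrative-only data structure for a string-to-tree rule whose target side
# consists of several tree fragments (i.e. is discontiguous), in the spirit of
# multi bottom-up tree transducer rules. Names and the example rule are invented.

from dataclasses import dataclass

@dataclass
class Tree:
    label: str
    children: list  # Tree nodes, terminal strings, or ints linking to source gaps

@dataclass
class Rule:
    source: list            # source tokens; ints mark gaps (non-terminal links)
    target_fragments: list  # several target tree fragments -> discontiguous rule

# Toy German->English rule: "nicht <X0>" maps to two separate target fragments,
# the auxiliary "does" and the negated verb phrase "not <X0>".
rule = Rule(
    source=["nicht", 0],
    target_fragments=[
        Tree("AUX", ["does"]),
        Tree("VP", [Tree("RB", ["not"]), 0]),
    ],
)
print(rule)
```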
The Prague Bulletin of Mathematical Linguistics | 2014
Aleš Tamchyna; Fabienne Braune; Alexander M. Fraser; Marine Carpuat; Hal Daumé; Chris Quirk
Current state-of-the-art statistical machine translation (SMT) relies on simple feature functions which make independence assumptions at the level of phrases or hierarchical rules. However, it is well-known that discriminative models can benefit from rich features extracted from the source sentence context outside of the applied phrase or hierarchical rule, which is available at decoding time. We present a framework for the open-source decoder Moses that allows discriminative models over source context to easily be trained on a large number of examples and then be included as feature functions in decoding.
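The paper's framework lives inside the Moses decoder; the toy Python sketch below (all names invented) only illustrates the underlying idea: a classifier scores each candidate translation from features of the surrounding source words, and its log-probability becomes one extra feature in the decoder's linear model.

```python
# Toy sketch of source-context discriminative features: a classifier scores
# each candidate phrase translation using features of the surrounding source
# words, and its log-probability is added as one more feature in the decoder's
# linear model. This is not the Moses integration described in the paper.

import math
from collections import defaultdict

class ContextModel:
    def __init__(self):
        self.weights = defaultdict(float)  # (feature, target_phrase) -> weight

    def features(self, src_words, span, target_phrase):
        left = src_words[span[0] - 1] if span[0] > 0 else "<s>"
        right = src_words[span[1]] if span[1] < len(src_words) else "</s>"
        phrase = " ".join(src_words[span[0]:span[1]])
        return [f"src={phrase}", f"left={left}", f"right={right}"]

    def score(self, src_words, span, target_phrase, candidates):
        """Log P(target_phrase | source context), softmax over candidates."""
        def dot(tgt):
            feats = self.features(src_words, span, tgt)
            return sum(self.weights[(f, tgt)] for f in feats)
        logits = {t: dot(t) for t in candidates}
        z = math.log(sum(math.exp(v) for v in logits.values()))
        return logits[target_phrase] - z

model = ContextModel()
src = "der kleine Hund bellt".split()
cands = ["the small dog", "the little dog"]
feature_value = model.score(src, (0, 3), "the small dog", cands)
print(feature_value)  # would be added to the decoder's feature vector
```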
Empirical Methods in Natural Language Processing | 2015
Fabienne Braune; Nina Seemann; Alexander M. Fraser
In syntax-based machine translation, rule selection is the task of choosing the correct target side of a translation rule among rules with the same source side. We define a discriminative rule selection model for systems that have syntactic annotation on the target language side (string-to-tree). This is a new and clean way to integrate soft source syntactic constraints into string-to-tree systems as features of the rule selection model. We release our implementation as part of Moses.
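As a rough, invented illustration of the idea (not the Moses implementation released with the paper): rule selection can be viewed as a classifier over the target sides that share one source side, with the syntactic label over the source span entering as a soft feature rather than a hard filter.

```python
# Rough illustration of rule selection as classification: among translation
# rules that share the same source side, score each target side using features
# of the source context, including the syntactic label covering the source
# span as a *soft* constraint. Not the paper's implementation; names are invented.

def rule_selection_features(source_span_label, source_words, target_side):
    # The source-syntax label is just one feature among others, so a mismatch
    # lowers the score instead of filtering the rule out (soft constraint).
    return [
        f"srclabel={source_span_label}^tgt={target_side}",
        f"srcwords={'_'.join(source_words)}^tgt={target_side}",
    ]

def select_target_side(weights, source_span_label, source_words, candidates):
    def score(tgt):
        return sum(weights.get(f, 0.0)
                   for f in rule_selection_features(source_span_label,
                                                    source_words, tgt))
    return max(candidates, key=score)

# Toy example: two target sides compete for the same source side "Bank".
weights = {"srclabel=NP^tgt=(NP (NN bank))": 0.7,
           "srclabel=NP^tgt=(NP (NN bench))": 0.2}
print(select_target_side(weights, "NP", ["Bank"],
                         ["(NP (NN bank))", "(NP (NN bench))"]))
```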
Proceedings of the First Conference on Machine Translation: Volume 1, Research Papers | 2016
Fabienne Braune; Alexander M. Fraser; Hal Daumé; Aleš Tamchyna
Training discriminative rule selection models is usually expensive because of the very large size of the hierarchical grammar. Previous approaches reduced the training costs either by (i) using models that are local to the source side of the rules or (ii) by heavily pruning out negative samples. Moreover, all previous evaluations were performed on small scale translation tasks, containing at most 250,000 sentence pairs. We propose two contributions to discriminative rule selection. First, we test previous approaches on two French-English translation tasks in domains for which only limited resources are available and show that they fail to improve translation quality. To improve on such tasks, we propose a rule selection model that is (i) global with rich label-dependent features (ii) trained with all available negative samples. Our global model yields significant improvements, up to 1 BLEU point, over previously proposed rule selection models. Second, we successfully scale rule selection models to large translation tasks but have so far failed to produce significant improvements in BLEU on these tasks.
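A minimal, invented sketch of the training-data side of such a setup: for every source side in the grammar, the target side used in the oracle derivation is a positive example and every competing target side is kept as a negative example (no pruning), with features conjoined to the candidate target side ("label-dependent").

```python
# Illustrative-only sketch of building training data for a rule selection
# model with all negative samples: for each grammar source side, the target
# side used in the oracle derivation is the positive example and every
# competing target side is kept as a negative example (no pruning).
# Grammar, oracle, and feature names are invented.

# Hypothetical grammar: source side -> list of possible target sides.
grammar = {
    ("die", "X1", "Bank"): ["the X1 bank", "the X1 bench"],
    ("am", "Morgen"): ["in the morning", "at the morning"],
}

# Hypothetical oracle choices derived from word-aligned training data.
oracle = {
    ("die", "X1", "Bank"): "the X1 bank",
    ("am", "Morgen"): "in the morning",
}

def label_dependent_features(source_side, target_side):
    # Conjoin source-side information with the candidate target side so that
    # learned weights are specific to each target label ("label-dependent").
    return [f"src={'_'.join(source_side)}^tgt={target_side}"]

training_examples = []
for src, target_sides in grammar.items():
    for tgt in target_sides:  # keep ALL competing target sides
        label = 1 if oracle[src] == tgt else 0
        training_examples.append((label_dependent_features(src, tgt), label))

for feats, label in training_examples:
    print(label, feats)
```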
The Prague Bulletin of Mathematical Linguistics | 2013
Robin Kurtz; Nina Seemann; Fabienne Braune; Andreas Maletti
The development of accurate machine translation systems requires detailed analyses of the recurring translation mistakes. However, the manual inspection of the decoder log files is a daunting task because of their sheer size and their unwieldy format, in which the relevant data is widely scattered. For all major platforms, DIMwid offers a graphical user interface that allows the quick inspection of the decoder stacks or chart cells for a given span in a uniform way. Currently, DIMwid can process the decoder log files of the phrase-based stack decoder and the syntax-based chart decoder inside the Moses framework.
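DIMwid itself is a graphical tool that reads Moses log files; as a minimal text-only sketch of its core idea, the snippet below (with invented chart entries, not the actual Moses log format) groups decoder items by the source span they cover so that everything for a given span can be inspected at once.

```python
# Minimal text-only sketch of the core idea behind span-based inspection:
# index decoder chart entries by the source span they cover, then list all
# items for a requested span. The entries are invented for illustration.

from collections import defaultdict

# (start, end, target_hypothesis, score) -- hypothetical chart entries.
entries = [
    (0, 2, "the small", -1.3),
    (0, 2, "the little", -1.5),
    (2, 3, "dog", -0.4),
    (0, 3, "the small dog", -2.1),
]

by_span = defaultdict(list)
for start, end, hyp, score in entries:
    by_span[(start, end)].append((score, hyp))

def show(span):
    for score, hyp in sorted(by_span[span]):
        print(f"[{span[0]},{span[1]}] {score:.2f}  {hyp}")

show((0, 2))  # inspect everything covering span [0,2)
```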
International Conference on Computational Linguistics | 2010
Fabienne Braune; Alexander M. Fraser
Meeting of the Association for Computational Linguistics | 2013
Fabienne Braune; Nina Seemann; Daniel Quernheim; Andreas Maletti
Archive | 2012
Fabienne Braune; Anita Gojun; Alexander M. Fraser
Language Resources and Evaluation | 2014
Fabienne Braune; Daniel Bauer; Kevin Knight