Arda Tezcan
Ghent University
Publications
Featured research published by Arda Tezcan.
Workshop on Statistical Machine Translation | 2015
Arda Tezcan; Veronique Hoste; Bart Desmet; Lieve Macken
This paper describes the submission of the UGENT-LT3 SCATE system to the WMT15 Shared Task on Quality Estimation (QE), viz. English-Spanish word- and sentence-level QE. We conceived QE as a supervised Machine Learning (ML) problem, designed additional features and combined these with the baseline feature set to estimate quality. The sentence-level QE system re-uses the predictions of the word-level QE system. We experimented with different learning methods and observed improvements over the baseline system for word-level QE with the use of the new features and by combining learning methods into ensembles. For sentence-level QE we show that using a single feature based on word-level predictions can outperform the baseline system, and that combining it with additional features led to further improvements in performance.
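As a rough illustration of the sentence-level idea described in this abstract (a single sentence-level feature derived from word-level predictions), the Python sketch below builds such a feature and fits a generic regressor. The bad-word ratio, the toy data and the choice of random forest are my own assumptions, not the submitted system's actual feature set or learners.

```python
# Minimal sketch (not the paper's exact pipeline): derive one sentence-level QE
# feature from word-level OK/BAD predictions and fit a simple regressor against
# sentence-level quality scores.
from sklearn.ensemble import RandomForestRegressor

def bad_word_ratio(word_labels):
    """Fraction of words predicted as 'BAD' in one sentence."""
    return sum(1 for lab in word_labels if lab == "BAD") / max(len(word_labels), 1)

# Hypothetical toy data: word-level predictions per sentence and HTER-style scores.
word_preds = [["OK", "BAD", "OK", "OK"], ["BAD", "BAD", "OK"], ["OK", "OK", "OK", "OK"]]
sentence_scores = [0.25, 0.60, 0.05]

X = [[bad_word_ratio(labels)] for labels in word_preds]   # one feature per sentence
model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X, sentence_scores)

print(model.predict([[bad_word_ratio(["OK", "BAD", "BAD"])]]))
```

In a fuller setup, this single feature would simply be concatenated with the additional sentence-level features mentioned in the abstract.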
Proceedings of the First Conference on Machine Translation, Volume 2: Shared Task Papers | 2016
Arda Tezcan; Veronique Hoste; Lieve Macken
This paper describes the submission of the UGENT-LT3 SCATE system to the WMT16 Shared Task on Quality Estimation (QE), viz. English-German word- and sentence-level QE. Based on the observation that the data set is homogeneous (all sentences belong to the IT domain), we performed bilingual terminology extraction and added features derived from the resulting term list to the well-performing features of last year's word-level QE task. For sentence-level QE, we analyzed the importance of the features and, based on those insights, extended last year's feature set. We also experimented with different learning methods and ensembles. We present our observations from the different experiments we conducted and our submissions for both tasks.
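The sketch below shows, in a simplified and hypothetical form, how an extracted bilingual term list could be turned into binary word-level features; the term pairs, the feature definition and the function names are invented for illustration and do not reproduce the submitted system.

```python
# Illustrative sketch only: turn an extracted bilingual term list into simple
# binary word-level features (term membership on the target side). The term
# list and feature layout are hypothetical, not the system's actual ones.
term_list = {("printer", "Drucker"), ("driver", "Treiber")}  # (EN, DE) term pairs
target_terms = {de for _, de in term_list}

def term_features(target_tokens):
    """One feature per target token: 1.0 if the token appears in the extracted term list."""
    return [[1.0 if tok in target_terms else 0.0] for tok in target_tokens]

print(term_features(["Der", "Drucker", "Treiber", "fehlt"]))
# [[0.0], [1.0], [1.0], [0.0]]
```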
The Prague Bulletin of Mathematical Linguistics | 2017
Arda Tezcan; Veronique Hoste; Lieve Macken
In this paper we present a Neural Network (NN) architecture for detecting grammatical errors in Statistical Machine Translation (SMT) using monolingual morpho-syntactic word representations in combination with surface and syntactic context windows. We test our approach on two language pairs and two tasks, namely detecting grammatical errors and predicting overall post-editing effort. Our results show that this approach is not only able to accurately detect grammatical errors but also performs well as a quality estimation system for predicting overall post-editing effort, which is characterised by all types of MT errors. Furthermore, we show that this approach is portable to other languages.
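To make the context-window idea concrete, here is a minimal PyTorch sketch of a word-level error classifier over concatenated word- and PoS-window embeddings. The framework choice, vocabulary sizes, window width and layer sizes are illustrative assumptions, not the architecture reported in the paper.

```python
# Minimal sketch: classify each target word as correct/erroneous from the
# concatenated embeddings of a surface context window and a PoS-tag context
# window. All sizes below are illustrative assumptions.
import torch
import torch.nn as nn

WINDOW = 2                      # words/tags on each side of the focus word
WORD_VOCAB, POS_VOCAB = 10_000, 50

class WordErrorClassifier(nn.Module):
    def __init__(self, word_dim=64, pos_dim=16, hidden=128):
        super().__init__()
        self.word_emb = nn.Embedding(WORD_VOCAB, word_dim)
        self.pos_emb = nn.Embedding(POS_VOCAB, pos_dim)
        in_dim = (2 * WINDOW + 1) * (word_dim + pos_dim)
        self.net = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(), nn.Linear(hidden, 2))

    def forward(self, word_window, pos_window):
        # word_window, pos_window: (batch, 2*WINDOW+1) index tensors
        w = self.word_emb(word_window).flatten(1)
        p = self.pos_emb(pos_window).flatten(1)
        return self.net(torch.cat([w, p], dim=1))   # logits: OK vs. error

model = WordErrorClassifier()
logits = model(torch.randint(0, WORD_VOCAB, (4, 5)), torch.randint(0, POS_VOCAB, (4, 5)))
print(logits.shape)  # torch.Size([4, 2])
```

Richer morpho-syntactic representations (e.g. dependency-based windows) would slot in as additional embedding inputs concatenated before the feed-forward layers.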
Trends in e-tools and resources for translators and interpreters | 2016
Arda Tezcan; Veronique Hoste; Lieve Macken
Quality Estimation (QE) and error analysis of Machine Translation (MT) output remain active areas in Natural Language Processing (NLP) research. Many recent efforts have focused on Machine Learning (ML) systems to estimate MT quality, translation errors, post-editing speed or post-editing effort. As the accuracy of such ML tasks relies on the availability of corpora, there is an increasing need for large corpora of machine translations annotated with translation errors, together with error annotation guidelines to produce consistent annotations. Drawing on previous work on translation error taxonomies, we present the SCATE (Smart Computer-aided Translation Environment) MT error taxonomy, which is hierarchical in nature and is based upon the familiar notions of accuracy and fluency. In the SCATE annotation framework, we annotate fluency errors in the target text and accuracy errors in both the source and target text, while linking the source and target annotations. We also propose a novel method for alignment-based Inter-Annotator Agreement (IAA) analysis and show that this method can be used effectively on large annotation sets. Using the SCATE taxonomy and guidelines, we create the first corpus of MT errors for the English-Dutch language pair, consisting of Statistical Machine Translation (SMT) and Rule-Based Machine Translation (RBMT) errors, which is a valuable resource not only for NLP tasks in this field but also for future study of the relationship between MT errors and post-editing effort. Finally, we analyse the error profiles of the SMT and the RBMT systems used in this study and compare the quality of these two different MT architectures based on the error types.
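As a simplified stand-in for the alignment-based IAA analysis mentioned in this abstract, the sketch below scores agreement between two annotators as span-level F1, counting overlapping spans with the same error category as matches. This is an assumption-laden approximation for illustration, not the paper's actual metric.

```python
# Simplified stand-in for alignment-based IAA: each annotator's output is a set
# of (start, end, category) spans; agreement is span-level F1, where two spans
# match if they overlap and carry the same category.
def spans_overlap(a, b):
    return a[0] < b[1] and b[0] < a[1]

def span_f1(ann_a, ann_b):
    precision = sum(
        any(spans_overlap(a[:2], b[:2]) and a[2] == b[2] for b in ann_b) for a in ann_a
    ) / len(ann_a) if ann_a else 0.0
    recall = sum(
        any(spans_overlap(b[:2], a[:2]) and a[2] == b[2] for a in ann_a) for b in ann_b
    ) / len(ann_b) if ann_b else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

annotator_1 = [(0, 4, "fluency/grammar"), (10, 15, "accuracy/mistranslation")]
annotator_2 = [(1, 4, "fluency/grammar"), (20, 25, "accuracy/omission")]
print(round(span_f1(annotator_1, annotator_2), 2))  # 0.5
```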
Language Resources and Evaluation | 2018
Laura Van Brussel; Arda Tezcan; Lieve Macken
Archive | 2018
Vincent Vandeghinste; Tom Vanallemeersch; Bram Bulté; Liesbeth Augustinus; Frank Van Eynde; Joris Pelemans; Lyan Verwimp; Patrick Wambacq; Geert Heyman; Marie-Francine Moens; Iulianna van der Lek-Ciudin; Frieda Steurs; Ayla Rigouts Terryn; Els Lefever; Arda Tezcan; Lieve Macken; Veronique Hoste; Sven Coppers; Jens Brulmans; Jan Van den Bergh; Kris Luyten; Karin Coninx
Archive | 2018
Arda Tezcan
Multiword units in machine translation and translation technology | 2018
Lieve Macken; Arda Tezcan
Proceedings EAMT 2017 | 2017
Vincent Vandeghinste; Tom Vanallemeersch; Liesbeth Augustinus; Frank Van Eynde; Joris Pelemans; Lyan Verwimp; Patrick Wambacq; Geert Heyman; Marie-Francine Moens; Iulianna van der Lek-Ciudin; Frieda Steurs; Ayla Rigouts Terryn; Els Lefever; Arda Tezcan; Lieve Macken; Sven Coppers; Jan Van den Bergh; Kris Luyten; Karin Coninx
Proceedings of the 19th Annual Conference of the European Association for Machine Translation | 2016
Arda Tezcan; Veronique Hoste; Lieve Macken