Publications


Featured research published by Joke Daems.


New directions in empirical translation process research: exploring the CRITT TPR-DB | 2016

The Effectiveness of Consulting External Resources During Translation and Post-editing of General Text Types

Joke Daems; Michael Carl; Sonia Vandepitte; Robert J. Hartsuiker; Lieve Macken

Consulting external resources is an important aspect of the translation process. Whereas most previous studies were limited to screen-capture software for analyzing the use of external resources, we present a more convenient way to capture these data by combining the functionalities of CASMACAT and Inputlog, two state-of-the-art logging tools. We used these data to compare the types of resources used and the time spent in external resources across 40 from-scratch translation (HT) sessions and 40 post-editing (PE) sessions by 10 master’s students of translation (from English into Dutch). We also took a closer look at the effect of external-resource use on the productivity and quality of the final product. The types of resources consulted were comparable for HT and PE, but more time was spent in external resources when translating. Although search strategies seemed more successful when translating than when post-editing, the quality of the final product was comparable, and post-editing was faster than translation from scratch.
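
Merged keystroke and window-focus logs make the "time spent in external resources" measure straightforward to derive. Below is a hypothetical sketch of that aggregation; the FocusEvent fields and window names are invented for illustration and do not reflect the actual CASMACAT or Inputlog log schema.

```python
from dataclasses import dataclass

@dataclass
class FocusEvent:
    timestamp: float  # seconds since session start
    window: str       # e.g. "editor", "dictionary", "search_engine"

def time_in_external_resources(events: list[FocusEvent], session_end: float) -> dict[str, float]:
    """Sum the time spent in each non-editor window between focus changes."""
    durations: dict[str, float] = {}
    boundaries = events + [FocusEvent(session_end, "editor")]
    for cur, nxt in zip(boundaries, boundaries[1:]):
        if cur.window != "editor":
            durations[cur.window] = durations.get(cur.window, 0.0) + (nxt.timestamp - cur.timestamp)
    return durations

# Example session: the translator leaves the editor twice.
events = [
    FocusEvent(0.0, "editor"),
    FocusEvent(42.0, "dictionary"),
    FocusEvent(65.0, "editor"),
    FocusEvent(120.0, "search_engine"),
    FocusEvent(150.0, "editor"),
]
print(time_in_external_resources(events, session_end=300.0))
# -> {'dictionary': 23.0, 'search_engine': 30.0}
```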


Frontiers in Psychology | 2017

Identifying the Machine Translation Error Types with the Greatest Impact on Post-editing Effort

Joke Daems; Sonia Vandepitte; Robert J. Hartsuiker; Lieve Macken

Translation Environment Tools make translators’ work easier by providing them with term lists, translation memories and machine translation output. Ideally, such tools would automatically predict whether post-editing is more effortful than translating from scratch, and determine whether or not to provide translators with machine translation output. Current machine translation quality estimation systems rely heavily on automatic metrics, even though these do not accurately capture actual post-editing effort. In addition, such systems do not take translator experience into account, even though novices’ translation processes differ from those of professional translators. In this paper, we report on the impact of machine translation errors on various types of post-editing effort indicators, for professional translators as well as student translators. We compare the impact of MT quality on a product effort indicator (HTER) with its impact on various process effort indicators. The translation and post-editing processes of student translators and professional translators were logged with a combination of keystroke logging and eye-tracking, and the MT output was analyzed with a fine-grained translation quality assessment approach. We find that most post-editing effort indicators (product as well as process) are influenced by machine translation quality, but that different error types affect different effort indicators, confirming that a more fine-grained MT quality analysis is needed to correctly estimate actual post-editing effort. Coherence, meaning shifts, and structural issues prove to be good indicators of post-editing effort. The additional impact of experience on these interactions between MT quality and post-editing effort is smaller than expected.
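
HTER, the product effort indicator mentioned above, measures the minimum number of edits needed to turn the MT output into its post-edited version, normalized by the length of the post-edited reference. Here is a minimal sketch, assuming plain word-level Levenshtein edits; full TER/HTER additionally counts block shifts as single edits, which this simplification omits.

```python
def word_edit_distance(hyp: list[str], ref: list[str]) -> int:
    """Levenshtein distance over word tokens (insertions, deletions, substitutions)."""
    d = [[0] * (len(ref) + 1) for _ in range(len(hyp) + 1)]
    for i in range(len(hyp) + 1):
        d[i][0] = i
    for j in range(len(ref) + 1):
        d[0][j] = j
    for i in range(1, len(hyp) + 1):
        for j in range(1, len(ref) + 1):
            cost = 0 if hyp[i - 1] == ref[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[len(hyp)][len(ref)]

def hter(mt_output: str, post_edited: str) -> float:
    """Edits needed to turn the MT output into its post-edited version,
    normalized by the length of the post-edited reference."""
    hyp, ref = mt_output.split(), post_edited.split()
    return word_edit_distance(hyp, ref) / len(ref)

# Example: 2 substitutions over a 7-word reference -> HTER of about 0.29.
print(hter("the machine translation were fast and cheap",
           "the machine translation was fast and reliable"))
```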


MT Summit XIV Workshop on Post-editing Technology and Practice, Proceedings | 2013

Quality as the sum of its parts: a two-step approach for the identification of translation problems and translation quality assessment for HT and MT+PE

Joke Daems; Lieve Macken; Sonia Vandepitte


Fourth Workshop on Post-Editing Technology and Practice, Proceedings | 2015

The impact of machine translation error types on post-editing effort indicators

Joke Daems; Sonia Vandepitte; Robert J. Hartsuiker; Lieve Macken


Meta | 2017

Translation methods and experience: a comparative analysis of human translation and post-editing with students and professional translators

Joke Daems; Sonia Vandepitte; Robert J. Hartsuiker; Lieve Macken


Multiword units in machine translation and translation technology | 2018

How do students cope with machine translation output of Multi-Word Units? An exploratory study

Joke Daems; Michael Carl; Sonia Vandepitte; Robert J. Hartsuiker; Lieve Macken


Linguistica Antverpiensia, New Series – Themes in Translation Studies | 2018

Translationese and Post-editese: How comparable is comparable quality?

Joke Daems; Orphée De Clercq; Lieve Macken


Over Taal | 2017

Automatische vertaalsystemen: hel of hulp? [Automatic translation systems: hell or help?]

Joke Daems


Archive | 2017

Workers of the World? A Digital Approach Towards the International Scope of Belgian Socialist Newspapers, 1885-1940

Christophe Verbruggen; Simon Hengchen; Tecle Zere; Thomas D'haeninck; Joke Daems


Digital Humanities Benelux 2017 | 2017

Towards an IIIF-based corpus management platform

Joke Daems; Sally Chambers; Tecle Zere; Christophe Verbruggen

Collaboration


Top co-authors of Joke Daems.

Michael Carl

Copenhagen Business School
