Publication


Featured research published by Sharon O'Brien.


Machine Translation | 2010

Eye tracking as an MT evaluation technique

Stephen Doherty; Sharon O'Brien; Michael Carl

Eye tracking has been used successfully for some time as a technique for measuring cognitive load in reading, psycholinguistics, writing, and language acquisition. To our knowledge, however, its application as a technique for measuring the reading ease of MT output has not yet been tested. We report here on a preliminary study testing the use and validity of an eye-tracking methodology as a means of semi-automatically evaluating machine translation output. Fifty French machine-translated sentences, 25 rated as excellent and 25 rated as poor in an earlier human evaluation, were selected. Ten native speakers of French were instructed to read the MT sentences for comprehensibility. Their eye gaze data were recorded non-invasively using a Tobii 1750 eye tracker. Average gaze time and fixation count were found to be higher for the “bad” sentences, while average fixation duration and pupil dilation were not found to differ substantially between output rated as good and output rated as bad. HTER scores were also found to correlate well with gaze time and fixation count, but not with pupil dilation or fixation duration. We conclude that the eye-tracking data, in particular gaze time and fixation count, correlate reasonably well with human evaluation of MT output, but that fixation duration and pupil dilation may be less reliable indicators of reading difficulty for MT output. We also conclude that eye tracking has promise as a semi-automatic MT evaluation technique, which does not require bilingual knowledge, and which can potentially tap into the end users’ experience of machine translation output.
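As a rough illustration of the kind of analysis the study describes, the sketch below correlates per-sentence gaze metrics with human quality ratings using a Pearson correlation. All variable names and data values are hypothetical, not the study's data.

```python
# Sketch: correlate per-sentence eye-tracking metrics with MT quality ratings.
# All values are hypothetical; the study used a Tobii 1750 and 50 sentences.
from scipy.stats import pearsonr

# Hypothetical per-sentence measurements: gaze time (ms) and fixation count,
# paired with a human evaluation score (higher = better MT output).
gaze_time = [1200, 2100, 980, 3400, 1500, 2900]
fixation_count = [8, 14, 6, 22, 9, 18]
human_score = [4, 2, 4, 1, 3, 2]

for name, metric in [("gaze time", gaze_time), ("fixation count", fixation_count)]:
    r, p = pearsonr(metric, human_score)
    # The paper reports that these metrics track human judgements
    # (negatively: worse output attracts longer, more frequent fixations).
    print(f"{name}: r={r:.2f}, p={p:.3f}")
```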


Machine Translation | 2005

Methodologies for Measuring the Correlations between Post-Editing Effort and Machine Translatability

Sharon O'Brien

Against the background of a wider research project that aims to investigate the correlation, if any, between post-editing effort and the presence of negative translatability indicators in source texts submitted to Machine Translation (MT), this paper sets out to assess the potential of two methods for measuring the effort involved in post-editing MT output. The first is based on the use of the keyboard-monitoring program Translog; the second on Choice Network Analysis (CNA). The paper reviews relevant research in both machine translatability and MT post-editing, and appraises, in particular, the suitability of think-aloud protocols in assessing post-editing effort. The combined use of Translog and CNA is proposed as a way of overcoming some of the difficulties presented by the use of think-aloud protocols in the current context. Initial results from a study conducted at Dublin City University confirm that triangulating data from Translog and CNA can cast light on the temporal, cognitive and technical aspects of post-editing effort.
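A minimal sketch of how temporal, technical and cognitive effort indicators can be derived from a keystroke log of the kind a keyboard-monitoring program such as Translog produces. The log format, event names and pause threshold below are illustrative assumptions, not Translog's actual file format.

```python
# Sketch: derive post-editing effort indicators from a keystroke log.
# The (timestamp_ms, event) format is an assumption for illustration only.
log = [
    (0, "keydown"), (180, "keydown"), (2600, "keydown"),  # long pause before 3rd key
    (2750, "delete"), (2900, "keydown"), (3100, "keydown"),
]

PAUSE_THRESHOLD_MS = 1000  # pauses above this are often read as cognitive effort

total_time_ms = log[-1][0] - log[0][0]                      # temporal effort
keystrokes = sum(1 for _, ev in log if ev == "keydown")     # technical effort
deletions = sum(1 for _, ev in log if ev == "delete")
long_pauses = [b[0] - a[0] for a, b in zip(log, log[1:])
               if b[0] - a[0] > PAUSE_THRESHOLD_MS]         # cognitive proxy

print(f"time={total_time_ms} ms, keystrokes={keystrokes}, "
      f"deletions={deletions}, long pauses={len(long_pauses)}")
```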


Machine Translation | 2011

Towards predicting post-editing productivity

Sharon O'Brien

Machine translation (MT) quality is generally measured via automatic metrics, producing scores that have no meaning for translators who are required to post-edit MT output or for project managers who have to plan and budget for translation projects. This paper investigates correlations between two such automatic metrics, General Text Matcher (GTM) and Translation Edit Rate (TER), and post-editing productivity. For the purposes of this paper, productivity is measured via processing speed and cognitive measures of effort using eye tracking as a tool. Processing speed, average fixation time and fixation count are found to correlate well with the scores for groups of segments. Segments with high GTM and TER scores require substantially less time and cognitive effort than medium- or low-scoring segments. Future research involving score thresholds and confidence estimation is suggested.
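The sketch below approximates TER as word-level edit distance normalised by reference length (full TER also permits block shifts) and bands segments by score, loosely mirroring the paper's grouping of segments into high-, medium- and low-scoring groups. The thresholds are illustrative, not those used in the study.

```python
# Sketch: a word-level edit-distance approximation of TER (real TER also
# allows block shifts), plus an illustrative banding of segments.
def edit_distance(hyp: list[str], ref: list[str]) -> int:
    """Word-level Levenshtein distance via dynamic programming."""
    m, n = len(hyp), len(ref)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if hyp[i - 1] == ref[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1, d[i][j - 1] + 1, d[i - 1][j - 1] + cost)
    return d[m][n]

def approx_ter(hyp: str, ref: str) -> float:
    """Edits needed to turn hyp into ref, per reference word (lower = better)."""
    h, r = hyp.split(), ref.split()
    return edit_distance(h, r) / max(len(r), 1)

score = approx_ter("the cat sat on mat", "the cat sat on the mat")
# Illustrative thresholds: fewer required edits suggests less post-editing effort.
band = "low-effort" if score < 0.1 else "medium" if score < 0.3 else "high-effort"
print(f"approx TER={score:.2f} -> {band} band")
```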


International Journal of Human-Computer Interaction | 2014

Assessing the Usability of Raw Machine Translated Output: A User-Centered Study Using Eye Tracking

Stephen Doherty; Sharon O'Brien

This article reports on the results of a project that aimed to investigate the usability of raw machine-translated technical support documentation for a commercial online file storage service. Adopting a user-centered approach, the ISO/TR 16982 definition of usability (goal completion, satisfaction, effectiveness, and efficiency) is utilized, and eye-tracking measures that have been shown to be reliable indicators of cognitive effort are applied along with a post-task questionnaire. The study investigated these measures for the original user documentation written in English and in four target languages: Spanish, French, German, and Japanese, all of which were translated using a freely available online statistical machine translation engine. Using native speakers for each language, the study found several significant differences between the source and MT output, a finding that indicates a difference in usability between well-formed content and raw machine-translated content. One target language in particular, Japanese, was found to have a considerably lower usability level when compared with the original English.
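As an illustration of the kind of comparison the study describes, the sketch below tests whether a cognitive-effort measure differs between readers of the English source and readers of raw MT output in one target language, using a non-parametric test. All values are hypothetical, not the study's data.

```python
# Sketch: compare a cognitive-effort measure (fixation count per task) between
# source-text readers and readers of raw MT output. Hypothetical values only;
# the paper found Japanese MT output to have the lowest usability.
from scipy.stats import mannwhitneyu

fixations_english = [210, 195, 230, 205, 220, 198]
fixations_japanese_mt = [310, 295, 350, 320, 305, 340]

stat, p = mannwhitneyu(fixations_english, fixations_japanese_mt,
                       alternative="two-sided")
print(f"U={stat}, p={p:.4f}")  # a small p suggests a usability gap
```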


Machine Translation | 2015

Correlations of perceived post-editing effort with measurements of actual effort

Joss Moorkens; Sharon O'Brien; Igor Antônio Silva; Norma Barbosa de Lima Fonseca; Fabio Alves

Human rating of predicted post-editing effort is a common activity and has been used to train confidence estimation models. However, the correlation between human ratings and actual post-editing effort is under-measured. Moreover, the impact of presenting effort indicators in a post-editing user interface on actual post-editing effort has hardly been researched. In this study, ratings of perceived post-editing effort are tested for correlations with actual temporal, technical and cognitive post-editing effort. In addition, the impact on post-editing effort of the presentation of post-editing effort indicators in the user interface is also tested. The language pair involved in this study is English-Brazilian Portuguese. Our findings, based on a small sample, suggest that there is little agreement between raters for predicted post-editing effort and that the correlations between actual post-editing effort and predicted effort are only moderate, and thus an inefficient basis for MT confidence estimation. Moreover, the presentation of post-editing effort indicators in the user interface appears not to impact on actual post-editing effort.
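A minimal sketch of the kind of correlation the study reports, relating perceived-effort ratings to one measure of actual temporal effort. The rating scale and data values are hypothetical.

```python
# Sketch: correlate perceived post-editing effort with actual temporal effort.
# All data below are hypothetical, not the study's.
from scipy.stats import spearmanr

perceived = [3, 1, 4, 2, 5, 2, 4, 1]            # 1 = little effort predicted, 5 = heavy
seconds_per_segment = [42, 15, 51, 30, 88, 19, 60, 12]

rho, p = spearmanr(perceived, seconds_per_segment)
# The paper reports only moderate correlations of this kind, which it argues
# is a weak basis for training MT confidence estimation models.
print(f"Spearman rho={rho:.2f}, p={p:.3f}")
```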


Machine Translation | 2014

Quality evaluation in community post-editing

Linda Mitchell; Sharon O'Brien; Johann Roturier

Machine translation is increasingly being deployed to translate user-generated content (UGC). In many situations, post-editing is required to ensure that the translations are correct and comprehensible for the users. Post-editing by professional translators is not always feasible in the context of UGC within online communities and so members of such communities are sometimes asked to translate or post-edit content on behalf of the community. How should we measure the quality of UGC that has been post-edited by community members? Is quality evaluation by community members a feasible alternative to professional evaluation techniques? This paper describes the outcomes of three quality evaluation methods for community post-edited content: (1) an error annotation performed by a trained linguist; (2) evaluation of fluency and fidelity by domain specialists; (3) evaluation of fluency by community members. The study finds that there are correlations of evaluation results between the domain specialist evaluation and the community evaluation for content machine translated from English into German in an online technical support community. Interestingly, the community evaluators were more critical in their ratings for fluency than the domain experts. Although the results of the error annotation seem to contradict those obtained in the domain specialist evaluation, a higher number of errors in the error annotation appears to result in lower scores in the domain specialist evaluation. We conclude that, within the context of this evaluation, post-editing by community members is feasible, though with considerable variation across individuals, and that evaluation by the community is also a feasible proposition.
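To illustrate one way of comparing two of the evaluations described above, the sketch below computes agreement between domain-specialist and community fluency ratings for the same segments using Cohen's kappa. The rating scale and values are hypothetical, and kappa is a stand-in here; the paper itself reports correlations between the evaluations rather than kappa.

```python
# Sketch: agreement between domain-specialist and community fluency ratings
# for the same post-edited segments. Hypothetical 1-5 scale and values.
from sklearn.metrics import cohen_kappa_score

specialist = [4, 3, 5, 2, 4, 3, 5, 4, 2, 3]
community  = [3, 3, 4, 2, 3, 2, 4, 3, 2, 3]  # community raters rated more critically

kappa = cohen_kappa_score(specialist, community)
print(f"Cohen's kappa = {kappa:.2f}")
```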


Machine Translation | 2014

Introduction to special issue on post-editing

Sharon O'Brien; Michel Simard

With the increasing deployment of machine translation (MT) in certain sectors of the translation industry, a spotlight has turned to the task of post-editing, which is still essential when high quality translation is required. In a recent survey of 1,000 Language Service Providers (LSPs) (DePalma et al. 2013), 44% reported that they were offering MT and post-editing as a service. At the same time, LSPs appear to struggle with the introduction of post-editing as a service, anecdotally reporting that there is significant translator resistance to the task. There are many reasons for this resistance and an in-depth discussion is beyond the scope of this introduction (for more detailed discussion see, e.g. O’Brien and Moorkens 2014). The increasing demand for post-editing has led to a proliferation of research in the past decade. We have seen the production of several theses (e.g. Tatsumi 2011; Guerberof 2012; De Almeida 2013), journal articles (e.g. García 2010, 2011), edited volumes (O’Brien et al. 2014a), workshop proceedings (O’Brien et al. 2012, 2013, 2014b) as well as many individual conference papers. In addition, EU-funded projects such as CasMaCat and MateCat have contributed to the topic and to technological development (see also Moran et al. 2013). The topics that have received most attention in research to date include productivity, impact on quality, cognitive effort and, to a lesser extent, automatic post-editing and correlations between effort and automatic quality scores. The research has been conducted by scholars


Perspectives: Studies in Translatology | 2016

Language, culture, and translation in disaster ICT: an ecosystemic model of understanding

Patrick Cadwell; Sharon O'Brien

This paper examines how the roles of language, culture, and translation could be modelled within a framework of the Information and Communication Technology (ICT) used in disasters. It is based on empirical data drawn from a case study of foreign nationals resident in Japan at the time of the 2011 Great East Japan Earthquake. The case study revealed that the ICT used in the 2011 disaster was diverse; that interesting relationships existed between the forms of ICT used; that the use of this ICT varied across time, space, and user; and that translation in the disaster was a highly contextual process of written and oral interlingual and intercultural transfer carried out mostly by volunteers. These findings have been combined in the paper with concepts taken from ecosystems theory in the study of ecology to propose a model of an ICT ecosystem in a disaster setting. The model describes and explains the forces and factors that come together to create the environment in which ICT is used by human actors during a disaster; namely information circulation, power, network capacity, infrastructure, location, income, language, and culture. The model also explains how translation can be conceptualised as a subsidy to assist the central force driving the system.


Translation & Interpreting | 2017

Testing interaction with a Mobile MT post-editing app

Olga Torres-Hostench; Joss Moorkens; Sharon O'Brien; Joris Vreeke



Machine Translation | 2011

Yves Gambier and Luc Van Doorslaer (eds): Handbook of Translation Studies

Sharon O'Brien

The Handbook of Translation Studies (vol 1) aims to disseminate knowledge about translation and interpreting studies. According to its editors, Yves Gambier and Luc Van Doorslaer, both very well respected scholars in the field of translation and interpreting studies, it is an academic tool, but it is also aimed at a broader audience, such as those with professional or personal interest in translation, interpreting, localisation etc. The publication of the Handbook is seen by the editors as testimony to the ‘institutionalisation’ of the discipline. As the editors themselves acknowledge in their introduction, this is not the first of its kind. For example, the Oxford Handbook of Translation Studies (Malmkjaer and Windle 2011) covers similar terrain, as does the Routledge Encyclopedia of Translation Studies (Baker and Saldanha 2011), although both are organised somewhat differently. The Oxford Handbook of Translation Studies targets the same audience as the Handbook under review here and seeks to cover all major concepts and theoretical angles within translation studies, including technological topics. The main difference is the way in which it is organised: it is first divided into two parts, with part 1 focusing on the history of translation theory and part 2 on central concepts, including machine translation and electronic tools for translators. In addition, the articles tend to be longer, on average, than those in the Gambier and Van Doorslaer volume. The tenor and degree of detail, however, are the same in both handbooks. The Routledge Encyclopedia of Translation Studies (second edition) is also organised into two parts. Part 1 not only seeks to cover the central concepts in translation studies, but to open the discussion out into areas that might be seen as also having a bearing on translation, e.g. publishing strategies for translated works or the translation of news. Computer-aided translation and machine translation form two separate entries in Part 1.

Collaboration


Dive into Sharon O'Brien's collaborations.

Top Co-Authors

Stephen Doherty

University of New South Wales

Fabio Alves

Universidade Federal de Minas Gerais