Sanja Seljan
University of Zagreb
Publications
Featured research published by Sanja Seljan.
Information Technology Interfaces | 2008
Ksenija Klasnić; Sanja Seljan; Hrvoje Stančić
Learning and teaching are considered to be the main activities in higher education. The environment in which these activities take place is rapidly changing and is becoming increasingly oriented towards teaching with the help of new technologies, namely e-learning systems, relying not only on technical skills but also on motivation and contextualisation. In this paper, different views of the quality of e-learning are presented, and research regarding the quality of the Omega e-learning system (Moodle), conducted at the Faculty of Humanities and Social Sciences, University of Zagreb, is presented in relation to different European policies.
Information Technology Interfaces | 2006
Sanja Seljan; D. Pavuna
The necessity for machine-assisted translation (MAT) has become obvious in multinational companies, professional societies, government agencies, the EU, etc. The amount of text is growing rapidly and the need for fast translation is increasing. To answer the needs of the information society, business, education, science and culture, it is necessary, on the one hand, to give an overview of MAT tools used in the EU, together with translation needs and costs, and, on the other hand, to survey the situation in Croatia in order to proceed with further concrete activities regarding organizational, professional and educational changes, the creation of language resources and joining European projects and standards.
World Conference on Information Systems and Technologies | 2015
Sanja Seljan; Marko Tucaković; Ivan Dunđer
This paper presents the results of a human evaluation of machine-translated texts for one pair of not closely related languages, English-Croatian, and one pair of closely related languages, Russian-Croatian. 400 sentences from the domain of tourist guides were analysed, i.e. 100 sentences for each language pair and for each of two online machine translation services, Google Translate and Yandex.Translate. Human evaluation is made with regard to the criteria of fluency and adequacy. In order to measure internal consistency, Cronbach's alpha is calculated. Error analysis is made for several categories: untranslated/omitted words, surplus words, morphological errors/wrong word endings, lexical errors/wrong translations, syntactic errors/wrong word order and punctuation errors. At the end of the paper, conclusions and suggestions for further research are given.
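The internal-consistency measure named in this abstract, Cronbach's alpha, can be computed directly from the evaluators' rating matrix. The sketch below is a generic illustration, not taken from the paper; the rating values and names are hypothetical.

```python
import numpy as np

def cronbach_alpha(ratings: np.ndarray) -> float:
    """Cronbach's alpha for a ratings matrix of shape (sentences, raters).

    alpha = k / (k - 1) * (1 - sum of per-rater variances / variance of totals)
    """
    k = ratings.shape[1]                               # number of raters
    rater_variances = ratings.var(axis=0, ddof=1)      # variance of each rater's scores
    total_variance = ratings.sum(axis=1).var(ddof=1)   # variance of per-sentence totals
    return k / (k - 1) * (1 - rater_variances.sum() / total_variance)

# Hypothetical fluency ratings (1-5) for six sentences by three evaluators.
ratings = np.array([
    [4, 5, 4],
    [3, 3, 2],
    [5, 5, 5],
    [2, 1, 2],
    [4, 4, 3],
    [3, 4, 3],
])
print(round(cronbach_alpha(ratings), 3))
```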
Archive | 2010
Ksenija Klasnić; Jadranka Lasić-Lazić; Sanja Seljan
As e-learning has become an increasingly important issue in educational systems over the last several years, a considerable number of generic standards, quality guidelines and frameworks have been published relating to improved efficiency and quality of e-learning. While early initiatives concentrated on the functional understanding and technical skills of ICT use, they now take more into consideration motivation, satisfaction and contextualization, which are reflected in the quality of e-learning. In the first part of the paper, different views of the quality of an integrated e-learning system are considered. In the second part, the research results regarding the quality of the integrated e-learning system (Moodle) are presented and discussed in relation to different European policies. The survey was conducted on the Croatian version of Moodle (Omega) at the Faculty of Humanities and Social Sciences, University of Zagreb.
Iberian Conference on Information Systems and Technologies | 2015
Sanja Seljan; Ivan Dunđer
Automatic quality evaluation of machine translation systems has become an important issue in the field of natural language processing, due to the increased interest and needs of industry and everyday users. The development of online machine translation systems is also important for less-resourced languages, as such systems enable basic information transfer and communication. Although the quality of free online automatic translation systems is not perfect, it is important to assure acceptable quality. As human evaluation is time-consuming, expensive and subjective, automatic quality evaluation metrics try to approach and approximate human evaluation as much as possible. In this paper, several automatic quality metrics are utilised in order to assess the quality of a specific machine-translated text. Namely, the research is performed on the sociological-philosophical-spiritual domain, resulting from the digitisation of a scientific publication written in Croatian and English. The quality evaluation results are discussed and further analysis is proposed.
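As an illustration of the kind of automatic quality metric discussed in this abstract, the sketch below computes sentence-level BLEU with NLTK against a single reference translation. It is a generic example under assumed inputs, not the evaluation setup of the paper; the sentences are invented.

```python
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

# One hypothetical machine translation and its single human reference.
hypothesis = "the hotel is located near the old town square".split()
reference = "the hotel lies close to the old town square".split()

# Smoothing avoids zero scores when a higher-order n-gram has no match.
smooth = SmoothingFunction().method1
score = sentence_bleu([reference], hypothesis,
                      weights=(0.25, 0.25, 0.25, 0.25),
                      smoothing_function=smooth)
print(f"BLEU: {score:.3f}")
```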
5th International Conference The Future of Information Sciences - INFuture2015: e-Institutions – Openness, Accessibility, and Preservation | 2015
Sanja Seljan; Ksenija Klasnić; Mara Stojanac; Barbara Pešorda; Nives Mikelić Preradović
Information access – presented in the proper language, in an understandable way, at the right time and in the right place – can be of considerable importance. Information and communication technology, encompassing also human language technologies, can play an important role in information transfer to the specific user. Translation technology, along with summarization technology, has opened new possibilities and perspectives, at the same time requiring critical judgement in information analysis. The main purpose of this research is to present the impact of text summarization and online machine translation tools on information transfer. The research was performed on texts taken from online newspapers in five domains (politics, news, sport, film and gastronomy) in English, German and Russian. A total of N=240 evaluations, performed by the same three evaluators, were analysed. Three types of assignments were made. The first assignment was to evaluate machine-translated sentences at the sentence level for the three language pairs (English-Croatian, German-Croatian and Russian-Croatian). In the second task, a similar evaluation was performed, but at the whole-text level. In the third assignment, related to information transfer, the evaluators were asked to evaluate the overall quality of texts processed in a pipelined process (online summarization followed by online machine translation) for English and German. Assessment was based on finding answers to the following questions: who, what, when, where, and how? The results were analysed by ANOVA, t-test and binary logistic regression.
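The statistical tests named at the end of this abstract (one-way ANOVA and the t-test) can be run on evaluation scores with SciPy. The sketch below uses invented score lists purely to show the calls and does not reflect the paper's data.

```python
from scipy import stats

# Hypothetical adequacy scores (1-5) for the three language pairs.
en_hr = [4, 5, 3, 4, 4, 5, 3, 4]
de_hr = [3, 4, 3, 3, 4, 2, 3, 3]
ru_hr = [2, 3, 2, 3, 2, 3, 2, 2]

# One-way ANOVA: do mean scores differ across the three language pairs?
f_stat, p_anova = stats.f_oneway(en_hr, de_hr, ru_hr)

# Independent-samples t-test for a single pair of groups.
t_stat, p_ttest = stats.ttest_ind(en_hr, de_hr)

print(f"ANOVA:  F = {f_stat:.2f}, p = {p_anova:.4f}")
print(f"t-test: t = {t_stat:.2f}, p = {p_ttest:.4f}")
```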
International Conference on Computational Linguistics | 2013
Marija Brkić; Sanja Seljan; Tomislav Vičić
This paper presents work on the manual and automatic evaluation of the online available machine translation (MT) service Google Translate, for the English-Croatian language pair in the legislation and general domains. The experimental study is conducted on a test set of 200 sentences in total. Human evaluation is performed by native speakers, using the criteria of fluency and adequacy, and it is enriched by error analysis. Automatic evaluation is performed on a single reference set by using the following metrics: BLEU, NIST, F-measure and WER. The influence of lowercasing, tokenization and punctuation is discussed. Pearson's correlation between automatic metrics is given, as well as correlation between the two criteria, fluency and adequacy, and automatic metrics.
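WER, one of the automatic metrics listed in this abstract, is the word-level edit distance between hypothesis and reference, normalised by the reference length. A minimal implementation is sketched below as a generic illustration, not the exact scoring tool used in the study.

```python
def word_error_rate(hypothesis: str, reference: str) -> float:
    """WER = (substitutions + insertions + deletions) / reference length."""
    hyp, ref = hypothesis.split(), reference.split()
    # Dynamic-programming table for the Levenshtein distance over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution or match
    return d[len(ref)][len(hyp)] / len(ref)

# Hypothetical sentence pair: one missing word against the reference.
print(round(word_error_rate("court of justice of EU",
                            "court of justice of the EU"), 3))
```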
Lecture Notes in Computer Science | 2013
Marija Brkić; Sanja Seljan; Tomislav Vičić
Archive | 2011
Sanja Seljan; Marija Brkić; Vlasta Kučiš
29th International Convention MIPRO 2006: Computers in Education | 2006
Sanja Seljan; Mihaela Banek Zorica; Sonja Špiranec; Jadranka Lasić-Lazić