Publications


Featured research published by Ramon Ziai.


International Journal of Continuing Engineering Education and Life-Long Learning | 2011

Integrating parallel analysis modules to evaluate the meaning of answers to reading comprehension questions

Detmar Meurers; Ramon Ziai; Niels Ott; Stacey Bailey

Contextualised, meaning-based interaction in the foreign language is widely recognised as crucial for second language acquisition. Correspondingly, current exercises in foreign language teaching generally require students to manipulate both form and meaning. For intelligent language tutoring systems to support such activities, they thus must be able to evaluate the appropriateness of the meaning of a learner response for a given exercise. We discuss such a content-assessment approach, focusing on reading comprehension exercises. We pursue the idea that a range of simultaneously available representations at different levels of complexity and linguistic abstraction provide a good empirical basis for content assessment. We show how an annotation-based NLP architecture implementing this idea can be realised and that it successfully performs on a corpus of authentic learner answers to reading comprehension questions. To support comparison and sustainable development on content assessment, we also define a general exchange format for such exercise data.
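As a rough illustration of the idea of drawing on several representation levels for content assessment, the sketch below compares a learner answer to a target answer at a surface-token level and a case-normalized level and reports the overlap at each. It is not the authors' system; the tokenizer, the choice of levels, and the function names are simplifications invented for this example.

    # Illustrative sketch only (not the authors' implementation): compare a
    # learner answer to a target answer at two representation levels, surface
    # tokens and case-normalized tokens, and report the overlap at each level.
    # Function and variable names are invented for this example.

    def tokens(text):
        """Naive tokenizer; a real system would use proper NLP tools."""
        return text.replace(",", " ").replace(".", " ").split()

    def overlap(learner_units, target_units):
        """Fraction of target-side units also found in the learner answer."""
        target = set(target_units)
        return len(set(learner_units) & target) / len(target) if target else 0.0

    def assess(learner_answer, target_answer):
        # Level 1: exact surface tokens; Level 2: case-normalized tokens as a
        # cheap stand-in for lemmas or other linguistic abstractions.
        levels = {
            "surface": (tokens(learner_answer), tokens(target_answer)),
            "normalized": ([t.lower() for t in tokens(learner_answer)],
                           [t.lower() for t in tokens(target_answer)]),
        }
        return {name: overlap(a, b) for name, (a, b) in levels.items()}

    print(assess("The protagonist travels to Berlin.",
                 "the protagonist goes to Berlin"))
    # {'surface': 0.6, 'normalized': 0.8}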


Computer Assisted Language Learning | 2011

Analyzing Learner Language: Towards a Flexible Natural Language Processing Architecture for Intelligent Language Tutors

Luiz Amaral; Detmar Meurers; Ramon Ziai

Intelligent language tutoring systems (ILTS) typically analyze learner input to diagnose learner language properties and provide individualized feedback. Despite a long history of ILTS research, such systems are virtually absent from real-life foreign language teaching (FLT). Taking a step toward more closely linking ILTS research to real-life FLT, in this article we investigate the connection between FLT activity design and the system architecture of an ILT system. We argue that a demand-driven, annotation-based natural language processing (NLP) architecture is well-suited to handle the demands posed by the heterogeneous learner input which results when supporting a wider range of FLT activity types. We illustrate how the unstructured information management architecture (UIMA) can be used in an ILTS, thereby connecting the specific needs of activities in foreign language teaching to the current research and development of NLP architectures in general. Making the conceptual issues concrete, we discuss the design and realization of a UIMA-based reimplementation of the NLP in the TAGARELA system, an intelligent web-based tutoring system supporting the teaching and learning of Portuguese.
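The central architectural idea, analysis modules that add stand-off annotations to a shared document representation and are run only when their output is actually needed, can be illustrated with a small sketch. This is not UIMA and not the TAGARELA reimplementation; the classes and module names below are invented for illustration.

    # Conceptual sketch of a demand-driven, annotation-based pipeline (invented
    # names; not UIMA and not the actual TAGARELA/ILTS code). Each analysis
    # module declares what it produces and what it needs; annotations are stored
    # stand-off in a shared document object and modules run only on demand.

    class Document:
        def __init__(self, text):
            self.text = text
            self.annotations = {}          # layer name -> annotation data

    class Module:
        produces, requires = None, ()
        def run(self, doc): ...

    class Tokenizer(Module):
        produces = "tokens"
        def run(self, doc):
            doc.annotations["tokens"] = doc.text.split()

    class TokenCounter(Module):
        produces = "token_count"
        requires = ("tokens",)
        def run(self, doc):
            doc.annotations["token_count"] = len(doc.annotations["tokens"])

    def demand(doc, layer, modules):
        """Run only the modules needed to produce the requested layer."""
        if layer in doc.annotations:
            return doc.annotations[layer]
        module = next(m for m in modules if m.produces == layer)
        for needed in module.requires:
            demand(doc, needed, modules)
        module.run(doc)
        return doc.annotations[layer]

    doc = Document("O aluno escreveu uma resposta curta")
    print(demand(doc, "token_count", [Tokenizer(), TokenCounter()]))  # -> 6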


Linguistic Annotation Workshop | 2014

Focus Annotation in Reading Comprehension Data

Ramon Ziai; Detmar Meurers

When characterizing the information structure of sentences, the so-called focus identifies the part of a sentence addressing the current question under discussion in the discourse. While this notion is precisely defined in formal semantics and potentially very useful in theoretical and practical terms, it has turned out to be difficult to reliably annotate focus in corpus data. We present a new focus annotation effort designed to overcome this problem. On the one hand, it is based on a task-based corpus providing more explicit context. The annotation study is based on the CREG corpus (Ott et al., 2012), which consists of answers to explicitly given reading comprehension questions. On the other hand, we operationalize focus annotation as an incremental process including several substeps which provide guidance, such as explicit answer typing. We evaluate the focus annotation both intrinsically by calculating agreement between annotators and extrinsically by showing that the focus information substantially improves the automatic meaning assessment of answers in the CoMiC system (Meurers et al., 2011).


North American Chapter of the Association for Computational Linguistics | 2015

CoMiC: Adapting a Short Answer Assessment System for Answer Selection

Björn Rudzewitz; Ramon Ziai

Open forum threads exhibit a great variability in the quality and quantity of the answers they attract, making it difficult to manually moderate and separate relevant from irrelevant content. The goal of SemEval 2015 Task 3 (Subtask A, English) is to build systems that automatically distinguish between relevant and irrelevant content in forum threads. We extend a short answer assessment system to build relations between forum questions and answers with respect to similarity, question type, and answer content. The features are used in a sequence classifier to account for the conversation character of threads. The performance of this approach is modest in comparison to the other task participants and also to the performance the system usually reaches in short answer assessment. However, the new features implemented for this task are a first step in developing more fine-grained question-answer features and identifying relevant answers.
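As a toy illustration of combining question-answer overlap features with a sequential pass over a forum thread, consider the sketch below. The features, threshold, and greedy rule are invented for this example and do not reproduce the system described above.

    # Toy sketch: a lexical-overlap feature between question and answer, plus a
    # simple sequential component (the label of the previous answer in the
    # thread). Invented features and rules; not the actual SemEval system.

    def overlap(question, answer):
        q, a = set(question.lower().split()), set(answer.lower().split())
        return len(q & a) / len(q) if q else 0.0

    def classify_thread(question, answers, threshold=0.2):
        labels, prev_relevant = [], False
        for answer in answers:
            score = overlap(question, answer)
            # Sequence component: a relevant previous answer slightly lowers
            # the bar, mimicking the conversational continuity of threads.
            relevant = score >= (threshold - 0.1 if prev_relevant else threshold)
            labels.append("relevant" if relevant else "irrelevant")
            prev_relevant = relevant
        return labels

    question = "How do I renew my residence permit in Qatar?"
    answers = ["You renew the permit at the immigration office.",
               "It costs around 100 QR to renew it there.",
               "Nice weather today!"]
    print(classify_thread(question, answers))
    # ['relevant', 'relevant', 'irrelevant']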


Linguistic Annotation Workshop | 2016

Focus Annotation of Task-based Data: Establishing the Quality of Crowd Annotation

Kordula De Kuthy; Ramon Ziai; Detmar Meurers

We explore the annotation of information structure in German and compare the quality of expert annotation with crowdsourced annotation, taking into account the cost of reaching crowd consensus. Concretely, we discuss a crowdsourcing effort annotating focus in a task-based corpus of German containing reading comprehension questions and answers. Against the backdrop of a gold standard reference resulting from adjudicated expert annotation, we evaluate a crowdsourcing experiment using majority voting to determine a baseline performance. To refine the crowdsourcing setup, we introduce the Consensus Cost as a measure of agreement within the crowd. We investigate the usefulness of Consensus Cost as a measure of crowd annotation quality both intrinsically, in relation to the expert gold standard, and extrinsically, by integrating the focus annotation into a Short Answer Assessment system while taking the Consensus Cost into account. We find that low Consensus Cost in crowdsourcing indicates high quality, though high cost does not necessarily indicate low accuracy but rather increased variability. Overall, taking Consensus Cost into account improves both intrinsic and extrinsic evaluation measures.
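The abstract does not spell out how Consensus Cost is computed, so the sketch below only illustrates the general idea of majority voting combined with a within-crowd disagreement measure; the fraction-of-dissenting-votes formula is an assumption made for this example, not the published definition.

    from collections import Counter

    # Rough illustration of majority voting over crowd labels together with a
    # simple disagreement measure. NOTE: the paper's actual Consensus Cost is
    # not defined in the abstract; the fraction-of-dissenting-votes measure
    # below is an assumption made for this sketch, not the published formula.

    def majority_and_cost(votes):
        counts = Counter(votes)
        label, support = counts.most_common(1)[0]
        cost = 1.0 - support / len(votes)   # 0.0 = unanimous, higher = more disagreement
        return label, cost

    print(majority_and_cost(["focus", "focus", "focus", "background"]))
    # ('focus', 0.25) -- low cost, the crowd largely agrees
    print(majority_and_cost(["focus", "background", "focus", "background", "focus"]))
    # ('focus', 0.4) -- higher cost, more variability in the crowd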


Joint Conference on Lexical and Computational Semantics | 2016

Approximating Givenness in Content Assessment through Distributional Semantics

Ramon Ziai; Kordula De Kuthy; Detmar Meurers

Givenness (Schwarzschild, 1999) is one of the central notions in the formal pragmatic literature on the organization of discourse. In this paper, we explore where distributional semantics can help close the gap between the linguistic insights into the formal pragmatic notion of Givenness and its implementation in computational linguistics. As experimental testbed, we focus on short answer assessment, where the goal is to assess whether a student response correctly answers the provided reading comprehension question. Current approaches implement only a very basic, surface-based perspective on Givenness: a word of the answer that appears as such in the question counts as GIVEN. We show that an approach which approximates Givenness using distributional semantics, checking whether a word in the answer is similar enough to a word in the context to count as GIVEN, is quantitatively more successful. It also supports interesting qualitative insights into the data and into the limitations of a basic distributional semantic approach that identifies Givenness at the lexical level.
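The distributional check described above, a word counts as GIVEN if it is similar enough to some word in the question context, can be sketched with cosine similarity over word vectors. The toy vectors, similarity threshold, and function names are placeholders and not the embeddings or settings used in the paper.

    import math

    # Sketch of Givenness approximation via distributional similarity (toy
    # vectors and threshold invented for illustration; the paper's embeddings
    # and settings are not specified in the abstract). The surface baseline
    # marks a word GIVEN only if it literally occurs in the question; the
    # distributional variant also accepts sufficiently similar words.

    VECTORS = {            # toy 3-dimensional "embeddings" for illustration only
        "city":    [0.9, 0.1, 0.0],
        "town":    [0.8, 0.2, 0.1],
        "visited": [0.1, 0.9, 0.2],
    }

    def cosine(u, v):
        dot = sum(a * b for a, b in zip(u, v))
        norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
        return dot / norm if norm else 0.0

    def given_surface(word, question_words):
        return word in question_words

    def given_distributional(word, question_words, threshold=0.9):
        if word in question_words:
            return True
        return any(cosine(VECTORS[word], VECTORS[q]) >= threshold
                   for q in question_words
                   if word in VECTORS and q in VECTORS)

    question = ["which", "city", "was", "visited"]
    print(given_surface("town", question))          # False: no literal match
    print(given_distributional("town", question))   # True: "town" ~ "city"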


Workshop on Innovative Use of NLP for Building Educational Applications | 2010

Enhancing Authentic Web Pages for Language Learners

Detmar Meurers; Ramon Ziai; Luiz Amaral; Adriane Boyd; Aleksandar Dimitrov; Vanessa Metcalf; Niels Ott


Empirical Methods in Natural Language Processing | 2011

Evaluating Answers to Reading Comprehension Questions in Context: Results for German and the Role of Information Structure

Detmar Meurers; Ramon Ziai; Niels Ott; Janina Kopp


Archive | 2010

Evaluating Dependency Parsing Performance on German Learner Language

Niels Ott; Ramon Ziai


Archive | 2012

Creation and Analysis of a Reading Comprehension Exercise Corpus: Towards Evaluating Meaning in Context

Niels Ott; Ramon Ziai; Walt Detmar Meurers

Collaboration


Dive into Ramon Ziai's collaborations.

Top Co-Authors

Niels Ott

University of Tübingen

Luiz Amaral

University of Massachusetts Amherst
