Publications


Featured research published by Judith Eckle-Kohler.


Empirical Methods in Natural Language Processing | 2015

On the Role of Discourse Markers for Discriminating Claims and Premises in Argumentative Discourse

Judith Eckle-Kohler; Roland Kluge; Iryna Gurevych

This paper presents a study on the role of discourse markers in argumentative discourse. We annotated a German corpus with arguments according to the common claim-premise model of argumentation and performed various statistical analyses regarding the discriminative nature of discourse markers for claims and premises. Our experiments show that particular semantic groups of discourse markers are indicative of either claims or premises and constitute highly predictive features for discriminating between them.
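
The following is a minimal sketch of the kind of setup the abstract describes: binary discourse-marker features feeding a classifier that separates claims from premises. The marker groups, example sentences, and the logistic-regression classifier are illustrative assumptions, not the feature groups or model used in the paper.

```python
# Minimal sketch: discourse-marker presence features for claim/premise
# classification. The marker lists and example sentences are illustrative
# assumptions, not the semantic groups studied in the paper.
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression

CAUSAL_MARKERS = {"weil", "denn", "da"}           # hypothetical "premise-like" group
CONTRAST_MARKERS = {"aber", "jedoch", "dennoch"}  # hypothetical "claim-like" group

def marker_features(sentence: str) -> dict:
    tokens = set(sentence.lower().split())
    return {
        "has_causal": bool(tokens & CAUSAL_MARKERS),
        "has_contrast": bool(tokens & CONTRAST_MARKERS),
    }

# Toy training data: (sentence, label) pairs following the claim-premise model.
data = [
    ("Wir sollten das Projekt stoppen , denn es kostet zu viel", "premise"),
    ("Das Projekt ist dennoch die richtige Entscheidung", "claim"),
    ("Weil die Zahlen schlecht sind , drohen Verluste", "premise"),
    ("Aber genau das macht den Vorschlag wertvoll", "claim"),
]
vec = DictVectorizer()
X = vec.fit_transform([marker_features(s) for s, _ in data])
y = [label for _, label in data]

clf = LogisticRegression().fit(X, y)
print(clf.predict(vec.transform([marker_features("Denn das Budget reicht nicht")])))
```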


North American Chapter of the Association for Computational Linguistics | 2015

Linking the Thoughts: Analysis of Argumentation Structures in Scientific Publications

Christian Kirschner; Judith Eckle-Kohler; Iryna Gurevych

This paper presents the results of an annotation study focused on the fine-grained analysis of argumentation structures in scientific publications. Our new annotation scheme specifies four types of binary argumentative relations between sentences, resulting in the representation of arguments as small graph structures. We developed an annotation tool that supports the annotation of such graphs and carried out an annotation study with four annotators on 24 scientific articles from the domain of educational research. For calculating the inter-annotator agreement, we adapted existing measures and developed a novel graph-based agreement measure which reflects the semantic similarity of different annotation graphs.
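
The paper's own agreement measure is not reproduced here; the sketch below is a simplified stand-in that scores two annotation graphs by the overlap of their labelled relation edges, only to make the underlying graph representation concrete.

```python
# Simplified stand-in for a graph-based agreement score: annotation graphs are
# sets of labelled argumentative relations (source sentence, target sentence,
# relation type). This edge-overlap (Jaccard) measure only illustrates the data
# structure; the measure developed in the paper additionally reflects semantic
# similarity between non-identical graphs.
def edge_agreement(graph_a, graph_b):
    edges_a, edges_b = set(graph_a), set(graph_b)
    if not edges_a and not edges_b:
        return 1.0
    return len(edges_a & edges_b) / len(edges_a | edges_b)

annotator_1 = {(2, 1, "support"), (3, 1, "support"), (4, 3, "attack")}
annotator_2 = {(2, 1, "support"), (4, 3, "support")}
print(edge_agreement(annotator_1, annotator_2))  # 0.25
```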


Sprachwissenschaft | 2015

lemonUby – A large, interlinked, syntactically-rich lexical resource for ontologies

Judith Eckle-Kohler; John P. McCrae; Christian Chiarcos

We introduce lemonUby, a new lexical resource integrated in the Semantic Web, which is the result of converting data extracted from the existing large-scale linked lexical resource UBY to the lemon lexicon model. The following data from UBY were converted: WordNet, FrameNet, VerbNet, English and German Wiktionary, the English and German entries of OmegaWiki, as well as links between pairs of these lexicons at the word sense level (links between VerbNet and FrameNet, VerbNet and WordNet, WordNet and FrameNet, WordNet and Wiktionary, WordNet and German OmegaWiki). We link lemonUby to other lexical resources and linguistic terminology repositories in the Linguistic Linked Open Data cloud and outline possible applications of this new dataset.
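
As a rough illustration of the target representation, the sketch below builds a single lemon-style lexical entry with rdflib. The namespace URI, property names, and identifiers are assumptions based on the lemon core model and should be checked against the published lemonUby data.

```python
# Minimal sketch of a lemon-style lexical entry built with rdflib. The lemon
# namespace and property names are assumptions based on the lemon core model
# and should be checked against the published lemonUby dataset.
from rdflib import Graph, Literal, Namespace, RDF

LEMON = Namespace("http://lemon-model.net/lemon#")
EX = Namespace("http://example.org/lemonUby/")   # hypothetical base URI

g = Graph()
g.bind("lemon", LEMON)

entry = EX["WN_LexicalEntry_run_v"]              # hypothetical identifiers
form = EX["WN_Form_run_v"]
sense = EX["WN_Sense_run_v_1"]

g.add((entry, RDF.type, LEMON.LexicalEntry))
g.add((entry, LEMON.canonicalForm, form))
g.add((form, LEMON.writtenRep, Literal("run", lang="en")))
g.add((entry, LEMON.sense, sense))   # sense node, linkable to senses of other lexicons

print(g.serialize(format="turtle"))
```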


Conference of the European Chapter of the Association for Computational Linguistics | 2014

Automated Verb Sense Labelling Based on Linked Lexical Resources

Kostadin Cholakov; Judith Eckle-Kohler; Iryna Gurevych

We present a novel approach for creating sense-annotated corpora automatically. Our approach employs shallow syntactic-semantic patterns derived from linked lexical resources to automatically identify instances of word senses in text corpora. We evaluate our labelling method intrinsically on SemCor and extrinsically by using automatically labelled corpus text to train a classifier for verb sense disambiguation. Testing this classifier on verbs from the English MASC corpus and on verbs from the Senseval-3 all-words disambiguation task shows that it matches the performance of a classifier trained on manually annotated data.
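
A toy sketch of rule-style sense labelling with shallow patterns appears below; the patterns and sense keys are invented for illustration and far simpler than those derived from linked lexical resources in the paper.

```python
# Illustrative shallow pattern matcher for verb sense labelling. Each pattern
# pairs a verb with a coarse cue from its context (here just a single keyword);
# the real patterns are derived automatically from linked lexical resources and
# are considerably richer than this.
PATTERNS = {
    ("run", "for"): "run%campaign",      # hypothetical sense keys
    ("run", "marathon"): "run%move-fast",
    ("run", "company"): "run%operate",
}

def label_verb_sense(verb: str, context_tokens: list[str]) -> str | None:
    for cue in context_tokens:
        sense = PATTERNS.get((verb, cue))
        if sense is not None:
            return sense
    return None  # abstain when no pattern fires (keeps precision high)

print(label_verb_sense("run", ["she", "will", "run", "the", "company"]))
```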


Meeting of the Association for Computational Linguistics | 2016

Optimizing an Approximation of ROUGE - a Problem-Reduction Approach to Extractive Multi-Document Summarization

Maxime Peyrard; Judith Eckle-Kohler

This paper presents a problem-reduction approach to extractive multi-document summarization: we propose a reduction to the problem of scoring individual sentences with their ROUGE scores based on supervised learning. For the summarization, we solve an optimization problem where the ROUGE score of the selected summary sentences is maximized. To this end, we derive an approximation of the ROUGE-N score of a set of sentences, and define a principled discrete optimization problem for sentence selection. Mathematical and empirical evidence suggests that the sentence selection step is solved almost exactly, thus reducing the problem to the sentence scoring task. We perform a detailed experimental evaluation on two DUC datasets to demonstrate the validity of our approach.
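
A compressed sketch of the scoring-then-selection recipe, under simplifying assumptions: per-sentence scores stand in for supervised ROUGE-contribution estimates, and a greedy budgeted selection stands in for the principled discrete optimization defined in the paper.

```python
# Sketch of scoring-then-selection for extractive summarization. The per-sentence
# scores would come from a supervised model trained to approximate ROUGE
# contributions; here they are given directly. Selection greedily adds
# high-scoring, non-redundant sentences under a length budget, a simplified
# stand-in for the discrete optimization problem defined in the paper.
def select_summary(sentences, scores, budget_words=20):
    chosen, covered, used = [], set(), 0
    for i in sorted(range(len(sentences)), key=lambda i: -scores[i]):
        words = sentences[i].split()
        new_content = set(words) - covered   # discount content already covered
        if used + len(words) <= budget_words and new_content:
            chosen.append(i)
            covered |= set(words)
            used += len(words)
    return [sentences[i] for i in sorted(chosen)]

sents = [
    "the commission approved the new climate package",
    "the climate package was approved on tuesday",
    "lawmakers debated unrelated budget items",
]
scores = [0.9, 0.7, 0.2]   # stand-ins for predicted ROUGE contributions
print(select_summary(sents, scores, budget_words=8))
```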


Computer Speech & Language | 2008

Automatic recognition of German news focusing on future-directed beliefs and intentions

Judith Eckle-Kohler; Michael Kohler; Jens Mehnert

We consider the classification of German news stories as either focusing on future-directed beliefs and intentions or lacking such a focus. The method proposed in this article requires only a small set of labeled training data; instead, we introduce German clues for the automatic identification of future-orientation, which are used to automatically label Reuters news stories. We describe the development of a high-precision procedure for automatic labeling in a bootstrapping fashion: a first version of the labeling procedure uses the absence of clues for future-directedness as an indicator of non-future-directedness and is able to automatically label about one-third of the Reuters news stories with high precision. A perceptron is then applied to the automatically labeled news stories in order to semi-automatically acquire an additional set of clues for non-future-directedness. The second version of the labeling procedure additionally uses these clues and achieves markedly improved recall; it can even be extended by a guessing step to perform classification with an error of 22.5%. We also investigate another way to increase recall, namely using the automatically labeled news stories as training data for statistical classifiers. Three different types of statistical classifiers are applied in order to determine which classifier is best suited to this text classification task. The best statistical classifier, combined with the results of the improved automatic labeling, is able to recognize the two classes of news stories with an error of 19%.
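
A compressed sketch of the bootstrapping idea, assuming tiny invented clue lists: clue presence or absence labels stories automatically, and a perceptron is then trained on the automatically labelled stories. The German clue sets and the full multi-stage procedure in the article are considerably larger.

```python
# Bootstrapping sketch: clue-based automatic labelling followed by training a
# perceptron on the automatically labelled stories. The clue list is a tiny
# invented stand-in for the German clue sets described in the article.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import Perceptron

FUTURE_CLUES = {"plant", "wird", "beabsichtigt"}   # hypothetical future-directed clues

def auto_label(story: str) -> str:
    tokens = set(story.lower().split())
    # First-version rule: clue presence -> future-directed,
    # clue absence -> assumed non-future-directed.
    return "future" if tokens & FUTURE_CLUES else "non-future"

stories = [
    "der konzern plant eine neue fabrik in sachsen",
    "das unternehmen wird die produktion ausweiten",
    "der umsatz sank im letzten quartal deutlich",
    "die aktie verlor gestern drei prozent",
]
labels = [auto_label(s) for s in stories]

# In the article a perceptron applied to automatically labelled stories is used
# to acquire further clues; here it simply illustrates training on the
# automatically labelled data.
vec = CountVectorizer()
clf = Perceptron(max_iter=100).fit(vec.fit_transform(stories), labels)
print(clf.predict(vec.transform(["der konzern wird expandieren"])))
```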


Meeting of the Association for Computational Linguistics | 2016

EmpiriST: AIPHES - Robust Tokenization and POS-Tagging for Different Genres

Steffen Remus; Gerold Hintz; Chris Biemann; Christian M. Meyer; Darina Benikova; Judith Eckle-Kohler; Margot Mieskes; Thomas Arnold

We present the system used for the AIPHES team submission in the context of the EmpiriST shared task on “Automatic Linguistic Annotation of Computer-Mediated Communication / Social Media”. Our system is based on a rule-based tokenizer and a machine-learning sequence-labelling POS tagger using a variety of features. We show that the system is robust across the two tested genres: German computer-mediated communication (CMC) and general German web data (WEB). We achieve the second rank in three of four scenarios. The presented systems are also freely available as open-source components.
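
As an illustration of the rule-based tokenization component, the sketch below applies a few regular-expression rules for URLs, emoticons, and words; these rules are assumptions for demonstration and not the actual rule set of the submitted system.

```python
# Sketch of a rule-based tokenizer for CMC/web text. The rules below (URLs,
# emoticons, then word/punctuation fallback) are illustrative and not the
# actual rule set of the AIPHES system.
import re

TOKEN_RULES = re.compile(
    r"""(
        https?://\S+          |   # URLs kept as single tokens
        [:;]-?[)(DPp]         |   # simple ASCII emoticons
        \w+(?:-\w+)*          |   # words, incl. hyphenated compounds
        \S                        # any other single non-space character
    )""",
    re.VERBOSE,
)

def tokenize(text: str) -> list[str]:
    return TOKEN_RULES.findall(text)

print(tokenize("Schau mal http://example.org :-) voll gut, oder?"))
```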


Meeting of the Association for Computational Linguistics | 2017

Supervised Learning of Automatic Pyramid for Optimization-Based Multi-Document Summarization

Maxime Peyrard; Judith Eckle-Kohler

We present a new supervised framework that learns to estimate automatic Pyramid scores and uses them for optimization-based extractive multi-document summarization. For learning automatic Pyramid scores, we developed a method for automatic training data generation which is based on a genetic algorithm using automatic Pyramid as the fitness function. Our experimental evaluation shows that our new framework significantly outperforms strong baselines regarding automatic Pyramid, and that there is much room for improvement in comparison with the upper-bound for automatic Pyramid.
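
A toy sketch of the training-data generation idea: a small genetic algorithm evolves binary sentence-selection vectors, with an invented fitness function standing in for the automatic Pyramid score; the evolved, scored candidates would then serve as supervised training data.

```python
# Tiny genetic algorithm over binary sentence-selection vectors. The fitness
# function is a toy stand-in for the automatic Pyramid score used in the paper;
# the evolved, scored candidates would serve as training data for learning to
# estimate Pyramid scores.
import random

random.seed(0)
N_SENTENCES, POP_SIZE, GENERATIONS = 8, 20, 30
TARGET = [1, 0, 1, 0, 0, 1, 0, 0]   # pretend these sentences carry the content units

def fitness(candidate):
    # Toy stand-in: overlap with the "content-bearing" sentences minus a length penalty.
    hits = sum(c and t for c, t in zip(candidate, TARGET))
    return hits - 0.2 * sum(candidate)

def mutate(candidate):
    i = random.randrange(N_SENTENCES)
    return candidate[:i] + [1 - candidate[i]] + candidate[i + 1:]

population = [[random.randint(0, 1) for _ in range(N_SENTENCES)] for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    survivors = population[: POP_SIZE // 2]
    population = survivors + [mutate(random.choice(survivors)) for _ in survivors]

best = max(population, key=fitness)
print(best, fitness(best))
```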


Meeting of the Association for Computational Linguistics | 2017

A Principled Framework for Evaluating Summarizers: Comparing Models of Summary Quality against Human Judgments

Maxime Peyrard; Judith Eckle-Kohler

We present a new framework for evaluating extractive summarizers, which is based on a principled representation as optimization problem. We prove that every extractive summarizer can be decomposed into an objective function and an optimization technique. We perform a comparative analysis and evaluation of several objective functions embedded in well-known summarizers regarding their correlation with human judgments. Our comparison of these correlations across two datasets yields surprising insights into the role and performance of objective functions in the different summarizers.
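
The decomposition argued for in the paper can be made concrete with a small sketch: a pluggable objective function scores candidate summaries, and a separate optimization technique searches for a high-scoring sentence subset. Both components below are simplified placeholders, not the objectives analysed in the paper.

```python
# Sketch of the decomposition: an extractive summarizer is an objective function
# over candidate summaries plus an optimization technique that searches for a
# high-scoring subset. Both components are simplified placeholders.
from itertools import combinations

def coverage_objective(summary_sentences):
    # Placeholder objective: number of distinct words covered by the summary.
    return len({w for s in summary_sentences for w in s.split()})

def exhaustive_optimizer(sentences, objective, k=2):
    # Placeholder optimization technique: brute-force search over k-subsets.
    return max(combinations(sentences, k), key=objective)

docs = [
    "the court upheld the ruling",
    "the ruling was upheld on appeal",
    "protesters gathered outside the court",
]
print(exhaustive_optimizer(docs, coverage_objective, k=2))
```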


Meeting of the Association for Computational Linguistics | 2017

Integrating Deep Linguistic Features in Factuality Prediction over Unified Datasets

Gabriel Stanovsky; Judith Eckle-Kohler; Yevgeniy Puzikov; Ido Dagan; Iryna Gurevych

Previous models for the assessment of commitment towards a predicate in a sentence (also known as factuality prediction) were trained and tested against a specific annotated dataset, which limits the generality of their results. In this work we propose an intuitive method for mapping three previously annotated corpora onto a single factuality scale, thereby enabling models to be tested across these corpora. In addition, we design a novel model for factuality prediction by first extending a previous rule-based factuality prediction system and applying it over an abstraction of dependency trees, and then using the output of this system in a supervised classifier. We show that this model outperforms previous methods on all three datasets. We make both the unified factuality corpus and our new model publicly available.
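
A rough sketch of the two-stage design: a rule-based score (standing in for the rule system applied over an abstraction of dependency trees) is combined with shallow surface features and fed to a supervised model predicting a value on a unified factuality scale. Cue lists, scale values, and training examples are invented for illustration.

```python
# Sketch of the two-stage idea: a rule-based factuality cue score is combined
# with shallow surface features and fed to a supervised regressor predicting a
# value on a unified factuality scale. Cue lists, weights, scale values and
# training examples are invented for illustration.
import numpy as np
from sklearn.linear_model import LinearRegression

UNCERTAINTY_CUES = {"might", "may", "reportedly", "allegedly"}
NEGATION_CUES = {"not", "never", "denied"}

def rule_score(tokens):
    # Rule component: start from fully factual (+3) and discount per cue,
    # loosely mimicking a committed-to-non-committed scale.
    score = 3.0
    score -= 2.0 * sum(t in UNCERTAINTY_CUES for t in tokens)
    score -= 4.0 * sum(t in NEGATION_CUES for t in tokens)
    return score

def features(sentence):
    tokens = sentence.lower().split()
    return [rule_score(tokens), float(len(tokens))]

train = [
    ("the company acquired the startup", 3.0),         # fully factual
    ("the company may acquire the startup", 0.5),      # uncertain
    ("the company did not acquire the startup", -3.0), # counter-factual
]
X = np.array([features(s) for s, _ in train])
y = np.array([v for _, v in train])

model = LinearRegression().fit(X, y)
print(model.predict(np.array([features("the startup might reportedly be sold")])))
```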

Collaboration


Dive into Judith Eckle-Kohler's collaborations.

Top Co-Authors

Iryna Gurevych (Technische Universität Darmstadt)
Christian M. Meyer (Technische Universität Darmstadt)
Michael Kohler (Technische Universität Darmstadt)
Michael Matuschek (Technische Universität Darmstadt)
Silvana Hartmann (Technische Universität Darmstadt)
Christian Chiarcos (Goethe University Frankfurt)
Kostadin Cholakov (Humboldt University of Berlin)
Steffen Remus (Technische Universität Darmstadt)