Niko Schenk
Goethe University Frankfurt
Publications
Featured research published by Niko Schenk.
Proceedings of the CoNLL-16 shared task | 2016
Niko Schenk; Christian Chiarcos; Kathrin Donandt; Samuel Rönnqvist; Evgeny A. Stepanov; Giuseppe Riccardi
We describe our contribution to the CoNLL 2016 Shared Task on shallow discourse parsing. Our system extends the two best parsers from the previous year's competition by integrating a novel implicit sense labeling component. It is grounded in a highly generic, language-independent feedforward neural network architecture incorporating weighted word embeddings for argument spans, which obviates the need for (traditional) hand-crafted features. Despite its simplicity, our system overall outperforms all results from 2015 on 5 out of 6 evaluation sets for English and achieves an absolute improvement in F1-score of 3.2% on the PDTB test section for non-explicit sense classification.
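A minimal sketch of the general idea behind such a sense labeling component, not the authors' implementation: each argument span is reduced to a weighted average of its word embeddings, the two span vectors are concatenated, and a small feedforward network predicts the sense. The embeddings, the weighting heuristic, the sense labels and the data below are toy assumptions.

# Sketch only: implicit sense labeling from weighted word-embedding averages
# of the two argument spans, fed to a small feedforward classifier.
import numpy as np
from sklearn.neural_network import MLPClassifier

EMB_DIM = 50
rng = np.random.default_rng(0)
# Toy embedding lookup; in practice pre-trained vectors would be used.
vocab = {w: rng.normal(size=EMB_DIM) for w in
         "the market fell because investors were nervous however prices recovered".split()}

def span_vector(tokens):
    """Weighted average of word vectors for one argument span; words are weighted
    by a simple content-word heuristic (an assumption, not the paper's scheme)."""
    stop = {"the", "were", "because", "however"}
    vecs, weights = [], []
    for t in tokens:
        if t in vocab:
            vecs.append(vocab[t])
            weights.append(0.5 if t in stop else 1.0)
    weights = np.asarray(weights)
    return (weights[:, None] * np.stack(vecs)).sum(0) / weights.sum()

def featurize(arg1, arg2):
    """Concatenate the two span representations -- the only 'features' used."""
    return np.concatenate([span_vector(arg1), span_vector(arg2)])

# Toy training pairs labeled with PDTB-style senses.
X = np.stack([
    featurize("the market fell".split(), "investors were nervous".split()),
    featurize("prices recovered".split(), "however investors were nervous".split()),
])
y = ["Contingency.Cause", "Comparison.Contrast"]

clf = MLPClassifier(hidden_layer_sizes=(100,), max_iter=500, random_state=0).fit(X, y)
print(clf.predict([featurize("the market fell".split(), "prices recovered".split())]))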
conference on computational natural language learning | 2015
Christian Chiarcos; Niko Schenk
We describe a minimalist approach to shallow discourse parsing in the context of the CoNLL 2015 Shared Task. Our parser integrates a rule-based component for argument identification and data-driven models for the classification of explicit and implicit relations. We place special emphasis on the evaluation of implicit sense labeling: we present different feature sets and show that (i) word embeddings are competitive with traditional word-level features, and (ii) they can be used to considerably reduce the total number of features. Despite its simplicity, our parser is competitive with other systems in terms of sense recognition and thus provides a solid ground for further refinement.
joint conference on lexical and computational semantics | 2015
Steffen Eger; Niko Schenk; Alexander Mehler
We induce semantic association networks from translation relations in parallel corpora. The resulting semantic spaces are encoded in a single reference language, which ensures cross-language comparability. As our main contribution, we cluster the obtained (cross-lingually comparable) lexical semantic spaces. We find that, in our sample of languages, lexical semantic spaces largely coincide with genealogical relations. To our knowledge, this constitutes the first large-scale quantitative lexical semantic typology that is completely unsupervised, bottom-up, and data-driven. Our results may inform the decision of which multilingual resources to integrate in a semantic evaluation task.
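A toy sketch of the clustering step, not the paper's pipeline (the language vectors and word-pair dimensions are invented): each language is encoded as a vector of association scores over reference-language word pairs, and the languages are clustered hierarchically; with real data the clusters are expected to recover genealogical groupings.

# Sketch only: hierarchical clustering of cross-lingually comparable
# lexical semantic spaces, one vector per language (toy values).
import numpy as np
from scipy.cluster.hierarchy import linkage, dendrogram

langs = ["German", "Dutch", "Spanish", "Italian"]
# Rows: languages; columns: association scores for a fixed set of
# reference-language word pairs (hypothetical numbers).
spaces = np.array([
    [0.9, 0.1, 0.3],   # German
    [0.8, 0.2, 0.3],   # Dutch
    [0.2, 0.9, 0.6],   # Spanish
    [0.3, 0.8, 0.7],   # Italian
])

# Agglomerative clustering over the language vectors; here the toy data is
# constructed so that Germanic and Romance languages group together.
Z = linkage(spaces, method="average", metric="cosine")
print(dendrogram(Z, labels=langs, no_plot=True)["ivl"])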
meeting of the association for computational linguistics | 2017
Samuel Rönnqvist; Niko Schenk; Christian Chiarcos
We introduce an attention-based Bi-LSTM for Chinese implicit discourse relations and demonstrate that modeling argument pairs as a joint sequence can outperform word order-agnostic approaches. Our model benefits from a partial sampling scheme and is conceptually simple, yet achieves state-of-the-art performance on the Chinese Discourse Treebank. We also visualize its attention activity to illustrate the model's ability to selectively focus on the relevant parts of an input sequence.
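A rough illustration of this kind of architecture (hyper-parameters, vocabulary size and sense inventory are assumptions, not the published model): the two arguments are concatenated into one token sequence, read by a bidirectional LSTM, and pooled with a learned attention layer before sense classification.

# Sketch only: attention-based Bi-LSTM over a joint argument-pair sequence.
import torch
import torch.nn as nn

class AttnBiLSTM(nn.Module):
    def __init__(self, vocab_size, emb_dim=100, hidden=128, n_senses=10):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hidden, batch_first=True, bidirectional=True)
        self.attn = nn.Linear(2 * hidden, 1)     # scores each time step
        self.out = nn.Linear(2 * hidden, n_senses)

    def forward(self, token_ids):
        h, _ = self.lstm(self.emb(token_ids))          # (B, T, 2*hidden)
        weights = torch.softmax(self.attn(h), dim=1)   # attention over time steps
        context = (weights * h).sum(dim=1)             # weighted sum of LSTM states
        return self.out(context), weights.squeeze(-1)

# Arg1 and Arg2 are concatenated into one joint sequence per instance.
model = AttnBiLSTM(vocab_size=5000)
tokens = torch.randint(0, 5000, (2, 30))               # batch of 2 joint sequences
logits, attention = model(tokens)
print(logits.shape, attention.shape)                   # [2, 10] and [2, 30]

The returned attention weights are what a visualization of the model's focus over the input sequence would be based on.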
Proceedings of the 2nd Workshop on Linking Models of Lexical, Sentential and Discourse-level Semantics | 2017
Niko Schenk; Christian Chiarcos
We present a resource-lean neural recognizer for modeling coherence in commonsense stories. Our lightweight system is inspired by successful attempts at modeling discourse relations and stands out due to its simplicity and easy optimization compared to prior approaches to narrative script learning. We evaluate our approach on the Story Cloze Test, demonstrating an absolute improvement in accuracy of 4.7% over state-of-the-art implementations.
north american chapter of the association for computational linguistics | 2016
Niko Schenk; Christian Chiarcos
Gold annotations for supervised implicit semantic role labeling are extremely sparse and costly. As a lightweight alternative, this paper describes an approach based on unsupervised parsing which can do without iSRL-specific training data: We induce prototypical roles from large amounts of explicit SRL annotations paired with their distributed word representations. An evaluation shows competitive performance with supervised methods on the SemEval 2010 data, and our method can easily be applied to predicates (or languages) for which no training annotations are available.
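A minimal sketch of the prototype idea (toy embeddings and invented role fillers; the paper's induction procedure is richer): a prototypical role is the centroid of the embeddings of its explicit fillers, and a candidate filler for an implicit role is assigned to the nearest prototype.

# Sketch only: nearest-prototype assignment of implicit role fillers.
import numpy as np

rng = np.random.default_rng(1)
emb = {w: rng.normal(size=50) for w in
       ["chef", "cook", "waiter", "soup", "meal", "pasta", "dinner"]}

# Explicit SRL annotations for a predicate such as "serve" (hypothetical).
explicit_fillers = {
    "Arg0": ["chef", "cook", "waiter"],   # the server
    "Arg1": ["soup", "meal", "pasta"],    # the thing served
}

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# Prototype = centroid of the explicit fillers' word vectors.
prototypes = {role: np.mean([emb[w] for w in ws], axis=0)
              for role, ws in explicit_fillers.items()}

def label_implicit(candidate):
    """Assign the candidate filler to the most similar role prototype."""
    return max(prototypes, key=lambda r: cosine(emb[candidate], prototypes[r]))

print(label_implicit("dinner"))   # with real embeddings, expected: Arg1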
annual meeting of the special interest group on discourse and dialogue | 2015
Christian Chiarcos; Niko Schenk
We propose a generic, memory-based approach for the detection of implicit semantic roles. While state-of-the-art methods for this task combine hand-crafted rules with specialized and costly lexical resources, our models use large corpora with automated annotations for explicit semantic roles only to capture the distribution of predicates and their associated roles. We show that memory-based learning can increase the recognition rate of implicit roles beyond the state-of-the-art.
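The sketch below illustrates memory-based (instance-based) learning in this setting with invented features, not the authors' feature set: instances derived from explicit-role annotations are simply stored, and an implicit-role candidate is labeled by its nearest stored neighbours.

# Sketch only: k-NN over stored explicit-role instances (toy feature values).
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Each instance: a small feature vector for a candidate filler of a predicate,
# e.g. [embedding similarity to predicate, sentence distance, is-subject].
X_explicit = np.array([
    [0.8, 0.0, 1.0],
    [0.7, 0.0, 1.0],
    [0.6, 1.0, 0.0],
    [0.2, 2.0, 0.0],
])
y_explicit = ["Arg0", "Arg0", "Arg1", "NONE"]

# "Training" is just memorizing the automatically annotated explicit instances.
memory = KNeighborsClassifier(n_neighbors=3).fit(X_explicit, y_explicit)

# An implicit-role candidate is labeled by lookup against the stored memory.
print(memory.predict([[0.75, 0.0, 1.0]]))   # -> ['Arg0']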
language resources and evaluation | 2018
Christian Chiarcos; Niko Schenk
language resources and evaluation | 2018
Christian Chiarcos; Émilie Pagé-Perron; Ilya Khait; Niko Schenk; Lucas Reckling
language resources and evaluation | 2018
Armin Hoenen; Niko Schenk