Publications


Featured research published by Natalie Schluter.


International Joint Conference on Natural Language Processing | 2015

Unsupervised extractive summarization via coverage maximization with syntactic and semantic concepts

Natalie Schluter; Anders Søgaard

Coverage maximization with bigram concepts is a state-of-the-art approach to unsupervised extractive summarization. It has been argued that such concepts are adequate and, in contrast to more linguistic concepts such as named entities or syntactic dependencies, more robust, since they do not rely on automatic processing. In this paper, we show that while this seems to be the case for a commonly used newswire dataset, use of syntactic and semantic concepts leads to significant improvements in performance in other domains.
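As a rough illustration of the coverage-maximization objective discussed above, here is a minimal greedy sketch over hypothetical inputs (each sentence paired with a set of concepts such as bigrams, named entities, or dependency arcs, plus per-concept weights). Systems in this line of work typically solve the objective exactly, e.g. as an integer linear program; this sketch is not the paper's system and only shows what is being maximized.

```python
# Minimal greedy sketch of coverage maximization for extractive summarization.
# Hypothetical inputs; not the exact (ILP-based) systems described in the paper.

def greedy_summary(sentences, concept_weights, length_budget):
    """sentences: list of (text, concept_set, length) tuples."""
    chosen, covered, used = [], set(), 0
    remaining = set(range(len(sentences)))
    while True:
        # Sentences that still fit within the length budget.
        candidates = [i for i in remaining
                      if used + sentences[i][2] <= length_budget]
        if not candidates:
            break
        # Marginal gain: total weight of concepts not yet covered.
        gains = {i: sum(concept_weights.get(c, 0.0)
                        for c in sentences[i][1] - covered)
                 for i in candidates}
        best = max(candidates, key=gains.get)
        if gains[best] <= 0.0:
            break
        chosen.append(sentences[best][0])
        covered |= sentences[best][1]
        used += sentences[best][2]
        remaining.discard(best)
    return chosen
```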


Joint Conference on Lexical and Computational Semantics | 2015

The complexity of finding the maximum spanning DAG and other restrictions for DAG parsing of natural language

Natalie Schluter

Recently, there has been renewed interest in semantic dependency parsing, one paradigm of which focuses on parsing directed acyclic graphs (DAGs). Consideration of the decoding problem in natural language semantic dependency parsing as finding a maximum spanning DAG of a weighted directed graph carries many complexities. In particular, the computational complexity (and approximability) of the problem has not been addressed in the literature to date. This paper helps to fill this gap, showing that this general problem is APX-hard, and is NP-hard even under the planar restriction, in the graph-theoretic sense. On the other hand, we show that under the restriction of projectivity, the problem has a straightforward O(n³) algorithm. We also give some empirical evidence of the algorithmic importance of these graph restrictions, on data from the SemEval 2014 task 8 on Broad Coverage Semantic Dependency Parsing.
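For intuition about why projectivity admits cubic-time decoding, below is a sketch of the classic Eisner O(n³) dynamic program for projective dependency trees. This is a deliberately swapped-in illustration: the paper's algorithm concerns projective spanning DAGs and is not reproduced here, and the arc-score matrix is an assumed input.

```python
# Sketch of Eisner's O(n^3) dynamic program for projective *trees*, shown only
# to illustrate cubic-time projective decoding; not the paper's DAG algorithm.
import numpy as np

def eisner_best_score(score):
    """score[h, m] = weight of arc h -> m; position 0 is an artificial root.
    Returns the score of the best projective dependency tree rooted at 0."""
    n = score.shape[0]
    comp = np.full((n, n, 2), -np.inf)  # complete spans; 1 = headed at left end
    inc = np.full((n, n, 2), -np.inf)   # incomplete spans (arc still open)
    comp[np.arange(n), np.arange(n), :] = 0.0
    for length in range(1, n):
        for s in range(n - length):
            t = s + length
            # Close an arc between the span endpoints, in either direction.
            best = max(comp[s, r, 1] + comp[r + 1, t, 0] for r in range(s, t))
            inc[s, t, 0] = best + score[t, s]   # arc t -> s
            inc[s, t, 1] = best + score[s, t]   # arc s -> t
            # Extend a closed arc with an adjacent complete span.
            comp[s, t, 0] = max(comp[s, r, 0] + inc[r, t, 0] for r in range(s, t))
            comp[s, t, 1] = max(inc[s, r, 1] + comp[r, t, 1]
                                for r in range(s + 1, t + 1))
    return comp[0, n - 1, 1]
```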


Meeting of the Association for Computational Linguistics | 2014

On maximum spanning DAG algorithms for semantic DAG parsing

Natalie Schluter

Consideration of the decoding problem in semantic parsing as finding a maximum spanning DAG of a weighted directed graph carries many complexities that haven’t been fully addressed in the literature to date, among which are its actual appropriateness for the decoding task in semantic parsing, not to mention an explicit proof of its complexity (and its approximability). In this paper, we consider the objective function for the maximum spanning DAG problem, and what it means in terms of decoding for semantic parsing. In doing so, we give anecdotal evidence against its use in this task. In addition, we consider the only graph-based maximum spanning DAG approximation algorithm presented in the literature (without any approximation guarantee) to date and finally provide an approximation guarantee for it, showing that it is an O(1/n) factor approximation algorithm, where n is the size of the digraph’s vertex set.
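To make the kind of algorithm under discussion concrete, here is a generic greedy arc-selection heuristic for building a spanning DAG: keep arcs in order of decreasing weight, skipping any arc that would close a directed cycle. It is not claimed to be the exact algorithm analyzed in the paper, and the approximation guarantee is the paper's result, not something this sketch establishes.

```python
# Hedged sketch of a greedy spanning-DAG heuristic; illustration only.

def greedy_spanning_dag(n, weighted_arcs):
    """n: number of vertices 0..n-1;
    weighted_arcs: list of (u, v, w) for a candidate arc u -> v with weight w."""
    children = {u: set() for u in range(n)}

    def reaches(src, dst):
        # Depth-first search over the arcs kept so far.
        stack, seen = [src], set()
        while stack:
            u = stack.pop()
            if u == dst:
                return True
            if u not in seen:
                seen.add(u)
                stack.extend(children[u])
        return False

    kept = []
    for u, v, w in sorted(weighted_arcs, key=lambda a: -a[2]):
        if w <= 0:
            break                     # non-positive arcs cannot raise the score
        if not reaches(v, u):         # adding u -> v is safe iff v cannot reach u
            children[u].add(v)
            kept.append((u, v, w))
    return kept
```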


Developments in Language Theory | 2010

On Lookahead Hierarchies for Monotone and Deterministic Restarting Automata with Auxiliary Symbols (Extended Abstract)

Natalie Schluter

A restarting automaton is a special type of linearly bounded automaton with fixed lookahead length k, whose computation proceeds in cycles performing one length-reducing rewrite of the lookahead contents per cycle as the only modification of the input. Through various restrictions on this machine model, a vast number of traditional and new language classes have been excavated. In two studies on lookahead hierarchies for restarting automata without auxiliary symbols [2,3], it was shown that lookahead length is often a significant restriction on the power of these types of restarting automata. No similar study on lookahead hierarchies for restarting automata with auxiliary symbols has been explicitly carried out.
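To illustrate the cycle/rewrite/restart behaviour described in the abstract, here is a toy deterministic restarting automaton (an invented example, not one from the paper) that accepts the language { aⁿbⁿ : n ≥ 0 } with a lookahead window of length 4 and a single length-reducing rewrite per cycle.

```python
# Toy illustration of the restarting-automaton model; invented example.

def accepts_anbn(word):
    """Deterministic toy restarting automaton for { a^n b^n : n >= 0 }:
    lookahead window of length 4, one length-reducing rewrite (aabb -> ab)
    per cycle, then a restart from the left end of the tape."""
    tape = word
    while True:
        if tape in ("", "ab"):        # accepting tail configurations
            return True
        i = tape.find("aabb")         # scan left to right with the window
        if i == -1:
            return False              # no rewrite applicable: reject
        tape = tape[:i] + "ab" + tape[i + 4:]   # rewrite, then restart

# accepts_anbn("aaabbb") -> True; accepts_anbn("abab") -> False
```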


International Journal of Computer Mathematics | 2015

Restarting automata with auxiliary symbols restricted by lookahead size

Natalie Schluter

This paper presents a study on lookahead hierarchies for restarting automata with auxiliary symbols. We show that the class of languages recognized by deterministic monotone or monotone restarting automata whose restart and rewrite steps are separated coincides with that of the same type of automaton whose restart and rewrite steps are not separated, for any fixed lookahead size. For the non-monotone deterministic case, the lookahead length must be approximately doubled. We then turn our attention to restarting automata with small lookahead. For the general restarting automaton model, we show that there are just two different classes of languages recognized, through the restriction of lookahead size: those with lookahead size 1 and those with lookahead size 2. We also show that the respective (left-) monotone restarting automaton models characterize the context-free languages and that the respective right-left-monotone restarting automata characterize the linear languages, both with just lookahead length 2.


International Conference on Computational Linguistics | 2014

Copenhagen-Malmö: Tree Approximations of Semantic Parsing Problems

Natalie Schluter; Anders Søgaard; Jakob Elming; Dirk Hovy; Barbara Plank; Héctor Martínez Alonso; Anders Johannsen; Sigrid Klerke

In this shared task paper for SemEval-2014 Task 8, we show that most semantic structures can be approximated by trees through a series of almost bijective graph transformations. We transform input graphs, apply off-the-shelf methods from syntactic parsing on the resulting trees, and retrieve output graphs. Using tree approximations, we obtain good results across three semantic formalisms, with a 15.9% error reduction over a state-of-the-art semantic role labeling system on development data. Our system came in 3rd of 6 in the shared task's closed track.
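The general idea of a graph-to-tree transformation can be sketched as follows: prune each node's extra incoming arcs so every node keeps at most one head, keeping the pruned arcs around so a later step can restore them. This is an invented illustration, not the near-bijective transformations of the shared-task system (which pack the removed information into labels), and keeping one head per node does not by itself guarantee a single connected tree.

```python
# Hedged sketch of a graph-to-tree approximation and its inverse; invented
# illustration only, not the system's actual transformations.

def to_tree(arcs):
    """arcs: set of (head, dependent, label) over integer token positions.
    Keep one incoming arc per dependent (here: the closest head) and return
    the pruned arcs separately so a later step can restore them."""
    by_dependent = {}
    for head, dep, label in arcs:
        by_dependent.setdefault(dep, []).append((head, dep, label))
    kept, pruned = set(), set()
    for dep, incoming in by_dependent.items():
        incoming.sort(key=lambda a: abs(a[0] - a[1]))   # closest head first
        kept.add(incoming[0])
        pruned.update(incoming[1:])
    return kept, pruned

def to_graph(kept, pruned):
    """Inverse step: reattach the arcs removed before tree parsing."""
    return kept | pruned
```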


Language and Automata Theory and Applications | 2011

Restarting automata with auxiliary symbols and small lookahead

Natalie Schluter

We present a study on lookahead hierarchies for restarting automata with auxiliary symbols and small lookahead. In particular, we show that there are just two different classes of languages recognised by RRWW automata, through the restriction of lookahead size. We also show that the respective (left-) monotone restarting automaton models characterise the context-free languages and that the respective right-left-monotone restarting automata characterise the linear languages, both with just lookahead length 2.


Meeting of the Association for Computational Linguistics | 2017

How (not) to train a dependency parser: The curious case of jackknifing part-of-speech taggers

Željko Agić; Natalie Schluter

In dependency parsing, jackknifing taggers is indiscriminately used as a simple adaptation strategy. Here, we empirically evaluate when and how (not) to use jackknifing in parsing. On 26 languages, we reveal a preference that conflicts with, and surpasses, the ubiquitous ten-folding. We show no clear benefits of tagging the training data in cross-lingual parsing.
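Jackknifing as described above can be sketched in a few lines: split the training data into k folds, train a tagger on k-1 folds, and use it to tag the held-out fold, so that the parser later trains on predicted rather than gold tags. In this sketch, train_tagger and the returned tagger's .tag method are hypothetical placeholders, not a real API.

```python
# Hedged sketch of k-fold jackknifing of a POS tagger over training data.

def jackknife_tags(sentences, train_tagger, k=10):
    """sentences: list of (tokens, gold_tags) pairs.
    Returns predicted tag sequences aligned with the input order."""
    predicted = [None] * len(sentences)
    for fold in range(k):
        heldout = [i for i in range(len(sentences)) if i % k == fold]
        train = [sentences[i] for i in range(len(sentences)) if i % k != fold]
        tagger = train_tagger(train)          # hypothetical training routine
        for i in heldout:
            tokens, _gold = sentences[i]
            predicted[i] = tagger.tag(tokens) # hypothetical tagging call
    return predicted
```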


North American Chapter of the Association for Computational Linguistics | 2016

CoastalCPH at SemEval-2016 Task 11: The importance of designing your Neural Networks right.

Joachim Bingel; Natalie Schluter; Héctor Martínez Alonso

We present two methods for the automatic detection of complex words in context as perceived by non-native English readers, for the SemEval 2016 Task 11 on Complex Word Identification (Paetzold and Specia, 2016). The submitted systems exploit the same set of features, but are highly disparate in (i) their learning algorithm and (ii) their angle on the learning objective, where especially the latter presents an effort to account for the sparsity of positive instances in the data as well as the large disparity between the distributions of positive instances in the training and test data. We further present valuable insights that we gained during intensive and extensive post-task experiments. Those revealed that despite poor results in the task, our neural network approach is competitive with the systems achieving the best results. The central contribution of this paper is therefore a demonstration of the aptitude of deep neural networks for the task of identifying complex words.
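One generic way to account for sparse positive instances, of the kind alluded to above, is to reweight the positive class during training. The snippet below is a plain scikit-learn illustration under assumed feature inputs, not the submitted feature set, learning objective, or neural architecture.

```python
# Hedged, generic illustration of class reweighting for an imbalanced
# binary task such as complex word identification.
from sklearn.linear_model import LogisticRegression

def train_cwi_baseline(X_train, y_train):
    """X_train: feature matrix; y_train: 1 for complex words, 0 otherwise."""
    clf = LogisticRegression(class_weight="balanced", max_iter=1000)
    clf.fit(X_train, y_train)
    return clf   # clf.predict(X_test) then yields 0/1 complexity labels
```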


International Workshop/Conference on Parsing Technologies | 2015

Maximising Spanning Subtree Scores for Parsing Tree Approximations of Semantic Dependency Digraphs

Natalie Schluter

We present a method for finding the best tree approximation parse of a dependency digraph for a given sentence, with respect to a dataset of semantic digraphs as a computationally efficient and accurate alternative to DAG parsing. We present a training algorithm that learns the spanning subtree parses with the highest scores with respect to the data, and consider the output of this algorithm a description of the best tree approximations for digraphs of sentences from similar data. With the results from this approach, we acquire some important insights on the limits of solely data-driven tree approximation approaches to semantic dependency DAG parsing, and their rule-based, pre-processed tree approximation counterparts.
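As a generic point of reference for the decoding side of tree approximation (not the paper's training algorithm for learning subtree scores), a maximum spanning arborescence of a weighted dependency digraph can be computed with an off-the-shelf Chu-Liu/Edmonds routine, here via networkx, which is assumed to be available.

```python
# Hedged sketch: standard maximum-spanning-arborescence decoding via networkx.
import networkx as nx

def best_spanning_tree(n, weighted_arcs, root=0):
    """weighted_arcs: iterable of (head, dependent, weight) over nodes 0..n-1.
    Returns the arcs of a maximum spanning arborescence rooted at `root`."""
    g = nx.DiGraph()
    g.add_nodes_from(range(n))
    g.add_weighted_edges_from(weighted_arcs)
    # Drop arcs entering the root so the arborescence is rooted there.
    g.remove_edges_from(list(g.in_edges(root)))
    tree = nx.maximum_spanning_arborescence(g)   # Chu-Liu/Edmonds
    return sorted(tree.edges(data="weight"))
```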

Collaboration


Dive into Natalie Schluter's collaborations.

Top Co-Authors

Barbara Plank (University of Copenhagen)
Željko Agić (University of Copenhagen)
Jakob Elming (Copenhagen Business School)
Dirk Hovy (University of Southern California)
Sigrid Klerke (University of Copenhagen)