Publication


Featured research published by Rob van der Goot.


International Conference on Computational Linguistics | 2014

The Meaning Factory: Formal Semantics for Recognizing Textual Entailment and Determining Semantic Similarity

Johannes Bjerva; Johan Bos; Rob van der Goot; Malvina Nissim

Shared Task 1 of SemEval-2014 comprised two subtasks on the same dataset of sentence pairs: recognizing textual entailment and determining textual similarity. We used an existing system based on formal semantics and logical inference to participate in the first subtask, reaching an accuracy of 82% and ranking in the top 5 of more than twenty participating systems. For determining semantic similarity we took a supervised approach using a variety of features, most of which were produced by our system for recognizing textual entailment. In this subtask our system achieved a mean squared error of 0.322, the best of all participating systems.
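The supervised similarity setup described in the abstract can be sketched as features feeding a regression model. The feature (a word-overlap ratio) and the data below are hypothetical stand-ins, not the paper's actual features or results.

```python
# Minimal sketch: predict a similarity score from one entailment-derived
# feature with closed-form ordinary least squares (one slope plus bias).
def fit_ols(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    return slope, my - slope * mx

# Hypothetical feature: word-overlap ratio between the two sentences;
# gold similarity scores on a 1-5 scale.
overlap = [0.1, 0.4, 0.9]
gold = [1.0, 2.5, 4.5]
slope, bias = fit_ols(overlap, gold)
pred = slope * 0.5 + bias  # predicted similarity for a new pair
```

In the real system many such features are combined, but the principle is the same: the entailment machinery supplies the signals, and a learned regressor maps them to a similarity score.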


Meeting of the Association for Computational Linguistics | 2017

Parser Adaptation for Social Media by Integrating Normalization

Rob van der Goot; Gerardus van Noord

This work explores different approaches of using normalization for parser adaptation. Traditionally, normalization is used as a separate pre-processing step. We show that integrating the normalization model into the parsing algorithm is more beneficial. This way, multiple normalization candidates can be leveraged, which improves parsing performance on social media. We test this hypothesis by modifying the Berkeley parser; out-of-the-box it achieves an F1 score of 66.52. Our integrated approach reaches a significant improvement with an F1 score of 67.36, while using the best normalization sequence results in an F1 score of only 66.94.
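The integration idea in the abstract can be illustrated with a toy sketch: instead of committing to a single normalization before parsing, keep several weighted candidates per token and let a downstream score choose the best combination. The names, weights, and scoring function below are hypothetical, not the authors' implementation.

```python
from itertools import product

def best_normalization(candidates, score):
    """Pick the candidate sequence that maximizes a combined score.

    candidates: list of lists; candidates[i] holds (form, weight) pairs
    for token i. score: a function rating a full token sequence, standing
    in here for the parser's model score.
    """
    best_seq, best_total = None, float("-inf")
    for combo in product(*candidates):
        forms = [form for form, _ in combo]
        # Combine normalization confidence with the downstream score.
        total = sum(w for _, w in combo) + score(forms)
        if total > best_total:
            best_seq, best_total = forms, total
    return best_seq

# Toy example: "u" may stay "u" or normalize to "you"; a tiny stand-in
# "language model" rewards sequences made of known words.
candidates = [[("u", 0.2), ("you", 0.8)], [("rock", 1.0)]]
known = {"you", "rock"}
seq = best_normalization(candidates, lambda forms: sum(f in known for f in forms))
```

The point of the integrated approach is exactly this joint decision: a candidate that looks second-best to the normalizer alone can still win once the parser's preferences are taken into account.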


Empirical Methods in Natural Language Processing | 2017

To Normalize, or Not to Normalize: The Impact of Normalization on Part-of-Speech Tagging

Rob van der Goot; Barbara Plank; Malvina Nissim

Does normalization help Part-of-Speech (POS) tagging accuracy on noisy, non-canonical data? To the best of our knowledge, little is known about the actual impact of normalization in a real-world scenario, where gold error detection is not available. We investigate the effect of automatic normalization on POS tagging of tweets. We also compare normalization to strategies that leverage large amounts of unlabeled data kept in its raw form. Our results show that normalization helps, but does not add consistently beyond just word embedding layer initialization. The latter approach yields a tagging model that is competitive with a Twitter state-of-the-art tagger.
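The pipeline compared in the abstract, normalizing first and then tagging, can be sketched with a toy lookup normalizer feeding a toy lexicon tagger. Both dictionaries are hypothetical stand-ins for the trained models used in the paper.

```python
# Toy normalization lexicon and a most-frequent-tag lexicon tagger;
# both are hypothetical stand-ins for learned models.
NORM = {"u": "you", "r": "are", "gr8": "great"}
TAGS = {"you": "PRON", "are": "VERB", "great": "ADJ"}

def normalize(tokens):
    # Replace each non-canonical token with its canonical form, if known.
    return [NORM.get(t, t) for t in tokens]

def tag(tokens):
    # Assign the lexicon tag, falling back to NOUN for unknown words.
    return [(t, TAGS.get(t, "NOUN")) for t in tokens]

tweet = ["u", "r", "gr8"]
tagged = tag(normalize(tweet))
```

Without the normalization step every token here would fall back to the unknown-word tag; with it, the tagger recovers PRON VERB ADJ. The paper's finding is that in practice much of this gain can also be obtained by initializing the tagger's embeddings on large amounts of raw, un-normalized data.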


Computational Linguistics | 2017

Sharing Is Caring: The Future of Shared Tasks

Malvina Nissim; Lasha Abzianidze; Kilian Evang; Rob van der Goot; Hessel Haagsma; Barbara Plank; Martijn Wieling

Shared tasks are indisputably drivers of progress and interest for problems in NLP. This is reflected by their increasing popularity, as well as by the fact that new shared tasks regularly emerge for under-researched and under-resourced topics, especially at workshops and smaller conferences. The general procedures and conventions for organizing a shared task have arisen organically over time (Paroubek, Chaudiron, and Hirschman 2007, Section 7). There is no consistent framework that describes how shared tasks should be organized. This is not a harmful thing per se, but we believe that shared tasks, and by extension the field in general, would benefit from some reflection on the existing conventions. This, in turn, could lead to the future harmonization of shared task procedures. Shared tasks revolve around two aspects: research advancement and competition. We see research advancement as the driving force and main goal behind organizing them. Competition is an instrument to encourage and promote participation. However,


Language Resources and Evaluation | 2016

The Denoised Web Treebank: Evaluating Dependency Parsing under Noisy Input Conditions

Rob van der Goot; Joachim Daiber


Computational Linguistics in the Netherlands | 2017

MoNoise: Modeling Noise Using a Modular Normalization System

Rob van der Goot; Gerardus van Noord


North American Chapter of the Association for Computational Linguistics | 2015

ROB: Using Semantic Meaning to Recognize Paraphrases

Rob van der Goot; Gerardus van Noord


Meeting of the Association for Computational Linguistics | 2018

Bleaching Text: Abstract Features for Cross-lingual Gender Prediction

Rob van der Goot; Nikola Ljubešić; Ian Matroos; Malvina Nissim; Barbara Plank


Language Resources and Evaluation | 2018

A Taxonomy for In-depth Evaluation of Normalization for User Generated Content

Rob van der Goot; Rik van Noord; Gertjan van Noord


Empirical Methods in Natural Language Processing | 2018

Modeling Input Uncertainty in Neural Network Dependency Parsing

Rob van der Goot; Gertjan van Noord

Collaboration


Top co-authors of Rob van der Goot:

Barbara Plank | University of Copenhagen

Johan Bos | University of Groningen

Kilian Evang | University of Groningen