Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Maximilian Köper is active.

Publication


Featured research published by Maximilian Köper.


GSCL | 2013

Pattern-Based Distinction of Paradigmatic Relations for German Nouns, Verbs, Adjectives

Sabine Schulte im Walde; Maximilian Köper

This paper implements a simple vector space model relying on lexico-syntactic patterns to distinguish between the paradigmatic relations synonymy, antonymy and hypernymy. Our study is performed across word classes, and models the lexical relations between German nouns, verbs and adjectives. Applying nearest-centroid classification to the relation vectors, we achieve a precision of 59.80%, which significantly outperforms the majority baseline (χ², p < 0.05). The best results rely on large-scale, noisy patterns, without significant improvements from various pattern generalisations and reliability filters. Analysing the classification shows that (i) antonym/synonym distinction is performed significantly better than synonym/hypernym distinction, and (ii) that paradigmatic relations between verbs are more difficult to predict than paradigmatic relations between nouns or adjectives.
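
As a rough illustration of the classification step, here is a minimal, self-contained sketch of nearest-centroid classification over pattern-count vectors. The feature columns, counts and labels below are invented toy data, not the paper's German pattern inventory.

```python
# Minimal sketch of nearest-centroid classification over pattern-count
# relation vectors (toy data; not the paper's actual features or corpus).
import numpy as np
from sklearn.neighbors import NearestCentroid

# Each row: counts of lexico-syntactic patterns observed between a word pair;
# each label: the paradigmatic relation holding for that pair.
X = np.array([
    [12, 0, 3, 1],   # coordination-style patterns dominate
    [10, 1, 2, 0],
    [0, 9, 1, 4],    # contrastive patterns dominate
    [1, 8, 0, 5],
    [2, 1, 7, 0],    # "X ist ein Y"-style patterns dominate
    [0, 2, 9, 1],
])
y = ["synonymy", "synonymy", "antonymy", "antonymy", "hypernymy", "hypernymy"]

clf = NearestCentroid()   # assigns each vector to the closest class centroid
clf.fit(X, y)
print(clf.predict(np.array([[11, 0, 2, 1]])))  # -> ['synonymy']
```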


Meeting of the Association for Computational Linguistics | 2016

Improving Zero-Shot-Learning for German Particle Verbs by using Training-Space Restrictions and Local Scaling.

Maximilian Köper; Sabine Schulte im Walde; Max Kisselew; Sebastian Padó

Recent models in distributional semantics consider derivational patterns (e.g., use → use + -ful) as the result of a compositional process, where base term and affix are combined. We exploit such models for German particle verbs (PVs), and focus on the task of learning a mapping function between base verbs and particle verbs. Our models apply particle-verb motivated training-space restrictions relying on nearest neighbors, as well as recent advances from zero-shot learning. The models improve the mapping between base terms and derived terms for a new PV derivation dataset, and also across existing derivation datasets for German and English.
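
The core idea of learning a mapping from base-verb to particle-verb embeddings can be sketched with a plain linear (ridge) regression, as below. The vectors are random stand-ins, and the paper's training-space restrictions and local scaling are not reproduced.

```python
# Minimal sketch of learning a linear base-verb -> particle-verb mapping
# (random toy vectors; the training-space restrictions and local scaling
# from the paper are not reproduced here).
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
dim = 50
base_train = rng.normal(size=(200, dim))           # embeddings of base verbs
shift = rng.normal(size=(dim,))                    # pretend the particle adds a shift
pv_train = base_train + shift + 0.01 * rng.normal(size=(200, dim))

# Learn a mapping so that  base @ W + b  approximates the particle-verb vector.
mapper = Ridge(alpha=1.0)
mapper.fit(base_train, pv_train)

unseen_base = rng.normal(size=(1, dim))
predicted_pv = mapper.predict(unseen_base)
# In the real task, predicted_pv would be compared (e.g. by cosine similarity)
# against candidate particle-verb vectors to pick the nearest neighbour.
print(predicted_pv.shape)  # (1, 50)
```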


Proceedings of the 13th Workshop on Multiword Expressions (MWE 2017) | 2017

Complex Verbs are Different: Exploring the Visual Modality in Multi-Modal Models to Predict Compositionality.

Maximilian Köper; Sabine Schulte im Walde

This paper compares a neural network DSM relying on textual co-occurrences with a multi-modal model integrating visual information. We focus on nominal vs. verbal compounds, and zoom into lexical, empirical and perceptual target properties to explore the contribution of the visual modality. Our experiments show that (i) visual features contribute differently for verbs than for nouns, and (ii) images complement textual information, if (a) the textual modality by itself is poor and appropriate image subsets are used, or (b) the textual modality by itself is rich and large (potentially noisy) images are added.
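
One common way to integrate the visual modality, and a reasonable stand-in for the multi-modal setup described here, is to concatenate length-normalised textual and visual vectors and score compositionality by compound-constituent cosine similarity. The sketch below uses random toy vectors and is not the paper's exact model.

```python
# Minimal sketch of a concatenation-based multi-modal representation and a
# cosine-based compositionality score (toy vectors; feature extraction for
# the actual textual and visual modalities is not shown).
import numpy as np

def l2_normalise(v):
    return v / np.linalg.norm(v)

def multimodal(text_vec, image_vec):
    # Concatenate the normalised modalities into one joint representation.
    return np.concatenate([l2_normalise(text_vec), l2_normalise(image_vec)])

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(1)
compound = multimodal(rng.normal(size=300), rng.normal(size=128))
head     = multimodal(rng.normal(size=300), rng.normal(size=128))

# A higher compound-constituent similarity is taken as evidence of a more
# compositional (literal) reading.
print(round(cosine(compound, head), 3))
```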


International Conference of the German Society for Computational Linguistics and Language Technology | 2017

Optimizing Visual Representations in Semantic Multi-modal Models with Dimensionality Reduction, Denoising and Contextual Information

Maximilian Köper; Kim Anh Nguyen; Sabine Schulte im Walde

This paper improves visual representations for multi-modal semantic models, by (i) applying standard dimensionality reduction and denoising techniques, and by (ii) proposing a novel technique ContextVision that takes corpus-based textual information into account when enhancing visual embeddings. We explore our contribution in a visual and a multi-modal setup and evaluate on benchmark word similarity and relatedness tasks. Our findings show that NMF, denoising as well as ContextVision perform significantly better than the original vectors or SVD-modified vectors.
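
A minimal sketch of the dimensionality-reduction step, using scikit-learn's NMF and truncated SVD on random non-negative stand-in features (real CNN activations after a ReLU are likewise non-negative); ContextVision itself is not reproduced here.

```python
# Minimal sketch of reducing visual embeddings with NMF and truncated SVD
# (random non-negative toy features standing in for CNN image vectors).
import numpy as np
from sklearn.decomposition import NMF, TruncatedSVD

rng = np.random.default_rng(2)
visual = np.abs(rng.normal(size=(200, 500)))   # NMF requires non-negative input

nmf = NMF(n_components=50, init="nndsvd", max_iter=300, random_state=0)
visual_nmf = nmf.fit_transform(visual)          # (200, 50)

svd = TruncatedSVD(n_components=50, random_state=0)
visual_svd = svd.fit_transform(visual)          # (200, 50)

# The reduced vectors would replace the raw visual embeddings in the
# multi-modal model and be evaluated on word similarity/relatedness benchmarks.
print(visual_nmf.shape, visual_svd.shape)
```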


Proceedings of the 1st Workshop on Sense, Concept and Entity Representations and their Applications | 2017

Improving Verb Metaphor Detection by Propagating Abstractness to Words, Phrases and Individual Senses

Maximilian Köper; Sabine Schulte im Walde

Abstract words refer to things that cannot be seen, heard, felt, smelled, or tasted, as opposed to concrete words. Among other applications, the degree of abstractness has been shown to be useful information for metaphor detection. Our contributions to this topic are as follows: (i) we compare supervised techniques to learn and extend abstractness ratings for huge vocabularies, (ii) we learn and investigate norms for multi-word units by propagating abstractness to verb-noun pairs, which leads to better metaphor detection, and (iii) we overcome the limitation of learning a single rating per word and show that multi-sense abstractness ratings are potentially useful for metaphor detection. Finally, with this paper we publish automatically created abstractness norms for 3 million English words and multi-words, as well as automatically created sense-specific abstractness ratings.
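
A toy sketch of the phrase-level propagation idea: average word-level abstractness ratings over a verb-noun pair and use the mismatch between a concrete verb and an abstract noun as a metaphor cue. The ratings below are invented, not taken from the published norms, and the paper's supervised propagation models are not reproduced.

```python
# Minimal sketch of propagating word-level abstractness ratings to verb-noun
# pairs (hypothetical ratings; higher = more abstract).
abstractness = {
    "grasp": 3.0, "idea": 8.5, "cup": 1.5,
}

def pair_abstractness(verb, noun):
    # A simple propagation scheme: average the ratings of the two words.
    return (abstractness[verb] + abstractness[noun]) / 2.0

# A concrete verb combined with an abstract noun is a classic metaphor cue
# ("grasp an idea" vs. the literal "grasp a cup").
print(pair_abstractness("grasp", "idea"))   # 5.75
print(pair_abstractness("grasp", "cup"))    # 2.25
```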


North American Chapter of the Association for Computational Linguistics | 2016

Distinguishing Literal and Non-Literal Usage of German Particle Verbs

Maximilian Köper; Sabine Schulte im Walde

This paper provides a binary, token-based classification of German particle verbs (PVs) into literal vs. non-literal usage. A random forest improving standard features (e.g., bag-of-words; affective ratings) with PV-specific information and abstraction over common nouns significantly outperforms the majority baseline. In addition, PV-specific classification experiments demonstrate the role of shared particle semantics and semantically related base verbs in PV meaning shifts.
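
A minimal sketch of the token-level classification setup with a random forest; the features here are random stand-ins rather than the paper's bag-of-words, affective-rating and PV-specific features.

```python
# Minimal sketch of a random-forest classifier for literal vs. non-literal
# particle-verb usage (random toy features and a toy labelling rule).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
X = rng.normal(size=(400, 20))                 # one feature vector per PV token
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # 1 = non-literal (toy rule)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_tr, y_tr)
print(f"accuracy: {clf.score(X_te, y_te):.2f}")
```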


Meeting of the Association for Computational Linguistics | 2016

Automatic Semantic Classification of German Preposition Types: Comparing Hard and Soft Clustering Approaches across Features

Maximilian Köper; Sabine Schulte im Walde

This paper addresses an automatic classification of preposition types in German, comparing hard and soft clustering approaches and various window- and syntax-based co-occurrence features. We show that (i) the semantically most salient preposition features (i.e., subcategorised nouns) are the most successful, and that (ii) soft clustering approaches are required for the task but reveal quite different attitudes towards predicting ambiguity.
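
The hard vs. soft clustering contrast can be sketched with k-means against a Gaussian mixture model, as below; the vectors are random stand-ins for the window- and syntax-based co-occurrence features.

```python
# Minimal sketch contrasting hard (k-means) and soft (Gaussian mixture)
# clustering of preposition co-occurrence vectors (random toy data).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(4)
X = rng.normal(size=(60, 10))        # one vector per preposition type

hard = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(X)
soft = GaussianMixture(n_components=5, random_state=0).fit(X).predict_proba(X)

print(hard[:3])          # each preposition gets exactly one cluster label
print(soft[:3].round(2)) # each preposition gets a distribution over clusters,
                         # which can capture prepositional ambiguity
```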


Proceedings of the 11th International Conference on Computational Semantics | 2015

Multilingual Reliability and "Semantic" Structure of Continuous Word Spaces

Maximilian Köper; Christian Scheible; Sabine Schulte im Walde


Language Resources and Evaluation | 2016

Automatically Generated Affective Norms of Abstractness, Arousal, Imageability and Valence for 350 000 German Lemmas.

Maximilian Köper; Sabine Schulte im Walde


Empirical Methods in Natural Language Processing | 2017

Hierarchical Embeddings for Hypernymy Detection and Directionality.

Kim Anh Nguyen; Maximilian Köper; Sabine Schulte im Walde; Ngoc Thang Vu

Collaboration


Dive into Maximilian Köper's collaborations.

Top Co-Authors

Max Kisselew
University of Stuttgart

Qi Han
University of Stuttgart

Steffen Koch
University of Stuttgart