Peter Menke
Bielefeld University
Publications
Featured research published by Peter Menke.
Proceedings of the 8th International Conference on the Evolution of Language (Evolang8) | 2010
Alexander Mehler; Petra Weiß; Peter Menke; Andy Lücking
This paper presents a model of lexical alignment in communication. The aim is to provide a reference model for simulating dialogs in naming-game-related simulations of language evolution. We introduce a network model of alignment to shed light on the law-like dynamics of dialogs in contrast to their random counterpart. In this way, the paper provides evidence on alignment that can serve as reference data for building simulation models of dyadic conversations.
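The contrast between law-like alignment dynamics and a random counterpart can be illustrated with a toy naming game (a hypothetical setup for illustration only, not the paper's actual model): two agents repeatedly name topics, and the hearer either adopts the speaker's word (alignment) or keeps choosing independently (random baseline).

```python
import random

def play_dialog(rounds=200, align=True, seed=0):
    """Toy naming game between two agents: each of 5 topics has two
    candidate words; on every round a random speaker names a random topic.
    Returns the final lexical overlap (fraction of topics agreed on)."""
    rng = random.Random(seed)
    topics = list(range(5))
    words = {t: [f"w{t}a", f"w{t}b"] for t in topics}
    # each agent's currently preferred word per topic
    lex = [{t: rng.choice(words[t]) for t in topics} for _ in (0, 1)]
    for _ in range(rounds):
        speaker = rng.randrange(2)
        t = rng.choice(topics)
        if align:
            lex[1 - speaker][t] = lex[speaker][t]   # hearer adopts speaker's word
        else:
            lex[1 - speaker][t] = rng.choice(words[t])  # random baseline
    # lexical overlap: fraction of topics on which both agents agree
    return sum(lex[0][t] == lex[1][t] for t in topics) / len(topics)
```

With alignment, the dialog lexica converge to full overlap; the random baseline hovers around chance, which is the kind of contrast such reference data is meant to capture.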
Neural Networks | 2012
Alexander Mehler; Andy Lücking; Peter Menke
We present a network model of dialog lexica, called TiTAN (Two-layer Time-Aligned Network) series. TiTAN series capture the formation and structure of dialog lexica in terms of serialized graph representations. The dynamic update of TiTAN series is driven by the dialog-inherent timing of turn-taking. The model provides a link between neural, connectionist underpinnings of dialog lexica on the one hand and observable symbolic behavior on the other. On the neural side, priming and spreading activation are modeled in terms of TiTAN networking. On the symbolic side, TiTAN series account for cognitive alignment in terms of the structural coupling of the linguistic representations of dialog partners. This structural stance allows us to apply TiTAN in machine learning on dialogical alignment data. Previous studies have shown that aligned dialogs can be distinguished from non-aligned ones by means of TiTAN-based modeling. Now, we simultaneously apply this model to two types of dialog: task-oriented, experimentally controlled dialogs on the one hand and more spontaneous, direction-giving dialogs on the other. We ask whether it is possible to separate aligned dialogs from non-aligned ones in a type-crossing way. Starting from a recent experiment (Mehler, Lücking, & Menke, 2011a), we show that such a type-crossing classification is indeed possible. This hints at a structural fingerprint left by alignment in networks of linguistic items that are routinely co-activated during conversation.
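The turn-driven update of a two-layer lexical network can be sketched as follows (a simplification for illustration, not the authors' implementation): each speaker owns one layer whose nodes are uttered words, words co-occurring in a turn are linked within that layer, and inter-layer links connect words both interlocutors have used.

```python
from itertools import combinations

def new_state():
    """Empty two-layer network: one layer per speaker plus inter-layer links."""
    return {"A": {"nodes": set(), "edges": set()},
            "B": {"nodes": set(), "edges": set()},
            "inter": set()}

def update(state, speaker, turn_words):
    """One turn-driven update: add the turn's words to the speaker's layer,
    link words that co-occur within the turn, and refresh inter-layer links
    (words used by both interlocutors)."""
    layer = state[speaker]
    words = set(turn_words)
    layer["nodes"] |= words
    layer["edges"] |= set(combinations(sorted(words), 2))
    state["inter"] = state["A"]["nodes"] & state["B"]["nodes"]
    return state
```

Applying updates turn by turn yields a series of graph snapshots; the growing set of inter-layer links is the structural trace of lexical alignment in this toy version.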
International Symposium on Neural Networks | 2011
Alexander Mehler; Andy Lücking; Peter Menke
We present a lexical network model, called TiTAN, that captures the formation and structure of natural language dialogue lexica. The model creates a bridge between neural connectionist networks and symbolic architectures: on the one hand, TiTAN is driven by the neural motor of lexical alignment, namely priming; on the other hand, TiTAN accounts for the observed symbolic output of interlocutors, namely uttered words. The TiTAN series update is driven by the dialogue-inherent dynamics of turn-taking and incorporates a measure of the structural similarity of graphs. This allows us to apply and evaluate the model: TiTAN is tested by classifying 55 experimental dialogues according to their alignment status. The trade-off between precision and recall of the classification results in an F-score of 0.92.
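The evaluation pipeline described above rests on two ingredients: a structural similarity of graphs and the F-score. The abstract does not specify the similarity measure, so the sketch below uses Jaccard overlap of edge sets as a simple stand-in, alongside the standard F-score computation.

```python
def edge_jaccard(g1, g2):
    """Structural similarity of two graphs given as edge sets.
    Jaccard overlap is a stand-in; the paper's actual measure differs."""
    if not g1 and not g2:
        return 1.0
    return len(g1 & g2) / len(g1 | g2)

def f_score(tp, fp, fn):
    """Harmonic mean of precision and recall (as reported for the classifier)."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)
```

For example, a hypothetical classifier with 46 true positives and 4 errors of each kind reaches exactly the reported F-score of 0.92.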
Applied Ontology | 2017
Peter Menke; Basil Ell; Philipp Cimiano
Representing provenance information for data is of crucial importance for data reuse. This is in particular the case for language resources such as annotated corpora. NIF (the NLP Interchange Format) has been proposed as an RDF vocabulary to support the representation of text data together with annotations. However, NIF suffers from severe shortcomings with respect to its ability to represent provenance information. As a remedy, we present MOND, a new glue ontology that implements an interface between NIF and the PROV-O ontology to support the inclusion of provenance information in NIF-annotated datasets. We first present an approach that reifies annotations and allows the attachment of arbitrary provenance metadata to annotations at any granularity. We show that this approach has an important drawback: it roughly doubles the size of the data. Building on this observation, we design the MOND glue ontology, which implements a modular approach in which annotation metadata is attached not to single annotations but to modules that represent collections of annotations of the same type and origin. This yields a moderate increase in data size while maintaining all the benefits of the first approach. We validate our approach on three use cases that represent prototypical needs in corpus work.
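The size argument behind the modular design can be made concrete with plain triples (the URIs and property names below are hypothetical placeholders, not the actual MOND or NIF vocabulary): attaching metadata to every annotation duplicates it per annotation, whereas a shared module states it once.

```python
def reified(annotations, prov):
    """Per-annotation provenance: every annotation carries its own metadata,
    so the metadata triples are duplicated for each annotation."""
    triples = []
    for a in annotations:
        triples.append((a, "ex:annotates", "ex:text"))
        for p, o in prov:
            triples.append((a, p, o))
    return triples

def modular(annotations, prov):
    """Module-style provenance: metadata is stated once on a shared module,
    and each annotation only adds a membership link."""
    triples = [("ex:module1", p, o) for p, o in prov]
    for a in annotations:
        triples.append((a, "ex:annotates", "ex:text"))
        triples.append((a, "ex:memberOf", "ex:module1"))
    return triples
```

With 100 annotations and 3 metadata statements, the reified variant needs 400 triples while the modular one needs 203, matching the qualitative claim of a roughly doubled versus moderately increased data size.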
Proceedings of the Corpus Linguistics 2009 Conference | 2009
Rüdiger Gleim; Ulli Waltinger; Alexander Mehler; Peter Menke
Proceedings of 12th Workshop on Semantics and Pragmatics of Dialogue | 2008
Andy Lücking; Alexander Mehler; Peter Menke
Proceedings of the 2nd Workshop on Linked Data in Linguistics (LDL-2013): Representing and linking lexicons, terminologies and other language data | 2013
Peter Menke; John P. McCrae; Philipp Cimiano
Language Resources and Evaluation | 2012
Peter Menke; Philipp Cimiano
Language Resources and Evaluation | 2010
Peter Menke; Alexander Mehler
Journal of Multimodal Communication Studies | 2015
Peter Menke; Farina Freigang; Thomas Kronenberg; Sören Klett; Kirsten Bergmann