Noortje Venhuizen
Saarland University
Publications
Featured research published by Noortje Venhuizen.
Cognitive Science | 2017
Harm Brouwer; Matthew W. Crocker; Noortje Venhuizen; John Hoeks
Ten years ago, researchers using event‐related brain potentials (ERPs) to study language comprehension were puzzled by what looked like a Semantic Illusion: Semantically anomalous, but structurally well‐formed sentences did not affect the N400 component—traditionally taken to reflect semantic integration—but instead produced a P600 effect, which is generally linked to syntactic processing. This finding led to a considerable amount of debate, and a number of complex processing models have been proposed as an explanation. What these models have in common is that they postulate two or more separate processing streams, in order to reconcile the Semantic Illusion and other semantically induced P600 effects with the traditional interpretations of the N400 and the P600. Recently, however, these multi‐stream models have been called into question, and a simpler single‐stream model has been proposed. According to this alternative model, the N400 component reflects the retrieval of word meaning from semantic memory, and the P600 component indexes the integration of this meaning into the unfolding utterance interpretation. In the present paper, we provide support for this “Retrieval–Integration (RI)” account by instantiating it as a neurocomputational model. This neurocomputational model is the first to successfully simulate the N400 and P600 amplitude in language comprehension, and simulations with this model provide a proof of concept of the single‐stream RI account of semantically induced patterns of N400 and P600 modulations.
Handbook of Linguistic Annotation | 2017
Johan Bos; Valerio Basile; Kilian Evang; Noortje Venhuizen; Johannes Bjerva
The goal of the Groningen Meaning Bank (GMB) is to obtain a large corpus of English texts annotated with formal meaning representations. Since manually annotating a comprehensive corpus with deep semantic representations is a hard and time-consuming task, we employ a sophisticated bootstrapping approach. This method employs existing language technology tools (for segmentation, part-of-speech tagging, named entity tagging, animacy labelling, syntactic parsing, and semantic processing) to get a reasonable approximation of the target annotations as a starting point. The machine-generated annotations are then refined by information obtained from both expert linguists (using a wiki-like platform) and crowd-sourcing methods (in the form of a ‘Game with a Purpose’) which help us in deciding how to resolve syntactic and semantic ambiguities. The result is a semantic resource that integrates various linguistic phenomena, including predicate-argument structure, scope, tense, thematic roles, rhetorical relations and presuppositions. The semantic formalism that brings all levels of annotation together in one meaning representation is Discourse Representation Theory, which supports meaning representations that can be translated to first-order logic. In contrast to ordinary treebanks, the units of annotation in the GMB are texts, rather than isolated sentences. The current version of the GMB contains more than 10,000 public domain texts aligned with Discourse Representation Structures, and is freely available for research purposes.
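The bootstrapping approach described in the abstract—machine tools produce a first-pass annotation, which human input then refines—can be sketched as follows. This is an illustrative sketch only: the function names, the placeholder tagger, and the correction format are all hypothetical, not the GMB's actual toolchain.

```python
# Hypothetical sketch of a GMB-style bootstrapping step: automatic tools
# produce an initial annotation layer, and corrections from experts or
# crowdsourcing override the machine output. All names here are invented
# for illustration.

def tokenize(text):
    return text.split()

def pos_tag(tokens):
    # placeholder tagger: tags everything as a noun
    return [(tok, "NN") for tok in tokens]

def bootstrap(text, human_corrections):
    tokens = tokenize(text)
    tags = dict(pos_tag(tokens))
    tags.update(human_corrections)  # expert/crowd input wins over tools
    return tags

print(bootstrap("the man walks", {"walks": "VBZ"}))
# {'the': 'NN', 'man': 'NN', 'walks': 'VBZ'}
```

The design point is only the override order: tool output provides full coverage, while sparse human judgments take precedence wherever they exist.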
Journal of Semantics | 2018
Noortje Venhuizen; Johan Bos; Petra Hendriks; Harm Brouwer
The property of projection poses a challenge to formal semantic theories, due to its apparent non-compositional nature. Projected content is therefore typically analyzed as being different from and independent of asserted content. Recent evidence, however, suggests that these types of content in fact closely interact, thereby calling for a more integrated analysis that captures their similarities, while respecting their differences. Here, we propose such a unified, compositional semantic analysis of asserted and projected content. Our analysis captures the similarities and differences between presuppositions, anaphora, conventional implicatures and assertions on the basis of their information structure, that is, on the basis of how their content is contributed to the unfolding discourse context. We formalize our analysis in an extension of the dynamic semantic framework of Discourse Representation Theory (DRT)—called Projective DRT (PDRT)—that employs projection variables to capture the information-structural aspects of semantic content; different constellations of such variables capture the differences between the different types of projected and asserted content within a single dimension of meaning. We formally derive the structural and compositional properties of PDRT, as well as its semantic interpretation. By instantiating PDRT as a mature semantic formalism, we argue that it paves the way for a more focused investigation of the information-structural aspects of meaning.
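The core idea of projection variables can be sketched in a minimal encoding. This is not the authors' formalism or implementation—the class and field names below are hypothetical—but it illustrates the mechanism: every referent and condition carries a pointer, and content projects exactly when its pointer differs from the label of the local context.

```python
from dataclasses import dataclass

# Illustrative sketch of a PDRT-style structure (names are hypothetical).
# Each referent and condition is paired with a projection pointer; a
# pointer equal to the local label marks asserted content, while a
# pointer to another (accessible) context marks projected content.

@dataclass
class PDRS:
    label: int        # projection variable of this local context
    referents: list   # (pointer, variable) pairs
    conditions: list  # (pointer, condition) pairs

# "A man is happy." -- asserted: all pointers equal the local label 1.
asserted = PDRS(1, [(1, "x")], [(1, "man(x)"), (1, "happy(x)")])

# "The man is happy." -- the presupposed referent and its restriction
# project to context 2, while the assertion stays local (pointer 1).
presupposed = PDRS(1, [(2, "x")], [(2, "man(x)"), (1, "happy(x)")])
```

Different constellations of pointers thus distinguish assertions, presuppositions, and other projected content within one representation, as the abstract describes.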
Discourse Processes | 2018
Noortje Venhuizen; Matthew W. Crocker; Harm Brouwer
The processing difficulty of each word we encounter in a sentence is affected by both our prior linguistic experience and our general knowledge about the world. Computational models of incremental language processing have, however, been limited in accounting for the influence of world knowledge. We develop an incremental model of language comprehension that constructs—on a word-by-word basis—rich, probabilistic situation model representations. To quantify linguistic processing effort, we adopt Surprisal Theory, which asserts that the processing difficulty incurred by a word is inversely proportional to its expectancy (Hale, 2001; Levy, 2008). In contrast with typical language model implementations of surprisal, the proposed model instantiates a novel comprehension-centric metric of surprisal that reflects the likelihood of the unfolding utterance meaning as established after processing each word. Simulations are presented that demonstrate that linguistic experience and world knowledge are integrated in the model at the level of interpretation and combine in determining online expectations.
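The surprisal metric underlying this work can be made concrete with a small sketch. The probabilities below are hypothetical, and this is the generic Hale/Levy formulation—surprisal as a negative log-ratio of interpretation probabilities before and after a word—rather than the authors' comprehension-centric implementation.

```python
import math

def surprisal(p_after: float, p_before: float) -> float:
    """Surprisal of a word, computed as the negative log-ratio of the
    probability of the unfolding interpretation after vs. before
    processing it (cf. Hale, 2001; Levy, 2008). Probabilities here are
    hypothetical, for illustration only."""
    return -math.log2(p_after / p_before)

p_before = 0.40            # P(interpretation) before the word
p_after_expected = 0.35    # an expected word barely lowers it
p_after_surprising = 0.02  # a surprising word lowers it sharply

print(surprisal(p_after_expected, p_before))    # small processing effort
print(surprisal(p_after_surprising, p_before))  # large processing effort
```

The example shows the intended qualitative behavior: words that leave the interpretation's probability largely intact incur little surprisal, while words that sharply reduce it incur much more.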
Language Resources and Evaluation | 2012
Valerio Basile; Johan Bos; Kilian Evang; Noortje Venhuizen
Proceedings of the 10th International Conference on Computational Semantics (IWCS 2013) -- Short Papers | 2013
Noortje Venhuizen; Valerio Basile; Kilian Evang; Johan Bos
Conference of the European Chapter of the Association for Computational Linguistics | 2012
Valerio Basile; Johan Bos; Kilian Evang; Noortje Venhuizen
Proceedings of the 10th International Conference on Computational Semantics (IWCS 2013) -- Long Papers | 2013
Noortje Venhuizen; Johan Bos; Harm Brouwer
Joint Conference on Lexical and Computational Semantics | 2012
Valerio Basile; Johan Bos; Kilian Evang; Noortje Venhuizen
Semantics and Linguistic Theory | 2014
Noortje Venhuizen; Johan Bos; Petra Hendriks; Harm Brouwer