Mathieu Koppen
Radboud University Nijmegen
Publications
Featured research published by Mathieu Koppen.
Neural Networks | 2006
Raymond H. Cuijpers; Hein T. van Schie; Mathieu Koppen; Wolfram Erlhagen; Harold Bekkering
Many of our daily activities are supported by behavioural goals that guide the selection of actions, which allow us to reach these goals effectively. Goals are considered to be important for action observation since they allow the observer to copy the goal of the action without the need to use the exact same means. The importance of being able to use different action means becomes evident when the observer and observed actor have different bodies (robots and humans) or bodily measurements (parents and children), or when the environments of actor and observer differ substantially (when an obstacle is present or absent in either environment). A selective focus on the action goals instead of the action means furthermore circumvents the need to consider the vantage point of the actor, which is consistent with recent findings that people prefer to represent the actions of others from their own individual perspective. In this paper, we use a computational approach to investigate how knowledge about action goals and means is used in action observation. We hypothesise that in action observation human agents are primarily interested in identifying the goals of the observed actor's behaviour. Behavioural cues (e.g. the way an object is grasped) may help to disambiguate the goal of the actor (e.g. whether a cup is grasped for drinking or for handing it over). Recent advances in cognitive neuroscience are cited in support of the model's architecture.
Cognitive Science | 2003
Stefan L. Frank; Mathieu Koppen; Leo G. M. Noordman; Wietske Vonk
A computational model of inference during story comprehension is presented, in which story situations are represented distributively as points in a high-dimensional "situation-state space." This state space organizes itself on the basis of a constructed microworld description. From the same description, causal/temporal world knowledge is extracted. The distributed representation of story situations is more flexible than Golden and Rumelhart's [Discourse Proc 16 (1993) 203] localist representation. A story taking place in the microworld corresponds to a trajectory through situation-state space. During the inference process, world knowledge is applied to the story trajectory. This results in an adjusted trajectory, reflecting the inference of propositions that are likely to be the case. Although inferences do not result from a search for coherence, they do cause story coherence to increase. The results of simulations correspond to empirical data concerning inference, reading time, and depth of processing. An extension of the model for simulating story retention shows how coherence is preserved during retention without controlling the retention process. Simulation results correspond to empirical data concerning story recall and intrusion.
Mathematical Social Sciences | 1998
Mathieu Koppen
In the context of knowledge structures, alternative representations have been obtained for the class of knowledge spaces, such as surmise mappings and entail relations. In this paper, various additional conditions on surmise mappings are introduced and their consequences for the corresponding spaces are investigated. In particular, the condition characterizing well-graded knowledge spaces is identified. These results are related to the mathematical theory of convex geometries. In addition, a direct 1–1 correspondence between surmise mappings and entail relations is described, and finally an overview of the different representations, together with the corresponding special cases, is presented.
Discourse Processes | 2008
Stefan L. Frank; Mathieu Koppen; Leo G. M. Noordman; Wietske Vonk
Because higher level cognitive processes generally involve the use of world knowledge, computational models of these processes require the implementation of a knowledge base. This article identifies and discusses 4 strategies for dealing with world knowledge in computational models: disregarding world knowledge, ad hoc selection, extraction from text corpora, and implementation of all knowledge about a simplified microworld. Each of these strategies is illustrated by a detailed discussion of a model of discourse comprehension. It is argued that seemingly successful modeling results are uninformative if knowledge is implemented ad hoc or not at all, that knowledge extracted from large text corpora is not appropriate for discourse comprehension, and that a suitable implementation can be obtained by applying the microworld strategy.
Journal of Vision | 2012
I.A.H. Clemens; Luc P. J. Selen; Mathieu Koppen; W.P. Medendorp
In order to maintain visual stability during self-motion, the brain needs to update any egocentric spatial representations of the environment. Here, we use a novel psychophysical approach to investigate how and to what extent the brain integrates visual, extraocular, and vestibular signals pertaining to this spatial update. Participants were oscillated sideways at a frequency of 0.63 Hz while keeping gaze fixed on a stationary light. When the motion direction changed, a reference target was shown either in front of or behind the fixation point. At the next reversal, half a cycle later, we tested updating of this reference location by asking participants to judge whether a briefly flashed probe was shown to the left or right of the memorized target. We show that updating is not only biased, but that the direction and magnitude of this bias depend on both gaze and object location, implying that a gaze-centered reference frame is involved. Using geometric modeling, we further show that the gaze-dependent errors can be caused by an underestimation of translation amplitude, by a bias of visually perceived objects towards the fovea (i.e., a foveal bias), or by a combination of both.
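The two error sources proposed above can be captured in a toy small-angle geometric model: a translation that is partly underestimated, plus a compression of remembered eccentricity toward the fovea. The function names, parameter values, and exact form below are illustrative assumptions, not the model fitted in the study:

```python
def gaze_centered_shift(translation, target_depth, fixation_depth):
    """Small-angle shift (rad) of a target in gaze-centered coordinates
    when the observer translates laterally while fixating a point.
    Targets in front of vs. behind fixation shift in opposite directions."""
    return translation * (1.0 / fixation_depth - 1.0 / target_depth)

def predicted_update_error(translation, target_depth, fixation_depth,
                           translation_gain=0.8, foveal_gain=0.9):
    """Error in the updated target direction if the brain underestimates
    its own translation (translation_gain < 1) and compresses remembered
    eccentricity toward the fovea (foveal_gain < 1). Gains of 0.8 and 0.9
    are illustrative, not values reported in the paper."""
    true_shift = gaze_centered_shift(translation, target_depth, fixation_depth)
    updated_shift = foveal_gain * translation_gain * true_shift
    return updated_shift - true_shift

# A target in front of fixation (0.5 m vs. 1.0 m) and one behind it
# (2.0 m) yield errors of opposite sign for the same 10 cm translation.
err_front = predicted_update_error(0.1, 0.5, 1.0)
err_behind = predicted_update_error(0.1, 2.0, 1.0)
```

In this sketch both biases vanish when the gains equal 1, and the sign of the error flips with the target's depth relative to fixation, qualitatively matching the gaze- and object-location dependence reported in the abstract.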
PLOS Computational Biology | 2016
Jeroen Atsma; Femke Maij; Mathieu Koppen; David E. Irwin; W. Pieter Medendorp
Our ability to interact with the environment hinges on creating a stable visual world despite the continuous changes in retinal input. To achieve visual stability, the brain must distinguish the retinal image shifts caused by eye movements from shifts due to movements of the visual scene. This process appears not to be flawless: during saccades, we often fail to detect whether visual objects remain stable or move, which is called saccadic suppression of displacement (SSD). How does the brain evaluate the memorized information of the presaccadic scene and the actual visual feedback of the postsaccadic visual scene in the computations for visual stability? Using an SSD task, we test how participants localize the presaccadic position of the fixation target, the saccade target, or a peripheral non-foveated target that was displaced parallel or orthogonal to the saccade direction during a horizontal saccade and was subsequently viewed for three different durations. Results showed different localization errors for the three targets, depending on the viewing time of the postsaccadic stimulus and its spatial separation from the presaccadic location. We modeled the data through a Bayesian causal inference mechanism, in which at the trial level an optimal mixing of two possible strategies, integration vs. separation of the presaccadic memory and the postsaccadic sensory signals, is applied. Fits of this model generally outperformed other plausible decision strategies for producing SSD. Our findings suggest that humans exploit a Bayesian inference process with two causal structures to mediate visual stability.
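The causal inference mechanism described above can be sketched in a simplified one-dimensional form: compute the posterior probability that the pre- and postsaccadic signals share one cause (a stable object), then mix the integration and segregation estimates by that posterior. All parameter values and function names below are illustrative assumptions, not the fitted model from the paper:

```python
import math

def gauss(x, mu, sigma):
    """Gaussian probability density."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def localize(pre, post, sigma_pre, sigma_post, p_same=0.7, sigma_jump=3.0):
    """Model-averaged estimate of the presaccadic target position.
    Under C=1 (object stable) pre- and postsaccadic signals are integrated,
    weighted by reliability; under C=2 (object displaced) the presaccadic
    memory is used alone. p_same and sigma_jump are illustrative priors."""
    # Likelihood of the observed separation under each causal structure.
    like_same = gauss(post - pre, 0.0, math.sqrt(sigma_pre**2 + sigma_post**2))
    like_diff = gauss(post - pre, 0.0,
                      math.sqrt(sigma_pre**2 + sigma_post**2 + sigma_jump**2))
    p_c1 = p_same * like_same / (p_same * like_same + (1 - p_same) * like_diff)
    # Integration (reliability-weighted) vs. segregation estimates.
    w_pre = sigma_post**2 / (sigma_pre**2 + sigma_post**2)
    integrated = w_pre * pre + (1 - w_pre) * post
    segregated = pre
    return p_c1 * integrated + (1 - p_c1) * segregated
```

With a small displacement the estimate is drawn toward the postsaccadic signal (the displacement goes unnoticed, as in SSD); with a large displacement the posterior favors separate causes and the estimate stays near the presaccadic memory.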
PLOS ONE | 2015
Arjan C. ter Horst; Mathieu Koppen; Luc P. J. Selen; W. Pieter Medendorp
When navigating through the environment, our brain needs to infer how far we move and in which direction we are heading. In this estimation process, the brain may rely on multiple sensory modalities, including the visual and vestibular systems. Previous research has mainly focused on heading estimation, showing that sensory cues are combined by weighting them in proportion to their reliability, consistent with statistically optimal integration. But while heading estimation could improve with the ongoing motion, due to the constant flow of information, the estimate of how far we move requires the integration of sensory information across the whole displacement. In this study, we investigate whether the brain optimally combines visual and vestibular information during a displacement estimation task, even if their reliability varies from trial to trial. Participants were seated on a linear sled, immersed in a stereoscopic virtual reality environment. They were subjected to a passive linear motion involving visual and vestibular cues with different levels of visual coherence to change relative cue reliability and with cue discrepancies to test relative cue weighting. Participants performed a two-interval two-alternative forced-choice task, indicating which of two sequentially perceived displacements was larger. Our results show that humans adapt their weighting of visual and vestibular information from trial to trial in proportion to their reliability. These results provide evidence that humans optimally integrate visual and vestibular information in order to estimate their body displacement.
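The reliability-weighted combination the abstract refers to has a standard closed form: each cue is weighted by its inverse variance, and the combined estimate is more precise than either cue alone. A minimal sketch, with function name and example numbers chosen for illustration rather than taken from the study:

```python
import math

def integrate_cues(mu_vis, sigma_vis, mu_vest, sigma_vest):
    """Reliability-weighted (maximum-likelihood) combination of two
    Gaussian cue estimates. Weights are normalized inverse variances."""
    r_vis, r_vest = 1 / sigma_vis**2, 1 / sigma_vest**2
    w_vis = r_vis / (r_vis + r_vest)
    mu = w_vis * mu_vis + (1 - w_vis) * mu_vest
    # The combined variance is smaller than either single-cue variance.
    sigma = math.sqrt(1 / (r_vis + r_vest))
    return mu, sigma

# Example: a reliable visual cue (sigma = 2 cm) dominates a noisier
# vestibular cue (sigma = 4 cm) when estimating a ~30 cm displacement.
mu, sigma = integrate_cues(29.0, 2.0, 32.0, 4.0)
```

Lowering the visual coherence, as in the experiment, amounts to raising the visual sigma, which shifts weight toward the vestibular cue on that trial.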
Memory & Cognition | 2007
Stefan L. Frank; Mathieu Koppen; Leo G. M. Noordman; Wietske Vonk
We present a computational model that provides a unified account of inference, coherence, and disambiguation. It simulates how the build-up of coherence in text leads to the knowledge-based resolution of referential ambiguity. Possible interpretations of an ambiguity are represented by centers of gravity in a high-dimensional space. The unresolved ambiguity forms a vector in the same space. This vector is attracted by the centers of gravity, while also being affected by context information and world knowledge. When the vector reaches one of the centers of gravity, the ambiguity is resolved to the corresponding interpretation. The model accounts for reading time and error rate data from experiments on ambiguous pronoun resolution and explains the effects of context informativeness, anaphor type, and processing depth. It shows how implicit causality can have an early effect during reading. A novel prediction is that ambiguities can remain unresolved if there is insufficient disambiguating information.
Archive | 1994
Mathieu Koppen
In the theory of knowledge spaces the actual construction of a space giving a reasonably valid description of some specific domain of knowledge is a critical problem. A method is presented in which the information needed is obtained from experts in the field who are confronted with a carefully chosen sequence of questions about specific relationships between the problems in the domain. The discussion covers the type of questions which have to be asked, how the responses to those questions permit the construction of the corresponding knowledge space, and how inferences from responses previously obtained can be exploited to make the procedure practicable for a substantial number of items.
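A core ingredient of such querying procedures is that responses already implied by earlier responses need not be asked. For the special case of single-item entailments this reduces to taking a transitive closure, which the toy sketch below illustrates; it is an assumption-laden simplification, not the actual procedure, which handles entailments from sets of items:

```python
def close_entailment(entails):
    """Transitive closure of an entail relation between items: if a
    correct answer on p entails a correct answer on q, and q entails r,
    then p entails r. Any question whose answer is already implied by
    the closure can be skipped, which is (in spirit) how the expert
    querying procedure keeps the number of questions manageable."""
    closed = {p: set(qs) for p, qs in entails.items()}
    changed = True
    while changed:
        changed = False
        for p in closed:
            derived = set()
            for q in closed[p]:
                derived |= closed.get(q, set())
            if not derived <= closed[p]:
                closed[p] |= derived
                changed = True
    return closed

# Hypothetical expert responses: failing 'a' implies failing 'b',
# and failing 'b' implies failing 'c'.
closed = close_entailment({'a': {'b'}, 'b': {'c'}, 'c': set()})
```

Here the question "does a entail c?" never needs to be put to the expert, since the answer follows from the two responses already given.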
Journal of Neurophysiology | 2016
L. Rincon-Gonzalez; Luc P. J. Selen; K. Halfwerk; Mathieu Koppen; Brian D. Corneil; W.P. Medendorp
The natural world continuously presents us with many opportunities for action, and thus a process of target selection must precede action execution. While there has been considerable progress in understanding target selection in stationary environments, little is known about target selection when we are in motion. Here we investigated the effect of self-motion signals on saccadic target selection in a dynamic environment. Human subjects were sinusoidally translated (f = 0.6 Hz, 30-cm peak-to-peak displacement) along an interaural axis with a vestibular sled. During the motion two visual targets were presented asynchronously but equidistantly on either side of fixation. Subjects had to look at one of these targets as quickly as possible. With an adaptive approach, the time delay between these targets was adjusted until the subject selected both targets equally often. We determined this balanced time delay for different phases of the motion in order to distinguish the effects of body acceleration and velocity on saccadic target selection. Results show that acceleration (or position, as these are indistinguishable during sinusoidal motion), but not velocity, affects target selection for saccades. Subjects preferred to look at targets in the direction of the acceleration: the leftward target was preferred when the sled accelerated to the left, and vice versa. Saccadic reaction times mimicked this selection bias by being reliably shorter to targets in the direction of acceleration. Our results provide evidence that saccade target selection mechanisms are modulated by self-motion signals, which could be derived directly from the otolith system.
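The adaptive adjustment of the time delay can be illustrated with a toy one-up/one-down staircase; the subject model, step size, and units (ms) below are assumptions for illustration, not the study's actual procedure:

```python
import math

def run_staircase(balance_point, n_trials=200, start=0.0, step=8.0):
    """One-up/one-down staircase on the lead (ms) given to one of the
    two targets. When the toy subject selects the earlier target, that
    target's lead is reduced by one step; otherwise it is increased.
    The delay thereby homes in on the delay at which both targets would
    be selected equally often (the subject's balance point)."""
    delay = start
    for _ in range(n_trials):
        # Noiseless toy subject: the tendency to pick the first target
        # grows with its lead relative to the balance point; here we
        # take the deterministic choice (tendency above one half).
        p_first = 1.0 / (1.0 + math.exp(-(delay - balance_point) / 20.0))
        chose_first = p_first > 0.5
        delay += -step if chose_first else step
    return delay

# With a hypothetical balance point of 15 ms, the staircase settles
# within one step of that value.
estimate = run_staircase(15.0)
```

In the experiment this balanced delay was estimated separately at different phases of the sinusoidal motion, which is what lets acceleration and velocity effects be teased apart.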