Publication


Featured research published by Ksander N. de Winkel.


PLOS ONE | 2015

Forced Fusion in Multisensory Heading Estimation

Ksander N. de Winkel; Mikhail Katliar; Heinrich H. Bülthoff

It has been shown that the Central Nervous System (CNS) integrates visual and inertial information in heading estimation for congruent multisensory stimuli and stimuli with small discrepancies. Multisensory information should, however, only be integrated when the cues are redundant. Here, we investigated how the CNS constructs an estimate of heading for combinations of visual and inertial heading stimuli with a wide range of discrepancies. Participants were presented with 2 s visual-only and inertial-only motion stimuli, and combinations thereof. Discrepancies between visual and inertial heading ranging from 0° to 90° were introduced for the combined stimuli. In the unisensory conditions, it was found that visual heading was generally biased towards the fore-aft axis, while inertial heading was biased away from the fore-aft axis. For multisensory stimuli, it was found that five out of nine participants integrated visual and inertial heading information regardless of the size of the discrepancy; for one participant, the data were best described by a model that explicitly performs causal inference. For the remaining three participants, the evidence could not readily distinguish between these models. The finding that multisensory information is integrated is in line with earlier findings, but the finding that even large discrepancies are generally disregarded is surprising. Possibly, people are insensitive to discrepancies in visual-inertial heading angle because such discrepancies are only encountered in artificial environments, making a neural mechanism to account for them otiose. An alternative explanation is that detection of a discrepancy may depend on stimulus duration, with sensitivity to detect discrepancies differing between people.
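The forced-fusion account tested here can be illustrated with a short sketch: under maximum-likelihood integration, the fused heading is a reliability-weighted average of the visual and inertial estimates. The numbers below are illustrative values, not the paper's fitted parameters.

```python
import numpy as np

def forced_fusion(mu_vis, sigma_vis, mu_ine, sigma_ine):
    """Maximum-likelihood (reliability-weighted) fusion of two cues.

    Each cue contributes in proportion to its reliability (inverse
    variance); the fused estimate is more precise than either cue alone.
    """
    w_vis = (1 / sigma_vis**2) / (1 / sigma_vis**2 + 1 / sigma_ine**2)
    mu_fused = w_vis * mu_vis + (1 - w_vis) * mu_ine
    sigma_fused = np.sqrt(1 / (1 / sigma_vis**2 + 1 / sigma_ine**2))
    return mu_fused, sigma_fused

# Two equally reliable cues: the fused heading lands halfway between them.
mu, sigma = forced_fusion(0.0, 5.0, 30.0, 5.0)
```

Note that this scheme fuses the cues no matter how large the discrepancy, which is exactly the behavior most participants in this study exhibited.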


Neuroscience Letters | 2012

The perception of verticality in lunar and Martian gravity conditions

Ksander N. de Winkel; Gilles Clément; Eric L. Groen; Peter J. Werkhoven

Although the mechanisms of neural adaptation to weightlessness and re-adaptation to Earth gravity have received considerable attention since the first human space flight, there is as yet little knowledge about how spatial orientation is affected by partial gravity, such as lunar gravity of 0.16 g or Martian gravity of 0.38 g. To date, twelve astronauts have spent a cumulative total of approximately 80 h on the lunar surface, but no psychophysical experiments were conducted to investigate their perception of verticality. We investigated how the subjective vertical (SV) was affected by reduced gravity levels during the first European Parabolic Flight Campaign of Partial Gravity. In normal gravity and hypergravity, subjects accurately aligned their SV with the gravitational vertical. However, when gravity was below a certain threshold, subjects aligned their SV with their body longitudinal axis. The value of the threshold varied considerably between subjects, ranging from 0.03 to 0.57 g. Despite the small number of subjects, there was a significant positive correlation of the threshold with subject age, which calls for further investigation.


PLOS ONE | 2017

Causal Inference in Multisensory Heading Estimation

Ksander N. de Winkel; Mikhail Katliar; Heinrich H. Bülthoff

A large body of research shows that the Central Nervous System (CNS) integrates multisensory information. However, this strategy should only apply to multisensory signals that have a common cause; independent signals should be segregated. Causal Inference (CI) models account for this notion. Surprisingly, previous findings suggested that visual and inertial cues on heading of self-motion are integrated regardless of discrepancy. We hypothesized that CI does occur, but that characteristics of the motion profiles affect multisensory processing. Participants estimated heading of visual-inertial motion stimuli with several different motion profiles and a range of intersensory discrepancies. The results support the hypothesis that judgments of signal causality are included in the heading estimation process. Moreover, the data suggest a decreasing tolerance for discrepancies and an increasing reliance on visual cues for longer duration motions.
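The causal-inference scheme described above can be sketched as Bayesian model averaging: a fused (common-cause) estimate is weighed against a segregated estimate by the posterior probability that both cues share a cause. The uniform heading prior, the prior probability of a common cause, and all parameter values below are simplifying assumptions for illustration, not the authors' fitted model.

```python
import numpy as np

def ci_heading(x_vis, x_ine, sig_vis, sig_ine, p_common=0.5, heading_range=360.0):
    """Causal-inference (model-averaging) heading estimate, assuming a
    uniform prior over headings. Returns the estimate and the posterior
    probability of a common cause."""
    var_v, var_i = sig_vis**2, sig_ine**2
    # Under one common cause the cue discrepancy is Gaussian with
    # variance var_v + var_i (after integrating out the true heading).
    var_d = var_v + var_i
    like_c1 = (np.exp(-(x_vis - x_ine)**2 / (2 * var_d))
               / np.sqrt(2 * np.pi * var_d)) / heading_range
    # Under independent causes, each cue is drawn uniformly.
    like_c2 = 1.0 / heading_range**2
    post_c1 = p_common * like_c1 / (p_common * like_c1 + (1 - p_common) * like_c2)
    # Common-cause estimate: reliability-weighted fusion of the two cues.
    fused = (x_vis / var_v + x_ine / var_i) / (1 / var_v + 1 / var_i)
    # Model averaging; the segregated estimate is taken to be the visual
    # cue here (an arbitrary choice for this sketch).
    return post_c1 * fused + (1 - post_c1) * x_vis, post_c1
```

Small discrepancies yield a high common-cause posterior and near-complete fusion; large discrepancies drive the posterior toward zero and the estimate toward the segregated cue.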


Neuroscience Letters | 2014

Pre- and post-stimulus EEG patterns associated with the touch-induced illusory flash

Jan B. F. van Erp; Tom G. Philippi; Ksander N. de Winkel; Peter J. Werkhoven

Pairing two brief auditory beeps with a single flash can evoke the percept of a second, illusory, flash. Investigations of the underlying neural mechanisms are limited to post-stimulus effects of this sound-induced illusory flash. We investigated whether touch modulates the visual evoked potential in a similar way, and also looked at pre-stimulus activity. Electroencephalogram (EEG) was recorded over occipital and parieto-occipital areas of 12 observers. We compared bimodal EEG to its unimodal constituents (i.e., the difference waves) and found significant positive deflections around 110 ms and 200 ms and negative deflections around 330 ms and 390 ms from stimulus onset. These results are similar to those reported for the sound-induced illusion, albeit somewhat later. Furthermore, comparison of the EEG activity between trials in which the illusion was perceived and trials in which it was absent revealed that the phase of pre-stimulus alpha was linked to whether the illusion was perceived. We conclude that touch can modulate activity in the visual cortex, that similar neural mechanisms underlie perception of the sound- and touch-induced illusory flash, and that the phase of the alpha wave at the moment of presentation affects perception.


Scientific Reports | 2018

Causal Inference in the Perception of Verticality

Ksander N. de Winkel; Mikhail Katliar; Daniel Diers; Heinrich H. Bülthoff

The perceptual upright is thought to be constructed by the central nervous system (CNS) as a vector sum, combining estimates of the upright provided by the visual system and the body’s inertial sensors with prior knowledge that upright is usually above the head. Recent findings furthermore show that the weighting of the respective sensory signals is proportional to their reliability, consistent with a Bayesian interpretation of a vector sum (Forced Fusion, FF). However, violations of FF have also been reported, suggesting that the CNS may rely on a single sensory system (Cue Capture, CC), or choose to process sensory signals based on inferred signal causality (Causal Inference, CI). We developed a novel alternative-reality system to manipulate visual and physical tilt independently. We tasked participants (n = 36) to indicate the perceived upright for various (in-)congruent combinations of visual-inertial stimuli, and compared models based on their agreement with the data. The results favor the CI model over FF, although this effect became unambiguous only for large discrepancies (±60°). We conclude that the notion of a vector sum does not provide a comprehensive explanation of the perception of the upright, and that CI offers a better alternative.


bioRxiv | 2018

An assessment of Causal Inference in visual-inertial traveled distance estimation

Ksander N. de Winkel; Daniel Diers; Maria Lächele; Heinrich H. Bülthoff

Recent work indicates that the central nervous system assesses the causality of visual and inertial information in the estimation of qualitative characteristics of self-motion and spatial orientation, and forms multisensory perceptions in accordance with the outcome of these assessments. Here, we extend the assessment of this Causal Inference (CI) strategy to the quantitative domain of traveled distance. We present a formal model of how stimuli result in sensory estimates, how percepts are constructed from sensory estimates, and how responses result from percepts. Starting with this formalization, we derived probabilistic formulations of CI and competing models for the perception of traveled distance. In an experiment, participants (n=9) were seated in the Max Planck Cablerobot Simulator and shown a photo-realistic virtual rendering of the simulator hall via a Head-Mounted Display. Using this setup, the participants were presented with various unisensory and (incongruent) multisensory visual-inertial horizontal linear surge motions, differing only in amplitude (i.e., traveled distance). Participants performed both a Magnitude Estimation and a Two-Interval Forced Choice task. Overall, model comparisons favor the CI model, but individual analysis shows that a Cue Capture strategy is preferred in most individual cases. Parameter estimates indicate that visual and inertial sensory estimates follow Stevens’ power law with a positive exponent, and that noise increases with physical distance in accordance with Weber’s law. Responses were found to be biased towards the mean stimulus distance, consistent with an interaction between percepts and prior knowledge in the formulation of responses. Magnitude estimation data further showed a regression-to-the-mean effect. The experimental data did not provide unambiguous support for the CI model; however, model derivations and fit results demonstrate that it can reproduce empirical findings, arguing in favor of the CI model. Moreover, the methods outlined in the present study demonstrate how different sources of distortion in responses may be disentangled by combining psychophysical tasks.
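The sensory-encoding stage described here, a mean that follows Stevens’ power law with noise that grows with distance per Weber’s law, can be sketched in a few lines. The scale, exponent, and Weber fraction below are made-up illustrative values, not the paper's parameter estimates.

```python
import numpy as np

def sensed_distance(d, rng, a=1.0, beta=0.8, weber=0.1):
    """Noisy sensory estimate of a traveled distance d.

    The mean follows Stevens' power law, a * d**beta, and the noise
    standard deviation scales with the mean (Weber's law).
    """
    mean = a * d**beta
    return rng.normal(loc=mean, scale=weber * mean)

rng = np.random.default_rng(0)
estimates = np.array([sensed_distance(4.0, rng) for _ in range(20000)])
```

Simulating many trials at one physical distance recovers both regularities: the average estimate sits at `a * d**beta`, and the coefficient of variation of the estimates stays near the Weber fraction.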


Journal of Vision | 2018

Effects of visual stimulus characteristics and individual differences in heading estimation

Ksander N. de Winkel; Max Kurtz; Heinrich H. Bülthoff

Visual heading estimation is subject to periodic patterns of constant (bias) and variable (noise) error. The nature of the errors, however, appears to differ between studies, showing underestimation in some, but overestimation in others. We investigated whether field of view (FOV), the availability of binocular disparity cues, motion profile, and visual scene layout can account for error characteristics, with a potential mediating effect of vection. Twenty participants (12 females) reported heading and rated vection for visual horizontal motion stimuli with headings spanning the full circle, while we systematically varied the above factors. Overall, the results show constant errors away from the fore-aft axis. Error magnitude was affected by FOV, disparity, and scene layout. Variable errors varied with heading angle, and depended on scene layout. Higher vection ratings were associated with smaller variable errors. Vection ratings depended on FOV, motion profile, and scene layout, with the highest ratings for a large FOV, a cosine-bell velocity profile, and a ground-plane scene rather than a dot-cloud scene. Although the factors did affect error magnitude, differences in its direction were observed only between participants. We show that the observations are consistent with prior beliefs that headings align with the cardinal axes, where the attraction of each axis is an idiosyncratic property.


Experimental Brain Research | 2018

Body-relative horizontal-vertical anisotropy in human representations of traveled distances

Thomas Hinterecker; P Pretto; Ksander N. de Winkel; Hans-Otto Karnath; Heinrich H. Bülthoff; T Meilinger

A growing number of studies have investigated anisotropies in representations of horizontal and vertical spaces. In humans, compelling evidence for such anisotropies exists for representations of multi-floor buildings. In contrast, evidence regarding open spaces is inconclusive. Our study aimed at further enhancing the understanding of horizontal and vertical spatial representations in open spaces using a simple traveled-distance estimation paradigm. Blindfolded participants were moved along various directions in the sagittal plane. Subsequently, participants passively reproduced the traveled distance from memory. Participants performed this task in an upright and in a 30° backward-pitch orientation. The accuracy of distance estimates in the upright orientation showed a horizontal–vertical anisotropy, with higher accuracy along the horizontal axis compared with the vertical axis. The backward-pitch orientation enabled us to investigate whether this anisotropy was body- or earth-centered. The accuracy patterns of the upright condition were positively correlated with the body-relative (not the earth-relative) coordinate mapping of the backward-pitch condition, suggesting a body-centered anisotropy. Overall, this is consistent with findings on motion perception. It suggests that the distance-estimation sub-process of path integration is subject to horizontal–vertical anisotropy. Based on previous studies that showed isotropy in open spaces, we speculate that real physical self-movements or categorical versus isometric encoding are crucial factors for (an)isotropies in spatial representations.


bioRxiv | 2017

What's Up: an assessment of Causal Inference in the Perception of Verticality

Ksander N. de Winkel; Mikhail Katliar; Daniel Diers; Heinrich H. Bülthoff

The perceptual upright is thought to be constructed by the central nervous system (CNS) as a vector sum, combining estimates of the upright provided by the visual system and the body’s inertial sensors with prior knowledge that the upright is usually above the head. Results from a number of recent studies furthermore show that the weighting of the respective sensory signals is proportional to their reliability, consistent with a Bayesian interpretation of the idea of a vector sum (Forced Fusion, FF). However, findings from a study conducted in partial gravity suggest that the CNS may rely on a single sensory system (Cue Capture, CC), or choose to process sensory signals differently based on inferred signal causality (Causal Inference, CI). We developed a novel Alternative-Reality system to manipulate visual and physical tilt independently, and tasked participants (n=28) to indicate the perceived upright for various (in-)congruent combinations of visual-inertial stimuli. Overall, the data appear best explained by the FF model. However, an evaluation of individual data reveals considerable variability, favoring different models in about equal proportions of participants (FF, n=12; CI, n=7; CC, n=9). Given the observed variability, we conclude that the notion of a vector sum does not provide a comprehensive explanation of the perception of the upright.


I-perception | 2011

Integration of Visual and Vestibular Information Used to Discriminate Rotational Self-Motion

F Soyka; Ksander N. de Winkel; Michael Barnett-Cowan; Eric Groen; Heinrich H. Bülthoff

Do humans integrate visual and vestibular information in a statistically optimal fashion when discriminating rotational self-motion stimuli? Recent studies are inconclusive as to whether such integration occurs when discriminating heading direction. In the present study, eight participants were consecutively rotated twice (2 s sinusoidal acceleration) on a chair about an earth-vertical axis in vestibular-only, visual-only, and visual-vestibular trials. The visual stimulus was a video of a moving stripe pattern, synchronized with the inertial motion. Peak acceleration of the reference stimulus was varied, and participants reported which rotation was perceived as faster. Just-noticeable differences (JND) were estimated by fitting psychometric functions. The visual-vestibular JND measurements are too high compared to the predictions based on the unimodal JND estimates, and there is no JND reduction between the visual-vestibular and visual-alone estimates. These findings may be explained by visual capture. Alternatively, the visual precision may not be equal between the visual-vestibular and visual-alone conditions, since it has been shown that visual motion sensitivity is reduced during inertial self-motion. Therefore, measuring visual-alone JNDs with an underlying uncorrelated inertial motion might yield higher visual-alone JNDs compared to the stationary measurement. Theoretical calculations show that higher visual-alone JNDs would result in predictions consistent with the JND measurements for the visual-vestibular condition.
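The optimal-integration prediction against which the bimodal JNDs were compared can be written down directly: if JNDs are proportional to sensory noise standard deviations, inverse variances add under maximum-likelihood integration, a standard result. The sketch below illustrates this; the numeric values are arbitrary.

```python
import numpy as np

def predicted_bimodal_jnd(jnd_vis, jnd_ves):
    """Maximum-likelihood prediction for the visual-vestibular JND.

    Inverse variances add under optimal integration, so the predicted
    bimodal JND is always smaller than either unimodal JND.
    """
    return np.sqrt((jnd_vis**2 * jnd_ves**2) / (jnd_vis**2 + jnd_ves**2))
```

With equal unimodal JNDs the prediction is a reduction by a factor of √2; measured bimodal JNDs exceeding this prediction, as reported here, are the signature of sub-optimal integration or visual capture.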
