Publication


Featured research published by Kn de Winkel.


Journal of Vision | 2010

Integration of visual and inertial cues in perceived heading of self-motion

Kn de Winkel; H.M. Weesie; Peter J. Werkhoven; Eric L. Groen

In the present study, we investigated whether the perception of heading of linear self-motion can be explained by Maximum Likelihood Integration (MLI) of visual and non-visual sensory cues. MLI predicts smaller variance for multisensory judgments compared to unisensory judgments. Nine participants were exposed to visual, inertial, or visual-inertial motion conditions in a moving-base simulator capable of accelerating along a horizontal linear track with variable heading. Visual random-dot motion stimuli were projected on a display with a 40° horizontal × 32° vertical field of view (FoV). All motion profiles consisted of a raised cosine bell in velocity. Stimulus heading was varied between 0 and 20°. After each stimulus, participants indicated whether perceived self-motion was straight ahead or not. We fitted cumulative normal distribution functions to the data as a psychometric model and compared this model to a nested model in which the slope of the multisensory condition was subject to the MLI hypothesis. Based on likelihood ratio tests, the MLI model had to be rejected. It seems that the imprecise inertial estimate was weighted relatively more than the precise visual estimate, compared to the MLI predictions. Possibly, this can be attributed to the low realism of the visual stimulus. The present results concur with other findings of overweighting of inertial cues in synthetic environments.
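
The MLI hypothesis tested here makes a concrete quantitative prediction: each cue is weighted by its inverse variance, and the fused estimate has lower variance than either cue alone. A minimal sketch of that computation (the means and variances below are illustrative values, not data from the study):

```python
def mli_combine(mu_v, var_v, mu_i, var_i):
    """Maximum-likelihood integration of two Gaussian cue estimates.

    Each cue is weighted by its inverse variance; the fused variance
    is smaller than either unisensory variance.
    """
    w_v = var_i / (var_v + var_i)  # weight on the visual cue
    w_i = var_v / (var_v + var_i)  # weight on the inertial cue
    mu = w_v * mu_v + w_i * mu_i
    var = (var_v * var_i) / (var_v + var_i)
    return mu, var

# Illustrative numbers only: a precise visual heading estimate (5 deg)
# and a noisier inertial one (8 deg).
mu, var = mli_combine(mu_v=5.0, var_v=4.0, mu_i=8.0, var_i=16.0)
print(mu, var)  # 5.6 3.2 -- fused estimate leans toward the visual cue
```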


Experimental Brain Research | 2013

Integration of visual and inertial cues in the perception of angular self-motion

Kn de Winkel; F Soyka; Michael Barnett-Cowan; Hh Bülthoff; Eric L. Groen; Peter J. Werkhoven

The brain is able to determine angular self-motion from visual, vestibular, and kinesthetic information. There is compelling evidence that both humans and non-human primates integrate visual and inertial (i.e., vestibular and kinesthetic) information in a statistically optimal fashion when discriminating heading direction. In the present study, we investigated whether the brain also integrates information about angular self-motion in a similar manner. Eight participants performed a 2IFC task in which they discriminated yaw rotations (2-s sinusoidal acceleration) on peak velocity. Just-noticeable differences (JNDs) were determined as a measure of precision in unimodal inertial-only and visual-only trials, as well as in bimodal visual-inertial trials. The visual stimulus was a moving stripe pattern, synchronized with the inertial motion. Peak velocity of the comparison stimuli was varied relative to the standard stimulus. Individual analyses showed that the data of three participants exhibited an increase in bimodal precision consistent with the optimal integration model, while data from the other participants did not conform to maximum-likelihood integration schemes. We suggest that either the sensory cues were not perceived as congruent, that integration might be achieved with fixed weights, or that estimates of visual precision obtained from non-moving observers do not accurately reflect visual precision during self-motion.
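
Because JNDs are proportional to the standard deviation of the underlying estimate, the optimal integration model yields a direct prediction for the bimodal JND from the two unimodal JNDs. A sketch with hypothetical JND values (not measurements from the study):

```python
import math

def predicted_bimodal_jnd(jnd_visual, jnd_inertial):
    """Optimal-integration prediction for the bimodal JND.

    JNDs scale with the standard deviation of the underlying estimate,
    so the unimodal JNDs combine like Gaussian standard deviations.
    """
    return (jnd_visual * jnd_inertial) / math.sqrt(jnd_visual**2 + jnd_inertial**2)

# Hypothetical unimodal JNDs (deg/s peak velocity), for illustration only.
print(predicted_bimodal_jnd(2.0, 3.0))  # ~1.66, below both unimodal JNDs
```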


Experimental Brain Research | 2013

The time constant of the somatogravic illusion

B.J. Correia Grácio; Kn de Winkel; Eric L. Groen; M. Wentink; Jelte E. Bos

Without visual feedback, humans perceive tilt when experiencing a sustained linear acceleration. This tilt illusion is commonly referred to as the somatogravic illusion. Although the physiological basis of the illusion seems to be well understood, its dynamic behavior is still subject to discussion. In this study, the dynamic behavior of the illusion was measured experimentally for three motion profiles with different frequency content. Subjects were exposed to pure centripetal accelerations in the lateral direction and were asked to indicate their tilt percept by means of a joystick. Variable-radius centrifugation during constant angular rotation was used to generate these motion profiles. Two self-motion perception models were fitted to the experimental data and used to obtain the time constant of the somatogravic illusion. Results showed that the time constant of the somatogravic illusion was on the order of two seconds, in contrast to the higher time constants found in fixed-radius centrifugation studies. Furthermore, the time constant was significantly affected by the frequency content of the motion profiles: motion profiles with higher frequency content revealed shorter time constants, which cannot be explained by self-motion perception models that assume a fixed time constant. These models therefore need to be extended with a mechanism that accounts for a variable time constant. Apart from their fundamental importance, these results also have practical consequences for the simulation of sustained accelerations in motion simulators.
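
The simplest reading of a "time constant" here is a first-order low-pass response of perceived tilt toward the tilt angle implied by the gravito-inertial vector, atan(a/g). The sketch below simulates that behavior; the fitted models in the paper are more elaborate, and the step acceleration profile and tau = 2 s are chosen purely for illustration:

```python
import numpy as np

G = 9.81  # gravitational acceleration, m/s^2

def somatogravic_tilt(a_lat, dt, tau):
    """First-order low-pass model of the somatogravic illusion.

    The steady-state percept is the tilt angle implied by the
    gravito-inertial vector, atan(a/g); tau sets how quickly the
    percept builds up toward it.
    """
    target = np.degrees(np.arctan2(a_lat, G))  # steady-state tilt (deg)
    tilt = np.zeros_like(target)
    for k in range(1, len(target)):
        tilt[k] = tilt[k - 1] + (dt / tau) * (target[k] - tilt[k - 1])
    return tilt

# Illustrative step in lateral centripetal acceleration (2 m/s^2) with
# tau = 2 s, the order of magnitude reported above for variable-radius
# centrifugation.
dt = 0.01
t = np.arange(0.0, 10.0, dt)
a = np.where(t > 1.0, 2.0, 0.0)
tilt = somatogravic_tilt(a, dt, tau=2.0)
print(round(float(tilt[-1]), 1))  # approaches atan(2/9.81) ~ 11.5 deg
```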


AIAA Modeling and Simulation Technologies Conference, Toronto, Canada, 2-5 August 2010; AIAA 2011-7916 | 2010

Visual-inertial coherence zone in the perception of heading

Kn de Winkel; B.J. Correia Grácio; Eric L. Groen; Peter J. Werkhoven

Knowledge of human motion perception can be applied in the optimization of motion cueing algorithms. It has previously been shown that some discrepancies between the amplitude or phase of a visual and an inertial cue go unnoticed; these acceptable discrepancies are referred to as coherence zones. In the present experiment we investigated whether a coherence zone also applies to the direction of visual and inertial motion cues. More specifically, we investigated how much the heading of an inertial stimulus may deviate from a visual stimulus suggesting straight-ahead motion before the straight-ahead percept falls apart. Subjects were presented with congruent visual-inertial linear horizontal motion stimuli of varying heading, and with incongruent stimuli in which a visual cue suggesting straight-ahead motion was coupled with an inertial cue of varying heading. Subjects judged I) whether or not they moved straight ahead, and II) whether or not the visual and inertial stimuli were congruent. We fitted psychometric curves to the combined judgments and calculated detection thresholds for a violation of either criterion. The results show that the 50% detection thresholds are larger in the incongruent than in the congruent condition. We interpret the threshold for the incongruent condition as the size of the coherence zone. In conclusion, we provide evidence of a coherence zone for heading, as well as a measure of its size.
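
The threshold estimation described above follows the usual recipe: fit a cumulative normal to the proportion of trials on which a violation was reported, and read off the 50% point. A minimal single-criterion sketch (the study fitted curves to combined judgments over two criteria; the data points here are illustrative, not the study's):

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

def psychometric(x, mu, sigma):
    """Cumulative normal psychometric function."""
    return norm.cdf(x, loc=mu, scale=sigma)

# Illustrative data (not the study's): inertial heading offsets (deg)
# and the proportion of trials on which the discrepancy was detected.
heading = np.array([0.0, 5.0, 10.0, 15.0, 20.0, 25.0])
p_detect = np.array([0.02, 0.10, 0.35, 0.70, 0.90, 0.98])

(mu, sigma), _ = curve_fit(psychometric, heading, p_detect, p0=[12.0, 5.0])
print(f"50% detection threshold ~ {mu:.1f} deg")  # mu is the 50% point
```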


PLOS ONE | 2017

Accumulation of Inertial Sensory Information in the Perception of Whole Body Yaw Rotation.

Alessandro Nesti; Kn de Winkel; Hh Bülthoff

While moving through the environment, our central nervous system accumulates sensory information over time to provide an estimate of our self-motion, allowing us to complete crucial tasks such as maintaining balance. However, little is known about how the duration of a motion stimulus influences performance in a self-motion discrimination task. Here we study the human ability to discriminate the intensities of sinusoidal (0.5 Hz) self-rotations around the vertical axis (yaw) for four different stimulus durations (1, 2, 3 and 5 s) in darkness. In a typical trial, participants experienced two consecutive rotations of equal duration and different peak amplitude, and reported the one perceived as stronger. For each stimulus duration, we determined the smallest detectable change in stimulus intensity (differential threshold) for a reference velocity of 15 deg/s. Results indicate that differential thresholds decrease with stimulus duration and asymptotically converge to a constant, positive value. This suggests that the central nervous system accumulates sensory information on self-motion over time, resulting in improved discrimination performance. The observed trends in differential thresholds are consistent with predictions of a drift diffusion model with leaky integration of sensory evidence.
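
A leaky integrator reproduces this pattern qualitatively: the signal-to-noise ratio of the accumulated evidence grows with stimulus duration but saturates, so differential thresholds fall and then plateau. A toy simulation, with all parameters chosen for illustration rather than fitted to the paper's data:

```python
import numpy as np

rng = np.random.default_rng(0)

def leaky_accumulation(drift, duration, leak, dt=0.01, noise=1.0):
    """Leaky integration of noisy momentary evidence.

    Returns the accumulator's final value after `duration` seconds;
    with leak > 0 both the mean and the variance saturate over time.
    """
    x = 0.0
    for _ in range(int(duration / dt)):
        x += (drift - leak * x) * dt + noise * np.sqrt(dt) * rng.standard_normal()
    return x

# Signal-to-noise ratio of the accumulated evidence for the four
# stimulus durations used above: it rises with duration and then
# plateaus, mirroring the observed differential thresholds.
for dur in (1.0, 2.0, 3.0, 5.0):
    samples = [leaky_accumulation(drift=1.0, duration=dur, leak=1.0)
               for _ in range(500)]
    print(dur, round(float(np.mean(samples) / np.std(samples)), 2))
```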


Neuroscience Letters | 2017

Neural correlates of decision making on whole body yaw rotation: an fNIRS study

Kn de Winkel; Alessandro Nesti; Hasan Ayaz; Hh Bülthoff

Prominent accounts of decision making state that decisions are made on the basis of an accumulation of sensory evidence, orchestrated by networks of prefrontal and parietal neural populations. Here we assess whether these findings generalize to decisions on self-motion. Participants were presented with whole body yaw rotations of different durations in a 2-Interval-Forced-Choice paradigm, and tasked to discriminate motions on the basis of their amplitude. The cortical hemodynamic response was recorded using functional near-infrared spectroscopy (fNIRS) while participants were performing the task. The imaging data were used to predict the specific response on individual experimental trials, and to predict whether the comparison stimulus would be judged larger than the reference. Classifier performance on the former variable was negligible. However, considerable performance was achieved for the latter variable, specifically using parietal imaging data. The findings provide support for the notion that activity in the parietal cortex reflects modality-independent decision variables that represent the strength of the neural evidence in favor of a decision. The results are encouraging for the use of fNIRS as a method to perform neuroimaging in moving individuals.
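
Trial-wise decoding of this kind is typically a cross-validated linear classifier over channel-wise hemodynamic features. A minimal sketch, assuming features have already been extracted from the parietal fNIRS channels; the feature matrix, labels, and classifier choice here are placeholders, not the study's actual pipeline:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Placeholder inputs: X would hold trial-wise hemodynamic features from
# the parietal fNIRS channels (n_trials x n_features); y codes whether
# the comparison stimulus was judged larger than the reference (0/1).
rng = np.random.default_rng(0)
X = rng.standard_normal((120, 16))   # hypothetical feature matrix
y = rng.integers(0, 2, size=120)     # hypothetical labels

clf = LogisticRegression(max_iter=1000)
scores = cross_val_score(clf, X, y, cv=5)  # 5-fold cross-validation
print(f"decoding accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")
```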


34th European Conference on Visual Perception | 2011

Multisensory integration in the perception of self-motion about an Earth-vertical yaw axis

Kn de Winkel; F Soyka; Michael Barnett-Cowan; Eric L. Groen; Hh Bülthoff



DSC 2015 Europe: Driving Simulation Conference & Exhibition | 2015

Perception-based motion cueing: validation in driving simulation

Joost Venrooij; P Pretto; Mikhail Katliar; Sae Nooij; Alessandro Nesti; Maria Lächele; Kn de Winkel; Diane Cleij; Hh Bülthoff


DSC 2015 Europe: Driving Simulation Conference & Exhibition | 2015

Impact of MPC Prediction Horizon on Motion Cueing Fidelity

Mikhail Katliar; Kn de Winkel; Joost Venrooij; P Pretto; Hh Bülthoff


19th International Multisensory Research Forum (IMRF 2018) | 2018

Visual-Inertial interactions in the perception of translational motion

Kn de Winkel; Hh Bülthoff

Collaboration


Dive into Kn de Winkel's collaborations.

Top Co-Authors

B.J. Correia Grácio

Delft University of Technology
