Peter J. Werkhoven
Utrecht University
Publications
Featured research published by Peter J. Werkhoven.
Vision Research | 1992
Peter J. Werkhoven; Herman P. Snippe; Alexander Toet
We present data on human sensitivity to optic acceleration, i.e., temporal modulations of the speed and direction of moving objects. Modulation thresholds are measured as a function of modulation frequency and speed for different periodic velocity vector modulation functions using a localized target. Evidence is presented that human detection of velocity vector modulations is not directly based on the acceleration signal (the temporal derivative of the velocity vector modulation). Instead, modulation detection is accurately described by a two-stage model: a low-pass temporal filter transformation of the true velocity vector modulation followed by a variance detection stage. A functional description of the first stage is a second-order low-pass temporal filter with a characteristic time constant of 40 msec. In effect, the temporal low-pass filter integrates the velocity vector modulation within a temporal window of 100-140 msec. We discuss a non-trivial link between this low-pass filter stage and the temporal characteristics of standard motion detection mechanisms. Velocity vector modulations are detected in the second stage whenever the variance of the filtered velocity vector exceeds a certain threshold variance in either the speed or direction dimension. The threshold standard deviations for this variance detection stage are estimated to be 17% for speed modulations and 9% for motion direction modulations.
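The two-stage model above can be sketched computationally. This is a minimal illustration, not the authors' implementation: the discrete cascade of two first-order filters (time constant 40 ms) as the second-order low-pass stage, and the use of relative standard deviation for the speed dimension, are assumptions made here for concreteness.

```python
import numpy as np

def detect_modulation(velocity, dt=0.001, tau=0.040,
                      speed_frac=0.17, dir_frac=0.09):
    """Two-stage detection sketch: a second-order low-pass filter of the
    velocity vector, followed by a variance-threshold decision.
    `velocity` is an (N, 2) array of [vx, vy] samples at step `dt` seconds."""
    alpha = dt / (tau + dt)                  # first-order smoothing coefficient
    filtered = np.empty_like(velocity, dtype=float)
    s1 = s2 = velocity[0].astype(float)
    for i, v in enumerate(velocity):
        s1 = s1 + alpha * (v - s1)           # first low-pass stage
        s2 = s2 + alpha * (s1 - s2)          # second stage -> second-order filter
        filtered[i] = s2
    speed = np.linalg.norm(filtered, axis=1)
    direction = np.unwrap(np.arctan2(filtered[:, 1], filtered[:, 0]))
    # Stage two: modulation is "seen" if the variability of the filtered
    # speed (relative to mean speed) or direction exceeds its threshold.
    speed_seen = np.std(speed) / np.mean(speed) > speed_frac
    dir_seen = np.std(direction) > dir_frac  # radians; unit choice is assumed
    return bool(speed_seen or dir_seen)
```

A constant velocity passes through the filter unchanged and is not detected, while a slow, large speed modulation survives the low-pass stage and exceeds the 17% threshold.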
Journal of Vision | 2010
K. N. de Winkel; H.M. Weesie; Peter J. Werkhoven; Eric L. Groen
In the present study, we investigated whether the perception of heading of linear self-motion can be explained by Maximum Likelihood Integration (MLI) of visual and non-visual sensory cues. MLI predicts smaller variance for multisensory judgments compared to unisensory judgments. Nine participants were exposed to visual, inertial, or visual-inertial motion conditions in a moving base simulator, capable of accelerating along a horizontal linear track with variable heading. Visual random-dot motion stimuli were projected on a display with a 40° horizontal × 32° vertical field of view (FoV). All motion profiles consisted of a raised cosine bell in velocity. Stimulus heading was varied between 0 and 20°. After each stimulus, participants indicated whether perceived self-motion was straight-ahead or not. We fitted cumulative normal distribution functions to the data as a psychometric model and compared this model to a nested model in which the slope of the multisensory condition was subject to the MLI hypothesis. Based on likelihood ratio tests, the MLI model had to be rejected. It seems that the imprecise inertial estimate was weighted more heavily, relative to the precise visual estimate, than MLI predicts. Possibly, this can be attributed to low realism of the visual stimulus. The present results concur with other findings of overweighting of inertial cues in synthetic environments.
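The MLI hypothesis tested above has a simple closed form: if the unisensory heading estimates have standard deviations σ_v (visual) and σ_i (inertial), the optimal combined estimate weights each cue by its reliability and always has smaller variance than either cue alone. A minimal sketch, with hypothetical precision values:

```python
import math

def mli_prediction(sigma_visual, sigma_inertial):
    """MLI-predicted standard deviation of the combined visual-inertial
    estimate, and the weight assigned to the visual cue."""
    var_v, var_i = sigma_visual ** 2, sigma_inertial ** 2
    w_visual = var_i / (var_v + var_i)   # the more reliable cue gets more weight
    sigma_combined = math.sqrt(var_v * var_i / (var_v + var_i))
    return sigma_combined, w_visual

# hypothetical unisensory precisions (degrees of heading)
sigma_vi, w_v = mli_prediction(2.0, 4.0)
```

The rejection of the MLI model in this study amounts to the fitted visual weight falling below this prediction: the inertial cue received more weight than its (imprecise) reliability warrants.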
Small Group Research | 2009
Rick van der Kleij; Johannes Martinus Cornelis Schraagen; Peter J. Werkhoven; Carsten K. W. De Dreu
An experiment was conducted to examine how communication patterns and task performance differ as a function of the group's communication environment and how these processes change over time. In a longitudinal design, three-person groups had to select, and argue for, the correct answer from a set of three alternatives for each of ten questions. Compared with face-to-face groups, video-teleconferencing groups took fewer turns, required more time for turns, and interrupted each other less. Listeners appeared to be more polite, waiting for a speaker to finish before making their conversational contribution. Although groups were able to maintain comparable performance scores across communication conditions, initial differences between conditions in communication patterns disappeared over time, indicating that the video-teleconferencing groups adapted to the newness and limitations of their communication environment. Moreover, because of increased experience with the task and the group, groups in both conditions needed less conversation to complete the task in later rounds. Implications are discussed for practice, training, and possibilities for future research.
Attention Perception & Psychophysics | 1990
Peter J. Werkhoven; Herman P. Snippe; Jan J. Koenderink
We present an ambiguous motion paradigm that allows us to quantify the influence of aspects of form relevant to the perception of apparent motion. We report on the role of bar element orientation in motion paths. The effect of orientation differences between bar elements in a motion path is small with respect to the crucial role of the orientation of bar elements relative to motion direction. Motion perception between elements oriented along the motion direction dominates motion perception between elements oriented perpendicularly to motion direction. The perception of apparent motion is affected by bar length and width and is anisotropic.
Brain Research | 2008
Tom G. Philippi; Jan B. F. van Erp; Peter J. Werkhoven
In temporal numerosity judgment, observers systematically underestimate the number of pulses. The strongest underestimations occur when stimuli are presented with a short interstimulus interval (ISI) and are stronger for vision than for audition and touch. We investigated whether multisensory presentation leads to a reduction of underestimation. Participants were presented with 2 to 10 (combinations of) auditory beeps, tactile taps to the index finger, and visual flashes at different ISIs (20 to 320 ms). For all presentation modes, we found underestimation, except for small numbers of pulses. A control experiment showed that the latter is due to a (cognitive) range effect. Averaged over conditions, performance is best for touch, followed by audition, and worst for vision. Generally, multisensory presentation improves performance over the unisensory presentations. For larger ISIs (160 and 320 ms), we found a tendency toward a reduction in variance for the multisensory presentation modes. For smaller ISIs (20 and 40 ms), we found a reduction in underestimation, but an increase in variance for the multisensory presentation modes. In the discussion, we relate these two findings to Maximum Likelihood Estimation (MLE) models predicting that multisensory integration reduces variance.
Perception | 2004
Jan B. F. van Erp; Peter J. Werkhoven
We investigated the consistency between tactually and visually designated empty time intervals. In a forced-choice discrimination task, participants judged whether the second of two intervals was shorter or longer than the first interval. Two pulses defined the intervals. The pulse was either a vibro-tactile burst presented to the fingertip, or a foveally presented white square. The comparisons were made for uni-modal and cross-modal intervals. We used four levels of standard interval durations in the range of 100–800 ms. The results showed that tactile empty intervals must be 8.5% shorter than visual intervals to be perceived as equal in duration. This cross-modal bias is larger for small intervals and decreases with increasing standard intervals. The Weber fractions (the threshold divided by the standard interval) are 20% and are constant over the standard intervals. This indicates that Weber's law holds for the range of interval lengths tested. Furthermore, the Weber fractions are consistent over uni-modal and cross-modal comparisons, which indicates that there is no additional noise involved in the cross-modal comparisons.
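The two quantities reported above are simple to compute. A minimal sketch (the specific interval values below are illustrative, not data from the study):

```python
def weber_fraction(threshold_ms, standard_ms):
    """Discrimination threshold expressed as a fraction of the standard."""
    return threshold_ms / standard_ms

# With a constant ~20% Weber fraction, the just-noticeable difference
# scales linearly with the standard interval:
jnds = {std: 0.20 * std for std in (100, 200, 400, 800)}  # all in ms

# Cross-modal bias: a tactile interval must be ~8.5% shorter than a
# visual interval to be perceived as equal in duration.
def matching_tactile_interval(visual_ms, bias=0.085):
    return visual_ms * (1 - bias)
```

A constant Weber fraction across standards is exactly what Weber's law asserts; the constant fraction across uni- and cross-modal comparisons is what supports the no-additional-noise conclusion.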
Ergonomics | 2012
Marieke E. Thurlings; J.B.F. van Erp; Anne-Marie Brouwer; Benjamin Blankertz; Peter J. Werkhoven
Event-related potential (ERP) based brain–computer interfaces (BCIs) employ differences in brain responses to attended and ignored stimuli. When using a tactile ERP-BCI for navigation, mapping is required between navigation directions on a visual display and unambiguously corresponding tactile stimuli (tactors) from a tactile control device: control-display mapping (CDM). We investigated the effect of congruent (both display and control horizontal, or both vertical) and incongruent (vertical display, horizontal control) CDMs on task performance, the ERP, and potential BCI performance. Ten participants attended to a target (determined via CDM) in a stream of sequentially vibrating tactors. We show that congruent CDM yielded the best task performance, enhanced the P300, and increased estimated BCI performance. This suggests a reduced availability of attentional resources when operating an ERP-BCI with incongruent CDM. Additionally, we found an enhanced N2 for incongruent CDM, which indicates a conflict between visual display and tactile control orientations. Practitioner Summary: Incongruency in control-display mapping reduces task performance. In this study, brain responses, task and system performance are related to (in)congruent mapping of command options and the corresponding stimuli in a brain–computer interface (BCI). Directional congruency reduces task errors, increases available attentional resources, improves BCI performance and thus facilitates human–computer interaction.
Attention Perception & Psychophysics | 1991
Peter J. Werkhoven; Jan J. Koenderink
Local descriptions of velocity fields (e.g., rotation, divergence, and deformation) contain a wealth of information for form perception and ego motion. In spite of this, human psychophysical performance in estimating these entities has not yet been thoroughly examined. In this paper, we report on the visual discrimination of rotary motion. A sequence of image frames is used to elicit an apparent rotation of an annulus, composed of dots in the frontoparallel plane, around a fixation spot at the center of the annulus. Differential angular velocity thresholds are measured as a function of the angular velocity, the diameter of the annulus, the number of dots, the display time per frame, and the number of frames. The results show a U-shaped dependence of angular velocity discrimination on spatial scale, with minimal Weber fractions of 7%. Experiments with a scatter in the distance of the individual dots to the center of rotation demonstrate that angular velocity cannot be assessed directly; perceived angular velocity depends strongly on the distance of the dots relative to the center of rotation. We suggest that the estimation of rotary motion is mediated by local estimations of linear velocity.
Proceedings of the 50th Annual Meeting of the Human Factors and Ergonomics Society (HFES 2006), San Francisco, CA, October 16–20, 2006, pp. 1687–1691 | 2006
Jan B. F. van Erp; Peter J. Werkhoven
Access to navigation information is rapidly becoming standard in many situations, for example through GPS receivers and collision avoidance systems in cars. However, perceiving and processing this information may overload the user's visual sense and cognitive resources. Intuitive navigation information presentation concepts using the sense of touch are claimed to be a solution to both threats. Employing the sense of touch can reduce the visual load, and the proverbial "tap-on-the-shoulder" may all but automatically evoke the correct (control) behavior. This recently resulted in the development of car seats with vibrating elements, belts with vibrators for soldiers, tactile vests for pilots, and many similar displays. This paper presents a model for human behavior in platform navigation and control called prenav. Prenav allows us to discuss issues such as the accuracy of spatial information displays, effects on workload, and effects of external stressors. Prenav guided the validation studies we conducted in the last decade. Based on these studies, we concluded that tactile torso displays can potentially provide a major workload reduction and safety enhancement in platform control.
Attention Perception & Psychophysics | 2009
Peter J. Werkhoven; Jan B. F. van Erp; Tom G. Philippi
Irrelevant events in one sensory modality can influence the number of events that are perceived in another modality. Previously, the underlying process of sensory integration was studied in conditions in which participants knew a priori which sensory modality was relevant and which was not. Consequently, (bottom-up) sensory interference and (top-down) selective attention were confounded. We disentangled these effects by measuring the influence of visual flashes on the number of tactile taps that were perceived, and vice versa, in two conditions. In the cue condition, participants were instructed on which modality to report before the bimodal stimulus was presented. In the no-cue condition, they were instructed after stimulus presentation. Participants reported the number of events that they perceived for bimodal combinations of one, two, or three flashes and one, two, or three taps. Our main findings were that (1) in no-cue conditions, the influence of vision on touch was stronger than was the influence of touch on vision; (2) in cue conditions, the integration effects were smaller than those in no-cue conditions; and (3) irrelevant taps were less easily ignored than were irrelevant flashes. This study disentangled previously confounded bottom-up and top-down effects: The bottom-up influence of vision on touch was stronger, but vision was also more easily suppressed by top-down selective attention. We have compared our results qualitatively and quantitatively with recently proposed sensory-integration models.