
Publication


Featured research published by Diederick C Niehorster.


i-Perception | 2017

The Accuracy and Precision of Position and Orientation Tracking in the HTC Vive Virtual Reality System for Scientific Research

Diederick C Niehorster; Li Li; Markus Lappe

The advent of inexpensive consumer virtual reality equipment enables many more researchers to study perception with naturally moving observers. One such system, the HTC Vive, offers a large field-of-view, high-resolution head-mounted display together with a room-scale tracking system for less than a thousand U.S. dollars. If the position and orientation tracking of this system is of sufficient accuracy and precision, it could be suitable for much research that is currently done with far more expensive systems. Here we present a quantitative test of the HTC Vive’s position and orientation tracking as well as its end-to-end system latency. We report that while the precision of the Vive’s tracking measurements is high and its system latency (22 ms) is low, its position and orientation measurements are provided in a coordinate system that is tilted with respect to the physical ground plane. Because large changes in offset were found whenever tracking was briefly lost, this offset cannot be corrected for with a one-time calibration procedure. We conclude that the varying offset between the virtual and the physical tracking space makes the HTC Vive at present unsuitable for scientific experiments that require accurate visual stimulation of self-motion through a virtual world. It may, however, be suited for other experiments that do not have this requirement.
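
The accuracy and precision measures evaluated in this paper can be illustrated with a short sketch. The following is a minimal example, not the paper's analysis code, of how positional accuracy (offset of the mean tracked position from a known reference point) and precision (sample-to-sample scatter) could be computed from repeated tracker samples; all variable names are illustrative.

```python
# Minimal sketch of tracking accuracy and precision (illustrative, not the paper's code).
import numpy as np

def accuracy_and_precision(samples_xyz, reference_xyz):
    """samples_xyz: (N, 3) tracked positions in metres recorded at one physical point;
    reference_xyz: (3,) ground-truth position of that point."""
    samples_xyz = np.asarray(samples_xyz, dtype=float)
    reference_xyz = np.asarray(reference_xyz, dtype=float)

    # Accuracy: distance between the mean tracked position and the reference point.
    mean_pos = samples_xyz.mean(axis=0)
    accuracy = np.linalg.norm(mean_pos - reference_xyz)

    # Precision: RMS deviation of the samples around their own mean
    # (sample-to-sample scatter, independent of any constant offset).
    precision_rms = np.sqrt(((samples_xyz - mean_pos) ** 2).sum(axis=1).mean())
    return accuracy, precision_rms
```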


Behavior Research Methods | 2018

What to expect from your remote eye-tracker when participants are unrestrained

Diederick C Niehorster; Tim Cornelissen; Kenneth Holmqvist; Ignace T. C. Hooge; Roy S. Hessels

The marketing materials of remote eye-trackers suggest that data quality is invariant to the position and orientation of the participant as long as the eyes of the participant are within the eye-tracker’s headbox, the area where tracking is possible. As such, remote eye-trackers are marketed as allowing the reliable recording of gaze from participant groups that cannot be restrained, such as infants, schoolchildren, and patients with muscular or brain disorders. Practical experience and previous research, however, tell us that eye-tracking data quality, e.g., the accuracy of the recorded gaze position and the amount of data loss, deteriorates (compared with well-trained participants in chinrests) when the participant is unrestrained and assumes a non-optimal pose in front of the eye-tracker. How then can researchers working with unrestrained participants choose an eye-tracker? Here we investigated the performance of five popular remote eye-trackers from EyeTribe, SMI, SR Research, and Tobii in a series of tasks where participants took on non-optimal poses. We report that the tested systems varied in the amount of data loss and systematic offsets observed during our tasks. The EyeLink and EyeTribe in particular had large problems. Furthermore, the Tobii eye-trackers reported data for two eyes when only one eye was visible to the eye-tracker. This study provides practical insight into how popular remote eye-trackers perform when recording from unrestrained participants. It furthermore provides a testing method for evaluating whether a tracker is suitable for studying a certain target population, one that manufacturers can also use during the development of new eye-trackers.
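
As an illustration of the data-quality measures named in this abstract, the sketch below computes accuracy (the systematic offset of recorded gaze from a fixation target) and data loss (the proportion of samples without a valid gaze coordinate). This is a minimal illustrative example with assumed variable names, not the paper's analysis code.

```python
# Minimal sketch of two eye-tracking data-quality measures (illustrative, not the paper's code).
import numpy as np

def gaze_data_quality(gaze_deg, target_deg):
    """gaze_deg: (N, 2) recorded gaze positions in degrees (NaN marks a lost sample);
    target_deg: (2,) position of the fixation target in degrees."""
    gaze_deg = np.asarray(gaze_deg, dtype=float)
    valid = ~np.isnan(gaze_deg).any(axis=1)

    # Data loss: fraction of samples for which no gaze position was reported.
    data_loss = 1.0 - valid.mean()

    # Accuracy: mean Euclidean offset of valid gaze samples from the target.
    offsets = np.linalg.norm(gaze_deg[valid] - np.asarray(target_deg), axis=1)
    accuracy = offsets.mean() if valid.any() else np.nan
    return accuracy, data_loss
```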


Behavior Research Methods | 2017

Noise-robust fixation detection in eye movement data: Identification by two-means clustering (I2MC).

Roy S. Hessels; Diederick C Niehorster; Chantal Kemner; Ignace T. C. Hooge

Eye-tracking research in infants and older children has gained a lot of momentum over the last decades. Although eye-tracking research in these participant groups has become easier with the advance of the remote eye-tracker, this often comes at the cost of poorer data quality than in research with well-trained adults (Hessels, Andersson, Hooge, Nyström, & Kemner Infancy, 20, 601–633, 2015; Wass, Forssman, & Leppänen Infancy, 19, 427–460, 2014). Current fixation detection algorithms are not built for data from infants and young children. As a result, some researchers have even turned to hand correction of fixation detections (Saez de Urabain, Johnson, & Smith Behavior Research Methods, 47, 53–72, 2015). Here we introduce a fixation detection algorithm—identification by two-means clustering (I2MC)—built specifically for data across a wide range of noise levels and when periods of data loss may occur. We evaluated the I2MC algorithm against seven state-of-the-art event detection algorithms, and report that the I2MC algorithm’s output is the most robust to high noise and data loss levels. The algorithm is automatic, works offline, and is suitable for eye-tracking data recorded with remote or tower-mounted eye-trackers using static stimuli. In addition to application of the I2MC algorithm in eye-tracking research with infants, school children, and certain patient groups, the I2MC algorithm also may be useful when the noise and data loss levels are markedly different between trials, participants, or time points (e.g., longitudinal research).
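
To illustrate the core idea behind two-means clustering for fixation detection, the sketch below is a heavily simplified illustration, not the authors' published I2MC implementation (reference code accompanies the paper): within a sliding window, gaze samples are split into two position clusters, and samples at which the cluster membership switches accumulate weight, so that weight peaks mark candidate transitions between fixations.

```python
# Heavily simplified sketch of the two-means clustering weight idea (not the published I2MC code).
import numpy as np

def i2mc_like_weights(x, y, window_len=40):
    """x, y: 1-D arrays of gaze coordinates; window_len: samples per sliding window."""
    n = len(x)
    weights = np.zeros(n)
    counts = np.zeros(n)
    for start in range(0, n - window_len):
        seg = np.column_stack((x[start:start + window_len],
                               y[start:start + window_len]))
        if np.isnan(seg).any():
            continue  # this simplified sketch skips windows containing data loss
        # Two-means clustering on position, initialised with the first and last sample.
        centers = seg[[0, -1]].astype(float)
        for _ in range(10):
            d = np.linalg.norm(seg[:, None, :] - centers[None, :, :], axis=2)
            labels = d.argmin(axis=1)
            for k in (0, 1):
                if np.any(labels == k):
                    centers[k] = seg[labels == k].mean(axis=0)
        # Samples at a cluster transition share one unit of weight for this window.
        switches = np.flatnonzero(np.diff(labels) != 0)
        if len(switches):
            weights[start + switches + 1] += 1.0 / len(switches)
        counts[start:start + window_len] += 1
    # Average weight per window; peaks indicate candidate saccades between fixations.
    return np.divide(weights, counts, out=np.zeros(n), where=counts > 0)
```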


Behavior Research Methods | 2018

Using machine learning to detect events in eye-tracking data

Raimondas Zemblys; Diederick C Niehorster; Oleg V. Komogortsev; Kenneth Holmqvist

Event detection is a challenging stage in eye movement data analysis. A major drawback of current event detection methods is that parameters have to be adjusted based on eye movement data quality. Here we show that a fully automated classification of raw gaze samples as belonging to fixations, saccades, or other oculomotor events can be achieved using a machine-learning approach. Any already manually or algorithmically detected events can be used to train a classifier to produce similar classification of other data without the need for a user to set parameters. In this study, we explore the application of the random forest machine-learning technique for the detection of fixations, saccades, and post-saccadic oscillations (PSOs). In an effort to show the practical utility of the proposed method for applications that employ eye movement classification algorithms, we provide an example where the method is employed in an eye-movement-driven biometric application. We conclude that machine-learning techniques lead to superior detection compared to current state-of-the-art event detection algorithms and can reach the performance of manual coding.
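
The sample-level classification approach can be sketched with a standard machine-learning library. The example below is a minimal illustration using scikit-learn, not the authors' pipeline; the feature set (speed and acceleration only) is deliberately small, whereas the paper uses a much richer set of sample-based features.

```python
# Minimal sketch of random-forest event classification of gaze samples (illustrative, not the paper's pipeline).
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def simple_features(x, y, fs):
    """Per-sample features from gaze coordinates x, y (degrees) sampled at fs Hz."""
    vx, vy = np.gradient(x) * fs, np.gradient(y) * fs
    speed = np.hypot(vx, vy)                 # instantaneous gaze speed (deg/s)
    accel = np.abs(np.gradient(speed)) * fs  # magnitude of acceleration (deg/s^2)
    return np.column_stack((speed, accel))

def train_event_classifier(x, y, labels, fs=1000):
    """Train on data that already carries manual or algorithmic event labels,
    e.g. 0 = fixation, 1 = saccade, 2 = post-saccadic oscillation."""
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    clf.fit(simple_features(x, y, fs), labels)
    return clf

def classify_samples(clf, x, y, fs=1000):
    """Assign an event label to every sample of a new recording."""
    return clf.predict(simple_features(x, y, fs))
```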


Journal of Vision | 2010

A Bayesian model for estimating observer translation and rotation from optic flow and extra-retinal input.

Jeffrey A. Saunders; Diederick C Niehorster

We present a Bayesian ideal observer model that estimates observer translation and rotation from optic flow and an extra-retinal eye movement signal. The model assumes a rigid environment and noise in velocity measurements, and that eye movement provides a probabilistic cue for rotation. The model can simulate human heading perception across a range of conditions, including: translation with simulated vs. actual eye rotations, environments with various depth structures, and the presence of independently moving objects.
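
A very small sketch of the kind of probabilistic cue combination described here (not the published model, which estimates translation and rotation jointly from the full flow field): a rotation estimate derived from optic flow is combined with a noisy extra-retinal eye-movement signal, each weighted by its reliability (inverse variance), as in standard Gaussian cue combination.

```python
# Illustrative reliability-weighted combination of a flow-based rotation estimate
# with an extra-retinal eye-movement signal (not the published model).
def combine_rotation_cues(flow_rot, flow_var, extra_retinal_rot, extra_retinal_var):
    """All arguments are rotation rates (deg/s) and their variances."""
    w_flow = 1.0 / flow_var
    w_er = 1.0 / extra_retinal_var
    return (w_flow * flow_rot + w_er * extra_retinal_rot) / (w_flow + w_er)
```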


Journal of Vision | 2015

Manual tracking enhances smooth pursuit eye movements

Diederick C Niehorster; Wilfred W. F. Siu; Li Li

Previous studies have reported that concurrent manual tracking enhances smooth pursuit eye movements only when tracking a self-driven or a predictable moving target. Here, we used a control-theoretic approach to examine whether concurrent manual tracking enhances smooth pursuit of an unpredictably moving target. In the eye-hand tracking condition, participants used their eyes to track a Gaussian target that moved randomly along a horizontal axis. At the same time, they used their dominant hand to move a mouse controlling the horizontal movement of a Gaussian cursor, with the goal of keeping the cursor vertically aligned with the target. In the eye-alone tracking condition, the target and cursor positions recorded in the eye-hand tracking condition were replayed, and participants only performed eye tracking of the target. Catch-up saccades were identified and removed from the recorded eye movements, allowing for a frequency-response analysis of the smooth pursuit response to unpredictable target motion. We found that the overall smooth pursuit gain was higher and fewer catch-up saccades were made when eye tracking was accompanied by manual tracking than when it was not. We conclude that concurrent manual tracking enhances smooth pursuit. This enhancement is a fundamental property of eye-hand coordination that occurs regardless of the predictability of the target motion.
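
The frequency-response analysis mentioned above can be sketched as follows; this is a minimal illustration with assumed signal names, not the paper's analysis code. Pursuit gain and phase are estimated at each stimulus frequency from the FFTs of the de-saccaded eye trace and the target trace.

```python
# Minimal sketch of a frequency-response (gain and phase) estimate for smooth pursuit.
import numpy as np

def pursuit_frequency_response(eye_pos, target_pos, fs, stim_freqs):
    """eye_pos, target_pos: 1-D position traces (saccades already removed and
    interpolated); fs: sampling rate in Hz; stim_freqs: frequencies (Hz) making
    up the target motion."""
    n = len(target_pos)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    eye_fft = np.fft.rfft(eye_pos - np.mean(eye_pos))
    tgt_fft = np.fft.rfft(target_pos - np.mean(target_pos))

    gains, phases = [], []
    for f in stim_freqs:
        i = np.argmin(np.abs(freqs - f))       # nearest FFT bin to the stimulus frequency
        h = eye_fft[i] / tgt_fft[i]            # complex transfer value at that frequency
        gains.append(np.abs(h))                # pursuit gain
        phases.append(np.angle(h, deg=True))   # phase lag/lead in degrees
    return np.array(gains), np.array(phases)
```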


Behavior Research Methods | 2018

Is human classification by experienced untrained observers a gold standard in fixation detection?

Ignace T. C. Hooge; Diederick C Niehorster; Marcus Nyström; Richard Andersson; Roy S. Hessels

Manual classification is still a common method to evaluate event detection algorithms. The procedure is often as follows: two or three human coders and the algorithm classify a significant quantity of data. In the gold standard approach, deviations from the human classifications are considered to be due to mistakes of the algorithm. However, little is known about human classification in eye tracking. To what extent do the classifications from a larger group of human coders agree? Twelve experienced but untrained human coders classified fixations in 6 min of adult and infant eye-tracking data. When using the sample-based Cohen’s kappa, the classifications of the humans agreed nearly perfectly. However, we found substantial differences between the classifications when we examined fixation duration and the number of fixations. We hypothesized that the human coders applied different (implicit) thresholds and selection rules. Indeed, when spatially close fixations were merged, most of the classification differences disappeared. On the basis of the nature of these intercoder differences, we concluded that fixation classification by experienced untrained human coders is not a gold standard. To bridge the gap between agreement measures (e.g., Cohen’s kappa) and eye movement parameters (fixation duration, number of fixations), we suggest the use of the event-based F1 score and two new measures: the relative timing offset (RTO) and the relative timing deviation (RTD).
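
For reference, the sample-based Cohen's kappa used to quantify intercoder agreement can be computed as in the minimal sketch below (illustrative variable names; per-sample labels such as 1 = fixation, 0 = not fixation).

```python
# Minimal sketch of sample-based Cohen's kappa between two coders' per-sample labels.
import numpy as np

def cohens_kappa(labels_a, labels_b):
    a, b = np.asarray(labels_a), np.asarray(labels_b)
    classes = np.union1d(a, b)
    p_observed = np.mean(a == b)                        # observed agreement
    p_expected = sum(np.mean(a == c) * np.mean(b == c)  # agreement expected by chance
                     for c in classes)
    return (p_observed - p_expected) / (1.0 - p_expected)
```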


Journal of Neurophysiology | 2014

Influence of optic flow on the control of heading and target egocentric direction during steering toward a goal

Li Li; Diederick C Niehorster

Although previous studies have shown that people use both optic flow and target egocentric direction to walk or steer toward a goal, it remains an open question how enriching the optic flow field affects the control of heading specified by optic flow and the control of target egocentric direction during goal-oriented locomotion. In the current study, we used a control-theoretic approach to separate the control response specific to these two cues in the visual control of steering toward a goal. The results showed that the addition of optic flow information (such as foreground motion and global flow) in the display improved the overall control precision, the amplitude, and the response delay of the control of heading. The amplitude and the response delay of the control of target egocentric direction were, however, not affected. The improvement in the control of heading with enriched optic flow displays was mirrored by an increase in the accuracy of heading perception. The findings provide direct support for the claim that people use both the heading specified by optic flow and target egocentric direction to walk or steer toward a goal, and they suggest that the visual system does not internally weigh these two cues for goal-oriented locomotion control.


The Journal of Neuroscience | 2017

The primary role of flow processing in the identification of scene-relative object movement

Simon K. Rushton; Diederick C Niehorster; Paul A. Warren; Li Li

Retinal image motion could be due to the movement of the observer through space or of an object relative to the scene. Optic flow, form, and change-of-position cues all provide information that could be used to separate retinal motion due to object movement from retinal motion due to observer movement. In Experiment 1, we used a minimal display to examine the contribution of optic flow and form cues. Human participants indicated the direction of movement of a probe object presented against a background of radially moving pairs of dots. By independently controlling the orientation of each dot pair, we were able to put flow cues to self-movement direction (the point from which all the motion radiated) and form cues to self-movement direction (the point toward which all the dot pairs were oriented) in conflict. We found that only flow cues influenced perceived probe movement. In Experiment 2, we switched to a rich stereo display composed of 3D objects to examine the contribution of flow and position cues. We moved the scene objects to simulate a lateral translation and counter-rotation of gaze. By changing the polarity of the scene objects (from light to dark and vice versa) between frames, we placed flow cues to self-movement direction in opposition to change-of-position cues. We found that flow cues again dominated the perceived probe movement relative to the scene. Together, these experiments indicate that the neural network that processes optic flow has a primary role in the identification of scene-relative object movement.

SIGNIFICANCE STATEMENT Motion of an object in the retinal image indicates relative movement between the observer and the object, but it does not indicate its cause: movement of an object in the scene; movement of the observer; or both. To isolate retinal motion due to movement of a scene object, the brain must parse out the retinal motion due to movement of the eye (“flow parsing”). Optic flow, form, and position cues all have potential roles in this process. We pitted the cues against each other and assessed their influence. We found that flow parsing relies on optic flow alone. These results indicate the primary role of the neural network that processes optic flow in the identification of scene-relative object movement.


i-Perception | 2017

Accuracy and Tuning of Flow Parsing for Visual Perception of Object Motion During Self-Motion

Diederick C Niehorster; Li Li

How do we perceive object motion during self-motion using visual information alone? Previous studies have reported that the visual system can use optic flow to identify and globally subtract the retinal motion component resulting from self-motion to recover scene-relative object motion, a process called flow parsing. In this article, we developed a retinal motion nulling method to directly measure and quantify the magnitude of flow parsing (i.e., the flow parsing gain) in various scenarios to examine the accuracy and tuning of flow parsing for the visual perception of object motion during self-motion. We found that flow parsing gains were below unity for all displays in all experiments, and that increasing self-motion and object motion speed did not alter the flow parsing gain. We conclude that visual information alone is not sufficient for the accurate perception of scene-relative motion during self-motion. Although flow parsing performs global subtraction, its accuracy also depends on local motion information in the retinal vicinity of the moving object. Furthermore, the flow parsing gain was constant across common self-motion or object motion speeds. These results can be used to inform and validate computational models of flow parsing.
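
The flow parsing gain measured with the retinal-motion-nulling method can be summarized with a trivial sketch (illustrative names, not the paper's code): the gain is the fraction of the self-motion component of the object's retinal motion that the visual system subtracted out, so a gain of 1 indicates complete and accurate flow parsing.

```python
# Illustrative definition of a flow parsing gain from a nulling measurement (not the paper's code).
def flow_parsing_gain(nulled_retinal_speed, self_motion_component_speed):
    """nulled_retinal_speed: retinal speed (deg/s) the probe needed for it to
    appear to move as intended relative to the scene;
    self_motion_component_speed: retinal speed (deg/s) that the simulated
    self-motion alone would impart to the probe at its location."""
    return nulled_retinal_speed / self_motion_component_speed
```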

Collaboration


Dive into Diederick C Niehorster's collaborations.

Top Co-Authors


Li Li

University of Hong Kong
