Jan Drewes
University of Trento
Publications
Featured research published by Jan Drewes.
Frontiers in Psychology | 2011
Rufin VanRullen; Niko A. Busch; Jan Drewes; Julien Dubois
Even in well-controlled laboratory environments, apparently identical repetitions of an experimental trial can give rise to highly variable perceptual outcomes and behavioral responses. This variability is generally discarded as a reflection of intrinsic noise in neuronal systems. However, part of this variability may be accounted for by trial-by-trial fluctuations of the phase of ongoing oscillations at the moment of stimulus presentation. For example, the phase of an electro-encephalogram (EEG) oscillation reflecting the rapid waxing and waning of sustained attention can predict the perception of a subsequent visual stimulus at threshold. Similar ongoing periodicities account for a portion of the trial-by-trial variability of visual reaction times. We review the available experimental evidence linking ongoing EEG phase to perceptual and attentional variability, and the corresponding methodology. We propose future tests of this relation, and discuss the theoretical implications for understanding the neuronal dynamics of sensory perception.
The Journal of Neuroscience | 2011
Jan Drewes; Rufin VanRullen
Motor reaction times in humans are highly variable from one trial to the next, even for simple and automatic tasks, such as shifting your gaze to a suddenly appearing target. Although classic models of reaction time generation consider this variability to reflect intrinsic noise, some portion of it could also be attributed to ongoing neuronal processes. For example, variations of alpha rhythm frequency (8–12 Hz) across individuals, or alpha amplitude across trials, have been related previously to manual reaction time variability. Here we investigate the trial-by-trial influence of oscillatory phase, a dynamic marker of ongoing activity, on saccadic reaction time in three paradigms of increasing cognitive demand (simple reaction time, choice reaction time, and visual discrimination tasks). The phase of ongoing prestimulus activity in the high alpha/low beta range (11–17 Hz) at frontocentral locations was strongly associated with saccadic response latencies. This relation, present in all three paradigms, peaked for phases recorded ∼50 ms before fixation point offset and 250 ms before target onset. Reaction times in the most demanding discrimination task fell into two distinct modes reflecting a fast but inaccurate strategy or a slow and efficient one. The phase effect was markedly stronger in the group of subjects using the faster strategy. We conclude that periodic fluctuations of electrical activity attributable to neuronal oscillations can modulate the efficiency of the oculomotor system on a rapid timescale; however, this relation may be obscured when cognitive load also adds a significant contribution to response time variability.
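The core analysis described above can be sketched in a few lines: estimate the prestimulus phase in the alpha/beta band, bin trials by that phase, and compare mean saccadic reaction time across bins. The following numpy-only illustration is an assumption-laden sketch, not the authors' pipeline; the function names and the one-cycle DFT used as a phase estimator are illustrative choices.

```python
import numpy as np

def prestim_phase(eeg_trials, fs, f0, t_end):
    """Phase of the f0 component over the one-cycle window ending at sample t_end."""
    n = int(round(fs / f0))                          # samples per cycle of f0
    k = np.arange(n)
    seg = eeg_trials[:, t_end - n:t_end]             # prestimulus window, one trial per row
    coef = seg @ np.exp(-2j * np.pi * f0 * k / fs)   # single-frequency DFT
    return np.angle(coef)

def rt_by_phase(phases, rts, n_bins=8):
    """Mean reaction time in each of n_bins equal-width phase bins."""
    edges = np.linspace(-np.pi, np.pi, n_bins + 1)
    idx = np.clip(np.digitize(phases, edges) - 1, 0, n_bins - 1)
    return np.array([rts[idx == i].mean() for i in range(n_bins)])
```

A phase-dependent modulation of RT would show up as a systematic difference between the bin means (in the paper, strongest for phases ~50 ms before fixation point offset).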
Eye Tracking Research & Applications (ETRA) | 2012
Jan Drewes; Guillaume S. Masson; Anna Montagnini
Camera-based eye trackers are the mainstay of today's eye movement research and countless practical applications of eye tracking. Recently, a significant impact of changes in pupil size on the accuracy of camera-based eye trackers during fixation has been reported [Wyatt 2010]. We compared the pupil-size effect between a scleral search coil based eye tracker (DNI) and an up-to-date infrared camera-based eye tracker (SR Research Eyelink 1000) by simultaneously recording human eye movements with both techniques. Between pupil-constricted and pupil-relaxed conditions we find a subject-specific shift in reported gaze position exceeding 2 degrees only with the camera-based eye tracker, while the scleral search coil system simultaneously reported steady fixation. This confirms that the actual point of fixation did not change during pupil constriction/relaxation, and that the resulting shift in measured gaze position is solely an artifact of the camera-based eye tracking system. We demonstrate a method to partially compensate the pupil-based shift using separate calibrations in pupil-constricted and pupil-dilated conditions, with pupil size as an index to dynamically weight the two calibrations.
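The compensation idea described here, two calibrations dynamically weighted by pupil size, can be sketched as follows. This is a hedged illustration under assumptions: the affine form of the calibration and all function names are mine, not the authors' implementation.

```python
import numpy as np

def fit_affine_calibration(raw_xy, true_xy):
    """Least-squares affine map from raw tracker coordinates to screen degrees."""
    A = np.hstack([raw_xy, np.ones((len(raw_xy), 1))])  # augment with [x, y, 1]
    coeffs, *_ = np.linalg.lstsq(A, true_xy, rcond=None)
    return coeffs                                        # shape (3, 2)

def apply_calibration(coeffs, raw_xy):
    A = np.hstack([raw_xy, np.ones((len(raw_xy), 1))])
    return A @ coeffs

def weighted_gaze(raw_xy, pupil, pupil_small, pupil_large, cal_small, cal_large):
    """Blend the two calibrations, using current pupil size as the index."""
    w = np.clip((pupil - pupil_small) / (pupil_large - pupil_small), 0.0, 1.0)
    g_small = apply_calibration(cal_small, raw_xy)
    g_large = apply_calibration(cal_large, raw_xy)
    return (1 - w)[:, None] * g_small + w[:, None] * g_large
```

With `pupil` halfway between the constricted and dilated reference sizes, the output is the midpoint of the two calibrated positions, which is the "dynamic weighting" the abstract refers to.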
Journal of Vision | 2011
Jan Drewes; Julia Trommershäuser; Karl R. Gegenfurtner
Human observers are capable of detecting animals within novel natural scenes with remarkable speed and accuracy. Recent studies found human response times to be as fast as 120 ms in a dual-presentation (2-AFC) setup (H. Kirchner & S. J. Thorpe, 2005). In most previous experiments, pairs of randomly chosen images were presented, frequently from very different contexts (e.g., a zebra in Africa vs. the New York Skyline). Here, we tested the effect of background size and contiguity on human performance by using a new, contiguous background image set. Individual images contained a single animal surrounded by a large, animal-free image area. The image could be positioned and cropped in such a manner that the animal could occur in one of eight evenly spaced positions on an imaginary circle (radius 10-deg visual angle). In the first (8-Choice) experiment, all eight positions were used, whereas in the second (2-Choice) and third (2-Image) experiments, the animals were only presented on the two positions to the left and right of the screen center. In the third experiment, additional rectangular frames were used to mimic the conditions of earlier studies. Average latencies on successful trials differed only slightly between conditions, indicating that the number of possible animal locations within the display does not affect decision latency. Detailed analysis of saccade targets revealed a preference toward both the head and the center of gravity of the target animal, affecting hit ratio, latency, and the number of saccades required to reach the target. These results illustrate that rapid animal detection operates scene-wide and is fast and efficient even when the animals are embedded in their natural backgrounds.
Scientific Reports | 2015
Jan Drewes; Weina Zhu; Andreas Wutz; David Melcher
Perceptual systems must create discrete objects and events out of a continuous flow of sensory information. Previous studies have demonstrated oscillatory effects in the behavioral outcome of low-level visual tasks, suggesting a cyclic nature of visual processing as the solution. To investigate whether these effects extend to more complex tasks, a stream of “neutral” photographic images (not containing targets) was rapidly presented (20 ms/image). Embedded in the stream were one or two presentations of a randomly selected target image (vehicles and animals). Subjects reported the perceived target category. On dual-presentation trials, the ISI varied systematically from 0 to 600 ms. At a randomized time before the first target presentation, the screen was flashed with the intent of creating a phase reset in the visual system. Sorting trials by temporal distance between flash and first target presentation revealed strong oscillations in behavioral performance, peaking at 5 Hz. On dual-target trials, longer ISIs led to reduced performance, implying a temporal integration window for object category discrimination. The “animal” trials exhibited a significant oscillatory component around 5 Hz. Our results indicate that oscillatory effects are not mere fringe effects relevant only with simple stimuli, but arise from the core mechanisms of visual processing and may well extend into real-life scenarios.
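The trial-sorting analysis described above lends itself to a compact sketch: bin accuracy by flash-to-target lag, remove the mean, and inspect the amplitude spectrum of the resulting time course for a behavioral oscillation. The code below is an illustrative reconstruction under assumed names, not the authors' analysis script.

```python
import numpy as np

def behavioral_spectrum(lags_s, correct, bin_width_s=0.02):
    """Bin accuracy by flash-target lag; return frequency axis and amplitude spectrum."""
    edges = np.arange(lags_s.min(), lags_s.max() + bin_width_s, bin_width_s)
    idx = np.digitize(lags_s, edges) - 1
    acc = np.array([correct[idx == i].mean() for i in range(len(edges) - 1)])
    acc = acc - acc.mean()                    # remove mean performance (DC)
    spec = np.abs(np.fft.rfft(acc))
    freqs = np.fft.rfftfreq(len(acc), d=bin_width_s)
    return freqs, spec
```

A behavioral oscillation like the one reported would appear as a spectral peak, here at 5 Hz; in practice the peak's significance would be assessed against a surrogate distribution (e.g., shuffled lags).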
PLOS ONE | 2014
Jan Drewes; Weina Zhu; Yingzhou Hu; Xintian Hu
Camera-based eye trackers are the mainstay of eye movement research and countless practical applications of eye tracking. Recently, a significant impact of changes in pupil size on gaze position as measured by camera-based eye trackers has been reported. In an attempt to improve the understanding of the magnitude and population-wise distribution of the pupil-size dependent shift in reported gaze position, we present the first collection of binocular pupil drift measurements recorded from 39 subjects. The pupil-size dependent shift varied greatly between subjects (from 0.3 to 5.2 deg of deviation, mean 2.6 deg), but also between the eyes of individual subjects (0.1 to 3.0 deg difference, mean difference 1.0 deg). We observed a wide range of drift direction, mostly downward and nasal. We demonstrate two methods to partially compensate the pupil-based shift using separate calibrations in pupil-constricted and pupil-dilated conditions, and evaluate an improved method of compensation based on individual look-up-tables, achieving up to 74% of compensation.
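The look-up-table compensation evaluated here can be illustrated with a minimal sketch: during calibration, record the apparent gaze deviation at several pupil sizes while the subject holds fixation; at run time, subtract the offset interpolated from that table. Names and the linear interpolation are my assumptions, not the paper's exact method.

```python
import numpy as np

def build_pupil_lut(pupil_sizes, gaze_offsets_xy):
    """Sort (pupil size -> measured fixation offset) samples into a look-up table."""
    order = np.argsort(pupil_sizes)
    return np.asarray(pupil_sizes, float)[order], np.asarray(gaze_offsets_xy, float)[order]

def compensate(gaze_xy, pupil, lut_pupil, lut_offsets):
    """Subtract the pupil-size-dependent offset, linearly interpolated from the LUT."""
    dx = np.interp(pupil, lut_pupil, lut_offsets[:, 0])
    dy = np.interp(pupil, lut_pupil, lut_offsets[:, 1])
    return gaze_xy - np.stack([dx, dy], axis=-1)
```

Because the drift direction and magnitude vary strongly between subjects and even between eyes, such a table would be built per subject and per eye, which is consistent with the individual look-up-tables the abstract describes.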
PLOS ONE | 2016
Weina Zhu; Jan Drewes; David Melcher
Visual processing is not instantaneous; instead, our conscious perception depends on the integration of sensory input over time. In the case of Continuous Flash Suppression (CFS), masks are flashed to one eye, suppressing awareness of stimuli presented to the other eye. One potential explanation of CFS is that it depends, at least in part, on the flashing mask continually interrupting visual processing before the stimulus reaches awareness. We investigated the temporal features of masks in two ways. First, we measured the suppression effectiveness of a wide range of masking frequencies (0-32Hz), using both complex (faces/houses) and simple (closed/open geometric shapes) stimuli. Second, we varied whether the different frequencies were interleaved within blocks or separated in homogeneous blocks, in order to see if suppression was stronger or weaker when the frequency remained constant across trials. We found that break-through contrast differed dramatically between masking frequencies, with mask effectiveness following a skewed-normal curve peaking around 6Hz and little or no masking for low and high temporal frequencies. Peak frequency was similar for trial-randomized and block-randomized conditions. In terms of type of stimulus, we found no significant difference in peak frequency between the stimulus groups (complex/simple, face/house, closed/open). These findings suggest that temporal factors play a critical role in perceptual awareness, perhaps due to interactions between mask frequency and the time frame of visual processing.
Information Sciences, Signal Processing and their Applications | 2003
Erhardt Barth; Jan Drewes; Thomas Martinetz
We present a model for predicting the eye movements of observers viewing dynamic sequences of images. As an indicator for the degree of saliency we evaluate an invariant of the spatio-temporal structure tensor that indicates an intrinsic dimension of at least two. The saliency is used to derive a list of candidate locations. Out of this list, the currently attended location is selected according to a mapping found by supervised learning. The true locations used for learning are obtained with an eye tracker. In addition to the saliency-based candidates, the selection algorithm uses a limited history of locations attended in the past. The mapping is linear and can thus be quickly adapted to the individual observer. The mapping is optimal in the sense that it is obtained by minimizing, by gradient descent, the overall quadratic difference between the predicted and the actually attended location.
Human Vision and Electronic Imaging Conference | 2003
Erhardt Barth; Jan Drewes; Thomas Martinetz
We present a model that predicts saccadic eye-movements and can be tuned to a particular human observer who is viewing a dynamic sequence of images. Our work is motivated by applications that involve gaze-contingent interactive displays on which information is displayed as a function of gaze direction. The approach therefore differs from standard approaches in two ways: (1) we deal with dynamic scenes, and (2) we provide means of adapting the model to a particular observer. As an indicator for the degree of saliency we evaluate the intrinsic dimension of the image sequence within a geometric approach implemented by using the structure tensor. Out of these candidate saliency-based locations, the currently attended location is selected according to a strategy found by supervised learning. The data are obtained with an eye-tracker and subjects who view video sequences. The selection algorithm receives candidate locations of current and past frames and a limited history of locations attended in the past. We use a linear mapping that is obtained by minimizing the quadratic difference between the predicted and the actually attended location by gradient descent. Being linear, the learned mapping can be quickly adapted to the individual observer.
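The learning stage shared by these two papers, a linear mapping trained by gradient descent on the squared prediction error, can be sketched compactly. The feature encoding (candidate saliency locations plus a short attention history, flattened into one vector per frame) and all names below are assumptions for illustration, not the authors' code.

```python
import numpy as np

def train_linear_predictor(features, attended, lr=0.01, epochs=500):
    """Fit W minimizing the mean squared error between features @ W.T and attended (x, y)."""
    n, d = features.shape
    W = np.zeros((2, d))                       # one row each for predicted x and y
    for _ in range(epochs):
        pred = features @ W.T                  # (n, 2) predicted locations
        grad = 2.0 / n * (pred - attended).T @ features
        W -= lr * grad                         # gradient descent step
    return W
```

Because the model is linear, re-running this fit on a small amount of data from a new observer is cheap, which is the paper's stated rationale for rapid per-observer adaptation.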
PLOS ONE | 2013
Weina Zhu; Jan Drewes; Karl R. Gegenfurtner
The visual system has a remarkable ability to extract categorical information from complex natural scenes. In order to elucidate the role of low-level image features for the recognition of objects in natural scenes, we recorded saccadic eye movements and event-related potentials (ERPs) in two experiments, in which human subjects had to detect animals in previously unseen natural images. We used a new natural image database (ANID) that is free of some of the potential artifacts that have plagued the widely used COREL images. Color and grayscale images picked from the ANID and COREL databases were used. In all experiments, color images induced a greater N1 EEG component at earlier time points than grayscale images. We suggest that this influence of color in animal detection may be masked by later processes when measuring reaction times. The ERP results of go/nogo and forced choice tasks were similar to those reported earlier. The non-animal stimuli induced a larger N1 than animal stimuli in both the COREL and ANID databases. This result indicates that ultra-fast processing of animal images is possible irrespective of the particular database. With the ANID images, the difference between color and grayscale images is more pronounced than with the COREL images. The earlier use of the COREL images might have led to an underestimation of the contribution of color. Therefore, we conclude that the ANID image database is better suited for the investigation of the processing of natural scenes than other databases commonly used.