Maarten Demeyer
Katholieke Universiteit Leuven
Publications
Featured research published by Maarten Demeyer.
Journal of Vision | 2009
Maarten Demeyer; Peter De Graef; Johan Wagemans; Karl Verfaillie
Multiple times per second, the visual system succeeds in making a seamless transition between presaccadic and postsaccadic perception. The nature of the transsaccadic representation needed to support this was commonly thought to be sparse and abstract. However, recent studies have suggested that detailed visual information is transferred across saccades as well. Here, we seek to confirm that preview effects of visual detail on postsaccadic perception do indeed occur. We presented subjects with highly similar artificial shapes, preceded by a congruent, an incongruent, or no preview. Postsaccadic recognition performance was measured, while the contrast of presaccadic and postsaccadic stimuli was manipulated independently. The results show that congruent previews provided a benefit to the recognition performance of postsaccadic stimuli, compared to no-preview conditions. Incongruent previews induced a recognition accuracy cost, combined with a recognition speed benefit. A second experiment showed that these effects can disappear when stimulus presentation is interrupted with a postsaccadic visual mask. We conclude that visual detail contained in transsaccadic memory can affect the postsaccadic percept. Furthermore, we find that the transsaccadic representation supporting this process is contrast-independent, but that postsaccadic contrast, through its effect on the reliability of information, can affect the degree to which it is employed.
Journal of Vision | 2010
Maarten Demeyer; Peter De Graef; Johan Wagemans; Karl Verfaillie
Stimulus displacements coinciding with a saccadic eye movement are poorly detected by human observers. In recent years, converging evidence has shown that this phenomenon does not result from poor transsaccadic retention of presaccadic stimulus position information, but from the visual system's efforts to spatially align presaccadic and postsaccadic perception on the basis of visual landmarks. It is known that this process can be disrupted, and transsaccadic displacement detection performance can be improved, by briefly blanking the stimulus display during and immediately after the saccade. In the present study, we investigated whether this improvement could also follow from a discontinuity in the task-irrelevant form of the displaced stimulus. We observed this to be the case: Subjects more accurately identified the direction of intrasaccadic displacements when the displaced stimulus simultaneously changed form, compared to conditions without a form change. However, larger improvements were still observed under blanking conditions. In a second experiment, we show that facilitation induced by form changes and blanks can combine. We conclude that a strong assumption of visual stability underlies the suppression of transsaccadic change detection performance, the rejection of which generalizes from stimulus form to stimulus position.
Behavior Research Methods | 2012
Maarten Demeyer; Bart Machilsen
To study perceptual grouping processes, vision scientists often use stimuli consisting of spatially separated local elements that, together, elicit the percept of a global structure. We developed a set of methods for constructing such displays and implemented them in an open-source MATLAB toolbox, GERT (Grouping Elements Rendering Toolbox). The main purpose of GERT is to embed a contour in a field of randomly positioned elements, while avoiding the introduction of a local density cue. However, GERT’s modular implementation enables the user to create a far greater variety of perceptual grouping displays, using these same methods. A generic rendering engine grants access to a variety of element-drawing functions, including Gabors, Gaussians, letters, and polygons.
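The core idea behind GERT described above can be illustrated with a minimal sketch. This is our own Python illustration of the general technique, not GERT's actual MATLAB API; all function and parameter names here are hypothetical. It embeds contour elements along a circle and then fills the surround by rejection sampling against a minimum spacing, so that contour elements are not locally denser than background elements.

```python
# Conceptual sketch (not GERT's API): embed a circular contour in a field of
# random background elements while enforcing a minimum inter-element spacing,
# so the contour cannot be detected from a local density cue alone.
import math
import random

def make_display(n_contour=20, n_background=80, radius=0.3,
                 min_dist=0.05, seed=1):
    rng = random.Random(seed)
    # Contour elements: equally spaced along a circle centred in the display.
    contour = [(0.5 + radius * math.cos(2 * math.pi * i / n_contour),
                0.5 + radius * math.sin(2 * math.pi * i / n_contour))
               for i in range(n_contour)]
    # Background elements: rejection sampling keeps every new element at
    # least min_dist away from all previously placed elements (contour
    # included), keeping local density roughly uniform across the display.
    elements = list(contour)
    while len(elements) < n_contour + n_background:
        p = (rng.random(), rng.random())
        if all(math.dist(p, q) >= min_dist for q in elements):
            elements.append(p)
    return contour, elements

contour, elements = make_display()
```

In a real stimulus generator each position would then be rendered as a Gabor or other element; the spacing constraint is what removes the proximity confound the abstract refers to.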
Vision Research | 2010
Maarten Demeyer; Peter De Graef; Johan Wagemans; Karl Verfaillie
Through saccadic eye movements, the retinal projection of an extrafoveally glimpsed object can be brought into foveal vision quickly. We investigated what influence visual detail collected before the saccade exerts on the postsaccadic percept. Participants were instructed to saccade towards a peripheral stimulus, and to indicate on a continuum of ellipses with varying aspect ratios which exact shape they had perceived to be present after saccade landing. Compared to both an identical ellipse preview and a qualitatively different square preview, a quantitatively different ellipse preview was observed to shift the mean postsaccadic percept towards the presaccadic aspect ratio parameter value. This integration of subtly different form information was accompanied by an integration of the identity of both stimuli presented: In the great majority of these trials, subjects indicated that they had not noticed the occurrence of a change to the stimulus. When a blank screen preceded the postsaccadic stimulus onset, the influence of presaccadic stimulus information on postsaccadic perception was weaker. An immediate postsaccadic mask, on the other hand, abolished the effect entirely. We conclude that integration of parametric visual form information occurs across saccades, but that it relies on a quickly decaying and maskable visual memory.
PLOS ONE | 2011
Maarten Demeyer; Peter De Graef; Karl Verfaillie; Johan Wagemans
Human observers explore scenes by shifting their gaze from object to object. Before each eye movement, however, a peripheral glimpse of the next object to be fixated has already been caught. Here we investigate whether the perceptual organization extracted from such a preview could guide the perceptual analysis of the same object during the next fixation. We observed that participants were indeed significantly faster at grouping together spatially separate elements into an object contour, when the same contour elements had also been grouped together in the peripheral preview display. Importantly, this facilitation occurred despite a change in the grouping cue defining the object contour (similarity versus collinearity). We conclude that an intermediate-level description of object shape persists in the visual system across gaze shifts, providing it with a robust basis for balancing efficiency and continuity during scene exploration.
Spatial Vision | 2007
Maarten Demeyer; Peter Zaenen; Johan Wagemans
Viewpoint-dependent recognition performance of 3-D objects has often been taken as an indication of a viewpoint-dependent object representation. This viewpoint dependence is most often found using metrically manipulated objects. We investigate whether these results can instead be explained by viewpoint and object property (e.g., curvature) information not being processed independently at a lower level, prior to object recognition itself. Multidimensional signal detection theory offers a useful framework, allowing us to model this as a low-level correlation between the internal noise distributions of viewpoint and object property dimensions. In Experiment 1, we measured these correlations using both Yes/No and adjustment tasks. We found a good correspondence across tasks, but large individual differences. In Experiment 2, we compared these results to the viewpoint dependence of object recognition through a Yes/No categorization task. We found that viewpoint-independent object recognition could not be fully reached using our stimuli, and that the pattern of viewpoint dependence was strongly correlated with the low-level correlations we measured earlier. In part, however, the viewpoint was abstracted despite these correlations. We conclude that low-level correlations do exist prior to object recognition, and can offer an explanation for some viewpoint effects on the discrimination of metrically manipulated 3-D objects.
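The modelling idea in this abstract, correlated internal noise across two perceptual dimensions, can be sketched as follows. This is our own minimal illustration, not the paper's model code; the function name and parameters are hypothetical. Correlated bivariate Gaussian noise is generated from its Cholesky factor, so a judgment along one dimension is contaminated by noise on the other.

```python
# Sketch of the low-level correlation idea: internal noise on the viewpoint
# and curvature dimensions is modelled as bivariate Gaussian noise with
# correlation rho, coupling the two perceptual dimensions.
import math
import random

def observe(viewpoint, curvature, rho, sigma=1.0, rng=random):
    """Return one noisy internal observation of a (viewpoint, curvature)
    stimulus, with noise correlation rho between the two dimensions."""
    # Cholesky factor of the 2x2 correlation matrix [[1, rho], [rho, 1]].
    z1, z2 = rng.gauss(0, 1), rng.gauss(0, 1)
    noise_v = sigma * z1
    noise_c = sigma * (rho * z1 + math.sqrt(1 - rho ** 2) * z2)
    return viewpoint + noise_v, curvature + noise_c
```

With rho = 0, the two dimensions are processed independently; a nonzero rho produces exactly the kind of coupling that could mimic viewpoint-dependent recognition of metrically manipulated shapes.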
Symmetry | 2014
Michaël Sassi; Maarten Demeyer; Johan Wagemans
Integrating shape contours in the visual periphery is vital to our ability to locate objects and thus make targeted saccadic eye movements to efficiently explore our surroundings. We tested whether global shape symmetry facilitates peripheral contour integration and saccade targeting in three experiments, in which observers responded to a successful peripheral contour detection by making a saccade towards the target shape. The target contours were horizontally (Experiment 1) or vertically (Experiments 2 and 3) mirror symmetric. Observers responded by making a horizontal (Experiments 1 and 2) or vertical (Experiment 3) eye movement. Based on an analysis of the saccadic latency and accuracy, we conclude that the figure-ground cue of global mirror symmetry in the periphery has little effect on contour integration or on the speed and precision with which saccades are targeted towards objects. The role of mirror symmetry may be more apparent under natural viewing conditions with multiple objects competing for attention, where symmetric regions in the visual field can pre-attentively signal the presence of objects, and thus attract eye movements.
Vision Research | 2016
Bart Machilsen; Johan Wagemans; Maarten Demeyer
Perceptual grouping processes are typically studied using sparse displays of spatially separated elements. Unless the grouping cue of interest is a proximity cue, researchers will want to ascertain that such a cue is absent from the display. Various solutions to this problem have been employed in the literature; however, no validation of these methods exists. Here, we test a number of local density metrics both through their performance as constrained ideal observer models, and through a comparison with a large dataset of human detection trials. We conclude that for the selection of stimuli without a density cue, the Voronoi density metric is preferable, especially if combined with a measurement of the distance to each element's nearest neighbor. We offer the entirety of the dataset as a benchmark for the evaluation of future, possibly improved, metrics. With regard to human processes of grouping by proximity, we found observers to be insensitive to target groupings that are more sparse than the surrounding distractor elements, and less sensitive to regularity cues in element positioning than to local clusterings of target elements.
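The nearest-neighbor component of such a density check can be sketched in a few lines. This is our own illustration of the general idea, not the paper's code (cell areas for the Voronoi metric itself would typically be computed with a library such as `scipy.spatial.Voronoi`). If target elements sit systematically closer to their neighbors than background elements do, the display contains a density cue.

```python
# Minimal sketch of a nearest-neighbour density check: compare the mean
# nearest-neighbour distance of target elements against that of background
# elements; a difference near zero means no obvious local density cue.
import math

def nn_distances(points):
    """Distance from each point to its nearest other point."""
    return [min(math.dist(p, q) for j, q in enumerate(points) if j != i)
            for i, p in enumerate(points)]

def density_cue(target, background):
    """Mean nearest-neighbour distance of target minus background elements,
    computed within the combined display."""
    d = nn_distances(target + background)
    n = len(target)
    return sum(d[:n]) / n - sum(d[n:]) / len(background)
```

A stimulus-selection loop would reject candidate displays whose cue value exceeds some tolerance, which is the role the abstract assigns to these metrics.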
Journal of Vision | 2014
Michaël Sassi; Maarten Demeyer; Bart Machilsen; Tom Putzeys; Johan Wagemans
Research has shown that contour detection is impaired in the visual periphery for snake-shaped Gabor contours but not for circular and elliptical contours. This discrepancy in findings could be due to differences in intrinsic shape properties, including shape closure and curvature variation, as well as to differences in stimulus predictability and familiarity. In a detection task using only circular contours, the target shape is both more familiar and more predictable to the observer compared with a detection task in which a different snake-shaped contour is presented on each trial. In this study, we investigated the effects of stimulus familiarity and predictability on contour integration by manipulating and disentangling the familiarity and predictability of snakelike stimuli. We manipulated stimulus familiarity by extensively training observers with one particular snake shape. Predictability was varied by alternating trial blocks with only a single target shape and trial blocks with multiple target shapes. Our results show that both predictability and familiarity facilitated contour integration, which constitutes novel behavioral evidence for the adaptivity of the contour integration mechanism in humans. If familiarity or predictability facilitated contour integration in the periphery specifically, this could explain the discrepant findings obtained with snake contours as compared with circles or ellipses. However, we found that their facilitatory effects did not differ between central and peripheral vision and thus cannot explain that particular discrepancy in the literature.
Journal of Vision | 2015
Bart Machilsen; Maarten Demeyer; Naoki Kogo
The perceptual organization of natural scenes requires a global evaluation of spatially separated retinal image features. In the encoding of border-ownership, for instance, information from remote image locations co-determines the belongingness of a boundary to one of the two abutting surfaces. In natural viewing conditions a stable perceptual organization is established in a split-second, despite the low quality and the inherent ambiguity of the retinal input. This quick inference seems only possible through the use of prior knowledge on the structure of both the world and the visual input resulting from that world. Over the past few decades, natural systems analysis has successfully identified local-scale structure in natural images, and has related this structure to locally defined principles of perceptual organization (e.g., good continuation, co-linearity). In the current study, we extract more global scene statistics related to contours, shapes and objects, and examine their role in global-scale organizational processes. First, we record the contrast-polarity of luminance-defined oriented edges in natural images. We then analyze the consistency of contrast-polarity within a pair of spatially separated edges. We find that this consistency statistically depends on the geometrical relationship between the edges. Specifically, the resulting pattern reveals a co-circular organization of edges in natural images. To investigate the source of the observed co-circularity, we apply the same analysis on a subset of edges that do or do not coincide with human-traced object boundaries. We show that co-circularity is mainly preserved when the analysis is constrained on edges that fall on object boundaries, while it is largely reduced for edges that do not coincide with object boundaries. 
We conclude that large-scale statistical regularities in the input can be exploited by the visual system in global processes of perceptual organization in general, and in the computation of border-ownership in particular. Meeting abstract presented at VSS 2015.
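The co-circularity relation analyzed above has a simple geometric test. This is our own sketch, not the authors' analysis pipeline: two oriented edges are tangent to a common circle exactly when their orientations are mirror-symmetric about the line connecting them (collinearity being the limiting case of an infinitely large circle).

```python
# Sketch: test whether two oriented edge elements are co-circular, i.e.
# tangent to a common circle passing through both positions.
import math

def cocircular(p1, theta1, p2, theta2, tol=1e-6):
    # Direction of the line connecting the two edge positions.
    phi = math.atan2(p2[1] - p1[1], p2[0] - p1[0])
    # Edge orientations relative to the connecting line; orientations are
    # undirected, so everything is taken modulo pi.
    a1 = (theta1 - phi) % math.pi
    a2 = (theta2 - phi) % math.pi
    # Mirror symmetry about the connecting line: a1 + a2 == 0 (mod pi).
    s = (a1 + a2) % math.pi
    return min(s, math.pi - s) < tol
```

Tallying how often edge pairs in natural images satisfy this relation, as a function of their geometric configuration, is the kind of statistic the abstract reports as revealing a co-circular organization.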