Maarten W. A. Wijntjes
Delft University of Technology
Publications
Featured research published by Maarten W. A. Wijntjes.
Journal of Vision | 2010
Maarten W. A. Wijntjes; Sylvia C. Pont
It has recently been shown that an increase of the relief height of a glossy surface positively correlates with the perceived level of gloss (Y.-H. Ho, M. S. Landy, & L. T. Maloney, 2008). In the study presented here we investigated whether this relation could be explained by the finding that glossiness perception correlates with the skewness of the luminance histogram (I. Motoyoshi, S. Nishida, L. Sharan, & E. H. Adelson, 2007). First, we formally derived a general relation between the depth range of a Lambertian surface, the illumination direction, and the associated image intensity transformation. From this intensity transformation we could numerically simulate the relation between relief stretch and the skewness statistic. This relation predicts that skewness increases with increasing surface depth. Furthermore, it predicts that the correlation between skewness and illumination can be either positive or negative, depending on the depth range. We experimentally tested whether changes in the depth range and illumination direction alter the appearance. We indeed found a convincingly strong illusory gloss effect on stretched Lambertian surfaces. However, the results could not be fully explained by the skewness hypothesis. We reinterpreted our results in the context of the bas-relief ambiguity (P. N. Belhumeur, D. J. Kriegman, & A. L. Yuille, 1999) and showed that this model qualitatively predicts illusory highlights at locations that differ increasingly from the actual specular highlight locations as the illumination direction changes.
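The derivation itself is not reproduced here, but the skewness statistic at the core of the argument is easy to explore numerically. The sketch below is an illustration, not the authors' simulation: it renders a randomly generated smooth height field as a Lambertian surface under an oblique distant light, stretches its depth range, and reports the skewness of the resulting luminance histogram. The surface, light direction, and stretch factors are all made up for the example.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from scipy.stats import skew

rng = np.random.default_rng(0)

def lambertian_image(height, light_dir):
    """Render a Lambertian image of a height field z(x, y) under a distant light."""
    zy, zx = np.gradient(height)                      # surface slopes along y and x
    normals = np.dstack([-zx, -zy, np.ones_like(height)])
    normals /= np.linalg.norm(normals, axis=2, keepdims=True)
    light = np.asarray(light_dir, dtype=float)
    light /= np.linalg.norm(light)
    return np.clip(normals @ light, 0.0, None)        # max(0, n . l)

# Random smooth relief, illuminated obliquely (direction chosen for illustration only).
base_relief = gaussian_filter(rng.standard_normal((256, 256)), sigma=8)
light = (-1.0, 1.0, 1.5)

for stretch in (0.5, 1.0, 2.0, 4.0):
    image = lambertian_image(stretch * base_relief, light)
    print(f"depth stretch {stretch:>3}: luminance skewness = {skew(image.ravel()):+.2f}")
```

Varying the stretch factors and the light direction in such a toy rendering shows how the sign and size of the histogram skewness depend on the depth range, which is the kind of relation the abstract describes.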
Experimental Brain Research | 2009
Maarten W. A. Wijntjes; Robert Volcic; Sylvia C. Pont; Jan J. Koenderink; Astrid M. L. Kappers
We studied the influence of haptics on visual perception of three-dimensional shape. Observers were shown pictures of an oblate spheroid in two different orientations. A gauge-figure task was used to measure their perception of the global shape. In the first two sessions only vision was used. The results showed that observers made large errors and interpreted the oblate spheroid as a sphere. They also mistook the rotated oblate spheroid for a prolate spheroid. In two subsequent sessions observers were allowed to touch the stimulus while performing the task. The visual input remained unchanged: the observers were looking at the picture and could not see their hands. The results revealed that observers perceived a shape that was different from the vision-only sessions and closer to the veridical shape. Whereas, in general, vision is subject to ambiguities that arise from interpreting the retinal projection, our study shows that haptic input helps to disambiguate and reinterpret the visual input more veridically.
Acta Psychologica | 2009
Robert Volcic; Maarten W. A. Wijntjes; Astrid M. L. Kappers
The nature of reference frames involved in haptic spatial processing was addressed by means of a haptic mental rotation task. Participants assessed the parity of two objects placed at various spatial locations by exploring them with different hand orientations. The resulting response times were fitted with a triangle wave function. Phase shifts were found to depend on the relation between the hands and the objects, and between the objects and the body. We rejected the possibility that a single reference frame drives spatial processing. Instead, we found evidence of multiple interacting reference frames, with the hand-centered reference frame playing the dominant role. We propose that a weighted average of the allocentric, the hand-centered, and the body-centered reference frames influences the haptic encoding of spatial information. In addition, we showed that previous results can be reinterpreted within the framework of multiple reference frames. This mechanism has proved to be ubiquitously present in haptic spatial processing.
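For concreteness, a triangle wave fit of this kind can be set up in a few lines. The sketch below is only a schematic reconstruction under assumed conventions (a 360° period and a linear rise of response time with effective angular disparity); the response times are invented for illustration, and the fitted phase shift is the quantity the study interprets.

```python
import numpy as np
from scipy.optimize import curve_fit

def triangle_wave(angle_deg, baseline, amplitude, phase_deg):
    """Triangle wave with a 360-degree period: response time is lowest at the fitted
    phase and peaks where the effective angular disparity reaches 180 degrees."""
    disparity = np.abs(((angle_deg - phase_deg) + 180.0) % 360.0 - 180.0)
    return baseline + amplitude * disparity / 180.0

# Hypothetical data: mean response times (s) per angular difference between the objects.
angles = np.arange(0, 360, 45)
mean_rt = np.array([1.1, 1.4, 1.9, 2.3, 2.6, 2.2, 1.8, 1.5])

params, _ = curve_fit(triangle_wave, angles, mean_rt, p0=[1.0, 1.5, 0.0])
baseline, amplitude, phase = params
print(f"baseline = {baseline:.2f} s, amplitude = {amplitude:.2f} s, phase shift = {phase:.1f} deg")
```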
Experimental Brain Research | 2010
Robert Volcic; Maarten W. A. Wijntjes; Erik C. Kool; Astrid M. L. Kappers
The simple experience of a coherent percept while looking and touching an object conceals an intriguing issue: different senses encode and compare information in different modality-specific reference frames. We addressed this problem in a cross-modal visuo-haptic mental rotation task. Two objects in various orientations were presented at the same spatial location, one visually and one haptically. Participants had to identify the objects as same or different. The relative angle between viewing direction and hand orientation was manipulated (Aligned versus Orthogonal). In an additional condition (Delay), a temporal delay was introduced between haptic and visual explorations while the viewing direction and the hand orientation were orthogonal to each other. Whereas the phase shift of the response time function was close to 0° in the Aligned condition, we observed a consistent phase shift in the hand’s direction in the Orthogonal condition. A phase shift, although reduced, was also found in the Delay condition. Counterintuitively, these results mean that seen and touched objects do not need to be physically aligned for optimal performance to occur. The present results suggest that the information about an object is acquired in separate visual and hand-centered reference frames, which directly influence each other and which combine in a time-dependent manner.
Vision Research | 2015
Dicle N. Dövencioğlu; Maarten W. A. Wijntjes; Ohad Ben-Shahar; Katja Doerschner
In dynamic scenes, relative motion between the object, the observer, and/or the environment projects as dynamic visual information onto the retina (optic flow) that facilitates 3D shape perception. When the object is diffusely reflective, e.g., a matte painted surface, this optic flow is directly linked to object shape, a property found at the foundations of most traditional shape-from-motion (SfM) schemes. When the object is specular, the corresponding specular flow is related to shape curvature, a regime change that challenges the visual system to determine concurrently both the shape and the distortions of the (sometimes unknown) environment reflected from its surface. While human observers are able to judge the global 3D shape of most specular objects, shape-from-specular-flow (SFSF) is not veridical. In fact, recent studies have also shown systematic biases in the perceived motion of such objects. Here we focus on the perception of local shape from specular flow and compare it to that of matte-textured rotating objects. Observers judged local surface shape by adjusting a rotation- and scale-invariant shape index probe. Compared to shape judgments of static objects, we find that object motion decreases intra-observer variability in local shape estimation. Moreover, object motion introduces systematic changes in perceived shape between matte-textured and specular conditions. Taken together, this study provides new insight into the contribution of motion and surface material to local shape perception.
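The shape index probe presumably refers to the rotation- and scale-invariant shape index defined from the two principal curvatures, in the style of Koenderink and van Doorn. A minimal sketch of that quantity, using one common sign convention and illustrative curvature values, is given below.

```python
import numpy as np

def shape_index(k1, k2):
    """Shape index from principal curvatures, with convex curvatures taken as positive.
    Under this convention it runs from -1 (spherical cup) through 0 (symmetric saddle)
    to +1 (spherical cap), and is invariant to rotation and uniform scaling."""
    k1, k2 = np.maximum(k1, k2), np.minimum(k1, k2)   # enforce k1 >= k2
    return (2.0 / np.pi) * np.arctan2(k1 + k2, k1 - k2)

# Illustrative local patches: a spherical cap, a cylindrical ridge, and a symmetric saddle.
for label, (k1, k2) in {"cap": (1.0, 1.0), "ridge": (1.0, 0.0), "saddle": (1.0, -1.0)}.items():
    print(f"{label:>6}: shape index = {shape_index(k1, k2):+.2f}")
```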
ACM Transactions on Applied Perception | 2010
Maarten W. A. Wijntjes; Sylvia C. Pont
Although there has recently been a large increase in commercial 3D applications, relatively little is known about the quantitative perceptual improvement from binocular disparity. In this study we developed a method to measure the perceived relative depth structure of natural scenes. Observers were instructed to adjust the direction of a virtual pointer from one object to another. The pointing data was used to reconstruct the relative logarithmic depths of the objects in pictorial space. The results showed that the relative depth structure is more similar between observers for stereo images than for mono images in two out of three scenes. A similar result was found for the depth range: for the same two scenes the stereo images were perceived as having more depth than the monocular images. In addition, our method allowed us to determine the subjective center of projection. We found that the pointing settings fitted the reconstructed depth best for substantially wider fields of view than the veridical center of projection for both mono and stereo images. The results indicate that the improvement from binocular disparity depends on the scene content: scenes with sufficient monocular information may not profit much from binocular disparity.
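The full reconstruction in the paper involves logarithmic depths in pictorial space and a fitted center of projection, which is beyond a short example. As a simplified sketch of the underlying idea only, the code below treats each pointing setting as a constraint on the depth difference between two objects and recovers relative depths from the over-determined set of pairwise constraints by least squares; all object indices and depth differences are hypothetical.

```python
import numpy as np

# Hypothetical pointing settings: for each ordered pair (i, j) the observer aimed a
# virtual pointer at object j from object i; the pointer's slant out of the picture
# plane, combined with the objects' image separation, yields a perceived depth
# difference d_j - d_i (arbitrary units). Values below are made up for illustration.
constraints = [
    # (i, j, perceived depth difference d_j - d_i)
    (0, 1, 2.1),
    (1, 2, 1.8),
    (0, 2, 4.2),
    (2, 3, -0.5),
    (1, 3, 1.1),
]
n_objects = 4

# Each constraint is one row of a linear system: d_j - d_i = delta.
A = np.zeros((len(constraints), n_objects))
b = np.zeros(len(constraints))
for row, (i, j, delta) in enumerate(constraints):
    A[row, i], A[row, j] = -1.0, 1.0
    b[row] = delta

# Depths are only defined up to a common offset, so pin object 0 at depth zero.
A = np.vstack([A, np.eye(1, n_objects)])
b = np.append(b, 0.0)

depths, *_ = np.linalg.lstsq(A, b, rcond=None)
print("relative depths:", np.round(depths - depths[0], 2))
```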
Experimental Brain Research | 2009
Maarten W. A. Wijntjes; Astrid M. L. Kappers
It is known that our senses are influenced by contrast effects and aftereffects. For haptic perception, the curvature aftereffect has been studied in depth but little is known about curvature contrast. In this study we let observers explore two shapes simultaneously. The shape felt by the index finger could either be flat or convexly curved. The curvature at the thumb was varied to quantify the curvature of a subjectively flat shape. We found that when the index finger was presented with a convex shape, a flat shape at the thumb was also perceived to be convex. The effect is rather strong, on average 20% of the contrasting curvature. The contrast effect was present for both raised line stimuli and solid shapes. Movement measurements revealed that the curvature of the path taken by the metacarpus (part of the hand that connects the fingers) was approximately the average of the path curvatures taken by the thumb and index finger. A failure to correct for the movement of the hand could explain the contrast effect.
i-Perception | 2013
Harold T. Nefs; Arthur van Bilsen; Sylvia C. Pont; Huib de Ridder; Maarten W. A. Wijntjes; Andrea J. van Doorn
In this paper, we focus on how people perceive the aspect ratio of city squares. Earlier research has focused on distance perception but not so much on the perceived aspect ratio of the surrounding space. Furthermore, those studies have focused on “open” spaces rather than urban areas enclosed by walls and houses and filled with people, cars, etc. In two experiments, we therefore measured, using a direct and an indirect method, the perceived aspect ratio of five city squares in the historic city center of Delft, the Netherlands. We also evaluated whether the perceived aspect ratio of a city square was affected by the position of the observer on the square. In the first experiment, participants were asked to set the aspect ratio of a small rectangle such that it matched the perceived aspect ratio of the city square. In the second experiment, participants were asked to estimate the length and width of the city square separately. In the first experiment, we found that the perceived aspect ratio was in general lower than the physical aspect ratio. However, in the second experiment, we found that the calculated ratios were close to veridical, except for the most elongated city square. We conclude therefore that the outcome depends on how the measurements are performed. Furthermore, although the indirect measurements are nearly veridical, the perceived aspect ratio is an underestimation of the physical aspect ratio when measured in a direct way. Moreover, the perceived aspect ratio also depends on the location of the observer. These results may be beneficial to the design of large open urban environments, and in particular to rectangular city squares.
Cognition | 2018
Maarten W. A. Wijntjes; Ruth Rosenholtz
Object recognition is often conceived of as proceeding by segmenting an object from its surround, then integrating its features. In turn, peripheral vision's sensitivity to clutter, known as visual crowding, has been framed as due to a failure to restrict that integration to features belonging to the object. We hand-segment objects from their background, and find that, rather than helping peripheral recognition, this impairs it compared to viewing the object in its real-world context. Context is in fact so important that it alone (no visible target object) is just as informative, in our experiments, as seeing the object alone. Finally, we find no advantage to separately viewing the context and segmented object. These results, taken together, suggest that we should not think of recognition as ideally operating on pre-segmented objects, nor of crowding as the failure to do so.
i-Perception | 2017
Maarten W. A. Wijntjes
The plastic effect has historically been used to denote various forms of stereopsis. The vivid impression of depth often associated with binocular stereopsis can also be achieved in other ways, for example, using a synopter. Accounts of this go back over a hundred years. These ways of viewing all aim to diminish the sensorial evidence that the picture is physically flat. Although various viewing modes have been proposed in the literature, their effects have never been compared. In the current study, we compared three viewing modes: monocular blur, synoptic viewing, and free viewing (using a placebo synopter). By designing a physical embodiment that was indistinguishable across the three experimental conditions, we kept observers naïve with respect to the differences between them. In total, 197 observers participated in an experiment in which the three viewing modes were compared using a rating task. Results indicate that synoptic viewing causes the largest plastic effect. Monocular blur scores lower than synoptic viewing but is still rated significantly higher than the baseline conditions. The results strengthen the idea that synoptic viewing is not due to a placebo effect. Furthermore, monocular blur has been verified for the first time as a way of experiencing the plastic effect, although the effect is smaller than that of synoptic viewing. We discuss the results with respect to the theoretical basis for the plastic effect. We show that current theories are not described in sufficient detail to explain the differences we found.