
Publications

Featured research published by Vivian C. Paulun.


Journal of Vision | 2017

Shape, motion, and optical cues to stiffness of elastic objects

Vivian C. Paulun; Filipp Schmidt; Jan Jaap R. van Assen; Roland W. Fleming

Nonrigid materials, such as jelly, rubber, or sponge, move and deform in distinctive ways depending on their stiffness. Which cues do we use to infer stiffness? We simulated cubes of varying stiffness and optical appearance (e.g., wood, metal, wax, jelly) being subjected to two kinds of deformation: (a) a rigid cylinder pushing downwards into the cube to various extents (shape change, but little motion: shape dominant); (b) a rigid cylinder retracting rapidly from the cube (same initial shapes, differences in motion: motion dominant). Observers rated the apparent softness/hardness of the cubes. In the shape-dominant condition, ratings mainly depended on how deeply the rod penetrated the cube and were almost unaffected by the cube's intrinsic physical properties. In contrast, in the motion-dominant condition, ratings varied systematically with the cube's intrinsic stiffness and were less influenced by the extent of the perturbation. We find that both results are well predicted by the absolute magnitude of deformation, suggesting that when asked to judge stiffness, observers resort to simple heuristics based on the amount of deformation. Softness ratings for static, unperturbed cubes varied substantially and systematically depending on the optical properties. However, when animated, the ratings were again dominated by the extent of the deformation, and the effect of optical appearance was negligible. Together, our results suggest that to estimate stiffness, the visual system strongly relies on measures of the extent to which an object changes shape in response to forces.


Experimental Brain Research | 2016

Effects of material properties and object orientation on precision grip kinematics.

Vivian C. Paulun; Karl R. Gegenfurtner; Melvyn A. Goodale; Roland W. Fleming

Successfully picking up and handling objects requires taking into account their physical properties (e.g., material) and position relative to the body. Such features are often inferred by sight, but it remains unclear to what extent observers vary their actions depending on the perceived properties. To investigate this, we asked participants to grasp, lift and carry cylinders to a goal location with a precision grip. The cylinders were made of four different materials (Styrofoam, wood, brass and an additional brass cylinder covered with Vaseline) and were presented at six different orientations with respect to the participant (0°, 30°, 60°, 90°, 120°, 150°). Analysis of their grasping kinematics revealed differences in timing and spatial modulation at all stages of the movement that depended on both material and orientation. Object orientation affected the spatial configuration of index finger and thumb during the grasp, but also the timing of handling and transport duration. Material affected the choice of local grasp points and the duration of the movement from the first visual input until release of the object. We find that conditions that make grasping more difficult (orientation with the base pointing toward the participant, high weight and low surface friction) lead to longer durations of individual movement segments and a more careful placement of the fingers on the object.


Vision Research | 2015

Visual search under scotopic lighting conditions

Vivian C. Paulun; Alexander C. Schütz; Melchi Michel; Wilson S. Geisler; Karl R. Gegenfurtner

When we search for visual targets in a cluttered background we systematically move our eyes around to bring different regions of the scene into foveal view. We explored how visual search behavior changes when the fovea is not functional, as is the case in scotopic vision. Scotopic contrast sensitivity is significantly lower overall, with a functional scotoma in the fovea. We found that in scotopic search, for a medium- and a low-spatial-frequency target, individuals made longer lasting fixations that were not broadly distributed across the entire search display but tended to peak in the upper center, especially for the medium-frequency target. The distributions of fixation locations are qualitatively similar to those of an ideal searcher that has human scotopic detectability across the visual field, and interestingly, these predicted distributions are different from those predicted by an ideal searcher with human photopic detectability. We conclude that although there are some qualitative differences between human and ideal search behavior, humans make principled adjustments in their search behavior as ambient light level decreases.


Experimental Brain Research | 2014

Center or side: biases in selecting grasp points on small bars

Vivian C. Paulun; Urs Kleinholdermann; Karl R. Gegenfurtner; Jeroen B. J. Smeets; Eli Brenner

Choosing appropriate grasp points is necessary for successfully interacting with objects in our environment. We brought two possible determinants of grasp point selection into conflict: the attempt to grasp an object near its center of mass to minimize torque and ensure stability, and the attempt to minimize movement distance. We let our participants grasp two elongated objects of different mass and surface friction that were approached from different distances to both sides of the object. Maximizing stability predicts grasp points close to the object’s center, while minimizing movement costs predicts a bias of the grasp axis toward the side at which the movement started. We found smaller deviations from the center of mass for the smooth and heavy object, presumably because the larger torques and more slippery surface of the heavy object increase the chance of unwanted object rotation. However, our right-handed participants tended to grasp the objects to the right of the center of mass, irrespective of where the movement started. The rightward bias persisted when vision was removed once the hand was halfway to the object. It was reduced when the required precision was increased. Starting the movement above the object eliminated the bias. Grasping with the left hand, participants tended to grasp the object to the left of its center. Thus, the selected grasp points seem to reflect a compromise between maximizing stability by grasping near the center of mass and grasping on the side of the acting hand, perhaps to increase visibility of the object.


Journal of Vision | 2017

Inferring the stiffness of unfamiliar objects from optical, shape, and motion cues

Filipp Schmidt; Vivian C. Paulun; Jan Jaap R. van Assen; Roland W. Fleming

Visually inferring the stiffness of objects is important for many tasks but is challenging because, unlike optical properties (e.g., gloss), mechanical properties do not directly affect image values. Stiffness must be inferred either (a) by recognizing materials and recalling their properties (associative approach) or (b) from shape and motion cues when the material is deformed (estimation approach). Here, we investigated interactions between these two inference types. Participants viewed renderings of unfamiliar shapes with 28 materials (e.g., nickel, wax, cork). In Experiment 1, they viewed nondeformed, static versions of the objects and rated 11 material attributes (e.g., soft, fragile, heavy). The results confirm that the optical materials elicited a wide range of apparent properties. In Experiment 2, using a blue plastic material with intermediate apparent softness, the objects were subjected to physical simulations of 12 shape-transforming processes (e.g., twisting, crushing, stretching). Participants rated softness and extent of deformation. Both correlated with the physical magnitude of deformation. Experiment 3 combined variations in optical cues with shape cues. We find that optical cues completely dominate. Experiment 4 included the entire motion sequence of the deformation, yielding significant contributions of optical as well as motion cues. Our findings suggest participants integrate shape, motion, and optical cues to infer stiffness, with optical cues playing a major role for our range of stimuli.


Journal of Vision | 2015

A tetrachromatic display for the spatiotemporal control of rod and cone stimulation

Florian S. Bayer; Vivian C. Paulun; David Weiss; Karl R. Gegenfurtner

We present an apparatus that allows independent stimulation of rods and short (S)-, middle (M)-, and long (L)-wavelength-sensitive cones. Previously presented devices allow rod and cone stimulation independently, but only for a spatially invariant stimulus design (Pokorny, Smithson, & Quinlan, 2004; Sun, Pokorny, & Smith, 2001b). We overcame this limitation by using two spectrally filtered projectors with overlapping projections. This approach allows independent rod and cone stimulation in a dynamic two-dimensional scene with appropriate resolution in the spatial, temporal, and receptor domains. Modulation depths were ±15% for M-cones and L-cones, ±20% for rods, and ±50% for S-cones, all with respect to an equal-energy mesopic background at 3.4 cd/m2. Validation was provided by radiometric measures and behavioral data from two trichromats, one protanope, one deuteranope, and one night-blind observer.
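Receptor-isolating modulations like those above are conventionally computed by silent substitution: the effect of each display primary on each receptor class forms a linear system, and isolating one class means solving for a primary modulation that changes only that receptor's excitation. The sketch below illustrates the principle only; the 4×4 sensitivity matrix is a made-up placeholder, whereas in practice its entries come from radiometric calibration of the projectors against rod and cone spectral sensitivities.

```python
import numpy as np

# Excitation of each receptor class per unit of each display primary.
# Rows: rod, S-cone, M-cone, L-cone; columns: four primaries.
# Illustrative placeholder values, NOT calibration data.
A = np.array([
    [0.9, 0.4, 0.3, 0.1],   # rod
    [0.1, 0.8, 0.1, 0.0],   # S-cone
    [0.2, 0.1, 0.9, 0.5],   # M-cone
    [0.1, 0.0, 0.6, 0.9],   # L-cone
])

def isolating_modulation(A, receptor_index, contrast):
    """Primary modulation that changes only one receptor's excitation.

    Solves A @ dp = target, where target is zero for all receptor
    classes except the one being isolated (silent substitution).
    """
    target = np.zeros(A.shape[0])
    target[receptor_index] = contrast
    return np.linalg.solve(A, target)

# A 20% rod-isolating modulation around the background:
dp = isolating_modulation(A, receptor_index=0, contrast=0.2)
print(A @ dp)  # only the rod channel changes, approximately [0.2, 0, 0, 0]
```

Achievable modulation depths (here ±20% for rods, ±15% for M- and L-cones, ±50% for S-cones) are bounded by the requirement that the background plus `dp` stay within the physical gamut of both projectors.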


Journal of Vision | 2015

Modulation of the Material-Weight Illusion in objects made of more than one material

Vivian C. Paulun; Gavin Buckingham; Karl R. Gegenfurtner; Roland W. Fleming; Melvyn A. Goodale

Knowledge about the material properties of objects is essential for successful manual interactions. Vision can provide useful information about features such as weight or friction even before interaction, allowing us to prepare the action appropriately, e.g., adjusting the initial forces applied to an object when lifting it. But visual information can also alter multisensory perception of object properties during interaction. When violated, visually inferred expectations can result in perceptual illusions such as the material-weight illusion (MWI). In this illusion, an object that appears to be made of a low-weight material (e.g., polystyrene) feels heavier than an equally weighted object of a heavier-looking material (e.g., wood). However, objects are often made of more than one material. Thus, in the present study, we investigated the perceived heaviness of symmetrical objects consisting of two halves that appeared to be made of different materials: polystyrene, wood, or stone. The true mass of these bipartite objects was identical (400 g) and evenly distributed around their geometric centre. Thus, the objects and their halves were visually distinct but identical in terms of their weight and mass distribution. Participants were asked to lift the objects by a small handle attached centrally, while forces and torques were recorded. Additionally, they were asked to report the perceived weight of both halves of the objects. The visual appearance did indeed alter perceived heaviness. Although estimates of the heavier and lighter portions of the objects converged after lifting the objects, the heavier-looking materials in our bipartite objects were still perceived as heavier than the lighter-looking materials. Thus, prior expectations appear to affect perception, but in a direction opposite to that of the MWI. Despite the effects of visual appearance on perceived heaviness, no corresponding effects were observed on forces or torques.
Meeting abstract presented at VSS 2015.


International Conference on Human Haptic Sensing and Touch Enabled Computer Applications | 2018

Influence of Different Types of Prior Knowledge on Haptic Exploration of Soft Objects

Aaron Cedric Zöller; Alexandra Lezkan; Vivian C. Paulun; Roland W. Fleming; Knut Drewing

When estimating the softness of an object by active touch, humans typically indent the object’s surface several times with their finger, applying higher peak indentation forces when they expect to explore harder as compared to softer stimuli [1]. Here, we compared how different types of prior knowledge influence exploratory forces in softness discrimination. On each trial, participants successively explored two silicone rubber stimuli, which were either both relatively soft or both relatively hard, and judged which of the two was softer. We measured peak forces of the first indentation. In the control condition, participants obtained no information about whether the upcoming stimulus pair would be from the hard or the soft category. In three test conditions, participants received implicit (pairs from the same category were blocked), semantic (the words "soft" and "hard"), or visual prior knowledge about the softness category. Visual information was provided by displaying a rendering of a compliant object deformed by a probe. Given implicit information, participants again used significantly more force on their first touch when exploring harder as compared to softer objects. Surprisingly, when given visual information, participants used significantly less force on the first touch when exploring harder objects. There was no effect when participants were given semantic information. We conclude that different types of prior knowledge influence exploration behavior in very different ways. Thus, the mechanisms through which prior knowledge is integrated into the exploration process may be more complex than expected.


Vision Research | 2015

Seeing liquids from static snapshots

Vivian C. Paulun; Takahiro Kawabe; Shin'ya Nishida; Roland W. Fleming


Journal of Vision | 2012

Goop! On the visual perception of fluid viscosity

Roland W. Fleming; Vivian C. Paulun

Collaboration

Dive into Vivian C. Paulun's collaborations.

Top Co-Authors

Melvyn A. Goodale

University of Western Ontario


Eli Brenner

VU University Amsterdam
