Publication


Featured research published by Stephan de la Rosa.


Displays | 2013

Egocentric distance perception in large screen immersive displays

Ivelina V. Piryankova; Stephan de la Rosa; Uwe Kloos; Heinrich H. Bülthoff; Betty J. Mohler

Many studies have demonstrated that, compared to the real world, egocentric distances in head-mounted display virtual environments are underestimated. However, distance perception in large screen immersive displays (LSIDs) has received less attention. We investigate egocentric distance perception in a virtual office room projected in three LSIDs with commonly used technical specifications: a semi-spherical display, the Max Planck Institute (MPI) CyberMotion Simulator cabin, and a flat display. We specifically investigate the roles of distance to the target, stereoscopic projection, and motion parallax in distance perception. We use verbal reports and blind walking as response measures in the real-world experiment; due to the limited space in the three LSIDs, we use only verbal reports as the response measure in the virtual environment. Our results show an overall underestimation of distances in the LSIDs, while verbal estimates of distances are nearly veridical in the real world. Even when the flat LSID provides motion parallax and stereoscopic depth cues, participants estimate the distances to be smaller than intended. Although stereo cues in the flat LSID do increase distance estimates for the nearest distance, their impact is not enough to result in veridical distance perception. Further, we demonstrate that the distance to the target significantly influences the percent error of verbal estimates in both the real and the virtual world. This influence is the same in the real world and in two of the LSIDs, the MPI CyberMotion Simulator cabin and the flat display; in the semi-spherical display, however, we observe a significantly different influence of target distance on verbal estimates of egocentric distances. Finally, we discuss potential reasons for our results, give general suggestions for improving the accuracy of depth perception in LSIDs, and suggest methods to compensate for the underestimation of verbal distance estimates.
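
The percent error of a verbal estimate referred to above has a standard definition; a minimal sketch in Python (the numeric values are invented for illustration, not data from the study):

```python
# Signed percent error of a distance judgment: 100 * (judged - actual) / actual.
# Negative values indicate underestimation. All numbers are illustrative only.

def percent_error(judged_m: float, actual_m: float) -> float:
    """Percent error of a verbal distance estimate (negative = underestimation)."""
    return 100.0 * (judged_m - actual_m) / actual_m

# Example: a 6.0 m target verbally judged to be 4.5 m away in an LSID
print(percent_error(judged_m=4.5, actual_m=6.0))  # -25.0, i.e. 25% underestimation
```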


Applied Perception in Graphics and Visualization | 2010

Egocentric distance judgments in a large screen display immersive virtual environment

Ivelina V. Alexandrova; Paolina T. Teneva; Stephan de la Rosa; Uwe Kloos; Heinrich H. Bülthoff; Betty J. Mohler

People underestimate egocentric distances in head-mounted display virtual environments, compared to estimates made in the real world. Our work investigates whether distances are still compressed in a large screen display immersive virtual environment, where participants are able to see their own body surrounded by the virtual environment. We conducted our experiment both in the real world, using a real room, and in the large screen display immersive virtual environment, using a 3D model of the real room. Our results showed a significant underestimation of verbal reports of egocentric distances in the large screen display immersive virtual environment, while the distance judgments in the real world were closer to veridical. Moreover, we observed a significant effect of distance in both environments. In the real world, closer distances were slightly underestimated, while further distances were slightly overestimated. In contrast, in the virtual environment participants overestimated closer distances (up to 2.5 m) and underestimated distances further than 3 m. A possible reason for this effect of distance in the virtual environment may be that participants perceived stereo cues differently when the target was projected on the floor versus on the front of the large screen.


PLOS ONE | 2014

The MPI Emotional Body Expressions Database for Narrative Scenarios

Ekaterina P. Volkova; Stephan de la Rosa; Heinrich H. Bülthoff; Betty J. Mohler

Emotion expression in human-human interaction takes place via various types of information, including body motion. Research on the perceptual-cognitive mechanisms underlying the processing of natural emotional body language can benefit greatly from datasets of natural emotional body expressions that facilitate stimulus manipulation and analysis. Existing databases have so far focused on a few emotion categories and display predominantly prototypical, exaggerated emotion expressions. Moreover, many of these databases consist of video recordings, which limit the ability to manipulate and analyse the physical properties of the stimuli. We present a new database consisting of a large set (over 1400) of natural emotional body expressions typical of monologues. To achieve close-to-natural emotional body expressions, amateur actors narrated coherent stories while their body movements were recorded with motion capture technology. The resulting three-dimensional motion data, recorded at a high frame rate (120 frames per second), provide fine-grained information about body movements and allow the manipulation of movement on a body-joint basis. For each expression, the database gives the positions and orientations in space of 23 body joints for every frame. We report the results of an analysis of the physical motion properties and of an emotion categorisation study. The responses of observers from the emotion categorisation study are included in the database. Moreover, we recorded the actor's intended emotion expression for each motion sequence to allow for investigations of the link between intended and perceived emotions. The motion sequences, along with the accompanying information, are made available in the searchable MPI Emotional Body Expressions Database. We hope that this database will enable researchers to study the expression and perception of naturally occurring emotional body expressions in greater depth.
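
The per-frame structure described above (23 joints, each with a position and an orientation, sampled at 120 frames per second) can be sketched as a container type; the class and field names below are illustrative, not the database's actual schema:

```python
# Minimal container types for one motion-capture sequence as described in the
# abstract: 23 body joints per frame, each with a 3D position and an
# orientation, sampled at 120 Hz. Names are illustrative, not the real schema.
from dataclasses import dataclass
from typing import List, Tuple

Vec3 = Tuple[float, float, float]         # x, y, z position
Quat = Tuple[float, float, float, float]  # orientation as a quaternion (w, x, y, z)

@dataclass
class Joint:
    name: str
    position: Vec3
    orientation: Quat

@dataclass
class Frame:
    time_s: float        # consecutive frames are 1/120 s apart at 120 fps
    joints: List[Joint]  # exactly 23 entries for this database

@dataclass
class MotionSequence:
    actor_id: str
    intended_emotion: str  # the emotion the actor intended to express
    frames: List[Frame]
```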


Applied Perception in Graphics and Visualization | 2011

The influence of avatar (self and character) animations on distance estimation, object interaction and locomotion in immersive virtual environments

Erin A. McManus; Bobby Bodenheimer; Stephan Streuber; Stephan de la Rosa; Heinrich H. Bülthoff; Betty J. Mohler

Humans have been shown to perceive and perform actions differently in immersive virtual environments (VEs) than in the real world. Immersive VEs often lack the presence of virtual characters; users are rarely presented with a representation of their own body and have little to no experience with other human avatars/characters. However, virtual characters and avatars are increasingly being used in immersive VEs. In a two-phase experiment, we investigated the impact of seeing an animated character or a self-avatar in a head-mounted display VE on task performance. In particular, we examined performance on three different behavioral tasks in the VE. In a learning phase, participants saw either a character animation or an animation of a cone. In the task performance phase, we varied whether participants saw a co-located animated self-avatar. Participants performed a distance estimation task, an object interaction task, and a stepping stone locomotion task within the VE. We find no impact of a character animation or a self-avatar on distance estimates. We find that both the animation and the self-avatar influenced performance on the tasks that involved interaction with elements in the environment: the object interaction and stepping stone tasks. Overall, participants performed the tasks faster and more accurately when they either had a self-avatar or saw a character animation. The results suggest that including character animations or self-avatars before or during task execution is beneficial to performance on some common interaction tasks within the VE. Finally, we see that in all cases (even without seeing a character or self-avatar animation) participants learned to perform the tasks more quickly and/or more accurately over time.


Experimental Brain Research | 2015

Objects exhibit body model like shape distortions

Aurelie Saulton; Trevor J. Dodds; Heinrich H. Bülthoff; Stephan de la Rosa

Accurate knowledge about the size and shape of the body derived from somatosensation is important for locating one's own body in space. The internal representation of these body metrics (the body model) has been assessed by contrasting the distortions of participants' body estimates across two types of tasks (localization task vs. template matching task). Here, we examined to what extent this contrast is linked to the human body. We compared participants' shape estimates of their own hand and of non-corporeal objects (a rake, a post-it pad, a CD box) between a localization task and a template matching task. While most items were perceived accurately in the visual template matching task, they appeared distorted in the localization task. All items' distortions were characterized by a larger underestimation of length than of width. This pattern of distortion was maintained across orientations for the rake only, suggesting that the biases measured on the rake were bound to an item-centric reference frame; this was previously assumed to be the case only for the hand. Although similar results can be found for non-corporeal items and the hand, the hand appears significantly more distorted than the other items in the localization task. Therefore, we conclude that the magnitude of the distortions measured in the localization task is specific to the hand. Our results are in line with the idea that the localization task for the hand measures contributions of both an implicit body model, which is not utilized in landmark localization with objects, and other factors that are common to objects and the hand.
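
One way to make the length-versus-width comparison concrete is to express the judged extent of an item along each axis as a percentage of its actual extent; a minimal sketch of that convention (the coordinates are invented, and this is not the study's analysis code):

```python
# Judged extent relative to actual extent along one axis, as a percentage.
# Values below 100 indicate underestimation. All coordinates are invented.
import numpy as np

def extent_ratio(judged: np.ndarray, actual: np.ndarray) -> float:
    """Judged extent as a percentage of actual extent along one axis."""
    return 100.0 * (judged.max() - judged.min()) / (actual.max() - actual.min())

# Hypothetical landmark coordinates (cm) along the length and width axes
actual_len = np.array([0.0, 18.0]); judged_len = np.array([0.0, 13.0])
actual_wid = np.array([0.0, 8.0]);  judged_wid = np.array([0.0, 7.4])

print(extent_ratio(judged_len, actual_len))  # ~72 -> strong length underestimation
print(extent_ratio(judged_wid, actual_wid))  # ~93 -> milder width underestimation
```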


PLOS ONE | 2014

Putting Actions in Context: Visual Action Adaptation Aftereffects Are Modulated by Social Contexts

Stephan de la Rosa; Stephan Streuber; Martin A. Giese; Heinrich H. Bülthoff; Cristóbal Curio

The social context in which an action is embedded provides important information for the interpretation of the action. Is this social context integrated during the visual recognition of an action? We used a behavioural visual adaptation paradigm to address this question and measured participants' perceptual bias for a test action after they were adapted to one of two adaptors (adaptation after-effect). The action adaptation after-effect was measured for the same set of adaptors in two different social contexts. Our results indicate that the size of the adaptation effect varied with social context (social context modulation) although the physical appearance of the adaptors remained unchanged. Three additional experiments provided evidence that the observed social context modulation of the adaptation effect is due to the adaptation of visual action recognition processes. We found that adaptation is critical for the social context modulation (experiment 2). Moreover, the effect is not mediated by the emotional content of the action alone (experiment 3), and visual information about the action seems to be critical for the emergence of action adaptation effects (experiment 4). Taken together, these results suggest that processes underlying visual action recognition are sensitive to the social context of an action.
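
Adaptation after-effects of this kind are commonly quantified as a shift in the point of subjective equality (PSE) of a psychometric function fitted to categorization responses; a minimal sketch of that general convention (invented data, not the paper's analysis):

```python
# Fit a logistic psychometric function to the proportion of one response along
# a test-action morph continuum; the after-effect is the PSE shift between the
# two adaptor conditions. All data below are invented for illustration.
import numpy as np
from scipy.optimize import curve_fit

def logistic(x, pse, slope):
    return 1.0 / (1.0 + np.exp(-slope * (x - pse)))

morph = np.linspace(0.0, 1.0, 7)  # morph level of the ambiguous test action
p_after_adaptor_a = np.array([0.02, 0.05, 0.15, 0.40, 0.75, 0.93, 0.99])
p_after_adaptor_b = np.array([0.05, 0.18, 0.45, 0.72, 0.90, 0.97, 1.00])

(pse_a, _), _ = curve_fit(logistic, morph, p_after_adaptor_a, p0=[0.5, 10.0])
(pse_b, _), _ = curve_fit(logistic, morph, p_after_adaptor_b, p0=[0.5, 10.0])
print(pse_a - pse_b)  # the adaptation after-effect expressed as a PSE shift
```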


Journal of Experimental Psychology: Human Perception and Performance | 2011

Visual object detection, categorization, and identification tasks are associated with different time courses and sensitivities.

Stephan de la Rosa; R. Choudhery; A. Chatziastros

Recent evidence suggests that the recognition of an object's presence and its explicit recognition are temporally closely related. Here we re-examined the time course (using a fine and a coarse temporal resolution) and the sensitivity of three possible component processes of visual object recognition. In particular, participants saw briefly presented (Experiments I to III) or noise-masked (Experiment IV) static images of objects and non-object textures. Participants reported the presence of an object, its basic level category, and its subordinate category, while we measured recognition performance by means of accuracy and reaction times. All three recognition tasks were clearly separable in terms of their time course and sensitivity. Finally, the use of a coarser temporal sampling of presentation times decreased performance differences between the detection and basic level categorization tasks, suggesting that fine temporal sampling is important for dissociating recognition performance. Overall, the three probed recognition processes were associated with different time courses and sensitivities.
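
Sensitivity in detection and categorization tasks of this kind is conventionally expressed as d' from signal detection theory; a minimal sketch of that convention (the paper's exact measure may differ, and the rates below are invented):

```python
# d' = z(hit rate) - z(false-alarm rate); larger values mean higher sensitivity.
# Rates of exactly 0 or 1 are nudged inward so the z-transform stays finite.
from scipy.stats import norm

def d_prime(hit_rate: float, fa_rate: float, eps: float = 1e-3) -> float:
    hit_rate = min(max(hit_rate, eps), 1.0 - eps)
    fa_rate = min(max(fa_rate, eps), 1.0 - eps)
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

# Invented example: detection vs. subordinate identification at one presentation time
print(d_prime(0.95, 0.10))  # detection: higher sensitivity
print(d_prime(0.70, 0.20))  # identification: lower sensitivity
```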


Acta Psychologica | 2016

The role of visual similarity and memory in body model distortions

Aurelie Saulton; Matthew R. Longo; Hong Yu Wong; Heinrich H. Bülthoff; Stephan de la Rosa

Several studies have shown that the perception of one's own hand size is distorted in proprioceptive localization tasks. It has been suggested that those distortions mirror somatosensory anisotropies. Recent research suggests that non-corporeal items also show some spatial distortions. To investigate the psychological processes underlying the localization task, we examined the influence of visual similarity and memory on the distortions observed for corporeal and non-corporeal items. In experiment 1, participants indicated the location of landmarks on their own hand, a rubber hand (rated as most similar to the real hand), and a rake (rated as least similar to the real hand). Results show no significant differences between rake and rubber hand distortions, but both items were significantly less distorted than the hand. Experiments 2 and 3 explored the role of memory in spatial distance judgments of the hand, the rake, and the rubber hand. The spatial representations of items measured in experiments 2 and 3 were also distorted but tended to be smaller than in the localization task. While memory and visual similarity seem to help explain the qualitative similarities in distortions between the hand and non-corporeal items, those factors cannot explain the larger magnitude of the hand distortions.


PLOS ONE | 2013

The Structure of Conscious Bodily Self-Perception during Full-Body Illusions

Martin Dobricki; Stephan de la Rosa

Previous research suggests that bodily self-identification, bodily self-localization, agency, and the sense of being present in space are critical aspects of conscious full-body self-perception. However, none of the existing studies have investigated the relationship of these aspects to each other, i.e., whether they can be identified as distinguishable components of the structure of conscious full-body self-perception. Therefore, the objective of the present investigation was to elucidate this structure. We performed two studies in which we stroked the back of healthy individuals for three minutes while they watched the back of a distant virtual body being synchronously stroked with a virtual stick. After visuo-tactile stimulation, participants assessed changes in their bodily self-perception with a custom-made self-report questionnaire. In the first study, we investigated the structure of conscious full-body self-perception by analyzing the questionnaire responses by means of multidimensional scaling combined with cluster analysis. In the second study, we extended the questionnaire and validated the stability of the structure found in the first study within a larger sample of individuals by performing a principal components analysis of the questionnaire responses. The results of the two studies converge in suggesting that the structure of conscious full-body self-perception consists of three distinct components: bodily self-identification, space-related self-perception (spatial presence), and agency.
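
The two analyses named above follow a standard pattern: embed item dissimilarities with multidimensional scaling and cluster the embedding, then run a principal components analysis on the raw responses. A minimal sketch with scikit-learn and SciPy on fabricated questionnaire data (this illustrates the general method, not the study's code or data):

```python
# MDS + hierarchical clustering of questionnaire items, then PCA of responses.
# The response matrix is fabricated: 40 participants x 12 items on a 1-7 scale.
import numpy as np
from sklearn.manifold import MDS
from sklearn.decomposition import PCA
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist, squareform

rng = np.random.default_rng(0)
responses = rng.integers(1, 8, size=(40, 12)).astype(float)

# Study-1-style analysis: MDS on item-item dissimilarities, then clustering
item_dist = squareform(pdist(responses.T, metric="correlation"))
mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
embedding = mds.fit_transform(item_dist)
clusters = fcluster(linkage(embedding, method="ward"), t=3, criterion="maxclust")

# Study-2-style analysis: principal components analysis of the raw responses
pca = PCA(n_components=3).fit(responses)
print(clusters)                       # cluster assignment of each item
print(pca.explained_variance_ratio_)  # variance captured by each component
```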


Experimental Brain Research | 2011

The effect of social context on the use of visual information

Stephan Streuber; Günther Knoblich; Natalie Sebanz; Heinrich H. Bülthoff; Stephan de la Rosa

Social context modulates action kinematics, but less is known about whether it also affects the use of task-relevant visual information. We tested this by examining whether the instruction to play table tennis competitively or cooperatively affected the kind of visual cues necessary for successful table tennis performance. In two experiments, participants played table tennis in a dark room with only the ball, net, and table visible. Visual information about both players' actions was manipulated by means of self-glowing markers. We recorded the number of successful passes for each player individually. The results showed that participants' performance increased when their own body was rendered visible, in both the cooperative and the competitive condition. However, social context modulated the importance of different sources of visual information about the other player. In the cooperative condition, seeing the other player's racket had the largest effect on performance, whereas in the competitive condition, seeing the other player's body resulted in the largest performance increase. These results suggest that social context selectively modulates the use of visual information about others' actions in social interactions.
