Network


Latest external collaboration at the country level.

Hotspot


Dive into the research topics where Ekaterina P. Volkova is active.

Publication


Featured research published by Ekaterina P. Volkova.


PLOS ONE | 2014

The MPI Emotional Body Expressions Database for Narrative Scenarios

Ekaterina P. Volkova; Stephan de la Rosa; Heinrich H. Bülthoff; Betty J. Mohler

Emotion expression in human-human interaction takes place via various types of information, including body motion. Research on the perceptual-cognitive mechanisms underlying the processing of natural emotional body language can benefit greatly from datasets of natural emotional body expressions that facilitate stimulus manipulation and analysis. The existing databases have so far focused on a few emotion categories, which display predominantly prototypical, exaggerated emotion expressions. Moreover, many of these databases consist of video recordings, which limit the ability to manipulate and analyse the physical properties of these stimuli. We present a new database consisting of a large set (over 1400) of natural emotional body expressions typical of monologues. To achieve close-to-natural emotional body expressions, amateur actors narrated coherent stories while their body movements were recorded with motion capture technology. The resulting 3-dimensional motion data, recorded at a high frame rate (120 frames per second), provides fine-grained information about body movements and allows the manipulation of movement on a per-joint basis. For each expression, the database gives the positions and orientations in space of 23 body joints for every frame. We report the results of a physical motion properties analysis and of an emotion categorisation study. The reactions of observers from the emotion categorisation study are included in the database. Moreover, we recorded the intended emotion expression for each motion sequence from the actor to allow for investigations regarding the link between intended and perceived emotions. The motion sequences, along with the accompanying information, are made available in a searchable MPI Emotional Body Expression Database. We hope that this database will enable researchers to study expression and perception of naturally occurring emotional body expressions in greater depth.
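The abstract above describes the shape of each recording: 23 body joints per frame, each with a position and an orientation, sampled at 120 frames per second. The following is a minimal sketch of how such data could be represented in Python; the class and function names are hypothetical illustrations, not the database's actual file format or API.

```python
from dataclasses import dataclass
from typing import List, Tuple

FRAME_RATE_HZ = 120   # capture rate reported for the database
NUM_JOINTS = 23       # body joints recorded per frame

@dataclass
class JointSample:
    """Position and orientation of one body joint in one frame."""
    position: Tuple[float, float, float]                 # (x, y, z)
    orientation: Tuple[float, float, float, float]       # unit quaternion (w, x, y, z)

@dataclass
class MotionFrame:
    """One motion-capture frame: a sample for each of the 23 joints."""
    joints: List[JointSample]

def duration_seconds(frames: List[MotionFrame]) -> float:
    """Length of a motion sequence in seconds at the capture rate."""
    return len(frames) / FRAME_RATE_HZ

# A 2-second sequence at 120 fps contains 240 frames.
rest = JointSample((0.0, 0.0, 0.0), (1.0, 0.0, 0.0, 0.0))
sequence = [MotionFrame([rest] * NUM_JOINTS) for _ in range(240)]
print(duration_seconds(sequence))  # 2.0
```

Because each joint carries an explicit pose rather than rendered pixels, stimuli in this form can be filtered, truncated, or edited per joint, which is the manipulation advantage the abstract contrasts with video recordings.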


Memory & Cognition | 2011

The effect of landmark and body-based sensory information on route knowledge.

Roy A. Ruddle; Ekaterina P. Volkova; Betty J. Mohler; Heinrich H. Bülthoff

Two experiments investigated the effects of landmarks and body-based information on route knowledge. Participants made four out-and-back journeys along a route, guided only on the first outward trip and with feedback every time an error was made. Experiment 1 used 3-D virtual environments (VEs) with a desktop monitor display, and participants were provided with no supplementary landmarks, only global landmarks, only local landmarks, or both global and local landmarks. Local landmarks significantly reduced the number of errors that participants made, but global landmarks did not. Experiment 2 used a head-mounted display; here, participants who physically walked through the VE (translational and rotational body-based information) made 36% fewer errors than did participants who traveled by physically turning but changing position using a joystick. Overall, the experiments showed that participants were less sure of where to turn than which way, and journey direction interacted with sensory information to affect the number and types of errors participants made.


Eurographics | 2010

Short paper: virtual storyteller in immersive virtual environments using fairy tales annotated for emotion states

Ivelina V. Alexandrova; Ekaterina P. Volkova; Uwe Kloos; Heinrich H. Bülthoff; Betty J. Mohler

This paper describes the implementation of an automatically generated virtual storyteller from fairy tale texts which were previously annotated for emotion. In order to gain insight into the effectiveness of our virtual storyteller, we recorded the face, body and voice of an amateur actor and created an actor animation video of one of the fairy tales. We also obtained the actor's annotation of the fairy tale text and used it to create a virtual storyteller video. With these two videos, the virtual storyteller and the actor animation, we conducted a user study to determine the effectiveness of our virtual storyteller at conveying the intended emotions of the actor. Encouragingly, participants performed best (when compared to the intended emotions of the actor) when they marked the emotions of the virtual storyteller. Interestingly, the actor himself was not able to annotate the animated actor video with high accuracy as compared to his annotated text. This suggests that in future work our actors must also annotate their body and facial expressions, not just the text, in order to further investigate the effectiveness of our virtual storyteller. This research is a first step towards using our virtual storyteller in real-time immersive virtual environments.


ACM Symposium on Applied Perception | 2013

Perception of emotional body expressions in narrative scenarios

Ekaterina P. Volkova; Betty J. Mohler; Trevor J. Dodds; Joachim Tesch; Heinrich H. Bülthoff

People use body motion to express and recognise emotions. We investigated whether emotional body expressions can be recognised when they are recorded during natural narration, where actors freely express the emotional colouring of a story being told. We then took only the upper body motion trajectories and presented them to participants in the form of animated stick figures. The observers were asked to categorise the emotions expressed in short motion sequences. The results show that the recognition level of eleven emotions shown via the upper body is significantly above chance, and that the responses to motion sequences are consistent across observers.
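As a point of reference for the "above chance" claim, the baseline for a forced-choice categorisation over eleven emotions can be computed assuming uniform random guessing (the paper itself may use a different chance correction; this is only an illustrative sketch):

```python
# Chance level for categorising one of eleven equally likely emotions,
# assuming a uniform random guess on each trial.
NUM_CATEGORIES = 11
chance_level = 1.0 / NUM_CATEGORIES

print(round(chance_level, 3))  # 0.091, i.e. about 9.1% correct by guessing
```

Any recognition rate reliably above roughly 9% per category would therefore indicate that observers extracted genuine emotional information from the stick-figure motion.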


Frontiers in Psychology | 2014

Emotion categorization of body expressions in narrative scenarios

Ekaterina P. Volkova; Betty J. Mohler; Trevor J. Dodds; Joachim Tesch; Heinrich H. Bülthoff


ACM Transactions on Computer-Human Interaction | 2011

Walking improves your cognitive map in environments that are large-scale and large in extent

Roy A. Ruddle; Ekaterina P. Volkova; Heinrich H. Bülthoff


ACM Transactions on Applied Perception | 2013

Learning to walk in virtual reality

Roy A. Ruddle; Ekaterina P. Volkova; Heinrich H. Bülthoff


North American Chapter of the Association for Computational Linguistics | 2010

Emotional Perception of Fairy Tales: Achieving Agreement in Emotion Annotation of Text

Ekaterina P. Volkova; Betty J. Mohler; Detmar Meurers; Dale Gerdemann; Heinrich H. Bülthoff


ACM Symposium on Applied Perception | 2013

Recognizing your own motions on virtual avatars: is it me or not?

Anna C. Wellerdiek; Markus Leyrer; Ekaterina P. Volkova; Dong-Seon Chang; Betty J. Mohler


Perception | 2011

Integration of visual and auditory stimuli in the perception of emotional expression in virtual characters

Ekaterina P. Volkova; Sally A. Linkenauger; Ivelina V. Alexandrova; Heinrich H. Bülthoff; Betty J. Mohler

Collaboration


Dive into Ekaterina P. Volkova's collaborations.

Top Co-Authors
