Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Thomas Waltemate is active.

Publications


Featured research published by Thomas Waltemate.


Virtual Reality Software and Technology | 2015

Realizing a low-latency virtual reality environment for motor learning

Thomas Waltemate; Felix Hülsmann; Thies Pfeiffer; Stefan Kopp; Mario Botsch

Virtual Reality (VR) has the potential to support motor learning in ways exceeding the possibilities provided by real-world environments. New feedback mechanisms can be implemented that support motor learning during the performance of the trainee and afterwards as a performance review. As a consequence, VR environments excel in controlled evaluations, which has been proven in many other application scenarios. However, in the context of motor learning of complex tasks, including full-body movements, questions regarding the main technical parameters of such a system, in particular the required maximum latency, have not been addressed in depth. To fill this gap, we propose a set of requirements for VR systems for motor learning, with a special focus on motion capturing and rendering. We then assess state-of-the-art techniques and technologies for motion capturing and rendering, in order to provide data on latencies for different setups. We focus on the end-to-end latency of the overall system, and present an evaluation of an exemplary system that has been developed to meet these requirements.
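The paper centers on measuring the end-to-end (motion-to-photon) latency of a full tracking-and-rendering pipeline. As a purely illustrative sketch, and not the authors' measurement procedure, the Python snippet below estimates such a latency by cross-correlating a recorded motion signal with the corresponding rendered-feedback signal; the function and variable names are invented for this example, and both signals are assumed to be sampled at the same rate.

    # Hypothetical sketch: estimate end-to-end latency as the lag that best
    # aligns the rendered feedback signal with the tracked motion signal.
    import numpy as np

    def estimate_latency_ms(tracked, rendered, sample_rate_hz):
        """Return the lag (in ms) at which `rendered` best matches `tracked`."""
        tracked = tracked - np.mean(tracked)
        rendered = rendered - np.mean(rendered)
        corr = np.correlate(rendered, tracked, mode="full")
        lags = np.arange(-len(tracked) + 1, len(rendered))
        best_lag = lags[np.argmax(corr)]          # positive: feedback trails motion
        return 1000.0 * best_lag / sample_rate_hz

    # Synthetic check: a 1.5 Hz sine delayed by 12 samples at 240 Hz (= 50 ms).
    rate = 240.0
    t = np.arange(0.0, 5.0, 1.0 / rate)
    motion = np.sin(2.0 * np.pi * 1.5 * t)
    feedback = np.roll(motion, 12)                # simulated pipeline delay
    print(f"estimated latency: {estimate_latency_ms(motion, feedback, rate):.1f} ms")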


Virtual Reality Software and Technology | 2016

The impact of latency on perceptual judgments and motor performance in closed-loop interaction in virtual reality

Thomas Waltemate; Irene Senna; Felix Hülsmann; Marieke Rohde; Stefan Kopp; Marc O. Ernst; Mario Botsch

Latency between a user's movement and visual feedback is inevitable in every Virtual Reality application, as signal transmission and processing take time. Unfortunately, a high end-to-end latency impairs perception and motor performance. While it is possible to reduce feedback delay to tens of milliseconds, these delays will never completely vanish. Currently, there is a gap in the literature regarding the impact of feedback delays on perception and motor performance, as well as on their interplay, in virtual environments employing full-body avatars. With the present study, we address this gap by performing a systematic investigation of different levels of delay across a variety of perceptual and motor tasks during full-body action inside a Cave Automatic Virtual Environment. We presented participants with their virtual mirror image, which responded to their actions with feedback delays ranging from 45 to 350 ms. We measured the impact of these delays on motor performance, sense of agency, sense of body ownership, and simultaneity perception by means of psychophysical procedures. Furthermore, we looked at interaction effects between these aspects to identify possible dependencies. The results show that motor performance and simultaneity perception are affected by latencies above 75 ms. Although sense of agency and body ownership only decline at latencies higher than 125 ms, and deteriorate for latencies greater than 300 ms, they do not break down completely even at the highest tested delay. Interestingly, participants perceptually infer the presence of delays more from their motor error in the task than from the actual level of delay. Whether or not participants notice a delay in a virtual environment might therefore depend on the motor task and their performance rather than on the actual delay.
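Running such a study requires feeding the rendering loop with poses that are artificially held back by a configurable amount. The sketch below is a minimal, assumed implementation of that idea, not the study's code: a small buffer class, here called DelayedPoseFeed, stores timestamped poses and returns the newest pose that is at least the target delay old.

    # Hypothetical sketch: buffer captured poses and release them after a delay.
    import time
    from collections import deque

    class DelayedPoseFeed:
        """Returns the newest pose that is at least `delay_s` seconds old."""

        def __init__(self, delay_s):
            self.delay_s = delay_s
            self.buffer = deque()

        def push(self, pose, timestamp=None):
            stamp = timestamp if timestamp is not None else time.monotonic()
            self.buffer.append((stamp, pose))

        def pull(self, now=None):
            now = now if now is not None else time.monotonic()
            delayed = None
            # Drop everything that has aged past the target delay,
            # keeping the most recent of those poses as the one to display.
            while self.buffer and now - self.buffer[0][0] >= self.delay_s:
                delayed = self.buffer.popleft()[1]
            return delayed    # None until enough history has accumulated

    # Usage: push() each tracking frame and pull() each render frame to drive
    # the mirrored avatar, e.g. DelayedPoseFeed(delay_s=0.125) for 125 ms.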


Virtual Reality Software and Technology | 2017

Fast generation of realistic virtual humans

Jascha Achenbach; Thomas Waltemate; Marc Erich Latoschik; Mario Botsch

In this paper we present a complete pipeline to create ready-to-animate virtual humans by fitting a template character to a point set obtained by scanning a real person using multi-view stereo reconstruction. Our virtual humans are built upon a holistic character model and feature a detailed skeleton, fingers, eyes, teeth, and a rich set of facial blendshapes. Furthermore, due to the careful selection of techniques and technology, our reconstructed humans are quite realistic in terms of both geometry and texture. Since we represent our models as single-layer triangle meshes and animate them through standard skeleton-based skinning and facial blendshapes, our characters can be used in standard VR engines out of the box. By optimizing for computation time and minimizing manual intervention, our reconstruction pipeline is capable of processing whole characters in less than ten minutes.
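The characters are animated with standard skeleton-based skinning and facial blendshapes. The snippet below sketches these two textbook deformation steps (additive blendshape offsets followed by linear blend skinning) in NumPy; the array shapes and function name are assumptions for this example and do not reflect the paper's actual data layout.

    # Hypothetical sketch of standard character deformation:
    # additive facial blendshapes followed by linear blend skinning.
    import numpy as np

    def deform(rest, weights, bone_transforms, blendshapes=None, alphas=None):
        """rest: (N, 3) rest-pose vertices; weights: (N, J) skinning weights;
        bone_transforms: (J, 4, 4); blendshapes: (K, N, 3) offsets; alphas: (K,)."""
        verts = rest.copy()
        if blendshapes is not None and alphas is not None:
            # Blendshapes: add weighted per-vertex offsets to the rest pose.
            verts = verts + np.tensordot(alphas, blendshapes, axes=1)

        # Linear blend skinning: blend the bone transforms per vertex.
        homo = np.concatenate([verts, np.ones((len(verts), 1))], axis=1)   # (N, 4)
        per_bone = np.einsum('jab,nb->jna', bone_transforms, homo)         # (J, N, 4)
        skinned = np.einsum('nj,jna->na', weights, per_bone)               # (N, 4)
        return skinned[:, :3]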


Intelligent Virtual Agents | 2017

The Intelligent Coaching Space: A Demonstration

Iwan de Kok; Felix Hülsmann; Thomas Waltemate; Cornelia Frank; Julian Hough; Thies Pfeiffer; David Schlangen; Thomas Schack; Mario Botsch; Stefan Kopp

Here we demonstrate our Intelligent Coaching Space, an immersive virtual environment in which users learn a motor action (e.g. a squat) under the supervision of a virtual coach. We detail how we assess the ability of the coachee in executing the motor action, how the intelligent coaching space and its features are realized and how the virtual coach leads the coachee through a coaching session.


Computer Animation and Virtual Worlds | 2017

Design and evaluation of reduced marker layouts for hand motion capture

Matthias Schröder; Thomas Waltemate; Jonathan Maycock; Tobias Röhlig; Helge Ritter; Mario Botsch

We present a method for automatically generating reduced marker layouts for marker-based optical motion capture of human hands. The employed motion reconstruction method is based on subspace-constrained inverse kinematics, which allows for the recovery of realistic hand movements even from sparse input data. We additionally present a user-specific hand model calibration procedure that fits an articulated hand model to point cloud data of the user's hand. Our marker layout optimization is sensitive to the kinematic structure and the subspace representations of hand articulations utilized in the reconstruction method, in order to generate sparse marker configurations that are optimal for solving the constrained inverse kinematics problem. We propose specific quality criteria for reduced marker sets that combine numerical stability with geometric feasibility of the resulting layout. These criteria are combined in an objective function that is minimized using a specialized surface-constrained particle swarm optimization scheme, which generates marker layouts bound to the surface of an animated hand model. Our method provides a principled way for determining reduced marker layouts based on subspace representations of hand articulations. We demonstrate the effectiveness of our motion reconstruction and model calibration methods in a thorough evaluation.
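As a rough illustration of how such an objective might be assembled (an assumption-laden stand-in, not the paper's actual criteria), the sketch below scores a candidate marker layout by combining a numerical-stability term, taken here as the condition number of a kinematic Jacobian evaluated for the layout, with a geometric-feasibility penalty for markers placed closer together than a minimum separation; jacobian_at and MIN_MARKER_DISTANCE are hypothetical.

    # Hypothetical sketch: cost of a candidate marker layout combining
    # numerical stability with a geometric-feasibility penalty.
    import numpy as np

    MIN_MARKER_DISTANCE = 0.012   # metres; assumed physical marker size limit

    def layout_cost(layout, jacobian_at, w_stability=1.0, w_feasibility=10.0):
        """layout: (M, 3) candidate marker positions on the hand surface;
        jacobian_at: hypothetical callback returning the IK Jacobian for a layout."""
        # Numerical stability: an ill-conditioned Jacobian makes the IK solve unstable.
        stability = np.linalg.cond(jacobian_at(layout))

        # Geometric feasibility: penalise marker pairs that would physically overlap.
        diffs = layout[:, None, :] - layout[None, :, :]
        dists = np.linalg.norm(diffs, axis=-1)
        iu = np.triu_indices(len(layout), k=1)
        violation = np.clip(MIN_MARKER_DISTANCE - dists[iu], 0.0, None).sum()

        return w_stability * stability + w_feasibility * violation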


Eurographics | 2014

Membrane mapping: combining mesoscopic and molecular cell visualization

Thomas Waltemate; Björn Sommer; Mario Botsch

Three-dimensional cell visualization is an important topic in today's cytology-affiliated community. Cell illustrations and animations are used for scientific as well as for educational purposes. Unfortunately, there exist only a few tools to support the cell modeling process on a molecular level. A major problem is the immense intracellular size variation between relatively large mesoscopic cell components and small molecular membrane patches. This makes both modeling and visualization of whole cells a challenging task. In this paper we propose Membrane Mapping as an interactive tool for combining the mesoscopic and molecular level. Based on instantly computed local parameterizations, we map patches of molecular membrane structures onto user-selected regions of cell components. By designing an efficient and GPU-friendly mapping technique, our approach allows us to visualize and map pre-computed molecular dynamics simulations of membrane patches to mesoscopic structures in real time. This enables the visualization of whole cells on a mesoscopic level with an interactive magnifier tool for inspecting their molecular structure and dynamic behavior.
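A heavily simplified way to picture the core idea, placing a molecular patch onto a region of a mesoscopic surface, is to express the patch in a local tangent frame at a selected surface point. The sketch below does exactly that and nothing more; it stands in for, but does not reproduce, the paper's local parameterization, and all names in it are invented for this example.

    # Hypothetical sketch: place a flat membrane patch at a surface point
    # by expressing it in a local tangent frame (tangent, bitangent, normal).
    import numpy as np

    def tangent_frame(normal):
        """Return unit tangent, bitangent and normal vectors."""
        n = normal / np.linalg.norm(normal)
        helper = np.array([1.0, 0.0, 0.0]) if abs(n[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
        t = np.cross(n, helper)
        t /= np.linalg.norm(t)
        b = np.cross(n, t)
        return t, b, n

    def place_patch(patch_local, surface_point, surface_normal):
        """patch_local: (A, 3) atom positions in patch coordinates (u, v, height)."""
        t, b, n = tangent_frame(surface_normal)
        frame = np.stack([t, b, n], axis=1)            # columns are the local axes
        return patch_local @ frame.T + surface_point   # map into world space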


IEEE Transactions on Visualization and Computer Graphics | 2018

The Impact of Avatar Personalization and Immersion on Virtual Body Ownership, Presence, and Emotional Response

Thomas Waltemate; Dominik Gall; Daniel Roth; Mario Botsch; Marc Erich Latoschik


Virtual Reality Software and Technology | 2017

The effect of avatar realism in immersive social virtual realities

Marc Erich Latoschik; Daniel Roth; Dominik Gall; Jascha Achenbach; Thomas Waltemate; Mario Botsch


Proceedings of the 19th Workshop on the Semantics and Pragmatics of Dialogue | 2015

Demonstrating the Dialogue System of the Intelligent Coaching Space

Iwan de Kok; Julian Hough; Felix Hülsmann; Thomas Waltemate; Mario Botsch; David Schlangen; Stefan Kopp


Archive | 2016

Latency, sensorimotor feedback and virtual agents: Feedback channels for motor learning using the ICSPACE platform

Cornelia Frank; Iwan de Kok; Irene Senna; Thomas Waltemate; Felix Hülsmann; Thies Pfeiffer; Marc O. Ernst; Stefan Kopp; Mario Botsch; Thomas Schack

Collaboration


Dive into Thomas Waltemate's collaborations.

Top Co-Authors

Daniel Roth

University of Würzburg
