Publication


Featured research published by Dominik Rausch.


Symposium on 3D User Interfaces | 2014

Reorientation in virtual environments using interactive portals

Sebastian Freitag; Dominik Rausch; Torsten W. Kuhlen

Real walking is the most natural method of navigation in virtual environments. However, physical space limitations often prevent or complicate its continuous use. Thus, many real walking interfaces, among them redirected walking techniques, depend on a reorientation technique that redirects the user away from physical boundaries when they are reached. Existing reorientation techniques, however, typically either actively interrupt the user or depend on rotation gains that can lead to simulator sickness. In our approach, the user is reoriented using portals. While one portal is placed automatically to guide the user to a safe position, she controls the target selection and physically walks through the portal herself to perform the reorientation. In a formal user study we show that the method does not cause additional simulator sickness, and that participants walk more than with point-and-fly navigation or teleportation, at the expense of longer completion times.


Eurographics | 2011

Bimanual haptic simulator for medical training: system architecture and performance measurements

Sebastian Ullrich; Dominik Rausch; Torsten W. Kuhlen

In this paper we present a simulator for two-handed haptic interaction. As an application example, we chose a medical scenario that requires simultaneous interaction with a hand and a needle on a simulated patient. The system combines bimanual haptic interaction with a physics-based soft tissue simulation. To our knowledge, the combination of finite element methods for the simulation of deformable objects with haptic rendering is seldom addressed, especially with two haptic devices in a non-trivial scenario. The challenges are to find a balance between real-time constraints and the high computational demands of simulation fidelity, and to synchronize data between system components. The system has been successfully implemented and tested on two different hardware platforms: one mobile, running on a laptop, and one stationary, running on a semi-immersive VR system. These two platforms were chosen to demonstrate scalability in terms of fidelity and cost. To compare performance and estimate latency, we measured the timings of update loops and logged event-based timings of several components in the software.


Symposium on Spatial User Interaction | 2015

BlowClick: A Non-Verbal Vocal Input Metaphor for Clicking

Daniel Zielasko; Sebastian Freitag; Dominik Rausch; Yuen C. Law; Benjamin Weyers; Torsten W. Kuhlen

In contrast to the widespread use of 6-DOF pointing devices, free-hand user interfaces in Immersive Virtual Environments (IVE) are non-intrusive. However, for gesture interfaces, the definition of trigger signals is challenging. Mechanical devices, dedicated trigger gestures, and speech recognition are commonly used options, but each comes with its own drawbacks. In this paper, we present an alternative approach that allows events to be triggered precisely and with low latency using microphone input. In contrast to speech recognition, the user only blows into the microphone. The audio signature of such blow events can be recognized quickly and precisely. The results of a user study show that the proposed method allows users to successfully complete a standard selection task and performs better than expected against a standard interaction device, the Flystick.
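
Blow-event detection of this kind can be approximated with simple short-time audio features. The sketch below is an illustrative stand-in, not the paper's actual classifier; the feature choice and threshold values are assumptions. A blow produces a noise-like signal, so high energy combined with a high zero-crossing rate separates it from silence and from tonal sounds such as voiced speech:

```python
import numpy as np

def detect_blow(frame, rms_threshold=0.15, zcr_threshold=0.35):
    """Classify one audio frame (float samples in [-1, 1]) as a blow event.

    A blow into a microphone is broadband and noise-like: high short-time
    energy together with a high zero-crossing rate. The thresholds here
    are illustrative, not calibrated values from the paper.
    """
    rms = np.sqrt(np.mean(frame ** 2))                   # short-time energy
    zcr = np.mean(np.abs(np.diff(np.sign(frame)))) / 2   # zero-crossing rate
    return rms > rms_threshold and zcr > zcr_threshold
```

Because both features are cheap per-frame computations, such a detector can run on every audio buffer, which is consistent with the low-latency triggering the paper aims for.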


Symposium on 3D User Interfaces | 2015

Cirque des bouteilles: The art of blowing on bottles

Daniel Zielasko; Dominik Rausch; Yuen C. Law; Thomas Knott; Sebastian Pick; Sven Porsche; Joachim Herber; Johannes Hummel; Torsten W. Kuhlen

Making music by blowing on bottles is fun but challenging. We introduce a novel 3D user interface to play songs on virtual bottles. For this purpose, the user blows into a microphone, and the stream of air is recreated in the virtual environment and redirected to the virtual bottles she points at with her fingers. This is easy to learn and subsequently opens up opportunities for quickly switching between bottles and playing groups of them together to form complex melodies. Furthermore, our interface enables the customization of the virtual environment by moving bottles and changing their type or filling level.


VRIPHYS | 2010

3D Sketch Recognition for Interaction in Virtual Environments

Dominik Rausch; Ingo Assenmacher; Torsten W. Kuhlen

We present a comprehensive 3D sketch recognition framework for interaction within Virtual Environments that allows users to trigger commands by drawing symbols, which are recognized by a multi-level analysis. It proceeds in three steps: the segmentation partitions each input line into meaningful segments, which are then recognized as primitive shapes, and finally analyzed as a whole sketch by a symbol matching step. The whole framework is configurable via well-defined interfaces, utilizing a fuzzy logic algorithm for primitive shape learning and a textual description language to define compound symbols. It allows an individualized interaction approach that can be used without much training and provides a good balance between abstraction and intuition. We show the real-time applicability of our approach with performance measurements.
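
The primitive-recognition step of such a pipeline can be illustrated with a toy classifier. The paper uses fuzzy logic over learned shape features; the sketch below is only a stand-in that distinguishes straight segments from curved ones by their maximum deviation from the chord, and the function name and tolerance are assumptions:

```python
import numpy as np

def classify_segment(points, tol=0.05):
    """Toy primitive classifier for one stroke segment (N x 2 array).

    Measures the maximum point-to-chord distance relative to the chord
    length: small deviation means the segment is essentially a line,
    large deviation means it is curved. Illustrative only; the paper's
    framework uses fuzzy logic over several shape features.
    """
    p, q = points[0], points[-1]
    d = q - p                                           # chord vector
    n = np.array([-d[1], d[0]]) / np.linalg.norm(d)     # unit chord normal
    dist = np.abs((points - p) @ n)                     # deviation per point
    return "line" if dist.max() / np.linalg.norm(d) < tol else "curve"
```

A real pipeline would run such a classifier on each segment produced by the segmentation step and feed the resulting primitive sequence into the symbol matcher.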


Virtual Reality Software and Technology | 2016

ViSTA Widgets: a framework for designing 3D user interfaces from reusable interaction building blocks

Sascha Gebhardt; Till Petersen-Krauß; Sebastian Pick; Dominik Rausch; Christian Nowke; Thomas Knott; Patric Schmitz; Daniel Zielasko; Bernd Hentschel; Torsten W. Kuhlen

Virtual Reality (VR) has been an active field of research for several decades, with 3D interaction and 3D User Interfaces (UIs) as important sub-disciplines. However, the development of 3D interaction techniques, and in particular combining several of them to construct complex and usable 3D UIs, remains challenging, especially in a VR context. In addition, there is currently only limited reusable software for implementing such techniques in comparison to traditional 2D UIs. To overcome this issue, we present ViSTA Widgets, a software framework for creating 3D UIs for immersive virtual environments. It extends the ViSTA VR framework by providing functionality to create multi-device, multi-focus-strategy interaction building blocks and means to easily combine them into complex 3D UIs. This is realized by introducing a device abstraction layer along with sophisticated focus management and functionality to create novel 3D interaction techniques and 3D widgets. We present the framework and illustrate its effectiveness with code and application examples accompanied by performance evaluations.


Archive | 2017

Modal sound synthesis for interactive virtual environments

Dominik Rausch; Michael Vorländer; Torsten W. Kuhlen

This thesis presents methods for real-time sound synthesis. In an initial study, the applicability and usability of synthesized vibration sounds are examined for a virtual drilling task. The study shows that, for the chosen scenario, realistic drilling sound can support interaction in a similar way to haptic vibrations and can be utilized to compensate for a lack of haptic feedback. Modal synthesis is a promising approach for automatically synthesizing physically-based contact sounds from the geometry and material properties of scene objects. However, some limitations still restrict its applicability, and these are addressed in this thesis. Modal synthesis requires that a modal analysis be performed first. This is a computationally expensive task and is usually performed in a pre-processing step. This thesis proposes approaches for computing the modal analysis at run-time, which enable the use of modal synthesis for objects that cannot be analyzed upfront, e.g. because they are created interactively or modified by the user. For this, the run-time requirements are evaluated, and appropriate level-of-detail approximations are presented. Once a modal analysis has been computed, the resulting modal data can be used to compute the vibration sound produced by an object. These vibrations are excited by forces acting on the object, so at run-time the synthesis has to evaluate the modal vibrations and apply the force excitation. While these computations can be performed on a CPU, this strongly limits the number and complexity of sounding objects and the forces acting on them. This thesis therefore presents specialized algorithms to compute modal synthesis with active forces on a graphics card, allowing for a high number of sounding objects.
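
The core idea of modal synthesis, summing damped sinusoids defined by the modal data, can be sketched in a few lines. This is the textbook impulse-response formulation, not the thesis's run-time analysis or GPU force-excitation scheme, and the parameter names are illustrative:

```python
import numpy as np

def modal_impulse_response(freqs, dampings, gains, duration=1.0, fs=44100):
    """Synthesize the sound of an impulsively struck object from modal data.

    freqs    -- modal frequencies in Hz (one per mode)
    dampings -- exponential decay rates in 1/s
    gains    -- per-mode amplitudes (set by the excitation)

    Each mode is a damped sinusoid; the output is their sum.
    """
    t = np.arange(int(duration * fs)) / fs                  # time axis
    modes = (gains[:, None]
             * np.exp(-dampings[:, None] * t)               # decay envelope
             * np.sin(2 * np.pi * freqs[:, None] * t))      # one row per mode
    return modes.sum(axis=0)                                # mix all modes
```

In a full system, the modal analysis supplies the frequencies, dampings, and mode shapes, and continuous contact forces replace the single impulse assumed here.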


VRIPHYS | 2015

Level-of-Detail Modal Analysis for Real-time Sound Synthesis

Dominik Rausch; Bernd Hentschel; Torsten W. Kuhlen

Modal sound synthesis is a promising approach for real-time physically-based sound synthesis. A modal analysis is used to compute characteristic vibration modes from the geometry and material properties of scene objects. These modes allow an efficient sound synthesis at run-time, but the analysis is computationally expensive and thus typically computed in a pre-processing step. In interactive applications, however, objects may be created or modified at run-time. Unless the new shapes are known upfront, the modal data cannot be pre-computed, and thus a modal analysis has to be performed at run-time. In this paper, we present a system to compute modal sound data at run-time for interactive applications. We evaluate the computational requirements of the modal analysis to determine the computation time for objects of different complexity. Based on these limits, we propose different levels of detail for the modal analysis, with geometric approximations that trade accuracy for speed, and evaluate the errors introduced by lower-resolution results. Additionally, we present an asynchronous architecture to distribute and prioritize modal analysis computations.


2014 IEEE VR Workshop: Sonic Interaction in Virtual Environments (SIVE) | 2014

Efficient modal sound synthesis on GPUs

Dominik Rausch; Bernd Hentschel; Torsten W. Kuhlen

Modal sound synthesis is a useful method to interactively generate sounds for Virtual Environments. Forces acting on objects excite modes, which then have to be accumulated to generate the output sound. Due to the high audio sampling rate, algorithms running on the CPU can typically handle only a few actively sounding objects. Additionally, force excitation should be applied at a high sampling rate. We present different algorithms to compute the synthesized sound on a GPU and compare them to CPU implementations. The GPU algorithms show significantly higher performance and allow many objects to sound simultaneously.
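
The per-sample update that such a GPU implementation parallelizes across modes can be sketched as a bank of two-pole resonators driven by a force signal. The discretization below is the standard damped-resonator recurrence, not necessarily the paper's exact formulation, and the names are illustrative; the loop is sequential in time, while the per-mode arithmetic is the part that maps onto GPU threads:

```python
import numpy as np

def resonator_bank(force, freqs, dampings, gains, fs=44100):
    """Run one two-pole resonator per mode over a force signal and sum
    the outputs. Each sample depends on the two previous ones, so time
    stays sequential, but all modes update independently in parallel.
    Standard damped-resonator coefficients; illustrative, not the
    paper's exact scheme.
    """
    w = 2 * np.pi * freqs / fs
    r = np.exp(-dampings / fs)
    c1, c2 = 2 * r * np.cos(w), r * r          # recurrence coefficients
    y1 = np.zeros_like(freqs)                  # y[n-1] per mode
    y2 = np.zeros_like(freqs)                  # y[n-2] per mode
    out = np.empty(len(force))
    for n, f in enumerate(force):              # sequential in time ...
        y = c1 * y1 - c2 * y2 + gains * f      # ... parallel across modes
        out[n] = y.sum()                       # accumulate modes
        y1, y2 = y, y1
    return out
```

The mode accumulation in the last step is the reduction that, alongside the per-mode updates, dominates the cost when many objects sound at once.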


ICAT/EGVE/EuroVR | 2012

Comparing Auditory and Haptic Feedback for a Virtual Drilling Task

Dominik Rausch; Lukas Aspöck; Thomas Knott; Sönke Pelzer; Michael Vorländer; Torsten W. Kuhlen

While visual feedback is dominant in Virtual Environments, the use of other modalities like haptics and acoustics can enhance believability, immersion, and interaction performance. Haptic feedback is especially helpful for many interaction tasks like working with medical or precision tools. However, unlike visual and auditory feedback, haptic reproduction is often difficult to achieve due to hardware limitations. This article describes a user study examining how auditory feedback can be used to substitute for haptic feedback when interacting with a vibrating tool. Participants remove some target material with a round-headed drill while avoiding damage to the underlying surface. In the experiment, varying combinations of surface force feedback, vibration feedback, and auditory feedback are used. We describe the design of the user study and present the results, which show that auditory feedback can compensate for the lack of haptic feedback.

Collaboration


Dive into Dominik Rausch's collaborations.

Top Co-Authors

Yuen C. Law

RWTH Aachen University
