Network


Latest external collaboration at the country level. Dive into details by clicking on the dots.

Hotspot


Dive into the research topics where Michael Rietzler is active.

Publication


Featured research published by Michael Rietzler.


ubiquitous computing | 2013

homeBLOX: making home automation usable

Marcel Walch; Michael Rietzler; Julia Greim; Florian Schaub; Björn Wiedersheim; Michael Weber

Home automation aims to increase convenience of residential living. The homeBLOX system uses a process-driven execution model to enable complex automation tasks with heterogeneous devices, while providing a user interface that abstracts from lower-level complexity. Complex automation tasks are created as sequences consisting of events and actions linked to physical and virtual devices, which are translated into BPEL code for execution. We outline the key concepts, architecture, and prototype of our system.


ubiquitous computing | 2013

homeBLOX: introducing process-driven home automation

Michael Rietzler; Julia Greim; Marcel Walch; Florian Schaub; Björn Wiedersheim; Michael Weber

Home automation promises more convenience for residential living. We propose process-driven home automation as an approach to reduce the difficulty of specifying automation tasks without restricting users in terms of customizability and complexity of supported scenarios. Our graph-based user interface abstracts from the complexity of process specification, while created sequences are automatically translated into BPEL code for execution. Our homeBLOX architecture extends a process engine with the capabilities to communicate with heterogeneous smart devices, integrate virtual devices, and support different home automation protocols. We report on initial user tests with our automation interface and demonstrate the customizability and expressiveness of our system based on realized example use cases.
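Both homeBLOX abstracts describe translating sequences of events and actions into BPEL code for execution. As a rough, hypothetical sketch of that idea (the actual homeBLOX translation rules and element names are not given here), a sequence could be rendered as a minimal BPEL-like process:

```python
import xml.etree.ElementTree as ET

# Hypothetical illustration: the real homeBLOX translation rules are not
# public in this listing; the element names below follow generic BPEL usage.
def sequence_to_bpel(name, steps):
    """Translate an ordered list of (kind, device, operation) steps into a
    minimal BPEL-like <process> containing one <sequence> of activities."""
    process = ET.Element("process", {"name": name})
    seq = ET.SubElement(process, "sequence")
    for kind, device, operation in steps:
        if kind == "event":
            # An event becomes a <receive> that waits on the device.
            ET.SubElement(seq, "receive",
                          {"partnerLink": device, "operation": operation})
        else:
            # An action becomes an <invoke> on the device.
            ET.SubElement(seq, "invoke",
                          {"partnerLink": device, "operation": operation})
    return ET.tostring(process, encoding="unicode")

bpel = sequence_to_bpel("morning", [
    ("event", "alarmClock", "onAlarm"),
    ("action", "blinds", "open"),
    ("action", "coffeeMaker", "brew"),
])
print(bpel)
```

The translation step is what lets the graph-based user interface stay simple while a standard process engine handles execution.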


ACM Transactions on Computer-Human Interaction | 2017

Exploring End User Programming Needs in Home Automation

Julia Brich; Marcel Walch; Michael Rietzler; Michael Weber; Florian Schaub

Home automation faces the challenge of providing ubiquitous, unobtrusive services while empowering users with approachable configuration interfaces. These interfaces need to provide sufficient expressiveness to support complex automation, and notations need to be devised that enable less tech-savvy users to express such scenarios. Rule-based and process-oriented paradigms have emerged as opposing ends of the spectrum; however, their underlying concepts have not been studied comparatively. We report on a contextual inquiry study in which we collected qualitative data from 18 participants in 12 households on the current potential and acceptance of home automation, as well as explored the respective benefits and drawbacks of these two notation paradigms for end users. Results show that rule-based notations are sufficient for simple automation tasks but not flexible enough for more complex use cases. The resulting insights can inform the design of interfaces for smart homes to enable usable real-world home automation for end users.


engineering interactive computing systems | 2016

FusionKit: a generic toolkit for skeleton, marker and rigid-body tracking

Michael Rietzler; Florian Geiselhart; Janek Thomas; Enrico Rukzio

We present a toolkit for markerless skeleton tracking and marker-based object tracking utilizing data fusion with an arbitrary number of depth cameras. As depth-camera-based skeletal tracking is always inaccurate due to technology limitations, our goal was to pre-estimate systematic errors for given tracking situations to improve fusion. Previous work analyzed various aspects of depth camera accuracy; however, to the best of our knowledge, there has been neither systematic error modelling nor an application of such a model to skeletal fusion. Our paper presents such a model for the Kinect v2 camera, built by statistical modelling on datasets captured with such cameras and a marker-based ground-truth capture system. By applying this model, we are able to improve the overall accuracy of the fusion output by 68%, predicting data quality with an error of around 3.2 cm. Our toolkit is available for use by other researchers to easily create capture spaces that are larger and, thanks to the error model, more accurate than single depth cameras allow.
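The abstract describes predicting per-situation tracking error and using it to improve fusion. One standard way such a prediction could feed into fusion (the paper's actual fusion math is not given here) is inverse-variance weighting of each camera's joint estimate:

```python
# Hedged sketch: FusionKit's real error model and fusion formula are not
# stated in the abstract; inverse-variance weighting is one common way to
# fuse estimates whose per-sensor error is known or predicted.
def fuse_joint(estimates):
    """Fuse a list of ((x, y, z), sigma) pairs for one joint, where sigma
    is the predicted tracking error in metres (e.g. ~0.032 m).
    Each camera is weighted by 1 / sigma^2, so more reliable views
    dominate the fused position."""
    weights = [1.0 / (sigma * sigma) for _, sigma in estimates]
    total = sum(weights)
    return tuple(
        sum(w * pos[axis] for (pos, _), w in zip(estimates, weights)) / total
        for axis in range(3)
    )

# Two cameras see the same hand joint; the closer camera predicts less error,
# so the fused position lands nearer to its estimate.
fused = fuse_joint([((0.10, 1.20, 2.00), 0.02),
                    ((0.16, 1.26, 2.06), 0.04)])
print(fused)
```

With these numbers the first camera gets four times the weight of the second, pulling the fused joint toward its estimate.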


human factors in computing systems | 2017

VaiR: Simulating 3D Airflows in Virtual Reality

Michael Rietzler; Katrin Plaumann; Taras Kränzle; Marcel Erath; Alexander Stahl; Enrico Rukzio

The integration of multi-sensory stimuli, e.g. haptic airflow, in virtual reality (VR) has become an important topic of VR research and has been shown to enhance the feeling of presence. VaiR focuses on an accurate and realistic airflow simulation that goes far beyond wind. Previous work on airflow in VR is restricted to wind and focuses on the feeling of presence; to the best of our knowledge, there is no work considering the conceptual background or the various application areas. Our pneumatic prototype emits short- and long-term flows with minimal delay and is able to animate wind sources in 3D space around the user's head. To gain insights into how airflow can be used in VR and how such a device should be designed, we arranged focus groups and discussed the topic. Based on the gathered knowledge, we developed a prototype which proved to increase presence, as well as enjoyment and realism, while not disturbing the VR experience.
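Animating wind sources in 3D around the user's head implies mapping a wind direction onto physical outlets relative to head orientation. The following selection rule and the eight-nozzle ring are assumptions for illustration, not VaiR's actual actuation logic:

```python
# Hedged sketch: VaiR's real nozzle layout and control are not described
# in the abstract; this assumes a simple ring of nozzles around the head.
def select_nozzle(wind_dir_deg, head_yaw_deg, n_nozzles=8):
    """Pick the head-mounted nozzle closest to the wind's direction,
    expressed relative to where the user is currently facing.
    Nozzle 0 is straight ahead; indices increase clockwise."""
    relative = (wind_dir_deg - head_yaw_deg) % 360.0
    spacing = 360.0 / n_nozzles
    return int(round(relative / spacing)) % n_nozzles

# Wind from 90 degrees while the user faces 0 degrees hits nozzle 2.
print(select_nozzle(90.0, 0.0))   # -> 2
# Turning the head into the wind moves it to the front nozzle.
print(select_nozzle(90.0, 90.0))  # -> 0
```

Re-evaluating this mapping every frame is what makes a wind source appear fixed in the world while the head moves.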


virtual reality software and technology | 2017

The matrix has you: realizing slow motion in full-body virtual reality

Michael Rietzler; Florian Geiselhart; Enrico Rukzio

While we perceive time as a constant factor in the real world, it can be manipulated in media. Being quite easy for linear media, this is used for various aspects of storytelling, e.g., by applying slow motion in movies or TV. Interactive media like VR, however, pose additional challenges, because user interaction speed is independent of media speed. While it is still possible to change the speed of the environment, for interaction it is also necessary to deal with the emerging speed mismatch, e.g., by slowing down the visual feedback of user movements. In this paper, we explore the possibility of such manipulations of visual cues, with the goal of enabling the use of slow motion also in immersive interactive media like VR. We conducted a user study to investigate the impact of limiting the angular velocity of a virtual character in first-person view in VR. Our findings show that it is possible to use slow motion in VR while maintaining the same levels of presence, enjoyment and susceptibility to motion sickness, while users adjust to the maximum speed quickly.
Moreover, our results also show an impact of slowing down user movements on their time estimations.
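The core manipulation, limiting the angular velocity of the user's virtual representation, can be sketched as a per-frame clamp on how fast the displayed pose may chase the tracked pose; the function and constants below are illustrative, not the paper's implementation:

```python
# Hedged sketch: one simple way to limit angular velocity is to clamp the
# per-frame change of each displayed joint angle to max_speed * dt.
def limit_angular_velocity(shown_deg, tracked_deg, max_speed_deg, dt):
    """Move the displayed joint angle toward the tracked angle, but no
    faster than max_speed_deg degrees per second."""
    delta = tracked_deg - shown_deg
    max_step = max_speed_deg * dt
    if delta > max_step:
        delta = max_step
    elif delta < -max_step:
        delta = -max_step
    return shown_deg + delta

# The user snaps their arm to 90 degrees; at a 45 deg/s limit and 90 fps,
# the virtual arm has only covered 45 degrees after one second.
angle = 0.0
for _ in range(90):  # one second of frames at 90 fps
    angle = limit_angular_velocity(angle, 90.0, 45.0, 1.0 / 90.0)
print(angle)  # -> 45.0
```

The displayed motion lags the real one by construction, which is exactly the speed mismatch the study probes.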


ubiquitous computing | 2016

EyeVR: low-cost VR eye-based interaction

Florian Geiselhart; Michael Rietzler; Enrico Rukzio

The EyeVR system enables eye and gaze interactions in VR glasses at a price below 100, while providing sufficient performance for many typical VR use cases, like foveated rendering, gaming, or attention-based storytelling. It is based on off-the-shelf hardware like a Raspberry Pi, can be used in wireless, mobile settings and allows for widespread use of gaze tracking in different applications through open interfaces. Besides standard gaze tracking, another focus of the system lies on exploring new forms of interaction beyond gaze direction alone, by providing additional details about the eye, like pupil size or eyelid movement.


human factors in computing systems | 2018

Breaking the Tracking: Enabling Weight Perception using Perceivable Tracking Offsets

Michael Rietzler; Florian Geiselhart; Jan Gugenheimer; Enrico Rukzio

Virtual reality (VR) technology strives to enable a highly immersive experience for the user by including a wide variety of modalities (e.g. visuals, haptics). Current VR hardware however lacks a sufficient way of communicating the perception of weight of an object, resulting in scenarios where users cannot distinguish between lifting a bowling ball or a feather. We propose a solely software-based approach of simulating weight in VR by deliberately using perceivable tracking offsets. These tracking offsets nudge users to lift their arm higher and result in a visual and haptic perception of weight. We conducted two user studies showing that participants intuitively associated them with the sensation of weight and accepted them as part of the virtual world. We further show that, compared to no weight simulation, our approach led to significantly higher levels of presence, immersion and enjoyment. Finally, we report perceptional thresholds and offset boundaries as design guidelines for practitioners.


human factors in computing systems | 2018

Conveying the Perception of Kinesthetic Feedback in Virtual Reality using State-of-the-Art Hardware

Michael Rietzler; Florian Geiselhart; Julian Frommel; Enrico Rukzio

Including haptic feedback in current consumer VR applications is frequently challenging, since technical possibilities to create haptic feedback in consumer-grade VR are limited. While most systems include and make use of the possibility to create tactile feedback through vibration, kinesthetic feedback systems so far almost exclusively rely on external mechanical hardware to induce actual sensations. In this paper, we describe an approach to create a feeling of such sensations by using unmodified off-the-shelf hardware and a software solution for a multi-modal pseudo-haptics approach. We first explore this design space by applying user-elicited methods, and afterwards evaluate our refined solution in a user study. The results show that it is indeed possible to communicate kinesthetic feedback by visual and tactile cues only and even induce its perception. While visual clipping was generally unappreciated, our approach led to significant increases in enjoyment and presence.


designing interactive systems | 2018

VRSpinning: Exploring the Design Space of a 1D Rotation Platform to Increase the Perception of Self-Motion in VR

Michael Rietzler; Teresa Hirzle; Jan Gugenheimer; Julian Frommel; Thomas Dreja; Enrico Rukzio
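The perceivable tracking offsets proposed in "Breaking the Tracking" above could, as a rough illustration, be realized as a mass-dependent downward shift of the rendered hand; the linear mapping and all constants here are invented for the sketch, not taken from the paper:

```python
# Hedged sketch: the paper nudges users to lift heavier virtual objects
# higher by offsetting the rendered hand from the tracked hand. The
# linear mapping and the cap below are assumptions for illustration.
def rendered_hand_height(tracked_height_m, virtual_mass_kg,
                         offset_per_kg=0.01, max_offset=0.05):
    """Shift the displayed hand downward by an offset that grows with the
    virtual object's mass, capped so the offset stays plausible.
    To keep the displayed hand level, the user must raise the real
    hand higher, producing the sensation of weight."""
    offset = min(virtual_mass_kg * offset_per_kg, max_offset)
    return tracked_height_m - offset

print(rendered_hand_height(1.20, 0.0))  # a feather: no offset
print(rendered_hand_height(1.20, 3.0))  # 3 kg: hand shown ~3 cm lower
```

The paper's reported perceptional thresholds would bound how large such an offset can get before users reject it as a tracking error.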

Collaboration


Dive into Michael Rietzler's collaboration.

Top Co-Authors
