Michael Stengel
Braunschweig University of Technology
Publications
Featured research published by Michael Stengel.
ACM Multimedia | 2015
Michael Stengel; Steve Grogorick; Martin Eisemann; Elmar Eisemann; Marcus A. Magnor
Immersion is the ultimate goal of head-mounted displays (HMDs) for Virtual Reality (VR) in order to produce a convincing user experience. Two important aspects in this context are motion sickness, often due to imprecise calibration, and the integration of reliable eye tracking. We propose an affordable hardware and software solution for drift-free eye tracking and user-friendly lens calibration within an HMD. The use of dichroic mirrors leads to a lean design that provides the full field-of-view (FOV) while using commodity cameras for eye tracking. Our prototype supports personalizable lens positioning to accommodate different interocular distances. On the software side, a model-based calibration procedure adjusts the eye-tracking system and gaze estimation to varying lens positions. Challenges such as partial occlusions due to the lens holders and eyelids are handled by a novel, robust monocular pupil-tracking approach. We present four applications of our work: gaze map estimation, foveated rendering for depth of field, gaze-contingent level of detail, and gaze control of virtual avatars.
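The calibration procedure ultimately maps tracked pupil positions to gaze points on the display. As a minimal illustrative sketch of that idea (not the paper's actual model-based method), an affine mapping fitted exactly from three calibration points already captures the structure; the function name and the choice of an affine model are assumptions:

```python
def affine_from_3(pupil, screen):
    """Fit an affine map pupil -> screen from three calibration pairs.

    pupil, screen: lists of three (x, y) tuples. Solves the six affine
    parameters exactly via Cramer's rule on the 3x3 system
    a*x + b*y + c = v for each output coordinate.
    Hypothetical stand-in for a full model-based HMD calibration.
    """
    (x1, y1), (x2, y2), (x3, y3) = pupil
    det = x1 * (y2 - y3) - y1 * (x2 - x3) + (x2 * y3 - x3 * y2)

    def solve(v1, v2, v3):
        # Cramer's rule: replace one column of the system matrix by v.
        a = (v1 * (y2 - y3) - y1 * (v2 - v3) + (v2 * y3 - v3 * y2)) / det
        b = (x1 * (v2 - v3) - v1 * (x2 - x3) + (x2 * v3 - x3 * v2)) / det
        c = (x1 * (y2 * v3 - y3 * v2) - y1 * (x2 * v3 - x3 * v2)
             + v1 * (x2 * y3 - x3 * y2)) / det
        return a, b, c

    ax, bx, cx = solve(*[s[0] for s in screen])
    ay, by, cy = solve(*[s[1] for s in screen])
    return lambda p: (ax * p[0] + bx * p[1] + cx,
                      ay * p[0] + by * p[1] + cy)
```

In practice the mapping must additionally adapt to the lens position, which is what the model-based procedure in the paper addresses.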
ACM Transactions on Graphics | 2014
Lorenz Rogge; Felix Klose; Michael Stengel; Martin Eisemann; Marcus A. Magnor
We present a semi-automatic approach to exchange the clothes of an actor for arbitrary virtual garments in conventional monocular video footage as a postprocess. We reconstruct the actor's body shape and motion from the input video using a parameterized body model. The reconstructed dynamic 3D geometry of the actor serves as an animated mannequin for simulating the virtual garment. It also aids in scene illumination estimation, necessary to realistically light the virtual garment. An image-based warping technique ensures realistic compositing of the rendered virtual garment and the original video. We present results for eight real-world video sequences featuring complex test cases to evaluate performance for different types of motion, camera settings, and illumination conditions.
Eurographics | 2016
Michael Stengel; Steve Grogorick; Martin Eisemann; Marcus A. Magnor
With ever-increasing display resolution for wide field-of-view displays, such as head-mounted displays or 8K projectors, shading has become the major computational cost in rasterization. To reduce computational effort, we propose an algorithm that only shades visible features of the image while cost-effectively interpolating the remaining features without affecting perceived quality. In contrast to previous approaches, we do not only simulate acuity falloff but also introduce a sampling scheme that incorporates multiple aspects of the human visual system: acuity, eye motion, contrast (stemming from geometry, material, or lighting properties), and brightness adaptation. Our sampling scheme is incorporated into a deferred shading pipeline to shade the image's perceptually relevant fragments, while a pull-push algorithm interpolates the radiance for the rest of the image. Our approach does not impose any restrictions on the performed shading. We conduct a number of psycho-visual experiments to validate scene- and task-independence of our approach. The number of fragments that need to be shaded is reduced by 50% to 80%. Our algorithm scales favorably with increasing resolution and field-of-view, rendering it well suited for head-mounted displays and wide-field-of-view projection.
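The core of such a gaze-contingent sampling scheme can be sketched as a per-fragment shading probability that falls off with eccentricity from the gaze point and is boosted by local contrast. The falloff model and all constants below are illustrative assumptions, not the paper's calibrated formula:

```python
import math

def sampling_probability(px, py, gaze, contrast, fov_deg=100.0, width=800):
    """Probability that a fragment at pixel (px, py) gets shaded.

    gaze: gaze position in pixels; contrast: local contrast in [0, 1].
    Flat-screen small-angle model with made-up constants, for illustration.
    """
    # Eccentricity of the fragment relative to the gaze point, in degrees.
    deg_per_px = fov_deg / width
    ecc = math.hypot(px - gaze[0], py - gaze[1]) * deg_per_px
    # Simple acuity falloff: 1.0 at the fovea, decaying with eccentricity.
    acuity = 1.0 / (1.0 + 0.3 * ecc)
    # High local contrast raises the probability even in the periphery,
    # mirroring the paper's idea of combining several perceptual terms.
    return min(1.0, acuity + 0.5 * contrast * (1.0 - acuity))
```

Fragments that fail the probability test would then be filled in by pull-push interpolation rather than shaded.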
IEEE Transactions on Visualization and Computer Graphics | 2015
Michael Stengel; Pablo Bauszat; Martin Eisemann; Elmar Eisemann; Marcus A. Magnor
We propose the computation of a perceptual motion blur in videos. Our technique takes the viewer's predicted eye motion into account when watching the video. Compared to traditional motion blur recorded by a video camera, our approach results in a perceptual blur that is closer to reality. This postprocess can also be used to simulate different shutter effects or for other artistic purposes. It handles real and artificial video input, is easy to compute, and has a low additional cost for rendered content. We illustrate its advantages in a user study using eye tracking.
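The key observation is that perceived blur depends on retinal slip: image motion relative to the predicted eye motion, not relative to the camera. A minimal sketch of that idea, reduced to a linear smear vector (the function and its simplification are hypothetical, not the paper's formulation):

```python
def perceptual_blur_vector(object_velocity, eye_velocity, exposure):
    """Blur vector for one image region over one exposure interval.

    object_velocity, eye_velocity: image-space velocities (px/s);
    exposure: shutter-open time in seconds. A smoothly pursued object
    has near-zero retinal slip and thus stays sharp, while the
    background smears opposite to the eye motion.
    """
    return ((object_velocity[0] - eye_velocity[0]) * exposure,
            (object_velocity[1] - eye_velocity[1]) * exposure)
```

A camera, by contrast, blurs by object velocity alone, which is why tracked objects look over-blurred in conventional footage.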
Computer Graphics Forum | 2017
Martin Weier; Michael Stengel; Thorsten Roth; Piotr Didyk; Elmar Eisemann; Martin Eisemann; Steve Grogorick; André Hinkenjann; Ernst Kruijff; Marcus A. Magnor; Karol Myszkowski; Philipp Slusallek
Advances in computer graphics enable us to create digital images of astonishing complexity and realism. However, processing resources are still a limiting factor. Hence, many costly but desirable aspects of realism are often not accounted for, including global illumination, accurate depth of field and motion blur, and spectral effects, especially in real-time rendering. At the same time, there is a strong trend towards more pixels per display due to larger displays, higher pixel densities, or larger fields of view. Further observable trends in current display technology include more bits per pixel (high dynamic range, wider color gamut/fidelity), increasing refresh rates (better motion depiction), and an increasing number of displayed views per pixel (stereo, multi-view, all the way to holographic or lightfield displays). These developments cause significant unsolved technical challenges due to aspects such as limited compute power and bandwidth. Fortunately, the human visual system has certain limitations, which means that providing the highest possible visual quality is not always necessary. In this report, we present the key research and models that exploit the limitations of perception to tackle visual quality and workload alike. Moreover, we present open problems and promising future research targeting the question of how we can minimize the effort to compute and display only the necessary pixels while still offering the user a full visual experience.
ACM Symposium on Applied Perception | 2017
Steve Grogorick; Michael Stengel; Elmar Eisemann; Marcus A. Magnor
Immersive displays allow presentation of rich video content over a wide field of view. We present a method to boost visual importance for a selected, possibly invisible, scene part in a cluttered virtual environment. This desirable feature makes it possible to unobtrusively guide the gaze direction of a user to any location within the immersive 360° surroundings. Our method is based on subtle gaze direction, which did not include head rotations in previous work. To cover the full 360° environment and wide field of view, we contribute an approach for dynamic stimulus positioning and shape variation based on eccentricity, compensating for visibility differences across the visual field. Our approach is calibrated in a perceptual study for a head-mounted display with binocular eye tracking. An additional study validates the method within an immersive visual search task.
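Compensating for visibility differences across the visual field typically means growing the stimulus with eccentricity, in the spirit of cortical magnification. A toy sketch with illustrative parameters (not the values calibrated in the study):

```python
def stimulus_scale(ecc_deg, base_size=1.0, e2=2.0):
    """Size multiplier for a guidance stimulus at a given eccentricity.

    ecc_deg: angular distance from the current gaze point in degrees;
    e2: eccentricity (deg) at which the required size doubles.
    Linear cortical-magnification-style law; parameters are assumptions.
    """
    return base_size * (1.0 + ecc_deg / e2)
```

A stimulus rendered at this scale stays roughly equally detectable whether it appears near the fovea or far in the periphery, which is what lets the method cover the full 360° environment.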
IEEE Transactions on Image Processing | 2013
Michael Stengel; Martin Eisemann; Stephan Wenger; Benjamin Hell; Marcus A. Magnor
Display resolution is frequently exceeded by available image resolution. Recently, apparent display resolution enhancement (ADRE) techniques have shown how characteristics of the human visual system can be exploited to provide super-resolution on high-refresh-rate displays. In this paper, we address the problem of generalizing the ADRE technique to conventional videos of arbitrary content. We propose an optimization-based approach to continuously translate the video frames in such a way that the added motion enables apparent resolution enhancement for the salient image region. The optimization considers the optimal velocity, smoothness, and similarity to compute an appropriate trajectory. In addition, we provide an intuitive user interface that allows the algorithm to be guided interactively and preserves important compositions within the video. We present a user study evaluating apparent rendering quality and show the versatility of our method on a variety of general test scenes.
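The optimization balances three terms: deviation from the optimal subpixel velocity, smoothness of the trajectory, and similarity to the original framing. A discretized 1-D version of such an energy might look like the following; the quadratic form and the weights are assumptions, not the paper's exact objective:

```python
def trajectory_energy(offsets, target_speed, lambda_smooth, lambda_sim):
    """Energy of a per-frame translation trajectory (1-D sketch).

    offsets: frame-by-frame translations in pixels;
    target_speed: subpixel velocity (px/frame) that enables ADRE;
    lambda_smooth, lambda_sim: weights for the competing terms.
    """
    n = len(offsets)
    # Velocity term: first differences should match the optimal speed.
    e_vel = sum((offsets[i + 1] - offsets[i] - target_speed) ** 2
                for i in range(n - 1))
    # Smoothness term: penalize acceleration (second differences).
    e_smooth = sum((offsets[i + 1] - 2 * offsets[i] + offsets[i - 1]) ** 2
                   for i in range(1, n - 1))
    # Similarity term: keep the frame close to its original position.
    e_sim = sum(o * o for o in offsets)
    return e_vel + lambda_smooth * e_smooth + lambda_sim * e_sim
```

Minimizing this energy over the offsets yields a trajectory that drifts at the ADRE-friendly velocity while staying smooth and close to the original composition.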
Eurographics | 2014
Michael Stengel; Steve Grogorick; Lorenz Rogge; Marcus A. Magnor
We present a solution for integrating a binocular eye tracker into current state-of-the-art lens-based head-mounted displays (HMDs) without affecting the available field-of-view on the display. Estimating the relative eye gaze of the user opens the door for HMDs to a much wider spectrum of virtual reality applications and games. Further, we present a concept of a low-cost head-mounted display with eye tracking and discuss applications which strongly depend on or benefit from gaze estimation.
IEEE Signal Processing Magazine | 2016
Michael Stengel; Marcus A. Magnor
Contemporary digital displays feature multimillion pixels at ever-increasing refresh rates. Reality, on the other hand, provides us with a view of the world that is continuous in space and in time. The discrepancy between viewing the physical world and its sampled depiction on digital displays gives rise to perceptual quality degradation. By measuring or estimating where we look, a new breed of gaze-contingent algorithms aims to exploit the way we visually perceive digital images and videos to remedy visible artifacts. In this article, we provide an overview of recent developments in computational display algorithms that enhance perceived visual quality of conventional video footage when viewed on commodity monitors, projectors, or head-mounted displays (HMDs).
IEEE Virtual Reality Conference | 2015
Michael Stengel; Steve Grogorick; Martin Eisemann; Elmar Eisemann; Marcus A. Magnor
We present a complete hardware and software solution for integrating binocular eye tracking into current state-of-the-art lens-based head-mounted displays (HMDs) without affecting the user's wide field-of-view of the display. The system uses robust and efficient new algorithms for calibration and pupil tracking and allows real-time eye tracking and gaze estimation. Estimating the relative gaze direction of the user opens the door to a much wider spectrum of virtual reality applications and games when using HMDs. We show a 3D-printed prototype of a low-cost HMD with eye tracking that is simple to fabricate and discuss a variety of VR applications utilizing gaze estimation.