Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Thorsten Roth is active.

Publication


Featured research published by Thorsten Roth.


Computer Graphics Forum | 2017

Perception-driven Accelerated Rendering

Martin Weier; Michael Stengel; Thorsten Roth; Piotr Didyk; Elmar Eisemann; Martin Eisemann; Steve Grogorick; André Hinkenjann; Ernst Kruijff; Marcus A. Magnor; Karol Myszkowski; Philipp Slusallek

Advances in computer graphics enable us to create digital images of astonishing complexity and realism. However, processing resources are still a limiting factor. Hence, many costly but desirable aspects of realism are often not accounted for, including global illumination, accurate depth of field and motion blur, spectral effects, etc., especially in real-time rendering. At the same time, there is a strong trend towards more pixels per display due to larger displays, higher pixel densities or larger fields of view. Further observable trends in current display technology include more bits per pixel (high dynamic range, wider color gamut/fidelity), increasing refresh rates (better motion depiction), and an increasing number of displayed views per pixel (stereo, multi-view, all the way to holographic or lightfield displays). These developments cause significant unsolved technical challenges due to aspects such as limited compute power and bandwidth. Fortunately, the human visual system has certain limitations, which mean that providing the highest possible visual quality is not always necessary. In this report, we present the key research and models that exploit the limitations of perception to tackle visual quality and workload alike. Moreover, we present the open problems and promising future research targeting the question of how we can minimize the effort to compute and display only the necessary pixels while still offering a user the full visual experience.
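
As a rough illustration of the kind of perceptual model such methods build on, the sketch below assumes a minimum-angle-of-resolution that grows linearly with eccentricity and derives a relative sampling budget from it; the constants mar0 and slope and the function names are placeholders, not values or methods from the report.

```python
def max_perceivable_freq(eccentricity_deg, mar0=1.0 / 48.0, slope=0.022):
    """Illustrative minimum-angle-of-resolution (MAR) model: the smallest
    resolvable detail grows roughly linearly with eccentricity, so the
    highest perceivable spatial frequency (cycles/degree) falls off away
    from the fovea. mar0 and slope are placeholder constants, not values
    taken from the report."""
    mar_deg = mar0 + slope * eccentricity_deg
    return 1.0 / (2.0 * mar_deg)          # Nyquist limit in cycles per degree


def relative_sample_density(eccentricity_deg):
    """Fraction of the foveal sampling density needed at a given eccentricity
    if rendering only matches what the eye can resolve (2D, hence squared)."""
    return (max_perceivable_freq(eccentricity_deg) / max_perceivable_freq(0.0)) ** 2


if __name__ == "__main__":
    for ecc in (0, 5, 10, 20, 40):
        print(f"{ecc:>3} deg -> {relative_sample_density(ecc) * 100:5.1f}% of foveal samples")
```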


Pacific Conference on Computer Graphics and Applications | 2016

Foveated real-time ray tracing for head-mounted displays

Martin Weier; Thorsten Roth; Ernst Kruijff; André Hinkenjann; Arsène Pérard-Gayot; Philipp Slusallek; Yongmin Li

Head-mounted displays with dense pixel arrays used for virtual reality applications require high frame rates and low latency rendering. This forms a challenging use case for any rendering approach. In addition to its ability to generate realistic images, ray tracing offers a number of distinct advantages, but has been held back mainly by its performance. In this paper, we present an approach that significantly improves the image generation performance of ray tracing. This is done by combining foveated rendering based on eye tracking with reprojection rendering using previous frames in order to drastically reduce the number of new image samples per frame. To reproject samples, a coarse geometry is reconstructed from a G-Buffer. Possible errors introduced by this reprojection, as well as parts that are critical to the perception, are scheduled for resampling. Additionally, a coarse color buffer is used to provide an initial image, refined smoothly with more samples where needed. Evaluations and user tests show that our method achieves real-time frame rates, while visual differences compared to fully rendered images are hardly perceivable. As a result, we can ray trace non-trivial static scenes for the Oculus DK2 HMD at 1182 × 1464 per eye within the VSync limits without perceived visual differences.
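
A minimal sketch of how a per-pixel resampling decision could combine a gaze-centered falloff with a reprojection-validity mask, in the spirit of the approach described; the linear falloff, the baseline rate and the function names (sampling_probability, pixels_to_resample) are illustrative assumptions, not the paper's actual sampling scheme.

```python
import numpy as np


def sampling_probability(width, height, gaze_px, inner_r, outer_r):
    """Gaze-centered falloff: full sampling inside inner_r, linearly
    decreasing probability out to outer_r (radii in pixels).
    The linear falloff is an assumption for illustration."""
    ys, xs = np.mgrid[0:height, 0:width]
    dist = np.hypot(xs - gaze_px[0], ys - gaze_px[1])
    p = np.clip((outer_r - dist) / (outer_r - inner_r), 0.0, 1.0)
    return np.maximum(p, 0.05)   # keep a small baseline rate in the periphery


def pixels_to_resample(prob, reprojection_valid, rng):
    """A pixel is re-traced if its reprojected sample is invalid
    (disocclusion, off-screen) or if it wins the stochastic foveated test."""
    stochastic = rng.random(prob.shape) < prob
    return ~reprojection_valid | stochastic


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    w, h = 1182, 1464                         # per-eye resolution from the abstract
    prob = sampling_probability(w, h, gaze_px=(591, 732), inner_r=120, outer_r=500)
    valid = rng.random((h, w)) > 0.1          # stand-in for a real validity mask
    mask = pixels_to_resample(prob, valid, rng)
    print(f"re-traced pixels: {mask.mean() * 100:.1f}%")
```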


2013 6th Workshop on Software Engineering and Architectures for Realtime Interactive Systems (SEARIS) | 2013

DRiVE: An example of distributed rendering in virtual environments

Anton Sigitov; Thorsten Roth; Florian Mannuss; André Hinkenjann

Most Virtual Reality (VR) applications use rendering methods which implement local illumination models, simulating only the direct interaction of light with 3D objects. They do not take into account the energy exchange between the objects themselves, making the resulting images look less realistic. The main reason for this is that simulating global illumination has a high computational complexity, which drastically decreases the frame rate; as a result, user interaction, for example, becomes quite challenging. One way to reduce image generation time with rendering methods that implement global illumination models is to involve additional compute nodes in the process of image creation, distribute the rendering subtasks among them and then collate the results of the subtasks into a single image. Such a strategy is called distributed rendering. In this paper we introduce a software interface which recommends how the distributed rendering approach may be integrated into VR frameworks to reduce the generation time of high-quality, realistic images. The interface describes a client-server architecture which realizes the communication between visualization and compute nodes, including data and rendering subtask distribution, and may be used for the implementation of different load-balancing methods. We show an example implementation of the proposed interface in the context of realistic rendering of buildings for decisions on interior options.
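
A minimal sketch of the tile-based split between a visualization server and compute nodes that such an interface implies, using a local process pool to stand in for remote nodes; the Tile class and the function names are hypothetical, not DRiVE's actual API.

```python
from concurrent.futures import ProcessPoolExecutor   # workers stand in for remote compute nodes
from dataclasses import dataclass


@dataclass(frozen=True)
class Tile:
    """Image-plane region of one rendering subtask."""
    x: int
    y: int
    w: int
    h: int


def split_into_tiles(width, height, tile):
    """Visualization/server side: split the frame into independent subtasks."""
    return [Tile(x, y, min(tile, width - x), min(tile, height - y))
            for y in range(0, height, tile)
            for x in range(0, width, tile)]


def render_tile(t):
    """Compute-node side: placeholder for the expensive global-illumination
    rendering of one tile; returns the tile plus a flat pixel buffer."""
    return t, [0.0] * (t.w * t.h)


def gather(width, height, results):
    """Server side: composite finished tiles into one framebuffer."""
    frame = [0.0] * (width * height)
    for t, pixels in results:
        for row in range(t.h):
            dst = (t.y + row) * width + t.x
            frame[dst:dst + t.w] = pixels[row * t.w:(row + 1) * t.w]
    return frame


if __name__ == "__main__":
    W, H = 640, 480
    tiles = split_into_tiles(W, H, tile=64)
    with ProcessPoolExecutor() as pool:               # each worker ~ one compute node
        frame = gather(W, H, pool.map(render_tile, tiles))
    print(f"{len(tiles)} subtasks composited into {len(frame)} pixels")
```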


2016 IEEE Second Workshop on Eye Tracking and Visualization (ETVIS) | 2016

An analysis of eye-tracking data in foveated ray tracing

Thorsten Roth; Martin Weier; André Hinkenjann; Yongmin Li; Philipp Slusallek

We present an analysis of eye tracking data produced during a quality-focused user study of our own foveated ray tracing method. Generally, foveated rendering serves the purpose of adapting actual rendering methods to a user’s gaze. This leads to performance improvements which also allow for the use of methods like ray tracing, which would be computationally too expensive otherwise, in fields like virtual reality (VR), where high rendering performance is important to achieve immersion, or fields like scientific and information visualization, where large amounts of data may hinder real-time rendering capabilities. We provide an overview of our rendering system itself as well as information about the data we collected during the user study, based on fixation tasks to be fulfilled during flights through virtual scenes displayed on a head-mounted display (HMD). We analyze the tracking data regarding its precision and take a closer look at the accuracy achieved by participants when focusing the fixation targets. This information is then put into context with the quality ratings given by the users, leading to a surprising relation between fixation accuracy and quality ratings.
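
A minimal sketch of the kind of analysis described: computing angular fixation error and correlating it with quality ratings. The data here is synthetic and the uniform pixels-per-degree mapping is a simplifying assumption, not the study's actual setup.

```python
import numpy as np


def angular_error_deg(gaze_px, target_px, px_per_degree):
    """Angular distance between measured gaze and fixation target, assuming
    an approximately uniform pixels-per-degree mapping on the HMD
    (a simplification; real HMD optics distort this)."""
    d = np.linalg.norm(np.asarray(gaze_px) - np.asarray(target_px), axis=-1)
    return d / px_per_degree


if __name__ == "__main__":
    rng = np.random.default_rng(1)
    # Synthetic stand-ins for the study data: per-trial fixation targets,
    # gaze samples and user quality ratings (1 = worst, 5 = best).
    targets = rng.uniform(200, 800, size=(50, 2))
    gaze = targets + rng.normal(0, 15, size=targets.shape)   # tracker noise
    ratings = rng.integers(1, 6, size=50)

    err = angular_error_deg(gaze, targets, px_per_degree=12.0)
    r = np.corrcoef(err, ratings)[0, 1]
    print(f"mean fixation error: {err.mean():.2f} deg, "
          f"correlation with ratings: {r:+.2f}")
```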


Procedia Computer Science | 2013

Towards VR-based Systems for School Experiments

Anton Sigitov; André Hinkenjann; Thorsten Roth

In this paper we present the steps towards a well-designed concept of a VR system for school experiments in scientific domains like physics, biology and chemistry. The steps include the analysis of system requirements in general, the analysis of school experiments and the analysis of input and output device demands. Based on the results of these steps, we show a taxonomy of school experiments and provide a comparison between several currently available devices which can be used for building such a system. We also compare the advantages and shortcomings of VR and AR systems in general to show why, in our opinion, VR systems are better suited for school use.


International Symposium on Visual Computing | 2015

Guided High-Quality Rendering

Thorsten Roth; Martin Weier; Jens Maiero; André Hinkenjann; Yongmin Li

We present a system which allows for guiding the image quality in global illumination (GI) methods by user-specified regions of interest (ROIs). This is done with either a tracked interaction device or a mouse-based method, making it possible to create a visualization with varying convergence rates throughout one image towards a GI solution. To achieve this, we introduce a scheduling approach based on Sparse Matrix Compression (SMC) for the efficient generation and distribution of rendering tasks on the GPU that allows for altering the sampling density over the image plane. Moreover, we present a prototypical approach for filtering the new, possibly sparse samples into a final image. Finally, we show how large-scale display systems can benefit from rendering with ROIs.
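
A minimal sketch of how a varying sampling density could be compacted into CSR-style task lists for scheduling, as a CPU-side stand-in for the paper's GPU-based SMC scheduling; the hard-disc ROI and the function names are illustrative assumptions.

```python
import numpy as np


def roi_sample_map(width, height, roi_center, roi_radius, base=1, roi_samples=16):
    """Per-pixel sample counts: high inside the user-specified ROI,
    a low base rate elsewhere (a hard disc is used here for brevity)."""
    ys, xs = np.mgrid[0:height, 0:width]
    inside = np.hypot(xs - roi_center[0], ys - roi_center[1]) <= roi_radius
    return np.where(inside, roi_samples, base).astype(np.int32)


def compress_rows(sample_map):
    """CSR-like compaction of the (mostly low-valued) sample map: per image
    row, store only the pixels that need extra samples, so the scheduler can
    hand out compact task lists instead of a dense matrix."""
    row_ptr = [0]
    cols, counts = [], []
    for row in sample_map:
        extra = np.nonzero(row > 1)[0]
        cols.extend(extra.tolist())
        counts.extend(row[extra].tolist())
        row_ptr.append(len(cols))
    return np.array(row_ptr), np.array(cols), np.array(counts)


if __name__ == "__main__":
    smap = roi_sample_map(320, 240, roi_center=(160, 120), roi_radius=40)
    row_ptr, cols, counts = compress_rows(smap)
    print(f"dense entries: {smap.size}, compressed tasks: {len(cols)}, "
          f"total extra samples scheduled: {int(counts.sum())}")
```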


Canadian Conference on Computer and Robot Vision | 2013

Real-Time Simulation of Camera Errors and Their Effect on Some Basic Robotic Vision Algorithms

André Hinkenjann; Thorsten Roth; Jessica Millberg; Hojun Yun; Yongmin Li

We present a real-time approximate simulation of some camera errors and the effects these errors have on some common computer vision algorithms for robots. The simulation uses a software framework for real-time post processing of image data. We analyse the performance of some basic algorithms for robotic vision when adding modifications to images due to camera errors. The result of each algorithm / error combination is presented. This simulation is useful to tune robotic algorithms to make them more robust to imperfections of real cameras.
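
A minimal sketch of post-processing an ideal image with two simple camera error models (additive sensor noise and radial vignetting) and measuring the effect on a basic gradient-based measure; the error models, parameters and function names are illustrative, not the framework's actual implementation.

```python
import numpy as np


def add_camera_errors(image, noise_sigma=0.02, vignette_strength=0.4, rng=None):
    """Post-process an ideal image with two simple error models:
    additive Gaussian sensor noise and radial vignetting
    (brightness falloff towards the image corners)."""
    rng = rng or np.random.default_rng()
    h, w = image.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    r = np.hypot(xs - w / 2, ys - h / 2) / np.hypot(w / 2, h / 2)  # 0 at center, 1 at corner
    vignette = 1.0 - vignette_strength * r ** 2
    noisy = image * vignette[..., None] + rng.normal(0, noise_sigma, image.shape)
    return np.clip(noisy, 0.0, 1.0)


def edge_strength(gray):
    """A very basic vision measure: mean gradient magnitude, standing in
    for the robotic-vision algorithms tested in the paper."""
    gx, gy = np.gradient(gray)
    return float(np.hypot(gx, gy).mean())


if __name__ == "__main__":
    rng = np.random.default_rng(2)
    clean = rng.random((240, 320, 3))                 # stand-in for a rendered frame
    degraded = add_camera_errors(clean, rng=rng)
    print(f"edge strength clean: {edge_strength(clean.mean(axis=2)):.4f}, "
          f"with camera errors: {edge_strength(degraded.mean(axis=2)):.4f}")
```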


International Symposium on Visual Computing | 2009

Efficient Strategies for Acceleration Structure Updates in Interactive Ray Tracing Applications on the Cell Processor

Martin Weier; Thorsten Roth; André Hinkenjann

We present fast complete rebuild strategies, as well as adapted intelligent local update strategies for acceleration data structures for interactive ray tracing environments. Both approaches can be combined. Although the proposed strategies could be used with other data structures and architectures as well, they are currently tailored to the Bounding Interval Hierarchy on the Cell chip.
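
A minimal sketch of a local update (refit) strategy, shown here on a generic binary hierarchy with axis-aligned bounding boxes rather than the Bounding Interval Hierarchy or the Cell processor of the paper; the structure and names are illustrative only.

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple

AABB = Tuple[Tuple[float, float, float], Tuple[float, float, float]]  # (min, max)


def union(a: AABB, b: AABB) -> AABB:
    return (tuple(min(x, y) for x, y in zip(a[0], b[0])),
            tuple(max(x, y) for x, y in zip(a[1], b[1])))


@dataclass
class Node:
    box: AABB
    left: Optional["Node"] = None
    right: Optional["Node"] = None
    prim: Optional[int] = None        # leaf: index of the (possibly moving) primitive


def refit(node: Node, prim_boxes: List[AABB]) -> AABB:
    """Local update: instead of rebuilding the tree after objects move,
    recompute bounds bottom-up while keeping the existing topology.
    This is cheap but can degrade tree quality for large deformations,
    which is why rebuild and refit strategies are combined in practice."""
    if node.prim is not None:                         # leaf
        node.box = prim_boxes[node.prim]
    else:
        node.box = union(refit(node.left, prim_boxes),
                         refit(node.right, prim_boxes))
    return node.box


if __name__ == "__main__":
    boxes = [((0, 0, 0), (1, 1, 1)), ((2, 0, 0), (3, 1, 1))]
    root = Node(box=boxes[0], left=Node(boxes[0], prim=0), right=Node(boxes[1], prim=1))
    boxes[1] = ((4, 0, 0), (5, 1, 1))                 # primitive 1 moved
    print("root bounds after refit:", refit(root, boxes))
```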


Proceedings of the 2018 ACM Symposium on Eye Tracking Research & Applications | 2018

Predicting the gaze depth in head-mounted displays using multiple feature regression

Martin Weier; Thorsten Roth; André Hinkenjann; Philipp Slusallek

Head-mounted displays (HMDs) with integrated eye trackers have opened up a new realm for gaze-contingent rendering. The accurate estimation of gaze depth is essential when modeling the optical capabilities of the eye. Most recently, multifocal displays have been gaining importance, requiring focus estimates to control displays or lenses. Deriving the gaze depth solely by sampling the scene's depth at the point-of-regard fails for complex or thin objects, as eye tracking suffers from inaccuracies. Gaze depth measures using the eyes' vergence only provide an accurate depth estimate for the first meter. In this work, we combine vergence measures and multiple depth measures into feature sets. This data is used to train a regression model to deliver improved estimates. We present a study showing that using multiple features allows for an accurate estimation of the focused depth (MSE < 0.1 m) over a wide range (the first 6 m).
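
A minimal sketch of combining vergence and depth-sample features in a regression model, using ordinary least squares on synthetic data; the feature construction and noise models are assumptions for illustration, not the paper's trained model.

```python
import numpy as np


def fit_depth_model(features, true_depth):
    """Ordinary least squares over the combined feature set
    (a simple stand-in for the paper's regression model)."""
    X = np.column_stack([features, np.ones(len(features))])   # add bias term
    coeffs, *_ = np.linalg.lstsq(X, true_depth, rcond=None)
    return coeffs


def predict_depth(coeffs, features):
    X = np.column_stack([features, np.ones(len(features))])
    return X @ coeffs


if __name__ == "__main__":
    rng = np.random.default_rng(3)
    n = 500
    true_depth = rng.uniform(0.3, 6.0, n)             # metres, the range from the abstract
    # Hypothetical features: a vergence-derived estimate (reliable only up close)
    # and two depth-buffer samples near the point of regard (noisy / occasionally wrong).
    vergence = true_depth + rng.normal(0, 0.05 + 0.2 * true_depth, n)
    depth_a = np.where(rng.random(n) < 0.9, true_depth, rng.uniform(0.3, 6.0, n))
    depth_b = true_depth + rng.normal(0, 0.1, n)
    features = np.column_stack([vergence, depth_a, depth_b])

    coeffs = fit_depth_model(features[:400], true_depth[:400])
    pred = predict_depth(coeffs, features[400:])
    mse = float(np.mean((pred - true_depth[400:]) ** 2))
    print(f"held-out MSE: {mse:.3f}")
```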


2015 IEEE 8th Workshop on Software Engineering and Architectures for Realtime Interactive Systems (SEARIS) | 2015

Enabling global illumination rendering on large, high-resolution displays

Anton Sigitov; Thorsten Roth; André Hinkenjann

The simulation of global illumination for Virtual Reality (VR) applications is a challenging process. The main reason for this is the high computational complexity of the Monte Carlo integration, which makes sufficient frame rates hard to achieve. It is even more challenging to adapt this process for large, high-resolution displays (LHRDs), because the resolution of the image to be generated is huge compared to that of a single display. One possibility to decrease rendering time without worsening image quality is to involve additional computational nodes. The process of image creation has to be split into multiple rendering tasks which may be computed independently. The resulting image data has to be conveyed to the display nodes, where it is combined and passed to the corresponding output devices. In this paper we introduce an extended version of our software interface which allows for integrating a flexible distributed rendering approach into VR frameworks, thus enabling high-quality, realistic image generation on LHRDs. The interface describes a software architecture which realizes the communication between manager, computational and display nodes, including rendering subtask and data distribution, and allows for the implementation of different load-balancing methods.
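
A minimal sketch of one piece of such an architecture: the manager routing finished tiles to the display nodes whose viewports they overlap. The Rect class, node names and display-wall layout are hypothetical, not the interface's actual design.

```python
from dataclasses import dataclass
from typing import Dict, List


@dataclass(frozen=True)
class Rect:
    x: int
    y: int
    w: int
    h: int

    def overlaps(self, other: "Rect") -> bool:
        return (self.x < other.x + other.w and other.x < self.x + self.w and
                self.y < other.y + other.h and other.y < self.y + self.h)


def assign_tiles_to_displays(tiles: List[Rect], displays: Dict[str, Rect]) -> Dict[str, List[Rect]]:
    """Manager side: route each finished rendering subtask (a tile of the
    overall framebuffer) to every display node whose viewport it overlaps,
    where the image data is combined and shown on the attached output device."""
    routing: Dict[str, List[Rect]] = {name: [] for name in displays}
    for tile in tiles:
        for name, viewport in displays.items():
            if tile.overlaps(viewport):
                routing[name].append(tile)
    return routing


if __name__ == "__main__":
    # Hypothetical 2x1 display wall with one 1920x1080 display node per segment.
    displays = {"node-0": Rect(0, 0, 1920, 1080), "node-1": Rect(1920, 0, 1920, 1080)}
    tiles = [Rect(x, y, 256, 256) for y in range(0, 1080, 256) for x in range(0, 3840, 256)]
    for name, assigned in assign_tiles_to_displays(tiles, displays).items():
        print(f"{name}: {len(assigned)} tiles")
```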

Collaboration


Dive into Thorsten Roth's collaboration network.

Top Co-Authors

André Hinkenjann (Bonn-Rhein-Sieg University of Applied Sciences)

Martin Weier (Bonn-Rhein-Sieg University of Applied Sciences)

Yongmin Li (Brunel University London)

Anton Sigitov (Bonn-Rhein-Sieg University of Applied Sciences)

Ernst Kruijff (Bonn-Rhein-Sieg University of Applied Sciences)

Jens Maiero (Bonn-Rhein-Sieg University of Applied Sciences)