Publication


Featured research published by Markus Billeter.


High Performance Graphics | 2009

Efficient stream compaction on wide SIMD many-core architectures

Markus Billeter; Ola Olsson; Ulf Assarsson

Stream compaction is a common parallel primitive used to remove unwanted elements from sparse data. This allows highly parallel algorithms to maintain performance over several processing steps and reduces overall memory usage. For wide SIMD many-core architectures, we present a novel stream compaction algorithm and explore several variations thereof. Our algorithm is designed to maximize concurrent execution, with minimal use of synchronization. Bandwidth and auxiliary storage requirements are reduced significantly, which allows for substantially better performance. We have tested our algorithms using CUDA on a PC with an NVIDIA GeForce GTX 280 GPU. On this hardware, our reference implementation provides a 3x speedup over previously published algorithms.
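
To make the primitive concrete, here is a minimal sequential C++ sketch of stream compaction (validity flags, exclusive prefix sum, scatter). It only illustrates the operation itself; the paper's contribution is a parallel formulation of these passes for wide SIMD many-core hardware, which this sketch does not attempt to reproduce, and the keep-non-zero predicate is an arbitrary example.

```cpp
#include <cstddef>
#include <cstdio>
#include <vector>

// Sequential sketch of stream compaction: keep only "valid" elements.
// The paper parallelizes this idea (flag, prefix-sum, scatter) across wide
// SIMD units; this CPU version only illustrates the primitive itself.
std::vector<int> compact(const std::vector<int>& input)
{
    // Pass 1: compute validity flags (here: keep non-zero elements).
    std::vector<int> valid(input.size());
    for (std::size_t i = 0; i < input.size(); ++i)
        valid[i] = (input[i] != 0) ? 1 : 0;

    // Pass 2: an exclusive prefix sum of the flags gives each kept element's
    // output position.
    std::vector<std::size_t> offset(input.size());
    std::size_t count = 0;
    for (std::size_t i = 0; i < input.size(); ++i) { offset[i] = count; count += valid[i]; }

    // Pass 3: scatter the valid elements to their compacted positions.
    std::vector<int> output(count);
    for (std::size_t i = 0; i < input.size(); ++i)
        if (valid[i]) output[offset[i]] = input[i];
    return output;
}

int main()
{
    std::vector<int> data = {3, 0, 7, 0, 0, 1, 4};
    for (int v : compact(data)) std::printf("%d ", v);  // prints: 3 7 1 4
    return 0;
}
```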


Computer Graphics Forum | 2011

Two‐Level Grids for Ray Tracing on GPUs

Javor Kalojanov; Markus Billeter; Philipp Slusallek

We investigate the use of two‐level nested grids as an acceleration structure for ray tracing of dynamic scenes. We propose a massively parallel, sort‐based construction algorithm and show that the two‐level grid is one of the structures that is fastest to construct on modern graphics processors. The structure handles non‐uniform primitive distributions more robustly than the uniform grid, and its traversal performance is comparable to that of other high-quality acceleration structures used for dynamic scenes. We propose a cost model to determine the grid resolution and improve SIMD utilization during ray‐triangle intersection by employing a hybrid packetization strategy. The build times and ray traversal acceleration provide overall rendering performance superior to previous approaches for real-time rendering of animated scenes on GPUs.
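
The following sketch illustrates the sort-based construction idea in sequential C++: each primitive emits one (cell index, primitive id) pair per overlapped cell of a coarse uniform grid, and sorting the pairs groups the references per cell. The grid resolution, the AABB primitive type and the example data are illustrative assumptions; the paper's GPU parallelization and the second, per-cell grid level are omitted.

```cpp
#include <algorithm>
#include <cstdio>
#include <utility>
#include <vector>

// Hypothetical axis-aligned bounding box over the unit cube [0,1]^3.
struct AABB { float min[3], max[3]; };

int main()
{
    const int res = 4;                       // 4x4x4 top-level grid
    std::vector<AABB> prims = {
        {{0.10f, 0.10f, 0.10f}, {0.20f, 0.20f, 0.20f}},
        {{0.60f, 0.05f, 0.30f}, {0.95f, 0.40f, 0.35f}},
    };

    // Emit one (cellIndex, primitiveId) pair per overlapped cell.
    std::vector<std::pair<int, int>> pairs;
    for (int p = 0; p < (int)prims.size(); ++p) {
        int lo[3], hi[3];
        for (int a = 0; a < 3; ++a) {
            lo[a] = std::max(0, std::min(res - 1, (int)(prims[p].min[a] * res)));
            hi[a] = std::max(0, std::min(res - 1, (int)(prims[p].max[a] * res)));
        }
        for (int z = lo[2]; z <= hi[2]; ++z)
            for (int y = lo[1]; y <= hi[1]; ++y)
                for (int x = lo[0]; x <= hi[0]; ++x)
                    pairs.push_back({(z * res + y) * res + x, p});
    }

    // Sorting by cell index groups primitive references per cell.
    std::sort(pairs.begin(), pairs.end());
    for (auto& pr : pairs)
        std::printf("cell %2d <- primitive %d\n", pr.first, pr.second);
    return 0;
}
```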


High Performance Graphics | 2012

Clustered deferred and forward shading

Ola Olsson; Markus Billeter; Ulf Assarsson

This paper presents and investigates Clustered Shading for deferred and forward rendering. In Clustered Shading, view samples with similar properties (e.g. 3D-position and/or normal) are grouped into clusters. This is comparable to tiled shading, where view samples are grouped into tiles based on 2D-position only. We show that Clustered Shading creates a better mapping of light sources to view samples than tiled shading, resulting in a significant reduction of lighting computations during shading. Additionally, Clustered Shading enables using normal information to perform per-cluster back-face culling of lights, again reducing the number of lighting computations. We also show that Clustered Shading not only outperforms tiled shading in many scenes, but also exhibits better worst case behaviour under tricky conditions (e.g. when looking at high-frequency geometry with large discontinuities in depth). Additionally, Clustered Shading enables real-time scenes with two to three orders of magnitude more lights than previously feasible (up to around one million light sources).
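
As an illustration of the clustering step, the sketch below assigns a view sample to a cluster key from its screen tile and a depth slice. The tile size, the logarithmic slice formula and the key packing are assumptions made for the example, not values taken from the paper.

```cpp
#include <algorithm>
#include <cmath>
#include <cstdint>
#include <cstdio>

// Sketch of cluster-key assignment: a view sample is mapped to a 2D screen
// tile plus a depth slice, so nearby samples share one cluster. Tile size
// (64 px) and the logarithmic slicing are illustrative assumptions.
uint32_t clusterKey(int px, int py, float viewDepth,
                    float nearZ, float farZ, int numSlices)
{
    const int tileSize = 64;
    int tx = px / tileSize;
    int ty = py / tileSize;

    // Logarithmic depth slicing keeps clusters roughly cubical in view space.
    float t = std::log(viewDepth / nearZ) / std::log(farZ / nearZ);
    int slice = std::min(numSlices - 1, std::max(0, (int)(t * numSlices)));

    // Pack (tx, ty, slice) into one key; field widths are arbitrary here.
    return (uint32_t)slice | ((uint32_t)ty << 8) | ((uint32_t)tx << 20);
}

int main()
{
    std::printf("key = 0x%08x\n", clusterKey(640, 360, 25.0f, 0.1f, 1000.0f, 16));
    return 0;
}
```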


Interactive 3D Graphics and Games | 2014

Efficient virtual shadow maps for many lights

Ola Olsson; Erik Sintorn; Viktor Kämpe; Markus Billeter; Ulf Assarsson

Recently, several algorithms have been introduced that enable real-time performance for many lights in applications such as games. In this paper, we explore the use of hardware-supported virtual cube-map shadows to efficiently implement high-quality shadows from hundreds of light sources in real time and within a bounded memory footprint. In addition, we explore the utility of ray tracing for shadows from many lights and present a hybrid algorithm combining ray tracing with cube maps to exploit their respective strengths. Our solution supports real-time performance with hundreds of lights in fully dynamic high-detail scenes.
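
For context, the sketch below shows the basic cube-map lookup that underlies omnidirectional point-light shadows: picking the face from the dominant axis of the light-to-point direction and projecting the remaining components to face UVs. The paper's actual contributions (virtual, sparsely allocated shadow maps and the hybrid with ray tracing) are not reproduced here; the face/UV convention follows the common graphics-API layout.

```cpp
#include <cmath>
#include <cstdio>

// Cube-map face selection for a point-light shadow lookup: given the direction
// from the light to the shaded point, pick the dominant axis as the face and
// project the remaining two components to face UVs in [0,1].
struct CubeSample { int face; float u, v; };

CubeSample selectCubeFace(float dx, float dy, float dz)
{
    float ax = std::fabs(dx), ay = std::fabs(dy), az = std::fabs(dz);
    CubeSample s;
    float ma, sc, tc;
    if (ax >= ay && ax >= az) { s.face = dx > 0 ? 0 : 1; ma = ax; sc = dx > 0 ? -dz :  dz; tc = -dy; }
    else if (ay >= az)        { s.face = dy > 0 ? 2 : 3; ma = ay; sc =  dx;                tc = dy > 0 ? dz : -dz; }
    else                      { s.face = dz > 0 ? 4 : 5; ma = az; sc = dz > 0 ?  dx : -dx; tc = -dy; }
    s.u = 0.5f * (sc / ma + 1.0f);
    s.v = 0.5f * (tc / ma + 1.0f);
    return s;
}

int main()
{
    CubeSample s = selectCubeFace(0.2f, -0.9f, 0.3f);
    std::printf("face %d, uv = (%.2f, %.2f)\n", s.face, s.u, s.v);
    return 0;
}
```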


Computer Graphics Forum | 2017

Makeup Lamps: Live Augmentation of Human Faces via Projection

Amit Bermano; Markus Billeter; Daisuke Iwai; Anselm Grundhöfer

We propose the first system for live dynamic augmentation of human faces. Using projector‐based illumination, we alter the appearance of human performers during novel performances. The key challenge of live augmentation is latency: an image is generated according to a specific pose, but is displayed on a different facial configuration by the time it is projected. Therefore, our system aims at reducing latency during every step of the process, from capture, through processing, to projection. Using infrared illumination, an optically and computationally aligned high‐speed camera detects facial orientation as well as expression. The estimated expression blendshapes are mapped onto a lower-dimensional space, and the facial motion and non‐rigid deformation are estimated, smoothed and predicted through adaptive Kalman filtering. Finally, the desired appearance is generated by interpolating precomputed offset textures according to time, global position, and expression. We have evaluated our system through an optimized CPU and GPU prototype, and demonstrated successful low-latency augmentation for different performers and performances with varying facial play and motion speed. In contrast to existing methods, the presented system is the first to fully support dynamic facial projection mapping without requiring any physical tracking markers, and it incorporates facial expressions.
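
To illustrate the prediction component, here is a minimal constant-velocity Kalman filter for a single scalar parameter, of the kind one could use to extrapolate a pose or blendshape coefficient ahead of the projection latency. The noise values, time step and scalar state are illustrative assumptions; the paper's adaptive filtering over the full blendshape space is not reproduced.

```cpp
#include <cstdio>

// Constant-velocity Kalman filter for one scalar parameter: state = (value,
// velocity). Noise magnitudes and time steps are illustrative only.
struct Kalman1D {
    double x[2]    = {0.0, 0.0};                   // state: value, velocity
    double P[2][2] = {{1.0, 0.0}, {0.0, 1.0}};     // state covariance
    double q = 1e-3;                               // process noise
    double r = 1e-2;                               // measurement noise

    void predict(double dt) {
        // x' = F x with F = [[1, dt], [0, 1]]
        x[0] += dt * x[1];
        // P' = F P F^T + Q (Q applied to the diagonal only, for brevity)
        double p00 = P[0][0] + dt * (P[1][0] + P[0][1]) + dt * dt * P[1][1] + q;
        double p01 = P[0][1] + dt * P[1][1];
        double p10 = P[1][0] + dt * P[1][1];
        double p11 = P[1][1] + q;
        P[0][0] = p00; P[0][1] = p01; P[1][0] = p10; P[1][1] = p11;
    }

    void update(double z) {                        // measurement of the value only
        double s  = P[0][0] + r;                   // innovation covariance
        double k0 = P[0][0] / s, k1 = P[1][0] / s; // Kalman gain
        double y  = z - x[0];                      // innovation
        x[0] += k0 * y; x[1] += k1 * y;
        double p00 = (1 - k0) * P[0][0], p01 = (1 - k0) * P[0][1];
        double p10 = P[1][0] - k1 * P[0][0], p11 = P[1][1] - k1 * P[0][1];
        P[0][0] = p00; P[0][1] = p01; P[1][0] = p10; P[1][1] = p11;
    }
};

int main()
{
    Kalman1D kf;
    double measurements[] = {0.0, 0.1, 0.22, 0.29, 0.41};   // 60 Hz samples
    for (double z : measurements) { kf.predict(1.0 / 60.0); kf.update(z); }
    kf.predict(10e-3);                             // extrapolate 10 ms ahead
    std::printf("predicted value: %.3f\n", kf.x[0]);
    return 0;
}
```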


Interactive 3D Graphics and Games | 2012

Real-time multiple scattering using light propagation volumes

Markus Billeter; Erik Sintorn; Ulf Assarsson

This paper introduces a new GPU-based, real-time method for rendering volumetric lighting effects produced by scattering in a participating medium. The method includes support for indirect illumination by scattered light, high-quality single-scattered volumetric shadows, and approximate multiple scattered volumetric lighting effects in isotropic and homogeneous media. The method builds upon an improved propagation scheme for light propagation volumes. This scheme models scattering according to the radiative light transfer equation during propagation. The initial state of the light propagation volumes is based on single-scattered light identified with shadow maps; this allows generation of a high quality initial distribution of radiance. After propagation, the resulting distribution is used as a source of diffuse light during rendering and is also ray marched for volumetric effects from multiple scattering. Volumetric shadows from single-scattered light are rendered separately. We compare the new method to single-scattered volumetric shadows produced by contemporary techniques, plain light propagation volumes (which this new method extends), and a simple composition thereof.
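
The toy example below conveys only the iterative propagation idea: a scalar radiance value diffuses to the six axial neighbours of each cell of a small volume, as in an isotropic, homogeneous medium. The paper's actual scheme propagates a directional (spherical-harmonics) radiance distribution per cell and is seeded from shadow maps; the grid size and scattering fraction used here are arbitrary.

```cpp
#include <algorithm>
#include <cstdio>
#include <utility>
#include <vector>

int main()
{
    const int N = 16;                       // N^3 radiance volume
    std::vector<float> src(N * N * N, 0.0f), dst(N * N * N, 0.0f);
    auto idx = [&](int x, int y, int z) { return (z * N + y) * N + x; };

    src[idx(8, 8, 8)] = 100.0f;             // inject light at one cell

    const float scatter = 0.15f;            // fraction passed to each neighbour
    for (int iter = 0; iter < 8; ++iter) {
        std::fill(dst.begin(), dst.end(), 0.0f);
        for (int z = 0; z < N; ++z)
            for (int y = 0; y < N; ++y)
                for (int x = 0; x < N; ++x) {
                    float L = src[idx(x, y, z)];
                    dst[idx(x, y, z)] += L * (1.0f - 6.0f * scatter);
                    if (x > 0)     dst[idx(x - 1, y, z)] += L * scatter;
                    if (x < N - 1) dst[idx(x + 1, y, z)] += L * scatter;
                    if (y > 0)     dst[idx(x, y - 1, z)] += L * scatter;
                    if (y < N - 1) dst[idx(x, y + 1, z)] += L * scatter;
                    if (z > 0)     dst[idx(x, y, z - 1)] += L * scatter;
                    if (z < N - 1) dst[idx(x, y, z + 1)] += L * scatter;
                }
        std::swap(src, dst);                // propagated volume becomes the source
    }
    std::printf("radiance one cell from the source: %.3f\n", src[idx(9, 8, 8)]);
    return 0;
}
```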


International Conference on Computer Graphics and Interactive Techniques | 2016

Deferred vector map visualization

Matthias Thöny; Markus Billeter; Renato Pajarola

Interactive rendering of large-scale vector maps is a key challenge for high-quality geographic visualization software systems. In this paper we present a novel approach for the visualization of large-scale vector maps over detailed height-field terrains. Our method uses a deferred line shading approach to render large-scale vector maps directly in a screen-space shading stage over a terrain visualization. Because no traditional geometric polygonal rendering is involved, our algorithm outperforms conventional vector map rendering algorithms for geographic information systems. Our flexible clustered deferred line rendering approach allows a user to interactively customize and apply advanced vector styling methods, and supports integration into a vector map level-of-detail system.
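
The core screen-space test behind deferred line shading can be sketched as follows: each pixel measures its distance to a map segment and is coloured when it lies within half the stroke width, so no line geometry is rasterised. The segment, stroke width and tiny ASCII "framebuffer" are illustrative; the paper's clustering of segments, terrain projection and styling system are omitted.

```cpp
#include <algorithm>
#include <cmath>
#include <cstdio>

// Distance from a pixel centre to a 2D line segment (a, b) in screen space.
float distToSegment(float px, float py, float ax, float ay, float bx, float by)
{
    float abx = bx - ax, aby = by - ay;
    float t = ((px - ax) * abx + (py - ay) * aby) / (abx * abx + aby * aby);
    t = std::max(0.0f, std::min(1.0f, t));
    float cx = ax + t * abx - px, cy = ay + t * aby - py;
    return std::sqrt(cx * cx + cy * cy);
}

int main()
{
    const float width = 2.0f;               // stroke width in pixels
    // One hypothetical road segment in screen space.
    float ax = 3.0f, ay = 2.0f, bx = 25.0f, by = 12.0f;

    for (int y = 0; y < 16; ++y) {          // tiny 32x16 "framebuffer"
        for (int x = 0; x < 32; ++x)
            std::putchar(distToSegment((float)x, (float)y, ax, ay, bx, by)
                             <= 0.5f * width ? '#' : '.');
        std::putchar('\n');
    }
    return 0;
}
```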


IEEE Transactions on Visualization and Computer Graphics | 2015

More Efficient Virtual Shadow Maps for Many Lights

Ola Olsson; Markus Billeter; Erik Sintorn; Viktor Kämpe; Ulf Assarsson

Recently, several algorithms have been introduced that enable real-time performance for many lights in applications such as games. In this paper, we explore the use of hardware-supported virtual cube-map shadows to efficiently implement high-quality shadows from hundreds of light sources in real time and within a bounded memory footprint. In addition, we explore the utility of ray tracing for shadows from many lights and present a hybrid algorithm combining ray tracing with cube maps to exploit their respective strengths. Our solution supports real-time performance with hundreds of lights in fully dynamic high-detail scenes.


International Conference on Computer Graphics and Interactive Techniques | 2012

Tiled and clustered forward shading: supporting transparency and MSAA

Ola Olsson; Markus Billeter; Ulf Assarsson

We present details of Tiled and Clustered Forward Shading as applied to rendering transparent geometry and to multisample anti-aliasing (MSAA). We detail how transparency and MSAA are supported, and present performance results measured on modern GPUs.

Previous techniques for handling large numbers of lights are usually based on deferred shading [Andersson 2009; Lauritzen 2010]. However, deferred shading struggles with impractically large frame buffers when MSAA is used and makes supporting transparency difficult; it also makes it harder to support custom shaders on geometry. Tiled Forward Shading is a new and highly practical approach to real-time shading of scenes with thousands of light sources, introduced by Olsson and Assarsson [2011]. Their results, measured on a GTX 280 GPU, indicated that tiled forward shading was impractically slow, but performance on more recent GPUs has improved considerably (approaching that of tiled deferred), which opens up the possibility of using the technique to support transparency and MSAA. Clustered Shading further extends tiled shading by adding depth partitioning [Olsson et al. 2012]. We show how Clustered Forward Shading can be extended to support transparency efficiently.

Forward shading naturally supports both transparency and MSAA, as shown in previous work, but the performance and implementation details have not previously been investigated. We provide details on how to construct the light grid for use with transparency. When transparent geometry is considered, the depth range optimization cannot be fully used; instead, only a more conventional hierarchical depth test can be applied. The grid structure can be built once and quickly pruned to prepare a more efficient instance for opaque geometry. However, as each transparent layer must consider all the lights in its tile, performance does not scale linearly with depth complexity, but far worse (Figure 1, right). To improve on this, we extend clustered forward shading by constructing the grid using a pre-pass over all geometry (not just opaque) and flagging clusters as a side effect, which allows us to quickly find the unique clusters used. As clusters contain only the space around actual samples that need shading, efficiency is much better (Figure 1, left).

For deferred shading, a single 1080p, 16x MSAA, 16-bit float RGBA buffer requires over 250 MB of memory. In addition, each sample may need to be shaded individually, effectively running shading at a per-sample frequency. For forward shading, no G-buffers are required and MSAA is trivially enabled. A brief performance and memory comparison is shown in Figure 2: clustered forward outperforms tiled forward by more than a factor of two, and also outperforms tiled deferred when MSAA is used.
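
The memory figure quoted above can be checked with a short calculation; the snippet below assumes RGBA16F storage (4 channels x 2 bytes) at 1920x1080 with 16 samples per pixel.

```cpp
#include <cstdio>

// Worked check of the quoted G-buffer size: one 1080p render target with
// 16x MSAA and 16-bit float RGBA storage.
int main()
{
    const long long width = 1920, height = 1080;
    const long long samplesPerPixel = 16;         // 16x MSAA
    const long long bytesPerSample  = 4 * 2;      // RGBA16F = 4 channels * 2 bytes

    long long bytes = width * height * samplesPerPixel * bytesPerSample;
    std::printf("%lld bytes = %.1f MiB\n", bytes, bytes / (1024.0 * 1024.0));
    // 1920 * 1080 * 16 * 8 = 265,420,800 bytes, roughly 253 MiB -- and a deferred
    // renderer typically needs several such G-buffer targets, not just one.
    return 0;
}
```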


International Symposium on Mixed and Augmented Reality | 2016

A LED-Based IR/RGB End-to-End Latency Measurement Device

Markus Billeter; Gerhard Rothlin; Jan Wezel; Daisuke Iwai; Anselm Grundhöfer

Achieving minimal latency within augmented reality (AR) systems is one of the most important factors in creating a convincing visual impression. It is even more crucial for non-video augmentations such as dynamic projection mappings, because there the superimposed imagery has to exactly match the dynamic real surface, which obviously cannot be directly influenced or delayed in its movement. In those cases, the inevitable latency is usually compensated for using prediction and extrapolation operations, which require accurate information about the overall latency in order to predict the correct time frame for the augmentation. Different strategies have been applied to accurately compute this latency. Since some of these AR systems operate in different spectral bands for input and output, it is not possible to apply latency measurement methods that encode time stamps directly into the presented output images, as these might not be sensed by the input device used. We present a generic latency measurement device which can be used to accurately measure the overall end-to-end latency of camera-based AR systems with an accuracy below one millisecond. It comprises an LED-based time-stamp generator that displays the time as a Gray code at multiple spatial and spectral locations. It is controlled by a microcontroller and sensed by an external camera that observes the output display and the LED device at the same time.
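
As a sketch of the time-stamp encoding, the snippet below shows standard reflected-binary Gray code, where consecutive counter values differ in exactly one bit, which is a common reason for choosing Gray codes in such measurement setups. The bit width, LED layout and any device-specific details are assumptions, not taken from the paper.

```cpp
#include <cstdint>
#include <cstdio>

// Reflected-binary Gray code: consecutive values differ in exactly one bit.
uint32_t toGray(uint32_t n) { return n ^ (n >> 1); }

// Decode by folding the bits back down with successive XOR shifts.
uint32_t fromGray(uint32_t g)
{
    uint32_t n = g;
    for (uint32_t shift = 1; shift < 32; shift <<= 1)
        n ^= n >> shift;
    return n;
}

int main()
{
    for (uint32_t t = 0; t < 8; ++t)
        std::printf("t=%u  gray=%u  decoded=%u\n",
                    (unsigned)t, (unsigned)toGray(t), (unsigned)fromGray(toGray(t)));
    return 0;
}
```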

Collaboration


Dive into Markus Billeter's collaborations.

Top Co-Authors

Ulf Assarsson (Chalmers University of Technology)
Ola Olsson (Chalmers University of Technology)
Erik Sintorn (Chalmers University of Technology)
Viktor Kämpe (Chalmers University of Technology)
Sverker Rasmuson (Chalmers University of Technology)