Publication


Featured research published by Michael Schwärzler.


ACM Transactions on Graphics | 2013

O-snap: Optimization-based snapping for modeling architecture

Murat Arikan; Michael Schwärzler; Simon Flöry; Michael Wimmer; Stefan Maierhofer

In this article, we introduce a novel reconstruction and modeling pipeline to create polygonal models from unstructured point clouds. We propose an automatic polygonal reconstruction that can then be interactively refined by the user. An initial model is automatically created by extracting a set of RANSAC-based locally fitted planar primitives along with their boundary polygons, and then searching for local adjacency relations among parts of the polygons. The extracted set of adjacency relations is enforced to snap polygon elements together, while simultaneously fitting to the input point cloud and ensuring the planarity of the polygons. This optimization-based snapping algorithm may also be interleaved with user interaction. This allows the user to sketch modifications with coarse and loose 2D strokes, as the exact alignment of the polygons is automatically performed by the snapping. The generated models are coarse, offer simple editing possibilities by design, and are suitable for interactive 3D applications like games, virtual environments, etc. The main innovation in our approach lies in the tight coupling between interactive input and automatic optimization, as well as in an algorithm that robustly discovers the set of adjacency relations.
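The RANSAC-based plane extraction mentioned above can be illustrated with a minimal sketch: repeatedly fit a plane to three random points and keep the one with the most inliers. The function name and parameters are hypothetical; the paper's actual pipeline additionally derives boundary polygons and adjacency relations.

```python
import numpy as np

def ransac_plane(points, iters=200, eps=0.02, rng=None):
    """Fit a dominant plane (n, d) with n.p + d ~ 0 to a point cloud via RANSAC."""
    rng = rng or np.random.default_rng(0)
    best_inliers = np.zeros(len(points), dtype=bool)
    best_plane = None
    for _ in range(iters):
        a, b, c = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(b - a, c - a)
        norm = np.linalg.norm(n)
        if norm < 1e-12:                     # degenerate (collinear) sample
            continue
        n /= norm
        d = -n.dot(a)
        inliers = np.abs(points @ n + d) < eps   # distance-to-plane test
        if inliers.sum() > best_inliers.sum():
            best_inliers, best_plane = inliers, (n, d)
    return best_plane, best_inliers
```

In the full method, each such primitive's inliers would then be projected to the plane and bounded by a polygon before the snapping optimization runs.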


International Symposium on Visual Computing | 2009

Real-Time Soft Shadows Using Temporal Coherence

Daniel Scherzer; Michael Schwärzler; Oliver Mattausch; Michael Wimmer

A vast number of soft shadow map algorithms have been presented in recent years. Most use a single-sample hard shadow map together with some clever filtering technique to calculate perceptually or even physically plausible soft shadows. On the other hand, there is the class of much slower algorithms that calculate physically correct soft shadows by taking and combining many samples of the light. In this paper, we present a new soft shadow method that combines the benefits of these two approaches. It samples the light source over multiple frames instead of within a single frame, creating only a single shadow map each frame. Where temporal coherence is low, we use spatial filtering to estimate additional samples to create correct and very fast soft shadows.
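The core idea, one hard-shadow sample of the area light per frame accumulated into a running soft-shadow estimate, can be shown with a toy sketch. The sample placement and function names are assumptions for illustration, not the paper's implementation.

```python
import math

def light_samples(n):
    """n sample positions on a unit-square area light (a regular grid here;
    the paper distributes one such sample per rendered frame)."""
    k = math.isqrt(n)
    return [((i + 0.5) / k, (j + 0.5) / k) for i in range(k) for j in range(k)]

def accumulate_soft_shadow(visible, samples):
    """One hard-shadow lookup per 'frame'; the running mean converges to the
    soft shadow value (the light-area average of the visibility function)."""
    acc = 0.0
    for frame, s in enumerate(samples, 1):
        hard = 1.0 if visible(s) else 0.0   # hard shadow map result this frame
        acc += (hard - acc) / frame         # incremental average over frames
    return acc
```

The paper's contribution is making this accumulation robust under scene motion, falling back to spatial filtering where the history is invalid.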


Interactive 3D Graphics and Games | 2013

Fast percentage closer soft shadows using temporal coherence

Michael Schwärzler; Christian Luksch; Daniel Scherzer; Michael Wimmer

We propose a novel way to efficiently calculate soft shadows in real-time applications by overcoming the high computational effort of the complex visibility estimation required each frame: we exploit the temporal coherence prevalent in typical scene movement, making the estimation of a new shadow value necessary only when regions are newly disoccluded due to camera adjustment, or when the shadow situation changes due to object movement. By extending the typical shadow mapping algorithm with an additional lightweight buffer for tracking dynamic scene objects, we can robustly and efficiently detect all screen-space fragments that need to be updated, including not only the moving objects themselves, but also the soft shadows they cast. By applying this strategy to the popular Percentage Closer Soft Shadows (PCSS) algorithm, we double rendering performance in scenes with both static and dynamic objects -- as prevalent in various 3D game levels -- while maintaining the visual quality of the original approach.
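For context, the underlying PCSS algorithm that this paper accelerates proceeds in three steps: blocker search, penumbra-width estimation, and percentage-closer filtering. A minimal CPU sketch on a depth array follows; the square search regions and parameter names are simplifications, not the paper's GPU code.

```python
import numpy as np

def pcss_shadow(shadow_map, x, y, receiver_z, light_size=4.0):
    """PCSS at shadow-map texel (x, y): returns visibility in [0, 1]."""
    def region(r):
        r = int(max(1, round(r)))
        return shadow_map[max(0, y - r):y + r + 1, max(0, x - r):x + r + 1]

    # 1) Blocker search: average depth of texels closer to the light.
    search = region(light_size)
    blockers = search[search < receiver_z]
    if blockers.size == 0:
        return 1.0                                  # fully lit
    d_blocker = blockers.mean()
    # 2) Penumbra width from similar triangles (parallel-planes model).
    penumbra = (receiver_z - d_blocker) / d_blocker * light_size
    # 3) PCF over the penumbra-sized kernel: fraction of unshadowed texels.
    kernel = region(penumbra)
    return float((kernel >= receiver_z).mean())
```

The temporal-coherence extension avoids re-running these three steps for fragments whose shadow situation is unchanged since the previous frame.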


Interactive 3D Graphics and Games | 2013

Fast light-map computation with virtual polygon lights

Christian Luksch; Robert F. Tobler; Ralf Habel; Michael Schwärzler; Michael Wimmer

We propose a new method for the fast computation of light maps using a many-light global-illumination solution. A complete scene can be light mapped on the order of seconds to minutes, allowing fast and consistent previews for editing or even generation at loading time. In our method, virtual point lights are clustered into a set of virtual polygon lights, which represent a compact description of the illumination in the scene. The actual light-map generation is performed directly on the GPU. Our approach degrades gracefully, avoiding objectionable artifacts even for very short computation times.
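The clustering idea, merging virtual point lights that share a supporting plane into a compact "polygon light", might be sketched greedily as below. This is a hypothetical simplification; the paper's clustering and GPU light-map generation are more involved.

```python
import numpy as np

def cluster_vpls_by_plane(vpls, eps=1e-3):
    """Greedily merge virtual point lights (position, normal, intensity) that
    lie on a common plane into clusters: (plane, summed power, member positions)."""
    clusters = []
    for pos, normal, intensity in vpls:
        for c in clusters:
            n, d = c["plane"]
            # Same supporting plane: on-plane and near-parallel normal.
            if abs(n @ pos + d) < eps and n @ normal > 0.99:
                c["power"] += intensity
                c["members"].append(pos)
                break
        else:
            clusters.append({"plane": (normal, -normal @ pos),
                             "power": intensity, "members": [pos]})
    return clusters
```

Each cluster then acts as a single area light in the many-light solution, which is what keeps the light-map computation fast.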


IEEE Transactions on Visualization and Computer Graphics | 2016

LiteVis: Integrated Visualization for Simulation-Based Decision Support in Lighting Design

Johannes Sorger; Thomas Ortner; Christian Luksch; Michael Schwärzler; M. Eduard Gröller; Harald Piringer

State-of-the-art lighting design is based on physically accurate lighting simulations of scenes such as offices. The simulation results support lighting designers in the creation of lighting configurations, which must meet contradicting customer objectives regarding quality and price while conforming to industry standards. However, current tools for lighting design impede rapid feedback cycles: on the one hand, they decouple analysis and simulation specification; on the other, they lack capabilities for a detailed comparison of multiple configurations. The primary contribution of this paper is a design study of LiteVis, a system for efficient decision support in lighting design. LiteVis tightly integrates global-illumination-based lighting simulation, a spatial representation of the scene, and non-spatial visualizations of parameters and result indicators. This enables an efficient iterative cycle of simulation parametrization and analysis. Specifically, a novel visualization supports decision making by ranking simulated lighting configurations with regard to a weight-based prioritization of objectives that considers both spatial and non-spatial characteristics. In the spatial domain, novel concepts support a detailed comparison of illumination scenarios. We demonstrate LiteVis using a real-world use case and report qualitative feedback from lighting designers. This feedback indicates that LiteVis successfully supports lighting designers in achieving key tasks more efficiently and with greater certainty.
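The weight-based ranking of simulated configurations can be pictured with a toy scorer. The indicator names and the linear scoring model below are assumptions for illustration, not LiteVis internals.

```python
def rank_configurations(configs, weights):
    """Rank lighting configurations by a weighted sum of normalized result
    indicators (each in [0, 1], higher = better). 'configs' maps a
    configuration name to a dict {indicator: value}."""
    def score(indicators):
        return sum(weights.get(k, 0.0) * v for k, v in indicators.items())
    return sorted(configs, key=lambda name: score(configs[name]), reverse=True)
```

Re-weighting the objectives (e.g. prioritizing cost over uniformity) immediately reorders the ranking, which is the interaction loop the paper describes.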


Eurographics | 2011

Reconstructing Buildings as Textured Low Poly Meshes from Point Clouds and Images

Irene Reisner-Kollmann; Christian Luksch; Michael Schwärzler

Current urban building reconstruction techniques rely mainly on data gathered from either laser scans or image-based approaches, and usually require a large amount of manual post-processing and modeling. Difficulties arise due to erroneous and noisy data, and due to the huge amount of information to process. We propose a system that helps to overcome these time-consuming steps by automatically generating low-poly 3D building models. This is achieved by taking both point cloud and image information into account, exploiting the particular strengths and avoiding the relative weaknesses of these data sources: while the segmented point cloud is used to identify the dominant planar surfaces in 3D space, the images are used to extract accurate edges, fill holes, and generate textured polygonal meshes of urban buildings.


Vision, Modeling and Visualization | 2010

Interactive Multi-View Facade Image Editing

Przemyslaw Musialski; Christian Luksch; Michael Schwärzler; Matthias Buchetics; Stefan Maierhofer; Werner Purgathofer

We propose a system for generating high-quality approximated façade ortho-textures based on a set of perspective source photographs taken by a consumer hand-held camera. Our approach is to sample a combined orthographic approximation over the façade plane from the input photos. To avoid kinks and seams that may occur at transitions between different source images, we introduce color adjustment and gradient-domain stitching by solving a Poisson equation in real-time. To provide maximum control on the one hand and easy interaction on the other, we offer several editing interactions allowing for user-guided post-processing.
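Gradient-domain stitching removes seams by solving a Poisson equation: the result keeps the gradients of the source images while color offsets between them are smoothed away. A 1-D toy version with a Gauss-Seidel solver is sketched below (forcing the seam gradient to zero is a deliberately simple choice; the real system works on 2-D images in real time on the GPU).

```python
import numpy as np

def stitch_1d(left, right, iters=2000):
    """Stitch two 1-D intensity scans: keep each scan's own gradients, fix the
    outer endpoints, and solve the discrete 1-D Poisson equation f'' = g' so
    the intensity jump at the seam disappears."""
    # Desired gradients: those of each scan, with the seam gradient set to 0.
    grads = np.concatenate([np.diff(left), [0.0], np.diff(right)])
    n = len(grads) + 1
    f = np.linspace(left[0], right[-1], n)       # initial guess, endpoints fixed
    for _ in range(iters):
        # Gauss-Seidel sweep: f[i-1] - 2 f[i] + f[i+1] = grads[i] - grads[i-1]
        for i in range(1, n - 1):
            f[i] = 0.5 * (f[i - 1] + f[i + 1] + grads[i - 1] - grads[i])
    return f
```

Note how an intensity jump of 8 between the two scans in the test is spread smoothly over the whole result instead of appearing as a seam.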


Vision, Modeling and Visualization | 2012

Fast Accurate Soft Shadows with Adaptive Light Source Sampling

Michael Schwärzler; Oliver Mattausch; Daniel Scherzer; Michael Wimmer

Physically accurate soft shadows in 3D applications can be simulated by taking multiple samples from all over the area light source and accumulating them. Due to the unpredictability of the size of the penumbra regions, the required sampling density has to be high in order to guarantee smooth shadow transitions in all cases. Hence, several hundred shadow maps have to be evaluated in any scene configuration, making the process computationally expensive. We therefore suggest an adaptive light source subdivision approach to select the sampling points adaptively. The main idea is to start with a few samples on the area light, evaluate their differences using hardware occlusion queries, and add more sampling points if necessary. Our method is capable of selecting and rendering only the samples which contribute to an improved shadow quality, and hence generates shadows of quality and accuracy comparable to brute-force sampling. Even though additional calculation time is needed for the comparison step, the method saves valuable rendering time and achieves interactive to real-time frame rates in many cases where a brute-force sampling method does not.
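The adaptive subdivision idea can be sketched in 1-D: insert a midpoint sample only where neighboring light samples produce noticeably different shadow results. Here a plain value comparison stands in for the paper's hardware occlusion queries, and the recursion limit stands in for a sampling budget.

```python
def adaptive_samples(shadow_at, lo, hi, threshold=0.05, max_depth=6):
    """Recursively subdivide a 1-D area light: refine an interval only where
    its endpoint samples disagree by more than 'threshold'."""
    mid = 0.5 * (lo + hi)
    if max_depth == 0 or abs(shadow_at(lo) - shadow_at(hi)) <= threshold:
        return [lo, hi]                      # smooth here: no extra samples
    left = adaptive_samples(shadow_at, lo, mid, threshold, max_depth - 1)
    right = adaptive_samples(shadow_at, mid, hi, threshold, max_depth - 1)
    return left + right[1:]                  # drop the duplicated midpoint
```

A uniformly lit (or uniformly occluded) light ends up with just its two corner samples, while samples cluster around visibility discontinuities, which is exactly where dense sampling is needed for smooth penumbrae.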


The Visual Computer | 2014

Real-time rendering of glossy materials with regular sampling

Christian Luksch; Robert F. Tobler; Thomas Mühlbacher; Michael Schwärzler; Michael Wimmer

Rendering view-dependent, glossy surfaces to increase the realism of real-time applications is a computationally complex task that can only be performed by applying approximations, especially when immediate changes to material settings and object placement in the scene are a necessity. The use of environment maps is a common approach to this problem, but it entails performance problems due to costly pre-filtering steps or expensive sampling. We therefore introduce a regular sampling scheme for environment maps that relies on an efficient MIP-map-based filtering step and minimizes the number of samples necessary for a convincing real-time rendering of glossy BRDF materials.
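One way to picture the MIP-map-based filtering is that each regular sample should cover its share of the glossy lobe's footprint in the environment map: the rougher the material, the wider the lobe, and the blurrier (higher) the MIP level each sample reads from. The lobe-size model below is a made-up assumption for illustration, not the paper's derivation.

```python
import math

def mip_level_for_sample(env_size, num_samples, roughness):
    """Pick a MIP level so each of 'num_samples' regular samples covers its
    share of the lobe footprint in a square env_size x env_size map.
    Assumed (hypothetical) model: the lobe covers roughness^2 of the map."""
    lobe_fraction = roughness ** 2
    texels_per_sample = env_size * env_size * lobe_fraction / num_samples
    # Each MIP level quarters the texel count, hence the 0.5 * log2.
    return max(0.0, 0.5 * math.log2(max(texels_per_sample, 1.0)))
```

A mirror-like material (roughness 0) reads the sharpest level, while a fully rough one averages large prefiltered regions, so a handful of samples suffices at every roughness.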


The Visual Computer | 2018

Lens flare prediction based on measurements with real-time visualization

Andreas Walch; Christian Luksch; Attila Szabo; Harald Steinlechner; Georg Haaser; Michael Schwärzler; Stefan Maierhofer

Lens flare is a visual phenomenon caused by interreflection of light within a lens system. This effect is often seen as an undesired artifact, but it also gives rendered images a realistic appearance and is even used for artistic purposes. In the area of computer graphics, several simulation-based approaches have been presented to render lens flare for a given spherical lens system. For physically reliable results, these approaches require an accurate description of that system, which differs from camera to camera. Moreover, parameters crucial for the lens flare's appearance, especially the anti-reflection coatings, can often only be approximated. In this paper, we present a novel workflow for generating physically plausible renderings of lens flare phenomena by analyzing lens flares captured with a camera. Our method allows predicting the occurrence of lens flares for a given light setup, an often-requested feature in light-planning applications, where it helps to avoid lens-flare-prone light positions efficiently. A model with a tight parameter set and a GPU-based rendering method allow our approach to be used in real-time applications.

Collaboration

Top co-authors of Michael Schwärzler:

Michael Wimmer, Vienna University of Technology

Daniel Scherzer, Vienna University of Technology

Katharina Krösl, Vienna University of Technology