Publication


Featured research published by Christian Luksch.


Interactive 3D Graphics and Games | 2013

Fast percentage closer soft shadows using temporal coherence

Michael Schwärzler; Christian Luksch; Daniel Scherzer; Michael Wimmer

We propose a novel way to efficiently calculate soft shadows in real-time applications by overcoming the high computational effort of the complex visibility estimation required each frame: we exploit the temporal coherence prevalent in typical scene movement, so that a new shadow value needs to be estimated only where regions are newly disoccluded by camera adjustment, or where the shadow situation changes due to object movement. By extending the typical shadow-mapping algorithm with an additional light-weight buffer for tracking dynamic scene objects, we can robustly and efficiently detect all screen-space fragments that need to be updated, including not only the moving objects themselves, but also the soft shadows they cast. By applying this strategy to the popular Percentage Closer Soft Shadows algorithm (PCSS), we double rendering performance in scenes with both static and dynamic objects -- as prevalent in various 3D game levels -- while maintaining the visual quality of the original approach.
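The core update-detection idea of the abstract can be sketched as a per-fragment mask: a fragment's soft shadow is recomputed only if it was newly disoccluded or touched by a moving object or its shadow. A minimal Python sketch, where the function name and the flat boolean-mask representation are illustrative assumptions, not the paper's GPU implementation:

```python
def shadow_update_mask(disoccluded, dynamic_obj, dynamic_shadow):
    """Mark fragments whose soft-shadow value must be recomputed this frame.

    disoccluded   : fragments newly revealed by camera movement (reprojection failed)
    dynamic_obj   : fragments covered by a moving object this frame or the last
    dynamic_shadow: fragments inside the (conservative) shadow region of moving objects
    All inputs are flat lists of booleans of equal length.
    """
    return [d or o or s for d, o, s in zip(disoccluded, dynamic_obj, dynamic_shadow)]
```

In the actual method these masks come from reprojection and the dynamic-object tracking buffer on the GPU; all other fragments keep their cached PCSS result.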


Interactive 3D Graphics and Games | 2013

Fast light-map computation with virtual polygon lights

Christian Luksch; Robert F. Tobler; Ralf Habel; Michael Schwärzler; Michael Wimmer

We propose a new method for the fast computation of light maps using a many-light global-illumination solution. A complete scene can be light mapped on the order of seconds to minutes, allowing fast and consistent previews for editing or even generation at loading time. In our method, virtual point lights are clustered into a set of virtual polygon lights, which represent a compact description of the illumination in the scene. The actual light-map generation is performed directly on the GPU. Our approach degrades gracefully, avoiding objectionable artifacts even for very short computation times.
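The clustering step described above can be illustrated by grouping virtual point lights that lie on (approximately) the same plane into one aggregate light. The following Python sketch quantizes each VPL's normal and plane offset as a grouping key; the key scheme, data layout, and names are assumptions for illustration, not the paper's algorithm:

```python
from collections import defaultdict

def cluster_vpls(vpls, q=0.25):
    """Group virtual point lights into per-plane clusters, one 'virtual
    polygon light' per cluster. Each VPL is (position, normal, intensity)
    with positions/normals as 3-tuples. The cluster key is the quantized
    normal plus the quantized plane offset d = n . p."""
    clusters = defaultdict(list)
    for p, n, i in vpls:
        d = sum(a * b for a, b in zip(n, p))
        key = (tuple(round(c / q) for c in n), round(d / q))
        clusters[key].append((p, n, i))
    polys = []
    for members in clusters.values():
        total = sum(i for _, _, i in members)           # aggregate intensity
        centroid = tuple(sum(p[k] for p, _, _, in members) / len(members)
                         for k in range(3))             # representative position
        polys.append((centroid, total))
    return polys
```

The payoff is a compact description of scene illumination: far fewer polygon lights than VPLs need to be evaluated during light-map generation.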


IEEE Transactions on Visualization and Computer Graphics | 2016

LiteVis: Integrated Visualization for Simulation-Based Decision Support in Lighting Design

Johannes Sorger; Thomas Ortner; Christian Luksch; Michael Schwärzler; M. Eduard Gröller; Harald Piringer

State-of-the-art lighting design is based on physically accurate lighting simulations of scenes such as offices. The simulation results support lighting designers in the creation of lighting configurations, which must meet contradicting customer objectives regarding quality and price while conforming to industry standards. However, current tools for lighting design impede rapid feedback cycles. On the one hand, they decouple analysis from simulation specification. On the other hand, they lack capabilities for a detailed comparison of multiple configurations. The primary contribution of this paper is a design study of LiteVis, a system for efficient decision support in lighting design. LiteVis tightly integrates global illumination-based lighting simulation, a spatial representation of the scene, and non-spatial visualizations of parameters and result indicators. This enables an efficient iterative cycle of simulation parametrization and analysis. Specifically, a novel visualization supports decision making by ranking simulated lighting configurations with regard to a weight-based prioritization of objectives that considers both spatial and non-spatial characteristics. In the spatial domain, novel concepts support a detailed comparison of illumination scenarios. We demonstrate LiteVis using a real-world use case and report qualitative feedback from lighting designers. This feedback indicates that LiteVis successfully supports lighting designers to achieve key tasks more efficiently and with greater certainty.
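The weight-based prioritization of objectives mentioned in the abstract amounts to scoring each configuration by a weighted sum of its result indicators. A minimal Python sketch, assuming pre-normalized indicator values in [0, 1] where higher is better; indicator names and the data layout are illustrative, not LiteVis internals:

```python
def rank_configurations(configs, weights):
    """Rank lighting configurations by a weighted sum of normalized result
    indicators. `configs` maps a configuration name to a dict of indicator
    values; `weights` maps indicator names to priorities."""
    def score(indicators):
        return sum(weights.get(k, 0.0) * v for k, v in indicators.items())
    return sorted(configs, key=lambda name: score(configs[name]), reverse=True)
```

Shifting the weights re-ranks the configurations immediately, which is what enables the rapid what-if analysis the paper aims for.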


Eurographics | 2011

Reconstructing Buildings as Textured Low Poly Meshes from Point Clouds and Images

Irene Reisner-Kollmann; Christian Luksch; Michael Schwärzler

Current urban building reconstruction techniques rely mainly on data gathered from either laser scans or image-based approaches, and usually require a large amount of manual post-processing and modeling. Difficulties arise due to erroneous and noisy data, and due to the huge amount of information to process. We propose a system that helps to overcome these time-consuming steps by automatically generating low-poly 3D building models. This is achieved by taking both point-cloud and image information into account, exploiting the particular strengths and avoiding the relative weaknesses of these data sources: while the segmented point cloud is used to identify the dominant planar surfaces in 3D space, the images are used to extract accurate edges, fill holes and generate textured polygonal meshes of urban buildings.


Vision, Modeling and Visualization | 2010

Interactive Multi-View Facade Image Editing

Przemyslaw Musialski; Christian Luksch; Michael Schwärzler; Matthias Buchetics; Stefan Maierhofer; Werner Purgathofer

We propose a system for generating high-quality approximated façade ortho-textures based on a set of perspective source photographs taken by a consumer hand-held camera. Our approach is to sample a combined orthographic approximation over the façade-plane from the input photos. In order to avoid kinks and seams which may occur on transitions between different source images, we introduce color adjustment and gradient domain stitching by solving a Poisson equation in real-time. In order to add maximum control on the one hand and easy interaction on the other, we provide several editing interactions allowing for user-guided post-processing.
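Gradient-domain stitching, as used above to hide seams between source photos, can be shown in one dimension: keep each strip's internal gradients, replace the gradient at the seam by a smooth estimate, and re-integrate from one boundary. A toy Python sketch (the paper solves a 2-D Poisson equation in real time; this 1-D analogue and its names are illustrative):

```python
def stitch_1d(left, right):
    """Stitch two 1-D intensity strips without a brightness jump: preserve
    interior gradients, set the seam gradient to the average of the two
    adjacent interior gradients, and integrate from the left boundary."""
    grads = [left[i + 1] - left[i] for i in range(len(left) - 1)]
    # Seam gradient between left[-1] and right[0]: interior average instead
    # of the raw (offset-contaminated) difference.
    grads.append(0.5 * (grads[-1] + (right[1] - right[0])))
    grads += [right[i + 1] - right[i] for i in range(len(right) - 1)]
    out = [left[0]]
    for g in grads:
        out.append(out[-1] + g)
    return out
```

With a constant-slope signal and a brightness offset of 10 between the strips, the output continues the slope smoothly across the seam.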


Virtual Reality Software and Technology | 2012

Bridging the gap between visual exploration and agent-based pedestrian simulation in a virtual environment

Martin Brunnhuber; Helmut Schrom-Feiertag; Christian Luksch; Thomas Matyus; Gerd Hesina

We present a system to evaluate and improve visual guidance systems and signage for pedestrians inside large buildings. Given a 3D model of an actual building, we perform agent-based simulations mimicking the decision-making process and navigation patterns of pedestrians trying to find their way to predefined locations. Our main contribution is to enable agents to base their decisions on realistic three-dimensional visibility and occlusion cues computed from the actual building geometry with added semantic annotations (e.g., the meaning of signs or the purpose of inventory), as well as an interactive visualization of simulated movement trajectories and accompanying visibility data tied to the underlying 3D model. This enables users of the system to quickly pinpoint and solve problems within the simulation by watching, exploring and understanding emergent behavior inside the building. This insight gained from introspection can in turn inform planning and thus improve the effectiveness of guidance systems.
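The visibility cues the agents rely on reduce, at their simplest, to line-of-sight queries against the building geometry. A 2-D grid stand-in for such a query, using Bresenham traversal (the paper's agents query real 3-D geometry with occlusion; this sketch and its names are illustrative only):

```python
def line_of_sight(grid, a, b):
    """Return True if cell b (e.g. a sign) is visible from cell a (an agent)
    on an occupancy grid where truthy cells block sight. Integer Bresenham
    traversal from a to b; the starting cell itself never blocks."""
    (x0, y0), (x1, y1) = a, b
    dx, dy = abs(x1 - x0), abs(y1 - y0)
    sx, sy = (1 if x1 > x0 else -1), (1 if y1 > y0 else -1)
    err = dx - dy
    x, y = x0, y0
    while (x, y) != (x1, y1):
        if (x, y) != a and grid[y][x]:
            return False
        e2 = 2 * err
        if e2 > -dy:
            err -= dy
            x += sx
        if e2 < dx:
            err += dx
            y += sy
    return not grid[y1][x1]
```

An agent would run such queries against annotated sign positions and only act on signs that are actually visible from its current location.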


The Visual Computer | 2017

Forced Random Sampling: fast generation of importance-guided blue-noise samples

Daniel Cornel; Robert F. Tobler; Hiroyuki Sakai; Christian Luksch; Michael Wimmer

In computer graphics, stochastic sampling is frequently used to efficiently approximate complex functions and integrals. The error of approximation can be reduced by distributing samples according to an importance function, but cannot be eliminated completely. To avoid visible artifacts, sample distributions are sought to be random, but spatially uniform, which is called blue-noise sampling. The generation of unbiased, importance-guided blue-noise samples is expensive and not feasible for real-time applications. Sampling algorithms for these applications focus on runtime performance at the cost of having weak blue-noise properties. Blue-noise distributions have also been proposed for digital halftoning in the form of precomputed dither matrices. Ordered dithering with such matrices allows dots with blue-noise properties to be distributed according to a grayscale image. By the nature of ordered dithering, this process can be parallelized easily. We introduce a novel sampling method called forced random sampling that is based on forced random dithering, a variant of ordered dithering with blue noise. By shifting the main computational effort into the generation of a precomputed dither matrix, our sampling method runs efficiently on GPUs and allows real-time importance sampling with blue noise for a finite number of samples. We demonstrate the quality of our method in two different rendering applications.
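The ordered-dithering mechanism the method builds on is simple to show: a sample is emitted at a cell whenever the importance there exceeds the precomputed matrix threshold. A Python sketch, where `matrix` stands in for the paper's precomputed forced-random matrix (any dither matrix works for illustration; names and the list-of-lists layout are assumptions):

```python
def dither_samples(importance, matrix, n_levels):
    """Importance-guided sampling by ordered dithering: emit a sample at
    cell (x, y) when importance[y][x] * n_levels exceeds matrix[y][x].
    `importance` holds values in [0, 1]; `matrix` holds thresholds in
    {0, ..., n_levels - 1}. Both are equally sized 2-D lists."""
    samples = []
    for y, row in enumerate(importance):
        for x, imp in enumerate(row):
            if imp * n_levels > matrix[y][x]:
                samples.append((x, y))
    return samples
```

Because each cell is tested independently, the loop parallelizes trivially, which is why the approach maps well to GPUs once the matrix is precomputed.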


The Visual Computer | 2014

Real-time rendering of glossy materials with regular sampling

Christian Luksch; Robert F. Tobler; Thomas Mühlbacher; Michael Schwärzler; Michael Wimmer

Rendering view-dependent, glossy surfaces to increase the realism in real-time applications is a computationally complex task that can only be performed by applying approximations, especially when immediate changes in the scene in terms of material settings and object placement are a necessity. The use of environment maps is a common approach to this problem, but introduces performance problems due to costly pre-filtering steps or expensive sampling. We, therefore, introduce a regular sampling scheme for environment maps that relies on an efficient MIP-map-based filtering step, and minimizes the number of necessary samples for creating a convincing real-time rendering of glossy BRDF materials.
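The MIP-map-based filtering step relies on the fact that higher MIP levels of an environment map are progressively blurrier, so a material's glossiness can select the level whose blur roughly matches. A common real-time heuristic for that mapping, sketched in Python (the paper combines several such MIP-filtered samples in a regular scheme; this linear mapping is an illustrative assumption, not the paper's formula):

```python
import math

def mip_level(roughness, base_resolution):
    """Map material roughness in [0, 1] to an environment-map MIP level.
    Level 0 is the full-resolution map (sharp mirror reflections); the
    highest level is a single heavily blurred texel (fully diffuse)."""
    max_level = math.log2(base_resolution)
    return min(max_level, max(0.0, roughness) * max_level)
```

A shader would fetch the environment map at this (possibly fractional) level using trilinear filtering, replacing an expensive multi-sample blur with a single lookup.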


The Visual Computer | 2018

Lens flare prediction based on measurements with real-time visualization

Andreas Walch; Christian Luksch; Attila Szabo; Harald Steinlechner; Georg Haaser; Michael Schwärzler; Stefan Maierhofer

Lens flare is a visual phenomenon caused by interreflection of light within a lens system. This effect is often seen as an undesired artifact, but it also gives rendered images a realistic appearance and is even used for artistic purposes. In the area of computer graphics, several simulation-based approaches have been presented to render lens flare for a given spherical lens system. For physically reliable results, these approaches require an accurate description of that system, which differs from camera to camera. Also, parameters crucial to the lens flare's appearance—especially the anti-reflection coatings—can often only be approximated. In this paper, we present a novel workflow for generating physically plausible renderings of lens flare phenomena by analyzing lens flares captured with a camera. Our method allows predicting the occurrence of lens flares for a given light setup. This is an often requested feature in light-planning applications in order to efficiently avoid lens flare-prone light positioning. A model with a tight parameter set and a GPU-based rendering method allows our approach to be used in real-time applications.


Vision, Modeling and Visualization | 2017

LiteMaker: Interactive Luminaire Development using Progressive Photon Tracing and Multi-Resolution Upsampling

Katharina Krösl; Christian Luksch; Michael Schwärzler; Michael Wimmer

Industrial applications like luminaire development (the creation of a luminaire in terms of geometry and material) or lighting design (the efficient and aesthetic placement of luminaires in a virtual scene) rely heavily on high realism and physically correct simulations. Using typical approaches like CAD modeling and offline rendering, this requirement induces long processing times and therefore inflexible workflows. In this paper, we combine a GPU-based progressive photon-tracing algorithm to accurately simulate the light distribution of a luminaire with a novel multi-resolution image-filtering approach that produces visually meaningful intermediate results of the simulation process. By using this method in a 3D modeling environment, luminaire development is turned into an interactive process, allowing for real-time modifications and immediate feedback on the light distribution. Since the simulation results converge to a physically plausible solution that can be imported as a representation of a luminaire into a light-planning software, our work contributes to combining the two former decoupled workflows of luminaire development and lighting design, reducing the overall production time and cost for luminaire manufacturers.
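The idea of "visually meaningful intermediate results" can be illustrated with a simple resolution-for-noise trade: while few photons have accumulated per pixel, show a heavily downsampled (and therefore less noisy, block-averaged) preview, and refine the resolution as samples accumulate. A crude Python stand-in for that behavior (the paper's multi-resolution filter is more sophisticated; the level formula and all names here are illustrative assumptions):

```python
import math

def box_downsample(img, factor):
    """Average `factor` x `factor` blocks of a 2-D list of floats."""
    h, w = len(img), len(img[0])
    out = []
    for y in range(0, h, factor):
        row = []
        for x in range(0, w, factor):
            block = [img[j][i]
                     for j in range(y, min(y + factor, h))
                     for i in range(x, min(x + factor, w))]
            row.append(sum(block) / len(block))
        out.append(row)
    return out

def multires_preview(img, samples_per_pixel, full_at=64):
    """Preview of a progressively photon-traced buffer: pick a coarser
    resolution early in the simulation, full resolution once roughly
    `full_at` samples per pixel have accumulated."""
    level = max(0, math.ceil(math.log2(full_at / max(samples_per_pixel, 1))) // 2)
    return box_downsample(img, 2 ** level) if level else img
```

Each preview is an average of the same photon data, so the sequence converges to the full-resolution, physically based result as the simulation proceeds.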

Collaboration


Top co-authors of Christian Luksch:

Michael Wimmer, Vienna University of Technology
Katharina Krösl, Vienna University of Technology
Daniel Scherzer, Vienna University of Technology