Peter Kán
Vienna University of Technology
Publications
Featured research published by Peter Kán.
international symposium on mixed and augmented reality | 2012
Peter Kán; Hannes Kaufmann
In this paper we present a novel high-quality rendering system for Augmented Reality (AR). We study ray-tracing-based rendering techniques in AR with the goal of achieving real-time performance and improving visual quality as well as visual coherence between real and virtual objects in the final composited image. A number of realistic and physically correct rendering effects are demonstrated that have not been shown in real-time AR environments before. Examples are high-quality specular effects such as caustics, refraction, and reflection, together with a depth-of-field effect and anti-aliasing. We present a new GPU implementation of photon mapping and its application to the calculation of caustics in environments where real and virtual objects are combined. The composited image is produced on the fly without the need for any preprocessing step. A main contribution of our work is the achievement of interactive rendering speed for high-quality ray-tracing algorithms in AR setups. Finally, we performed an evaluation to study how users perceive visual quality and visual coherence with different realistic rendering effects. The results of our user study show that in 40.1% of cases users mistakenly judged virtual objects as real ones. Moreover, we show that high-quality rendering positively affects the perceived visual coherence.
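The compositing step this line of work relies on is differential rendering: two global illumination solutions are computed, one with and one without the virtual content, and their difference is added to the camera image. A minimal sketch of that composition (function and parameter names are illustrative, not the paper's API) could look like:

```python
import numpy as np

def differential_composite(camera, rendered_with_virtual, rendered_real_only, virtual_mask):
    """Differential rendering: add the change in radiance caused by virtual
    objects (shadows, caustics, color bleeding) to the camera image, and show
    the virtual objects directly where they cover real geometry."""
    delta = rendered_with_virtual - rendered_real_only
    composite = np.clip(camera + delta, 0.0, 1.0)
    # Pixels covered by virtual geometry take the rendered value directly.
    composite[virtual_mask] = np.clip(rendered_with_virtual[virtual_mask], 0.0, 1.0)
    return composite
```

Both rendered solutions would come from the GPU ray tracer; the subtraction makes the method robust to errors in the reconstructed real-scene model, since those errors cancel where no virtual influence exists.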
international symposium on mixed and augmented reality | 2013
Peter Kán; Hannes Kaufmann
Fast and realistic synthesis of real videos with computer-generated content has been a challenging problem in computer graphics. It involves computationally expensive light transport calculations. We present a novel and efficient algorithm for diffuse light transport calculation between virtual and real worlds called Differential Irradiance Caching. Our algorithm produces a high-quality result while preserving interactivity and allowing dynamic geometry, materials, lighting, and camera movement. The problem of expensive differential irradiance evaluation is solved by exploiting the spatial coherence in indirect illumination using irradiance caching. We enable multiple bounces of global illumination by using Monte Carlo integration in GPU ray tracing to evaluate differential irradiance at irradiance cache records in one pass. The combination of ray tracing and rasterization is used in an extended irradiance cache splatting algorithm to provide a fast GPU-based solution of indirect illumination. Limited information stored in the irradiance splat buffer causes errors for pixels on edges in the case of depth-of-field rendering. We propose a solution to this problem using a reprojection technique to access the irradiance splat buffer. A novel cache miss detection technique is introduced which allows for a linear irradiance cache data structure. We demonstrate the integration of differential irradiance caching into a rendering framework for Mixed Reality applications capable of simulating complex global illumination effects.
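The spatial-coherence idea behind irradiance caching can be illustrated with the classic Ward-style weighting: a cached record contributes to a nearby shading point if it is close and similarly oriented, and a point with no valid record is a cache miss that triggers a new (here, differential) irradiance evaluation. This is a minimal CPU sketch under that standard formulation, not the paper's GPU implementation; record fields and the threshold are illustrative:

```python
import numpy as np

def cache_weight(x, n, rec_pos, rec_n, rec_R):
    """Ward-style weight of a cached irradiance record for shading point x
    with normal n; larger when the record is close and similarly oriented."""
    d = np.linalg.norm(x - rec_pos) / rec_R          # distance relative to validity radius
    o = np.sqrt(max(0.0, 1.0 - float(np.dot(n, rec_n))))  # normal deviation term
    err = d + o
    return 1.0 / err if err > 1e-6 else 1e6

def interpolate_irradiance(x, n, records, threshold=10.0):
    """Weighted average of nearby records; returns None on a cache miss,
    which would trigger a new differential irradiance evaluation there."""
    ws, vals = [], []
    for rec in records:
        w = cache_weight(x, n, rec["pos"], rec["n"], rec["R"])
        if w > threshold:
            ws.append(w)
            vals.append(rec["E"])
    if not ws:
        return None  # cache miss: evaluate irradiance here and store a record
    return sum(w * E for w, E in zip(ws, vals)) / sum(ws)
```

In the paper's setting each record would store two irradiance values (real-only and combined scene) so the differential solution is interpolated in the same pass.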
eurographics | 2012
Peter Kán; Hannes Kaufmann
We present a novel method for rendering and compositing video in augmented reality. We focus on calculating the physically correct result of the depth of field caused by a lens with a finite-sized aperture. In order to correctly simulate light transport, ray tracing is used and, in a single pass, combined with differential rendering to compose the final augmented video. The image is fully rendered on GPUs; therefore, an augmented video can be produced at interactive frame rates in high quality. Our method runs on the fly; no video postprocessing is needed. In addition, we evaluated user experience with our rendering system under the hypothesis that a depth-of-field effect in augmented reality increases the realistic look of the composited video. Results with 30 users show that 90% perceive videos with depth of field as considerably more realistic.
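Physically based depth of field in a ray tracer is typically obtained with a thin-lens camera model: each camera ray starts from a random point on a disk-shaped aperture and passes through the focal plane, so only geometry at the focal distance stays sharp. A minimal sketch of that sampling (a generic thin-lens model, not necessarily the paper's exact camera code; the scene is assumed to lie along +z in camera space):

```python
import math
import random

def thin_lens_ray(pixel_dir, focal_distance, aperture_radius, rng=random):
    """Jitter the camera ray origin over a circular aperture; all rays through
    one pixel converge at the focal plane, so geometry off that plane blurs."""
    # Point on the focal plane hit by the central (pinhole) ray.
    focus = tuple(c * focal_distance for c in pixel_dir)
    # Uniform sample on the disk-shaped lens (sqrt for uniform area density).
    r = aperture_radius * math.sqrt(rng.random())
    phi = 2.0 * math.pi * rng.random()
    origin = (r * math.cos(phi), r * math.sin(phi), 0.0)
    direction = tuple(f - o for f, o in zip(focus, origin))
    norm = math.sqrt(sum(d * d for d in direction))
    return origin, tuple(d / norm for d in direction)
```

Averaging many such rays per pixel yields the physically correct blur; with `aperture_radius = 0` the model degenerates to an ordinary pinhole camera.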
eurographics | 2015
Peter Kán
Real-world illumination, captured by digitizing devices, is beneficial for solving many problems in computer graphics. Therefore, practical methods for capturing this illumination are of high interest. In this paper, we present a novel method for capturing environmental illumination with a mobile device. Our method is highly practical as it requires only a consumer mobile phone, and the result can be instantly used for rendering or material estimation. We capture the real light in high dynamic range (HDR) to preserve its high contrast. Our method utilizes the moving camera of a mobile phone in auto-exposure mode to reconstruct HDR values. The projection of the image to the spherical environment map is based on the orientation of the mobile device. Both HDR reconstruction and projection run on the mobile GPU to enable interactivity. Moreover, an additional image alignment step is performed. Our results show that the presented method faithfully captures the real environment and that rendering with our reconstructed environment maps achieves high quality, comparable to reality.
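The core of any such HDR reconstruction is merging differently exposed observations of the same scene point into one radiance value, weighting down under- and over-exposed pixels. A minimal sketch of that merge, assuming a linear camera response and values normalized to [0, 1] (the weighting function and names are illustrative, not the paper's exact formulation):

```python
def hdr_from_exposures(samples):
    """Merge (ldr_value, exposure_time) observations of one scene point into
    a relative HDR radiance. A hat-shaped weight gives full confidence to
    mid-gray pixels and discounts under- and over-exposed ones."""
    def weight(v):
        return v if v <= 0.5 else 1.0 - v  # peak confidence at mid-gray

    num = den = 0.0
    for v, t in samples:
        w = weight(v)
        num += w * (v / t)  # radiance estimate from this exposure
        den += w
    return num / den if den > 0 else 0.0
```

With the phone's auto-exposure mode, each environment direction is seen at several exposure times as the camera sweeps past it, so this merge can run per texel of the spherical map on the mobile GPU.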
international symposium on visual computing | 2013
Peter Kán; Hannes Kaufmann
In this paper we present a novel method for real-time, high-quality previsualization and cinematic relighting. The physically based path tracing algorithm is used within an Augmented Reality setup to preview high-quality light transport. A novel differential version of progressive path tracing is proposed, which calculates the two global light transport solutions that are required for differential rendering. A real-time previsualization framework is presented, which renders the solution with a low number of samples during interaction and allows for progressive quality improvement. If a user requests the high-quality solution of a certain view, the tracking is stopped and the algorithm progressively converges to an accurate solution. The problem of rendering complex light paths is solved by using photon mapping. Specular global illumination effects like caustics can easily be rendered. Our framework utilizes the massively parallel power of modern GPUs to achieve fast rendering with complex global illumination, a depth-of-field effect, and anti-aliasing.
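Progressive rendering boils down to keeping a running per-pixel average that each new sample refines; in the differential setting, two such averages (with and without virtual objects) are maintained together. A minimal per-pixel sketch of that accumulation (the class and field names are illustrative):

```python
class ProgressiveDifferentialBuffer:
    """Running per-pixel averages of the two light transport solutions needed
    for differential rendering; each sample pair refines the estimate, so
    image quality improves for as long as the camera holds still."""

    def __init__(self):
        self.n = 0
        self.full = 0.0   # scene including virtual objects
        self.real = 0.0   # scene without virtual objects

    def add_sample(self, full_sample, real_sample):
        """Incremental mean update: avg += (x - avg) / n."""
        self.n += 1
        self.full += (full_sample - self.full) / self.n
        self.real += (real_sample - self.real) / self.n

    def differential(self):
        """Radiance change to composite onto the camera image."""
        return self.full - self.real
```

Restarting the accumulation (`n = 0`) whenever tracking reports camera motion gives exactly the interaction model described above: noisy but responsive while moving, converging once the view is frozen.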
international symposium on visual computing | 2015
Peter Kán; Johannes Unterguggenberger; Hannes Kaufmann
Consistent illumination of virtual and real objects in augmented reality (AR) is essential to achieve visual coherence. This paper presents a practical method for rendering with consistent illumination in AR in two steps. In the first step, a user scans the surrounding environment by rotational motion of the mobile device and the real illumination is captured. We capture the real light in high dynamic range (HDR) to preserve its high contrast. In the second step, the captured environment map is used to precalculate a set of reflection maps on the mobile GPU which are then used for real-time rendering with consistent illumination. Our method achieves high quality of the reflection maps because the convolution of the environment map with the BRDF is calculated accurately for each pixel of the output map. Moreover, we utilize multiple render targets to calculate reflection maps for multiple materials simultaneously. The presented method for consistent illumination in AR is beneficial for increasing visual coherence between virtual and real objects. Additionally, it is highly practical for mobile AR as it uses only a commodity mobile device.
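Prefiltering an environment map for a glossy material means convolving the incoming radiance with the material's reflection lobe around each output direction. A minimal sketch of one such texel evaluation, using a normalized Phong lobe as a stand-in BRDF (the paper's actual BRDF and sampling may differ; the environment here is a list of direction/radiance samples):

```python
def prefilter_direction(env, out_dir, shininess):
    """Convolve (light_dir, radiance) environment samples with a normalized
    Phong lobe around out_dir. Evaluating this once per texel of the output
    map yields a reflection map for the given material shininess."""
    num = den = 0.0
    for ldir, rad in env:
        cos_a = max(0.0, sum(a * b for a, b in zip(ldir, out_dir)))
        w = cos_a ** shininess  # lobe weight: sharp for high shininess
        num += w * rad
        den += w
    return num / den if den > 0 else 0.0
```

On the GPU, one fragment-shader invocation per output texel performs this sum, and multiple render targets let several materials (several shininess values) be filtered in the same pass, as the abstract describes.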
eurographics, italian chapter conference | 2015
Peter Kán; Hannes Kaufmann
This paper presents a novel method for diffuse texture extraction from a set of multiview images. We address the problem of specularity removal by pixel-value minimization across multiple automatically aligned input images. Our method is based on the fact that the presence of specular reflection only increases the captured pixel value. Moreover, we propose an algorithm for estimating the material region in the image via optimization on the GPU. Previous methods for diffuse component separation from multiple images require a complex hardware setup. In contrast, our method is highly usable because only a mobile phone is needed to reconstruct a diffuse texture in an environment with arbitrary lighting. Moreover, our method is fully automatic and, besides capturing images from multiple viewpoints, it does not require any user intervention. Many fields can benefit from our method, particularly material reconstruction, image processing, and digital content creation.
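The core observation translates into a one-liner once the views are aligned: since a specular highlight can only add to the diffuse value, the per-pixel minimum over viewpoints approximates the specular-free texture. A minimal sketch (alignment itself, the harder part, is assumed done):

```python
import numpy as np

def diffuse_from_views(aligned_images):
    """Per-pixel minimum over aligned views. Specular highlights move with
    the viewpoint and only increase pixel values, so the minimum across
    views approximates the specular-free diffuse texture."""
    stack = np.stack(aligned_images, axis=0)  # (n_views, H, W[, C])
    return stack.min(axis=0)
```

The quality depends on each texture point being highlight-free in at least one view, which is why capturing from sufficiently varied viewpoints matters.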
virtual reality software and technology | 2017
Peter Kán; Hannes Kaufmann
In this paper, we present a system that automatically populates indoor virtual scenes with furniture objects and optimizes their positions and orientations with respect to aesthetic, ergonomic, and functional rules called interior design guidelines. These guidelines are represented as mathematical expressions which form the cost function. Our system optimizes a set of interior designs by minimizing the cost function using a genetic algorithm. Moreover, we extend the optimization to a transdimensional space by enabling automatic selection of furniture objects. Finally, we optimize the assignment of materials to the furniture objects to achieve a unified design and harmonious color distribution. We investigate the capability of our system to generate sensible and livable interior designs in a perceptual study.
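The optimization loop described above can be sketched as a small genetic algorithm: a layout is a list of furniture placements, and selection, crossover, and mutation evolve the population against the guideline-based cost. This is a generic GA skeleton under that framing, not the paper's implementation (which also handles transdimensional moves and material assignment); all names are illustrative:

```python
import random

def optimize_layout(cost, init_pop, generations=50, mutation=0.3, rng=random):
    """Minimal genetic algorithm over furniture layouts, where each layout is
    a list of (x, y, angle) placements and cost() encodes the design
    guidelines: keep the better half, recombine, and jitter placements."""
    pop = list(init_pop)
    for _ in range(generations):
        pop.sort(key=cost)
        survivors = pop[: max(2, len(pop) // 2)]   # elitist selection
        children = []
        while len(survivors) + len(children) < len(pop):
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, len(a)) if len(a) > 1 else 0
            child = a[:cut] + b[cut:]              # one-point crossover
            if rng.random() < mutation:
                i = rng.randrange(len(child))
                x, y, ang = child[i]
                child[i] = (x + rng.gauss(0, 0.2), y + rng.gauss(0, 0.2), ang)
            children.append(child)
        pop = survivors + children
    return min(pop, key=cost)
```

Because the best layout always survives a generation, the returned design is never worse than the best initial one; the real system's cost function would sum terms for clearance, alignment, visibility, and similar guidelines.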
eurographics | 2017
Peter Kán; Maxim Davletaliyev; Hannes Kaufmann
This paper presents a novel method for the discovery of new analytical filters suitable for filtering noise in Monte Carlo rendering. Our method utilizes genetic programming to evolve a set of analytical filtering expressions with the goal of minimizing image error in training scenes. We show that genetic programming is capable of learning new filtering expressions with quality comparable to state-of-the-art noise filters in Monte Carlo rendering. Additionally, the analytical nature of the resulting expressions enables run times one order of magnitude faster than the compared state-of-the-art methods. Finally, we present a new analytical filter discovered by our method which is suitable for filtering Monte Carlo noise in diffuse scenes.
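Genetic programming evolves expression trees rather than parameter vectors: terminals are input features and constants, internal nodes are operators, and fitness measures how well the expression filters the training data. A toy sketch of that machinery (tuple-encoded trees, a tiny operator/terminal set, and a crude evolve loop; the paper's feature set, operators, and fitness are richer, and these names are purely illustrative):

```python
import random

OPS = {"+": lambda a, b: a + b,
       "-": lambda a, b: a - b,
       "*": lambda a, b: a * b}
TERMS = ["x", "s", 1.0, 0.5]  # e.g. pixel value, an auxiliary feature, constants

def rand_expr(rng, depth=2):
    """Random expression tree: a terminal or (op, left, right)."""
    if depth == 0 or rng.random() < 0.3:
        return rng.choice(TERMS)
    op = rng.choice(list(OPS))
    return (op, rand_expr(rng, depth - 1), rand_expr(rng, depth - 1))

def evaluate(expr, env):
    """Recursively evaluate a tree against a feature environment."""
    if isinstance(expr, tuple):
        op, a, b = expr
        return OPS[op](evaluate(a, env), evaluate(b, env))
    return env[expr] if isinstance(expr, str) else expr

def evolve(fitness, rng, pop_size=40, generations=30):
    """Keep the fitter half each generation; refill with fresh trees or
    recombinations of survivors."""
    pop = [rand_expr(rng) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        pop = pop[: pop_size // 2]
        pop += [rand_expr(rng) if rng.random() < 0.2 else
                (rng.choice(list(OPS)), rng.choice(pop), rng.choice(pop))
                for _ in range(pop_size - len(pop))]
    return min(pop, key=fitness)
```

In the paper's setting, fitness would compare the filtered training renders against noise-free references; the payoff of the approach is that the winning tree is a closed-form expression, hence very cheap to apply at render time.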
Archive | 2014
Peter Kán
High-quality image synthesis, indistinguishable from reality, has been one of the most important problems in computer graphics from its beginning. Image synthesis in augmented reality (AR) poses an even more challenging problem, because coherence of virtual and real objects is required. Visual coherence in particular plays an important role in AR. Visual coherence can be achieved by calculating global illumination, which introduces the light interaction between virtual and real objects. Correct light interaction provides precise information about the spatial location, radiometric properties, and geometric details of inserted virtual objects. In order to calculate light interaction accurately, high-quality global illumination is required. However, high-quality global illumination algorithms have not been suitable for real-time AR due to their high computational cost. Global illumination in AR can be beneficial in many areas including automotive and architectural design, medical therapy, rehabilitation, surgery, education, movie production, and others. This thesis approaches the problem of visual coherence in augmented reality by adopting physically based rendering algorithms and presenting a novel GPU implementation of these algorithms. The developed rendering algorithms calculate the two global illumination solutions required for rendering in AR using a novel one-pass differential rendering algorithm. The rendering algorithms presented in this thesis are based on GPU ray tracing, which provides high-quality results. The developed rendering system computes various visual features in high quality. These features include depth of field, shadows, specular and diffuse global illumination, reflections, and refractions. Moreover, numerous improvements of the physically based rendering algorithms are presented which allow fast and accurate light transport calculation in AR.
Additionally, this thesis presents the differential progressive path tracing algorithm, which can calculate the unbiased AR solution in a progressive fashion. Finally, the presented methods are compared to the state of the art in real-time global illumination for AR. The results show that our high-quality global illumination outperforms other methods in terms of accuracy of the rendered images. Additionally, the human perception of the developed global illumination methods for AR is evaluated. The impact of the presented rendering algorithms on visual realism and on the sense of presence is studied in this thesis. The results suggest that high-quality global illumination has a positive impact on the realism and presence perceived by users in AR. Thus, future AR applications can benefit from the algorithms developed in this thesis.