Christoph Traxler
VRVis
Publications
Featured research published by Christoph Traxler.
international symposium on mixed and augmented reality | 2010
Martin Knecht; Christoph Traxler; Oliver Mattausch; Werner Purgathofer; Michael Wimmer
In this paper we present a novel plausible realistic rendering method for mixed reality systems, which is useful for many real-life application scenarios, such as architecture, product visualization or edutainment. To allow virtual objects to seamlessly blend into the real environment, the real lighting conditions and the mutual illumination effects between real and virtual objects must be considered, while maintaining interactive frame rates (20–30 fps). The most important such effects are indirect illumination and shadows cast between real and virtual objects. Our approach combines Instant Radiosity and Differential Rendering. In contrast to some previous solutions, we only need to render the scene once in order to find the mutual effects of virtual and real scenes. The dynamic real illumination is derived from the image stream of a fish-eye lens camera. We describe a new method to assign virtual point lights to multiple primary light sources, which can be real or virtual. We use imperfect shadow maps for calculating illumination from virtual point lights and have significantly improved their accuracy by taking the surface normal of a shadow caster into account. Temporal coherence is exploited to reduce flickering artifacts. Our results show that the presented method greatly improves the illusion in mixed reality applications and significantly diminishes the artificial look of virtual objects superimposed onto real scenes.
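The core of the Differential Rendering step described above can be sketched in a few lines: the captured camera image is corrected by the difference between a rendering of the combined real-and-virtual scene and a rendering of the real scene alone. This is an illustrative sketch only, with function and variable names of our own choosing, not the paper's GPU implementation:

```python
import numpy as np

def differential_composite(camera_img, rendered_full, rendered_real_only):
    """Differential Rendering in its simplest form: add to the camera image
    the difference between a rendering of the combined (real + virtual)
    scene and a rendering of the real scene alone, so virtual objects and
    their shading effects appear in the real footage."""
    delta = rendered_full - rendered_real_only
    return np.clip(camera_img + delta, 0.0, 1.0)
```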
Computers & Graphics | 2012
Martin Knecht; Christoph Traxler; Oliver Mattausch; Michael Wimmer
In this paper we present a novel plausible rendering method for mixed reality systems, which is useful for many real-life application scenarios, such as architecture, product visualization or edutainment. To allow virtual objects to seamlessly blend into the real environment, the real lighting conditions and the mutual illumination effects between real and virtual objects must be considered, while maintaining interactive frame rates. The most important such effects are indirect illumination and shadows cast between real and virtual objects. Our approach combines Instant Radiosity and Differential Rendering. In contrast to some previous solutions, we only need to render the scene once in order to find the mutual effects of virtual and real scenes. In addition, we avoid artifacts like double shadows or inconsistent color bleeding which appear in previous work. The dynamic real illumination is derived from the image stream of a fish-eye lens camera. The scene gets illuminated by virtual point lights, which use imperfect shadow maps to calculate visibility. A sufficiently fast scene reconstruction is done at run-time with Microsoft's Kinect sensor. Thus, a time-consuming manual pre-modeling step of the real scene is not necessary. Our results show that the presented method greatly improves the illusion in mixed-reality applications and significantly diminishes the artificial look of virtual objects superimposed onto real scenes.
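The instant-radiosity part of such methods distributes virtual point lights (VPLs) and gathers their diffuse contribution at each shaded point. A minimal CPU sketch of the gathering step, with the imperfect-shadow-map visibility test omitted and all names our own, not the paper's code:

```python
import numpy as np

def gather_vpl_illumination(point, normal, vpls):
    """Diffuse gathering from virtual point lights (instant radiosity).
    Each VPL is a (position, flux) pair; the visibility term (resolved
    with imperfect shadow maps in the paper) is omitted here, and the
    1/r^2 falloff would be clamped in a real-time implementation."""
    radiance = np.zeros(3)
    for pos, flux in vpls:
        to_light = pos - point
        dist2 = to_light @ to_light          # squared distance to the VPL
        wi = to_light / np.sqrt(dist2)       # unit direction to the VPL
        cos_theta = max(normal @ wi, 0.0)    # Lambertian cosine term
        radiance += flux * cos_theta / (dist2 * np.pi)
    return radiance
```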
international symposium on mixed and augmented reality | 2011
Martin Knecht; Christoph Traxler; Werner Purgathofer; Michael Wimmer
We present a novel adaptive color mapping method for virtual objects in mixed-reality environments. In several mixed-reality applications, added virtual objects should be visually indistinguishable from real objects. Recent mixed-reality methods use global-illumination algorithms to approach this goal. However, simulating the light distribution is not enough for visually plausible images. Since the observing camera has its very own transfer function from real-world radiance values to RGB colors, virtual objects look artificial just because their rendered colors do not match those of the camera. Our approach combines an on-line camera characterization method with a heuristic to map colors of virtual objects to colors as they would be seen by the observing camera. Previous tone-mapping functions were not designed for use in mixed-reality systems and thus did not take the camera-specific behavior into account. In contrast, our method takes the camera into account and can therefore also handle changes of its parameters during runtime. The results show that virtual objects look more visually plausible than with tone-mapping operators alone.
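At its heart, a camera characterization of this kind is a fitted mapping from scene radiance to the RGB values the camera actually produces. As a rough illustration only (this is not the authors' method, and all names are ours), one can fit a per-channel polynomial response from paired samples and push virtual-object radiance through it:

```python
import numpy as np

def fit_channel_response(radiance, camera_rgb, degree=3):
    """Least-squares polynomial fit per color channel, mapping scene
    radiance samples to the RGB values the observing camera produced."""
    return [np.polyfit(radiance, camera_rgb[:, c], degree) for c in range(3)]

def map_virtual_color(virtual_radiance, coeffs):
    """Apply the fitted per-channel response to virtual-object radiance,
    so virtual colors match what the camera would have recorded."""
    return np.stack(
        [np.polyval(c, virtual_radiance[:, i]) for i, c in enumerate(coeffs)],
        axis=1,
    )
```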
Journal of Applied Geodesy | 2012
Klaus Chmelina; Josef Jansa; Gerd Hesina; Christoph Traxler
Abstract. The paper presents the mobile multi-sensor system Orthos Plus for the monitoring and mapping of tunnel walls, a scan data processing method for the evaluation of 3-d tunnel wall displacements from subsequent wall scans and, finally, a virtual reality tool supporting the interpretation of the data. The measuring system consists of a 3-d laser scanner, a motorised total station and a digital camera, integrated on a light metal frame mounted on a mobile platform. It has been designed to perform tunnel measurements efficiently and to meet the special requirements of tunnels under construction. The evaluation of 3-d displacements is based on a 3-d matching algorithm that takes advantage of the particular conditions of tunnel (shotcrete) surfaces. The virtual reality tool allows viewing of the data in a 3-d virtual reality tunnel model and their animation in time and space, in order to support understanding in an optimal way. The measuring system Orthos Plus has been developed in the course of a national research project, the 3-d matching method in the frame of the Austrian Christian Doppler Laboratory for Spatial Data from Laser Scanning and Remote Sensing, and the VR tool in the Austrian COMET K1 Competence Center VRVis (www.vrvis.at).
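The displacement evaluation compares subsequent scans of the same wall. A toy version matches each point of the earlier scan to its nearest neighbour in the later scan; the paper's 3-d matching is far more robust on real shotcrete surfaces, and the names below are ours:

```python
import numpy as np

def wall_displacements(scan_t0, scan_t1):
    """Nearest-neighbour displacement field between two tunnel-wall scans,
    each an (N, 3) point array. Brute-force O(N*M) pairing; a stand-in
    for proper surface matching, valid only when displacements are small
    compared to the point spacing."""
    d2 = ((scan_t0[:, None, :] - scan_t1[None, :, :]) ** 2).sum(-1)
    nearest = d2.argmin(axis=1)              # closest later-scan point
    return scan_t1[nearest] - scan_t0        # per-point displacement vectors
```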
IEEE Transactions on Visualization and Computer Graphics | 2013
Martin Knecht; Christoph Traxler; Christoph Winklhofer; Michael Wimmer
In this paper, we present a novel rendering method which integrates reflective or refractive objects into a differential instant radiosity (DIR) framework usable for mixed-reality (MR) applications. Such objects are special from a light-interaction point of view, as they reflect and refract incident rays. Therefore they may cause high-frequency lighting effects known as caustics. Using instant-radiosity (IR) methods to approximate these high-frequency lighting effects would require a large number of virtual point lights (VPLs) and is therefore not desirable due to real-time constraints. Instead, our approach combines differential instant radiosity with three other methods. One method handles more accurate reflections compared to simple cubemaps by using impostors. Another method is able to calculate two refractions in real-time, and the third method uses small quads to create caustic effects. Our proposed method replaces parts in light paths that belong to reflective or refractive objects using these three methods and thus tightly integrates into DIR. In contrast to previous methods which introduce reflective or refractive objects into MR scenarios, our method produces caustics that also emit additional indirect light. The method runs at real-time frame rates, and the results show that reflective and refractive objects with caustics improve the overall impression for MR scenarios.
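The refraction handling in any such renderer rests on Snell's law, which in vector form bends a ray direction as below. This is the textbook formula, shown purely for illustration; the paper's real-time two-refraction technique is an image-space method and considerably more involved:

```python
import numpy as np

def refract(d, n, eta):
    """Refract unit direction d (pointing toward the surface) at a surface
    with unit normal n, where eta = n1/n2 is the ratio of refractive
    indices. Returns None on total internal reflection."""
    cos_i = -d @ n
    sin2_t = eta * eta * (1.0 - cos_i * cos_i)
    if sin2_t > 1.0:
        return None                          # total internal reflection
    cos_t = np.sqrt(1.0 - sin2_t)
    return eta * d + (eta * cos_i - cos_t) * n
```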
international conference on information visualization theory and applications | 2015
Martin Hecher; Christoph Traxler; Gerd Hesina; Anton L. Fuhrmann
This paper describes a new platform for geospatial data analysis. The main purpose is to explore new ways to visualize and interact with multidimensional satellite data and computed models from various Earth Observation missions. The new V-MANIP platform facilitates a multidimensional exploring approach that allows to view the same dataset in multiple viewers at the same time to efficiently find and explore interesting features within the shown data. The platform provides visual analytics capabilities including viewers for displaying 2D or 3D data representations, as well as for volumetric input data. Via a simple configuration file the system can be configured for different stakeholder use cases, by defining desired data sources and available viewer modules. The system architecture, which will be discussed in this paper in detail, uses Open Geospatial Consortium web service interfaces to allow an easy integration of new visualization modules. The implemented software is based on open source libraries and uses modern web technologies to provide a platform-independent, pluginfree user experience.
Earth and Space Science | 2018
Robert Barnes; Sanjeev Gupta; Christoph Traxler; Thomas Ortner; Arnold Bauer; Gerd Hesina; Gerhard Paar; Ben Huber; Kathrin Juhart; Laura Fritz; Bernhard Nauschnegg; Jan-Peter Muller; Y. Tao
Panoramic camera systems on robots exploring the surface of Mars are used to collect images of terrain and rock outcrops which they encounter along their traverse. Image mosaics from these cameras are essential in mapping the surface geology and selecting locations for analysis by other instruments on the rover’s payload. 2-D images do not truly portray the depth of field of features within an image, nor their 3-D geometry. This paper describes a new 3-D visualization software tool for geological analysis of Martian rover-derived Digital Outcrop Models created using photogrammetric processing of stereo-images using the Planetary Robotics Vision Processing tool developed for 3-D vision processing of ExoMars PanCam and Mars 2020 Mastcam-Z data. Digital Outcrop Models are rendered in real time in the Planetary Robotics 3-D Viewer PRo3D, allowing scientists to roam outcrops as in a terrestrial field campaign. Digitization of point, line, and polyline features is used for measuring the physical dimensions of geological features and communicating interpretations. Dip and strike of bedding and fractures are measured by digitizing a polyline along the contact or fracture trace, through which a best-fit plane is plotted. The attitude of this plane is calculated in the software. Here we apply these tools to analysis of sedimentary rock outcrops and quantification of the geometry of fracture systems encountered by the science teams of NASA’s Mars Exploration Rover Opportunity and Mars Science Laboratory rover Curiosity. We show the benefits PRo3D brings to the visualization and collection of geological interpretations and analyses from rover-derived stereo-images.

Plain Language Summary
Key data returned from robots exploring the surface of Mars are the images they take of the landscape and rock formations. These are sent back to Earth for detailed investigation and analysis by the science teams.
It is difficult to collect reliable measurements from photographs, as they do not truly represent the three-dimensionality of the features within them. In this paper, we present a new 3-D visualization software tool, PRo3D, which enables visualization of 3-D digital models of rock outcrops imaged by robots exploring the surface of Mars. These 3-D models are constructed from mosaicked photographs taken by the stereo panoramic cameras which are positioned on a mast on the rover. This provides a huge advantage to scientists who want to study and analyze the terrain and geology of exposed rock outcrops which surround the rover. Here we apply the tools available in PRo3D to sedimentological and structural analysis of 3-D Digital Outcrop Models of four areas explored by the Mars Exploration Rover Opportunity and Mars Science Laboratory Curiosity rover science teams and show that this method of 3-D visualization and analysis allows scientists to carry out important procedures that would be conducted in a terrestrial field geology campaign.
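The dip-and-strike computation described above (a best-fit plane through digitised polyline points, then the plane's attitude) can be illustrated as follows. The conventions are assumptions on our part, not necessarily those of PRo3D: z is up, azimuths are measured clockwise from north (+y), and strike follows the right-hand rule:

```python
import numpy as np

def dip_and_strike(points):
    """Fit a plane through (N, 3) digitised points and return its attitude
    as (dip, strike) in degrees. The smallest right-singular vector of the
    centred point cloud is the least-squares plane normal."""
    centered = points - points.mean(axis=0)
    normal = np.linalg.svd(centered)[2][-1]
    if normal[2] < 0:                        # orient the normal upward
        normal = -normal
    dip = np.degrees(np.arccos(normal[2]))   # angle of plane from horizontal
    dip_azimuth = np.degrees(np.arctan2(normal[0], normal[1]))
    strike = (dip_azimuth - 90.0) % 360.0    # right-hand-rule strike
    return dip, strike
```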
international symposium on mixed and augmented reality | 2010
Christoph Traxler; Martin Knecht
This laboratory demo is a showcase for the research results published in our ISMAR 2010 paper [3], where we describe a method to simulate the mutual shading effects between virtual and real objects in Mixed Reality applications. The aim is to provide a plausible illusion so that virtual objects seem to be really there. It combines Instant Radiosity [2] with Differential Rendering [1] to a method suitable for MR applications. The demo consists of two scenarios, a simple one to focus on mutual shading effects and an MR game based on LEGO®.
international conference in central europe on computer graphics and visualization | 1998
Christoph Traxler
international conference in central europe on computer graphics and visualization | 2012
Martin Knecht; Georg Tanzmeister; Christoph Traxler; Michael Wimmer