Tobias Alexander Franke
Technische Universität Darmstadt
Publications
Featured research published by Tobias Alexander Franke.
international conference on 3d web technology | 2012
Johannes Behr; Yvonne Jung; Tobias Alexander Franke; Timo Sturm
JSON, XML-based 3D formats (e.g. X3D or Collada) and Declarative 3D approaches share some benefits but also one major drawback: all encoding schemes store the scene graph and vertex data in the same file structure; unstructured raw mesh data is found within descriptive elements of the scene. Web browsers therefore have to download all elements (including every single coordinate) before they can further process the structure of the document. We therefore separate the structured scene information from the unstructured vertex data to improve the user experience and overall performance of the system by introducing two new referenced containers, which encode external mesh data as so-called Sequential Image Geometry (SIG) or Typed-Array-based Binary Geometry (BG). We also discuss compression, rendering and application results and introduce a novel data layout for image geometry data that supports incremental updates, arbitrary input meshes and GPU decoding.
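The core idea of the typed-array-based container can be sketched in a few lines: because the scene description only references the mesh file, the raw bytes can be viewed directly as GPU-ready typed arrays without any text parsing. The layout below (a vertex-count header followed by positions and indices) is a simplified illustration, not the paper's actual container format.

```typescript
// Sketch of decoding a binary geometry container. The scene graph is
// parsed separately; this buffer holds only unstructured vertex data.

interface BinaryGeometry {
  positions: Float32Array; // xyz per vertex, ready for GPU upload
  indices: Uint16Array;    // triangle indices
}

// Assumed layout: [vertexCount: uint32][positions: float32 x3 x N][indices: uint16].
function decodeBinaryGeometry(buffer: ArrayBuffer): BinaryGeometry {
  const view = new DataView(buffer);
  const vertexCount = view.getUint32(0, true); // little-endian header
  const posBytes = vertexCount * 3 * 4;
  // Typed-array views reinterpret the bytes in place; nothing is copied or parsed.
  const positions = new Float32Array(buffer, 4, vertexCount * 3);
  const indices = new Uint16Array(buffer, 4 + posBytes);
  return { positions, indices };
}
```

Because `positions` and `indices` are views into the downloaded buffer, they can be handed to a GPU buffer upload as-is, which is what makes this approach faster than parsing coordinates out of text.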
international conference on 3d web technology | 2011
Tobias Alexander Franke; Svenja Kahn; Manuel Olbrich; Yvonne Jung
Until recently, depth-sensing cameras were used almost exclusively in research due to the high cost of such specialized equipment. With the introduction of the Microsoft Kinect device, real-time depth imaging is now available to the ordinary developer at low cost, and so far it has been received with great interest by both the research and hobbyist developer communities. The underlying OpenNI framework not only allows developers to extract the depth image from the camera, but also provides tracking information for gestures or user skeletons. In this paper, we present a framework to include depth-sensing devices in X3D in order to enhance the visual fidelity of X3D Mixed Reality applications by introducing some extensions for advanced rendering techniques. We furthermore outline how to calibrate depth and image data in a meaningful way for devices that do not come with precalibrated sensors, and discuss some of the OpenNI functionality that X3D can benefit from in the future.
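The standard way to register depth and color data for sensors without factory calibration is to unproject each depth pixel to a 3D point, apply the rigid transform between the two sensors, and reproject into the color image. The sketch below shows that pinhole-camera pipeline; the intrinsics and extrinsics are illustrative placeholders, and the paper's own calibration procedure is not reproduced here.

```typescript
// Map a depth-image pixel into the color image via unproject -> rigid
// transform -> reproject, using pinhole intrinsics for both sensors.

type Intrinsics = { fx: number; fy: number; cx: number; cy: number };

function depthPixelToColorPixel(
  u: number, v: number, depth: number,   // depth pixel + metric depth in meters
  depthK: Intrinsics, colorK: Intrinsics,
  R: number[][], t: number[]             // depth-to-color rotation and translation
): [number, number] {
  // Unproject into the depth camera's space.
  const x = ((u - depthK.cx) / depthK.fx) * depth;
  const y = ((v - depthK.cy) / depthK.fy) * depth;
  const z = depth;
  // Rigid transform into the color camera's space.
  const xc = R[0][0] * x + R[0][1] * y + R[0][2] * z + t[0];
  const yc = R[1][0] * x + R[1][1] * y + R[1][2] * z + t[1];
  const zc = R[2][0] * x + R[2][1] * y + R[2][2] * z + t[2];
  // Project with the color camera's intrinsics.
  return [colorK.fx * (xc / zc) + colorK.cx, colorK.fy * (yc / zc) + colorK.cy];
}
```

With identity rotation and zero translation the mapping degenerates to the identity, which is a useful sanity check when wiring real calibration data in.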
international conference on 3d web technology | 2007
Yvonne Jung; Tobias Alexander Franke; Patrick Dähne; Johannes Behr
In this paper, we explore and discuss X3D as an application description language for advanced mixed reality environments. X3D has been established as an important platform for today's web-based visualization and VR applications. Yet there are very few examples of augmented reality systems utilizing X3D beyond a simple geometric description format. In order to fulfill the image compositing and synthesis requirements of today's augmented reality applications, we propose extensions to X3D, with a particular focus on lighting and realistic rendering.
digital heritage international congress | 2013
Sabine Webel; Manuel Olbrich; Tobias Alexander Franke; Jens Keil
Manual and automatic reconstruction of cultural assets are well-established fields of research. Especially with the recent introduction of methods such as KinectFusion, even low-cost sensors can be employed to create digital copies of the environment. However, experiencing these 3D models is still an open issue considering the wide variety of interaction devices for immersive Virtual Reality. The balance between system mobility and freedom of interaction remains a challenge: either the user's view is completely covered by digital data, which greatly complicates intuitive interaction with the virtual world, or projection-based installations are used, which are usually stationary. In this paper we focus on creating a low-cost, fully immersive and non-stationary Virtual Reality setup that allows the user to intuitively experience cultural heritage artifacts. For this purpose we explore and analyze recent devices such as the Oculus Rift HMD, the Microsoft Kinect and the Leap Motion controller.
international symposium on mixed and augmented reality | 2013
Tobias Alexander Franke
Indirect illumination is an important visual cue which has traditionally been neglected in mixed reality applications. We present Delta Light Propagation Volumes, a novel volumetric relighting method for real-time mixed reality applications which simulates the effect of first-bounce indirect illumination from synthetic objects onto real geometry and vice versa. Inspired by Radiance Transfer Fields, we modify Light Propagation Volumes in such a way as to propagate the change in illumination caused by the introduction of a synthetic object into a real scene. This method combines real and virtual light in one representation, provides improved temporal coherence for indirect light compared to previous solutions, and implicitly includes smooth shadows.
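The "delta" idea the abstract builds on can be illustrated at the level of a single pixel: rather than relighting the camera image from scratch, only the change in illumination caused by the virtual object is added to the observed radiance. This is the classic differential-rendering formulation; the paper's contribution is propagating that delta through a light propagation volume, which this scalar sketch deliberately omits.

```typescript
// Differential composite for one radiance sample (linear color):
// the simulated scene is rendered with and without the virtual object,
// and only the difference is applied to the real camera pixel.

function deltaComposite(
  cameraPixel: number,    // observed real radiance
  withVirtual: number,    // simulated radiance, virtual object present
  withoutVirtual: number  // simulated radiance, real scene only
): number {
  const delta = withVirtual - withoutVirtual; // negative where the object casts shadow
  return Math.max(0, cameraPixel + delta);    // clamp: radiance cannot go negative
}
```

A positive delta brightens the photo (indirect bounce light from the virtual object); a negative delta darkens it (shadowing), which is how both effects fall out of one representation.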
IEEE Computer Graphics and Applications | 2013
Max Limper; Yvonne Jung; Johannes Behr; Timo Sturm; Tobias Alexander Franke; Karsten Schwenk; Arjan Kuijper
Until recently, a major drawback of declarative-3D approaches for the Web was the encoding of scene-graph-related structured data along with a text-based description of unstructured vertex data. Loading times were long, and 3D Web content wasn't available until the page had completely loaded. Overcoming this limitation requires external mesh data containers that are referenced in the scene description. In particular, sequential image geometry containers and explicit binary containers align well with GPU buffer structures, thus enabling fast decoding and GPU uploads. Furthermore, progressive binary geometry enables simple yet highly progressive transmission of arbitrary mesh data on the Web.
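One simple way to make binary geometry progressive is to quantize vertex coordinates and split the bits across streams: a coarse high-byte stream that is renderable as soon as it arrives, followed by a refinement stream that restores full precision. The two-stream split below is an illustrative sketch of that principle, not the actual container layout used by the authors.

```typescript
// Progressive refinement of quantized vertex positions: render after the
// first (high-byte) stream, refine in place when the low bytes arrive.

function dequantize(q: number, min: number, max: number, bits: number): number {
  // Map an integer in [0, 2^bits - 1] back to the mesh's bounding interval.
  return min + (q / ((1 << bits) - 1)) * (max - min);
}

// First pass: only the high 8 of 16 bits -> coarse but displayable geometry.
function coarsePosition(hi: number, min: number, max: number): number {
  return dequantize(hi << 8, min, max, 16);
}

// Second pass: merge in the low 8 bits -> full 16-bit precision.
function refinedPosition(hi: number, lo: number, min: number, max: number): number {
  return dequantize((hi << 8) | lo, min, max, 16);
}
```

Because the refinement only ORs in low-order bits, vertices never move far between passes, so the coarse mesh is a stable preview rather than a visually different model.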
international conference on 3d web technology | 2015
Johannes Behr; Christophe Mouton; Samuel Parfouru; Julien Champeau; Clotilde Jeulin; Maik Thöner; Christian Stein; Michael Schmitt; Max Limper; Miguel de Sousa; Tobias Alexander Franke; Gerrit Voss
This paper presents the webVis/instant3DHub platform, which combines a novel Web-Components based framework and a Visual Computing as a Service infrastructure to deliver an interactive 3D data visualisation solution. The system focuses on minimising resource consumption, while maximising the end-user experience. It utilises an adaptive and automated combination of client, server and hybrid visualisation techniques, while orchestrating transmission, caching and rendering services to deliver structural and semantically complex data sets on any device class and network architecture. The API and Web Component framework allow the application developer to compose and manipulate complex data setups with a simple set of commands inside the browser, without requiring knowledge about the underlying service infrastructure, interfaces and the fully automated processes. This results in a new class of interactive applications, built around a canvas for real-time visualisation of massive data sets.
intelligent virtual agents | 2009
Yvonne Jung; Christine Weber; Jens Keil; Tobias Alexander Franke
For simulating communicative behavior, realistic appearance and plausible behavior are important. Postures and facial expressions also reflect emotional state. Various models of emotion exist, such as the psycho-evolutionary theory developed by Plutchik [1] or Ekman's FACS, but most models are only suited for muscular expressions. A field that has received less attention, in graphics as well, is change in skin color. Other physiological symptoms, such as crying, are likewise left unconsidered.
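The color-change idea can be reduced to a minimal sketch: blend the base skin tone toward a target tone (e.g. a blush red or a pallid grey) by an emotion intensity in [0, 1]. The linear blend and the parameter names are assumptions for illustration; the paper's actual shading model may differ.

```typescript
// Blend a skin color toward an emotion-driven target tone.
// intensity = 0 keeps the base tone; intensity = 1 reaches the target.

type RGB = [number, number, number];

function blendSkinColor(skin: RGB, target: RGB, intensity: number): RGB {
  const t = Math.min(1, Math.max(0, intensity)); // clamp to [0, 1]
  return [
    skin[0] + (target[0] - skin[0]) * t,
    skin[1] + (target[1] - skin[1]) * t,
    skin[2] + (target[2] - skin[2]) * t,
  ];
}
```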
international conference on computer graphics and interactive techniques | 2014
Lukas Hermanns; Tobias Alexander Franke
Indirect lighting (also called global illumination, GI) is an important part of photo-realistic imagery and has become a widely used technique in real-time graphics applications such as Computer Aided Design (CAD), Augmented Reality (AR) and video games. Path tracing can already achieve photo-realism by shooting thousands or millions of rays into a 3D scene for every pixel, but this results in computational overhead exceeding real-time budgets. However, with modern programmable shader pipelines, a fusion of ray-casting algorithms and rasterization is possible: methods similar to testing rays against geometry can be performed on the GPU within a fragment (or pixel) shader. Nevertheless, many implementations of real-time GI still trace only perfect specular reflections. In this work the advantages and disadvantages of different reflection methods are exposed, and a combination of several of them is presented which avoids rendering artifacts and provides a stable, temporally coherent image enhancement. The benefits and limitations of this new method are clearly delineated as well. Moreover, the developed algorithm can be implemented as a pure post-process, which can easily be integrated into an existing rendering pipeline.
Proceedings of the 19th International ACM Conference on 3D Web Technologies | 2014
Manuel Olbrich; Tobias Alexander Franke; Pavel Rojtberg
Augmented Reality is maturing, but in a world where we are used to straightforward services on the internet, Augmented Reality applications still require a lot of preparation before they can be used. Our approach shows how to bring Augmented Reality into a normal web browser, or even into browsers on mobile devices. We show how recent HTML5 features allow us to augment reality based on complex 3D tracking in a browser, without having to install or set up any software on the client. With this solution, we are able to extend 3D Web applications with AR and reach more users with a reduced usability barrier. A key contribution of our work is a pipeline for remote tracking built on web standards.
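On the transport side, a remote-tracking pipeline has to ship captured camera frames to the server as compact binary messages (e.g. over a WebSocket). The framing below, a small header followed by raw pixels, is invented for illustration; the paper's actual protocol is not specified here.

```typescript
// Pack one grayscale camera frame into a single binary message
// suitable for sending to a remote tracking service.

function packFrame(
  frameId: number, width: number, height: number,
  pixels: Uint8Array // grayscale frame, width * height bytes
): ArrayBuffer {
  const buf = new ArrayBuffer(12 + pixels.length);
  const view = new DataView(buf);
  view.setUint32(0, frameId, true); // little-endian header
  view.setUint32(4, width, true);
  view.setUint32(8, height, true);
  new Uint8Array(buf, 12).set(pixels); // payload follows the 12-byte header
  return buf;
}
```

The frame id lets the client match the server's asynchronously returned pose back to the frame it was computed from, which matters because tracking results arrive with network latency.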