Raphael Grasset
Graz University of Technology
Publication
Featured research published by Raphael Grasset.
International Symposium on Mixed and Augmented Reality | 2012
Raphael Grasset; Tobias Langlotz; Denis Kalkofen; Markus Tatzgern; Dieter Schmalstieg
In this paper, we introduce a novel view management technique for placing labels in Augmented Reality systems. A common issue in many Augmented Reality applications is the absence of knowledge about the real environment, which limits the efficient representation and optimal layout of the digital information augmented onto the real world. To overcome this problem, we introduce an image-based approach that combines a visual saliency algorithm with edge analysis to identify potentially important image regions and geometric constraints for placing labels. Our solution also includes adaptive rendering techniques that allow a designer to control the appearance of depth cues. We describe the results of a user study covering different scenarios, which we performed to validate our approach. Our technique particularly benefits Augmented Reality browsers, which usually lack scene knowledge, but also applies to many other Augmented Reality applications such as cultural heritage and maintenance.
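The image-based placement idea can be sketched as follows. This is a toy illustration, not the paper's algorithm: a simple gradient-magnitude map stands in for the combined saliency and edge analysis, and the label is placed over the rectangle with the lowest summed importance. The names `importance_map` and `best_label_rect` are hypothetical.

```python
# Toy sketch of image-based label placement: a gradient-magnitude map
# approximates the paper's saliency + edge analysis; the label goes over
# the least-important rectangle. Names here are illustrative only.

def importance_map(img):
    """Per-pixel importance = |horizontal gradient| + |vertical gradient|."""
    h, w = len(img), len(img[0])
    imp = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            gx = abs(img[y][min(x + 1, w - 1)] - img[y][x])
            gy = abs(img[min(y + 1, h - 1)][x] - img[y][x])
            imp[y][x] = gx + gy
    return imp

def best_label_rect(imp, rw, rh):
    """Return (x, y) of the rw x rh rectangle with the lowest summed importance."""
    h, w = len(imp), len(imp[0])
    best, best_xy = float("inf"), (0, 0)
    for y in range(h - rh + 1):
        for x in range(w - rw + 1):
            cost = sum(imp[yy][xx] for yy in range(y, y + rh)
                                   for xx in range(x, x + rw))
            if cost < best:
                best, best_xy = cost, (x, y)
    return best_xy

# Toy 6x6 image: a bright object in the top-left corner, flat elsewhere.
img = [[9, 9, 0, 0, 0, 0],
       [9, 9, 0, 0, 0, 0],
       [0, 0, 0, 0, 0, 0],
       [0, 0, 0, 0, 0, 0],
       [0, 0, 0, 0, 0, 0],
       [0, 0, 0, 0, 0, 0]]
print(best_label_rect(importance_map(img), 3, 3))  # → (2, 0), clear of the object
```

A real system would replace the gradient map with a learned or spectral saliency estimator and add geometric constraints (leader-line length, label overlap) to the cost.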
Virtual Reality Software and Technology | 2012
Bernhard Kainz; Stefan Hauswiesner; Gerhard Reitmayr; Markus Steinberger; Raphael Grasset; Lukas Gruber; Eduardo E. Veas; Denis Kalkofen; Hartmut Seichter; Dieter Schmalstieg
Real-time three-dimensional acquisition of real-world scenes has many important applications in computer graphics, computer vision and human-computer interaction. Inexpensive depth sensors such as the Microsoft Kinect make it practical to develop such applications. However, this technology is still relatively recent, and no detailed studies of its scalability to dense and view-independent acquisition have been reported. This paper addresses the question of what can be done with a larger number of Kinects used simultaneously. We describe an interference-reducing physical setup, a calibration procedure and an extension to the KinectFusion algorithm that together produce high-quality volumetric reconstructions from multiple Kinects whilst overcoming systematic errors in the depth measurements. We also report on enhancing image-based visual hull rendering with depth measurements, and compare the results to KinectFusion. Our system provides practical insight into the achievable spatial and radial range and into the bandwidth requirements for depth data acquisition. Finally, we present a number of practical applications of our system.
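The core fusion step such a system builds on can be illustrated with the standard KinectFusion-style voxel update: each voxel keeps a running weighted average of truncated signed distances (TSDF), so independent, noisy depth measurements from several sensors average out. This is a generic sketch, not the paper's code; the truncation band and weights are arbitrary illustrative values.

```python
# Illustrative KinectFusion-style TSDF update (not the paper's code):
# each voxel stores (tsdf, weight) and fuses new depth observations by
# a running weighted average, which averages out per-sensor noise.

TRUNC = 0.1  # truncation band in metres (hypothetical value)

def tsdf_update(voxel, depth_at_voxel, voxel_depth, sensor_weight=1.0):
    """Fuse one depth observation into a (tsdf, weight) voxel tuple."""
    sdf = depth_at_voxel - voxel_depth          # signed distance along the ray
    if sdf < -TRUNC:                            # far behind the surface: skip
        return voxel
    tsdf = max(-1.0, min(1.0, sdf / TRUNC))     # truncate and normalise
    old_d, old_w = voxel
    new_w = old_w + sensor_weight
    new_d = (old_d * old_w + tsdf * sensor_weight) / new_w
    return (new_d, new_w)

# Three Kinects observe a surface at 1.00 m with noisy offsets; fuse their
# measurements for a voxel located exactly at 1.00 m depth.
voxel = (0.0, 0.0)
for measured_depth in (1.02, 0.99, 0.99):
    voxel = tsdf_update(voxel, measured_depth, 1.00)
print(voxel)  # fused distance near 0: the voxel lies on the surface
```

With multiple simultaneously running Kinects, per-sensor weights could additionally down-weight measurements degraded by structured-light interference.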
Proceedings of the IEEE | 2014
Tobias Langlotz; Thanh Nguyen; Dieter Schmalstieg; Raphael Grasset
As low-level hardware will soon allow us to visualize virtual content anywhere in the real world, managing that content in a structured manner still needs to be addressed. Augmented reality (AR) browser technology is the gateway to such a structured software platform and an anywhere AR user experience. AR browsers are the counterparts of Web browsers in the real world, permitting the overlay of interactive multimedia content on the physical world or the objects it refers to. Since the current generation barely allows us to see floating virtual items in the physical world, a tighter coupling with our reality has not yet been explored. This paper presents our recent efforts to create rich, seamless, and adaptive AR browsers. We discuss major challenges in the area and present an agenda of future research directions for an everyday augmented world.
IEEE Transactions on Visualization and Computer Graphics | 2012
Eduardo E. Veas; Raphael Grasset; Ernst Kruijff; Dieter Schmalstieg
In this paper, we explore techniques that aim to improve site understanding for outdoor Augmented Reality (AR) applications. While the first-person perspective in AR is a direct way of filtering and zooming on a portion of the data set, it severely narrows the overview of the situation, particularly over large areas. We present two interactive techniques to overcome this problem: multi-view AR and the variable perspective view. We describe in detail the conceptual, visualization and interaction aspects of these techniques and their evaluation through a comparative user study. The results strengthen the validity of our approach and the applicability of our methods to a wide range of application domains.
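A variable perspective view can be pictured as a single blend parameter that moves the camera from the first-person AR view toward an elevated overview. The sketch below is our own simplification with illustrative height and pitch values, not the paper's implementation.

```python
# Hypothetical sketch of a "variable perspective view": one parameter t
# blends the camera between the first-person AR view (t = 0) and an
# elevated overview (t = 1). Heights and angles are illustrative only.

def variable_perspective(t, eye_height=1.7, overview_height=120.0):
    """Return (camera_height_m, pitch_deg) for blend factor t in [0, 1]."""
    t = max(0.0, min(1.0, t))
    height = (1 - t) * eye_height + t * overview_height
    pitch = -90.0 * t   # 0 deg = looking at the horizon, -90 = straight down
    return height, pitch

# Sweep the blend factor to see the camera rise and tilt downward.
for t in (0.0, 0.5, 1.0):
    print(t, variable_perspective(t))
```

In a full system the blend would also pull the camera backwards along its viewing ray so the user's current focus region stays on screen during the transition.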
IEEE Virtual Reality Conference | 2014
Markus Tatzgern; Denis Kalkofen; Raphael Grasset; Dieter Schmalstieg
Annotations of objects in 3D environments are commonly controlled using view management techniques. State-of-the-art view management strategies for external labels operate in 2D image space. This creates problems, because the 2D view of a 3D scene changes over time, and temporal behavior of elements in a 3D scene is not obvious in 2D image space. We propose managing the placement of external labels in 3D object space instead. We use 3D geometric constraints to achieve label placement that fulfills the desired objectives (e.g., avoiding overlapping labels), but also behaves consistently over time as the viewpoint changes. We propose two geometric constraints: a 3D pole constraint, where labels move along a 3D pole sticking out from the annotated object, and a plane constraint, where labels move in a dominant plane in the world. This formulation is compatible with standard optimization approaches for labeling, but overcomes the lack of temporal coherence.
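The pole constraint can be illustrated with a minimal pinhole projection: the label may only move along a vertical pole above its anchor in world space, and its screen position is simply the projection of that 3D point, so it stays coherent as the camera moves. This is an assumed camera model of our own, not the paper's implementation.

```python
# Minimal sketch (assumed pinhole camera, not the paper's code) of the
# 3D pole constraint: the label is a fixed world-space point on a pole
# above its anchor, so anchor and label move consistently on screen.

def project(point, cam_pos, focal=500.0):
    """Project a world-space point with a camera looking down +z."""
    x, y, z = (p - c for p, c in zip(point, cam_pos))
    return (focal * x / z, focal * y / z)

def label_on_pole(anchor, height):
    """A label constrained to the pole is the anchor lifted by `height`."""
    ax, ay, az = anchor
    return (ax, ay + height, az)

anchor = (0.0, 0.0, 5.0)
label = label_on_pole(anchor, 1.5)
# As the camera strafes, both points shift consistently on screen,
# because both are projections of fixed 3D points.
for cam_x in (0.0, 0.5, 1.0):
    cam = (cam_x, 0.0, 0.0)
    print(project(anchor, cam), project(label, cam))
```

An optimizer would then adjust `height` per label to satisfy objectives such as overlap avoidance, while the 3D constraint keeps the motion temporally coherent.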
Communications of The ACM | 2013
Tobias Langlotz; Jens Grubert; Raphael Grasset
How lessons learned from the evolution of the Web and Web browsers can influence the development of AR browsers.
Ubiquitous Computing | 2013
Eduardo E. Veas; Raphael Grasset; Ioan Ferencik; Thomas Grünewald; Dieter Schmalstieg
In response to dramatic changes in the environment, and supported by advances in wireless networking, pervasive sensor networks have become a common tool for environmental monitoring. However, tools for on-site visualization and interactive exploration of environmental data are still inadequate for domain experts. Current solutions are generally limited to tabular data, basic 2D plots, or standard 2D GIS tools designed for the desktop and not adapted to mobile use. In this paper, we introduce a novel augmented reality platform for 3D mobile visualization of environmental data. Following a user-centered design approach, we analyze processes, tasks, and requirements of on-site visualization tools for environmental experts. We present our multilayer infrastructure and the mobile augmented reality platform that leverages visualization of georeferenced sensor measurement and simulation data in a seamless integrated view of the environment.
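One building block such a platform needs is mapping georeferenced sensor positions into the viewer's local frame. The sketch below is our own (names and formula choice are not from the paper): an equirectangular approximation that is adequate over the short ranges of on-site AR.

```python
import math

# Hedged sketch (our own, not the paper's code): convert a georeferenced
# sensor position to local east/north metres around the viewer using an
# equirectangular approximation, good enough over on-site AR ranges.

EARTH_R = 6371000.0  # mean Earth radius in metres

def geo_to_local(lat, lon, ref_lat, ref_lon):
    """Return (east_m, north_m) of (lat, lon) relative to (ref_lat, ref_lon)."""
    d_lat = math.radians(lat - ref_lat)
    d_lon = math.radians(lon - ref_lon)
    north = d_lat * EARTH_R
    east = d_lon * EARTH_R * math.cos(math.radians(ref_lat))
    return east, north

# A sensor ~111 m north of the viewer (0.001 deg of latitude).
print(geo_to_local(47.071, 15.440, 47.070, 15.440))
```

For survey-grade accuracy or larger areas, a proper map projection or an ECEF-to-ENU transform would replace this approximation.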
Nordic Conference on Human-Computer Interaction | 2012
Jens Grubert; Raphael Grasset; Gerhard Reitmayr
The use of Augmented Reality for overlaying visual information on print media such as street posters has become widespread over the last few years. While this user interface metaphor represents an instance of cross-media information spaces, the specific context of its use has not yet been carefully studied, so productions generally rely on trial-and-error approaches. In this paper, we explicitly consider mobile contexts in the consumption of augmented print media. We explore the design space of hybrid user interfaces for augmented posters and describe different case studies to validate our approach. The outcomes of this work inform the design of future interfaces for publicly accessible augmented print media in mobile contexts.
Pervasive and Mobile Computing | 2015
Jens Grubert; Michel Pahud; Raphael Grasset; Dieter Schmalstieg; Hartmut Seichter
This paper investigates the utility of the Magic Lens metaphor on small-screen handheld devices for map navigation, given state-of-the-art computer vision tracking. We investigate both performance and user experience aspects. In contrast to previous studies, a semi-controlled field experiment (n = 18) in a ski resort indicated significantly longer task completion times for a Magic Lens compared to a Static Peephole interface in an information browsing task. A follow-up controlled laboratory study (n = 21) investigated the impact of workspace size on the performance and usability of both interfaces. We show that for small workspaces Static Peephole outperforms Magic Lens. As workspace size increases, performance becomes equivalent and subjective measurements indicate less demand and better usability for Magic Lens. Finally, we discuss the relevance of our findings for the application of Magic Lens interfaces to map interaction in touristic contexts.
Highlights:
- Investigation of Magic Lens and Static Peephole on smartphones for maps.
- Two experiments: a semi-controlled field experiment in a ski resort and a lab study.
- For A0-sized posters, Magic Lens is slower and less preferred.
- For larger workspace sizes, performance between the interfaces is equivalent.
- Magic Lens interaction results in better usability for large workspaces.
IEEE Virtual Reality Conference | 2014
Markus Tatzgern; Raphael Grasset; Denis Kalkofen; Dieter Schmalstieg
Augmented Reality (AR) applications require knowledge about the real-world environment in which they are used. This knowledge is often gathered while developing the AR application and stored for future uses of the application. Consequently, changes to the real world lead to a mismatch between the previously recorded data and the real world. New capturing techniques based on dense Simultaneous Localization and Mapping (SLAM) not only allow users to capture real-world scenes at run time, but also enable them to capture changes of the world. However, instead of using previously recorded and prepared scenes, users must interact with an unprepared environment. In this paper, we present a set of new interaction techniques that support users in handling captured real-world environments. The techniques present virtual viewpoints of the scene based on a scene analysis and provide natural transitions between the AR view and the virtual viewpoints. We demonstrate our approach with a SLAM-based prototype that allows us to capture a real-world scene, and describe example applications of our system.
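A natural transition between the live AR view and a virtual viewpoint can be sketched as eased pose interpolation. This is our own simplification, not the paper's code: positions are linearly interpolated, the yaw takes the shortest arc, and a smoothstep profile gives zero velocity at both ends so the camera flight feels natural.

```python
# Illustrative sketch (our own simplification, not the paper's code) of a
# smooth transition between the live AR camera pose and a virtual viewpoint.

def ease(t):
    """Smoothstep easing: 0 -> 0, 1 -> 1, zero velocity at both ends."""
    return t * t * (3 - 2 * t)

def blend_pose(pose_a, pose_b, t):
    """Interpolate (x, y, z, yaw_deg) camera poses for t in [0, 1]."""
    s = ease(max(0.0, min(1.0, t)))
    (xa, ya, za, ha), (xb, yb, zb, hb) = pose_a, pose_b
    d_yaw = (hb - ha + 180.0) % 360.0 - 180.0   # shortest angular difference
    return (xa + s * (xb - xa),
            ya + s * (yb - ya),
            za + s * (zb - za),
            (ha + s * d_yaw) % 360.0)

ar_view = (0.0, 1.7, 0.0, 350.0)   # live camera
virtual = (4.0, 8.0, -3.0, 20.0)   # overview viewpoint from scene analysis
print(blend_pose(ar_view, virtual, 0.5))
```

A production system would interpolate full rotations (e.g. quaternion slerp) rather than yaw alone, but the shortest-arc wrap shown here is the key detail that avoids the camera spinning the long way around.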