Hartmut Seichter
Graz University of Technology
Publications
Featured research published by Hartmut Seichter.
International Symposium on Mixed and Augmented Reality | 2008
Hartmut Seichter; Julian Looser; Mark Billinghurst
This paper introduces ComposAR, a tool to allow a wide audience to author AR and MR applications. It is unique in that it supports both visual programming and interpretive scripting, and an immediate mode for runtime testing. ComposAR is written in Python which means the user interface and runtime behavior can be easily customized and third-party modules can be incorporated into the authoring environment. We describe the design philosophy and the resulting user interface, lessons learned and directions for future research.
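The abstract notes that ComposAR is written in Python and combines interpretive scripting with an immediate mode for runtime testing. As a purely illustrative sketch of what script-driven AR authoring of this kind can look like (the Marker, Model and on_update names are invented here and are not ComposAR's actual API):

```python
# Illustrative sketch only: ComposAR's real API is not shown in the abstract.
# It models the idea of script-driven AR authoring, where a behaviour script
# can be edited and re-run against the live scene at runtime.

class Marker:
    """A fiducial marker that anchors virtual content."""
    def __init__(self, name):
        self.name = name
        self.visible = False
        self.pose = (0.0, 0.0, 0.0)  # simplified position-only pose

class Model:
    """A virtual object attached to a marker."""
    def __init__(self, path, marker):
        self.path = path
        self.marker = marker
        self.angle = 0.0

def on_update(model, dt):
    # Behaviour script: spin the model while its marker is tracked.
    if model.marker.visible:
        model.angle = (model.angle + 90.0 * dt) % 360.0

if __name__ == "__main__":
    hiro = Marker("hiro")
    teapot = Model("teapot.obj", hiro)
    hiro.visible = True
    for frame in range(5):              # stand-in for the real render loop
        on_update(teapot, dt=1 / 30)
        print(f"frame {frame}: angle={teapot.angle:.1f}")
```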
Virtual Reality Software and Technology | 2012
Bernhard Kainz; Stefan Hauswiesner; Gerhard Reitmayr; Markus Steinberger; Raphael Grasset; Lukas Gruber; Eduardo E. Veas; Denis Kalkofen; Hartmut Seichter; Dieter Schmalstieg
Real-time three-dimensional acquisition of real-world scenes has many important applications in computer graphics, computer vision and human-computer interaction. Inexpensive depth sensors such as the Microsoft Kinect make it feasible to develop such applications. However, this technology is still relatively recent, and no detailed studies on its scalability to dense and view-independent acquisition have been reported. This paper addresses the question of what can be done with a larger number of Kinects used simultaneously. We describe an interference-reducing physical setup, a calibration procedure and an extension to the KinectFusion algorithm that produces high-quality volumetric reconstructions from multiple Kinects while overcoming systematic errors in the depth measurements. We also report on enhancing image-based visual hull rendering with depth measurements, and compare the results to KinectFusion. Our system provides practical insight into the achievable spatial and radial range and into the bandwidth requirements for depth data acquisition. Finally, we present a number of practical applications of our system.
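The paper's actual extension to KinectFusion is not reproduced here. As a rough illustration of the underlying idea of fusing several depth sensors' measurements into one truncated signed-distance (TSDF) volume, a simplified NumPy sketch might look like the following; the truncation distance, sensor count and weights are made-up values, and the per-voxel signed distances are assumed to be precomputed:

```python
# Simplified sketch of weighted TSDF fusion across several depth sensors.
# Real KinectFusion projects each voxel into every depth image; here we
# assume the per-voxel signed distances have already been computed.
import numpy as np

def fuse_tsdf(per_sensor_sdf, per_sensor_weight, trunc=0.05):
    """Fuse truncated signed distances from multiple sensors per voxel.

    per_sensor_sdf:    array of shape (num_sensors, num_voxels)
    per_sensor_weight: array of shape (num_sensors, num_voxels),
                       e.g. lower weight where interference is expected
    """
    sdf = np.clip(per_sensor_sdf, -trunc, trunc)            # truncate
    w_sum = per_sensor_weight.sum(axis=0)
    fused = (per_sensor_weight * sdf).sum(axis=0) / np.maximum(w_sum, 1e-6)
    return fused, w_sum

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    true_d = rng.uniform(-0.04, 0.04, size=8)               # toy "scene"
    noisy = true_d + rng.normal(0, 0.005, size=(3, 8))      # 3 noisy sensors
    weights = np.ones((3, 8))
    fused, _ = fuse_tsdf(noisy, weights)
    print(np.round(fused - true_d, 4))                      # residual error
```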
Australasian Computer-Human Interaction Conference | 2006
Alastair Hampshire; Hartmut Seichter; Raphael Grasset; Mark Billinghurst
Developing an Augmented Reality (AR) application is usually a long and non-intuitive task. Few methodologies address this problem, and tools implementing them are limited or non-existent. To date there is no efficient and easy development tool tailored to the needs of Mixed Reality (MR). We present an initial taxonomy of MR applications, addressing the different levels of abstraction for defining the relation between the real and virtual worlds. We then demonstrate some development approaches and describe tools and libraries that we implemented to illustrate aspects of our authoring taxonomy. Finally, we provide a definition of the requirements for a new generation of AR rapid application development (RAD) tools based on actual implementations.
Affective Computing and Intelligent Interaction | 2009
Stephen W. Gilroy; Marc Cavazza; Marcus Niiranen; Elisabeth André; Thurid Vogt; Jérôme Urbain; M. Benayoun; Hartmut Seichter; Mark Billinghurst
The study of multimodality is comparatively less developed for affective interfaces than for their traditional counterparts. However, one condition for the successful development of affective interface technologies is the development of frameworks for real-time multimodal fusion. In this paper, we describe an approach to multimodal affective fusion which relies on a dimensional model, Pleasure-Arousal-Dominance (PAD), to support the fusion of affective modalities, each input modality being represented as a PAD vector. We describe how this model supports both affective content fusion and temporal fusion within a unified approach. We report results from early user studies which confirm the existence of a correlation between measured affective input and user temperament scores.
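The abstract describes each input modality as a vector in Pleasure-Arousal-Dominance space but does not spell out the fusion scheme. A minimal sketch of one plausible way to combine per-modality PAD vectors is shown below; the confidence weights and the exponential temporal decay are assumptions made for illustration, not taken from the paper:

```python
# Illustrative PAD fusion sketch: each modality contributes a
# Pleasure-Arousal-Dominance vector; vectors are combined by confidence-
# weighted averaging with exponential decay over time. The half-life and
# weights below are invented for illustration.

def fuse_pad(observations, now, half_life=2.0):
    """observations: list of (timestamp, weight, (p, a, d)) tuples."""
    total_w = 0.0
    fused = [0.0, 0.0, 0.0]
    for t, w, pad in observations:
        decay = 0.5 ** ((now - t) / half_life)   # older samples count less
        wd = w * decay
        total_w += wd
        for i in range(3):
            fused[i] += wd * pad[i]
    if total_w == 0.0:
        return (0.0, 0.0, 0.0)
    return tuple(v / total_w for v in fused)

if __name__ == "__main__":
    obs = [
        (0.0, 0.8, ( 0.6, 0.4, 0.1)),   # e.g. speech prosody
        (1.5, 0.5, ( 0.2, 0.7, 0.0)),   # e.g. body motion
        (2.0, 0.9, (-0.1, 0.5, 0.2)),   # e.g. keyword spotting
    ]
    print(fuse_pad(obs, now=2.5))
```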
Advances in Computer Entertainment Technology | 2008
Stephen W. Gilroy; Marc Cavazza; Rémi Chaignon; Satu-Marja Mäkelä; Markus Niranen; Elisabeth André; Thurid Vogt; Jérôme Urbain; Hartmut Seichter; Mark Billinghurst; M. Benayoun
The development of Affective Interface technologies makes it possible to envision a new generation of Digital Arts and Entertainment applications, in which interaction will be based directly on the analysis of user experience. In this paper, we describe an approach to the development of Multimodal Affective Interfaces that supports real-time analysis of user experience as part of an Augmented Reality Art installation. The system relies on a PAD dimensional model of emotion to support the fusion of affective modalities, each input modality being represented as a PAD vector. A further advantage of the PAD model is that it can support a representation of affective responses that relate to aesthetic impressions.
International Symposium on Mixed and Augmented Reality | 2011
Alessandro Mulloni; Hartmut Seichter; Dieter Schmalstieg
We investigate user experiences when using augmented reality (AR) as a new aid to navigation. We integrate AR with other more common interfaces into a handheld navigation system, and we conduct an exploratory study to see where and how people exploit AR. Based on previous work on augmented photographs, we hypothesize that AR is used mostly to support wayfinding at static locations, when users approach a road intersection. In partial contrast to this hypothesis, our results from a user evaluation hint that users expect to use the system while walking. Further, our results also show that AR is usually exploited shortly before and after road intersections, suggesting that tracking support will be needed mostly in the proximity of road intersections.
Pervasive and Mobile Computing | 2015
Jens Grubert; Michel Pahud; Raphael Grasset; Dieter Schmalstieg; Hartmut Seichter
This paper investigates the utility of the Magic Lens metaphor for map navigation on small-screen handheld devices, given state-of-the-art computer vision tracking. We investigate both performance and user experience aspects. In contrast to previous studies, a semi-controlled field experiment (n = 18) in a ski resort indicated significantly longer task completion times for a Magic Lens compared to a Static Peephole interface in an information browsing task. A follow-up controlled laboratory study (n = 21) investigated the impact of the workspace size on the performance and usability of both interfaces. We show that for small workspaces Static Peephole outperforms Magic Lens. As workspace size increases, performance becomes equivalent and subjective measurements indicate less demand and better usability for Magic Lens. Finally, we discuss the relevance of our findings for the application of Magic Lens interfaces to map interaction in touristic contexts.
Highlights: Investigation of Magic Lens and Static Peephole on smartphones for maps. Two experiments: a semi-controlled field experiment in a ski resort and a lab study. For A0-sized posters, Magic Lens is slower and less preferred. For larger workspace sizes, performance between the interfaces is equivalent. Magic Lens interaction results in better usability for large workspaces.
Human Factors in Computing Systems | 2012
Alessandro Mulloni; Hartmut Seichter; Andreas Dünser; Patrick Baudisch; Dieter Schmalstieg
We investigate 360° panoramas as overviews to support users in the task of locating objects in the surrounding environment. Panoramas are typically visualized as rectangular photographs, but this does not provide clear cues for physical directions in the environment. In this paper, we conduct a series of studies with three different shapes: Frontal, Top-Down and Bird's Eye; the last two shapes are chosen because they provide a clearer representation of the spatial mapping between panorama and environment. Our results show that good readability of the panorama is most important and that a clear representation of the spatial mapping plays a secondary role. This paper is the first to provide an understanding of how users exploit 360° panoramic overviews to locate objects in the surrounding environment and how different design factors affect user performance.
International Symposium on Mixed and Augmented Reality | 2014
Jens Grubert; Hartmut Seichter; Dieter Schmalstieg
We work towards ad-hoc augmentation of public displays on handheld devices, supporting user-perspective rendering of display content. Our prototype system requires only access to a screencast of the public display, which can easily be provided through common streaming platforms, and is otherwise self-contained. Hence, it scales easily to multiple users.
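The abstract implies a pipeline in which the handheld receives a screencast of the display and re-renders it registered to the camera view. A minimal sketch of that registration step, assuming the four display corners have already been found by vision-based tracking (the hard-coded corners and the compositing strategy below are illustrative, not the paper's implementation), could look like:

```python
# Sketch: warp the current screencast frame onto the quadrilateral where the
# public display appears in the handheld camera image, then composite it.
# The corner coordinates are placeholders; a real system would obtain them
# from vision-based tracking of the display.
import cv2
import numpy as np

def overlay_screencast(camera_frame, screencast_frame, display_corners):
    """display_corners: four (x, y) pixel coordinates in camera_frame."""
    h, w = screencast_frame.shape[:2]
    src = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
    dst = np.float32(display_corners)
    H, _ = cv2.findHomography(src, dst)
    size = (camera_frame.shape[1], camera_frame.shape[0])
    warped = cv2.warpPerspective(screencast_frame, H, size)
    mask = cv2.warpPerspective(np.full((h, w), 255, np.uint8), H, size)
    out = camera_frame.copy()
    out[mask > 0] = warped[mask > 0]     # paste warped content onto the view
    return out

if __name__ == "__main__":
    cam = np.zeros((480, 640, 3), np.uint8)                 # dummy camera frame
    cast = np.full((360, 640, 3), 200, np.uint8)            # dummy screencast
    corners = [(150, 100), (500, 120), (480, 350), (140, 330)]
    print(overlay_screencast(cam, cast, corners).shape)
```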
Advances in Computer Entertainment Technology | 2008
Philip Buchanan; Hartmut Seichter; Mark Billinghurst; Raphael Grasset
Physics simulation is becoming more common in computing. We have developed a comprehensive toolkit to connect the physical and virtual world within Augmented Reality (AR) using rigid body simulation. Unlike existing techniques for embedding physics simulations into 3D environments, the use of rigid body simulation within AR requires a different approach. To demonstrate our approach we developed an edutainment game based on the concept of chain reactions and physical contraptions. In this paper we elaborate on the constraints introduced by mixing AR and rigid body simulation, and how this subsequently affects the visual richness and perceptual appearance of an AR simulation. We describe our implementation approach and provide an analysis of additional scenarios that would be enriched by physical simulation.
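The abstract describes coupling tracked, real-world objects with a rigid body simulation of virtual ones. A minimal sketch of that coupling idea, assuming a tracked object is injected into the simulation as a kinematic body whose pose is dictated by the tracker (gravity-only point masses and the restitution factor are simplifications invented here, not the paper's toolkit), is shown below:

```python
# Minimal coupling sketch: virtual bodies follow simple rigid-body dynamics,
# while an object whose pose comes from AR tracking is a kinematic body that
# the simulation never moves but can collide against. No rotation, 1-D height.
from dataclasses import dataclass

@dataclass
class Body:
    y: float                  # height above the (real) table plane, metres
    vy: float = 0.0
    kinematic: bool = False   # pose dictated by the tracker, not by physics

def step(bodies, dt, gravity=-9.81):
    for b in bodies:
        if b.kinematic:
            continue                       # tracker overrides simulation
        b.vy += gravity * dt
        b.y += b.vy * dt
        if b.y < 0.0:                      # collide with the tracked plane
            b.y, b.vy = 0.0, -0.4 * b.vy   # crude restitution

if __name__ == "__main__":
    ball = Body(y=0.5)                     # virtual ball dropped onto table
    paddle = Body(y=0.0, kinematic=True)   # pose would come from a marker
    world = [ball, paddle]
    for _ in range(60):
        step(world, dt=1 / 60)
    print(f"ball height after 1 s: {ball.y:.3f} m")
```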