Matthias Baldauf
Vienna University of Technology
Publications
Featured research published by Matthias Baldauf.
Human-Computer Interaction with Mobile Devices and Services | 2011
Matthias Baldauf; Sebastian Zambanini; Peter Fröhlich; Peter Reichl
The vision-based detection of hand gestures is one technological enabler for Natural User Interfaces, which aim to provide natural and intuitive interaction with computers. Mobile devices in particular might benefit from such a less device-centric, more natural input modality. In this paper, we introduce our ongoing work on the visual markerless detection of fingertips on mobile devices. Further, we shed light on the potential of mobile hand gesture detection and present several promising use cases and respective demo applications based on the presented engine.
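Below is a minimal sketch of one common approach to markerless fingertip detection (skin-colour segmentation plus a convex-hull heuristic), shown in Python with OpenCV for illustration only; the paper's on-device engine and its exact pipeline are not described here, and the colour thresholds are assumptions that would need tuning per camera.

import cv2
import numpy as np

def detect_fingertip(frame_bgr):
    """Return the (x, y) pixel of the most likely fingertip, or None."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    # Rough skin-colour range in HSV; illustrative values, not from the paper.
    mask = cv2.inRange(hsv, (0, 40, 60), (25, 180, 255))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    hand = max(contours, key=cv2.contourArea)   # assume hand = largest skin blob
    if cv2.contourArea(hand) < 1000:            # ignore small noise regions
        return None
    # Take the topmost convex-hull point as a crude fingertip estimate.
    hull = cv2.convexHull(hand)
    x, y = min((pt[0] for pt in hull), key=lambda p: p[1])
    return int(x), int(y)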
Communications of the ACM | 2011
Peter Fröhlich; Antti Oulasvirta; Matthias Baldauf; Antti Nurminen
How to experience real-world landmarks through a wave, gaze, location coordinates, or touch, prompting delivery of useful digital information.
Augmented Human International Conference | 2010
Matthias Baldauf; Peter Fröhlich; Siegfried Hutter
Due to the vast amount of available georeferenced information novel techniques to more intuitively and efficiently interact with such content are increasingly required. In this paper, we introduce KIBITZER, a lightweight wearable system that enables the browsing of urban surroundings for annotated digital information. KIBITZER exploits its users eye-gaze as natural indicator of attention to identify objects-of-interest and offers speech- and non-speech auditory feedback. Thus, it provides the user with a 6th sense for digital georeferenced information. We present a description of our systems architecture and the interaction technique and outline experiences from first functional trials.
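As a rough illustration of gaze-based selection of geo-referenced objects, the Python sketch below picks the annotated point of interest whose compass bearing is closest to the user's gaze direction. It assumes the wearable already provides the user's position and an absolute gaze bearing; the function names, data model, and 10-degree threshold are illustrative assumptions, not details from the paper.

import math

def bearing_deg(lat1, lon1, lat2, lon2):
    """Initial great-circle bearing from point 1 to point 2, in degrees."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    y = math.sin(dlon) * math.cos(phi2)
    x = (math.cos(phi1) * math.sin(phi2)
         - math.sin(phi1) * math.cos(phi2) * math.cos(dlon))
    return math.degrees(math.atan2(y, x)) % 360.0

def object_of_interest(user_lat, user_lon, gaze_bearing, pois, max_off_deg=10):
    """Return the annotated POI closest to the gaze direction, or None."""
    best, best_off = None, max_off_deg
    for poi in pois:  # each poi: {"name": ..., "lat": ..., "lon": ...}
        b = bearing_deg(user_lat, user_lon, poi["lat"], poi["lon"])
        off = abs((b - gaze_bearing + 180) % 360 - 180)  # angular difference
        if off < best_off:
            best, best_off = poi, off
    return best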
Journal of Location Based Services | 2011
Matthias Baldauf; Peter Fröhlich; Kathrin Masuch; Thomas Grechenig
The exploration of the immediate surroundings through mobile location-aware devices is starting to become an everyday urban activity. Due to the increasing amount of available geo-referenced information, advanced viewing and filtering techniques need to be investigated to complement simple present-day 2D-map presentations. This article examines both well-established techniques (2D map, list view, category view) and advanced concepts (3D map, tag cloud) regarding their support of mobile urban exploration. In a field study, 26 participants used an experimental multi-view prototype for viewing and filtering tasks on a route through an urban environment. The results show that content-based views may support viewing the content as well as spatial interfaces do. Furthermore, the experiment provides evidence that content-based filtering techniques are increasingly preferred over spatial ones as the amount of available information grows.
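To make the contrast between spatial and content-based presentation concrete, here is a minimal Python sketch of a distance-based (map-style) filter versus a category-based filter over the same point-of-interest data. The data model and field names are assumptions made for illustration and do not reflect the prototype's implementation.

import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres."""
    r = 6371000.0
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

def spatial_view(pois, here, radius_m=500):
    """2D-map style: everything within a radius, nearest first."""
    nearby = [(haversine_m(here[0], here[1], p["lat"], p["lon"]), p)
              for p in pois]
    nearby.sort(key=lambda t: t[0])
    return [p for d, p in nearby if d <= radius_m]

def category_view(pois, category):
    """Content-based style: everything matching a chosen category."""
    return [p for p in pois if category in p.get("categories", [])]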
Automotive User Interfaces and Interactive Vehicular Applications | 2011
Peter Fröhlich; Matthias Baldauf; Marion Hagen; Stefan Suette; Dietmar Schabus; Andrew L. Kun
Today's in-car information systems are undergoing an evolution towards realistic visualization as well as real-time telematics services. In a road study with 31 participants we explored the communication of safety information to the driver. We compared three presentation styles: audio-only, audiovisual with a conventional map, and audiovisual with augmented reality. The participants drove on a motorway route and were confronted with recommendations for route following, speed limitation, lane utilization, unexpected route changes, and emergency stops. We found significant differences between these safety scenarios in terms of driving performance, eye glances and subjective preference. Comparing the presentation styles, we found that following such recommendations was highly efficient in the audio-only mode. Additional visual information did not significantly increase driving performance. As our subjective preference data also shows, augmented reality does not necessarily create added value when following safety-related traffic recommendations. However, additional visual information did not interfere with safe driving. Importantly, we did not find evidence for a higher distraction potential of augmented reality; drivers even looked slightly less frequently at the human-machine interface screen in the augmented reality mode than with conventional maps.
Human-Computer Interaction with Mobile Devices and Services | 2014
Matthias Baldauf; Stefan Suette; Peter Fröhlich; Ulrich Lehner
Interactive opinion polls are a promising novel use case for public urban displays. However, voicing one's opinion at such a public installation poses special privacy requirements. In this paper, we introduce our ongoing work on investigating the roles of the interaction technique and the poll question in this novel context. We present a field study comparing three different voting techniques (a public touch interface, a personal smartphone via a scanned QR code, and remote voting through a short Web address) and three types of poll questions (general, personal, local). Overall, the results show that actively casting an opinion on a timely topic is highly appreciated by passers-by. The public voting opportunity through a touch screen is clearly preferred. Offering mobile or remote voting does not significantly increase the overall participation rate. The type of poll question has an impact on the number of participants but does not influence the preferred interaction modality.
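Since the study compares participation across three voting channels, a minimal Python sketch of a poll backend that records the channel of each vote is shown below. The class, channel labels, and example question are illustrative assumptions, not the authors' implementation.

from collections import Counter

CHANNELS = {"touch", "qr", "web"}  # public screen, smartphone via QR, remote Web

class OpinionPoll:
    def __init__(self, question, options):
        self.question = question
        self.options = set(options)
        self.votes = []  # list of (option, channel) tuples

    def cast(self, option, channel):
        """Record one vote together with the channel it came through."""
        if option not in self.options or channel not in CHANNELS:
            raise ValueError("unknown option or channel")
        self.votes.append((option, channel))

    def tally(self):
        """Counts per option and per channel, e.g. for display on the wall."""
        return {
            "by_option": Counter(opt for opt, _ in self.votes),
            "by_channel": Counter(ch for _, ch in self.votes),
        }

# Example (hypothetical question):
# poll = OpinionPoll("Should the square become car-free?", ["yes", "no"])
# poll.cast("yes", "touch"); poll.cast("no", "qr")
# print(poll.tally())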
Human Factors in Computing Systems | 2013
Matthias Baldauf; Peter Fröhlich
The Augmented Video Wall is a compelling showcase application demonstrating a novel collocated interaction technique for public displays beyond traditional competitive or collaborative multi-user scenarios. By utilizing augmented reality on personal mobile devices and applying animated video overlays accurately superimposed upon the public display, we create the illusion of literally private views onto a shared public display. Besides this concurrent viewing mode, the demonstrator features a competitive mode and a concurrent mode enhanced with social features to highlight the characteristics of this novel display interaction technique. During a first preliminary study, the Augmented Video Wall attracted many visitors and created highly entertaining experiences for groups.
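The sketch below illustrates only the overlay step of such an approach: once the four corners of the public display have been located in the phone's camera image (by whatever detection method), a private video frame can be warped onto the live view with a perspective transform. This is an illustrative reconstruction in Python/OpenCV under those assumptions, not the authors' implementation.

import cv2
import numpy as np

def superimpose(camera_frame, overlay_frame, display_corners_px):
    """Warp overlay_frame onto the quad given by display_corners_px
    (4 x 2 array ordered TL, TR, BR, BL) within camera_frame."""
    h, w = overlay_frame.shape[:2]
    src = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
    dst = np.float32(display_corners_px)
    H = cv2.getPerspectiveTransform(src, dst)
    size = (camera_frame.shape[1], camera_frame.shape[0])
    warped = cv2.warpPerspective(overlay_frame, H, size)
    # Mask of the warped region, used to blend it into the camera image.
    mask = cv2.warpPerspective(np.full((h, w), 255, np.uint8), H, size)
    out = camera_frame.copy()
    out[mask > 0] = warped[mask > 0]
    return out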
ACM Multimedia | 2015
Matthias Baldauf; Peter Fröhlich; Florence Adegeye; Stefan Suette
On-screen gamepads are increasingly used as controllers for video games on distant screens, yet lack the typical tactile feedback known from hardware controllers. We conducted a comparative lab study to investigate four smartphone gamepads inspired by traditional game controllers and mobile game controls (directional buttons, directional pad, floating joystick, tilt control). The study consisted of both a formal control test and controlling two popular video games of different genres (Pac-Man and Super Mario Bros.). The results indicate that the directional buttons require the most attention from the user but work precisely for direction-restricted navigational tasks. The directional pad and the joystick showed similar performance, yet they encourage drifting and unintended operations when the user is focused on the remote screen. While currently unfamiliar to many users, the floating joystick can reduce glances at the device. Tilt turned out to be neither sufficiently precise nor quick enough for the investigated tasks. The article concludes with design guidelines offering easily realizable measures for typical contexts such as casual gaming at home or spontaneous gaming on public displays.
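To clarify what a "floating joystick" control does, the following Python sketch models its core logic: the stick's origin is set wherever the thumb first lands, and subsequent touch moves yield a clamped direction vector. The class and parameter values are assumptions for illustration; a real controller would live in the mobile app's touch handler rather than in plain Python.

import math

class FloatingJoystick:
    def __init__(self, max_radius_px=120.0):
        self.max_radius = max_radius_px
        self.origin = None

    def touch_down(self, x, y):
        self.origin = (x, y)          # joystick centre "floats" to the thumb

    def touch_move(self, x, y):
        """Return (dx, dy) in [-1, 1] relative to the floating origin."""
        if self.origin is None:
            return 0.0, 0.0
        dx, dy = x - self.origin[0], y - self.origin[1]
        dist = math.hypot(dx, dy)
        if dist > self.max_radius:    # clamp to the joystick's rim
            dx, dy = dx / dist * self.max_radius, dy / dist * self.max_radius
        return dx / self.max_radius, dy / self.max_radius

    def touch_up(self):
        self.origin = None            # next touch re-anchors the stick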
International Journal of Mobile Human Computer Interaction | 2013
Matthias Baldauf; Peter Fröhlich; Jasmin Buchta; Theresa Stürmer
Today’s smartphones provide the technical means to serve as interfaces for public displays in various ways. Even though recent research has identified several new approaches for mobile-display interaction, inter-technique comparisons of the respective methods are scarce. The authors conducted an experimental user study on four currently relevant mobile-display interaction techniques (‘Touchpad’, ‘Pointer’, ‘Mini Video’, and ‘Smart Lens’) and learned that their suitability strongly depends on the task and use case at hand. The study results indicate that mobile-display interactions based on a traditional touchpad metaphor are time-consuming but highly accurate in standard target acquisition tasks. The direct interaction techniques Mini Video and Smart Lens had comparably good completion times, and Mini Video in particular appeared to be best suited for complex visual manipulation tasks like drawing. Smartphone-based pointing turned out to be generally inferior to the other alternatives. Examples of applying these differentiated results to real-world use cases are provided.
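For readers unfamiliar with the touchpad metaphor mentioned above, the short Python sketch below maps relative finger movement on the handset to a cursor on the large display via a gain factor, clamped to the display bounds. The class name, gain value, and coordinate handling are illustrative assumptions rather than details of the studied prototype.

class TouchpadCursor:
    def __init__(self, display_w, display_h, gain=2.5):
        self.w, self.h, self.gain = display_w, display_h, gain
        self.x, self.y = display_w / 2, display_h / 2   # start centred
        self.last = None

    def on_touch(self, tx, ty):
        """Feed raw touch coordinates; returns the updated cursor position."""
        if self.last is not None:
            dx, dy = tx - self.last[0], ty - self.last[1]
            self.x = min(max(self.x + self.gain * dx, 0), self.w)
            self.y = min(max(self.y + self.gain * dy, 0), self.h)
        self.last = (tx, ty)
        return self.x, self.y

    def on_release(self):
        self.last = None   # next touch continues from the current cursor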
Human Factors in Computing Systems | 2010
Peter Froehlich; Raimund Schatz; Peter Leitner; Stephan Mantler; Matthias Baldauf
This paper reflects on the currently observable evolution of in-vehicle information systems towards realistic visualization. Compared to common schematic maps, high-fidelity visualizations might support easier recognition of the outside world and therefore better contribute to driving safety. On the other hand, too much visual detail might distract from the primary driving task. We present an experimental car-simulator study with 28 users, in which the in-car HMI was systematically manipulated with regard to the representation of the outside world. The results show that perceived safety is significantly higher with 1:1 realistic views than with conventional schematic styles, despite higher visual complexity. Furthermore, we found that the more demanding the safety recommendation on the HMI, the more a realistic visualization is perceived as a valuable support.