
Publication


Featured research published by Susanne Boll.


Tangible and Embedded Interaction | 2008

Gesture recognition with a Wii controller

Thomas Schlömer; Benjamin Poppinga; Niels Henze; Susanne Boll

In many applications today, user interaction is moving away from mouse and pen and is becoming pervasive and much more physical and tangible. New emerging interaction technologies allow developing and experimenting with new interaction methods on the long road toward intuitive human-computer interaction. In this paper, we aim at recognizing gestures to interact with an application and present the design and evaluation of our sensor-based gesture recognition. As input device we employ the Wii controller (Wiimote), which recently gained much attention worldwide. We use the Wiimote's acceleration sensor, independent of the gaming console, for gesture recognition. The system allows users to train arbitrary gestures, which can then be recalled for interacting with systems such as photo browsing on a home TV. The developed library exploits Wii sensor data and employs a hidden Markov model for training and recognizing user-chosen gestures. Our evaluation shows that we can already recognize gestures with a small number of training samples. In addition to the gesture recognition, we also present our experiences with the Wii controller and the implementation of the gesture recognition. The system forms the basis for our ongoing work on multimodal intuitive media browsing and is available to other researchers in the field.
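The recognition step described above can be sketched as scoring a quantized acceleration sequence against one discrete HMM per gesture with the forward algorithm. This is only an illustrative toy, assuming a tiny three-symbol codebook and made-up model parameters; it is not the paper's actual library, which trains the HMMs from user-recorded samples:

```python
# Hedged sketch: classify a quantized acceleration sequence by scoring it
# against one discrete HMM per gesture with the forward algorithm.
# Gesture names and all model parameters below are illustrative only.

def forward_likelihood(obs, start, trans, emit):
    """Probability of an observation sequence under a discrete HMM."""
    alpha = [start[s] * emit[s][obs[0]] for s in range(len(start))]
    for o in obs[1:]:
        alpha = [
            sum(alpha[q] * trans[q][s] for q in range(len(start))) * emit[s][o]
            for s in range(len(start))
        ]
    return sum(alpha)

# Two toy 2-state HMMs over a codebook of 3 acceleration symbols.
models = {
    "circle": ([0.9, 0.1],
               [[0.7, 0.3], [0.3, 0.7]],
               [[0.8, 0.1, 0.1], [0.1, 0.8, 0.1]]),
    "shake":  ([0.5, 0.5],
               [[0.5, 0.5], [0.5, 0.5]],
               [[0.1, 0.1, 0.8], [0.1, 0.8, 0.1]]),
}

def classify(obs):
    """Pick the gesture whose HMM assigns the sequence highest probability."""
    return max(models, key=lambda g: forward_likelihood(obs, *models[g]))
```

In the real system a codebook step (e.g. vector quantization of the raw 3D acceleration vectors) would precede this scoring, and the per-gesture models would be fit with Baum-Welch rather than hand-set.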


Nordic Conference on Human-Computer Interaction | 2008

Tactile wayfinder: a non-visual support system for wayfinding

Wilko Heuten; Niels Henze; Susanne Boll; Martin Pielot

Digital maps and route descriptions on a PDA have become very popular for navigation, not least with the advent of the iPhone and its Google Maps application. Visual support for wayfinding, however, is not always reasonable or even possible. A pedestrian must pay attention to traffic on the street, a hiker should concentrate on the narrow trail, and a blind person relies on other modalities to find her way. To overcome these limitations, we developed non-visual support for wayfinding that guides and keeps a mobile user en route via a tactile display. We designed a belt with vibrators that indicates directions and deviations from the path in an accurate and unobtrusive way. Our first user evaluation showed that on an open field without any landmarks, the participants kept well to the given test routes, and that wayfinding support is possible with our Tactile Wayfinder.
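The direction-indication step can be sketched as follows: given the bearing from the user to the next waypoint and the user's own heading, choose which vibrator on an N-tactor belt to drive. The tactor count and the front-facing layout are assumptions for illustration, not the paper's actual hardware:

```python
# Hedged sketch: select the vibrator on an N-tactor waist belt that points
# toward the next waypoint. Tactor 0 is assumed to sit at the front and
# tactors are assumed evenly spaced clockwise; both are illustrative choices.

def tactor_for_waypoint(bearing_deg, heading_deg, n_tactors=8):
    # Bearing relative to the direction the user is facing (0 = straight ahead).
    relative = (bearing_deg - heading_deg) % 360.0
    sector = 360.0 / n_tactors
    # Round to the nearest tactor sector, wrapping around at the front.
    return int((relative + sector / 2) // sector) % n_tactors
```

Deviations from the route could be signaled the same way, by vibrating the tactor pointing back toward the path.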


IEEE MultiMedia | 2007

MultiTube--Where Web 2.0 and Multimedia Could Meet

Susanne Boll

Web 2.0 is an area that has gained much attention recently, especially with Google's acquisition of YouTube. Given the strong focus on media in many Web 2.0 applications, from a multimedia perspective the question arises what multimedia (research) and Web 2.0 have in common, where the two fields meet, and how they can benefit each other.


ACM Multimedia | 1999

A cross-media adaptation strategy for multimedia presentations

Susanne Boll; Wolfgang Klas; Jochen Wandel

Adaptation techniques for multimedia presentations are mainly concerned with switching between different qualities of single media elements to reduce the data volume and thereby adapt to limited presentation resources. This kind of adaptation, however, is limited by an inherent lower bound, i.e., the lowest acceptable technical quality of the respective media type. To overcome this limitation, we propose cross-media adaptation, in which the presentation alternatives can be media elements of a different media type, or even different fragments. The alternatives can thus vary widely in media type and data volume, which greatly widens the possibilities to efficiently adapt to the current presentation resources. However, the adapted presentation must still convey the same content as the original one; hence, the substitution of media elements and fragments must preserve the presentation semantics. Therefore, our cross-media adaptation strategy provides models for the automatic augmentation of multimedia documents with semantically equivalent presentation alternatives. Additionally, during presentation, substitution models enforce a semantically correct information flow in case of dynamic adaptation to varying presentation resources. The cross-media adaptation strategy allows for flexible reuse of multimedia content in many different environments and, at the same time, maintains a semantically correct information flow of the presentation.
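The selection step of such an adaptation can be sketched as: each slot in a presentation carries semantically equivalent alternatives, possibly of different media types, and for the current resource budget the richest alternative that still fits is chosen. The field names, bandwidth figures, and richest-first ranking below are illustrative assumptions, not the paper's actual models:

```python
# Hedged sketch of cross-media alternative selection. Each slot lists
# semantically equivalent alternatives sorted richest-first as
# (label, required_kbps); we pick the first one fitting the budget and
# fall back to the cheapest alternative if nothing fits.

def adapt(slots, bandwidth_kbps):
    chosen = []
    for alternatives in slots:
        fitting = [a for a in alternatives if a[1] <= bandwidth_kbps]
        chosen.append(fitting[0] if fitting
                      else min(alternatives, key=lambda a: a[1]))
    return [label for label, _ in chosen]

# Illustrative presentation with two slots of cross-media alternatives.
slots = [
    [("video-high", 800), ("video-low", 300), ("image+audio", 120), ("text", 5)],
    [("animation", 400), ("image-sequence", 150), ("text", 5)],
]
```

Under a tight budget this substitutes across media types (e.g. an image plus audio commentary instead of a video clip) rather than merely degrading the video further.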


International Journal of Mobile Human Computer Interaction | 2011

My App is an Experiment: Experience from User Studies in Mobile App Stores

Susanne Boll; Niels Henze; Martin Pielot; Benjamin Poppinga; Torben Schinke

Experiments are a cornerstone of HCI research. Mobile distribution channels such as Apple's App Store and Google's Android Market have created the opportunity to bring experiments to the end user. Hardly any experience exists on how to conduct such experiments successfully. This article reports on five experiments that were conducted by publishing apps in the Android Market. The apps are freely available and have been installed more than 30,000 times. The outcomes of the experiments range from failure to valuable insights. Based on these outcomes, the authors identified factors that account for the success of experiments using mobile application stores. When generalizing findings, it must be considered that smartphone users are a non-representative sample of the world's population. Most participants can be obtained by informing users about the study when the app is started for the first time. Because apps are often used for a short time only, data should be collected as early as possible. To collect valuable qualitative feedback, channels other than user comments and email have to be used. Finally, the interpretation of collected data has to consider unpredicted usage patterns to provide valid conclusions.


International Conference on Computers for Handicapped Persons | 2004

AccesSights – A Multimodal Location-Aware Mobile Tourist Information System

Palle Klante; Jens Krösche; Susanne Boll

With recent developments in mobile devices such as personal digital assistants (PDAs), the use of mobile applications in many areas of everyday life is increasing. New applications support mobile users with location-aware information. But today's systems are not usable for all: various barriers still exist for blind and visually impaired people. This user group does not receive the same information as sighted users. AccesSights overcomes these barriers by supporting both user groups with the same information. To meet the different user requirements, we designed a multimodal user interface that supports the different user groups, each in a suitable fashion. The introduced AccesSights system is based on our highly flexible and modular Niccimon platform.


IEEE Transactions on Knowledge and Data Engineering | 2001

ZYX: a multimedia document model for reuse and adaptation of multimedia content

Susanne Boll; Wolfgang Klas

Advanced multimedia applications require adequate support for the modeling of multimedia content by multimedia document models. More and more, this support calls not only for the adequate modeling of the temporal and spatial course of a multimedia presentation and its interactions, but also for the partial reuse of multimedia documents and adaptation to a given user context. However, our thorough investigation of existing standards for multimedia document models such as HTML, MHEG, SMIL, and HyTime leads us to the conclusion that these standard models do not provide sufficient modeling support for reuse and adaptation. Therefore, we propose a new approach for the modeling of adaptable and reusable multimedia content, the ZYX model. The model offers primitives that provide, beyond the more or less common primitives for temporal, spatial, and interaction modeling, versatile support for reuse of structure and layout of document fragments and for the adaptation of the content and its presentation to the user context. We present the model in detail and illustrate the application and effectiveness of these concepts with samples taken from our Cardio-OP application in the domain of cardiac surgery. With the ZYX model, we developed a comprehensive means for advanced multimedia content creation: support for template-driven authoring of multimedia content and support for flexible, dynamic composition of multimedia documents customized to the user's local context and needs. The approach significantly impacts and supports the authoring process in terms of methodology and economic aspects.


ACM Multimedia | 2007

Semantics, content, and structure of many for the creation of personal photo albums

Susanne Boll; Philipp Sandhaus; Ansgar Scherp; Utz Westermann

Photos are often a means to remember personal events, and the creation of photo albums is the attempt to preserve our memories in a nice book. For a long time, people have created such photo albums on the basis of prints from analog photos, arranged in an album book with scissors and glue and annotated with comments and captions - a tedious task which these days is supported by authoring tools and digitally mastered photo books. Relying on the content of others, such as printed travel guides, newspapers, and leaflets, but also on friends and family, personal content has often been enriched, enhanced, and completed. This is the starting point of our work: with digital photography and the increasing amount of content-based and contextual metadata of personal photos, we can now use this metadata to support the targeted, semi-automatic inclusion of interesting, related information from the content of others, e.g., from Web 2.0 communities, and offer and add it at the right spot in the personal album. In this paper, we show how photo album creation can benefit from leveraging information learned from many users with regard to the album's content, structure, and semantics.


IEEE Pervasive Computing | 2014

Sensor-Based Identification of Opportune Moments for Triggering Notifications

Benjamin Poppinga; Wilko Heuten; Susanne Boll

Today's smartphones will issue a notification immediately after an event occurs, repeating unanswered notifications at fixed time intervals. The disadvantage of this issue-and-repeat strategy is that notifications can appear in inconvenient situations and thus are perceived as annoying and interrupting. The authors study the mobile context as inferred through a phone's sensors for both answered and ignored notifications. They conducted a large-scale, longitudinal study via the Google Play store and observed 6,581 notifications from 79 different users over 76 days. Their derived model can predict opportune moments to issue notifications with approximately 77 percent accuracy. Their findings could lead to intelligent strategies to issue unobtrusive notifications on today's smartphones at no extra cost. This article is part of a special issue on managing attention.
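The prediction task described above can be sketched as a classifier over discretized sensor features that labels a candidate moment as answered or ignored. The features, training rows, and the choice of a tiny Naive Bayes model below are illustrative assumptions; the paper's actual feature set and model differ:

```python
# Hedged sketch: a tiny Naive Bayes model over discretized phone-sensor
# features (screen state, motion) predicting whether a notification issued
# now would be answered. All features and rows here are made up.

from collections import Counter, defaultdict

def train(rows):
    """rows: list of (feature_tuple, label), label in {'answered', 'ignored'}."""
    label_counts = Counter(label for _, label in rows)
    feat_counts = defaultdict(Counter)  # (feature_index, label) -> value counts
    for feats, label in rows:
        for i, v in enumerate(feats):
            feat_counts[(i, label)][v] += 1
    return label_counts, feat_counts

def predict(model, feats):
    label_counts, feat_counts = model
    total = sum(label_counts.values())
    best, best_p = None, -1.0
    for label, c in label_counts.items():
        p = c / total
        for i, v in enumerate(feats):
            counts = feat_counts[(i, label)]
            # Laplace smoothing, assuming two values per feature.
            p *= (counts[v] + 1) / (sum(counts.values()) + 2)
        if p > best_p:
            best, best_p = label, p
    return best

rows = [
    (("screen_on", "still"), "answered"),
    (("screen_on", "still"), "answered"),
    (("screen_on", "moving"), "answered"),
    (("screen_off", "moving"), "ignored"),
    (("screen_off", "moving"), "ignored"),
]
model = train(rows)
```

An issue strategy could then defer a notification until `predict` first reports an opportune moment, instead of repeating it on a fixed schedule.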


Nordic Conference on Human-Computer Interaction | 2006

Interactive 3D sonification for the exploration of city maps

Wilko Heuten; Daniel Wichmann; Susanne Boll

Blind or visually impaired people usually do not leave their homes without assistance to visit unknown cities or places. One reason for this dilemma is that it is hardly possible for them to gain a non-visual overview of a new place, its landmarks, and its geographic entities while still at home. Sighted people can use a printed or digital map to perform this task. Existing haptic and acoustic approaches do not yet provide an economic way to mediate the understanding of a map and the relations between objects, such as distance, direction, and object size. We provide an interactive three-dimensional sonification interface for exploring city maps. A blind person can build a mental model of an area's structure by virtually exploring an auditory map at home. Geographic objects and landmarks are presented by sound areas placed within a sound room. Each type of object is associated with a different sound and can therefore be identified. By investigating the auditory map, the user gains an idea of the various objects, their directions, and their relative distances. First user tests show that users are able to reproduce a sonified city map that comes close to the original visual city map. With our approach, exploring a map with non-speech sound areas provides a new user interface metaphor that offers potential not only for blind and visually impaired persons but also for applications for sighted users.
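The spatial-rendering idea can be sketched as deriving a stereo pan from an object's direction and a gain from its distance relative to the listener. The linear roll-off and the sine pan law below are illustrative simplifications; the paper's actual 3D audio rendering is more elaborate:

```python
# Hedged sketch: place a map object in a virtual sound room by computing a
# stereo pan and a distance-based gain for its sound source. Both mappings
# are simple illustrative choices, not the paper's rendering pipeline.

import math

def pan_and_gain(listener_xy, object_xy, max_distance=100.0):
    dx = object_xy[0] - listener_xy[0]
    dy = object_xy[1] - listener_xy[1]
    distance = math.hypot(dx, dy)
    # Angle 0 = straight ahead (positive y); pan in [-1 left, +1 right].
    angle = math.atan2(dx, dy)
    pan = max(-1.0, min(1.0, math.sin(angle)))
    # Linear roll-off: silent at max_distance and beyond.
    gain = max(0.0, 1.0 - distance / max_distance)
    return pan, gain
```

Each object type would additionally be assigned its own characteristic sound so that, as in the paper, objects remain identifiable while their pan and gain convey direction and distance.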
