Frank Honold
University of Ulm
Publication
Featured research published by Frank Honold.
australasian computer-human interaction conference | 2012
Frank Honold; Felix Schüssel; Michael Weber
Human beings continuously adapt their way of communication to their surroundings and their communication partner. Although context-aware ubiquitous systems gather a lot of information to maximize their functionality, they predominantly use static ways to communicate. In order to fulfill the user's communication needs and demands, the sensors' diverse and sometimes uncertain information must also be used to dynamically adapt the user interface. In this article we present ProFi, a system for Probabilistic Fission, designed to reason on adaptive and multimodal output based on uncertain or ambiguous data. In addition, we present a system architecture as well as a new meta model for multimodal interactive systems. Based on this meta model, we describe ProFi's process of multimodal fission along with our current implementation.
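The core idea of fission over uncertain context can be illustrated as expected-utility maximization: pick the output configuration that scores best when suitability is weighted by the probability of each context state. A minimal sketch, with invented context states, configurations, and scores (none are from the paper):

```python
# Hypothetical sketch of probabilistic fission: choose the output modality
# configuration with the highest expected suitability under an uncertain
# context belief. All names and numbers are illustrative assumptions.

context = {"user_nearby": 0.8, "user_distant": 0.2}  # uncertain context belief

# suitability[config][state] -- made-up scores for illustration
suitability = {
    ("screen",):          {"user_nearby": 0.9, "user_distant": 0.2},
    ("speech",):          {"user_nearby": 0.4, "user_distant": 0.8},
    ("screen", "speech"): {"user_nearby": 0.7, "user_distant": 0.7},
}

def expected_utility(config):
    # weight each state's suitability by the probability of that state
    return sum(p * suitability[config][s] for s, p in context.items())

best = max(suitability, key=expected_utility)
# with these numbers, a purely visual presentation wins: best == ("screen",)
```

The same structure extends naturally to richer context models; only the belief distribution and the scoring table change.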
intelligent environments | 2014
Frank Honold; Pascal Bercher; Felix Richter; Florian Nothdurft; Thomas Geier; Roland Barth; Thilo Hörnle; Felix Schüssel; Stephan Reuter; Matthias Rau; Gregor Bertrand; Bastian Seegebarth; Peter Kurzok; Bernd Schattenberg; Wolfgang Minker; Michael Weber; Susanne Biundo
The properties of multimodality, individuality, adaptability, availability, cooperativeness and trustworthiness are at the focus of the investigation of Companion Systems. In this article, we describe the involved key components of such a system and the way they interact with each other. Along with the article comes a video, in which we demonstrate a fully functional prototypical implementation and explain the involved scientific contributions in a simplified manner. The realized technology considers the entire situation of the user and the environment in current and past states. The gained knowledge reflects the context of use and serves as basis for decision-making in the presented adaptive system.
MPRSS'12 Proceedings of the First international conference on Multimodal Pattern Recognition of Social Signals in Human-Computer-Interaction | 2012
Felix Schüssel; Frank Honold; Michael Weber
Systems with multimodal interaction capabilities have gained a lot of attention in recent years. Especially so-called companion systems that offer an adaptive, multimodal user interface show great promise for natural human-computer interaction. While more and more sophisticated sensors become available, current systems capable of accepting multimodal inputs (e.g. speech and gesture) still lack the robustness of input interpretation needed for companion systems. We demonstrate how evidential reasoning can be applied in the domain of graphical user interfaces in order to provide the reliability and robustness expected by users. For this purpose, an existing approach from the robotics domain using the Transferable Belief Model is adapted and extended.
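The Transferable Belief Model combines evidence with the unnormalized conjunctive rule, which assigns conflicting mass to the empty set instead of renormalizing. A minimal sketch (the modalities and GUI targets are invented examples, not taken from the paper):

```python
# Sketch of the TBM's unnormalized conjunctive combination rule.
# A mass function maps subsets of the frame of discernment to masses.
from itertools import product

def conjunctive_combine(m1, m2):
    """Combine two mass functions; conflict accumulates on frozenset()."""
    combined = {}
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b  # intersection of the two focal sets
        combined[inter] = combined.get(inter, 0.0) + wa * wb
    return combined

# Two input modalities expressing belief over GUI targets {button, slider}
speech  = {frozenset({"button"}): 0.7, frozenset({"button", "slider"}): 0.3}
gesture = {frozenset({"slider"}): 0.4, frozenset({"button", "slider"}): 0.6}

fused = conjunctive_combine(speech, gesture)
# fused[frozenset()] is the conflict mass (0.28 here): a built-in signal
# that the speech and gesture inputs partially contradict each other
```

The mass on the empty set is what makes this attractive for robust input interpretation: a large conflict mass indicates contradictory sensor inputs before any decision is made.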
international conference on multimodal interfaces | 2014
Felix Schüssel; Frank Honold; Miriam Schmidt; Nikola Bubalo; Anke Huckauf; Michael Weber
Multimodal systems still tend to ignore the individual input behavior of users and, at the same time, suffer from erroneous sensor inputs. Although many researchers have described user behavior in specific settings and tasks, little to nothing is known about the applicability of such information when it comes to increasing the robustness of a system to multimodal inputs. We conducted a gamified experimental study to investigate individual user behavior and error types found in an actual running system. It is shown that previous ways of describing input behavior with a simple classification scheme (like simultaneous vs. sequential) are not suited to building up an individual interaction history. Instead, we propose to use temporal distributions of different metrics derived from multimodal event timings. We identify the major errors that can occur in multimodal interactions and finally show how such an interaction history can practically be applied for error detection and recovery. Applying the proposed approach to the experimental data reduces the initial error rate from 4.9% to a minimum of 1.2%.
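One way to realize such a temporal interaction history is to learn, per user, the distribution of time offsets between paired multimodal events and flag pairings that fall far outside it. A minimal sketch under that assumption (the class, thresholding rule, and timings are illustrative, not the paper's actual method):

```python
# Hypothetical sketch: per-user temporal interaction history for
# detecting implausible speech/gesture pairings via an outlier test.
import statistics

class TemporalHistory:
    def __init__(self, k=2.0):
        self.offsets = []  # observed gesture-minus-speech time offsets
        self.k = k         # tolerance in standard deviations

    def record(self, speech_t, gesture_t):
        self.offsets.append(gesture_t - speech_t)

    def is_plausible(self, speech_t, gesture_t):
        if len(self.offsets) < 2:
            return True  # not enough history yet to judge
        mu = statistics.mean(self.offsets)
        sigma = statistics.stdev(self.offsets)
        return abs((gesture_t - speech_t) - mu) <= self.k * sigma

h = TemporalHistory()
for s, g in [(0.0, 0.3), (1.0, 1.25), (2.0, 2.35), (3.0, 3.28)]:
    h.record(s, g)

h.is_plausible(4.0, 4.31)  # offset near the learned mean: plausible
h.is_plausible(4.0, 6.5)   # offset far outside the history: flagged
```

A flagged pairing would then trigger recovery, e.g. treating the two events as separate unimodal inputs instead of one fused command.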
intelligent environments | 2013
Frank Honold; Felix Schüssel; Michael Weber; Florian Nothdurft; Gregor Bertrand; Wolfgang Minker
This article presents a context-adaptive approach to multimodal interaction for use in cognitive technical systems, so-called companion systems. We present a system architecture and use a layered context model to clarify where context awareness occurs on different levels. The focus is on dialog management, multimodal fusion, and multimodal fission as the main participants in interaction. An implemented prototype is presented, yielding concrete instances of the described context models and the adaptation to them.
Neurocomputing | 2015
Michael Glodek; Frank Honold; Thomas Geier; Gerald Krell; Florian Nothdurft; Stephan Reuter; Felix Schüssel; Thilo Hörnle; Klaus Dietmayer; Wolfgang Minker; Susanne Biundo; Michael Weber; Günther Palm; Friedhelm Schwenker
Recent trends in human-computer interaction (HCI) show a development towards cognitive technical systems (CTS) that provide natural and efficient operating principles. To do so, a CTS has to rely on data from multiple sensors, which must be processed and combined by fusion algorithms. Furthermore, additional sources of knowledge have to be integrated to put the observations made into the correct context. Research in this field often focuses on optimizing the performance of the individual algorithms rather than reflecting the requirements of CTS. This paper presents the information fusion principles in CTS architectures we developed for Companion Technologies. Combination of information generally goes along with the level of abstraction, time granularity, and robustness, such that large CTS architectures must perform fusion gradually on different levels, starting from sensor-based recognition up to highly abstract logical inference. In our CTS application we divide information fusion approaches into three categories: perception-level fusion, knowledge-based fusion, and application-level fusion. For each category, we introduce examples of characteristic algorithms. In addition, we provide a detailed protocol of the implementation, performed in order to study the interplay of the developed algorithms.
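At the perception level, a common fusion pattern is combining posterior distributions from independent recognizers. A minimal sketch using log-linear (weighted geometric) pooling; the recognizers, class labels, weights, and numbers are invented for illustration and not taken from the paper:

```python
# Illustrative perception-level fusion: pool posteriors from two
# independent classifiers (e.g. audio- and video-based recognizers)
# by weighted log-linear pooling, then renormalize.
import math

def log_linear_pool(posteriors, weights):
    classes = posteriors[0].keys()
    scores = {c: math.prod(p[c] ** w for p, w in zip(posteriors, weights))
              for c in classes}
    z = sum(scores.values())  # renormalize to a proper distribution
    return {c: s / z for c, s in scores.items()}

audio = {"neutral": 0.6, "angry": 0.4}
video = {"neutral": 0.3, "angry": 0.7}
fused = log_linear_pool([audio, video], weights=[0.5, 0.5])
# the video classifier's stronger evidence tips the fused result to "angry"
```

Knowledge-based and application-level fusion operate on more abstract symbols and longer time scales, but the same principle of weighting sources by reliability carries over.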
Journal on Multimodal User Interfaces | 2013
Felix Schüssel; Frank Honold; Michael Weber
When developing multimodal interactive systems, it is not clear which importance should be given to which modality. In order to study factors influencing multimodal interaction, we conducted a Wizard of Oz study on a basic recurrent task: 53 subjects performed diverse selections of objects on a screen. The way and modality of interaction were neither specified nor predefined by the system, and the users were free in how and what to select. Natural input modalities like speech, gestures, touch, and arbitrary multimodal combinations of these were recorded as dependent variables. As independent variables, subjects' gender, personality traits, and affinity towards technical devices were surveyed, as well as the system's varying presentation styles for the selection. Our statistical analyses reveal gender as a significant influencing factor and point out the role of individuality in the way of interaction, while the influence of the system output seems to be quite limited. This knowledge about the prevalent task of selection will be useful for designing effective and efficient multimodal interactive systems across a wide range of applications and domains.
intelligent environments | 2014
Frank Honold; Felix Schüssel; Michael Weber
Present context-aware systems gather a lot of information to maximize their functionality, but they predominantly use rather static ways to communicate. This paper motivates two components that serve as mediators between arbitrary components for multimodal fission and fusion, aiming to improve the system's communication capabilities. Along with an exemplary selection scenario, we describe the architecture for an automatic cooperation of fusion and fission in a model-driven realization. We describe how the approach supports user-initiative dialog requests as well as user-nominated UI configuration. Beyond that, we show how multimodal input conflicts can be resolved using a shortcut in the commonly used human-computer interaction loop (HCI loop).
automotive user interfaces and interactive vehicular applications | 2009
Guido M. de Melo; Frank Honold; Michael Weber; Mark Poguntke; André Berton
In this paper we present an approach for creating user interfaces from abstract representations for the automotive domain. The approach is based on transformations between different user interface abstraction levels. Existing user interface representation methods are presented and evaluated. The impact of specific requirements for automotive human-machine interaction is discussed. Considering these requirements, a process based on transformation rules is outlined to allow for the flexible integration of external infotainment applications coming from mobile devices or web sources into the in-car interaction environment.
computer software and applications conference | 2011
Gregor Bertrand; Florian Nothdurft; Frank Honold; Felix Schüssel
In the research area of spoken language dialogue systems there are many ways of modeling dialogues. A dialogue model's particular structure depends on the algorithm used to interpret it. In most cases, a dialogue model is quite difficult to understand and to create. We present a novel technique for modeling dialogues, based on ready-to-use open-source tools, in an easy and understandable way. Using our approach, a dialogue designer (unfamiliar with the internals of the dialogue manager) can easily develop even complex and adaptive dialogues. The dialogues are then ready to be interpreted by the dialogue management in order to integrate them seamlessly into the spoken language dialogue system.