Tobias Heinroth
University of Ulm
Publications
Featured research published by Tobias Heinroth.
Pervasive Computing and Communications | 2010
Tobias Heinroth; Dan Denich; Alexander Schmitt
In this paper we describe the OwlSpeak dialogue manager, which enables adaptive spoken dialogue within Intelligent Environments. Using an ontology-based model that defines both the dialogue domain and the dialogue state, we generate VoiceXML dialogue snippets turn by turn during the ongoing dialogue. We define dialogues to be adaptive with respect to devices and services within the Intelligent Environment, to events such as alerts that occur unexpectedly, and to tasks, which may change and/or evolve during an ongoing human-system negotiation. After presenting a scenario that reflects the capabilities and challenges of the system, we analyse an exemplary spoken dialogue. The focus of this publication lies on the underlying model, the Spoken Dialogue Ontology. We describe the logical framework of this ontology in detail and give an outlook on future work.
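The abstract does not reproduce a snippet, so purely as a hedged illustration of the turn-wise generation idea, the following Python sketch renders one hypothetical ontology-derived system move into a minimal one-turn VoiceXML form. The class name, slot, prompt and options are invented for the example and are not taken from OwlSpeak itself.

```python
# Illustrative sketch only: render one ontology-derived system move into a minimal
# one-turn VoiceXML snippet. All names and values are assumptions for the example.
from dataclasses import dataclass
from typing import List

@dataclass
class Move:
    field: str            # semantic slot this turn tries to fill
    prompt: str           # system utterance for the turn
    options: List[str]    # utterances the recognizer should accept

def render_turn(move: Move) -> str:
    """Return a minimal one-turn VoiceXML document for the given system move."""
    option_tags = "\n      ".join(f"<option>{o}</option>" for o in move.options)
    return f"""<?xml version="1.0" encoding="UTF-8"?>
<vxml version="2.1">
  <form id="{move.field}_form">
    <field name="{move.field}">
      <prompt>{move.prompt}</prompt>
      {option_tags}
    </field>
  </form>
</vxml>"""

if __name__ == "__main__":
    print(render_turn(Move("temperature",
                           "Which temperature should I set the heating to?",
                           ["twenty", "twenty-two", "twenty-four"])))
```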
Computer Software and Applications Conference | 2011
Tobias Heinroth; Dan Denich
Within the framework of the EU-funded project ATRACO we have implemented the adaptive Spoken Dialogue Manager OwlSpeak. Its most important feature is the ability to pause, resume, and switch between multiple interactive tasks. These features make the deployed Spoken Dialogue System (SDS) capable of multitasking. In this paper we investigate the effects of multitasking on users within the context of spoken dialogues. Especially within Intelligent Environments such as ATRACO, it is necessary to modify the status of the SDS depending on the current context and the requirements the user may have. We have conducted an evaluation to investigate how users cope with the multitasking capabilities of an SDS. The results are two-fold: on the one hand, users who were engaged in multitasking dialogues were more inclined to interact with the SDS. On the other hand, subjects who received one task after another serially were able to retain more facts than subjects who used the dynamic multitasking approach. Taking these results into account, we conclude the paper with an outline of future work.
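As a minimal sketch of the pause/resume/switch behaviour described above (not the OwlSpeak implementation; task names and prompts are invented), a multitasking dialogue manager can simply keep each task's position so that switching away pauses it implicitly and switching back resumes it:

```python
# Minimal sketch of multitasking dialogue control: each task remembers its own
# position, so switching to another task pauses it and switching back resumes it.
class DialogueTask:
    def __init__(self, name, turns):
        self.name, self.turns, self.pos = name, turns, 0   # pos = next pending turn
    def next_prompt(self):
        return self.turns[self.pos] if self.pos < len(self.turns) else None
    def advance(self):
        self.pos += 1

class MultitaskingManager:
    def __init__(self):
        self.tasks, self.active = {}, None
    def add(self, task):
        self.tasks[task.name] = task
    def switch_to(self, name):
        # Pausing is implicit: the previous task keeps its position for later resumption.
        self.active = self.tasks[name]
    def step(self):
        prompt = self.active.next_prompt()
        if prompt is not None:
            self.active.advance()
        return prompt

mgr = MultitaskingManager()
mgr.add(DialogueTask("cooking", ["Preheat the oven.", "Add the flour."]))
mgr.add(DialogueTask("alert", ["The doorbell just rang. Shall I open the door?"]))
mgr.switch_to("cooking"); print(mgr.step())
mgr.switch_to("alert");   print(mgr.step())   # interrupting event
mgr.switch_to("cooking"); print(mgr.step())   # resume where we left off
```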
Annual Meeting of the Special Interest Group on Discourse and Dialogue | 2009
Alexander Schmitt; Tobias Heinroth; Jackson Liscombe
Most studies on speech-based emotion recognition are based on prosodic and acoustic features and employ only artificial, acted corpora, so their results cannot be generalized to telephone-based speech applications. In contrast, we present an approach based on utterances from 1,911 calls to a deployed telephone-based speech application, taking advantage of additional dialogue features, NLU features and ASR features that are incorporated into the emotion recognition process. Depending on the task, non-acoustic features add 2.3% in classification accuracy compared to using acoustic features alone.
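The general recipe of combining acoustic features with dialogue, NLU and ASR features before classification can be sketched as below. The feature names, synthetic data and classifier are assumptions for illustration only; they do not reproduce the paper's feature set or the reported 2.3% gain.

```python
# Illustrative sketch: concatenate acoustic and non-acoustic feature vectors and
# compare classifiers trained on each. Data here is synthetic, so the scores only
# demonstrate the pipeline, not the results reported in the paper.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 200
acoustic = rng.normal(size=(n, 4))            # e.g. pitch/energy statistics (assumed)
dialogue = rng.integers(0, 2, size=(n, 2))    # e.g. re-prompt flag, barge-in flag (assumed)
asr_nlu  = rng.uniform(size=(n, 2))           # e.g. ASR confidence, NLU parse score (assumed)
y        = rng.integers(0, 2, size=n)         # angry vs. non-angry label

X_acoustic = acoustic
X_combined = np.hstack([acoustic, dialogue, asr_nlu])

clf = RandomForestClassifier(n_estimators=100, random_state=0)
print("acoustic only:", cross_val_score(clf, X_acoustic, y, cv=5).mean())
print("combined:     ", cross_val_score(clf, X_combined, y, cv=5).mean())
```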
Archive | 2011
Stefan Ultes; Tobias Heinroth; Alexander Schmitt; Wolfgang Minker
Dialog strategies have long been handcrafted by dialog experts. Only within the last decade has research moved to data-driven methods leading to statistical models. Still, most dialog systems make use solely of the spoken words and their semantics, although speech signals reveal much more about the speaker, e.g. their age, gender, or emotional state. Using this speaker state information along with the semantics can be a promising way of moving dialog systems towards better performance while making them more natural at the same time. Partially Observable Markov Decision Processes (POMDPs), a state-of-the-art statistical modeling method, offer an easy and unified way of integrating speaker state information into dialog systems. In this contribution we present our ongoing research on combining a POMDP-based dialog manager with speaker state information.
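To make the POMDP angle concrete, the sketch below shows the generic textbook belief update where the observation carries a speaker-state component (e.g. an emotion estimate) alongside the recognized user act. This is not the system described in the contribution; all states, models and numbers are invented.

```python
# Generic POMDP belief update: b'(s') ∝ O(o | s', a) * sum_s T(s' | s, a) * b(s).
# The observation is a pair (user act, speaker-state estimate). Numbers are made up.
def belief_update(belief, action, observation, T, O):
    new_belief = {}
    for s_next in belief:
        predicted = sum(T[(s, action, s_next)] * belief[s] for s in belief)
        new_belief[s_next] = O[(s_next, action, observation)] * predicted
    norm = sum(new_belief.values()) or 1.0
    return {s: p / norm for s, p in new_belief.items()}

states = ["user_satisfied", "user_frustrated"]
belief = {"user_satisfied": 0.7, "user_frustrated": 0.3}
# Toy transition and observation models keyed by (state, action, next_state/observation).
T = {(s, "confirm", s2): (0.8 if s == s2 else 0.2) for s in states for s2 in states}
O = {("user_satisfied", "confirm", ("yes", "neutral")): 0.7,
     ("user_frustrated", "confirm", ("yes", "neutral")): 0.2,
     ("user_satisfied", "confirm", ("yes", "angry")): 0.1,
     ("user_frustrated", "confirm", ("yes", "angry")): 0.6}

# An "angry" speaker-state observation shifts probability mass towards frustration.
print(belief_update(belief, "confirm", ("yes", "angry"), T, O))
```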
Intelligent Decision Technologies | 2011
Tobias Heinroth; Achilles Kameas; Gaëtan Pruvost; Lambrini Seremeti; Yacine Bellik; Wolfgang Minker
In this article we describe our approach towards the specification and realization of human-computer interaction within Next Generation Ambient Intelligent Environments (NGAIEs). These environments are populated with numerous devices and multiple occupants or users. They exhibit increasingly intelligent behaviour, provide optimized resource usage and support consistent functionality and human-centric operation. In our approach, NGAIEs encode local and global knowledge as a set of heterogeneous ontologies, which have to be aligned in order to provide a uniform and consistent knowledge representation. In NGAIEs, humans will interact with their environments seamlessly using multimodal dialogue interaction. To enable such adaptive human-computer interaction, we focus on when and how this knowledge can be modelled and used in order to realize complex, negotiative, and collaborative tasks. The combination of heterogeneous ontologies and ontology matching algorithms allows for semantically rich interaction and information exchange. Based on an agent-based, service-oriented architecture, this combination maximizes the use of available interaction resources while decoupling the interaction specification from interfaces and modalities. We illustrate our approach with a task analysis of a scenario that highlights the challenges of NGAIEs. Finally, we present the ontology prototypes required for the implementation of the scenario.
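The role that ontology alignment plays here, producing a shared vocabulary across heterogeneous device ontologies, can be hinted at with a toy sketch. Real alignment systems (and the ones used in this work) are far more sophisticated; the concept labels and the lexical-similarity matcher below are assumptions purely for illustration.

```python
# Toy ontology alignment: match concepts from two device ontologies by lexical
# similarity of their labels. Illustrative only; not the project's actual algorithm.
from difflib import SequenceMatcher

lamp_ontology = ["LightSource", "Brightness", "PowerState"]
hifi_ontology = ["AudioSource", "Volume", "PowerStatus"]

def align(onto_a, onto_b, threshold=0.6):
    matches = []
    for a in onto_a:
        best = max(onto_b, key=lambda b: SequenceMatcher(None, a.lower(), b.lower()).ratio())
        score = SequenceMatcher(None, a.lower(), best.lower()).ratio()
        if score >= threshold:
            matches.append((a, best, round(score, 2)))
    return matches

# Concepts that align become part of the shared vocabulary used for interaction.
print(align(lamp_ontology, hifi_ontology))
```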
Intelligent Environments | 2010
Florian Nothdurft; Gregor Bertrand; Tobias Heinroth; Wolfgang Minker
In this paper, we describe the development of a dialogue model that integrates emotional dialogue strategies and explanations in a simple yet powerful way. As intelligent environments make inroads into the market, the need for user-friendly interaction with these systems grows. Pro-active reaction to user knowledge and emotions is one of the key points in the user-friendly adaptation of dialogue systems and therefore one of the main topics of research. As intelligent environments grow in complexity and field of application, the knowledge requirements for the user grow as well. It is therefore vitally important to impart knowledge and information in an emotionally sensitive and user-aware way. In our dialogue model we consider the natural structure of a nontrivial dialogue to be divided into several goals. These goals are protected by so-called guards, which represent preconditions that have to be fulfilled in order to tackle the related goal.
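A minimal sketch of the goal/guard structure described above might look as follows; the goal name and guard conditions are invented for the example and are not taken from the paper's dialogue model.

```python
# Minimal sketch: a dialogue goal guarded by precondition predicates that must all
# hold before the goal may be tackled. Names and conditions are illustrative only.
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class Goal:
    name: str
    guards: List[Callable[[Dict], bool]] = field(default_factory=list)

    def is_enabled(self, context: Dict) -> bool:
        return all(guard(context) for guard in self.guards)

explain_feature = Goal(
    "explain_heating_schedule",
    guards=[lambda ctx: ctx.get("user_confused", False),        # an explanation is needed
            lambda ctx: not ctx.get("user_frustrated", False)]  # and emotionally appropriate
)

context = {"user_confused": True, "user_frustrated": False}
if explain_feature.is_enabled(context):
    print("Guards satisfied - the system may pursue:", explain_feature.name)
```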
Intelligent Systems Design and Applications | 2009
Gaëtan Pruvost; Achilles Kameas; Tobias Heinroth; Lambrini Seremeti; Wolfgang Minker
This article describes our approach towards the specification and realization of interoperability within Next Generation Ambient Intelligent Environments (NGAIEs). These are populated with numerous devices and multiple occupants or users; they exhibit increasingly intelligent behaviour, provide optimized resource usage and support consistent functionality and human-centric operation. In NGAIEs, users will interact with their environments using the devices therein, complemented by adaptive multimodal dialogue. This requires the definition of the local and global information relevant to the interaction, as well as mechanisms to share this knowledge among entities. In our approach, knowledge is represented as a set of heterogeneous ontologies which have to be aligned in order to provide a uniform and consistent knowledge representation. The combination of heterogeneous ontologies and ontology matching algorithms allows for semantically rich information exchange. Based on a combination of agent-based and service-oriented architectures, the proposed approach adopts a task-based model to maximize the use of available heterogeneous resources.
Intelligent Agents | 2009
Tobias Heinroth; Achilles Kameas; Christian Wagner; Yacine Bellik
This paper presents an agent-based approach for realizing a new generation of intelligent environments referred to as adaptive ambient ecologies. These are highly distributed systems which require new ways of communication and collaboration in order to support the realization of people's tasks. We use three types of agents for the main functions of planning, adaptation and interaction. The knowledge repository of an ambient ecology is encoded in an ontology that is assembled on demand and made accessible to the agents. In our approach, we introduce methodologies for extending the concept of ontologies beyond the storage of structured data to also include mechanisms for the exchange and manipulation of activity-related information within ecologies.
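The split between the on-demand ontology and the three agent roles can be hinted at with a very small sketch; the device ontologies, role names and merge step below are assumptions for illustration, not the paper's actual architecture.

```python
# Toy sketch: planning, adaptation and interaction agents all consult one ecology
# ontology that is assembled on demand from the devices currently present.
def assemble_ecology_ontology(device_ontologies):
    """Merge the ontologies of all devices currently in the ambient ecology."""
    merged = {}
    for onto in device_ontologies:
        merged.update(onto)
    return merged

class Agent:
    def __init__(self, role, ontology):
        self.role, self.ontology = role, ontology
    def knows(self, concept):
        return concept in self.ontology

lamp  = {"Brightness": "0-100", "PowerState": "on/off"}
blind = {"Position": "0-100"}
ontology = assemble_ecology_ontology([lamp, blind])

agents = [Agent(r, ontology) for r in ("planning", "adaptation", "interaction")]
print(all(agent.knows("Brightness") for agent in agents))  # shared knowledge repository
```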
IET International Conference on Intelligent Environment | 2009
Achilles Kameas; Christos Goumopoulos; Hani Hagras; Victor Callaghan; Tobias Heinroth; Michael Weber
The realization of the vision of ambient intelligence requires developments both at infrastructure and application levels. As a consequence of the former, physical spaces are turned into intelligent AmI environments, which offer not only services such as sensing, digital storage, computing, and networking but also optimization, data fusion, and adaptation. However, despite the large capabilities of AmI environments, people’s interaction with their environment will not cease to be goal-oriented and task-centric. In this chapter, we use the notions of ambient ecology to describe the resources of an AmI environment and activity spheres to describe the specific ambient ecology resources, data and knowledge required to support a user in realizing a specific goal. In order to achieve task-based collaboration among the heterogeneous members of an ambient ecology, one first has to deal with this heterogeneity, while at the same time achieving independence between a task description and its respective realization within a specific AmI environment. Successful execution of tasks depends on the quality of interactions among artifacts and among people and artifacts, as well as on the efficiency of adaptation mechanisms. The formation of a system that realizes adaptive activity spheres is supported by a service-oriented architecture, which uses intelligent agents to support adaptive planning, task realization and enhanced human–machine interaction, ontologies to represent knowledge, and ontology alignment mechanisms to achieve adaptation and device independence. The proposed system supports adaptation at different levels, such as the changing configuration of the ambient ecology, the realization of the same activity sphere in different AmI environments, the realization of tasks in different contexts, and the interaction between the system and the user.
International Conference on Human-Computer Interaction | 2013
Florian Nothdurft; Tobias Heinroth; Wolfgang Minker
Maintaining and enhancing the willingness of a user to interact with a technical system is crucial for human-computer interaction (HCI). Trust has been shown to be an important factor influencing the frequency and kind of usage. In this paper we present our work on using explanations to maintain the trust relationship between human and computer. We conducted an experiment on how different goals of explanations influence the bases of human-computer trust. We present the results of the study and outline what this means for the design of future technical systems, in particular for the central dialogue management component controlling the course and content of the HCI.