Gregor Mehlmann
University of Augsburg
Publications
Featured research published by Gregor Mehlmann.
International Conference on Multimodal Interfaces (ICMI) | 2014
Gregor Mehlmann; Markus Häring; Kathrin Janowski; Tobias Baur; Patrick Gebhard; Elisabeth André
Grounding is an important process that underlies all human interaction. Hence, it is crucial for building social robots that are expected to collaborate effectively with humans. Gaze behavior plays versatile roles in establishing, maintaining and repairing the common ground. Integrating all these roles in a computational dialog model is a complex task, since gaze is generally combined with multiple parallel information modalities and involved in multiple processes for the generation and recognition of behavior. Going beyond related work, we present a modeling approach focusing on these multi-modal, parallel and bi-directional aspects of gaze that need to be considered for grounding, and on their interleaving with dialog and task management. We illustrate and discuss the different roles of gaze as well as the advantages and drawbacks of our modeling approach based on an initial user study with a technically sophisticated shared workspace application with a social humanoid robot.
KSII Transactions on Internet and Information Systems | 2015
Tobias Baur; Gregor Mehlmann; Ionut Damian; Florian Lingenfelser; Johannes Wagner; Birgit Lugrin; Elisabeth André; Patrick Gebhard
The outcome of interpersonal interactions depends not only on the contents that we communicate verbally, but also on nonverbal social signals. Because a lack of social skills is a common problem for a significant number of people, serious games and other training environments have recently become a focus of research. In this work, we present NovA (Nonverbal behavior Analyzer), a system that automatically analyzes and facilitates the interpretation of social signals in a bidirectional interaction with a conversational agent. It records data of interactions, detects relevant social cues, and creates descriptive statistics for the recorded data with respect to the agent's behavior and the context of the situation. This enhances the possibilities for researchers to automatically label corpora of human-agent interactions and to give users feedback on strengths and weaknesses of their social behavior.
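The abstract's pipeline of detecting cues and deriving descriptive statistics can be illustrated with a minimal sketch. The cue labels, log format, and function below are hypothetical illustrations, not taken from NovA itself:

```python
from collections import Counter

# Hypothetical cue log for one recorded interaction:
# (timestamp in seconds, detected social cue) pairs.
cues = [(1.2, "smile"), (3.4, "gesture"), (5.1, "smile"),
        (7.8, "gaze_aversion"), (9.0, "smile")]

def cue_statistics(cues, duration_sec):
    """Descriptive statistics: count and rate (per minute) of each cue."""
    counts = Counter(label for _, label in cues)
    return {label: {"count": n, "per_minute": round(n * 60 / duration_sec, 1)}
            for label, n in counts.items()}

stats = cue_statistics(cues, duration_sec=60.0)
print(stats["smile"])  # {'count': 3, 'per_minute': 3.0}
```

Aggregates of this kind are what would let a researcher compare, say, smile frequency across sessions or give a trainee feedback on nonverbal behavior.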
Intelligent Virtual Agents (IVA) | 2010
Gregor Mehlmann; Markus Häring; René Bühling; Michael Wißner; Elisabeth André
We present the design of a cast of pedagogical agents impersonating different educational roles in an interactive virtual learning environment. Teams of those agents are used to create different learning scenarios in order to provide learners with an engaging and motivating learning experience. Authors can employ an easy-to-use multimodal dialog authoring tool to adapt lecture and dialog content as well as interaction management to meet their respective requirements.
International Conference on Multimodal Interfaces (ICMI) | 2012
Gregor Mehlmann; Elisabeth André
In this paper we present a novel approach to the combined modeling of multimodal fusion and interaction management. The approach is based on a declarative multimodal event logic that allows the integration of inputs distributed over multiple modalities in accordance with spatial, temporal and semantic constraints. In conjunction with a visual statechart language, our approach supports the incremental parsing and fusion of inputs and a tight coupling with interaction management. The incremental and parallel parsing approach allows us to cope with concurrent continuous and discrete interactions and with fusion on different levels of abstraction. The high-level visual and declarative modeling methods support rapid prototyping and iterative development of multimodal systems.
Artificial Intelligence in Education (AIED) | 2015
Ionut Damian; Tobias Baur; Birgit Lugrin; Patrick Gebhard; Gregor Mehlmann; Elisabeth André
Technology-enhanced learning environments are designed to help users practise social skills. In this paper, we present and evaluate a virtual job interview training game which has been adapted to the special requirements of young people with low chances on the job market. The evaluation spanned three days, during which we compared the technology-enhanced training with a traditional learning method usually practised in schools, i.e. reading a job interview guide. The results are promising as professional career counsellors rated the pupils who trained with the system significantly better than those who learned with the traditional method.
Künstliche Intelligenz | 2016
Gregor Mehlmann; Kathrin Janowski; Elisabeth André
Grounding is an important process that underlies all human interaction. Hence, it is also crucial for social companions to interact naturally. Maintaining the common ground requires domain knowledge but has also numerous social aspects, such as attention, engagement and empathy. Integrating these aspects and their interplay with the dialog management in a computational interaction model is a complex task. We present a modeling approach overcoming this challenge and illustrate it based on some social companion applications.
International Conference on Multimodal Interfaces (ICMI) | 2011
Gregor Mehlmann; Birgit Endraß; Elisabeth André
In this paper, we present a modeling approach for the management of highly interactive, multithreaded and multimodal dialogues. Our approach enforces the separation of dialogue content and dialogue structure and is based on a statechart language encompassing concepts for hierarchy, concurrency, variable scoping and a detailed runtime history. These concepts facilitate the modeling of interactive dialogues with multiple virtual characters, autonomous and parallel behaviors, flexible interruption policies, context-sensitive interpretation of the user's discourse acts and coherent resumptions of dialogues. An interpreter allows real-time visualization and modification of the model, supporting rapid prototyping and easy debugging. Our approach has been used successfully in applications and research projects and has been evaluated in field tests with non-expert authors. We present a demonstrator illustrating our concepts in a social game scenario.
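The interplay of interruption policies and runtime history that the abstract describes can be sketched minimally. The class names and the social-game states below are hypothetical illustrations, not the paper's actual statechart language:

```python
class State:
    """A statechart state with optional substates and a shallow history marker."""
    def __init__(self, name, substates=None, initial=None):
        self.name = name
        self.substates = substates or {}
        self.initial = initial
        self.history = None  # last active substate, remembered for resumption

class Dialogue:
    """Tiny interpreter over one level of a statechart hierarchy."""
    def __init__(self, root):
        self.root = root
        self.active = root.substates[root.initial]

    def interrupt(self, state_name):
        """Suspend the current dialogue thread; history remembers it."""
        self.root.history = self.active.name
        self.active = self.root.substates[state_name]

    def resume(self):
        """Coherently resume the suspended dialogue from the history marker."""
        self.active = self.root.substates[self.root.history]

# Hypothetical social-game dialogue: small talk, interrupted by a user question.
root = State("dialogue",
             {"smalltalk": State("smalltalk"),
              "answer_question": State("answer_question")},
             initial="smalltalk")
d = Dialogue(root)
d.interrupt("answer_question")  # user asks something mid-conversation
d.resume()                      # dialogue picks up where it left off
print(d.active.name)  # smalltalk
```

A full statechart language adds concurrency, deep hierarchy and scoped variables on top of this, but the history mechanism is what makes coherent resumption after an interruption possible.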
European Conference on Artificial Intelligence (ECAI) | 2014
Gregor Mehlmann; Kathrin Janowski; Tobias Baur; Markus Häring; Elisabeth André; Patrick Gebhard
Grounding is essential in human interaction and crucial for social robots collaborating with humans. Gaze plays versatile roles for establishing, maintaining and repairing the common ground. It is combined with parallel modalities and involved in several processes for behavior generation and recognition. We present a uniform modeling approach focusing on the multi-modal, parallel and bidirectional aspects of gaze and their interleaving with the dialog logic.
Proceedings of the 1st International Workshop on Multimedia Analysis and Retrieval for Multimodal Interaction | 2016
Leo Wanner; Josep Blat; Stamatia Dasiopoulou; Mónica Domínguez; Gerard Llorach; Simon Mille; Federico M. Sukno; Eleni Kamateri; Stefanos Vrochidis; Ioannis Kompatsiaris; Elisabeth André; Florian Lingenfelser; Gregor Mehlmann; Andries Stam; Ludo Stellingwerff; Bianca Vieru; Lori Lamel; Wolfgang Minker; Louisa Pragst; Stefan Ultes
We present work in progress on an intelligent embodied conversation agent in the basic care and healthcare domain. In contrast to most existing agents, the presented agent is designed to have the linguistic, cultural, social and emotional competence needed to interact with elderly people and migrants. It is composed of an ontology-based and reasoning-driven dialogue manager, multimodal communication analysis and generation modules, and a search engine for the retrieval of multimedia background content from the web needed for conducting a conversation on a given topic.
International Conference on Interactive Digital Storytelling (ICIDS) | 2011
Birgit Endrass; Christoph Klimmt; Gregor Mehlmann; Elisabeth André; Christian Roth
How human users perceive and interact with interactive storytelling applications has not been widely researched so far. In this paper, we present an experimental approach in which we investigate the impact of different dialog-based interaction styles on human users. To this end, an interactive demonstrator has been evaluated in two different versions: one providing a continuous interaction style where interaction is possible at any time, and another providing system-initiated interaction where the user can only interact at certain prompts.