Hannes Pirker
Austrian Research Institute for Artificial Intelligence
Publication
Featured research published by Hannes Pirker.
Intelligent Virtual Agents | 2006
Stefan Kopp; Brigitte Krenn; Stacy Marsella; Andrew N. Marshall; Catherine Pelachaud; Hannes Pirker; Kristinn R. Thórisson; Hannes Högni Vilhjálmsson
This paper describes an international effort to unify a multimodal behavior generation framework for Embodied Conversational Agents (ECAs). We propose a three-stage model we call SAIBA, where the stages represent intent planning, behavior planning and behavior realization. A Function Markup Language (FML), describing intent without referring to physical behavior, mediates between the first two stages, and a Behavior Markup Language (BML), describing desired physical realization, mediates between the last two stages. In this paper we focus on BML. The hope is that this abstraction and modularization will help ECA researchers pool their resources to build more sophisticated virtual humans.
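To make the staging concrete, the following is a minimal Python sketch of a SAIBA-style pipeline; the function names and the simplified FML/BML-like element names are assumptions made for this illustration, not the markup defined in the paper.

```python
import xml.etree.ElementTree as ET

# Hypothetical three-stage SAIBA-style pipeline in which the stage
# boundaries are mediated by markup documents: FML (intent) and BML
# (behaviour). Element and attribute names are simplified placeholders.

def plan_intent(goal: str) -> str:
    """Intent planning: produce an FML-like document with no physical detail."""
    fml = ET.Element("fml")
    ET.SubElement(fml, "performative", {"type": "inform", "content": goal})
    return ET.tostring(fml, encoding="unicode")

def plan_behavior(fml_doc: str) -> str:
    """Behaviour planning: map communicative intent onto a BML-like document."""
    intent = ET.fromstring(fml_doc).find("performative").get("content")
    bml = ET.Element("bml")
    speech = ET.SubElement(bml, "speech", {"id": "s1"})
    ET.SubElement(speech, "text").text = intent
    # Synchronise a head nod with the start of the utterance.
    ET.SubElement(bml, "head", {"type": "NOD", "start": "s1:start"})
    return ET.tostring(bml, encoding="unicode")

def realize_behavior(bml_doc: str) -> None:
    """Behaviour realization: a real system would drive TTS and animation here."""
    print(bml_doc)

if __name__ == "__main__":
    realize_behavior(plan_behavior(plan_intent("The meeting starts at noon.")))
```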
Archive | 2011
Marc Schröder; Hannes Pirker; Myriam Lamolle; Felix Burkhardt; Christian Peter; Enrico Zovato
In many cases when technological systems are to operate on emotions and related states, they need to represent these states. Existing representations are limited to application-specific solutions that fall short of representing the full range of concepts identified as relevant in the scientific literature. The present chapter presents a broad conceptual view of the possibility of creating a generic representation of emotions that can be used in many contexts and for many purposes. Potential use cases and the resulting requirements are identified and compared to the scientific literature on emotions. Options for the practical realisation of an Emotion Markup Language are discussed in the light of the requirement to extend the language to different emotion concepts and vocabularies, and ontologies are investigated as a means of providing limited “mapping” mechanisms between different emotion representations.
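By way of illustration only, here is a small Python sketch of what a vocabulary-neutral emotion annotation and a deliberately limited mapping between two category vocabularies could look like; the class, the vocabulary names and the mapping table are invented for this example and are not part of the chapter.

```python
from dataclasses import dataclass

@dataclass
class EmotionAnnotation:
    """A generic emotion representation: a category from a named vocabulary
    plus optional dimensional values (e.g. arousal/valence in [0, 1])."""
    vocabulary: str
    category: str
    dimensions: dict[str, float]

# Hypothetical partial mapping between two category vocabularies.
# Real mappings are lossy, which is why ontologies are treated only
# as a *limited* mapping mechanism.
BIG6_TO_OCC = {"joy": "joy", "anger": "anger", "fear": "fear"}

def map_vocabulary(ann: EmotionAnnotation, target: str) -> EmotionAnnotation | None:
    """Return the annotation expressed in the target vocabulary, or None
    if no mapping is defined for this category."""
    if ann.vocabulary == "big6" and target == "occ":
        mapped = BIG6_TO_OCC.get(ann.category)
        if mapped is not None:
            return EmotionAnnotation("occ", mapped, dict(ann.dimensions))
    return None

if __name__ == "__main__":
    ann = EmotionAnnotation("big6", "joy", {"arousal": 0.8, "valence": 0.9})
    print(map_vocabulary(ann, "occ"))
```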
Archive | 2011
Brigitte Krenn; Catherine Pelachaud; Hannes Pirker; Christopher E. Peters
This contribution deals with the requirements on representation languages employed in planning and displaying communicative multimodal behaviour of embodied conversational agents (ECAs). We focus on the role of behaviour representation frameworks as part of the processing chain from intent planning to the planning and generation of multimodal communicative behaviours. On the one hand, the field is fragmented, with almost everybody working on ECAs developing their own tailor-made representations, which is reflected, among other things, in the extensive reference list. On the other hand, there are general aspects that need to be modelled in order to generate multimodal behaviour. Throughout the chapter, we take different perspectives on existing representation languages and outline the fundamentals of a common framework.
Archive | 2011
Jean-Claude Martin; Laurence Devillers; Amaryllis Raouzaiou; George Caridakis; Zsófia Ruttkay; Catherine Pelachaud; Maurizio Mancini; Radek Niewiadomski; Hannes Pirker; Brigitte Krenn; Isabella Poggi; Emanuela Magno Caldognetto; Federica Cavicchio; Giorgio Merola; Alejandra García Rojas; Frédéric Vexo; Daniel Thalmann; Arjan Egges; Nadia Magnenat-Thalmann
In order to be believable, embodied conversational agents (ECAs) must show expression of emotions in a consistent and natural-looking way across modalities. The ECA has to be able to display coordinated signs of emotion during realistic emotional behaviour. Such a capability requires studying and representing emotions and the coordination of modalities during non-basic, realistic human behaviour, defining languages for representing such behaviours to be displayed by the ECA, and having access to mono-modal representations such as gesture repositories. This chapter is concerned with coordinating the generation of signs in multiple modalities in such an affective agent. Designers of an affective agent need to know how it should coordinate its facial expression, speech, gestures and other modalities in order to show emotion. This synchronisation of modalities is a central feature of emotional expression.
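One way to picture the coordination problem is to place each modality's sign on a shared timeline and check that the signs actually co-occur. The Python sketch below is invented for illustration and is not taken from the chapter.

```python
from dataclasses import dataclass

@dataclass
class ModalitySignal:
    """One sign on one modality, with start/end times in seconds."""
    modality: str   # e.g. "face", "speech", "gesture"
    sign: str       # e.g. "frown", "raised intensity", "fist"
    start: float
    end: float

def overlap_window(signals: list[ModalitySignal]) -> tuple[float, float] | None:
    """Latest start / earliest end across modalities; None if the signs
    never co-occur, i.e. the display would look uncoordinated."""
    start = max(s.start for s in signals)
    end = min(s.end for s in signals)
    return (start, end) if start < end else None

if __name__ == "__main__":
    anger = [
        ModalitySignal("face", "frown", 0.2, 1.8),
        ModalitySignal("speech", "raised intensity", 0.0, 2.0),
        ModalitySignal("gesture", "fist", 0.5, 1.2),
    ]
    print(overlap_window(anger))  # -> (0.5, 1.2)
```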
Meeting of the Association for Computational Linguistics | 1998
Hannes Pirker; Georg Niklfeld; Johannes Matiasek; Harald Trost
The paper describes an interface between generator and synthesizer of the German-language concept-to-speech system VieCtoS. It discusses phenomena in German intonation that depend on the interaction between grammatical dependencies (projection of information structure into syntax) and prosodic context (performance-related modifications to intonation patterns). Phonological processing in our system comprises segmental as well as suprasegmental dimensions such as syllabification, modification of word stress positions, and a symbolic encoding of intonation. Phonological phenomena often touch upon more than one of these dimensions, so that mutual accessibility of the data structures on each dimension had to be ensured. We present a linear representation of the multidimensional phonological data based on a straightforward linearization convention, which suffices to bring this conceptually multilinear data set under the scope of the well-known processing techniques for two-level morphology.
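The linearization idea can be pictured with a toy example: interleave per-syllable annotations from several tiers with the segmental string so that a single finite-state (two-level style) process can operate on one flat string. The bracketing convention and tier names below are invented for illustration; the paper defines its own linearization convention.

```python
# Toy linearisation of multi-tier phonological data into a single string.
# The bracketed-annotation convention used here is an invented example.

def linearize(syllables, stress, tones):
    """Interleave per-syllable stress and tone marks with the segments."""
    out = []
    for seg, st, tone in zip(syllables, stress, tones):
        marks = "".join(m for m in (st, tone) if m)
        out.append(f"{seg}[{marks}]" if marks else seg)
    return ".".join(out)  # "." marks syllable boundaries

if __name__ == "__main__":
    # A two-syllable word with word stress and a rising tone on syllable 1.
    print(linearize(["ta", "fel"], ["'", ""], ["LH", ""]))  # -> ta['LH].fel
```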
TAEBC-2009 | 2009
Stefan Kopp; Brigitte Krenn; Stacy Marsella; Andrew N. Marshall; Catherine Pelachaud; Hannes Pirker; Kristinn R. Thórisson; Hannes Högni Vilhjálmsson
Affective Computing and Intelligent Interaction | 2007
Hannes Pirker
This study deals with the application of MFCC-based models both to the recognition of emotional speech and to the recognition of emotions in speech. More specifically, it investigates the performance of phone-level models. First, results from performing forced alignment for phonetic segmentation on GEMEP, a novel multimodal corpus of acted emotional utterances, are presented; then the newly acquired segmentations are used in emotion recognition experiments.
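As a rough sketch of the general approach (MFCC features plus per-emotion acoustic models), the Python fragment below trains one Gaussian mixture model per emotion on MFCC frames and classifies by maximum average log-likelihood. The paths, labels and model sizes are placeholders, and the phone-level segmentation of GEMEP used in the paper is not reproduced here.

```python
import numpy as np
import librosa
from sklearn.mixture import GaussianMixture

def mfcc_frames(path, sr=16000, n_mfcc=13):
    """Return an (n_frames, n_mfcc) matrix of MFCCs for one utterance."""
    y, sr = librosa.load(path, sr=sr)
    return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc).T

def train_models(train_files):
    """train_files: dict mapping an emotion label to a list of wav paths."""
    models = {}
    for emotion, paths in train_files.items():
        frames = np.vstack([mfcc_frames(p) for p in paths])
        models[emotion] = GaussianMixture(n_components=8).fit(frames)
    return models

def classify(models, path):
    """Pick the emotion whose GMM gives the highest average log-likelihood."""
    frames = mfcc_frames(path)
    return max(models, key=lambda e: models[e].score(frames))
```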
Lecture Notes in Computer Science | 2006
Stefan Kopp; Brigitte Krenn; Stacy Marsella; Andrew N. Marshall; Catherine Pelachaud; Hannes Pirker; Kristinn R. Thórisson; Hannes Högni Vilhjálmsson
arXiv: Multimedia | 2002
Paul Piwek; Brigitte Krenn; Marc Schröder; Martine Grice; Stefan Baumann; Hannes Pirker
Conference of the International Speech Communication Association | 1998
Erhard Rank; Hannes Pirker