Mark ter Maat
University of Twente
Publication
Featured research published by Mark ter Maat.
intelligent virtual agents | 2010
Mark ter Maat; Khiet Phuong Truong; Dirk Heylen
Different turn-taking strategies of an agent influence the impression that people have of it. We recorded conversations of a human with an interviewing agent, controlled by a wizard and using a particular turn-taking strategy. A questionnaire with 27 semantic differential scales concerning personality, emotion, social skills and interviewing skills was used to capture these impressions. We show that it is possible to influence factors such as agreeableness, assertiveness, conversational skill and rapport by varying the agent’s turn-taking strategy.
affective computing and intelligent interaction | 2009
Marc Schröder; Elisabetta Bevacqua; Florian Eyben; Hatice Gunes; Dirk Heylen; Mark ter Maat; Sathish Pammi; Maja Pantic; Catherine Pelachaud; Björn W. Schuller; Etienne de Sevin; Michel F. Valstar; Martin Wöllmer
Sensitive artificial listeners (SAL) are virtual dialogue partners who, despite their very limited verbal understanding, intend to engage the user in a conversation by paying attention to the user's emotions and non-verbal expressions. The SAL characters have their own emotionally defined personality, and attempt to drag the user towards their dominant emotion, through a combination of verbal and non-verbal expression. The demonstrator shows an early version of the fully autonomous SAL system based on audiovisual analysis and synthesis.
intelligent virtual agents | 2011
Mark ter Maat; Dirk Heylen
This paper introduces Flipper, a specification language and interpreter for Information State Update rules that can be used for developing spoken dialogue systems and embodied conversational agents. The system uses XML-templates to modify the information state and to select behaviours to perform.
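The Information State Update idea behind Flipper can be illustrated with a short sketch. Note that Flipper itself uses XML templates; the Python below is a hypothetical illustration of the general mechanism only, and all names (`Rule`, `run_rules`, the example rule) are invented for this sketch, not part of Flipper's API.

```python
# Hypothetical sketch of the Information State Update mechanism:
# rules whose preconditions match the current information state
# apply their effects to the state and may select a behaviour.

class Rule:
    def __init__(self, name, precondition, effects, behaviour=None):
        self.name = name
        self.precondition = precondition  # predicate over the information state
        self.effects = effects            # function that updates the state
        self.behaviour = behaviour        # optional behaviour to perform

def run_rules(state, rules):
    """Apply every rule whose precondition holds; return selected behaviours."""
    selected = []
    for rule in rules:
        if rule.precondition(state):
            rule.effects(state)
            if rule.behaviour is not None:
                selected.append(rule.behaviour)
    return selected

# Example rule set: greet the user once at the start of the dialogue.
rules = [
    Rule(
        name="greet_user",
        precondition=lambda s: s.get("dialogue_started") and not s.get("greeted"),
        effects=lambda s: s.update(greeted=True),
        behaviour="say_hello",
    )
]

state = {"dialogue_started": True}
print(run_rules(state, rules))  # ['say_hello']
print(run_rules(state, rules))  # [] -- the rule's effect prevents it firing again
```

Because a rule's effects modify the information state, a rule can disable itself (as above) or enable other rules, which is what makes the update cycle suitable for dialogue management.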
Applied Artificial Intelligence | 2011
Dirk Heylen; Rieks op den Akker; Mark ter Maat; Paolo Petta; Stefan Rank; Dennis Reidsma; Job Zwiers
The literature on social agents has put forward a number of requirements that social agents need to fulfill. In this paper we analyze the kinds of reasons and motivations that lie behind the statement of these requirements. In a second part of the paper, we look at how one can go about engineering the social agents. We introduce a general language in which to express dialogue rules and some tools that support the development of dialogue systems.
intelligent technologies for interactive entertainment | 2008
Mark ter Maat; Rob M. Ebbers; Dennis Reidsma; Antinus Nijholt
We describe our research on designing and implementing a Virtual Conductor: a virtual human (embodied agent) that acts like a human conductor in its interaction with a real, human orchestra. We reported previously on a first version that used a digital musical score to lead an orchestra. This conductor was able to conduct the beat with a certain tempo and dynamics, and to correct the tempo if necessary, using advanced audio analysis. We observed this Virtual Conductor at work during various performances to which it was invited. These performances made us aware of shortcomings. Therefore we took a closer look at the interaction between conductors and musicians in practice, both during performances and during rehearsals, and based on this study we introduced conducting gestures that display the intentions of a conductor and developed rehearsal modules. Apart from the literature on conducting, we took into account videos of human conductors and interviewed human conductors. In addition, we introduced principles from conversation analysis in the new design of our Virtual Conductor.
ieee international conference on automatic face gesture recognition | 2011
Marc Schröder; Sathish Pammi; Hatice Gunes; Maja Pantic; Michel F. Valstar; Roddy Cowie; Dirk Heylen; Mark ter Maat; Florian Eyben; Björn W. Schuller; Martin Wöllmer; Elisabetta Bevacqua; Catherine Pelachaud; Etienne de Sevin
This demonstration aims to showcase the recently completed SEMAINE system. The SEMAINE system is a publicly available, fully autonomous Sensitive Artificial Listeners (SAL) system that consists of virtual dialog partners based on audiovisual analysis and synthesis (see http://semaine.opendfki.de/wiki). The system runs in real-time, and combines incremental analysis of user behavior, dialog management, and synthesis of speaker and listener behavior of a SAL character, displayed as a virtual agent. The SAL characters intend to engage the user in a conversation by paying attention to the user's emotions and nonverbal expressions. The characters have their own emotionally defined personality. During an interaction, the characters attempt to create an emotional workout for the user by drawing her/him towards their dominant emotion, through a combination of verbal and nonverbal expressions.
intelligent technologies for interactive entertainment | 2011
Elisabetta Bevacqua; Florian Eyben; Dirk Heylen; Mark ter Maat; Sathish Pammi; Catherine Pelachaud; Marc Schröder; Björn W. Schuller; Etienne de Sevin; Martin Wöllmer
Sensitive Artificial Listener (SAL) is a multimodal dialogue system which allows users to interact with virtual agents. Four characters with different emotional traits engage users in emotionally coloured interactions. They not only encourage the users to talk but also try to drag them towards specific emotional states. Despite the agents' very limited verbal understanding, they are able to react appropriately to the user's non-verbal behaviour. The demonstrator shows a final version of the fully autonomous SAL system.
intelligent virtual agents | 2012
Ronald Walter Poppe; Mark ter Maat; Dirk Heylen
Advances in animation and sensor technology allow us to engage in face-to-face conversations with virtual agents [1]. One major challenge is to generate the virtual agent's appropriate, human-like behavior contingent on that of the human conversational partner. Models of (nonverbal) behavior are predominantly learned from corpora of dialogs between human subjects [2], or based on simple observations from literature (e.g. [3,4,5,6]).
Proceedings of the 3rd international workshop on Affective interaction in natural environments | 2010
Mark ter Maat; Dirk Heylen
This paper describes work-in-progress on a study to create models of responses of virtual agents that are selected based only on non-content features, such as prosody and facial expressions. From a corpus of human-human interactions, in which one person played the part of an agent and the other person a user, we extracted the turns of the user and gave these to annotators. The annotators had to select utterances from a list of phrases in the repertoire of our agent that would be a good response to the user utterance. The corpus is used to train response selection models based on automatically extracted features and on human annotations of the user turns.
Multimodal Signals: Cognitive and Algorithmic Issues | 2009
Mark ter Maat; Dirk Heylen
After perceiving multi-modal behaviour from a user or agent, a conversational agent needs to be able to determine what was intended with that behaviour. Contextual variables play an important role in this process. We discuss the concept of context and its role in interpretation, analysing a number of examples. We show how in these cases contextual variables are needed to disambiguate multi-modal behaviours. Finally, we present some basic categories into which these contextual variables can be divided.