Maria Koutsombogera
National and Kapodistrian University of Athens
Publications
Featured research published by Maria Koutsombogera.
Cognitive Computation | 2015
Alessandro Vinciarelli; Anna Esposito; Elisabeth André; Francesca Bonin; Mohamed Chetouani; Jeffrey F. Cohn; Marco Cristani; Ferdinand Fuhrmann; Elmer Gilmartin; Zakia Hammal; Dirk Heylen; Rene Kaiser; Maria Koutsombogera; Alexandros Potamianos; Steve Renals; Giuseppe Riccardi; Albert Ali Salah
Modelling, analysis and synthesis of behaviour are the subject of major efforts in computing science, especially when it comes to technologies that make sense of human–human and human–machine interactions. This article outlines some of the most important issues that still need to be addressed to ensure substantial progress in the field, namely (1) development and adoption of virtuous data collection and sharing practices, (2) shift in the focus of interest from individuals to dyads and groups, (3) endowment of artificial agents with internal representations of users and context, (4) modelling of cognitive and semantic processes underlying social behaviour and (5) identification of application domains and strategies for moving from the laboratory to real-world products.
Multimodal Signals: Cognitive and Algorithmic Issues | 2009
Maria Koutsombogera; Harris Papageorgiou
This paper presents a study on multimodal conversation analysis of Greek TV interviews. Specifically, we examine the type of facial, hand and body gestures and their respective communicative functions in terms of feedback and turn management. Taking into account previous work on the analysis of non-verbal interaction, we describe the tools and the coding scheme employed, we discuss the distribution of the features of interest and we investigate the effect of the situational and conversational interview setting on the interactional behavior of the participants. Finally, we conclude with comments on future work and exploitation of the resulting resource.
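The following is a minimal Python sketch of how time-aligned gesture annotations with communicative functions of the kind examined above (feedback and turn management) might be represented and tallied; the class, field names and labels are illustrative assumptions, not the actual coding scheme or tools used in the study.

from collections import Counter
from dataclasses import dataclass

@dataclass
class GestureAnnotation:
    speaker: str        # participant id, e.g. "interviewer" or "guest"
    articulator: str    # "face", "hand" or "body"
    start: float        # onset in seconds
    end: float          # offset in seconds
    function: str       # e.g. "feedback-give", "turn-take", "turn-yield"

annotations = [
    GestureAnnotation("interviewer", "face", 12.4, 13.1, "feedback-give"),
    GestureAnnotation("guest", "hand", 13.0, 14.2, "turn-take"),
    GestureAnnotation("guest", "body", 20.5, 21.0, "turn-yield"),
]

# Distribution of communicative functions per articulator type.
distribution = Counter((a.articulator, a.function) for a in annotations)
for (articulator, function), count in distribution.items():
    print(f"{articulator:5s} {function:15s} {count}")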
9th IFIP WG 5.5 International Summer Workshop on Multimodal Interfaces, eNTERFACE 2013, Lisbon, Portugal, July 15 – August 9, 2013 | 2014
Samer Al Moubayed; Jonas Beskow; Bajibabu Bollepalli; Ahmed Hussen-Abdelaziz; Martin Johansson; Maria Koutsombogera; José Lopes; Jekaterina Novikova; Catharine Oertel; Gabriel Skantze; Kalin Stefanov; Gül Varol
This project explores a novel experimental setup towards building a spoken, multi-modally rich, and human-like multiparty tutoring agent. A setup is developed and a corpus is collected that targets t ...
COST'09 Proceedings of the Second International Conference on Development of Multimodal Interfaces: Active Listening and Synchrony | 2009
Maria Koutsombogera; Harris Papageorgiou
The aim of this study is to analyze certain linguistic (dialogue acts, morphosyntactic units, semantics) and non-verbal cues (face, hand and body gestures) that may induce the silent feedback of a participant in face-to-face discussions. We analyze the typology and functions of the feedback expressions as attested in a corpus of TV interviews and then we move on to the investigation of the immediately preceding context to find systematic evidence related to the production of feedback. Our motivation is to look into the case of active listening by processing data from real dialogues based on the discourse and lexical content that induces the listener’s reactions.
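As a rough illustration of this kind of preceding-context analysis, the Python sketch below collects the other speaker's dialogue acts that end shortly before each listener feedback event; the annotations, labels and the one-second window are assumptions made for the example, not the study's actual data or parameters.

dialogue_acts = [  # (start, end, speaker, act)
    (0.0, 2.1, "interviewer", "question"),
    (2.2, 5.0, "guest", "statement"),
    (5.1, 7.3, "guest", "statement"),
]
feedback_events = [  # (time, listener, form)
    (5.3, "interviewer", "head-nod"),
]

def preceding_context(events, acts, window=1.0):
    """For each feedback event, return the other speaker's acts ending within `window` seconds before it."""
    contexts = []
    for time, listener, form in events:
        prior = [a for a in acts if time - window <= a[1] <= time and a[2] != listener]
        contexts.append(((time, listener, form), prior))
    return contexts

for event, prior in preceding_context(feedback_events, dialogue_acts):
    print(event, "preceded by", [a[3] for a in prior])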
International Conference on Universal Access in Human-Computer Interaction | 2016
Eleni Efthimiou; Stavroula-Evita Fotinea; Theodore Goulas; Athanasia-Lida Dimou; Maria Koutsombogera; Vassilis Pitsikalis; Petros Maragos; Costas S. Tzafestas
Acquisition and annotation of a multimodal-multisensory data set of human-passive rollator-carer interactions have enabled the analysis of related human behavioural patterns and the definition of the MOBOT human-robot communication model. The MOBOT project has envisioned the development of cognitive robotic assistant prototypes that act proactively, adaptively and interactively with respect to elderly humans with slight walking and cognitive difficulties. To meet the project’s goals, a multimodal action recognition system is being developed to monitor, analyse and predict user actions with a high level of accuracy and detail. In the same framework, the analysis of human behaviour data that have become available through the project’s multimodal-multisensory corpus has led to the modelling of Human-Robot Communication in order to achieve an effective, natural interaction between users and the assistive robotic platform. Here, we discuss how the project’s communication model has been integrated in the robotic platform in order to support a natural multimodal human-robot interaction.
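As a loose illustration of the general idea of a communication model that maps recognised user actions to platform responses, the hypothetical Python sketch below pairs action labels with responses and falls back to a clarification request when recognition confidence is low; all labels, responses and the threshold are invented for illustration and do not reflect the MOBOT system's actual design.

# Hypothetical mapping from recognised user actions to assistive-platform responses.
ACTION_RESPONSES = {
    "reach_for_handles": "approach_user",
    "grasp_handles": "unlock_brakes",
    "verbal_stop": "stop_and_hold",
    "sit_down_intent": "align_with_chair",
}

def respond(recognised_action, confidence, threshold=0.7):
    """Select a platform response only when the recogniser is confident enough."""
    if confidence < threshold:
        return "ask_for_clarification"
    return ACTION_RESPONSES.get(recognised_action, "idle")

print(respond("grasp_handles", 0.92))   # unlock_brakes
print(respond("verbal_stop", 0.55))     # ask_for_clarification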
IEEE Symposium Series on Computational Intelligence | 2016
Eleni Efthimiou; Stavroula-Evita Fotinea; Theodore Goulas; Maria Koutsombogera; Panagiotis Karioris; Anna Vacalopoulou; Isidoros Rodomagoulakis; Petros Maragos; Costas S. Tzafestas; Vassilis Pitsikalis; Yiannis Koumpouros; Alexandra Karavasili; Panagiotis Siavelis; Foteini Koureta; Despoina Alexopoulou
In this paper we discuss the integration of a communication model in the MOBOT assistive robotic platform and its evaluation by target users. The MOBOT platform envisions the development of cognitive robotic assistant prototypes that act proactively, adaptively and interactively with respect to elderly humans with slight walking and cognitive impairments. The respective multimodal action recognition system has been developed to monitor, analyze and predict user actions with a high level of accuracy and detail. The robotic platform incorporates a human-robot communication model that has been defined with semantics of human actions in interaction, their capture and their representation in terms of behavioral patterns, in order to achieve an effective, natural interaction aimed at supporting elderly users with slight walking and cognitive impairments. The platform has been evaluated in a series of validation experiments with end users, the procedure and results of which are also presented in this paper.
Human-Robot Interaction | 2014
Samer Al Moubayed; Jonas Beskow; Bajibabu Bollepalli; Joakim Gustafson; Ahmed Hussen-Abdelaziz; Martin Johansson; Maria Koutsombogera; José Lopes; Jekaterina Novikova; Catharine Oertel; Gabriel Skantze; Kalin Stefanov; Gül Varol
In this paper, we describe a project that explores a novel experimental setup towards building a spoken, multi-modally rich, and human-like multiparty tutoring robot. A human-robot interaction setup is designed, and a human-human dialogue corpus is collected. The corpus targets the development of a dialogue system platform to study verbal and nonverbal tutoring strategies in multiparty spoken interactions with robots which are capable of spoken dialogue. The dialogue task is centered on two participants involved in a dialogue aiming to solve a card-ordering game. Along with the participants sits a tutor (robot) that helps the participants perform the task, and organizes and balances their interaction. Different multimodal signals, captured and automatically synchronized by different audio-visual capture technologies such as a microphone array, Kinects, and video cameras, were coupled with manual annotations. These are used to build a situated model of the interaction based on the participants’ personalities, their state of attention, their conversational engagement and verbal dominance, and how these correlate with the verbal and visual feedback, turn-management, and conversation regulatory actions generated by the tutor. Driven by the analysis of the corpus, we also present the detailed design methodologies for an affective, multimodally rich dialogue system that allows the robot to incrementally measure the attention state and dominance of each participant, allowing the robot head Furhat to maintain a well-coordinated, balanced, and engaging conversation that attempts to maximize agreement and each participant’s contribution to solving the task. This project sets the first steps in exploring the potential of using multimodal dialogue systems to build interactive robots that can serve in educational, team building, and collaborative task solving applications.
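The Python sketch below illustrates, in spirit only, one simple way to track per-participant speaking activity incrementally and pick the less dominant participant as the next addressee, which is the kind of balancing behaviour described above; the decay-based update rule and the values are assumptions for illustration, not the system reported in the paper.

class ParticipationTracker:
    def __init__(self, participants, decay=0.9):
        self.scores = {p: 0.0 for p in participants}
        self.decay = decay  # exponential forgetting of past activity

    def update(self, speaker, seconds_spoken):
        # Decay everyone's score, then credit the current speaker.
        for p in self.scores:
            self.scores[p] *= self.decay
        self.scores[speaker] += seconds_spoken

    def next_addressee(self):
        # Address the participant who has contributed least recently.
        return min(self.scores, key=self.scores.get)

tracker = ParticipationTracker(["participant_A", "participant_B"])
tracker.update("participant_A", 4.0)
tracker.update("participant_A", 3.0)
tracker.update("participant_B", 1.0)
print(tracker.next_addressee())  # participant_B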
Toward Robotic Socially Believable Behaving Systems (II) | 2016
Maria Koutsombogera; Miltos Deligiannis; Maria Giagkou; Harris Papageorgiou
This paper presents an experimental design and setup that explores the interaction between two children and their tutor during a question–answer session of a reading comprehension task. The multimodal aspects of the interactions are analysed in terms of preferred signals and strategies that speakers employ to carry out successful multi-party conversations. This analysis will form the basis for the development of behavioral models accounting for the specific context. We envisage the integration of such models into intelligent, context-aware systems, i.e. an embodied dialogue system that has the role of a tutor and is able to carry out a discussion in a multiparty setting by exploring the multimodal signals of the children. This system will have the ability to discuss a text and address questions to the children, encouraging collaboration and equal participation in the discussion and assessing the answers that the children give. The paper focuses on the design of the appropriate setup, the data collection and the analysis of the multimodal signals that are important for the realization of such a system.
International Conference on Multimodal Interfaces | 2014
Samer Al Moubayed; Dan Bohus; Anna Esposito; Dirk Heylen; Maria Koutsombogera; Harris Papageorgiou; Gabriel Skantze
In this paper, we present a brief summary of the international workshop on Modeling Multiparty, Multimodal Interactions. The UM3I 2014 workshop is held in conjunction with the ICMI 2014 conference. The workshop will highlight recent developments and adopted methodologies in the analysis and modeling of multiparty and multimodal interactions, the design and implementation principles of related human-machine interfaces, as well as the identification of potential limitations and ways of overcoming them.
GW'11 Proceedings of the 9th International Conference on Gesture and Sign Language in Human-Computer Interaction and Embodied Communication | 2011
Maria Koutsombogera; Harris Papageorgiou
This paper presents a study of iconic gestures as attested in a corpus of Greek face-to-face television interviews. The communicative significance of the iconic gestures situated in an interactional context is examined with regard to their semantics as well as the syntactic properties of the accompanying speech. Iconic gestures are classified according to their semantic equivalents, and are further linked to the phrasal units of the words co-occurring with them, in order to provide evidence about the actual syntactic structures that induce them. The findings support the communicative power of iconic gestures and suggest a framework for their interpretation based on the interplay of semantic and syntactic cues.
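As a rough sketch of the gesture-to-phrase linking idea, the Python fragment below aligns an iconic gesture's time span with the co-occurring phrase it overlaps most; the time spans, phrase labels and the overlap criterion are invented for this illustration and are not drawn from the corpus.

gestures = [  # (start, end, semantic_gloss)
    (2.9, 3.5, "size:large"),
]
phrases = [  # (start, end, phrase_type, words)
    (2.8, 3.4, "NP", "a huge building"),
    (3.5, 4.5, "VP", "was demolished"),
]

def overlap(a_start, a_end, b_start, b_end):
    """Length of the temporal overlap between two spans, zero if disjoint."""
    return max(0.0, min(a_end, b_end) - max(a_start, b_start))

# Link each gesture to the phrase it overlaps with most.
for g_start, g_end, gloss in gestures:
    best = max(phrases, key=lambda p: overlap(g_start, g_end, p[0], p[1]))
    print(gloss, "->", best[2], best[3])  # size:large -> NP a huge building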