Network


Latest external collaboration at the country level.

Hotspot


Dive into the research topics where Kalin Stefanov is active.

Publication


Featured research published by Kalin Stefanov.


9th IFIP WG 5.5 International Summer Workshop on Multimodal Interfaces, eNTERFACE 2013, Lisbon, Portugal, July 15 – August 9, 2013 | 2014

Tutoring Robots: Multiparty multimodal social dialogue with an embodied tutor

Samer Al Moubayed; Jonas Beskow; Bajibabu Bollepalli; Ahmed Hussen-Abdelaziz; Martin Johansson; Maria Koutsombogera; José Lopes; Jekaterina Novikova; Catharine Oertel; Gabriel Skantze; Kalin Stefanov; Gül Varol

This project explores a novel experimental setup towards building a spoken, multi-modally rich, and human-like multiparty tutoring agent. A setup is developed and a corpus is collected that targets the development of a dialogue system platform to study verbal and nonverbal tutoring strategies in multiparty spoken interactions.


International Conference on Multimodal Interfaces | 2012

Multimodal multiparty social interaction with the furhat head

Samer Al Moubayed; Gabriel Skantze; Jonas Beskow; Kalin Stefanov; Joakim Gustafson

In this demonstrator we will show an advanced multimodal and multiparty spoken conversational system using Furhat, a robot head based on projected facial animation. Furhat is a human-like interface that utilizes facial animation for physical robot heads using back-projection. In the system, multimodality is enabled using speech and rich visual input signals such as multi-person real-time face tracking and microphone tracking. The demonstrator will showcase a system that is able to carry out social dialogue with multiple interlocutors simultaneously, with rich output signals such as eye and head coordination, lip-synchronized speech synthesis, and non-verbal facial gestures used to regulate fluent and expressive multiparty conversations.
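As a rough illustration of how the face-tracking and microphone-tracking inputs described above could be combined, the sketch below shows one simple fusion rule in Python: attend to the tracked face whose direction best matches the estimated sound direction. The data structure, function names, and angle threshold are assumptions for illustration, not part of the Furhat system.

```python
# A minimal sketch (not the authors' implementation) of fusing multi-person
# face tracking with microphone-based speaker localization to decide which
# interlocutor a robot head should attend to. All names and thresholds here
# are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class TrackedFace:
    person_id: str
    azimuth_deg: float   # horizontal angle of the face relative to the robot

def select_addressee(faces: "list[TrackedFace]",
                     sound_azimuth_deg: float,
                     max_mismatch_deg: float = 15.0):
    """Return the id of the tracked face closest to the estimated sound
    direction, or None if no face is close enough to be the active speaker."""
    if not faces:
        return None
    best = min(faces, key=lambda f: abs(f.azimuth_deg - sound_azimuth_deg))
    if abs(best.azimuth_deg - sound_azimuth_deg) > max_mismatch_deg:
        return None
    return best.person_id

# Example: two tracked interlocutors, sound arriving from roughly the left one.
faces = [TrackedFace("left", -30.0), TrackedFace("right", 25.0)]
print(select_addressee(faces, sound_azimuth_deg=-27.0))  # -> "left"
```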


Human-Robot Interaction | 2014

Human-robot collaborative tutoring using multiparty multimodal spoken dialogue

Samer Al Moubayed; Jonas Beskow; Bajibabu Bollepalli; Joakim Gustafson; Ahmed Hussen-Abdelaziz; Martin Johansson; Maria Koutsombogera; José Lopes; Jekaterina Novikova; Catharine Oertel; Gabriel Skantze; Kalin Stefanov; Gül Varol

In this paper, we describe a project that explores a novel experimental setup towards building a spoken, multi-modally rich, and human-like multiparty tutoring robot. A human-robot interaction setup is designed, and a human-human dialogue corpus is collected. The corpus targets the development of a dialogue system platform to study verbal and nonverbal tutoring strategies in multiparty spoken interactions with robots that are capable of spoken dialogue. The dialogue task is centered on two participants involved in a dialogue aiming to solve a card-ordering game. Along with the participants sits a tutor (robot) that helps the participants perform the task and organizes and balances their interaction. Different multimodal signals, captured and auto-synchronized by different audio-visual capture technologies such as a microphone array, Kinects, and video cameras, were coupled with manual annotations. These are used to build a situated model of the interaction based on the participants' personalities, their state of attention, their conversational engagement and verbal dominance, and how these correlate with the verbal and visual feedback, turn-management, and conversation regulatory actions generated by the tutor. Driven by the analysis of the corpus, we also show the detailed design methodologies for an affective and multimodally rich dialogue system that allows the robot to measure incrementally the attention states and the dominance of each participant, allowing the robot head Furhat to maintain a well-coordinated, balanced, and engaging conversation that attempts to maximize agreement and the contribution to solving the task. This project sets the first steps to explore the potential of using multimodal dialogue systems to build interactive robots that can serve in educational, team building, and collaborative task-solving applications.
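To make the incremental measurement of verbal dominance mentioned above concrete, here is a minimal Python sketch of one possible estimator: an exponentially smoothed share of speaking time per participant. The class, its parameters, and the smoothing rule are illustrative assumptions, not the project's actual model.

```python
# A hedged sketch of one way a dialogue system could track verbal dominance
# incrementally: dominance is approximated by an exponentially smoothed share
# of speaking time per participant. Illustrative assumption only.
class DominanceTracker:
    def __init__(self, participants, alpha=0.05):
        self.alpha = alpha                                   # smoothing factor per update
        self.share = {p: 1.0 / len(participants) for p in participants}

    def update(self, active_speaker):
        """Call once per frame with the currently active speaker (or None)."""
        for p in self.share:
            target = 1.0 if p == active_speaker else 0.0
            self.share[p] += self.alpha * (target - self.share[p])

    def most_dominant(self):
        return max(self.share, key=self.share.get)

# Example: participant "A" speaks for 100 frames, then "B" for 20 frames.
tracker = DominanceTracker(["A", "B"])
for _ in range(100):
    tracker.update("A")
for _ in range(20):
    tracker.update("B")
print(tracker.share, tracker.most_dominant())
```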


GLU 2017 International Workshop on Grounding Language Understanding | 2017

Vision-based Active Speaker Detection in Multiparty Interaction

Kalin Stefanov; Jonas Beskow; Giampiero Salvi

This paper presents a supervised learning method for automatic visual detection of the active speaker in multiparty interactions. The presented detectors are built using a multimodal multiparty interaction dataset previously recorded with the purpose of exploring patterns in the focus of visual attention of humans. Three different conditions are included: two humans involved in task-based interaction with a robot; the same two humans involved in task-based interaction where the robot is replaced by a third human; and a free three-party human interaction. The paper also presents an evaluation of the active speaker detection method in a speaker-dependent experiment, showing that the method achieves good accuracy rates in a fairly unconstrained scenario using only image data as input. The main goal of the presented method is to provide real-time detection of the active speaker within a broader framework implemented on a robot and used to generate natural focus of visual attention behavior during multiparty human-robot interactions.
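As a rough illustration of a frame-level visual detector of this kind, the sketch below (assuming PyTorch) maps a cropped face image to a speaking/not-speaking score with a small CNN. The architecture, input size, and layer choices are assumptions for illustration and do not reproduce the paper's model.

```python
# A minimal sketch of a frame-level visual active speaker detector: a small
# CNN that maps a cropped face image to a speaking / not-speaking logit.
# Architecture and input size are illustrative assumptions.
import torch
import torch.nn as nn

class SpeakerCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, 64), nn.ReLU(),
            nn.Linear(64, 1),            # logit for "is speaking"
        )

    def forward(self, x):                # x: (batch, 3, 64, 64) face crops
        return self.classifier(self.features(x))

model = SpeakerCNN()
faces = torch.randn(8, 3, 64, 64)        # a batch of dummy face crops
speaking_prob = torch.sigmoid(model(faces))
print(speaking_prob.shape)               # torch.Size([8, 1])
```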


Proceedings of the 2nd Workshop on Advancements in Social Signal Processing for Multimodal Interaction | 2016

Look who's talking: visual identification of the active speaker in multi-party human-robot interaction

Kalin Stefanov; Akihiro Sugimoto; Jonas Beskow

This paper presents analysis of a previously recorded multi-modal interaction dataset. The primary purpose of that dataset is to explore patterns in the focus of visual attention of humans under three different conditions: two humans involved in task-based interaction with a robot; the same two humans involved in task-based interaction where the robot is replaced by a third human; and a free three-party human interaction. The paper presents a data-driven methodology for automatic visual identification of the active speaker based on facial action units (AUs). The paper also presents an evaluation of the proposed methodology on 12 different interactions with an approximate length of 4 hours. The methodology will be implemented on a robot and used to generate natural focus of visual attention behavior during multi-party human-robot interactions.
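To illustrate an AU-based approach of this kind in code, the following sketch (assuming scikit-learn, with a random forest standing in for whichever classifier was actually used) trains on per-frame AU intensity vectors labeled as speaking or not speaking. The feature dimensionality and the data are dummy placeholders, not the paper's corpus.

```python
# A hedged sketch of a classifier over facial action unit (AU) features that
# labels each tracked face per frame as speaking or not speaking. The model
# choice, feature count, and data are illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Dummy stand-ins for real data: one row per video frame, one column per AU
# intensity (e.g. from an AU extraction toolkit), plus a binary label.
X_train = rng.random((1000, 17))        # 17 AU intensities per frame (assumed)
y_train = rng.integers(0, 2, 1000)      # 1 = speaking, 0 = silent

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

# At run time, each tracked face's current AU vector is classified per frame.
new_frame = rng.random((1, 17))
print(clf.predict(new_frame))           # -> array([0]) or array([1])
```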


2013 Workshop on Multimodal Corpora: Beyond Audio and Video; Edinburgh, UK, 1 September 2013 | 2013

A Kinect Corpus of Swedish Sign Language Signs

Kalin Stefanov; Jonas Beskow


International Conference on Multimodal Interfaces | 2015

Public Speaking Training with a Multimodal Interactive Virtual Audience Framework

Mathieu Chollet; Kalin Stefanov; Helmut Prendinger; Stefan Scherer


Language Resources and Evaluation | 2016

A Multi-party Multi-modal Dataset for Focus of Visual Attention in Human-human and Human-robot Interaction

Kalin Stefanov; Jonas Beskow


Language Resources and Evaluation | 2014

The Tutorbot Corpus ― A Corpus for Studying Tutoring Behaviour in Multiparty Face-to-Face Spoken Dialogue

Maria Koutsombogera; Samer Al Moubayed; Bajibabu Bollepalli; Ahmed Hussen Abdelaziz; Martin Johansson; José Lopes; Jekaterina Novikova; Catharine Oertel; Kalin Stefanov; Gül Varol


eNTERFACE | 2013

Tutoring Robots - Multiparty Multimodal Social Dialogue with an Embodied Tutor.

Samer Al Moubayed; Jonas Beskow; Bajibabu Bollepalli; Ahmed Hussen Abdelaziz; Martin Johansson; Maria Koutsombogera; José Lopes; Jekaterina Novikova; Catharine Oertel; Gabriel Skantze; Kalin Stefanov; Gül Varol

Collaboration


Dive into Kalin Stefanov's collaboration.

Top Co-Authors

Jonas Beskow, Royal Institute of Technology

Samer Al Moubayed, Royal Institute of Technology

Bajibabu Bollepalli, Royal Institute of Technology

Catharine Oertel, Royal Institute of Technology

Gabriel Skantze, Royal Institute of Technology

Martin Johansson, Royal Institute of Technology

Maria Koutsombogera, Royal Institute of Technology