Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Florian Lingenfelser is active.

Publications


Featured research published by Florian Lingenfelser.


ACM Multimedia | 2013

The social signal interpretation (SSI) framework: multimodal signal processing and recognition in real-time

Johannes Wagner; Florian Lingenfelser; Tobias Baur; Ionut Damian; Felix Kistler; Elisabeth André

Automatic detection and interpretation of social signals carried by voice, gestures, facial expressions, etc. will play a key role for next-generation interfaces, as it paves the way towards a more intuitive and natural human-computer interaction. The paper at hand introduces Social Signal Interpretation (SSI), a framework for real-time recognition of social signals. SSI supports a large range of sensor devices, filter and feature algorithms, as well as machine learning and pattern recognition tools. It encourages developers to add new components using SSI's C++ API, but also addresses front-end users by offering an XML interface for building pipelines with a text editor. SSI is freely available under the GPL at http://openssi.net.


IEEE Transactions on Affective Computing | 2011

Exploring Fusion Methods for Multimodal Emotion Recognition with Missing Data

Johannes Wagner; Elisabeth André; Florian Lingenfelser; Jonghwa Kim

The study at hand aims at the development of a multimodal, ensemble-based system for emotion recognition. Special attention is given to a problem often neglected: missing data in one or more modalities. In offline evaluation the issue can easily be solved by excluding those parts of the corpus where one or more channels are corrupted or not suitable for evaluation. In real applications, however, we cannot ignore the challenge of missing data and have to find adequate ways to handle it. Therefore, we do not expect the examined data to be completely available at all times in our experiments. The presented system solves the problem at the multimodal fusion stage: various ensemble techniques, covering established ones as well as rather novel emotion-specific approaches, are explained and enriched with strategies for compensating for temporarily unavailable modalities. We compare and discuss the advantages and drawbacks of the fusion categories, and an extensive evaluation of the mentioned techniques is carried out on the CALLAS Expressivity Corpus, which features facial, vocal, and gestural modalities.
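The compensation idea, solving missing data at the fusion stage rather than in the corpus, can be illustrated with a minimal decision-level sketch (a hypothetical example with invented values, not the paper's actual ensemble code): each modality's classifier emits a class-probability vector or nothing at all, and the fusion averages only whatever is available.

```python
def fuse_available(probabilities):
    """Average class-probability vectors, skipping missing (None) modalities."""
    available = [p for p in probabilities if p is not None]
    if not available:
        raise ValueError("no modality available")
    n_classes = len(available[0])
    return [sum(p[i] for p in available) / len(available) for i in range(n_classes)]

# Example: the voice channel is temporarily unavailable,
# but face and gesture still contribute to the decision.
face    = [0.7, 0.2, 0.1]
voice   = None
gesture = [0.5, 0.3, 0.2]
fused = fuse_available([face, voice, gesture])
# fused ≈ [0.6, 0.25, 0.15] → class 0 wins despite the missing channel
```

Skipping the missing channel, rather than zero-filling it, keeps the fused vector a valid probability distribution.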


Proceedings of the 4th International Workshop on Human Behavior Understanding - Volume 8212 | 2013

NovA: Automated Analysis of Nonverbal Signals in Social Interactions

Tobias Baur; Ionut Damian; Florian Lingenfelser; Johannes Wagner; Elisabeth André

Previous studies have shown that the success of interpersonal interaction depends not only on the contents we communicate explicitly, but also on the social signals that are conveyed implicitly. In this paper, we present NovA (NOnVerbal behavior Analyzer), a system that automatically analyzes and facilitates the interpretation of social signals conveyed by gestures, facial expressions, and other nonverbal cues, as a basis for computer-enhanced social coaching. NovA records data of human interactions, automatically detects relevant behavioral cues as a measure of the quality of an interaction, and creates descriptive statistics for the recorded data. This enables us to give users automatically generated online feedback on the strengths and weaknesses of their social behavior, as well as elaborate tools for offline analysis and annotation.


ACM Multimedia | 2014

An Event Driven Fusion Approach for Enjoyment Recognition in Real-time

Florian Lingenfelser; Johannes Wagner; Elisabeth André; William Curran

Social signals, and the interpretation of the information they carry, are of high importance in human-computer interaction. The cues within these signals, often used for affect recognition, are displayed in various modalities. Fusing multi-modal signals is a natural and interesting way to improve the automatic classification of emotions transported in social signals. In most present studies of uni-modal affect recognition as well as multi-modal fusion, decisions are forced into fixed annotation segments across all modalities. In this paper, we investigate the less prevalent approach of event-driven fusion, which indirectly accumulates asynchronous events in all modalities for final predictions. We present a fusion approach that handles short-timed events in a vector space, which is of special interest for real-time applications. We compare the results of segmentation-based uni-modal classification and fusion schemes to the event-driven fusion approach. The evaluation is carried out via the detection of enjoyment episodes within the audiovisual Belfast Storytelling Corpus.
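A rough illustration of the event-driven idea (a hypothetical sketch with invented names and parameters, not the paper's actual vector-space method): asynchronous events from any modality add evidence that decays exponentially over time, so a prediction can be read out at any moment instead of waiting for a fixed annotation segment.

```python
import math

class EventFusion:
    """Hypothetical sketch of event-driven fusion: each detected event adds
    evidence; accumulated evidence decays exponentially between events."""

    def __init__(self, decay_per_sec=0.5):
        self.decay = decay_per_sec
        self.value = 0.0   # accumulated enjoyment evidence
        self.last_t = 0.0  # timestamp of the last update

    def add_event(self, t, intensity):
        # Decay the old evidence for the elapsed time, then add the new event.
        self.value *= math.exp(-self.decay * (t - self.last_t))
        self.value += intensity
        self.last_t = t

    def read(self, t):
        # Current evidence level, usable at any time in a real-time loop.
        return self.value * math.exp(-self.decay * (t - self.last_t))

f = EventFusion()
f.add_event(0.0, 1.0)   # e.g. a smile detected in the video channel
f.add_event(0.5, 0.8)   # e.g. a laughter burst detected in the audio channel
level = f.read(1.0)     # evidence half a second after the last event
```

Because events from the two channels need not be aligned, this avoids forcing both modalities into the same fixed segments.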


KSII Transactions on Internet and Information Systems | 2015

Context-Aware Automated Analysis and Annotation of Social Human-Agent Interactions

Tobias Baur; Gregor Mehlmann; Ionut Damian; Florian Lingenfelser; Johannes Wagner; Birgit Lugrin; Elisabeth André; Patrick Gebhard

The outcome of interpersonal interactions depends not only on the contents that we communicate verbally, but also on nonverbal social signals. Because a lack of social skills is a common problem for a significant number of people, serious games and other training environments have recently become the focus of research. In this work, we present NovA (Nonverbal behavior Analyzer), a system that automatically analyzes and facilitates the interpretation of social signals in a bidirectional interaction with a conversational agent. It records data of interactions, detects relevant social cues, and creates descriptive statistics for the recorded data with respect to the agent's behavior and the context of the situation. This enhances the possibilities for researchers to automatically label corpora of human-agent interactions and to give users feedback on the strengths and weaknesses of their social behavior.


International Conference on Multimodal Interfaces | 2011

A systematic discussion of fusion techniques for multi-modal affect recognition tasks

Florian Lingenfelser; Johannes Wagner; Elisabeth André

Recently, automatic emotion recognition has been established as a major research topic in the area of human-computer interaction (HCI). Since humans express emotions through various channels, a user's emotional state can naturally be perceived by combining emotional cues derived from all available modalities. Yet most effort has been put into single-channel emotion recognition, while only a few studies focusing on the fusion of multiple channels have been published. Even though most of these studies apply rather simple fusion strategies, such as the sum or product rule, some of the reported results show promising improvements compared to the single channels. Such results encourage investigation of whether there is further potential for enhancement when more sophisticated methods are incorporated. We therefore apply a wide variety of possible fusion techniques, such as feature fusion, decision-level combination rules, meta-classification, and hybrid fusion. We carry out a systematic comparison of a total of 16 fusion methods on different corpora and compare results using a novel visualization technique. We find that multi-modal fusion is in almost every case at least on par with single-channel classification, though homogeneous results within corpora point to interchangeability between concrete fusion schemes.


Virtual Reality International Conference | 2014

XIM-engine: a software framework to support the development of interactive applications that uses conscious and unconscious reactions in immersive mixed reality

Pedro Omedas; Alberto Betella; Riccardo Zucca; Xerxes D. Arsiwalla; Daniel Pacheco; Johannes Wagner; Florian Lingenfelser; Elisabeth André; Daniele Mazzei; Antonio Lanata; Alessandro Tognetti; Danilo De Rossi; Antoni Grau; Alex Goldhoorn; Edmundo Guerra; René Alquézar; Alberto Sanfeliu; Paul F. M. J. Verschure

The development of systems that allow multimodal interpretation of human-machine interaction is crucial to advance our understanding and validation of theoretical models of user behavior. In particular, a system capable of collecting, perceiving, and interpreting unconscious behavior can provide rich contextual information for an interactive system. One possible application for such a system is the exploration of complex data through immersion, where massive amounts of data are generated every day, both by humans and by computer processes that digitize information at different scales and resolutions, thus exceeding our processing capacity. We need tools that accelerate our understanding and generation of hypotheses over the datasets, guide our searches, and prevent data overload. We describe XIM-engine, a bio-inspired software framework designed to capture and analyze multi-modal human behavior in an immersive environment. The framework allows studies that can advance our understanding of the use of conscious and unconscious reactions in interactive systems.


International Conference on Computer Graphics and Interactive Techniques | 2013

Advanced interfaces to stem the data deluge in mixed reality: placing human (un)consciousness in the loop

Alberto Betella; Enrique Martinez; Riccardo Zucca; Xerxes D. Arsiwalla; Pedro Omedas; Sytse Wierenga; Anna Mura; Johannes Wagner; Florian Lingenfelser; Elisabeth André; Daniele Mazzei; Alessandro Tognetti; Antonio Lanata; Danilo De Rossi; Paul F. M. J. Verschure

We live in an era of data deluge, which requires novel tools to effectively extract, analyze, and understand the massive amounts of data produced by the study of natural and artificial phenomena in many areas of research.


Künstliche Intelligenz | 2011

Social Signal Interpretation (SSI)

Johannes Wagner; Florian Lingenfelser; Nikolaus Bee; Elisabeth André

The development of anticipatory user interfaces is a key issue in human-centred computing. Building systems that allow humans to communicate with a machine in the same natural and intuitive way as they would with each other requires the detection and interpretation of the user's affective and social signals. These are expressed in various and often complementary ways, including gestures, speech, facial expressions, etc. Implementing fast and robust recognition engines is not only a necessary but also a challenging task. In this article, we introduce our Social Signal Interpretation (SSI) tool, a framework dedicated to supporting the development of such online recognition systems. The paper at hand discusses the processing of four modalities, namely audio, video, gesture, and biosignals, with a focus on affect recognition, and explains various approaches to fusing the extracted information into a final decision.


Affective Computing and Intelligent Interaction | 2015

The Belfast storytelling database: A spontaneous social interaction database with laughter focused annotation

William Curran; Johannes Wagner; Florian Lingenfelser; Elisabeth André

To support the endeavor of creating intelligent interfaces between computers and humans, the use of training materials based on realistic human-human interactions has been recognized as a crucial task. One of the effects of the creation of these databases is an increased realization of the importance of often-overlooked social signals and behaviours in organizing and orchestrating our interactions. Laughter is one of these key social signals; its importance in maintaining the smooth flow of human interaction has only recently become apparent in the embodied conversational agent domain. In turn, these realizations require training data that focus on these key social signals. This paper presents a database that is well annotated and theoretically constructed with respect to understanding laughter as it is used within human social interaction. Its construction, motivation, annotation, and availability are presented in detail.

Collaboration


Dive into Florian Lingenfelser's collaborations.

Top Co-Authors

Stefanos Vrochidis

Information Technology Institute
