Network


Latest external collaborations at the country level. Dive into details by clicking on the dots.

Hotspot


Dive into the research topics where Simon Alexanderson is active.

Publication


Featured research published by Simon Alexanderson.


Computer Speech & Language | 2014

Animated Lombard speech: Motion capture, facial animation and visual intelligibility of speech produced in adverse conditions

Simon Alexanderson; Jonas Beskow

In this paper we study the production and perception of speech in diverse conditions for the purposes of accurate, flexible and highly intelligible talking face animation. We recorded audio, video and facial motion capture data of a talker uttering a set of 180 short sentences, under three conditions: normal speech (in quiet), Lombard speech (in noise), and whispering. We then produced an animated 3D avatar with similar shape and appearance as the original talker and used an error minimization procedure to drive the animated version of the talker in a way that matched the original performance as closely as possible. In a perceptual intelligibility study with degraded audio we then compared the animated talker against the real talker and the audio alone, in terms of audio-visual word recognition rate across the three different production conditions. We found that the visual intelligibility of the animated talker was on par with the real talker for the Lombard and whisper conditions. In addition we created two incongruent conditions where normal speech audio was paired with animated Lombard speech or whispering. When compared to the congruent normal speech condition, Lombard animation yields a significant increase in intelligibility, despite the AV-incongruence. In a separate evaluation, we gathered subjective opinions on the different animations, and found that some degree of incongruence was generally accepted.
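
The error minimization procedure mentioned above is not spelled out in the abstract. The sketch below illustrates one common way such a fit can be set up, per frame, as a non-negative least-squares fit of blendshape weights to captured facial marker positions; the blendshape basis, marker layout, and solver choice are assumptions for illustration, not the authors' actual pipeline.

```python
# Hypothetical sketch of a per-frame error minimization: fitting avatar
# blendshape weights to captured facial marker positions by non-negative
# least squares. The blendshape basis and marker correspondences are
# assumptions, not the setup used in the paper.
import numpy as np
from scipy.optimize import nnls

def fit_blendshape_weights(neutral, basis, markers):
    """Find weights w >= 0 minimizing ||neutral + basis @ w - markers||^2.

    neutral : (3M,) rest-pose positions of M face markers, flattened
    basis   : (3M, K) per-blendshape displacement vectors
    markers : (3M,) captured marker positions for one frame, flattened
    """
    weights, residual = nnls(basis, markers - neutral)
    return weights, residual

# Example with random stand-in data (10 markers, 5 blendshapes).
rng = np.random.default_rng(0)
neutral = rng.normal(size=30)
basis = rng.normal(size=(30, 5))
true_w = np.array([0.3, 0.0, 0.8, 0.1, 0.0])
frame = neutral + basis @ true_w
w, err = fit_blendshape_weights(neutral, basis, frame)
print(np.round(w, 3), round(err, 6))
```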


ACM Transactions on Accessible Computing | 2015

Towards Fully Automated Motion Capture of Signs -- Development and Evaluation of a Key Word Signing Avatar

Simon Alexanderson; Jonas Beskow

Motion capture of signs provides unique challenges in the field of multimodal data collection. The dense packaging of visual information requires high fidelity and high bandwidth of the captured data. Even though marker-based optical motion capture provides many desirable features such as high accuracy, global fitting, and the ability to record body and face simultaneously, it is not widely used to record finger motion, especially not for articulated and syntactic motion such as signs. Instead, most signing avatar projects use costly instrumented gloves, which require long calibration procedures. In this article, we evaluate the data quality obtained from optical motion capture of isolated signs from Swedish sign language with a large number of low-cost cameras. We also present a novel dual-sensor approach to combine the data with low-cost, five-sensor instrumented gloves to provide a recording method with low manual postprocessing. Finally, we evaluate the collected data and the dual-sensor approach as transferred to a highly stylized avatar. The application of the avatar is a game-based environment for training Key Word Signing (KWS) as augmented and alternative communication (AAC), intended for children with communication disabilities.
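
The dual-sensor approach itself is not detailed in the abstract. As a rough illustration of what combining the two data streams could look like, the sketch below takes the global wrist pose from the optical system and finger flexion from a five-sensor glove and composes them into a single hand pose; the skeleton layout, sensor-to-joint mapping, and scaling are invented for the example and are not the paper's pipeline.

```python
# Minimal sketch of a dual-sensor fusion: global hand position/orientation
# from optical markers, finger flexion from a five-sensor glove. All
# constants and the joint split are illustrative assumptions.
import numpy as np

def fuse_hand_pose(wrist_pos, wrist_rot, glove_flexion, max_curl_rad=1.6):
    """Combine optical wrist pose with glove flexion into joint angles.

    wrist_pos     : (3,) wrist position from the optical system
    wrist_rot     : (3, 3) wrist rotation matrix from the optical system
    glove_flexion : (5,) normalized flexion values in [0, 1], one per finger
    """
    # Distribute each finger's single flexion reading over its three joints
    # (a common simplification when only one sensor per finger is available).
    joint_split = np.array([0.5, 0.3, 0.2])           # MCP, PIP, DIP shares
    finger_angles = np.clip(glove_flexion, 0.0, 1.0)[:, None] * joint_split
    finger_angles *= max_curl_rad                      # radians at full curl
    return {
        "wrist_position": wrist_pos,
        "wrist_rotation": wrist_rot,
        "finger_joint_angles": finger_angles,          # shape (5, 3)
    }

pose = fuse_hand_pose(np.zeros(3), np.eye(3), np.array([0.1, 0.8, 0.7, 0.6, 0.5]))
print(pose["finger_joint_angles"].shape)
```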


Motion in Games | 2016

Robust online motion capture labeling of finger markers

Simon Alexanderson; Carol O'Sullivan; Jonas Beskow

Passive optical motion capture is one of the predominant technologies for capturing high fidelity human skeletal motion, and is a workhorse in a large number of areas such as bio-mechanics, film and video games. While most state-of-the-art systems can automatically identify and track markers on the larger parts of the human body, the markers attached to fingers provide unique challenges and usually require extensive manual cleanup. In this work we present a robust online method for identification and tracking of passive motion capture markers attached to the fingers of the hands. The method is especially suited for large capture volumes and sparse marker sets of 3 to 10 markers per hand. Once trained, our system can automatically initialize and track the markers, and the subject may exit and enter the capture volume at will. By using multiple assignment hypotheses and soft decisions, it can robustly recover from a difficult situation with many simultaneous occlusions and false observations (ghost markers). We evaluate the method on a collection of sparse marker sets commonly used in industry and in the research community. We also compare the results with two of the most widely used motion capture platforms: Motion Analysis Cortex and Vicon Blade. The results show that our method is better at attaining correct marker labels and is especially beneficial for real-time applications.
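
At the core of any such labeling method is a frame-by-frame matching of observed markers to predicted label positions. The sketch below shows only that building block, a single-hypothesis one-to-one assignment with a gating distance to reject ghost markers; the paper's method goes further by keeping multiple assignment hypotheses and soft decisions.

```python
# Simplest building block of online marker labeling: assign each observed
# marker to the nearest predicted label position with a one-to-one matching
# (Hungarian algorithm) and a gating distance for occlusions/ghost markers.
# This single-hypothesis sketch is only an illustration of the matching step.
import numpy as np
from scipy.optimize import linear_sum_assignment

def label_frame(predicted, observed, gate=0.05):
    """Return dict {label_index: observation_index} for one frame.

    predicted : (L, 3) predicted positions of the L labeled markers
    observed  : (N, 3) unlabeled marker observations (may include ghosts)
    gate      : maximum allowed match distance in meters
    """
    cost = np.linalg.norm(predicted[:, None, :] - observed[None, :, :], axis=-1)
    rows, cols = linear_sum_assignment(cost)
    return {r: c for r, c in zip(rows, cols) if cost[r, c] <= gate}

pred = np.array([[0.0, 0.0, 0.0], [0.1, 0.0, 0.0], [0.2, 0.0, 0.0]])
obs = np.array([[0.101, 0.0, 0.0], [0.9, 0.9, 0.9], [0.001, 0.0, 0.0]])
print(label_frame(pred, obs))  # {0: 2, 1: 0}; marker 2 is gated out (occluded)
```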


Computers & Graphics | 2017

Real-time labeling of non-rigid motion capture marker sets

Simon Alexanderson; Carol O'Sullivan; Jonas Beskow

Passive optical motion capture is one of the predominant technologies for capturing high fidelity human motion, and is a workhorse in a large number of areas such as bio-mechanics, film and video g ...


ACM Transactions on Applied Perception | 2017

Mimebot—Investigating the Expressibility of Non-Verbal Communication Across Agent Embodiments

Simon Alexanderson; Carol O’Sullivan; Michael Neff; Jonas Beskow

Unlike their human counterparts, artificial agents such as robots and game characters may be deployed with a large variety of face and body configurations. Some have articulated bodies but lack facial features, and others may be talking heads ending at the neck. Generally, they have many fewer degrees of freedom than humans through which they must express themselves, and there will inevitably be a filtering effect when mapping human motion onto the agent. In this article, we investigate filtering effects on three types of embodiments: (a) an agent with a body but no facial features, (b) an agent with a head only, and (c) an agent with a body and a face. We performed a full performance capture of a mime actor enacting short interactions varying the non-verbal expression along five dimensions (e.g., level of frustration and level of certainty) for each of the three embodiments. We performed a crowd-sourced evaluation experiment comparing the video of the actor to the video of an animated robot for the different embodiments and dimensions. Our findings suggest that the face is especially important to pinpoint emotional reactions but is also most volatile to filtering effects. The body motion, on the other hand, had more diverse interpretations but tended to preserve the interpretation after mapping and thus proved to be more resilient to filtering.


IEEE International Conference on Automatic Face & Gesture Recognition | 2017

Computer Analysis of Sentiment Interpretation in Musical Conducting

Kelly Karipidou; Josefin Ahnlund; Anders Friberg; Simon Alexanderson; Hedvig Kjellström

This paper presents a unique dataset consisting of 20 recordings of the same musical piece, conducted with 4 different musical intentions in mind. The upper body and baton motion of a professional conductor was recorded, as well as the sound of each instrument in a professional string quartet following the conductor. The dataset is made available for benchmarking of motion recognition algorithms. An HMM-based emotion intent classification method is trained with subsets of the data, and classification of other subsets of the data shows firstly that the motion of the baton communicates energetic intention to a high degree, secondly, that the conductor’s torso, head and other arm convey calm intention to a high degree, and thirdly, that positive vs negative sentiments are communicated to a high degree through other channels than the body and baton motion – most probably, through facial expression and muscle tension conveyed through articulated hand and finger motion. The long-term goal of this work is to develop a computer model of the entire conductor-orchestra communication process; the studies presented here indicate that computer modeling of the conductor-orchestra communication is feasible.
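
One plausible reading of the HMM-based classification setup is to train one Gaussian HMM per intention class on motion feature sequences and pick the class whose model assigns a new sequence the highest likelihood. The sketch below follows that pattern with hmmlearn; the feature extraction, state counts, and class names are stand-ins, not the authors' configuration.

```python
# Hedged sketch of an HMM-based intent classifier: one Gaussian HMM per
# musical intention, classification by maximum log-likelihood. Features,
# state counts, and labels are illustrative assumptions.
import numpy as np
from hmmlearn.hmm import GaussianHMM

def train_class_hmms(sequences_by_class, n_states=5):
    """sequences_by_class: {label: list of (T_i, D) feature arrays}."""
    models = {}
    for label, seqs in sequences_by_class.items():
        X = np.vstack(seqs)
        lengths = [len(s) for s in seqs]
        model = GaussianHMM(n_components=n_states, covariance_type="diag", n_iter=50)
        model.fit(X, lengths)
        models[label] = model
    return models

def classify(models, sequence):
    """Return the label whose HMM gives the sequence the highest likelihood."""
    return max(models, key=lambda label: models[label].score(sequence))

# Toy usage with random stand-in motion features (e.g. baton velocities).
rng = np.random.default_rng(1)
data = {
    "calm":      [rng.normal(0.0, 0.5, size=(80, 6)) for _ in range(5)],
    "energetic": [rng.normal(0.0, 2.0, size=(80, 6)) for _ in range(5)],
}
models = train_class_hmms(data)
print(classify(models, rng.normal(0.0, 2.0, size=(80, 6))))  # likely "energetic"
```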


Proceedings of the Workshop on Multimodal Analyses enabling Artificial Agents in Human-Machine Interaction | 2016

Automatic annotation of gestural units in spontaneous face-to-face interaction

Simon Alexanderson; David House; Jonas Beskow

Speech and gesture co-occur in spontaneous dialogue in a highly complex fashion. There is a large variability in the motion that people exhibit during a dialogue, and different kinds of motion occur during different states of the interaction. A wide range of multimodal interface applications, for example in the fields of virtual agents or social robots, can be envisioned where it is important to be able to automatically identify gestures that carry information and discriminate them from other types of motion. While it is easy for a human to distinguish and segment manual gestures from a flow of multimodal information, the same task is not trivial to perform for a machine. In this paper we present a method to automatically segment and label gestural units from a stream of 3D motion capture data. The gestural flow is modeled with a 2-level Hierarchical Hidden Markov Model (HHMM) where the sub-states correspond to gesture phases. The model is trained based on labels of complete gesture units and self-adaptive manipulators. The model is tested and validated on two datasets differing in genre and in method of capturing motion, and outperforms a state-of-the-art SVM classifier on a publicly available dataset.
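
A flattened, single-level stand-in for the segmentation step can be sketched with an ordinary Gaussian HMM: Viterbi-decode the motion feature stream and group consecutive frames with the same state into phase segments. The state-to-phase mapping below is hypothetical, and the paper's actual model is hierarchical (a 2-level HHMM), so this only illustrates the decoding idea.

```python
# Hedged sketch of gesture segmentation by HMM decoding. The paper uses a
# two-level hierarchical HMM whose sub-states are gesture phases; here the
# hierarchy is flattened into a single Gaussian HMM, and the state-to-phase
# mapping is an assumption made purely for illustration.
import numpy as np
from hmmlearn.hmm import GaussianHMM

# Hypothetical mapping from HMM states to gesture phases.
PHASES = {0: "rest", 1: "preparation", 2: "stroke", 3: "retraction"}

def segment_gestures(model, features):
    """Viterbi-decode a (T, D) feature stream into labeled phase segments."""
    states = model.predict(features)
    segments, start = [], 0
    for t in range(1, len(states) + 1):
        if t == len(states) or states[t] != states[start]:
            segments.append((start, t, PHASES.get(states[start], "other")))
            start = t
    return segments

# Toy usage: train on random stand-in hand-motion features, then segment.
rng = np.random.default_rng(2)
train = rng.normal(size=(500, 4))
model = GaussianHMM(n_components=4, covariance_type="diag", n_iter=30).fit(train)
print(segment_gestures(model, rng.normal(size=(120, 4)))[:5])
```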


International Conference on Auditory-Visual Speech Processing (AVSP 2011), Aug 31–Sep 3, Volterra, Italy | 2011

A robotic head using projected animated faces

Samer Al Moubayed; Simon Alexanderson; Jonas Beskow; Björn Granström


12th International Conference on Auditory-Visual Speech Processing (AVSP 2013), Annecy, France, August 29–September 1, 2013 | 2013

Aspects of co-occurring syllables and head nods in spontaneous dialogue

Simon Alexanderson; David House; Jonas Beskow


TiGeR 2013: Tilburg Gesture Research Meeting: 10th International Gesture Workshop (GW) and 3rd Gesture and Speech in Interaction (GESPIN) Conference, Tilburg University, Netherlands, June 19–21, 2013 | 2013

Extracting and analyzing head movements accompanying spontaneous dialogue

Simon Alexanderson; David House; Jonas Beskow

Collaboration


Dive into Simon Alexanderson's collaboration.

Top Co-Authors

Jonas Beskow
Royal Institute of Technology

David House
Royal Institute of Technology

Kalin Stefanov
Royal Institute of Technology

Anders Friberg
Royal Institute of Technology

Björn Granström
Royal Institute of Technology

Hedvig Kjellström
Royal Institute of Technology

Iolanda Leite
Royal Institute of Technology

Margaret Zellers
Royal Institute of Technology

Samer Al Moubayed
Royal Institute of Technology