Lars Omlor
University of Tübingen
Publications
Featured research published by Lars Omlor.
Journal of Vision | 2009
Claire L. Roether; Lars Omlor; Andrea Christensen; Martin A. Giese
Human observers readily recognize emotions expressed in body movement. Their perceptual judgments are based on simple movement features, such as overall speed, but also on more intricate posture and dynamic cues. The systematic analysis of such features is complicated by the large number of potentially relevant kinematic and dynamic parameters. To identify emotion-specific features, we motion-captured the neutral and emotionally expressive (anger, happiness, sadness, fear) gaits of 25 individuals. Body posture was characterized by average flexion angles, and a low-dimensional parameterization of the spatio-temporal structure of joint trajectories was obtained by approximation with a nonlinear mixture model. Applying sparse regression, we extracted critical emotion-specific posture and movement features, which typically depended only on a small number of joints. The features extracted from the motor behavior closely resembled those critical for the perception of emotion from gait, as determined by a statistical analysis of classification and rating judgments from 21 observers presented with avatars animated with the recorded movements. The perceptual relevance of these features was further supported by a second experiment showing that artificial walkers containing only the critical features induced high-level after-effects matching those induced by adaptation with natural emotional walkers.
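The sparse-regression step lends itself to a compact illustration. Below is a minimal sketch using L1-regularized logistic regression on a hypothetical feature matrix; the dimensions, labels, and data are placeholders, not the study's actual pipeline.

```python
# Sketch: extracting sparse emotion-specific features with L1-regularized
# regression. Feature matrix and labels are hypothetical placeholders;
# the published pipeline differs in detail.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_walkers, n_features = 25, 40          # e.g. flexion angles, amplitudes
X = rng.normal(size=(n_walkers, n_features))
y = rng.integers(0, 2, size=n_walkers)  # 1 = angry gait, 0 = neutral (placeholder)

# The L1 penalty drives most coefficients to zero, so the surviving
# features can be read off as the emotion-specific ones.
clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.5)
clf.fit(X, y)
critical = np.flatnonzero(clf.coef_[0])
print("features with non-zero weight:", critical)
```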
Experimental Brain Research | 2009
Avi Barliya; Lars Omlor; Martin A. Giese; Tamar Flash
The law of intersegmental coordination is a kinematic law that describes the coordination patterns among the elevation angles of the lower limb segments during locomotion (Borghese et al. in J Physiol 494:863–879, 1996). This coordination pattern reduces the number of degrees of freedom of the lower limb to two, i.e. the elevation angles covary along a plane in angular space. The properties of the plane that constrains the time course of the elevation angles have been extensively studied, and its orientation was found to be correlated with gait velocity and energy expenditure (Bianchi et al. in J Neurophysiol 79:2155–2170, 1998). Here, we present a mathematical model that represents the rotations of the elevation angles in terms of simple oscillators with appropriate phase shifts between them. The model explains what requirements the time courses of the elevation angles must fulfill in order for the angular covariation relationship to be planar. Moreover, an analytical formulation is proposed for both the orientation of the plane and the eccentricity of the nearly elliptical shape that is generated within this plane, in terms of the amplitudes and relative phases of the first harmonics of the segments' elevation angles. The model sheds new light on possible interactions among the Central Pattern Generators thought to underlie the control of biped locomotion. It precisely specifies how any two segments in the limb interact, and how a change in gait velocity affects the orientation of the intersegmental coordination plane, mainly through a change in the phase shifts between the segments. Implications of this study for the neural control of locomotion and other motor activities are discussed.
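The planarity prediction can be checked directly: if each elevation angle is dominated by its first harmonic, all three angles are linear combinations of cos ωt and sin ωt, so their covariation is exactly planar. A small sketch with illustrative amplitudes and phases (not fitted gait data):

```python
# Sketch: elevation angles as first-harmonic oscillators with phase shifts,
# plus a PCA check that their covariation is planar (third eigenvalue ~ 0).
# Amplitudes and phases are illustrative only.
import numpy as np

t = np.linspace(0.0, 1.0, 200)                      # one gait cycle
omega = 2.0 * np.pi
amps = np.array([20.0, 25.0, 30.0])                 # thigh, shank, foot (deg)
phases = np.array([0.0, 0.3, 0.6]) * np.pi          # relative phase shifts
angles = amps[:, None] * np.cos(omega * t - phases[:, None])

# PCA on the 3 x T angle matrix: planarity means the smallest
# eigenvalue of the covariance is (near) zero.
eigvals = np.sort(np.linalg.eigvalsh(np.cov(angles)))[::-1]
print("variance explained by first two components: %.6f"
      % (eigvals[:2].sum() / eigvals.sum()))
```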
Current Biology | 2008
Claire L. Roether; Lars Omlor; Martin A. Giese
Emotional behaviours in humans and animals, such as kissing or tail wagging, sometimes show characteristic lateral asymmetries [1,2]. Such asymmetries suggest differences in the involvement of the cerebral hemispheres in the expression of emotion. An established example is the expressiveness advantage of the left hemiface that has been demonstrated with chimeric face stimuli, static pictures of emotional expressions with one side of the face replaced by the mirror image of the other [3]. While this result has been interpreted as support for a right-hemisphere dominance in emotion expression [4], substantial ipsilateral innervation of the relevant facial musculature [5] and findings of reduced or reversed asymmetry for positive emotions [3,6] complicate the conclusion. It is therefore critical to investigate lateral asymmetries in emotion expression using effectors with clearly contralateral innervation. We report here a pronounced lateral asymmetry for emotional full-body movements [7]: the left body side moves with higher amplitude and energy and is perceived as more emotionally expressive than the right. This finding provides strong support for a right-hemisphere dominance in the control of emotional expressions, independent of the specific effector.
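As a toy illustration of the kind of asymmetry measure involved, the sketch below computes a normalized left-right amplitude index on synthetic joint-angle traces; the paper's kinetic-energy measure is richer than this.

```python
# Sketch: a simple left-right asymmetry index on joint-angle amplitudes.
# Trajectories are synthetic; a positive index means the left side
# moves with larger range.
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0, 2 * np.pi, 500)
left_arm = 1.2 * np.sin(t) + 0.05 * rng.normal(size=t.size)   # larger amplitude
right_arm = 1.0 * np.sin(t) + 0.05 * rng.normal(size=t.size)

def amplitude(x):
    return x.max() - x.min()

ai = (amplitude(left_arm) - amplitude(right_arm)) / \
     (amplitude(left_arm) + amplitude(right_arm))
print("asymmetry index (positive = left side dominant): %.3f" % ai)
```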
Neurocomputing | 2007
Lars Omlor; Martin A. Giese
Experimental and computational studies suggest that complex motor behavior is based on simpler spatio-temporal primitives, or synergies. This has been demonstrated by application of dimensionality reduction techniques to signals obtained by electrophysiological and EMG recordings during the execution of limb movements. However, the existence of spatio-temporal primitives on the level of the joint angle trajectories of complex full-body movements remains less explored. Known blind source separation techniques, like PCA and ICA, tend to extract relatively large numbers of sources from such trajectories, which are typically difficult to interpret. For the example of emotional human gait patterns, we present a new nonlinear source separation technique that treats temporal delays of signals in an efficient manner. The method makes it possible to approximate high-dimensional movement trajectories very accurately with a small number of learned spatio-temporal primitives or source signals. It is demonstrated that the new method is significantly more accurate than other common techniques. Combining this method with sparse multivariate regression, we identified spatio-temporal primitives that are specific for different emotions in gait. The extracted emotion-specific features closely match features shown to be critical for the perception of emotions from gait patterns in visual psychophysics studies. This suggests the existence of emotion-specific motor primitives in human gait.
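The generative side of such a delayed-mixture model is easy to write down: each joint trajectory is a weighted superposition of time-shifted sources, x_i(t) = Σ_j a_ij s_j(t − τ_ij). The sketch below builds trajectories from invented sources, weights, and delays; the learning (demixing) step is omitted.

```python
# Sketch of an anechoic (delayed) mixture: x_i(t) = sum_j a_ij s_j(t - tau_ij).
# Sources, weights and delays are invented for illustration.
import numpy as np

t = np.linspace(0.0, 1.0, 400)
sources = np.stack([np.sin(2 * np.pi * t),           # s_1: fundamental
                    np.sin(4 * np.pi * t)])          # s_2: first overtone
weights = np.array([[1.0, 0.3],                      # a_ij for two joints
                    [0.5, 0.8]])
delays = np.array([[0.00, 0.10],                     # tau_ij in cycle units
                   [0.05, 0.00]])

def delayed(s, tau):
    """Circularly shift a periodic source by a delay given as cycle fraction."""
    return np.roll(s, int(round(tau * s.size)))

trajectories = np.stack([
    sum(weights[i, j] * delayed(sources[j], delays[i, j])
        for j in range(sources.shape[0]))
    for i in range(weights.shape[0])])
print(trajectories.shape)   # (2 joints, 400 samples)
```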
Statistical and Geometrical Approaches to Visual Motion Analysis | 2009
Martin A. Giese; Albert Mukovskiy; Aee-Ni Park; Lars Omlor; Jean-Jacques E. Slotine
The synthesis of realistic complex body movements in real-time is a difficult problem in computer graphics and in robotics. High realism requires the accurate modeling of the details of the trajectories for a large number of degrees of freedom. At the same time, real-time animation necessitates flexible systems that can react in an online fashion, adapting to external constraints. Such online systems are suitable for the self-organization of complex behavior by the dynamic interaction between multiple autonomous characters in the scene. In this paper we present a novel approach for the online synthesis of realistic human body movements. The proposed model is inspired by concepts from motor control. It approximates movements by superposition of movement primitives (synergies) that are learned from motion capture data applying a new blind source separation algorithm. The learned generative model can synthesize periodic and non-periodic movements, achieving high degrees of realism with a very small number of synergies. For obtaining a system that is suitable for real-time synthesis, the primitives are approximated by the solutions of low-dimensional nonlinear dynamical systems (dynamic primitives). The application of a new type of stability analysis (contraction theory) permits the design of complex networks of such dynamic primitives, resulting in a stable overall system architecture. We discuss a number of applications of this framework and demonstrate that it is suitable for the self-organization of complex behaviors, such as navigation, synchronized crowd behavior and dancing.
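A single periodic dynamic primitive can be illustrated with a Hopf oscillator, whose stable limit cycle produces a sinusoidal output from any initial condition. The parameters below are illustrative; the paper's coupled networks and contraction analysis are not reproduced here.

```python
# Sketch: a periodic "dynamic primitive" as the limit cycle of a Hopf
# oscillator. The trajectory converges to a circle of radius sqrt(mu),
# so the output oscillation is robust to the starting state.
import numpy as np
from scipy.integrate import solve_ivp

mu, omega = 1.0, 2.0 * np.pi   # limit-cycle radius sqrt(mu), frequency 1 Hz

def hopf(t, z):
    x, y = z
    r2 = x * x + y * y
    return [(mu - r2) * x - omega * y,
            (mu - r2) * y + omega * x]

sol = solve_ivp(hopf, (0.0, 5.0), [0.1, 0.0], dense_output=True)
t = np.linspace(0.0, 5.0, 1000)
x, y = sol.sol(t)
# x(t) settles into a stable sinusoid that could drive a learned synergy.
print("final radius ~ %.3f (expected %.3f)" % (np.hypot(x[-1], y[-1]), np.sqrt(mu)))
```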
Archive | 2009
Claire L. Roether; Lars Omlor; Martin Giese
Body movements can reveal important information about a person’s emotional state. The visual system efficiently extracts subtle information about the emotional style of a movement, even from point-light stimuli. While much existing work has addressed the problem of style perception from a holistic perspective, we investigate which features are critical for the recognition of emotions from full-body movements. This work is inspired by the motor-control concept of “synergies,” spatial components of movement that encompass only a limited set of jointly controlled degrees of freedom. We present an algorithm that learns a highly compact generative model for the joint-angle trajectories of emotional body movements. The model approximates movements by nonlinear superpositions of a small number of basis components. Applying sparse feature learning, we extracted from this representation the spatial components that are characteristic of happy, sad, fearful and angry movements. The extracted features for walking were highly consistent with emotion-specific features of gait described in the literature. We further show that this type of result is not restricted to locomotor movements. Compared to other techniques, the proposed algorithm requires significantly fewer basis components to achieve the same level of accuracy. In addition, we show that feature learning based on these less compact representations does not yield easily interpretable local features. Based on the features extracted from the trajectory data, we studied how spatio-temporal components that convey information about emotional styles of body movements are integrated in visual perception. Using motion morphing to vary the information content of different components, we show that the integration of spatial features is slightly suboptimal compared to a Bayesian ideal observer. Moreover, integration was worse for components that matched the components extracted from the movement trajectories. This result is inconsistent with the hypothesis that emotional body movements are recognized by a parallel internal simulation of the underlying motor behavior. Instead, it seems that the recognition of emotion from body movements is based on a purely visual process that is influenced by the distribution of attention.
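The Bayesian ideal-observer benchmark mentioned above reduces, for two cues with independent Gaussian noise, to inverse-variance weighting. A worked sketch with invented reliabilities:

```python
# Sketch: optimal two-cue integration under independent Gaussian noise.
# The estimates and variances below are invented numbers.

# Expressiveness estimates from two spatial components, with variances.
m_upper, var_upper = 0.7, 0.04    # e.g. arm-related component (assumed)
m_lower, var_lower = 0.5, 0.09    # e.g. leg-related component (assumed)

# Inverse-variance weighting; the combined variance is smaller than
# either cue's variance, which is the ideal-observer signature.
w_upper = (1 / var_upper) / (1 / var_upper + 1 / var_lower)
m_comb = w_upper * m_upper + (1 - w_upper) * m_lower
var_comb = 1 / (1 / var_upper + 1 / var_lower)
print("combined estimate %.3f, variance %.4f" % (m_comb, var_comb))
```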
Tests and Proofs | 2011
Dominik Endres; Andrea Christensen; Lars Omlor; Martin A. Giese
Natural body movements arise as temporal sequences of individual actions. During visual action analysis, the human visual system must accomplish a temporal segmentation of the action stream into individual actions. Such temporal segmentation is also essential for building hierarchical models for action synthesis in computer animation. Ideally, such segmentations should be computed automatically in an unsupervised manner. We present an unsupervised segmentation algorithm based on Bayesian Binning (BB) and compare it to human segmentations derived from psychophysical data. BB has the advantage that the observation model can be easily exchanged. Moreover, being an exact Bayesian method, BB allows for the automatic determination of the number and positions of segmentation points. We applied this method to motion capture sequences from martial arts and compared the results to segmentations provided by humans viewing movies of characters animated with the motion capture data. Human segmentation was assessed by an interactive adjustment paradigm in which participants indicated segmentation points by selecting the relevant frames. Results show good agreement between automatically generated segmentations and human performance when the trajectory segments between the transition points were modeled by polynomials of at least third order. This result is consistent with theories about differential invariants of human movements.
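As a rough stand-in for Bayesian Binning, the sketch below segments a 1-D trajectory into polynomial pieces by dynamic programming over a penalized least-squares score; unlike BB it does not sum over segmentations exactly, and all parameters are illustrative.

```python
# Sketch: piecewise-polynomial segmentation by dynamic programming.
# A simplified stand-in for Bayesian Binning, which is exact Bayesian.
import numpy as np

def segment(y, order=3, penalty=0.1, min_len=8):
    n = len(y)
    t = np.arange(n, dtype=float)

    def cost(i, j):                       # squared residual of fit on y[i:j]
        coeffs = np.polyfit(t[i:j], y[i:j], order)
        resid = y[i:j] - np.polyval(coeffs, t[i:j])
        return float(resid @ resid)

    best = np.full(n + 1, np.inf)         # best[j]: score of y[:j]
    best[0], cut = 0.0, np.zeros(n + 1, dtype=int)
    for j in range(min_len, n + 1):
        for i in range(0, j - min_len + 1):
            if best[i] == np.inf:
                continue                  # i is not a reachable boundary
            c = best[i] + cost(i, j) + penalty   # penalty per extra segment
            if c < best[j]:
                best[j], cut[j] = c, i
    bounds, j = [], n                     # backtrack the optimal cuts
    while j > 0:
        bounds.append(j)
        j = cut[j]
    return sorted(bounds)

# Two exact polynomial pieces with a kink: a boundary near t = 50
# should be recovered.
t = np.arange(100, dtype=float)
y = np.where(t < 50, (t / 50) ** 3, 1.0 - 2 * (t - 50) / 50)
print(segment(y))
```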
Perception and Interactive Technologies | 2006
Lars Omlor; Martin A. Giese
Experimental and computational studies suggest that complex motor behavior is based on simpler spatio-temporal primitives. This has been demonstrated by application of dimensionality reduction techniques to signals from electrophysiological and EMG recordings during execution of limb movements. However, the existence of such primitives at the level of kinematics, i.e. the joint trajectories of complex human full-body movements, remains less explored. Known blind source separation techniques, e.g. PCA and ICA, tend to extract relatively large numbers of components or source signals from such trajectories, which are typically difficult to interpret. For the analysis of emotional human gait patterns, we present a new method for blind source separation that is based on a nonlinear generative model with additional time delays. The resulting model approximates high-dimensional movement trajectories very accurately with very few source components. Combining this method with sparse regression, we identified spatio-temporal primitives for the encoding of individual emotions in gait. We verified that these primitives match features that psychophysical studies have shown to be important for the perception of emotions from gait. This suggests the existence of emotion-specific movement primitives that might be useful for the simulation of emotional behavior in technical applications.
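One building block of estimating such a delayed-mixture model is recovering the time shift between two signals, which can be sketched with a circular cross-correlation; the signals below are synthetic, and the full algorithm estimates mixing weights as well.

```python
# Sketch: estimating a time delay by circular cross-correlation, one
# building block of anechoic demixing. Signals are synthetic.
import numpy as np

rng = np.random.default_rng(2)
n, true_lag = 400, 25
s = rng.normal(size=n)
x = np.roll(s, true_lag) + 0.1 * rng.normal(size=n)  # delayed, noisy copy

# Correlation peaks at the lag that best aligns s with x.
corr = np.array([np.dot(np.roll(s, k), x) for k in range(n)])
est = int(np.argmax(corr))
print("true lag %d, estimated lag %d" % (true_lag, est))
```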
KI'11: Proceedings of the 34th Annual German Conference on Advances in Artificial Intelligence | 2011
Dominik Endres; Andrea Christensen; Lars Omlor; Martin A. Giese
Natural body movements are temporal sequences of individual actions. In order to realise a visual analysis of these actions, the human visual system must accomplish a temporal segmentation of action sequences. We attempt to reproduce human temporal segmentations with Bayesian binning (BB) [8]. Such a reproduction would not only help our understanding of human visual processing, but would also have numerous potential applications in computer vision and animation. BB has the advantage that the observation model can be easily exchanged. Moreover, being an exact Bayesian method, BB allows for the automatic determination of the number and positions of segmentation points. We report our experiments with polynomial (in time) observation models on joint angle data obtained by motion capture. To obtain human segmentation points, we generated videos by animating sequences from the motion capture data. Human segmentation was then assessed by an interactive adjustment paradigm, where participants had to indicate segmentation points by selection of the relevant frames. We find that observation models with polynomial order ≥ 3 can match human segmentations closely.
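One way to read the order ≥ 3 finding is that low-order polynomials cannot capture smooth, minimum-jerk-like movement segments. The sketch below fits increasing polynomial orders to a quintic minimum-jerk profile used as a stand-in segment; the residual drops sharply once the order is high enough.

```python
# Sketch: fit quality of polynomial observation models of increasing
# order on a minimum-jerk-like segment (a quintic stand-in).
import numpy as np

t = np.linspace(0.0, 1.0, 100)
y = 10 * t**3 - 15 * t**4 + 6 * t**5   # minimum-jerk position profile

for order in (1, 2, 3, 4, 5):
    resid = y - np.polyval(np.polyfit(t, y, order), t)
    print("order %d: RMS residual %.2e" % (order, np.sqrt(np.mean(resid**2))))
```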
International Conference on Independent Component Analysis and Signal Separation | 2007
Lars Omlor; Martin A. Giese
For the extraction of sources with unsupervised learning techniques, invariance under certain transformations, such as shifts, rotations or scaling, is often a desirable property. A straightforward approach to accomplishing this goal is to include these transformations and their parameters in the mixing model. For one-dimensional signals in the presence of shifts, this problem has been termed anechoic demixing, and several algorithms for the analysis of time series have been proposed. Here, we generalize this approach to sources depending on multi-dimensional arguments and apply it to learning translation-invariant features from higher-dimensional data, such as images. A new algorithm for the solution of such high-dimensional anechoic demixing problems, based on the Wigner-Ville distribution, is presented. It solves the multi-dimensional problem by projection onto multiple one-dimensional problems. The feasibility of this algorithm is demonstrated by learning independent features from sets of real images.
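For a flavor of the representation involved, the sketch below computes a textbook-style discrete Wigner-Ville distribution of a 1-D analytic chirp; the paper's multi-dimensional variant and the projection step are not reproduced.

```python
# Sketch: a discrete Wigner-Ville distribution of a 1-D analytic signal,
# up to the standard frequency-axis scaling. Textbook implementation,
# not the paper's multi-dimensional algorithm.
import numpy as np
from scipy.signal import hilbert

n = 128
t = np.arange(n)
x = hilbert(np.cos(2 * np.pi * (0.05 + 0.001 * t) * t))  # analytic chirp

W = np.zeros((n, n))
for m in range(n):
    tau_max = min(m, n - 1 - m)                 # lags that stay in bounds
    tau = np.arange(-tau_max, tau_max + 1)
    kernel = np.zeros(n, dtype=complex)
    kernel[tau % n] = x[m + tau] * np.conj(x[m - tau])
    W[m] = np.real(np.fft.fft(kernel))          # FFT over the lag variable
print("time-frequency map shape:", W.shape)     # (time, frequency)
```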