Andrea Christensen
University of Tübingen
Publications
Featured research published by Andrea Christensen.
Journal of Vision | 2009
Claire L. Roether; Lars Omlor; Andrea Christensen; Martin A. Giese
Human observers readily recognize emotions expressed in body movement. Their perceptual judgments are based on simple movement features, such as overall speed, but also on more intricate posture and dynamic cues. The systematic analysis of such features is complicated due to the difficulty of considering the large number of potentially relevant kinematic and dynamic parameters. To identify emotion-specific features we motion-captured the neutral and emotionally expressive (anger, happiness, sadness, fear) gaits of 25 individuals. Body posture was characterized by average flexion angles, and a low-dimensional parameterization of the spatio-temporal structure of joint trajectories was obtained by approximation with a nonlinear mixture model. Applying sparse regression, we extracted critical emotion-specific posture and movement features, which typically depended only on a small number of joints. The features we extracted from the motor behavior closely resembled features that were critical for the perception of emotion from gait, determined by a statistical analysis of classification and rating judgments of 21 observers presented with avatars animated with the recorded movements. The perceptual relevance of these features was further supported by another experiment showing that artificial walkers containing only the critical features induced high-level after-effects matching those induced by adaptation with natural emotional walkers.
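As a rough illustration of the sparse-regression step described above, the following Python sketch selects a small subset of gait features with an L1-penalised (Lasso) regression; the feature matrix, ratings, and penalty value are invented for the example and do not come from the study.

```python
# Hypothetical sketch: selecting emotion-specific gait features with a sparse
# (L1-regularised) regression, loosely following the idea described above.
# Feature names, data, and labels are illustrative, not from the study.
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# 25 walkers x 12 posture/movement features (e.g. mean joint flexion angles,
# gait speed, amplitude of the first trajectory component, ...).
X = rng.normal(size=(25, 12))
# Target: e.g. a perceived-sadness rating, here driven by two features
# plus noise purely for demonstration purposes.
y = 1.2 * X[:, 0] - 0.8 * X[:, 3] + rng.normal(scale=0.5, size=25)

X_std = StandardScaler().fit_transform(X)

# The L1 penalty drives most coefficients to exactly zero, so only a few
# joints/features remain in the model, mirroring the sparse
# emotion-specific feature sets described in the abstract.
model = Lasso(alpha=0.3).fit(X_std, y)
selected = np.flatnonzero(model.coef_)
print("features with non-zero weight:", selected)
```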
The Journal of Neuroscience | 2011
Andrea Christensen; Winfried Ilg; Martin Giese
The execution of motor behavior influences concurrent visual action observation and especially the perception of biological motion. The neural mechanisms underlying this interaction between perception and motor execution are not exactly known. In addition, the available experimental evidence is partially inconsistent because previous studies have reported facilitation as well as impairments of action perception by concurrent execution. Exploiting a novel virtual reality paradigm, we investigated the spatiotemporal tuning of the influence of motor execution on the perception of biological motion within a signal-detection task. Human observers were presented with point-light stimuli that were controlled by their own movements. Participants had to detect a point-light arm in a scrambled mask, either while executing waving movements or without concurrent motor execution (baseline). The temporal and spatial coherence between the observed and executed movements was parametrically varied. We found a systematic tuning of the facilitatory versus inhibitory influences of motor execution on biological motion detection with respect to the temporal and the spatial congruency between observed and executed movements. Specifically, we found a gradual transition between facilitatory and inhibitory interactions for decreasing temporal synchrony and spatial congruency. This result provides evidence for a spatiotemporally highly selective coupling between dynamic motor representations and neural structures involved in the visual processing of biological motion. In addition, our study offers a unifying explanation that reconciles contradicting results about modulatory effects of motor execution on biological motion perception in previous studies.
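Detection performance in such a signal-detection task is typically summarised as the sensitivity d'; a minimal sketch, with invented trial counts standing in for the synchronous-execution and baseline conditions, might look like this:

```python
# Minimal signal-detection sketch: sensitivity (d') per experimental condition
# from hits and false alarms. Counts below are made up for illustration.
from scipy.stats import norm

def dprime(hits, misses, false_alarms, correct_rejections):
    """d' with a simple correction to avoid infinite z-scores at 0 or 1."""
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

# Hypothetical counts: detecting the point-light arm during synchronous
# execution vs. the no-execution baseline.
print("synchronous execution d':", dprime(44, 6, 9, 41))
print("baseline d':            ", dprime(38, 12, 10, 40))
```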
NeuroImage | 2014
Sabrina Schneider; Andrea Christensen; Florian B. Häußinger; Andreas J. Fallgatter; Martin A. Giese; Ann-Christine Ehlis
The ability to recognize and adequately interpret emotional states in others plays a fundamental role in regulating social interaction. Body language is an essential element of nonverbal communication which is often perceived prior to facial expression. However, the neural networks that underlie the processing of emotionally expressive body movement and body posture are poorly understood. Thirty-three healthy subjects were investigated using the optical imaging method functional near-infrared spectroscopy (fNIRS) during the performance of a newly developed emotion discrimination paradigm consisting of faceless avatars expressing fearful, angry, sad, happy or neutral gait patterns. Participants were instructed to judge (a) the presented emotional state (emotion task) and (b) the observed walking speed of the respective avatar (speed task). We measured increases in cortical oxygenated haemoglobin (O2HB) in response to visual stimulation during emotion discrimination. These O2HB concentration changes were enhanced for negative emotions in contrast to neutral gait sequences in right occipito-temporal and left temporal and temporo-parietal brain regions. Moreover, fearful and angry bodies elicited higher activation increases during the emotion task compared to the speed task. Haemodynamic responses were correlated with a number of behavioural measures, whereby a positive relationship between emotion regulation strategy preference and O2HB concentration increases after sad walks was mediated by the ability to accurately categorize sad walks. Our results support the idea of a distributed brain network involved in the recognition of bodily emotion expression that comprises visual association areas as well as body/movement perception-specific cortical regions that are also sensitive to emotion. This network is activated less when the emotion is not intentionally processed (i.e. during the speed task). Furthermore, activity of this perceptive network is indirectly connected to active emotion regulation processes, mediated by the ability to correctly recognize emotions. We conclude that a full understanding of emotion perception and its neural substrate requires the investigation of dynamic representations and means of expression other than the face.
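The mediation result mentioned above can be illustrated with a simple product-of-coefficients sketch; the variables below are simulated and only stand in for regulation preference (X), categorisation accuracy for sad walks (M), and O2HB increase after sad walks (Y).

```python
# Hedged sketch of a simple mediation analysis (product of coefficients).
# All data are simulated; this is not the analysis pipeline of the paper.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 33
x = rng.normal(size=n)                         # regulation strategy preference
m = 0.6 * x + rng.normal(scale=0.8, size=n)    # accuracy for sad walks
y = 0.5 * m + rng.normal(scale=0.8, size=n)    # O2HB increase after sad walks

a = sm.OLS(m, sm.add_constant(x)).fit().params[1]                         # X -> M
b = sm.OLS(y, sm.add_constant(np.column_stack([m, x]))).fit().params[1]   # M -> Y | X
print("indirect (mediated) effect a*b:", a * b)
```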
Scientific Reports | 2015
Lucia Maria Sacheli; Andrea Christensen; Martin A. Giese; Nick Taubert; Enea Francesco Pavone; Salvatore Maria Aglioti; Matteo Candidi
During social interactions people automatically apply stereotypes in order to rapidly categorize others. Racial differences are among the most powerful cues that drive these categorizations and modulate our emotional and cognitive reactivity to others. We investigated whether implicit racial bias may also shape hand kinematics during the execution of realistic joint actions with virtual in- and out-group partners. Caucasian participants were required to perform synchronous imitative or complementary reach-to-grasp movements with avatars that had different skin color (white and black) but showed identical action kinematics. Results demonstrate that stronger visuo-motor interference (indexed here as hand kinematics differences between complementary and imitative actions) emerged: i) when participants were required to predict the partner's action goal in order to adapt their own movements online accordingly; ii) during interactions with the in-group partner, indicating that the partner's racial membership modulates interactive behaviors. Importantly, the in-group/out-group effect positively correlated with the implicit racial bias of each participant. Thus, visuo-motor interference during joint action, likely reflecting predictive embodied simulation of the partner's movements, is affected by cultural inter-individual differences.
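At its core, the reported link between the in-group/out-group effect and implicit bias is a between-participant correlation; a minimal sketch with simulated values could look as follows.

```python
# Illustrative sketch: relating a visuo-motor interference index to an implicit
# bias score across participants. Values are simulated, not study data.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(2)
n = 30

# Interference index: per-participant difference in a kinematic parameter
# (e.g. maximum grip aperture) between complementary and imitative trials.
interference_ingroup = rng.normal(loc=1.0, scale=0.5, size=n)
implicit_bias = 0.4 * interference_ingroup + rng.normal(scale=0.5, size=n)

r, p = pearsonr(implicit_bias, interference_ingroup)
print(f"r = {r:.2f}, p = {p:.3f}")
```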
The Journal of Neuroscience | 2014
Andrea Christensen; Martin Giese; Fahad Sultan; Oliver Mueller; Sophia Goericke; Winfried Ilg; Dagmar Timmann
It is widely accepted that action and perception in humans functionally interact on multiple levels. Moreover, areas originally suggested to be predominantly motor-related, such as the cerebellum, are also involved in action observation. However, as yet, few studies have provided unequivocal evidence that the cerebellum is involved in action perception coupling (APC), specifically in the integration of motor and multisensory information for perception. We addressed this question by studying patients with focal cerebellar lesions in a virtual-reality paradigm that measures the effect of action execution on action perception, presenting self-generated movements as point lights. We measured the visual sensitivity to the point-light stimuli based on signal detection theory. Compared with healthy controls, cerebellar patients showed no beneficial influence of action execution on perception, indicating deficits in APC. Applying lesion-symptom mapping, we identified distinct areas in the dentate nucleus and the lateral cerebellum of both hemispheres that are causally involved in APC. Lesions of the right ventral dentate, the ipsilateral motor representations (lobules V/VI), and, most interestingly, the contralateral posterior cerebellum (lobule VII) impeded the benefits of motor execution on perception. We conclude that the cerebellum establishes time-dependent multisensory representations on different levels, relevant for motor control as well as supporting action perception. Ipsilateral cerebellar motor representations are thought to support the somatosensory state estimate of ongoing movements, whereas the ventral dentate and the contralateral posterior cerebellum likely support sensorimotor integration in the cerebellar-parietal loops. Both the correct somatosensory and the multisensory state representations are vital for an intact APC.
Journal of Neurophysiology | 2013
Winfried Ilg; Andrea Christensen; Oliver Mueller; Sophia L. Goericke; Martin A. Giese; Dagmar Timmann
We examined the influence of focal cerebellar lesions on working memory (n-back task), gait, and the interaction between working memory and different gait tasks in a dual-task paradigm. The analysis included 17 young patients with chronic focal lesions after cerebellar tumor resection and 17 age-matched controls. Patients showed mild to moderate ataxia. Lesion sites were examined on the basis of structural magnetic resonance imaging. N-back tasks were executed with different levels of difficulty (n = 1–4) during sitting (baseline), treadmill walking, and treadmill tandem walking (dual-task conditions). Patients exhibited decreased n-back performance particularly at difficult n-back levels and in dual-task conditions. Voxel-based lesion-symptom mapping revealed that decreased baseline n-back performance was associated with lesions of the posterolateral cerebellar hemisphere and the dentate nucleus. By contrast, decreased n-back performance in dual-task conditions was more associated with motor-related areas including dorsal portions of the dentate and the interposed nucleus, suggesting a prioritization of the motor task. During baseline walking, increased gait variability was associated with lesions in medial and intermediate regions, whereas for baseline tandem gait, lesions in the posterolateral hemispheres and the dentate nucleus became important. Posterolateral regions overlapped with regions related to baseline n-back performance. Consistently, we observed increased tandem gait variability with growing n-back difficulty in the dual-task condition. These findings suggest that dual-task effects in cerebellar patients are at least partially caused by a common involvement of posterolateral cerebellar regions in working memory and complex motor tasks.
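A much-simplified sketch of the voxel-based lesion-symptom mapping idea follows, assuming binary lesion maps and a single behavioural score per patient; all values are simulated, and real analyses add minimum-overlap thresholds and multiple-comparison correction.

```python
# Simplified voxel-based lesion-symptom mapping sketch: at each voxel, compare
# the behavioural score of patients whose lesion covers that voxel with those
# whose lesion spares it. Data and dimensions are made up for illustration.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(3)
n_patients, n_voxels = 17, 500

lesion = rng.random((n_patients, n_voxels)) < 0.2     # binary lesion maps
score = rng.normal(size=n_patients)                    # e.g. n-back accuracy

t_map = np.full(n_voxels, np.nan)
for v in range(n_voxels):
    inside, outside = score[lesion[:, v]], score[~lesion[:, v]]
    if inside.size >= 3 and outside.size >= 3:         # crude overlap criterion
        t_map[v] = ttest_ind(outside, inside, equal_var=False).statistic

print("voxels tested:", int(np.sum(~np.isnan(t_map))))
```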
NeuroImage | 2015
Hagar Goldberg; Andrea Christensen; Tamar Flash; Martin Giese; Rafael Malach
An accurate judgment of the emotional state of others is a prerequisite for successful social interaction and hence survival. Thus, it is not surprising that we are highly skilled at recognizing the emotions of others. Here we aimed to examine the neuronal correlates of emotion recognition from gait. To this end we created highly controlled dynamic body-movement stimuli based on real human motion-capture data (Roether et al., 2009). These animated avatars displayed gait in four emotional (happy, angry, fearful, and sad) and speed-matched neutral styles. For each emotional gait and its equivalent neutral gait, avatars were displayed at five morphing levels between the two. Subjects underwent fMRI scanning while classifying the emotions and the emotional intensity levels expressed by the avatars. Our results revealed robust brain selectivity to emotional compared to neutral gait stimuli in brain regions which are involved in emotion and biological motion processing, such as the extrastriate body area (EBA), fusiform body area (FBA), superior temporal sulcus (STS), and the amygdala (AMG). Brain activity in the amygdala reflected emotional awareness: for visually identical stimuli it showed a stronger response when the stimulus was perceived as emotional. Notably, for avatars gradually morphed along an emotional-expression axis, there was a parametric correlation between amygdala activity and emotional intensity. This study extends the mapping of emotional decoding in the human brain to the domain of highly controlled dynamic biological motion. Our results highlight an extensive level of brain processing of emotional information related to body language, which relies mostly on body kinematics.
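The parametric relation between morphing level and amygdala response amounts to correlating the region's trial-wise activity with stimulus intensity; a toy sketch with simulated values is shown below.

```python
# Sketch of the parametric-modulation idea: correlating a region's response
# with the morphing level (emotional intensity) of the stimuli. Simulated data.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(4)
morph_levels = np.tile(np.arange(1, 6), 20)     # 5 morph levels, repeated trials
amygdala_beta = 0.3 * morph_levels + rng.normal(scale=1.0, size=morph_levels.size)

r, p = pearsonr(morph_levels, amygdala_beta)
print(f"intensity vs. response: r = {r:.2f}, p = {p:.3g}")
```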
ACM Transactions on Applied Perception | 2011
Dominik Endres; Andrea Christensen; Lars Omlor; Martin A. Giese
Natural body movements arise in the form of temporal sequences of individual actions. During visual action analysis, the human visual system must accomplish a temporal segmentation of the action stream into individual actions. Such temporal segmentation is also essential to build hierarchical models for action synthesis in computer animation. Ideally, such segmentations should be computed automatically in an unsupervised manner. We present an unsupervised segmentation algorithm that is based on Bayesian Binning (BB) and compare it to human segmentations derived from psychophysical data. BB has the advantage that the observation model can be easily exchanged. Moreover, being an exact Bayesian method, BB allows for the automatic determination of the number and positions of segmentation points. We applied this method to motion capture sequences from martial arts and compared the results to segmentations provided by humans from movies that showed characters that were animated with the motion capture data. Human segmentation was then assessed by an interactive adjustment paradigm, where participants had to indicate segmentation points by selection of the relevant frames. Results show a good agreement between automatically generated segmentations and human performance when the trajectory segments between the transition points were modeled by polynomials of at least third order. This result is consistent with theories about differential invariants of human movements.
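The following sketch is not the exact Bayesian Binning algorithm used in the paper; it is a simplified dynamic-programming segmentation that fits a cubic polynomial per segment and charges a fixed penalty per additional segment, meant only to convey the idea of polynomial observation models with automatically chosen breakpoints.

```python
# Simplified stand-in for Bayesian Binning: dynamic-programming segmentation
# of a 1-D trajectory with a per-segment cubic polynomial cost. Toy data only.
import numpy as np

def segment_cost(t, y, deg=3):
    """Sum of squared residuals of a degree-`deg` polynomial fit."""
    coeffs = np.polyfit(t, y, deg)
    return float(np.sum((np.polyval(coeffs, t) - y) ** 2))

def segment(y, penalty=0.5, deg=3, min_len=8):
    """Segment boundaries for a 1-D trajectory via dynamic programming."""
    n = len(y)
    t = np.arange(n, dtype=float)
    best = np.full(n + 1, np.inf)   # best[i]: minimal cost of segmenting y[:i]
    best[0] = 0.0
    prev = np.zeros(n + 1, dtype=int)
    for end in range(min_len, n + 1):
        for start in range(0, end - min_len + 1):
            if not np.isfinite(best[start]):
                continue
            c = best[start] + segment_cost(t[start:end], y[start:end], deg) + penalty
            if c < best[end]:
                best[end], prev[end] = c, start
    cuts, end = [], n
    while end > 0:                  # backtrack the chosen boundaries
        cuts.append(end)
        end = prev[end]
    return sorted(cuts)

# Toy joint-angle trajectory with two distinct movement phases.
rng = np.random.default_rng(6)
t = np.linspace(0.0, 2.0, 120)
y = np.where(t < 1.0, np.sin(3.0 * t), 1.5 - 2.0 * (t - 1.0) ** 2)
y = y + 0.02 * rng.standard_normal(t.size)
print("detected segment ends (frame indices):", segment(y))
```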
KI'11: Proceedings of the 34th Annual German Conference on Advances in Artificial Intelligence | 2011
Dominik Endres; Andrea Christensen; Lars Omlor; Martin A. Giese
Natural body movements are temporal sequences of individual actions. In order to realise a visual analysis of these actions, the human visual system must accomplish a temporal segmentation of action sequences. We attempt to reproduce human temporal segmentations with Bayesian binning (BB)[8]. Such a reproduction would not only help our understanding of human visual processing, but would also have numerous potential applications in computer vision and animation. BB has the advantage that the observation model can be easily exchanged. Moreover, being an exact Bayesian method, BB allows for the automatic determination of the number and positions of segmentation points. We report our experiments with polynomial (in time) observation models on joint angle data obtained by motion capture. To obtain human segmentation points, we generated videos by animating sequences from the motion capture data. Human segmentation was then assessed by an interactive adjustment paradigm, where participants had to indicate segmentation points by selection of the relevant frames. We find that observation models with polynomial order ≥ 3 can match human segmentations closely.
Annual Conference on Artificial Intelligence | 2011
Nick Taubert; Dominik Endres; Andrea Christensen; Martin A. Giese
We present an approach for the generative modeling of human interactions with emotional style variations. We employ a hierarchical Gaussian process latent variable model (GP-LVM) to map motion capture data of handshakes into a space of low dimensionality. The dynamics of the handshakes in this low dimensional space are then learned by a standard hidden Markov model, which also encodes the emotional style variation. To assess the quality of generated and rendered handshakes, we asked human observers to rate them for realism and emotional content. We found that generated and natural handshakes are virtually indistinguishable, proving the accuracy of the learned generative model.
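A rough sketch of this two-stage pipeline follows, with two loud simplifications: PCA stands in for the hierarchical GP-LVM, and a plain Gaussian HMM from hmmlearn models the latent dynamics; all motion data are simulated, so the output only illustrates the shape of the workflow.

```python
# Simplified stand-in for the generative pipeline described above:
# dimensionality reduction (PCA instead of a hierarchical GP-LVM), an HMM over
# the latent trajectories, and generation by sampling + back-projection.
import numpy as np
from sklearn.decomposition import PCA
from hmmlearn.hmm import GaussianHMM

rng = np.random.default_rng(5)

# Simulated motion capture: 10 handshake trials, 100 frames, 60 joint-angle dims.
trials = [rng.normal(size=(100, 60)).cumsum(axis=0) for _ in range(10)]
stacked = np.vstack(trials)

# 1) Embed the high-dimensional poses in a low-dimensional latent space.
pca = PCA(n_components=3)
latent = pca.fit_transform(stacked)

# 2) Learn the latent dynamics (in the paper, also the style variation) with an HMM.
hmm = GaussianHMM(n_components=5, covariance_type="diag", n_iter=50)
hmm.fit(latent, lengths=[100] * 10)

# 3) Generate a new latent trajectory and map it back to joint angles.
sampled_latent, _ = hmm.sample(100)
generated_motion = pca.inverse_transform(sampled_latent)
print(generated_motion.shape)   # (100, 60): one synthetic handshake sequence
```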