Martin Wegrzyn
Bielefeld University
Publications
Featured research published by Martin Wegrzyn.
Cortex | 2015
Martin Wegrzyn; Marcel Riehle; Kirsten Labudda; Friedrich G. Woermann; Florian Baumgartner; Stefan Pollmann; Christian G. Bien; Johanna Kissler
Humans can readily decode emotion expressions from faces and perceive them in a categorical manner. The model by Haxby and colleagues proposes a network of brain regions, each taking on a specific role in face processing. One key question is how these regions compare to one another in discriminating between various emotional facial expressions. To address this issue, we compared the predictive accuracy of all key regions of the Haxby model using multi-voxel pattern analysis (MVPA) of functional magnetic resonance imaging (fMRI) data. Regions of interest were extracted using independent meta-analytical data. Participants viewed four classes of facial expressions (happy, angry, fearful and neutral) in an event-related fMRI design while performing an orthogonal gender recognition task. Activity in all regions allowed for robust above-chance predictions. When the regions were compared directly to one another, the fusiform gyrus and superior temporal sulcus (STS) showed the highest accuracies. These results underscore the role of the fusiform gyrus, alongside the STS, as a key region in the perception of facial expressions. The study suggests that the relative roles of the various brain areas involved in perceiving facial expressions need further specification. Face processing appears to rely on more interactive and functionally overlapping neural mechanisms than previously conceptualised.
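The region-by-region comparison could be sketched, in spirit, as cross-validated pattern classification within each region of interest. The snippet below is a minimal illustration with synthetic data and a simple correlation-based nearest-class-mean classifier; it is not the authors' actual MVPA pipeline, and all sizes and signal levels are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

def decode_roi(patterns, labels, runs):
    """Leave-one-run-out decoding with a correlation-based
    nearest-class-mean classifier (illustrative stand-in for MVPA)."""
    correct = 0
    for run in np.unique(runs):
        train, test = runs != run, runs == run
        # class-mean pattern ("template") per expression from training runs
        classes = np.unique(labels)
        templates = np.array([patterns[train & (labels == c)].mean(axis=0)
                              for c in classes])
        for x, y in zip(patterns[test], labels[test]):
            r = [np.corrcoef(x, t)[0, 1] for t in templates]
            correct += classes[int(np.argmax(r))] == y
    return correct / len(labels)

# synthetic data: 4 expression classes x 5 runs x 4 trials, 50 "voxels"
classes = np.repeat([0, 1, 2, 3], 20)
runs = np.tile(np.repeat(np.arange(5), 4), 4)
signal = rng.normal(size=(4, 50))          # one fixed pattern per class
patterns = signal[classes] + rng.normal(scale=0.8, size=(80, 50))

acc = decode_roi(patterns, classes, runs)
print(f"decoding accuracy: {acc:.2f}  (chance = 0.25)")
```

Running the same procedure on each region's voxels and comparing the resulting accuracies mirrors the kind of region-wise comparison described above.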
Frontiers in Psychology | 2014
Sebastian Schindler; Martin Wegrzyn; Inga Steppacher; Johanna Kissler
Language has an intrinsically evaluative and communicative function. Words can serve to describe emotional traits and states in others and to communicate evaluations. Using electroencephalography (EEG), we investigated how the cerebral processing of emotional trait adjectives is modulated by their perceived communicative sender in anticipation of an evaluation. Sixteen students were videotaped while they described themselves. They were told that a stranger would evaluate their personality based on this recording by endorsing trait adjectives. In a control condition, a computer program supposedly selected the adjectives at random. In actuality, both conditions were random. A larger parietal N1 was found for adjectives in the supposedly human-generated condition, indicating that more visual attention is allocated to the presented adjectives when putatively interacting with a human. Between 400 and 700 ms, a fronto-central main effect of emotion was found: positive, and in tendency also negative, adjectives led to a larger late positive potential (LPP) than neutral adjectives. A centro-parietal interaction in the LPP window was due to larger LPP amplitudes for negative compared to neutral adjectives within the ‘human sender’ condition. Larger LPP amplitudes are related to stimulus elaboration and memory consolidation. Participants responded more to emotional content particularly when it was presented in a meaningful ‘human’ context. This was first observed in the early posterior negativity window (210–260 ms), although the interaction between sender and emotion reached only trend level in post hoc tests. Our results specify differential effects of even implied communicative partners on emotional language processing. They show that, given a seemingly realistic interactive setting, the mere anticipation of evaluation by a communicative partner is sufficient to increase the relevance of emotional adjectives in particular.
Psychiatry Research-neuroimaging | 2013
Martin Wegrzyn; Stefan J. Teipel; Imke Oltmann; Alexandra Bauer; Johannes Thome; Annette Großmann; Karlheinz Hauenstein; Jacqueline Höppner
We investigated the functional consequences of compromised white matter integrity in Alzheimer's disease (AD) by combining diffusion tensor imaging (DTI) and transcranial magnetic stimulation (TMS) in 19 patients with AD and 19 healthy controls. We used a region-of-interest approach and correlated the ipsilateral silent period (iSP) and the resting motor threshold (RMT) from TMS with fractional anisotropy (FA) and mean diffusivity (MD) values of the corpus callosum and corticospinal tract. Compared to controls, AD patients showed significant reductions of FA in intracortically projecting fibre tracts and widespread increases in MD. TMS data showed an increased latency of the iSP in AD patients and a decreased RMT, indicating decreased motor cortical inhibition. Although both TMS and DTI metrics were prominently altered in AD patients, impaired white matter integrity was not associated with increased iSP latency or reduced RMT, as correlations of the TMS parameters with FA and MD values in the a priori defined regions showed no significant effects. We therefore argue that, besides the direct degeneration of the underlying fibre tracts, other pathophysiological mechanisms may account for the observed decrease in transcallosal inhibition and increase in motor excitability in AD.
PLOS ONE | 2017
Martin Wegrzyn; Maria Vogt; Berna Kireclioglu; Julia Schneider; Johanna Kissler
Which facial features allow human observers to successfully recognize expressions of emotion? While the eyes and mouth have frequently been shown to be of high importance, research on facial action units has made more precise predictions about the areas involved in displaying each emotion. The present research investigated, on a fine-grained level, which physical features are relied on most when decoding facial expressions. In the experiment, individual faces expressing the basic emotions according to Ekman were hidden behind a mask of 48 tiles, which was uncovered sequentially. Participants were instructed to stop the sequence as soon as they recognized the facial expression and to assign it the correct label. For each part of the face, its contribution to successful recognition was computed, making it possible to visualize the importance of different face areas for each expression. Overall, observers relied mostly on the eye and mouth regions when successfully recognizing an emotion. Furthermore, the difference in the importance of eyes and mouth allowed the expressions to be grouped in a continuous space, ranging from sadness and fear (reliance on the eyes) to disgust and happiness (reliance on the mouth). The face parts with the highest diagnostic value for expression identification were typically located in areas corresponding to action units from the facial action coding system. A similarity analysis of the usefulness of different face parts for expression recognition demonstrated that faces cluster according to the emotion they express, rather than by low-level physical features. Also, expressions relying more on the eye or mouth region were in close proximity in the constructed similarity space. These analyses help to better understand how human observers process expressions of emotion, by delineating the mapping from facial features to psychological representation.
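A per-tile contribution score of this kind can be approximated as a simple difference: how much more often an expression is recognized when a given tile is visible than when it is hidden. The snippet below is a toy sketch on simulated trials; the tile positions and effect sizes are invented for illustration and do not reflect the study's data or its exact analysis.

```python
import numpy as np

rng = np.random.default_rng(1)
n_tiles, n_trials = 48, 2000

# hypothetical trial data: which of the 48 tiles were uncovered on each
# trial, and whether the expression was recognized; trials revealing the
# assumed "eye" tiles succeed more often
revealed = rng.random((n_trials, n_tiles)) < 0.5
eye_tiles = np.arange(10, 14)               # purely illustrative positions
p_correct = 0.25 + 0.12 * revealed[:, eye_tiles].sum(axis=1)
correct = rng.random(n_trials) < p_correct

# a tile's diagnostic value: recognition rate when it was visible minus
# recognition rate when it was hidden
diagnostic = np.array([correct[revealed[:, t]].mean()
                       - correct[~revealed[:, t]].mean()
                       for t in range(n_tiles)])
print("most diagnostic tiles:", np.argsort(diagnostic)[-4:])
```

Mapping each score back to its tile position yields the kind of importance map over the face described above.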
Amyotrophic Lateral Sclerosis | 2014
Elisabeth Kasper; Martin Wegrzyn; Ivo Marx; Christin Korp; Wolfram Kress; Reiner Benecke; Stefan J. Teipel; Johannes Prudlo
Spinal and bulbar muscular atrophy (SBMA), or Kennedy's disease, is an adult-onset hereditary neurodegenerative disorder, associated predominantly with a lower motor neuron syndrome and, eventually, endocrine and sensory disturbances. In contrast to other motor neuron diseases such as amyotrophic lateral sclerosis (ALS), the impairment of cognition in SBMA is not well documented. We conducted a systematic cross-sectional neuropsychological study in order to investigate cognition in SBMA patients more thoroughly. We investigated 20 genetically proven SBMA patients compared to 20 age- and education-matched control subjects, using a comprehensive neuropsychological test battery measuring executive functioning, attention, memory and visuospatial abilities. The SBMA patients performed significantly worse than healthy controls in three sub-tests in the executive and attention domains, concerning working memory (digit span backward task), verbal fluency (single letter fluency task) and memory storage capacity (digit span forward task). No disturbances were detected in other cognitive domains. The impairments were subclinical and not relevant to the patients’ everyday functioning. In addition, no correlations were found between cognitive scores and the CAG repeat length. In conclusion, we found minor cognitive disturbances in patients with SBMA, which could indicate subtle frontal lobe dysfunction. These findings extend our neurobiological understanding of SBMA.
PLOS ONE | 2015
Martin Wegrzyn; Isabelle Bruckhaus; Johanna Kissler
Human observers are remarkably proficient at recognizing expressions of emotion and at readily grouping them into distinct categories. When one facial expression is morphed into another, the linear changes in low-level features are insufficient to describe the changes in perception, which instead follow an s-shaped function. Important questions are whether there are single diagnostic regions in the face that drive categorical perception for certain pairings of emotion expressions, and how information in those regions interacts when presented together. We report results from two experiments with morphed fear-anger expressions, where (a) half of the face was masked or (b) composite faces made up of different expressions were presented. When isolated upper and lower halves of faces were shown, the eyes were found to be almost as diagnostic as the whole face, with the response function showing a steep category boundary. In contrast, the mouth allowed for substantially lower accuracy, and responses followed a much flatter psychometric function. When a composite face consisting of mismatched upper and lower halves was used and observers were instructed to exclusively judge the expression of either the mouth or the eyes, the to-be-ignored part always influenced perception of the target region. In line with experiment 1, the eye region exerted a much stronger influence on mouth judgements than vice versa. Again, categorical perception was significantly more pronounced for upper halves of faces. The present study shows that identification of fear and anger in morphed faces relies heavily on information from the upper half of the face, most likely the eye region. Categorical perception is possible when only the upper face half is present, but compromised when only the lower part is shown. Moreover, observers tend to integrate all available features of a face, even when trying to focus on only one part.
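The contrast between a steep category boundary (eyes) and a flat one (mouth) corresponds to the slope parameter of a logistic psychometric function fitted to the response data. A minimal sketch of such a comparison, using invented response proportions rather than the study's data:

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(x, threshold, slope):
    """Proportion of 'anger' responses along a fear-to-anger morph axis."""
    return 1.0 / (1.0 + np.exp(-slope * (x - threshold)))

morph = np.linspace(0, 1, 7)  # 0 = pure fear, 1 = pure anger

# hypothetical proportions of 'anger' responses: the eyes-only condition
# shows a steep category boundary, the mouth-only condition a flat one
p_eyes = np.array([0.03, 0.06, 0.15, 0.55, 0.88, 0.95, 0.98])
p_mouth = np.array([0.25, 0.32, 0.40, 0.52, 0.60, 0.68, 0.75])

(_, slope_eyes), _ = curve_fit(logistic, morph, p_eyes, p0=[0.5, 5.0])
(_, slope_mouth), _ = curve_fit(logistic, morph, p_mouth, p0=[0.5, 5.0])
print(f"slope eyes: {slope_eyes:.1f}, mouth: {slope_mouth:.1f}")
```

A larger fitted slope indicates a sharper category boundary, i.e. more categorical perception.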
BMC Psychology | 2017
Martin Wegrzyn; Sina Westphal; Johanna Kissler
Background: Why do certain violent criminals repeatedly find themselves engaged in brawls? Many inmates report having felt provoked or threatened by their victims, which might be due to a tendency to ascribe malicious intentions when faced with ambiguous social signals, termed the hostile attribution bias.
Methods: The present study presented morphed fear-anger faces to prison inmates with a history of violent crimes, to inmates with a history of child sexual abuse, and to matched controls from the general population. Participants performed a fear-anger decision task. Analyses compared both response frequencies and measures derived from psychophysical functions fitted to the data. In addition, a test distinguishing basic facial expressions and questionnaires for aggression, psychopathy and personality disorders were administered.
Results: Violent offenders present with a reliable hostile attribution bias, in that they rate ambiguous fear-anger expressions as more angry, compared to both the control population and perpetrators of child sexual abuse. Psychometric functions show a lowered threshold for detecting anger in violent offenders compared to the general population. This effect is especially pronounced for male faces, correlates with self-reported aggression, and presents in the absence of a general emotion recognition impairment.
Conclusions: The results indicate that a hostile attribution bias, related to individual levels of aggression and pronounced for male faces, might be one mechanism mediating physical violence.
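A lowered threshold for detecting anger corresponds to a leftward shift of the fitted psychometric function. The sketch below fits a logistic to invented, noise-free group data to show how such a threshold (the point of subjective equality) would be extracted; it is not the study's actual analysis.

```python
import numpy as np
from scipy.optimize import curve_fit

def psychometric(x, threshold, slope):
    """Probability of an 'anger' response along a fear-anger morph."""
    return 1.0 / (1.0 + np.exp(-slope * (x - threshold)))

morph = np.linspace(0, 1, 9)

# hypothetical group curves: offenders call ambiguous faces 'angry'
# at more fear-like morph levels than controls do
p_offenders = psychometric(morph, 0.38, 9.0)
p_controls = psychometric(morph, 0.52, 9.0)

thr_off, _ = curve_fit(psychometric, morph, p_offenders, p0=[0.5, 5.0])[0]
thr_con, _ = curve_fit(psychometric, morph, p_controls, p0=[0.5, 5.0])[0]
print(f"anger threshold, offenders: {thr_off:.2f}, controls: {thr_con:.2f}")
```

A lower fitted threshold in one group means its members need less physical anger in the face before labelling it as angry.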
PLOS ONE | 2018
Martin Wegrzyn; Joana Aust; Larissa Barnstorf; Magdalena Gippert; Mareike Harms; Antonia Hautum; Shanna Heidel; Friederike Herold; Sarah M. Hommel; Anna-Katharina Knigge; Dominik Neu; Diana Peters; Marius Schaefer; Julia Schneider; Ria Vormbrock; Sabrina M. Zimmer; Friedrich G. Woermann; Kirsten Labudda
Cognitive processes, such as the generation of language, can be mapped onto the brain using fMRI. These maps can in turn be used to decode the respective processes from brain activation patterns. Given individual variations in brain anatomy and organization, analyses at the level of the single person are important to improve our understanding of how cognitive processes correspond to patterns of brain activity. They also help advance clinical applications of fMRI, because in the clinical setting making diagnoses for single cases is imperative. In the present study, we used mental imagery tasks to investigate language production, motor functions, visuo-spatial memory, face processing, and resting-state activity in a single person. Analysis methods were based on similarity metrics, including correlations between training and test data, as well as correlations with maps from the NeuroSynth meta-analysis. The goal was to make accurate predictions regarding the cognitive domain (e.g. language) and the specific content (e.g. animal names) of single 30-second blocks. Four teams used the dataset, each blinded regarding the true labels of the test data. Results showed that the similarity metrics reached the highest accuracies when predicting the cognitive domain of a block: overall, 23 of the 25 test blocks were correctly predicted by three of the four teams. Excluding the unspecific rest condition, up to 10 out of 20 blocks could be successfully decoded with regard to their specific content. The study shows how the information contained in a single fMRI session, and in each of its single blocks, allows inferences to be drawn about the cognitive processes an individual engaged in. Simple methods such as correlations between blocks of fMRI data can serve as highly reliable approaches to cognitive decoding. We discuss the implications of our results in the context of clinical fMRI applications, with a focus on how decoding can support functional localization.
The Journal of Neuroscience | 2015
Sebastian Schindler; Martin Wegrzyn; Inga Steppacher; Johanna Kissler
Journal of Neural Transmission | 2012
Jacqueline Hoeppner; Martin Wegrzyn; Johannes Thome; Alexandra Bauer; Imke Oltmann; Johannes Buchmann; Stefan J. Teipel