
Publications


Featured research published by Mariia Kaliuzhna.


Consciousness and Cognition | 2015

Balancing awareness: Vestibular signals modulate visual consciousness in the absence of awareness

Roy Salomon; Mariia Kaliuzhna; Bruno Herbelin; Olaf Blanke

The processing of visual and vestibular information is crucial for perceiving self-motion. Visual cues, such as optic flow, have been shown to induce and alter vestibular percepts, yet the role of vestibular information in shaping visual awareness remains unclear. Here we investigated whether vestibular signals influence access to awareness of invisible visual signals. Using natural vestibular stimulation (passive yaw rotations) on a vestibular self-motion platform, and optic flow masked through continuous flash suppression (CFS), we tested whether congruent visual-vestibular information would break interocular suppression more rapidly than incongruent information. We found that when the unseen optic flow was congruent with the vestibular signals, perceptual suppression, as quantified with the CFS paradigm, was broken more rapidly than when it was incongruent. We argue that vestibular signals impact the formation of visual awareness by enhancing access to awareness for congruent multisensory stimulation.
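The reported effect amounts to shorter suppression durations on congruent than on incongruent visuo-vestibular trials. A minimal analysis sketch in Python, assuming a paired design and log-normally distributed breakthrough times; all numbers below are illustrative, not the published data:

```python
# Sketch of a congruent-vs-incongruent breakthrough-time comparison
# (hypothetical data, not the study's results).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical suppression durations (seconds) per trial.
congruent = rng.lognormal(mean=1.0, sigma=0.4, size=40)
incongruent = rng.lognormal(mean=1.2, sigma=0.4, size=40)

# Breakthrough times are right-skewed, so compare log-transformed durations.
t, p = stats.ttest_rel(np.log(congruent), np.log(incongruent))
print(f"congruent faster by {incongruent.mean() - congruent.mean():.2f} s, "
      f"t = {t:.2f}, p = {p:.3f}")
```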


PLOS ONE | 2014

Vestibular-somatosensory interactions: effects of passive whole-body rotation on somatosensory detection.

Elisa Raffaella Ferrè; Mariia Kaliuzhna; Bruno Herbelin; Patrick Haggard; Olaf Blanke

Vestibular signals are strongly integrated with information from several other sensory modalities. For example, vestibular stimulation was reported to improve tactile detection. However, this improvement could reflect either a multimodal interaction or an indirect interaction driven by vestibular effects on spatial attention and orienting. Here we investigate whether natural vestibular activation induced by passive whole-body rotation influences tactile detection. In particular, we assessed the ability to detect faint tactile stimuli to the fingertips of the left and right hand during spatially congruent or incongruent rotations. We found that passive whole-body rotations significantly enhanced sensitivity to faint shocks, without affecting response bias. Critically, this enhancement of somatosensory sensitivity did not depend on the spatial congruency between the direction of rotation and the hand stimulated. Thus, our results support a multimodal interaction, likely in brain areas receiving both vestibular and somatosensory signals.
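The sensitivity-versus-bias distinction here is the standard signal-detection decomposition into d' and criterion c. A minimal sketch with hypothetical trial counts, not the paper's data:

```python
# Signal-detection analysis sketch: sensitivity (d') and criterion (c)
# for tactile detection during rotation vs. at rest (hypothetical counts).
from scipy.stats import norm

def dprime(hits, misses, false_alarms, correct_rejections):
    """d' and criterion c from a yes/no detection task."""
    # Log-linear correction avoids infinite z-scores at rates of 0 or 1.
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    z_hit, z_fa = norm.ppf(hit_rate), norm.ppf(fa_rate)
    return z_hit - z_fa, -0.5 * (z_hit + z_fa)

d_rot, c_rot = dprime(hits=42, misses=18, false_alarms=6, correct_rejections=54)
d_rest, c_rest = dprime(hits=33, misses=27, false_alarms=7, correct_rejections=53)
print(f"rotation: d' = {d_rot:.2f}, c = {c_rot:.2f}")
print(f"rest:     d' = {d_rest:.2f}, c = {c_rest:.2f}")
```

An increase in d' with an unchanged c, as the abstract reports, indicates a genuine perceptual enhancement rather than a shift in the willingness to report a touch.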


Journal of Vision | 2015

Learning to integrate contradictory multisensory self-motion cue pairings

Mariia Kaliuzhna; Mario Prsa; Steven Gale; Stella J. Lee; Olaf Blanke

Humans integrate multisensory information to reduce perceptual uncertainty when perceiving the world and self. Integration fails, however, if a common causality is not attributed to the sensory signals, as would occur in conditions of spatiotemporal discrepancies. In the case of passive self-motion, visual and vestibular cues are integrated according to statistical optimality, yet the extent of cue conflicts that do not compromise this optimality is currently underexplored. Here, we investigate whether human subjects can learn to integrate two arbitrary, but co-occurring, visual and vestibular cues of self-motion. Participants made size comparisons between two successive whole-body rotations using only visual, only vestibular, and both modalities together. The vestibular stimulus provided a yaw self-rotation cue, the visual a roll (Experiment 1) or pitch (Experiment 2) rotation cue. Experimentally measured thresholds in the bimodal condition were compared with theoretical predictions derived from the single-cue thresholds. Our results show that human subjects combine and optimally integrate vestibular and visual information, each signaling self-motion around a different rotation axis (yaw vs. roll and yaw vs. pitch). This finding suggests that the experience of two temporally co-occurring but spatially unrelated self-motion cues leads to inferring a common cause for these two initially unrelated sources of information about self-motion. We discuss our results in terms of specific task demands, cross-modal adaptation, and spatial compatibility. The importance of these results for the understanding of bodily illusions is also discussed.
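The "theoretical predictions derived from the single-cue thresholds" presumably refer to the standard maximum-likelihood integration rule, under which the optimal bimodal estimate is a reliability-weighted average whose variance falls below either unimodal variance; a sketch in our own notation, not the paper's:

```latex
% Maximum-likelihood (optimal) cue integration: weighted average of the
% visual and vestibular estimates, with the usual variance reduction.
\[
\hat{\theta}_{vv} = w_{vis}\,\hat{\theta}_{vis} + w_{vest}\,\hat{\theta}_{vest},
\qquad
w_{vis} = \frac{\sigma_{vest}^{2}}{\sigma_{vis}^{2} + \sigma_{vest}^{2}},
\qquad
w_{vest} = 1 - w_{vis}
\]
\[
\sigma_{vv}^{2}
  = \frac{\sigma_{vis}^{2}\,\sigma_{vest}^{2}}{\sigma_{vis}^{2} + \sigma_{vest}^{2}}
  \;\le\; \min\!\left(\sigma_{vis}^{2},\, \sigma_{vest}^{2}\right)
\]
```

Since discrimination thresholds scale with $\sigma$, optimality is confirmed when the measured bimodal threshold matches $\sigma_{vv}$ rather than merely that of the better single cue.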


European Journal of Neuroscience | 2015

Tuning of temporo-occipital activity by frontal oscillations during virtual mirror exposure causes erroneous self-recognition

Andrea Serino; Anna Sforza; Noriaki Kanayama; Michiel van Elk; Mariia Kaliuzhna; Bruno Herbelin; Olaf Blanke

Self-face recognition, a hallmark of self-awareness, depends on ‘off-line’ stored information about one’s face and ‘on-line’ multisensory-motor face-related cues. The brain mechanisms by which on-line sensory-motor processes affect off-line neural self-face representations are unknown. This study used 3D virtual reality to create a ‘virtual mirror’ in which participants saw an avatar’s face moving synchronously with their own face movements. Electroencephalographic (EEG) analysis during virtual mirror exposure revealed mu oscillations in sensory-motor cortex signalling on-line congruency between the avatar’s and the participants’ movements. After such exposure, and compatible with a change in their off-line self-face representation, participants were more prone to recognize the avatar’s face as their own, and this was also reflected in the activation of face-specific regions in the inferotemporal cortex. Further EEG analysis showed that the on-line sensory-motor effects during virtual mirror exposure caused these off-line visual effects, revealing the brain mechanisms that maintain a coherent self-representation despite our continuously changing appearance.


Multisensory Research | 2016

Multisensory Integration in Self Motion Perception

Mark W. Greenlee; Sebastian M. Frank; Mariia Kaliuzhna; Olaf Blanke; Frank Bremmer; Jan Churan; Luigi F. Cuturi; Paul R. MacNeilage; Andrew T. Smith

Self-motion perception involves the integration of visual, vestibular, somatosensory and motor signals. This article reviews findings from single-unit electrophysiology, functional and structural magnetic resonance imaging, and psychophysics to present an update on how the human and non-human primate brain integrates multisensory information to estimate one's position and motion in space. The results indicate that a network of regions in the non-human primate and human brain processes self-motion cues from the different sensory modalities.


Multisensory Research | 2015

Out-of-Body Experiences and Other Complex Dissociation Experiences in a Patient with Unilateral Peripheral Vestibular Damage and Deficient Multisensory Integration

Mariia Kaliuzhna; Dominique Vibert; Petr Grivaz; Olaf Blanke

Out-of-body experiences (OBEs) are illusory perceptions of one's body from an elevated, disembodied perspective. Recent theories postulate a double disintegration process in personal (visual, proprioceptive and tactile disintegration) and extrapersonal (visual and vestibular disintegration) space as the basis of OBEs. Here we describe a case that corroborates and extends this hypothesis. The patient suffered from peripheral vestibular damage and presented with OBEs and lucid dreams. Analysis of the patient's behaviour revealed a failure of visuo-vestibular integration and an abnormal sensitivity to visuo-tactile conflicts that have previously been shown to experimentally induce out-of-body illusions in healthy subjects. In light of these experimental findings and the patient's symptomatology, we extend an earlier model of the role of vestibular signals in OBEs. Our results advocate the involvement of subcortical bodily mechanisms in the occurrence of OBEs.


Scientific Reports | 2016

Multisensory effects on somatosensation: a trimodal visuo-vestibular-tactile interaction

Mariia Kaliuzhna; Elisa Raffaella Ferrè; Bruno Herbelin; Olaf Blanke; Patrick Haggard

Vestibular information about self-motion is combined with other sensory signals. Previous research described both visuo-vestibular and vestibular-tactile bimodal interactions, but the simultaneous interaction between all three sensory modalities has not been explored. Here we exploit a previously reported visuo-vestibular integration to investigate multisensory effects on tactile sensitivity in humans. Tactile sensitivity was measured during passive whole-body rotations alone or in conjunction with optic flow, creating either purely vestibular or visuo-vestibular sensations of self-motion. Our results demonstrate that tactile sensitivity is modulated by perceived self-motion, as provided by a combined visuo-vestibular percept, and not by the visual and vestibular cues independently. We propose a hierarchical multisensory interaction that underpins somatosensory modulation: visual and vestibular cues are first combined to produce a multisensory self-motion percept. Somatosensory processing is then enhanced according to the degree of perceived self-motion.
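A toy sketch of the proposed two-stage hierarchy, assuming reliability-weighted fusion of the visual and vestibular cues at the first stage and a simple monotonic gain on tactile sensitivity at the second; the fusion rule, the gain function, and all numbers are illustrative assumptions, not the authors' fitted model:

```python
# Two-stage toy model: fuse visual and vestibular self-motion estimates,
# then let the combined percept modulate tactile sensitivity.
def fuse(est_vis, var_vis, est_vest, var_vest):
    """Reliability-weighted (MLE-style) fusion of two self-motion estimates."""
    w_vis = var_vest / (var_vis + var_vest)
    est = w_vis * est_vis + (1 - w_vis) * est_vest
    var = (var_vis * var_vest) / (var_vis + var_vest)
    return est, var

def tactile_dprime(baseline, self_motion, gain=0.02):
    """Assumed monotonic enhancement of tactile d' with perceived self-motion."""
    return baseline + gain * self_motion

# Hypothetical perceived rotation (deg/s) from each cue, unequal reliabilities.
motion, motion_var = fuse(est_vis=28.0, var_vis=16.0, est_vest=35.0, var_vest=9.0)
print(f"combined self-motion percept: {motion:.1f} deg/s (variance {motion_var:.1f})")
print(f"predicted tactile d': {tactile_dprime(1.0, motion):.2f}")
```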


Neuropsychologia | 2018

Optimal visuo-vestibular integration for self-motion perception in patients with unilateral vestibular loss

Mariia Kaliuzhna; Steven Gale; Mario Prsa; Raphael Maire; Olaf Blanke

Unilateral vestibular loss (UVL) is accompanied by deficits in the processing of visual and vestibular self-motion cues. The present study examined whether multisensory integration of these two types of information is, nevertheless, intact in such patients. Patients were seated on a rotating platform with a screen simulating 3D rotation in front of them and asked to judge the relative magnitude of two successive rotations in the yaw plane under three conditions: vestibular stimulation, visual stimulation, and bimodal stimulation (congruent stimuli from both modalities together). As in healthy controls, UVL patients exhibited optimal multisensory integration during both ipsi- and contralesional rotations, with the benefit of integration more pronounced on the ipsilesional side. These results show that visuo-vestibular integration for passive self-motion is automatic and suggest that it functions without additional cognitive mechanisms, unlike more complex multisensory tasks such as postural control and spatial navigation, previously shown to be impaired in UVL patients.

Highlights: patients with unilateral vestibular loss integrate visuo-vestibular cues; this integration is optimal; and optimality is maintained for both contra- and ipsilesional rotations.


Journal of Neurology | 2017

Ictal postural phantom limb sensation is associated with impaired mental imagery of body parts

Lukas Heydrich; Mariia Kaliuzhna; Sebastian Dieguez; Roger Nançoz; Olaf Blanke; Margitta Seeck

Reference: EPFL-ARTICLE-230723. DOI: 10.1007/s00415-017-8554-4.


Multisensory Research | 2013

Rotating straight ahead or translating in circles: How we learn to integrate contradictory multisensory self-motion cue pairings

Mariia Kaliuzhna; Olaf Blanke; Mario Prsa

Humans integrate multisensory information to reduce perceptual uncertainty when perceiving the world (Hillis et al., 2002, 2004) and self (Butler et al., 2010; Prsa et al., 2012), and it has been shown that two multisensory cues are combined into a single percept only if they are attributed to the same causal event (Koerding et al., 2007; Parise et al., 2012; Shams and Beierholm, 2010). A growing body of literature studies the limits of such integration for bodily self-consciousness and the perception of self-location under normal and pathological conditions (Ionta et al., 2011). We extend this research by investigating whether human subjects can learn to integrate two arbitrary visual and vestibular cues of self-motion on the basis of their temporal co-occurrence. We conducted two experiments (N = 8 each) in which whole-body rotations served as the vestibular stimulus and optic flow as the visual stimulus. The vestibular stimulus provided a yaw self-rotation cue; the visual stimulus, a roll (Experiment 1) or pitch (Experiment 2) rotation cue. Subjects made a relative size comparison between a standard rotation and a variable test rotation. Their discrimination performance was fit with a psychometric function, from which perceptual discrimination thresholds were extracted. We compared the experimentally measured thresholds in the bimodal condition with theoretical predictions derived from the single-cue thresholds. Our results show that human subjects can learn to combine and optimally integrate vestibular and visual information, each signaling self-motion around a different rotation axis (yaw versus roll, and yaw versus pitch). This finding suggests that the experience of two temporally co-occurring but spatially unrelated self-motion cues leads to inferring a common cause for these two initially unrelated sources of information about self-motion.
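A minimal Python sketch of this analysis pipeline: fit a cumulative-Gaussian psychometric function to simulated size-comparison responses, extract the discrimination threshold, and compute the optimal-integration prediction from hypothetical single-cue thresholds (all values are illustrative, not the study's data):

```python
# Psychometric fitting and optimal-integration prediction (simulated data).
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

def psychometric(x, mu, sigma):
    """P('test larger') as a function of test minus standard rotation size."""
    return norm.cdf(x, loc=mu, scale=sigma)

rng = np.random.default_rng(1)
deltas = np.linspace(-20, 20, 9)            # test minus standard (deg)
n_trials = 40
p_larger = psychometric(deltas, 0.0, 8.0)   # assumed true parameters
responses = rng.binomial(n_trials, p_larger) / n_trials

(mu_hat, sigma_hat), _ = curve_fit(psychometric, deltas, responses, p0=[0.0, 5.0])
print(f"fitted discrimination threshold (sigma): {sigma_hat:.1f} deg")

# Optimal-integration prediction from hypothetical single-cue thresholds.
sigma_vis, sigma_vest = 10.0, 7.0
sigma_pred = np.sqrt((sigma_vis**2 * sigma_vest**2)
                     / (sigma_vis**2 + sigma_vest**2))
print(f"predicted bimodal threshold: {sigma_pred:.1f} deg")
```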

Collaboration


Dive into Mariia Kaliuzhna's collaborations.

Top Co-Authors

Olaf Blanke (École Polytechnique Fédérale de Lausanne)
Bruno Herbelin (École Polytechnique Fédérale de Lausanne)
Mario Prsa (École Polytechnique Fédérale de Lausanne)
Steven Gale (École Polytechnique Fédérale de Lausanne)
Patrick Haggard (University College London)