Publications


Featured research published by Aleksander Väljamäe.


ACM Transactions on Applied Perception | 2009

Moving sounds enhance the visually-induced self-motion illusion (circular vection) in virtual reality

Bernhard E. Riecke; Aleksander Väljamäe; J Schulte-Pelkum

While rotating visual and auditory stimuli have long been known to elicit self-motion illusions (“circular vection”), audiovisual interactions have hardly been investigated. Here, two experiments investigated whether visually induced circular vection can be enhanced by concurrently rotating auditory cues that match visual landmarks (e.g., a fountain sound). Participants sat behind a curved projection screen displaying rotating panoramic renderings of a market place. Apart from a no-sound condition, headphone-based auditory stimuli consisted of mono sound, ambient sound, or low-/high-spatial resolution auralizations using generic head-related transfer functions (HRTFs). While merely adding nonrotating (mono or ambient) sound showed no effects, moving sound stimuli facilitated both vection and presence in the virtual environment. This spatialization benefit was maximal for a medium (20° × 15°) FOV, reduced for a larger (54° × 45°) FOV and unexpectedly absent for the smallest (10° × 7.5°) FOV. Increasing auralization spatial fidelity (from low, comparable to five-channel home theatre systems, to high, 5° resolution) provided no further benefit, suggesting a ceiling effect. In conclusion, both self-motion perception and presence can benefit from adding moving auditory stimuli. This has important implications both for multimodal cue integration theories and the applied challenge of building affordable yet effective motion simulators.
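The auralizations described above rely on head-related transfer functions to spatialize sound. As a rough, hypothetical illustration of this technique (not the authors' implementation; the HRIR lookup and all parameters below are placeholders), a mono signal can be rendered binaurally by convolving it with a left/right pair of head-related impulse responses for the source azimuth, assuming NumPy and SciPy:

```python
# Hypothetical sketch of HRTF-based spatialization of a mono sound source.
# hrir_for_azimuth is a stand-in for a lookup into a generic HRIR set
# (e.g., measured at 5-degree resolution); it is not taken from the paper.
import numpy as np
from scipy.signal import fftconvolve

def hrir_for_azimuth(azimuth_deg: float, resolution_deg: float = 5.0):
    """Return (left, right) head-related impulse responses for the nearest
    measured azimuth. Placeholder: real code would index a measured HRIR set."""
    nearest = round(azimuth_deg / resolution_deg) * resolution_deg
    rng = np.random.default_rng(int(nearest) % 360)   # placeholder impulse responses
    return rng.standard_normal(128), rng.standard_normal(128)

def spatialize(mono: np.ndarray, azimuth_deg: float) -> np.ndarray:
    """Convolve a mono signal with left/right HRIRs to get a binaural signal."""
    hl, hr = hrir_for_azimuth(azimuth_deg)
    left = fftconvolve(mono, hl, mode="full")
    right = fftconvolve(mono, hr, mode="full")
    return np.stack([left, right], axis=1)

# A rotating source would be approximated by rendering successive short blocks
# at successive azimuths and cross-fading between them.
block = np.random.default_rng(0).standard_normal(2048)
binaural = spatialize(block, azimuth_deg=37.0)
```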


Brain Research Reviews | 2009

Auditorily-induced illusory self-motion: a review.

Aleksander Väljamäe

The aim of this paper is to provide a first review of studies related to auditorily-induced self-motion (vection). These studies have been scarce and scattered over the years and across several research communities, including clinical audiology, multisensory perception of self-motion and its neural correlates, ergonomics, and virtual reality. The reviewed studies provide evidence that auditorily-induced vection has behavioral, physiological and neural correlates. Although the contribution of sound to self-motion perception appears to be weaker than that of the visual modality, specific acoustic cues appear to be instrumental for a number of domains, including posture prosthesis, navigation in unusual gravitoinertial environments (in the air, in space, or underwater), non-visual navigation, and multisensory integration during self-motion. A number of open research questions are highlighted, opening avenues for more active and systematic studies in this area.


Emotion | 2010

Embodied auditory perception: The emotional impact of approaching and receding sound sources.

Ana Tajadura-Jiménez; Aleksander Väljamäe; Erkin Asutay; Daniel Västfjäll

Research has shown the existence of a perceptual and neural bias toward sounds perceived as sources approaching versus receding from a listener. It has been suggested that a greater biological salience of approaching auditory sources may account for these effects. In addition, these effects may hold only for those sources critical for our survival. In the present study, we bring support to these hypotheses by quantifying the emotional responses to different sounds with changing intensity patterns. In two experiments, participants were exposed to artificial and natural sounds simulating approaching or receding sources. The auditory-induced emotional effect was reflected in the performance of participants in an emotion-related behavioral task, their self-reported emotional experience, and their physiology (electrodermal activity and facial electromyography). The results of this study suggest that approaching unpleasant sound sources evoke more intense emotional responses in listeners than receding ones, whereas such an effect of perceived sound motion does not exist for pleasant or neutral sound sources. The emotional significance attributed to the sound source itself, the loudness of the sound, and the duration of the loudness change seem to be relevant factors in this disparity.
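The changing intensity patterns mentioned in the abstract are the main cue for simulating approach and recession. As a hypothetical sketch (the duration, frequency, and dB range below are assumptions, not values from the study), a rising amplitude ramp applied to a tone approximates an approaching source and a falling ramp a receding one:

```python
# Hypothetical sketch: approximate approaching/receding sources with rising or
# falling intensity ramps applied to a tone. Durations, frequencies, and the
# dB range are illustrative, not taken from the paper.
import numpy as np

FS = 44100  # sampling rate in Hz (assumed)

def looming_tone(duration_s: float = 2.0, freq_hz: float = 440.0,
                 db_change: float = 30.0, approaching: bool = True) -> np.ndarray:
    t = np.linspace(0.0, duration_s, int(FS * duration_s), endpoint=False)
    tone = np.sin(2 * np.pi * freq_hz * t)
    # Linear change in level (dB) over time, converted to amplitude.
    db = (np.linspace(-db_change, 0.0, t.size) if approaching
          else np.linspace(0.0, -db_change, t.size))
    return tone * (10.0 ** (db / 20.0))

approach = looming_tone(approaching=True)   # rising intensity
recede = looming_tone(approaching=False)    # falling intensity
```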


Neuropsychologia | 2009

Auditory-somatosensory multisensory interactions are spatially modulated by stimulated body surface and acoustic spectra

Ana Tajadura-Jiménez; Norimichi Kitagawa; Aleksander Väljamäe; Massimiliano Zampini; Micah M. Murray; Charles Spence

Previous research has provided inconsistent results regarding the spatial modulation of auditory-somatosensory interactions. The present study reports three experiments designed to investigate the nature of these interactions in the space close to the head. Human participants made speeded detection responses to unimodal auditory, somatosensory, or simultaneous auditory-somatosensory stimuli. In Experiment 1, electrocutaneous stimuli were presented to either earlobe, while auditory stimuli were presented from the same versus opposite sides, and from one of two distances (20 vs. 70 cm) from the participant's head. The results demonstrated a spatial modulation of auditory-somatosensory interactions when auditory stimuli were presented from close to the head. In Experiment 2, electrocutaneous stimuli were delivered to the hands, which were placed either close to or far from the head, while the auditory stimuli were again presented at one of two distances. The results revealed that the spatial modulation observed in Experiment 1 was specific to the particular body part stimulated (head) rather than to the region of space (i.e., around the head) where the stimuli were presented. The results of Experiment 3 demonstrated that sounds containing high-frequency components are particularly effective in eliciting this auditory-somatosensory spatial effect. Taken together, these findings help to resolve inconsistencies in the previous literature and suggest that auditory-somatosensory multisensory integration is modulated by the stimulated body surface and the acoustic spectra of the stimuli presented.


The Engineering of Mixed Reality Systems | 2010

The eXperience Induction Machine: A New Paradigm for Mixed-Reality Interaction Design and Psychological Experimentation

Ulysses Bernardet; Sergi Bermúdez i Badia; Armin Duff; Martin Inderbitzin; Sylvain Le Groux; Jônatas Manzolli; Zenon Mathews; Anna Mura; Aleksander Väljamäe; Paul F. M. J. Verschure

The eXperience Induction Machine (XIM) is one of the most advanced mixed-reality spaces available today. XIM is an immersive space that consists of physical sensors and effectors and is conceptualized as a general-purpose infrastructure for research in the field of psychology and human–artifact interaction. In this chapter, we set out the epistemological rationale behind XIM by putting the installation in the context of psychological research. The design and implementation of XIM are based on principles and technologies of neuromorphic control. We give a detailed description of the hardware infrastructure and software architecture, including the logic of the overall behavioral control. To illustrate the approach toward psychological experimentation, we discuss a number of practical applications of XIM. These include the so-called persistent virtual community, its application in research on the relationship between human experience and multimodal stimulation, and an investigation of a mixed-reality social interaction paradigm.


Acta Psychologica | 2008

Filling-in visual motion with sounds

Aleksander Väljamäe; Salvador Soto-Faraco

Information about the motion of objects can be extracted by multiple sensory modalities, and, as a consequence, object motion perception typically involves the integration of multi-sensory information. Often, in naturalistic settings, the flow of such information can be rather discontinuous (e.g. a cat racing through the furniture in a cluttered room is partly seen and partly heard). This study addressed audio-visual interactions in the perception of time-sampled object motion by measuring adaptation after-effects. We found significant auditory after-effects following adaptation to unisensory auditory and visual motion in depth, sampled at 12.5 Hz. The visually induced (cross-modal) auditory motion after-effect was eliminated if visual adaptors flashed at half of the rate (6.25 Hz). Remarkably, the addition of the high-rate acoustic flutter (12.5 Hz) to this ineffective, sparsely time-sampled, visual adaptor restored the auditory after-effect to a level comparable to what was seen with high-rate bimodal adaptors (flashes and beeps). Our results suggest that this auditory-induced reinstatement of the motion after-effect from the poor visual signals resulted from the occurrence of sound-induced illusory flashes. This effect was found to be dependent both on the directional congruency between modalities and on the rate of auditory flutter. The auditory filling-in of time-sampled visual motion supports the feasibility of using reduced frame rate visual content in multisensory broadcasting and virtual reality applications.
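To make the idea of time-sampled motion concrete, the sketch below (a hypothetical illustration; only the 12.5 Hz and 6.25 Hz rates come from the abstract) generates the onset times of a discretely sampled motion-in-depth adaptor at the two presentation rates:

```python
# Hypothetical sketch of a time-sampled motion-in-depth adaptor: discrete
# flash/beep events whose "depth" value increases to simulate approach.
# The rates match those named in the abstract; everything else is illustrative.
import numpy as np

def sampled_adaptor(rate_hz: float, duration_s: float = 8.0):
    """Return event onset times and a ramp of depth values (0 = far, 1 = near)."""
    onsets = np.arange(0.0, duration_s, 1.0 / rate_hz)
    depth = np.linspace(0.0, 1.0, onsets.size)   # approaching motion in depth
    return onsets, depth

high_rate = sampled_adaptor(12.5)   # high-rate adaptor (effective in the study)
low_rate = sampled_adaptor(6.25)    # sparse visual adaptor (ineffective alone)
```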


2009 Advanced Technologies for Enhanced Quality of Life | 2009

A Brain-Machine Interface Based on EEG: Extracted Alpha Waves Applied to Mobile Robot

Mufti Mahmud; Aleksander Väljamäe

The increasing number of signal processing tools for highly parallel neurophysiological recordings opens up new avenues for connecting technologies directly to neuronal processes. As this understanding takes shape, much more work remains to be done. A simple brain-machine interface may be able to re-establish the broken loop for persons with motor dysfunction. Over time, brain-machine interfacing is growing more complex due to the increased availability of instruments and processes for implementation. In this work, as a proof of principle, we established a brain-machine interface built from a few simple processing steps to control a robotic device using alpha-wave event-related synchronization and event-related desynchronization extracted from EEG.
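The abstract gives no implementation details, but the kind of pipeline it describes can be sketched as follows (a hypothetical illustration assuming NumPy and SciPy; the sampling rate, filter, thresholds, and command mapping are all assumptions, not taken from the paper): bandpass-filter an EEG channel to the alpha band, compare its power to a baseline, and map event-related desynchronization or synchronization to simple robot commands.

```python
# Hypothetical sketch of an alpha-band ERD/ERS detector driving robot commands.
# Assumes NumPy and SciPy; thresholds, sampling rate, and command mapping are
# illustrative, not taken from the paper.
import numpy as np
from scipy.signal import butter, filtfilt

FS = 256.0               # sampling rate in Hz (assumed)
ALPHA_BAND = (8.0, 12.0) # alpha band in Hz

def alpha_power(eeg_window: np.ndarray) -> float:
    """Bandpass the window to 8-12 Hz and return its mean power."""
    b, a = butter(4, [f / (FS / 2) for f in ALPHA_BAND], btype="band")
    filtered = filtfilt(b, a, eeg_window)
    return float(np.mean(filtered ** 2))

def classify(window: np.ndarray, baseline_power: float) -> str:
    """Map the relative alpha-power change to a robot command (illustrative mapping)."""
    change = (alpha_power(window) - baseline_power) / baseline_power
    if change < -0.3:      # event-related desynchronization
        return "MOVE_FORWARD"
    elif change > 0.3:     # event-related synchronization
        return "STOP"
    return "HOLD"

# Example: estimate a baseline from a rest window, then classify 1-second windows.
rng = np.random.default_rng(0)
baseline = alpha_power(rng.standard_normal(int(FS)))
command = classify(rng.standard_normal(int(FS)), baseline)
print(command)
```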


In: Dubois, E., Nigay, L. and Gray, P. (eds.) The Engineering of Mixed Reality Systems. Springer | 2010

Auditory-Induced Presence in Mixed Reality Environments and Related Technology

Pontus Larsson; Aleksander Väljamäe; Daniel Västfjäll; Ana Tajadura-Jiménez; Mendel Kleiner

Presence, the “perceptual illusion of non-mediation,” is often a central goal in mediated and mixed environments, and sound is believed to be crucial for inducing high-presence experiences. This chapter provides a review of the state of the art within presence research related to auditory environments. Various sound parameters, such as externalization, spaciousness, and consistency within and across modalities, are discussed in relation to their presence-inducing effects. Moreover, these parameters are related to the use of audio in mixed realities, and example applications are discussed. Finally, we give an account of the technological possibilities and challenges within the area of presence-inducing sound rendering and presentation for mixed realities and outline future research aims.


Face and Gesture 2011 | 2011

Expression of emotional states during locomotion based on canonical parameters

Martin Inderbitzin; Aleksander Väljamäe; José María Blanco Calvo; Paul F. M. J. Verschure; Ulysses Bernardet

Humans have the ability to use a complex code of non-verbal behavior to communicate their internal states to others. Conversely, the understanding of the intentions and emotions of others is a fundamental aspect of human social interaction. In the study presented here, we investigate how people perceive the expression of emotional states based on the observation of different styles of locomotion. Our goal is to find a small set of canonical parameters that allow us to control a wide range of emotional expressions. We generated different classes of walking behavior by varying the head/torso inclination, the walking speed, and the viewing angle of an animation of a virtual character. Eighteen subjects rated the observed walking person using the two-dimensional circumplex model of arousal and valence. The results show that, independent of the viewing angle, participants perceived distinct states of arousal and valence. Moreover, we could show that parameterized body posture encodes emotional states, irrespective of contextual influence or facial expressions. These findings suggest that human locomotion transmits basic emotional cues that can be directly related to canonical parameters of different dimensions of the expressive behavior. These findings are important as they allow us to build virtual characters whose emotional expression is recognizable at a large distance and over extended periods of time.
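As a hypothetical sketch of how such a stimulus set might be parameterized (the parameter levels below are illustrative, not those used in the study), a full factorial combination of walking speed, head/torso inclination, and viewing angle yields the set of walking animations that participants would rate on the arousal-valence circumplex:

```python
# Hypothetical sketch of a factorial set of locomotion stimuli; the levels are
# illustrative and not taken from the paper.
from dataclasses import dataclass
from itertools import product

@dataclass(frozen=True)
class WalkStimulus:
    speed_m_per_s: float           # walking speed of the virtual character
    torso_inclination_deg: float   # head/torso inclination (negative = slumped)
    viewing_angle_deg: float       # camera angle on the animation

SPEEDS = [0.8, 1.4, 2.0]
INCLINATIONS = [-20.0, 0.0, 20.0]
VIEWING_ANGLES = [0.0, 90.0, 180.0]

# Full factorial design: every combination is shown and later rated on the
# two-dimensional circumplex model (arousal, valence) by participants.
stimuli = [WalkStimulus(s, i, v)
           for s, i, v in product(SPEEDS, INCLINATIONS, VIEWING_ANGLES)]
print(len(stimuli), "stimuli")  # 27 combinations
```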


Virtual Reality | 2009

Social cooperation and competition in the mixed reality space eXperience Induction Machine XIM

Martin Inderbitzin; Sytse Wierenga; Aleksander Väljamäe; Ulysses Bernardet; Paul F. M. J. Verschure

Although the architecture of mixed reality spaces is becoming increasingly more complex, our understanding of human behavior in such spaces is still limited. Despite the sophisticated methods deployed in ethology and behavioral biology to track and analyze the actions and movements of animals, we rarely find studies that focus on the understanding of human behavior using such instruments. Here, we address this issue by analyzing the social behavior and physical actions of multiple humans engaging in a game. As a paradigm of social interaction, we constructed a mixed reality football game in which two teams of two players have to cooperate and compete in order to win. This paradigm was deployed in the so-called eXperience Induction Machine (XIM), a human-accessible, fully instrumented space that supports full-body interaction in mixed reality without the need for body-mounted sensors. Our results show that winning and losing strategies can be discerned by specific behavioral patterns and proxemics. This demonstrates that mixed reality systems such as XIM provide new paradigms for the investigation of human social behavior.
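As a hypothetical illustration of a simple proxemics measure that could be derived from such tracking data (the data format and the measure are assumptions, not the authors' method), the sketch below computes the mean pairwise distance between tracked players over time, assuming NumPy:

```python
# Hypothetical sketch of a simple proxemics measure: mean pairwise distance
# between tracked players over time. The position-data format is assumed,
# not taken from the paper.
import numpy as np

def mean_pairwise_distance(positions: np.ndarray) -> np.ndarray:
    """positions: array of shape (time, players, 2) with floor coordinates in metres.
    Returns the mean inter-player distance per time step."""
    diffs = positions[:, :, None, :] - positions[:, None, :, :]   # (t, p, p, 2)
    dists = np.linalg.norm(diffs, axis=-1)                        # (t, p, p)
    p = positions.shape[1]
    iu = np.triu_indices(p, k=1)                                  # unique player pairs
    return dists[:, iu[0], iu[1]].mean(axis=1)

# Example with four players (two teams of two) tracked for 100 time steps.
tracks = np.random.default_rng(1).uniform(0.0, 5.0, size=(100, 4, 2))
print(mean_pairwise_distance(tracks)[:5])
```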

Collaboration


Dive into Aleksander Väljamäe's collaborations.

Top Co-Authors

Mendel Kleiner
Chalmers University of Technology

Norimichi Kitagawa
Nippon Telegraph and Telephone

Anna Mura
Pompeu Fabra University