
Publications


Featured research published by F Soyka.


Physical Review Letters | 2008

Critical Casimir forces in colloidal suspensions on chemically patterned surfaces

F Soyka; Olga Zvyagolskaya; Christopher Hertlein; Laurent Helden; Clemens Bechinger

We investigate the behavior of colloidal particles immersed in a binary liquid mixture of water and 2,6-lutidine in the presence of a chemically patterned substrate. Close to the critical point of the mixture, the particles are subjected to critical Casimir interactions with force components normal and parallel to the surface. Because the strength and sign of these interactions can be tuned by variations in the surface properties and the mixture's temperature, critical Casimir forces not only allow the formation of highly ordered monolayers but also extend the use of colloids as model systems.


Experimental Brain Research | 2013

Integration of visual and inertial cues in the perception of angular self-motion

Ksander N. de Winkel; F Soyka; Michael Barnett-Cowan; Heinrich H. Bülthoff; Eric L. Groen; Peter J. Werkhoven

The brain is able to determine angular self-motion from visual, vestibular, and kinesthetic information. There is compelling evidence that both humans and non-human primates integrate visual and inertial (i.e., vestibular and kinesthetic) information in a statistically optimal fashion when discriminating heading direction. In the present study, we investigated whether the brain also integrates information about angular self-motion in a similar manner. Eight participants performed a 2IFC task in which they discriminated yaw rotations (2-s sinusoidal acceleration) on peak velocity. Just-noticeable differences (JNDs) were determined as a measure of precision in unimodal inertial-only and visual-only trials, as well as in bimodal visual–inertial trials. The visual stimulus was a moving stripe pattern, synchronized with the inertial motion. Peak velocity of comparison stimuli was varied relative to the standard stimulus. Individual analyses showed that the data of three participants exhibited an increase in bimodal precision, consistent with the optimal integration model, while data from the other participants did not conform to maximum-likelihood integration schemes. We suggest that either the sensory cues were not perceived as congruent, that integration might be achieved with fixed weights, or that estimates of visual precision obtained from non-moving observers do not accurately reflect visual precision during self-motion.
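The maximum-likelihood integration scheme tested here makes a quantitative prediction: combining two cues adds their reliabilities (inverse variances), so the bimodal JND should never exceed the better unimodal JND. A minimal sketch of that prediction (the formula is the standard MLE result; the JND values are hypothetical, not data from the study):

```python
def mle_bimodal_jnd(jnd_visual: float, jnd_inertial: float) -> float:
    """Bimodal JND predicted by maximum-likelihood cue integration.

    JNDs are proportional to the standard deviations of the unimodal
    estimates; under MLE the combined variance is the product of the
    unimodal variances divided by their sum.
    """
    return (jnd_visual * jnd_inertial) / (jnd_visual**2 + jnd_inertial**2) ** 0.5

# Hypothetical unimodal JNDs (deg/s) for a yaw-rotation velocity discrimination.
jnd_vis, jnd_inr = 2.0, 3.0
jnd_bimodal = mle_bimodal_jnd(jnd_vis, jnd_inr)
# The optimal prediction is always at or below the more precise single cue.
assert jnd_bimodal <= min(jnd_vis, jnd_inr)
```

Participants whose bimodal JNDs sit at or above the better unimodal JND, as reported for five of the eight participants here, are therefore inconsistent with this prediction.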


AIAA Modeling and Simulation Technologies Conference 2009 | 2009

Does jerk have to be considered in linear motion simulation?

F Soyka; Harald Teufel; K. Beykirch; Paolo Robuffo Giordano; John S. Butler; Frank M. Nieuwenhuizen; Heinrich H. Bülthoff

Perceptual thresholds for the detection of the direction of linear motion are important for motion simulation. There are situations in which a subject should not perceive the motion direction, for example during repositioning of a simulator, but also opposite cases in which a certain motion percept must intentionally be induced in the subject. The exact dependence of the perceptual thresholds on the time evolution of the presented motion profile is still an open question. Previous studies have found evidence that the thresholds depend on the rate of change of acceleration, called jerk. In this study we investigate three motion profiles which differ in their jerk characteristics. We want to evaluate which profile can move people furthest in the horizontal plane in a given time without them noticing the direction. Our results suggest that a profile with a minimum peak jerk value should be chosen.


PLOS ONE | 2015

Integration of Semi-Circular Canal and Otolith Cues for Direction Discrimination during Eccentric Rotations

F Soyka; Heinrich H. Bülthoff; Michael Barnett-Cowan

Humans are capable of moving about the world in complex ways. Every time we move, our self-motion must be detected and interpreted by the central nervous system in order to make appropriate sequential movements and informed decisions. The vestibular labyrinth consists of two unique sensory organs, the semi-circular canals and the otoliths, which are specialized to detect rotation and translation of the head, respectively. While thresholds for pure rotational and translational self-motion are well understood, surprisingly little research has investigated the relative role of each organ in thresholds for more complex motion. Eccentric (off-center) rotations, during which the participant faces away from the center of rotation, stimulate both organs and are thus well suited for investigating integration of rotational and translational sensory information. Ten participants completed a psychophysical direction discrimination task for pure head-centered rotations, translations, and eccentric rotations with 5 different radii. Discrimination thresholds for eccentric rotations decreased with increasing radius, indicating that the additional tangential accelerations (which increase with radius length) increased sensitivity. Two competing models were used to predict the eccentric thresholds from the pure rotation and translation thresholds: one assuming that information from the two organs is integrated in an optimal fashion, and another assuming that motion discrimination relies solely on the sensor which is most strongly stimulated. Our findings clearly show that information from the two organs is integrated. However, the measured thresholds for 3 of the 5 eccentric rotations indicate even greater sensitivity than the optimal integration model predicts, suggesting that additional non-vestibular sources of information may be involved.
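The two competing models can be contrasted with a toy calculation. A simplifying assumption for this sketch (ours, not the paper's fitted model) is that the tangential-acceleration cue grows linearly with radius, so the otolith contribution, expressed in rotation units, becomes more reliable at larger radii:

```python
def eccentric_threshold(t_rot: float, t_trans: float, radius: float,
                        optimal: bool = True) -> float:
    """Predicted direction-discrimination threshold (in rotation units)
    for an eccentric rotation at the given radius.

    t_rot:   pure-rotation threshold (canal cue)
    t_trans: pure-translation threshold; the tangential cue at radius r
             is assumed to scale translational sensitivity by 1/r
             (an illustrative simplification).
    """
    if radius == 0:
        return t_rot  # no tangential cue: canals alone
    t_otolith = t_trans / radius  # otolith cue expressed in rotation units
    if optimal:
        # reciprocal-variance (optimal) combination of the two cues
        return 1.0 / (1.0 / t_rot**2 + 1.0 / t_otolith**2) ** 0.5
    # winner-take-all: rely on whichever sensor is more sensitive
    return min(t_rot, t_otolith)

for r in (0.0, 0.2, 0.4, 0.8):
    opt = eccentric_threshold(1.0, 0.5, r, optimal=True)
    wta = eccentric_threshold(1.0, 0.5, r, optimal=False)
    # the integration model always predicts equal or lower thresholds
    assert opt <= wta
```

Under these assumptions both predictions fall with radius, and the integration model is always at least as sensitive as the winner-take-all model; the finding that some measured thresholds beat even the optimal prediction is what points to additional non-vestibular cues.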


ACM Symposium on Applied Perception | 2016

Enhancing stress management techniques using virtual reality

F Soyka; Markus Leyrer; Joe Smallwood; Chris Ferguson; Bernhard E. Riecke; Betty J. Mohler

Chronic stress is one of the major problems in our current fast-paced society. The body reacts to environmental stress with physiological changes (e.g., an accelerated heart rate), increasing the activity of the sympathetic nervous system. Normally the parasympathetic nervous system should bring us back to a more balanced state after the stressful event is over. However, nowadays we are often under constant pressure, with a multitude of stressful events per day, which can leave us constantly out of balance. This highlights the importance of effective stress management techniques that are readily accessible to a wide audience. In this paper we present an exploratory study investigating the potential use of immersive virtual reality for relaxation, with the purpose of guiding further design decisions, especially about the visual content and the interactivity of virtual content. Specifically, we developed an underwater world for head-mounted-display virtual reality. We performed an experiment to evaluate the effectiveness of the underwater environment for relaxation, and to evaluate whether the underwater world combined with breathing techniques was preferred to standard breathing techniques for stress management. The underwater world was rated as more fun and more likely to be used at home than a traditional breathing technique, while providing a similar degree of relaxation.


IEEE Virtual Reality Conference | 2015

Turbulent motions cannot shake VR

F Soyka; Elena Kokkinara; Markus Leyrer; Heinrich H. Bülthoff; Mel Slater; Betty J. Mohler

The International Air Transport Association forecasts at least a 30% increase in passenger demand for flights over the next five years. In these circumstances the aircraft industry is looking for new ways to keep passengers occupied, entertained and healthy, and one of the methods under consideration is immersive virtual reality. It is therefore becoming important to understand how motion sickness and presence in virtual reality are influenced by physical motion. We were specifically interested in the use of head-mounted displays (HMDs) while experiencing in-flight motions such as turbulence. Fifty people were tested in different virtual environments varying in their context (virtual airplane versus magic carpet ride over tropical islands) and in the way the physical motion was incorporated into the virtual world (matching visual and auditory stimuli versus no incorporation). Participants were subjected to three brief periods of turbulent motion realized with a motion simulator. Physiological signals (postural stability, heart rate and skin conductance) as well as subjective experiences (sickness and presence questionnaires) were measured. None of our participants experienced severe motion sickness during the experiment, and although there were only small differences between conditions, we found indications that it is beneficial for both well-being and presence to choose a virtual environment in which turbulent motions are plausible and perceived as part of the scenario. We therefore conclude that brief exposure to turbulent motion does not make participants sick.


IEEE Virtual Reality Conference | 2014

Demonstration: VR-HYPERSPACE — The innovative use of virtual reality to increase comfort by changing the perception of self and space

Mirabelle D'Cruz; Harshada Patel; Laura Lewis; Sue Cobb; Matthias Bues; Oliver Stefani; Tredeaux Grobler; Kaj Helin; Juhani Viitaniemi; Susanna Aromaa; Bernd Fröhlich; Stephan Beck; André Kunert; Alexander Kulik; Ioannis Karaseitanidis; Panagiotis Psonis; Nikos Frangakis; Mel Slater; Ilias Bergstrom; Konstantina Kilteni; Elena Kokkinara; Betty J. Mohler; Markus Leyrer; F Soyka; Enrico Gaia; Domenico Tedone; Michael Olbert; Mario Cappitelli

Our vision is that regardless of future variations in the interior of airplane cabins, we can utilize ever-advancing state-of-the-art virtual and mixed reality technologies, together with the latest research in neuroscience and psychology, to achieve high levels of comfort for passengers. Current surveys on passengers' experience during air travel reveal that they are least satisfied with the amount and effectiveness of their personal space, and with their ability to work, sleep or rest. Moreover, current trends suggest that the amount of available space will decrease, and therefore passengers' physical comfort during a flight is likely to worsen significantly. The main challenge is thus to enable passengers to maintain a high level of comfort and satisfaction while being placed in a restricted physical space.


Seeing and Perceiving | 2012

Temporal processing of self-motion: Translations are processed slower than rotations

F Soyka; Michael Barnett-Cowan; Paolo Robuffo Giordano; Heinrich H. Bülthoff

Reaction times (RTs) to purely inertial self-motion stimuli have only infrequently been studied, and comparisons of RTs for translations and rotations are, to our knowledge, nonexistent. We recently proposed a model (Soyka et al., 2011) which describes direction discrimination thresholds for rotational and translational motions based on the dynamics of the vestibular sensory organs (otoliths and semi-circular canals). This model also predicts differences in RTs for different motion profiles (e.g., trapezoidal versus triangular acceleration profiles, or varying profile durations). In order to assess these predictions, we measured RTs in 20 participants for 8 supra-threshold motion profiles (4 translations, 4 rotations). A two-alternative forced-choice task, discriminating leftward from rightward motions, was used, and 30 correct responses per condition were evaluated. The results agree with the predicted RT differences between motion profiles as derived from model parameters previously identified from threshold measurements. To describe absolute RT, a constant is added to the predictions, representing both the discrimination process and the time needed to press the response button. This constant is approximately 160 ms shorter for rotations, indicating that additional processing time is required for translational motion. As this additional latency cannot be explained by our model based on the dynamics of the sensory organs, we speculate that it originates at a later stage, e.g., during tilt-translation disambiguation. The varying processing latencies for different self-motion stimuli (translations or rotations), which our model can account for, must be considered when assessing the perceived timing of vestibular stimulation in comparison with other senses (Barnett-Cowan and Harris, 2009; Sanders et al., 2011).


I-perception | 2011

Integration of Visual and Vestibular Information Used to Discriminate Rotational Self-Motion

F Soyka; Ksander N. de Winkel; Michael Barnett-Cowan; Eric L. Groen; Heinrich H. Bülthoff

Do humans integrate visual and vestibular information in a statistically optimal fashion when discriminating rotational self-motion stimuli? Recent studies are inconclusive as to whether such integration occurs when discriminating heading direction. In the present study eight participants were consecutively rotated twice (2-s sinusoidal acceleration) on a chair about an earth-vertical axis in vestibular-only, visual-only and visual–vestibular trials. The visual stimulus was a video of a moving stripe pattern, synchronized with the inertial motion. Peak acceleration of the reference stimulus was varied, and participants reported which rotation was perceived as faster. Just-noticeable differences (JNDs) were estimated by fitting psychometric functions. The measured visual–vestibular JNDs are higher than predicted from the unimodal JND estimates, and there is no JND reduction between the visual–vestibular and visual-alone estimates. These findings may be explained by visual capture. Alternatively, visual precision may not be equal between the visual–vestibular and visual-alone conditions, since it has been shown that visual motion sensitivity is reduced during inertial self-motion. Therefore, measuring visual-alone JNDs with an underlying uncorrelated inertial motion might yield higher visual-alone JNDs compared to the stationary measurement. Theoretical calculations show that higher visual-alone JNDs would result in predictions consistent with the JND measurements for the visual–vestibular condition.


34th European Conference on Visual Perception | 2011

Multisensory integration in the perception of self-motion about an Earth-vertical yaw axis

Ksander N. de Winkel; F Soyka; Michael Barnett-Cowan; Eric L. Groen; Heinrich H. Bülthoff

Newer technology allows for more realistic virtual environments by providing visual image quality that is very similar to that in the real world; this includes adding in virtual self-animated avatars [Slater et al, 2010 PLoS ONE 5(5); Sanchez-Vives et al, 2010 PLoS ONE 5(4)]. To investigate the influence of relative size changes between the visual environment and the visual body, we immersed participants into a full cue virtual environment where they viewed a self-animated avatar from behind and at the same eye-height as the avatar. We systematically manipulated the size of the avatar and the size of the virtual room (which included familiar objects). Both before and after exposure to the virtual room and body, participants performed an action-based measurement and made verbal estimates about the size of self and the world. Additionally we measured their subjective sense of body ownership. The results indicate that the size of the self-representing avatar can change how the user perceives and interacts within the virtual environment. These results have implications for scientists interested in visual space perception and also could potentially be useful for creating positive visual illusions (i.e., the feeling of being in a more spacious room).

Two experiments assessed the development of children's part and configural (part-relational) processing in object recognition during adolescence. In total 280 school children aged 7–16 and 56 adults were tested in 3AFC tasks to judge the correct appearance of upright and inverted presented familiar animals, artifacts, and newly learned multi-part objects, which had been manipulated either in terms of individual parts or part relations. Manipulation of part relations was constrained to either metric (animals and artifacts) or categorical (multi-part objects) changes. For animals and artifacts, even the youngest children were close to adult levels for the correct recognition of an individual part change. By contrast, it was not until age 11–12 that they achieved similar levels of performance with regard to altered metric part relations. For the newly learned multi-part objects, performance for categorical part-specific and part-relational changes was equivalent throughout the tested age range for upright presented stimuli. The results provide converging evidence, with studies of face recognition, for a surprisingly late consolidation of configural-metric relative to part-based object recognition.

According to the functional approach to the perception of spatial layout, angular optic variables that indicate extents are scaled to the body and its action capabilities [cf Proffitt, 2006 Perspectives on Psychological Science 1(2) 110–122]. For example, reachable extents are perceived as a proportion of the maximum extent to which one can reach, and the apparent sizes of graspable objects are perceived as a proportion of the maximum extent that one can grasp (Linkenauger et al, 2009 Journal of Experimental Psychology: Human Perception and Performance; 2010 Psychological Science). Therefore, apparent sizes and distances should be influenced by changing scaling aspects of the body. To test this notion, we immersed participants into a full cue virtual environment. Participants' head, arm and hand movements were tracked and mapped onto a first-person, self-representing avatar in real time. We manipulated the participants' visual information about their body by changing aspects of the self-avatar (hand size and arm length). Perceptual verbal and action judgments of the sizes and shapes of virtual objects (spheres and cubes) varied as a function of the hand/arm scaling factor. These findings provide support for a body-based approach to perception and highlight the impact of self-avatars' bodily dimensions for users' perceptions of space in virtual environments.

Collaboration


Dive into F Soyka's collaboration.

Top Co-Authors

Mel Slater

University of Barcelona
