Boris M. Velichkovsky
Kurchatov Institute
Publications
Featured research published by Boris M. Velichkovsky.
Behavioral and Brain Sciences | 1994
Bruce Bridgeman; A. H. C. van der Heijden; Boris M. Velichkovsky
We identify two aspects of the problem of maintaining perceptual stability despite an observer's eye movements. The first, visual direction constancy, is the (egocentric) stability of apparent positions of objects in the visual world relative to the perceiver. The second, visual position constancy, is the (exocentric) stability of positions of objects relative to each other. We analyze the constancy of visual direction despite saccadic eye movements. Three information sources have been proposed to enable the visual system to achieve stability: the structure of the visual field, proprioceptive inflow, and a copy of neural efference or outflow to the extraocular muscles. None of these sources by itself provides adequate information to achieve visual direction constancy; present evidence indicates that all three are used. Our final question concerns how information processing operations result in a stable world. The three traditionally suggested means have been elimination, translation, or evaluation. All are rejected. From a review of the physiological and psychological evidence we conclude that no subtraction, compensation, or evaluation need take place. The problem for which these solutions were developed turns out to be a false one. We propose a "calibration" solution: correct spatiotopic positions are calculated anew for each fixation. Inflow, outflow, and retinal sources are used in this calculation: saccadic suppression of displacement bridges the errors between these sources and the actual extent of movement.
Visual Cognition | 2005
Pieter J.A. Unema; Sebastian Pannasch; M. Joos; Boris M. Velichkovsky
The present study focuses on two aspects of the time course of visual information processing during the perception of natural scenes. The first aspect is the change of fixation duration and saccade amplitude during the first couple of seconds of the inspection period, as described by Buswell (1935), among others. This common effect suggests that saccade amplitude and fixation duration are in some way controlled by the same mechanism. A simple exponential model containing two parameters describes the phenomena quite satisfactorily, and the model's parameters support the idea of a common controlling mechanism. The second aspect under scrutiny is the apparent lack of correlation between saccade amplitude and fixation duration (Viviani, 1990). The present study shows that a strong but nonlinear relationship between saccade amplitude and fixation duration does exist in picture viewing. A model, based on notions laid out in Findlay and Walker's (1999) model of saccade generation and on the idea of two modes of visual processing (Trevarthen, 1968), was developed to explain this relationship. The model both fits the data quite accurately and can explain a number of related phenomena.
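The abstract does not give the model's functional form. As a minimal illustrative sketch, one could assume a two-parameter saturating exponential, y(t) = A · (1 − exp(−t/τ)), for the growth of fixation duration over viewing time, and recover both parameters from data by a crude grid search; all numbers below are invented for illustration, not values from the paper.

```python
import math

def exp_model(t, asymptote, tau):
    """Two-parameter exponential approach to an asymptote:
    y(t) = asymptote * (1 - exp(-t / tau))."""
    return asymptote * (1.0 - math.exp(-t / tau))

# Synthetic "fixation duration" data (ms) over the first seconds of
# viewing, generated from known parameters -- illustrative values only.
true_asym, true_tau = 300.0, 1.5
times = [0.25 * i for i in range(1, 21)]   # 0.25 .. 5.0 s
data = [exp_model(t, true_asym, true_tau) for t in times]

def fit_grid(times, data):
    """Crude grid-search least-squares fit of the two parameters."""
    best = (float("inf"), None, None)
    for asym in range(100, 501, 10):       # candidate asymptotes (ms)
        for tau10 in range(5, 31):         # candidate taus, 0.5 .. 3.0 s
            tau = tau10 / 10.0
            sse = sum((exp_model(t, asym, tau) - y) ** 2
                      for t, y in zip(times, data))
            if sse < best[0]:
                best = (sse, asym, tau)
    return best[1], best[2]

asym_hat, tau_hat = fit_grid(times, data)
```

With noise-free synthetic data the grid search recovers the generating parameters exactly; with real eye-movement data one would use a proper nonlinear least-squares routine instead.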
i-Perception | 2010
Benjamin W. Tatler; Nicholas J. Wade; Hoi Kwan; John M. Findlay; Boris M. Velichkovsky
The impact of Yarbus's research on eye movements was enormous following the translation of his book Eye Movements and Vision into English in 1967. In stark contrast, the published material in English concerning his life is scant. We provide a brief biography of Yarbus and assess his impact on contemporary approaches to research on eye movements. While early interest in his work focused on his study of stabilised retinal images, more recently this has been replaced with interest in his work on the cognitive influences on scanning patterns. We extended his experiment on the effect of instructions on viewing a picture using a portrait of Yarbus rather than a painting. The results obtained broadly supported those found by Yarbus.
Transportation Research Part F: Traffic Psychology and Behaviour | 2002
Boris M. Velichkovsky; A. Rothert; Mathias Kopf; Sascha M Dornhöfer; M. Joos
The analysis of eye movements can provide rich information about a driver's attention and the course of behaviour in hazardous situations. We present data from a driving simulation study showing that the switching between preattentive and attentive processing is reflected in visual fixations. For this initial analysis, we considered fixations from the perspective of their duration and the amplitude of related saccades. Since fixation durations may change instantaneously from one fixation to the next, we further selected the temporal vicinity of the emerging hazard for a closer analysis of fixations around this time. With this second type of analysis, the fixations that actually "detect" a critical event can be discovered and their duration measured. Upon detection of an immediate hazard, there is an increase in fixation duration and a corresponding increase in the occurrence of attentive fixations at the cost of preattentive ones. This switching from one level of processing to another is recognisable on a short, phasic time scale. We finally discuss attentional conditions under which hazards that are overlooked or insufficiently processed do not lead to the appropriate braking reaction on the part of the driver.
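The second analysis step, locating the fixation during which the hazard emerged, can be sketched as a simple interval lookup; the scanpath below uses invented timings, not data from the study.

```python
def fixation_at(fixations, t_event):
    """Return the fixation (onset, duration) during which t_event falls,
    or None if the event occurred during a saccade or blink.

    fixations: list of (onset, duration) pairs in seconds, assumed
    non-overlapping and sorted by onset."""
    for onset, dur in fixations:
        if onset <= t_event < onset + dur:
            return (onset, dur)
    return None

# Illustrative scanpath: a hazard emerging at t = 1.30 s falls into the
# third fixation, whose duration (0.80 s) can then be measured directly.
scanpath = [(0.00, 0.25), (0.30, 0.40), (0.75, 0.80), (1.60, 0.30)]
detecting = fixation_at(scanpath, 1.30)
```

Comparing the duration of such "detecting" fixations against the surrounding ones is one way to make the phasic switch from preattentive to attentive processing visible.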
ACM Computing Surveys | 2013
Rebekka S. Renner; Boris M. Velichkovsky; Jens R. Helmert
Over the last 20 years, research has addressed the question of how egocentric distances, i.e., the subjectively reported distance from a human observer to an object, are perceived in virtual environments. This review surveys the existing literature on empirical user studies on this topic. In summary, egocentric distances in virtual environments are underestimated: mean estimates amount to about 74% of the modeled distances. Many factors possibly influencing distance estimates were reported in the literature. We arranged these factors into four groups, namely measurement methods, technical factors, compositional factors, and human factors. The research on these factors is summarized, conclusions are drawn, and promising areas for future research are outlined.
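The review's summary statistic is a mean estimation ratio across studies. A minimal sketch of that computation, with hypothetical per-study numbers (not data from the review):

```python
def estimation_ratio(judged_m, modeled_m):
    """Fraction of the modeled distance that an observer reports."""
    return judged_m / modeled_m

# Hypothetical per-study mean judgments (judged, modeled) in metres --
# invented numbers for illustration only.
studies = [(3.8, 5.0), (7.0, 10.0), (10.5, 15.0)]
ratios = [estimation_ratio(j, m) for j, m in studies]
mean_pct = 100.0 * sum(ratios) / len(ratios)   # mean estimation in percent
```

A value below 100% indicates underestimation, as the review reports for virtual environments.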
Perception | 1996
Marc Pomplun; Helge Ritter; Boris M. Velichkovsky
Two experiments on the perception and eye-movement scanning of a set of six overtly ambiguous pictures are reported. In the first experiment it was shown that specific perceptual interpretations of an ambiguous picture usually correlate with parameters of the gaze-position distributions. In the second experiment these distributions were used to preprocess the original pictures in such a way that in regions which attracted fewer fixations the brightness of all elements was lowered. The preprocessed pictures were then shown to a group of 150 naïve subjects for identification. The results of this experiment demonstrated that in four out of six pictures it was possible to influence the perception of other persons in the predicted way, i.e., to shift the spontaneous reports of naïve subjects in the direction of the interpretations that accompanied the gaze-position data used for the preprocessing. Possible reasons for the failure of such a communication of personal views in the two remaining cases are also discussed.
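The preprocessing step, dimming regions that attracted fewer fixations, can be sketched as an element-wise operation over an image and a same-shaped gaze-density map. This is an illustrative reconstruction, not the authors' actual pipeline; the arrays, threshold, and dimming factor are invented.

```python
def dim_low_interest(image, gaze_counts, threshold, factor=0.5):
    """Scale down brightness wherever gaze density is below threshold.

    image: 2-D list of grey-level values; gaze_counts: same-shaped
    2-D list of fixation counts per pixel or region."""
    return [
        [px * factor if g < threshold else px
         for px, g in zip(img_row, cnt_row)]
        for img_row, cnt_row in zip(image, gaze_counts)
    ]

# Toy 2x3 image: the right column attracted no fixations and is dimmed.
image = [[200, 180, 160],
         [190, 170, 150]]
gaze  = [[5, 3, 0],
         [4, 2, 0]]
out = dim_low_interest(image, gaze, threshold=1)
```

Applied to an ambiguous picture, such dimming biases a new viewer's gaze toward the regions that supported the original observer's interpretation.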
Consciousness and Cognition | 1996
Bradford H. Challis; Boris M. Velichkovsky; Fergus I.M. Craik
Three experiments investigated level of processing (LOP) effects on a variety of direct and indirect memory tasks, in the context of a processing theory of dissociations. Subjects studied words in five encoding conditions and received one of ten memory tests. In Experiment 1, four tests previously classified as conceptual showed a robust LOP effect, as did a direct perceptual test of graphemic cued recall. An indirect perceptual word fragment completion test was unaffected by LOP. Experiment 2 showed that a new indirect version of a graphemic cued test was not affected by LOP. In Experiment 3, guided by a generation/recognition model, we constructed three new direct tests in which subjects identified words that were graphemically, phonologically, or semantically similar to studied words. The three tests differed in their sensitivity to study conditions, but LOP had no effect in any case, despite the involvement of deliberate conscious recollection. Contemporary explanatory frameworks couched as dichotomies (e.g., implicit/explicit, perceptual/conceptual) do not provide an adequate account of the results. It seems necessary instead to specify the types of information activated by each encoding condition, the types of information required by each test, and how encoding and retrieval processes are modified by task instructions.
Human Factors in Computing Systems | 1996
Boris M. Velichkovsky; John Paulin Hansen
This is an overview of the recent progress leading towards a fully subject-centered paradigm in human-computer interaction. At this new phase in the evolution of computer technologies it will be possible not just to take into account the characteristics of average human beings, but also to create systems sensitive to the actual states of attention and the intentions of interacting persons. We discuss some of these methods, concentrating on eye tracking and brain imaging. The development is based on the use of eye movement data for the control of output devices, for gaze-contingent image processing, and for disambiguating verbal as well as nonverbal information.
Social Neuroscience | 2006
Andreas Mojzisch; Leonhard Schilbach; Jens R. Helmert; Sebastian Pannasch; Boris M. Velichkovsky; Kai Vogeley
Social neuroscience has shed light on the underpinnings of understanding other minds. The current study investigated the effect of self-involvement during social interaction on attention, arousal, and facial expression. Specifically, we sought to disentangle the effect of being personally addressed from the effect of decoding the meaning of another person's facial expression. To this end, eye movements, pupil size, and facial electromyographic (EMG) activity were recorded while participants observed virtual characters gazing at them or looking at someone else. In dynamic animations, the virtual characters then displayed either socially relevant facial expressions (similar to those used in everyday life situations to establish interpersonal contact) or arbitrary facial movements. The results show that attention allocation, as assessed by eye-tracking measurements, was specifically related to self-involvement regardless of the social meaning being conveyed. Arousal, as measured by pupil size, was primarily related to perceiving the virtual character's gender. In contrast, facial EMG activity was determined by the perception of socially relevant facial expressions irrespective of whom they were directed towards.
Psychophysiology | 2009
Franziska Schrammel; Sebastian Pannasch; Sven-Thomas Graupner; Andreas Mojzisch; Boris M. Velichkovsky
The present study aimed to investigate the impact of facial expression, gaze interaction, and gender on attention allocation, physiological arousal, facial muscle responses, and emotional experience in simulated social interactions. Participants viewed animated virtual characters varying in terms of gender, gaze interaction, and facial expression. We recorded facial EMG, fixation duration, pupil size, and subjective experience. Subjects' rapid facial reactions (RFRs) differentiated more clearly between the characters' happy and angry expressions in the condition of mutual eye-to-eye contact. This finding provides evidence for the idea that RFRs are not simply motor responses, but part of an emotional reaction. Eye movement data showed that fixations were longer in response to both angry and neutral faces than to happy faces, thereby suggesting that attention is preferentially allocated to cues indicating potential threat during social interaction.