Andrei Sherstyuk
Avatar Reality
Publication
Featured research published by Andrei Sherstyuk.
The Visual Computer | 1999
Andrei Sherstyuk
A comprehensive analysis of various convolution kernels is presented. Computational complexity and compatibility between the kernels and a number of modeling primitives are examined. Practical suggestions are given on how to choose a proper kernel function, with special attention to polynomial kernels. Mathematical formulations for convolved line segments are given.
symposium on 3d user interfaces | 2007
Andrei Sherstyuk; Dale Vincent; Jack Jin Hwa Lui; K.K. Connolly; Kin Lik Wang; S.M. Saiki; T.P. Caudell
Triage is a medical term that describes the process of prioritizing and delivering care to multiple casualties within a short time frame. Because of the inherent limitations of traditional methods of teaching triage, such as paper-based scenarios and the use of actors as standardized patients, computer-based simulations and virtual reality (VR) scenarios are being advocated. We present our system for VR triage, focusing on the design and development of a pose- and gesture-based interface that allows a learner to navigate in a virtual space among multiple simulated casualties. The learner is also able to manipulate virtual instruments effectively in order to complete required training tasks.
symposium on 3d user interfaces | 2012
Andrei Sherstyuk; Anton Treskunov; Marina L. Gavrilova
The narrow field of view of common head-mounted displays, coupled with the lack of adaptive camera accommodation and vergence, makes it impossible to view virtual scenes using familiar eye-head-body coordination patterns and reflexes. This impediment of natural habits is most noticeable in applications where users face multiple tasks that require frequent switching between viewing modes, from wide-range visual search to object examination at close distances. We propose a new technique for proactive control of the virtual camera that utilizes a predator-prey vision metaphor. We describe the technique, its implementation, and preliminary results.
interactive 3d graphics and games | 2012
Andrei Sherstyuk; Arindam Dey; Christian Sandor; Andrei State
In Virtual Environments (VE), users often face tasks that involve direct manipulation of virtual objects at close distances, such as touching, grabbing, and placement. In immersive systems that employ head-mounted displays, these tasks can be quite challenging due to the lack of convergence of the virtual cameras. We present a mechanism that dynamically converges the left and right cameras on target objects in VE. This mechanism automatically simulates the natural convergence that takes place in real life. As a result, the rendering system maintains optimal conditions for stereoscopic viewing of target objects at varying depths, in real time. Building on our previous work, which introduced the eye convergence algorithm [Sherstyuk and State 2010], we developed a Virtual Reality (VR) system and conducted an experimental study on the effects of eye convergence in immersive VE. This paper gives a full description of the system, the study design, and a detailed analysis of the results obtained.
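The core geometric idea behind converging stereo cameras on a target can be sketched as a simple toe-in computation: each camera is rotated inward until its optical axis passes through the fixation point. This is an illustrative reduction, not the algorithm from [Sherstyuk and State 2010]; the function name and parameters are hypothetical.

```python
import math

def convergence_angles(ipd, target_depth):
    """Toe-in angle (radians) for each camera of a stereo pair so that
    both aim at a target directly ahead at the given depth.
    ipd: interpupillary distance (same units as target_depth)."""
    if target_depth <= 0:
        raise ValueError("target depth must be positive")
    # Each camera sits ipd/2 off the central view axis; rotating it by
    # atan2(ipd/2, depth) makes its axis pass through the target point.
    angle = math.atan2(ipd / 2.0, target_depth)
    return -angle, angle  # left camera turns right, right camera turns left

# A distant target needs almost no convergence; a near one needs much more.
far_l, far_r = convergence_angles(0.065, 10.0)
near_l, near_r = convergence_angles(0.065, 0.3)
```

Recomputing these angles every frame as the fixation target changes depth is what makes the convergence dynamic rather than a fixed camera setup.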
virtual reality software and technology | 2010
Andrei Sherstyuk; Andrei State
The virtual hand metaphor remains by far the most popular technique for direct object manipulation in immersive Virtual Reality (VR). The utility of the virtual hand depends on a user's ability to see it correctly in stereoscopic 3D, especially in tasks that require continuous, precise hand-eye coordination. We present a mechanism that dynamically converges the left and right cameras on target objects in VR, mimicking the effect that naturally happens in real life. As a result, the system maintains optimal conditions for stereoscopic viewing at varying depths, in real time. We describe the algorithm, implementation details, and preliminary results from pilot tests.
IEEE Computer Graphics and Applications | 2010
Andrei Sherstyuk; Dale Vincent; Anton Treskunov
Here we describe a vision of VR games that combine the best features of gaming and VR: large, persistent worlds experienced in photorealistic settings with full immersion. For example, Figure 1 illustrates a hypothetical immersive VR game that could be developed using current technologies, including real-time, cinematic-quality graphics; a panoramic head-mounted display (HMD); and wide-area tracking. We also examine the gap between available VR and gaming technologies, and offer solutions for bridging it.
virtual reality continuum and its applications in industry | 2009
Andrei Sherstyuk; Anton Treskunov
Terrain maps, commonly used for updating the elevation values of a moving object (i.e., a traveler), may also be conveniently used for detecting and preventing collisions between the traveler and other objects in the scene. For that purpose, we project the geometry of all collidable objects onto the map and store it in a dedicated color channel. Combined with adaptive speed control, this information provides fast and reliable collision avoidance during travel, independent of scene complexity. We present implementation details of the base system for a Virtual Reality application and discuss a number of extensions.
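The approach can be sketched as a 2D grid with an extra channel marking cells occupied by collidable objects, plus a speed function that samples only the map rather than the 3D scene. This is a minimal sketch under stated assumptions; the class and method names are hypothetical, and the paper's actual system stores the data in a texture color channel.

```python
class TerrainMap:
    """Grid map: one channel for elevation, one marking collidable cells."""

    def __init__(self, width, height):
        self.width, self.height = width, height
        self.elevation = [[0.0] * width for _ in range(height)]
        self.obstacle = [[0] * width for _ in range(height)]

    def stamp_obstacle(self, x, y):
        """Project a collidable object's footprint into the obstacle channel."""
        self.obstacle[y][x] = 1

    def safe_speed(self, x, y, dx, dy, max_speed=1.0, lookahead=3):
        """Adaptive speed control: scale speed down as occupied cells
        appear ahead. Cost depends only on map lookups, so it is
        independent of 3D scene complexity."""
        for step in range(1, lookahead + 1):
            px, py = x + dx * step, y + dy * step
            if not (0 <= px < self.width and 0 <= py < self.height):
                return 0.0  # map edge: stop
            if self.obstacle[py][px]:
                # Slow down proportionally to how close the obstacle is.
                return max_speed * (step - 1) / lookahead
        return max_speed

m = TerrainMap(8, 8)
m.stamp_obstacle(4, 2)
# Traveler at (2, 2) moving +x: an obstacle two cells ahead reduces speed.
speed = m.safe_speed(2, 2, 1, 0)
```

Since the obstacle channel is precomputed by projection, the per-frame cost of travel control stays constant no matter how many polygons the collidable objects contain.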
international conference on artificial reality and telexistence | 2014
Yuki Yano; Kiyoshi Kiyokawa; Andrei Sherstyuk; Tomohiro Mashita; Haruo Takemura
Head-mounted displays (HMDs) are widely used for visual immersion in virtual reality (VR) systems. It is acknowledged that the narrow field of view (FOV) of most HMD models is a leading cause of insufficient quality of immersion, resulting in suboptimal user performance in various VR tasks as well as early fatigue. Proposed solutions to this problem range from hardware-based approaches to software enhancements of the viewing process. There exist three major techniques of view expansion: minification, i.e., rendering graphics with a larger FOV than the display's FOV; motion amplification, i.e., amplifying user head rotation to provide accelerated access to peripheral vision during wide sweeping head movements; and diverging the left and right virtual cameras outwards to increase the combined binocular FOV. Static view expansion has been reported to increase user efficiency in search and navigation tasks; however, the effectiveness of dynamic view expansion is not yet well understood. When applied, view expansion techniques modify the natural viewing process and alter familiar user reflex-response loops, which may result in motion sickness and poor user performance. Thus, it is vital to evaluate dynamic view expansion techniques in terms of task effectiveness and user workload. This paper details the dynamic view expansion techniques, the experimental settings, and the findings of the user study. In the user study, we investigate three view expansion techniques, applying them dynamically based on user behaviors. We evaluate the effectiveness of these methods quantitatively by measuring and comparing user performance and workload in a target search task. We also collect and compare qualitative feedback from the subjects in the experiment. Experimental results show that certain levels of minification and motion amplification increase performance by 8.2% and 6.0%, respectively, with comparable or even decreased subjective workload.
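The first technique above, minification, maps a wider rendering FOV onto the physical display FOV, which shrinks on-axis content by a factor derived from the half-angle tangents of a symmetric perspective frustum. The sketch below illustrates that relationship only; the function name is hypothetical and the paper's actual implementation details differ.

```python
import math

def minification_factor(render_fov_deg, display_fov_deg):
    """Approximate on-axis scale factor when content rendered with
    render_fov_deg is shown on a display whose physical FOV is
    display_fov_deg. Values below 1.0 mean the content appears
    smaller (minified) than in a matched-FOV rendering."""
    r = math.tan(math.radians(render_fov_deg / 2.0))
    d = math.tan(math.radians(display_fov_deg / 2.0))
    return d / r

# Rendering a 90-degree FOV into a 60-degree display shrinks content:
f = minification_factor(90.0, 60.0)  # below 1.0
```

Dynamic minification would adjust `render_fov_deg` frame by frame based on user behavior (e.g., widening it during visual search), rather than applying one fixed factor.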
ieee virtual reality conference | 2013
Andrei Sherstyuk; Anton Treskunov
Mouse, joystick, and keyboard controls in 3D games have long since become second nature for generations of gamers. Recent advances in webcam-based tracking technologies have made it easy to bring natural human motions and gestures into play. However, video tracking is a CPU-intensive process, which may have a negative impact on game performance. We measured this impact for different types of 3D content and found it to be minimal on multi-core platforms. We provide details on the implementation and evaluation of our system. We also suggest several examples of how natural motion can be used in 3D games.
The Visual Computer | 2011
Andrei Sherstyuk; Caroline Jay; Anton Treskunov
The ability to locate, select, and interact with objects is fundamental to most Virtual Reality (VR) applications. Recently, it was demonstrated that the virtual hand metaphor, a technique commonly used for these tasks, can also be employed to control the virtual camera, resulting in improved performance and user evaluation in visual search tasks. In this work, we further investigate the effects of hand-assisted viewing on user behavior in immersive virtual environments. We demonstrate that hand-assisted camera control significantly changes the way people operate their virtual hands, on motor, cognitive, and behavioral levels.