Arun Kulshreshth
University of Louisiana at Lafayette
Publications
Featured research published by Arun Kulshreshth.
Foundations of Digital Games | 2012
Arun Kulshreshth; Jonas Schild; Joseph J. LaViola
We present a study that investigates user performance benefits of playing video games using 3D motion controllers in 3D stereoscopic vision in comparison to monoscopic viewing. Using the PlayStation 3 game console coupled with the PlayStation Move Controller, we explored five different games that combine 3D stereo and 3D spatial interaction. For each game, quantitative and qualitative measures were taken to determine if users performed better and learned faster in the experimental group (3D stereo display) than in the control group (2D display). A game expertise pre-questionnaire was used to classify participants into beginners and expert game player categories to analyze a possible impact on performance differences. The results show two cases where the 3D stereo display did help participants perform significantly better than with a 2D display. For the first time, we can report a positive effect on gaming performance based on stereoscopic vision, although reserved to isolated tasks and depending on game expertise. We discuss the reasons behind these findings and provide recommendations for game designers who want to make use of 3D stereoscopic vision and 3D motion control to enhance game experiences.
Symposium on 3D User Interfaces | 2013
Arun Kulshreshth; Christopher Zorn; Joseph J. LaViola
Hand gestures are an intuitive way to interact with a variety of user interfaces. We developed a real-time finger tracking technique using the Microsoft Kinect as an input device and compared its results with an existing technique based on the K-curvature algorithm. Our technique calculates feature vectors based on Fourier descriptors of equidistant points chosen on the silhouette of the detected hand and uses template matching to find the best match. Our preliminary results show that our technique performed as well as the existing K-curvature-based finger detection technique.
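The abstract's pipeline (resample the silhouette to equidistant points, compute Fourier descriptors, match against templates) can be sketched as follows. This is an illustrative reconstruction, not the paper's actual implementation; the point counts, coefficient count, and nearest-neighbor matching are assumptions.

```python
import numpy as np

def fourier_descriptors(contour, n_points=64, n_coeffs=10):
    """Compute Fourier descriptors for a hand-silhouette contour of (x, y) points,
    resampled to n_points equidistant points along the boundary."""
    contour = np.asarray(contour, dtype=float)
    # Arc-length parameterization, then resample to equidistant points.
    d = np.cumsum(np.r_[0, np.linalg.norm(np.diff(contour, axis=0), axis=1)])
    t = np.linspace(0, d[-1], n_points, endpoint=False)
    xs = np.interp(t, d, contour[:, 0])
    ys = np.interp(t, d, contour[:, 1])
    z = xs + 1j * ys                       # boundary as a complex signal
    coeffs = np.fft.fft(z)
    coeffs[0] = 0                          # drop DC term -> translation invariance
    mags = np.abs(coeffs)[1:n_coeffs + 1]  # magnitudes -> rotation invariance
    return mags / mags[0]                  # normalize -> scale invariance

def best_match(descriptor, templates):
    """Template matching: label of the nearest stored template descriptor."""
    return min(templates, key=lambda lbl: np.linalg.norm(descriptor - templates[lbl]))
```

A scaled and translated copy of a shape produces (nearly) the same descriptor, so template matching reduces to a nearest-neighbor lookup in descriptor space.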
Symposium on Spatial User Interaction | 2013
Arun Kulshreshth; Joseph J. LaViola
We present a study that investigates user performance benefits of using head tracking in modern video games. We explored four different carefully chosen commercial games with tasks which can potentially benefit from head tracking. For each game, quantitative and qualitative measures were taken to determine if users performed better and learned faster in the experimental group (with head tracking) than in the control group (without head tracking). A game expertise pre-questionnaire was used to classify participants into casual and expert categories to analyze a possible impact on performance differences. Our results indicate that head tracking provided a significant performance benefit for experts in two of the games tested. In addition, our results indicate that head tracking is more enjoyable for slow paced video games and it potentially hurts performance in fast paced modern video games. Reasoning behind our results is discussed and is the basis for our recommendations to game developers who want to make use of head tracking to enhance game experiences.
Human Factors in Computing Systems | 2014
Arun Kulshreshth; Joseph J. LaViola
Counting using one's fingers is a potentially intuitive way to enumerate a list of items and lends itself naturally to gesture-based menu systems. In this paper, we present the results of the first comprehensive study on Finger-Count menus to investigate their usefulness as a viable option for 3D menu selection tasks. Our study compares 3D gesture-based finger counting (Finger-Count menus) with two gesture-based menu selection techniques (Hand-n-Hold, Thumbs-Up), derived from existing motion-controlled video game menu selection strategies, as well as 3D Marking menus. We examined selection time, selection accuracy, and user preference for all techniques. We also examined the impact of different spatial layouts for menu items and different menu depths. Our results indicate that Finger-Count menus are significantly faster than the other menu techniques we tested and are the most liked by participants. Additionally, we found that while Finger-Count menus and 3D Marking menus have similar selection accuracy, Finger-Count menus are almost twice as fast as 3D Marking menus.
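The core of a Finger-Count menu is mapping a detected finger count directly to a menu item. A minimal sketch of one plausible selection loop is shown below; the dwell-time confirmation, sample-based input, and parameter names are assumptions for illustration, not the study's actual design.

```python
def select_menu_item(finger_count_samples, items, dwell_frames=30):
    """Hypothetical Finger-Count selection: the item whose index matches the
    detected finger count (1..len(items)) is chosen once that count is held
    steadily for dwell_frames consecutive samples."""
    held = 0
    last = None
    for count in finger_count_samples:
        if count == last and 1 <= count <= len(items):
            held += 1
            if held >= dwell_frames:
                return items[count - 1]  # fingers map directly to item index
        else:
            held = 1 if 1 <= count <= len(items) else 0
            last = count
    return None  # no stable, valid count observed
```

Because the count itself is the selection, there is no pointing or traversal step, which is consistent with the speed advantage the study reports over Hand-n-Hold and 3D Marking menus.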
Human Factors in Computing Systems | 2016
Arun Kulshreshth; Joseph J. LaViola
Most modern stereoscopic 3D applications use fixed stereoscopic 3D parameters (separation and convergence) to render the scene on a 3D display. However, keeping these parameters fixed during usage does not always provide the best experience, since it can reduce the amount of depth perception possible in applications with large variability in object distances. We developed two stereoscopic rendering techniques which actively vary the stereo parameters based on the scene content. Our first algorithm calculates a low-resolution depth map of the scene and chooses ideal stereo parameters based on that depth map. Our second algorithm uses eye tracking data to get the gaze direction of the user and chooses ideal stereo parameters based on the distance of the gazed object. We evaluated our techniques in an experiment that uses three depth judgment tasks: depth ranking, relative depth judgment, and path tracing. Our results indicate that variable stereo parameters provide enhanced depth discrimination compared to static parameters and were preferred by our participants over the traditional fixed-parameter approach. We discuss our findings and possible implications for the design of future stereoscopic 3D applications.
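The two variants (depth-map-driven and gaze-driven) can be sketched in a few lines. The paper's actual formulas are not reproduced here; the comfort budget `MAX_DISPARITY`, the median-depth fallback, and the separation scaling are assumed for illustration only.

```python
import numpy as np

# Assumed comfort budget for on-screen disparity, as a fraction of screen width.
MAX_DISPARITY = 0.03

def dynamic_stereo_params(depth_map, gaze_depth=None):
    """Pick per-frame convergence/separation from scene depth.

    depth_map  : low-resolution array of view-space depths for the frame
    gaze_depth : depth of the gazed-at object (eye-tracking variant); if None,
                 fall back to the depth-map variant using a robust scene depth.
    """
    depths = np.asarray(depth_map, dtype=float).ravel()
    near, far = depths.min(), depths.max()
    # Converge at the gazed object (variant 2) or at the median depth (variant 1).
    convergence = gaze_depth if gaze_depth is not None else float(np.median(depths))
    # Shrink separation when the scene spans a large depth range so the disparity
    # of the nearest/farthest objects stays within the comfort budget.
    depth_range = max(far - near, 1e-6)
    separation = MAX_DISPARITY * convergence / depth_range
    return convergence, separation
```

The design intuition is the same in both variants: converge where the user is (or is likely) looking, and trade separation against the scene's depth span so extreme disparities never exceed a fixed budget.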
Human Factors in Computing Systems | 2015
Arun Kulshreshth; Joseph J. LaViola
We present the results of a comprehensive video game study which explores how the gaming experience is affected when several 3D user interface technologies are used simultaneously. We custom designed an air-combat game integrating several 3DUI technologies (stereoscopic 3D, head tracking, and finger-count gestures) and studied the combined effect of these technologies on the gaming experience. Our game design was based on existing design principles for optimizing the usage of these technologies in isolation. Additionally, to enhance depth perception and minimize visual discomfort, the game dynamically optimizes stereoscopic 3D parameters (convergence and separation) based on the user's look direction. We conducted a within-subjects experiment where we examined performance data and self-reported data on users' perception of the game. Our results indicate that participants performed significantly better when all the 3DUI technologies (stereoscopic 3D, head tracking, and finger-count gestures) were available simultaneously, with head tracking as a dominant factor. We explore the individual contribution of each of these technologies to the overall gaming experience and discuss the reasons behind our findings.
Human Factors in Computing Systems | 2015
Arun Kulshreshth; Joseph J. LaViola
Most modern stereoscopic 3D applications (e.g., video games) use optimal but fixed stereoscopic 3D parameters (separation and convergence) to render the scene on a 3D display. However, keeping these parameters fixed does not provide the best possible experience, since it can reduce depth discrimination. We present two scenarios where depth discrimination could be enhanced using dynamic adjustments to the separation and convergence parameters based on the user's look direction obtained from head tracking data.
IEEE Computer Graphics and Applications | 2017
Arun Kulshreshth; Kevin Pfeil; Joseph J. LaViola
Three-dimensional (3D) spatial user interface technologies have the potential to make games more immersive and engaging and thus provide a better user experience. Although technologies such as stereoscopic 3D display, head tracking, and gesture-based control are available for games, it is still unclear how their use affects gameplay and if there are any user performance benefits. The authors have conducted several experiments on these technologies in game environments to understand how they affect gameplay and how we can use them to optimize the gameplay experience.
ACM Symposium on Applied Perception | 2018
Kevin Pfeil; Eugene M. Taranta; Arun Kulshreshth; Pamela J. Wisniewski; Joseph J. LaViola
Past research has shown that humans exhibit certain eye-head responses to the appearance of visual stimuli, and these natural reactions change during different activities. Our work builds upon these past observations by offering new insight into how humans behave in Virtual Reality (VR) compared to Physical Reality (PR). Using eye- and head-tracking technology, and by conducting a study on two groups of users (participants in VR or PR), we identify how often these natural responses are observed in both environments. We find that users move their heads statistically significantly more often when viewing stimuli in VR than in PR, and VR users also move their heads more in the presence of text. We open a discussion for identifying the HWD factors that cause this difference, as it may not only affect predictive models using eye movements as features, but also the VR user experience overall.
Archive | 2018
Arun Kulshreshth; Joseph J. LaViola
This chapter discusses related work on head gesture recognition and the use of head tracking in several applications, including games and virtual reality. We present an experiment which systematically explores the effects of head tracking in the complex gaming environments typically found in commercial video games. This experiment seeks to determine whether head tracking provides any performance benefits in games and how it affects the user experience. We present the results of this experiment along with guidelines for game designers who wish to use head tracking in games.