Mahdi Azmandian
University of Southern California
Publication
Featured research published by Mahdi Azmandian.
Human Factors in Computing Systems | 2016
Mahdi Azmandian; Mark S. Hancock; Hrvoje Benko; Eyal Ofek; Andrew D. Wilson
Manipulating a virtual object with appropriate passive haptic cues provides a satisfying sense of presence in virtual reality. However, scaling such experiences to support multiple virtual objects is a challenge, as each one needs to be accompanied by a precisely located haptic proxy object. We propose a solution that overcomes this limitation by hacking human perception. We have created a framework for repurposing passive haptics, called haptic retargeting, that leverages the dominance of vision when our senses conflict. With haptic retargeting, a single physical prop can provide passive haptics for multiple virtual objects. We introduce three approaches for dynamically aligning physical and virtual objects: world manipulation, body manipulation, and a hybrid technique that combines both. Our study results indicate that all our haptic retargeting techniques improve the sense of presence when compared to typical wand-based 3D control of virtual objects. Furthermore, our hybrid haptic retargeting achieved the highest satisfaction and presence scores while limiting the visible side effects during interaction.
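The body-manipulation (hand-warping) idea can be sketched as a simple interpolation: the rendered hand is offset toward the virtual target as the real hand approaches the physical prop, so both arrive at the same moment. This is a minimal illustrative sketch, not the paper's implementation; the function name and linear ramp are assumptions.

```python
import numpy as np

def warp_hand(real_hand, reach_start, physical_prop, virtual_target):
    """Body-warping sketch: gradually offset the rendered (virtual) hand
    so that when the real hand touches the physical prop, the virtual
    hand touches the virtual target."""
    total = np.linalg.norm(physical_prop - reach_start)
    remaining = np.linalg.norm(physical_prop - real_hand)
    # warp weight ramps from 0 (at the start of the reach) to 1 (on contact)
    alpha = 1.0 if total == 0 else float(np.clip(1.0 - remaining / total, 0.0, 1.0))
    return real_hand + alpha * (virtual_target - physical_prop)
```

Because vision dominates proprioception for small offsets, the user attributes the warped hand position to her own motion, which is what lets one prop serve several virtual objects.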
ACM Symposium on Applied Perception | 2016
Timofey Grechkin; Jerald Thomas; Mahdi Azmandian; Mark T. Bolas; Evan A. Suma
Redirected walking enables the exploration of large virtual environments while requiring only a finite amount of physical space. Unfortunately, in living-room-sized tracked areas the effectiveness of common redirection algorithms such as Steer-to-Center is very limited. A potential solution is to increase redirection effectiveness by applying two types of perceptual manipulations (curvature and translation gains) simultaneously. This paper investigates how such a combination may affect detection thresholds for curvature gain. To this end, we analyze the estimation methodology and discuss the selection process for a suitable estimation method. We then compare curvature detection thresholds obtained under different levels of translation gain using two different estimation methods: the method of constant stimuli and Green's maximum likelihood procedure. The data from both experiments show no evidence that curvature gain detection thresholds were affected by the presence of translation gain (with test levels spanning the previously estimated interval of undetectable translation gain levels). This suggests that, in practice, currently used levels of translation and curvature gains can be safely applied simultaneously. Furthermore, we present some evidence that curvature detection thresholds may be lower than previously reported. Our estimates indicate that users can be redirected on a circular arc with a radius of either 11.6 m or 6.4 m, depending on the estimation method, vs. the previously reported value of 22 m. These results highlight that detection threshold estimates vary significantly with the estimation method and suggest the need for further studies to define an efficient and reliable estimation methodology.
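The relation between a curvature gain and the radius of the physical arc it produces follows from circle geometry: walking a full circumference accumulates 360° of injected rotation. A small sketch, assuming the common degrees-per-meter parameterization of curvature gain (the function names are illustrative):

```python
import math

def arc_radius(curvature_deg_per_m):
    """Radius (m) of the physical circle a user walks when rotation is
    injected at `curvature_deg_per_m` degrees per meter of walking."""
    return 360.0 / (2.0 * math.pi * curvature_deg_per_m)

def curvature_gain(radius_m):
    """Inverse: degrees of injected rotation per meter of walking that
    steer the user along a circle of the given radius."""
    return 360.0 / (2.0 * math.pi * radius_m)
```

Under this convention, the previously reported 22 m radius corresponds to roughly 2.6°/m of injected rotation, while the 11.6 m estimate corresponds to roughly 4.9°/m.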
Agents and Data Mining Interaction | 2012
Mahdi Azmandian; Karan Singh; Ben Gelsey; Yu-Han Chang; Rajiv T. Maheswaran
The availability of location-based agent data is growing rapidly, enabling new research into the behavior patterns of such agents in space and time. Previously, such analysis was limited to either small experiments with GPS-equipped agents, or proprietary datasets of human cell phone users that cannot be disseminated across the academic community for follow-up studies. In this paper, we study the movement patterns of Twitter users in London, Los Angeles, and Tokyo. We cluster these agents by their movement patterns across space and time. We also show that it is possible to infer part of the underlying transportation network from Tweets alone, and uncover interesting differences between the behaviors exhibited by users across these three cities.
IEEE Virtual Reality Conference | 2017
Mahdi Azmandian; Timofey Grechkin; Evan Suma Rosenberg
As the focus of virtual reality technology shifts from single-person experiences to multi-user interactions, it becomes increasingly important to accommodate multiple co-located users within a shared real-world space. For locomotion and navigation, the introduction of multiple users moving both virtually and physically creates additional challenges related to potential user-to-user collisions. In this work, we focus on defining the extent of these challenges in order to apply redirected walking to two users immersed in virtual reality experiences within a shared physical tracked space. Using a computer simulation framework, we explore the costs and benefits of splitting the available physical space between users versus attempting to algorithmically prevent user-to-user collisions. We also explore fundamental components of collision prevention, such as steering the users away from each other, forced stopping, and user re-orientation. Each component was analyzed for the number of potential disruptions to the flow of the virtual experience. We also develop a novel collision prevention algorithm that reduces overall interruptions by 17.6% and collision prevention events by 58.3%. Our results show that sharing space using our collision prevention method is superior to subdividing the tracked space.
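A collision-prevention component needs some trigger for steering or forced stops. One standard heuristic (a sketch under assumed conventions, not the paper's algorithm) is a constant-velocity closest-approach test between the two users' physical trajectories:

```python
import numpy as np

def time_of_closest_approach(p1, v1, p2, v2):
    """Time (>= 0) at which two users, each moving at constant velocity,
    are nearest to each other."""
    dp, dv = p2 - p1, v2 - v1
    denom = float(dv.dot(dv))
    return 0.0 if denom == 0.0 else max(0.0, -float(dp.dot(dv)) / denom)

def collision_imminent(p1, v1, p2, v2, user_radius=0.5, horizon_s=3.0):
    """Flag a potential user-to-user collision: the pair will pass within
    two body radii of each other inside the look-ahead horizon."""
    t = time_of_closest_approach(p1, v1, p2, v2)
    if t > horizon_s:
        return False
    gap = np.linalg.norm((p1 + t * v1) - (p2 + t * v2))
    return gap < 2.0 * user_radius
```

When the flag fires, a prevention policy can escalate from steering the users apart to a forced stop or re-orientation, each of which counts as a disruption in the simulation analysis described above.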
2016 IEEE 2nd Workshop on Everyday Virtual Reality (WEVR) | 2016
Mahdi Azmandian; Timofey Grechkin; Mark T. Bolas; Evan A. Suma
With the imminent emergence of low-cost tracking solutions, everyday VR users will soon experience the enhanced immersion of natural walking. Even with consumer-grade room-scale tracking, exploring large virtual environments can be made possible using a software solution known as redirected walking. Wide adoption of this technique has been hindered by the complexity and subtleties involved in successfully deploying redirection. To address this matter, we introduce the Redirected Walking Toolkit, which serves as a unified platform for developing, benchmarking, and deploying redirected walking algorithms. Our design enables seamless integration with standard virtual reality configurations, requiring minimal setup effort for content developers. The toolkit's flexible architecture offers an interface that is not only easy to extend but also complemented by a suite of simulation tools for testing and analysis. We envision the Redirected Walking Toolkit becoming a common testbed for VR researchers as well as a publicly available tool for exploring large virtual environments in virtual reality applications.
IEEE Virtual Reality Conference | 2014
Mahdi Azmandian; Rhys Yahata; Mark T. Bolas; Evan A. Suma
Redirected walking techniques enable natural locomotion through immersive virtual environments that are considerably larger than the available real world walking space. However, the most effective strategy for steering the user remains an open question, as most previously presented algorithms simply redirect toward the center of the physical space. In this work, we present a theoretical framework that plans a walking path through a virtual environment and calculates the parameters for combining translation, rotation, and curvature gains such that the user can traverse a series of defined waypoints efficiently based on a utility function. This function minimizes the number of overt reorientations to avoid introducing potential breaks in presence. A notable advantage of this approach is that it leverages knowledge of the layout of both the physical and virtual environments to enhance the steering strategy.
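The combination of translation, rotation, and curvature gains amounts to remapping each frame of real motion into virtual motion. A minimal per-frame sketch, assuming a 2D pose, a radians heading, and a degrees-per-meter curvature convention (names and conventions are illustrative, not the paper's code):

```python
import math

def apply_gains(vx, vy, vheading, real_step, real_turn,
                g_t=1.0, g_r=1.0, g_c=0.0):
    """One simulation frame: map a real step (m) and real turn (rad)
    into virtual motion under translation (g_t), rotation (g_r), and
    curvature (g_c, degrees per meter) gains."""
    # rotation gain scales the user's own turning; curvature gain injects
    # extra rotation proportional to the distance walked
    vheading += g_r * real_turn + math.radians(g_c) * real_step
    vx += g_t * real_step * math.cos(vheading)
    vy += g_t * real_step * math.sin(vheading)
    return vx, vy, vheading
```

A planner of the kind described above would choose the gain values applied between consecutive waypoints so that the resulting physical path stays inside the tracked space with as few overt reorientations as possible.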
Symposium on 3D User Interfaces | 2016
Mahdi Azmandian; Timofey Grechkin; Mark T. Bolas; Evan A. Suma
Redirected walking techniques have been introduced to overcome physical space limitations for natural locomotion in virtual reality. These techniques decouple real and virtual user trajectories by subtly steering the user away from the boundaries of the physical space while maintaining the illusion that the user follows the intended virtual path. The effectiveness of redirection algorithms can improve significantly when a reliable prediction of the user's future virtual path is available. In current solutions, the future user trajectory is predicted based on non-standardized manual annotations of the environment's structure, which is both tedious and inflexible. We propose a method for automatically generating environment annotation graphs and predicting the user trajectory using navigation meshes. We discuss the integration of this method with existing redirected walking algorithms such as FORCE and MPCRed. Automated annotation of the virtual environment's structure enables simplified deployment of these algorithms in any virtual environment.
Human Factors in Computing Systems | 2016
Mahdi Azmandian; Mark S. Hancock; Hrvoje Benko; Eyal Ofek; Andrew D. Wilson
Manipulating a virtual object with appropriate passive haptic cues provides a satisfying sense of presence in virtual reality. However, scaling such experiences to support multiple virtual objects is a challenge, as each one needs to be accompanied by a precisely located haptic proxy object. We showcase a solution that overcomes this limitation by hacking human perception. Our framework for repurposing passive haptics, called haptic retargeting, leverages the dominance of vision when our senses conflict. With haptic retargeting, a single physical prop can provide passive haptics for multiple virtual objects. We introduce three approaches for dynamically aligning physical and virtual objects: body manipulation, world manipulation, and a hybrid technique that combines both world and body warping. This video accompanies our CHI paper.
ACM Symposium on Applied Perception | 2014
Mahdi Azmandian; Mark T. Bolas; Evan A. Suma
Redirected walking is a technique that leverages characteristics of human perception to allow locomotion in virtual environments larger than the tracking area. Among the many redirection techniques, some depend strictly on the user's current position and orientation, while more recent algorithms also depend on the user's predicted behavior. This prediction serves as an input to a computationally expensive search to determine an optimal path. The search output is formulated as a series of gains to be applied at different stages along the path. For example, if a user is walking down a corridor, a natural prediction would be that she will walk along a straight line down the corridor and then choose among the possible directions ahead with equal probability. In practice, deviations from the expected virtual path are inevitable, and as a result, the real-world path traversed will differ from the original prediction. These deviations can not only force the search to select a less optimal path in the next iteration, but also in some cases cause the user to go out of bounds, requiring resets that create a jarring experience. We propose a method to account for these deviations by modifying the redirection gains at each update frame, aiming to keep the user on the intended predicted physical path.
Proceedings of the 2016 ACM International Conference on Interactive Surfaces and Spaces | 2016
Mahdi Azmandian; Mark S. Hancock; Hrvoje Benko; Eyal Ofek; Andrew D. Wilson
Manipulating a virtual object with appropriate passive haptic cues provides a satisfying sense of presence in virtual reality. However, scaling such experiences to support multiple virtual objects is a challenge, as each one needs to be accompanied by a precisely located haptic proxy object. We showcase a solution that overcomes this limitation by hacking human perception. Our framework for repurposing passive haptics, called haptic retargeting, leverages the dominance of vision when our senses conflict. With haptic retargeting, a single physical prop can provide passive haptics for multiple virtual objects. We introduce three approaches for dynamically aligning physical and virtual objects: body manipulation, world manipulation, and a hybrid technique that combines both world and body manipulation. This demo was presented previously at CHI 2016.