Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Jason Jerald is active.

Publication


Featured research published by Jason Jerald.


IEEE Transactions on Visualization and Computer Graphics | 2010

Estimation of Detection Thresholds for Redirected Walking Techniques

Frank Steinicke; Gerd Bruder; Jason Jerald; Harald Frenz; Markus Lappe

In immersive virtual environments (IVEs), users can control their virtual viewpoint by moving their tracked head and walking through the real world. Usually, movements in the real world are mapped one-to-one to virtual camera motions. With redirection techniques, the virtual camera is manipulated by applying gains to user motion so that the virtual world moves differently than the real world. Thus, users can walk through large-scale IVEs while physically remaining in a reasonably small workspace. In psychophysical experiments with a two-alternative forced-choice task, we have quantified how much humans can unknowingly be redirected on physical paths that are different from the visually perceived paths. We tested 12 subjects in three different experiments: (E1) discrimination between virtual and physical rotations, (E2) discrimination between virtual and physical straightforward movements, and (E3) discrimination of path curvature. In experiment E1, subjects performed rotations with different gains and then had to choose whether the visually perceived rotation was smaller or greater than the physical rotation. In experiment E2, subjects chose whether the physical walk was shorter or longer than the visually perceived scaled travel distance. In experiment E3, subjects estimated the path curvature while walking a curved path in the real world as the visual display showed a straight path in the virtual world. Our results show that users can be turned physically about 49 percent more or 20 percent less than the perceived virtual rotation, distances can be downscaled by 14 percent and upscaled by 26 percent, and users can be redirected on a circular arc with a radius greater than 22 m while they believe that they are walking straight.
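The gains described above lend themselves to a compact per-frame mapping. Below is a minimal Python sketch, not the authors' implementation; the function name and defaults are illustrative. Taking gain as virtual motion divided by real motion, the reported thresholds correspond to rotation gains of roughly 1/1.49 to 1/0.80 (about 0.67 to 1.25), translation gains of 0.86 to 1.26, and a curvature radius of at least 22 m.

    def redirect_frame(real_yaw_delta, real_step,
                       g_rot=1.0, g_trans=1.0, curvature_radius_m=22.0):
        """Map one frame of tracked real motion to virtual camera motion.

        real_yaw_delta: physical head yaw this frame (radians)
        real_step: physical distance walked this frame (meters)
        """
        virtual_yaw = g_rot * real_yaw_delta   # rotation gain: scale head turns
        virtual_step = g_trans * real_step     # translation gain: scale distance walked
        # Curvature gain: inject yaw proportional to distance walked; the user
        # compensates by walking a physical arc of the given radius.
        virtual_yaw += real_step / curvature_radius_m
        return virtual_yaw, virtual_step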


Virtual Reality Software and Technology | 2008

Analyses of human sensitivity to redirected walking

Frank Steinicke; Gerd Bruder; Jason Jerald; Harald Frenz; Markus Lappe

Redirected walking allows users to walk through large-scale immersive virtual environments (IVEs) while physically remaining in a reasonably small workspace by intentionally injecting scene motion into the IVE. In a constant-stimuli experiment with a two-alternative forced-choice task, we quantified how much humans can unknowingly be redirected on virtual paths that differ from the paths they actually walk. Eighteen subjects were tested in four different experiments: (E1a) discrimination between virtual and physical rotation, (E1b) discrimination between two successive rotations, (E2) discrimination between virtual and physical translation, and discrimination of walking direction (E3a) without and (E3b) with start-up. In experiment E1a, subjects performed rotations to which different gains were applied and then had to choose whether the visually perceived rotation was greater than the physical rotation. In experiment E1b, subjects discriminated between two successive rotations in which different gains were applied to the physical rotation. In experiment E2, subjects chose whether they thought the physical walk was longer than the visually perceived scaled travel distance. In experiments E3a and E3b, subjects walked a straight path in the IVE that was physically bent to the left or to the right, and they estimated the direction of the curvature. In experiment E3a the gain was applied immediately, whereas in experiment E3b it was applied after a start-up of two meters. Our results show that users can be turned physically about 68% more or 10% less than the perceived virtual rotation, distances can be up- or down-scaled by 22%, and users can be redirected on a circular arc with a radius greater than 24 meters while they believe they are walking straight.
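For context, detection thresholds in a constant-stimuli 2AFC design are typically derived by fitting a psychometric function to the response proportions. The sketch below is illustrative only (the pooled data are made up, and this is not the study's analysis code): it fits a logistic curve and reads off the point of subjective equality at 50% and detection thresholds at 25% and 75%.

    import numpy as np
    from scipy.optimize import curve_fit

    def logistic(g, pse, slope):
        """P('virtual seemed greater') as a function of the applied gain g."""
        return 1.0 / (1.0 + np.exp(-(g - pse) / slope))

    # Hypothetical pooled responses per tested gain (placeholder values).
    gains = np.array([0.6, 0.8, 1.0, 1.2, 1.4])
    p_greater = np.array([0.05, 0.20, 0.55, 0.85, 0.95])

    (pse, slope), _ = curve_fit(logistic, gains, p_greater, p0=[1.0, 0.1])
    lower = pse + slope * np.log(0.25 / 0.75)  # gain judged 'greater' 25% of the time
    upper = pse + slope * np.log(0.75 / 0.25)  # gain judged 'greater' 75% of the time
    print(f"PSE = {pse:.2f}, detection thresholds ~ [{lower:.2f}, {upper:.2f}]")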


Applied Perception in Graphics and Visualization | 2008

Sensitivity to scene motion for phases of head yaws

Jason Jerald; Tabitha C. Peck; Frank Steinicke

In order to better understand how scene motion is perceived in immersive virtual environments and to provide guidelines for designing more usable systems, we measured sensitivity to scene motion for different phases of quasi-sinusoidal head yaw motions. We measured and compared scene-velocity thresholds for nine subjects across three conditions: visible With head rotation (W), where the scene is presented during the center part of sinusoidal head yaws and moves in the same direction the head is rotating; visible Against head rotation (A), where the scene is presented during the center part of sinusoidal head yaws and moves in the opposite direction the head is rotating; and visible at the Edge of head rotation (E), where the scene is presented at the extreme of sinusoidal head yaws and moves during the time that head direction changes. The W condition had a significantly higher threshold (decreased sensitivity) than both the E and A conditions. The median threshold for the W condition was 2.1 times that of the A condition and 1.5 times that of the E condition. We did not find a significant difference between the E and A conditions, although there was a trend for the A thresholds to be less than the E thresholds; an equivalence test showed the A and E thresholds to be statistically equivalent. Our results suggest the phase of a user's head yaw should be taken into account when inserting additional scene motion into immersive virtual environments if one does not want users to perceive that motion. In particular, there is much more latitude for artificially and imperceptibly rotating a scene, as in Razzaque's redirected walking technique, in the same direction as head yaw than against the direction of yaw. The implication for maximum end-to-end latency in a head-mounted display is that users are less likely to notice latency when beginning a head yaw (when the scene moves with the head) than when slowing down a head yaw (when the scene moves against the head) or when changing head direction (when the head is nearly still and scene motion due to latency is maximized).
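One practical reading of these results is a phase-aware budget for injected scene rotation. The following sketch is speculative: the function, its parameters, and the use of the reported median 2.1x W-to-A ratio as a budget multiplier are assumptions for illustration, not taken from the paper.

    import math

    def scene_rotation_limits(head_yaw_velocity, base_threshold):
        """Signed scene-rotation-rate limits (same angular units as the
        inputs) intended to stay below perception: one for scene motion in
        the same direction as the current head yaw, one for the opposite."""
        s = math.copysign(1.0, head_yaw_velocity)
        with_limit = s * 2.1 * base_threshold  # W condition: ~2.1x more latitude
        against_limit = -s * base_threshold    # A condition: baseline threshold
        return with_limit, against_limit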


Cyberworlds | 2008

Taxonomy and Implementation of Redirection Techniques for Ubiquitous Passive Haptic Feedback

Frank Steinicke; Gerd Bruder; Luv Kohli; Jason Jerald; Klaus H. Hinrichs

Traveling through immersive virtual environments (IVEs) by means of real walking is an important activity for increasing the naturalness of VR-based interaction. However, the size of the virtual world often exceeds the size of the tracked space, so a straightforward implementation of omni-directional and unlimited walking is not possible. Redirected walking is one concept for solving this problem of walking in IVEs by inconspicuously guiding the user on a physical path that may differ from the path the user visually perceives. When the user approaches a virtual object, she can be redirected to a real proxy object that is registered to the virtual counterpart and provides passive haptic feedback. In such passive haptic environments, any number of virtual objects can be mapped to proxy objects having similar haptic properties, e.g., size, shape, and texture. The user can sense a virtual object by touching its real-world counterpart. Redirecting a user to a registered proxy object makes it necessary to predict the user's intended position in the IVE. Based on this target position, we determine a path through the physical space such that the user is guided to the registered proxy object. We present a taxonomy of possible redirection techniques that enable user guidance such that inconsistencies between visual and proprioceptive stimuli are imperceptible. We describe how a user's target in the virtual world can be predicted reliably and how a corresponding real-world path to the registered proxy object can be derived.
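The steering step can be pictured with a rough sketch under simplifying assumptions (2D positions, a single predicted target already mapped to one proxy object; the names and the 22 m clamp, borrowed from related detection-threshold results, are illustrative rather than the paper's method):

    import math

    def steering_yaw_per_meter(user_pos, user_heading, proxy_pos,
                               max_curvature=1.0 / 22.0):
        """Virtual yaw to inject per meter walked (radians/m) so the user's
        real path bends toward the proxy, clamped so the physical path stays
        on a circle of radius >= 22 m."""
        dx = proxy_pos[0] - user_pos[0]
        dy = proxy_pos[1] - user_pos[1]
        bearing = math.atan2(dy, dx)
        # Signed heading error wrapped into [-pi, pi)
        error = (bearing - user_heading + math.pi) % (2.0 * math.pi) - math.pi
        return max(-max_curvature, min(max_curvature, error))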


Symposium on 3D User Interfaces | 2012

Comparison of a two-handed interface to a wand interface and a mouse interface for fundamental 3D tasks

Udo Schultheis; Jason Jerald; Fernando Toledo; Arun Yoganandan; Paul Mlyniec

This paper describes a two-handed interface that enables intuitive interaction with 3D multimedia environments. Two user studies demonstrated the effectiveness of the two-handed interface in fundamental 3D object manipulation and viewpoint manipulation tasks. Trained participants docked and constructed 3D objects 4.5 to 4.7 times as fast as with a standard mouse interface and 1.3 to 2.5 times as fast as with a standard one-handed wand interface. Nineteen of 20 participants preferred the two-handed interface over the mouse and wand interfaces; 16 felt very comfortable with the two-handed interface and 4 felt comfortable. No statistically significant differences in performance were found between monoscopic and stereoscopic displays, although 17 of 20 participants preferred the stereoscopic display over the monoscopic display.


Presence: Teleoperators & Virtual Environments | 2010

Lessons about virtual environment software systems from 20 years of VE building

Russell M. Taylor; Jason Jerald; Chris VanderKnyff; Jeremy D. Wendt; David Borland; David Marshburn; William R. Sherman

What are desirable and undesirable features of virtual environment (VE) software architectures? What should be present (and absent) from such systems if they are to be optimally useful? How should they be structured? In order to help answer these questions, we present experience from application designers, toolkit designers, and VE system architects along with examples of useful features from existing systems. Topics are organized under the major headings of 3D space management, supporting display hardware, interaction, event management, time management, computation, portability, and the observation that less can be better. Lessons learned are presented as discussion of the issues, field experiences, nuggets of knowledge, and case studies.


IEEE Virtual Reality Conference | 2009

Relating Scene-Motion Thresholds to Latency Thresholds for Head-Mounted Displays

Jason Jerald

As users of head-tracked head-mounted display systems move their heads, latency causes unnatural scene motion. We 1) analyzed scene motion due to latency and head motion, 2) developed a mathematical model relating latency, head motion, scene motion, and perception thresholds, 3) developed procedures to determine perceptual thresholds of scene velocity and latency without the need for a head-mounted display or a low-latency system, and 4) measured scene-velocity and latency thresholds for six subjects under a specific set of conditions and compared the relationship between these thresholds. The resulting PSEs (minimum 10 ms) and JNDs (minimum 3 ms) of the latency thresholds are in a range similar to those reported by Ellis and Adelstein. The results are a step toward enabling scientists and engineers to determine latency requirements before building immersive virtual environments using head-mounted display systems.
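A plausible reconstruction of the kind of relationship such a model captures (a sketch consistent with this abstract, not necessarily the paper's exact formulation): with constant end-to-end latency \tau, the displayed scene lags the head, so for a head yaw angle \theta_{\mathrm{head}}(t),

    \theta_{\mathrm{scene}}(t) = \theta_{\mathrm{head}}(t) - \theta_{\mathrm{head}}(t - \tau) \approx \omega_{\mathrm{head}}(t)\,\tau

and differentiating gives the latency-induced scene velocity

    \omega_{\mathrm{scene}}(t) \approx \alpha_{\mathrm{head}}(t)\,\tau

so a measured scene-velocity threshold \omega_{\mathrm{thresh}} converts to a latency threshold \tau_{\mathrm{thresh}} \approx \omega_{\mathrm{thresh}} / \alpha_{\mathrm{head}}^{\mathrm{max}}, evaluated at the peak head yaw acceleration.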


Symposium on 3D User Interfaces | 2013

MakeVR: A 3D world-building interface

Jason Jerald; Paul Mlyniec; Arun Yoganandan; Amir Rubin; Dan Paullus; Simon Solotko

MakeVR is an intuitive and accessible digital sandbox for making 3D objects and scenes, with game-like simplicity for beginners and advanced tools for experts. It presents a professional CAD engine through a natural, immersive, two-handed interface. Users reach into space to move themselves through a geometric playground, placing primitive shapes and more complex objects into the scene and then reaching out to modify them via Booleans, sweeps, deformations, and other CAD operations. We conducted a preliminary user evaluation with four participant case studies and plan to use this evaluation to improve the system.


IEEE Virtual Reality Conference | 2014

Developing virtual reality applications with Unity

Jason Jerald; Peter Giokaris; Danny Woodall; Arno Hartbolt; Anish Chandak; Sebastien Kuntz

This tutorial will provide an introduction to Unity (http://www.unity3D.com) and several VR components that are designed to work with Unity. These VR components can be used in isolation or pieced together to provide fully immersive VR experiences. Unity is a feature-rich, multi-platform game engine for the creation of interactive 3D content. It includes an intuitive interface while at the same time allowing low-level access for developers. Thousands of assets provided by other content creators can be reused to quickly develop immersive experiences. Because of its intuitive interface, well-designed architecture, and ability to easily reuse assets, 3D software can be developed in a fraction of the time required by traditional development. Consumer-level virtual-reality hardware combined with Unity has recently empowered hobbyists, professionals, and academics to quickly create virtual reality applications. Because of Unity's widespread use and ease of use, several virtual reality companies now fully support Unity. During this tutorial, participants will learn how to quickly build virtual reality applications from some of the leaders of Unity virtual reality development. Attendees will gain an understanding of how to use multiple VR components with Unity and will have enough knowledge to start building VR applications with Unity by the end of the tutorial.


ACM Transactions on Applied Perception | 2012

Scene-motion thresholds during head yaw for immersive virtual environments

Jason Jerald; Frederick P. Brooks

In order to better understand how scene motion is perceived in immersive virtual environments, we measured scene-motion thresholds under different conditions across three experiments. Thresholds were measured during quasi-sinusoidal head yaw, single left-to-right or right-to-left head yaw, different phases of head yaw, slow to fast head yaw, scene motion relative to head yaw, and two scene-illumination levels. We found that across various conditions (1) thresholds are greater when the scene moves with head yaw (corresponding to gain <1.0) than when the scene moves against head yaw (corresponding to gain >1.0), and (2) thresholds increase as head motion increases.

Collaboration


Dive into Jason Jerald's collaborations.

Top Co-Authors

Richard Marks (Sony Computer Entertainment)

Frederick P. Brooks (University of North Carolina at Chapel Hill)

Gerd Bruder (University of Central Florida)

Henry Fuchs (University of North Carolina at Chapel Hill)

Luv Kohli (University of North Carolina at Chapel Hill)