Scott A. Kuhl
Michigan Technological University
Publications
Featured research published by Scott A. Kuhl.
ACM Transactions on Applied Perception | 2009
Scott A. Kuhl; William B. Thompson; Sarah H. Creem-Regehr
Most head-mounted displays (HMDs) suffer from substantial optical distortion, and vendor-supplied specifications for field of view are often at variance with reality. Unless corrected, such displays do not present perspective-related visual cues in a geometrically correct manner. Distorted geometry has the potential to affect applications of HMDs that depend on precise spatial perception. This article provides empirical evidence for the degree to which common geometric distortions affect one type of spatial judgment in virtual environments. We show that the minification or magnification that would result from a misstated HMD field of view causes significant changes in distance judgments. Incorrectly calibrated pitch and pincushion distortion, however, do not cause statistically significant changes in distance judgments for the degrees of distortion examined. While the means for determining the optical distortion of display systems are well known, they are often not applied to non-see-through HMDs because of the difficulty of measuring and correcting for distortion. As a result, we also provide practical guidelines for creating geometrically calibrated systems.
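The minification or magnification described above can be sketched numerically. The following is a minimal illustration, assuming the standard tangent-ratio approximation for on-screen scale; the function name and the example FOV values are hypothetical, not taken from the article:

```python
import math

def minification_factor(display_fov_deg, geometric_fov_deg):
    """Approximate on-screen scale change when imagery rendered with a
    geometric FOV (GFOV) is shown on a display with a different physical
    FOV (DFOV). Values < 1 mean the imagery is minified; > 1, magnified."""
    d = math.radians(display_fov_deg) / 2
    g = math.radians(geometric_fov_deg) / 2
    return math.tan(d) / math.tan(g)

# Rendering with a GFOV wider than the DFOV minifies the imagery;
# a narrower GFOV magnifies it.
minified = minification_factor(60, 75)   # < 1.0
magnified = minification_factor(60, 50)  # > 1.0
```

When the two fields of view match, the factor is exactly 1 and the perspective cues are geometrically correct, which is the calibrated case the article argues for.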
IEEE Transactions on Visualization and Computer Graphics | 2011
Frank Steinicke; Gerd Bruder; Scott A. Kuhl; Peter Willemsen; Markus Lappe; Klaus H. Hinrichs
The display units integrated in today's head-mounted displays (HMDs) provide only a limited field of view (FOV) to the virtual world. In order to present an undistorted view of the virtual environment (VE), the perspective projection used to render the VE has to be adjusted to the limitations caused by the HMD characteristics. In particular, the geometric field of view (GFOV), which defines the virtual aperture angle used for rendering the 3D scene, is set up according to the display field of view (DFOV). A discrepancy between these two fields of view distorts the geometry of the VE in a way that either minifies or magnifies the imagery displayed to the user. It has been shown that this distortion has the potential to affect a user's perception of the virtual space, sense of presence, and performance on visual search tasks. In this paper, we analyze the user's perception of a VE displayed in an HMD when it is rendered with different GFOVs. We introduce a psychophysical calibration method to determine the HMD's actual field of view, which may vary from the nominal values specified by the manufacturer. Furthermore, we conducted two experiments to identify perspective projections for HMDs that are judged as natural by subjects, even if these perspectives deviate from the perspectives inherently defined by the DFOV. In the first experiment, subjects had to adjust the GFOV for a rendered virtual laboratory such that their perception of the virtual replica matched their perception of the real laboratory, which they saw before the virtual one. In the second experiment, we displayed the same virtual laboratory but restricted the viewing condition in the real world to simulate the limited viewing condition of an HMD environment. We found that subjects evaluate a GFOV as natural when it is larger than the actual DFOV of the HMD (in some cases by up to 50 percent), even when subjects viewed the real space with a limited field of view.
ACM Symposium on Applied Perception | 2014
Bochao Li; Ruimin Zhang; Scott A. Kuhl
Distance perception is a crucial component of many virtual reality applications, and numerous studies have shown that egocentric distances are judged to be compressed in head-mounted display (HMD) systems. Geometric minification, a technique in which the graphics are rendered with a field of view larger than the HMD's field of view, is one known method of eliminating the distance compression [Kuhl et al. 2009; Zhang et al. 2012]. This study uses direct blind walking to determine how minification might impact distance judgments in the Oculus Rift HMD, which has a significantly larger FOV than the displays used in previous minification studies. Our results show that people were able to make accurate distance judgments in a calibrated condition and that geometric minification causes people to overestimate distances. Since this study shows that minification can impact wide-FOV displays such as the Oculus Rift, we discuss how it may be necessary to use calibration techniques that are more thorough than those described in this paper.
Virtual Reality Software and Technology | 2009
Frank Steinicke; Gerd Bruder; Klaus H. Hinrichs; Scott A. Kuhl; Markus Lappe; Peter Willemsen
The display units integrated in today's head-mounted displays (HMDs) provide only a limited field of view (FOV) to the virtual world. In order to present an undistorted view of the virtual environment (VE), the perspective projection used to render the VE has to be adjusted to the limitations caused by the HMD characteristics. In particular, the geometric field of view (GFOV), which defines the virtual aperture angle used for rendering the 3D scene, is set up according to the display's field of view. A discrepancy between these two fields of view distorts the geometry of the VE in a way that either minifies or magnifies the imagery displayed to the user. This distortion has the potential to negatively or positively affect a user's perception of the virtual space, sense of presence, and performance on visual search tasks. In this paper we analyze whether a user is consciously aware of perspective distortions of the VE displayed in the HMD. We introduce a psychophysical calibration method to determine the HMD's actual field of view, which may vary from the nominal values specified by the manufacturer. Furthermore, we conducted an experiment to identify perspective projections for HMDs that are judged as natural by subjects, even if these perspectives deviate from the perspectives inherently defined by the display's field of view. We found that subjects evaluate a field of view as natural when it is larger than the actual field of view of the HMD, in some cases by up to 50%.
ACM Symposium on Applied Perception | 2013
Ruimin Zhang; Scott A. Kuhl
Head-mounted display (HMD) systems make it possible to introduce discrepancies between physical and virtual world rotations. Small and hopefully unnoticed discrepancies can be useful for redirected walking algorithms, which seek to allow a user to explore a large virtual space while confined to a small real space. Previous work has examined whether people can detect discrepancies that are fixed (such as when the virtual world rotation rate is amplified by a constant value). In this work, we conducted an experiment where participants turned 360 degrees in the real world and indicated whether the virtual world rotation rate increased or decreased over the course of the turn. Our results show no difference between rotational gains that instantaneously jump from one value to another and gains that slowly change over the course of a 360 degree turn. We also found that the starting gain influenced the point of subjective equality. Finally, our work indicates that the range of reliably detectable gain changes is consistent for starting gains of 1 and starting gains of 2.
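The gradually changing rotational gain studied above can be sketched as a simple accumulation model. This is an illustrative sketch, assuming a gain that ramps linearly over one full 360-degree physical turn; the function name and step granularity are hypothetical, not taken from the paper:

```python
def virtual_rotation(physical_deltas_deg, start_gain, end_gain):
    """Accumulate virtual yaw from physical yaw increments while the
    rotational gain changes linearly over a 360-degree physical turn.
    With start_gain == end_gain this reduces to a fixed gain."""
    total_physical = 0.0
    total_virtual = 0.0
    for delta in physical_deltas_deg:
        # Progress through the 360-degree turn sets the current gain.
        t = min(total_physical / 360.0, 1.0)
        gain = start_gain + t * (end_gain - start_gain)
        total_virtual += gain * delta
        total_physical += delta
    return total_virtual

# A fixed gain of 2 maps a 360-degree physical turn to 720 virtual degrees;
# a gain ramping from 1 to 2 yields roughly 540 virtual degrees.
fixed = virtual_rotation([1.0] * 360, 2.0, 2.0)
ramped = virtual_rotation([1.0] * 360, 1.0, 2.0)
```

An instantaneous gain change would simply replace the linear ramp with a step function; the experiment above compares detectability of these two profiles.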
ACM Transactions on Applied Perception | 2008
Scott A. Kuhl; Sarah H. Creem-Regehr; William B. Thompson
This work uses an immersive virtual environment (IVE) to examine how people maintain a calibration between biomechanical and visual information for rotational self-motion. First, we show that no rotational recalibration occurs when visual and biomechanical rates of rotation are matched. Next, we demonstrate that mismatched physical and visual rotation rates cause rotational recalibration. Although previous work has shown that rotational locomotion can be recalibrated in real environments, this work extends the finding to virtual environments. We further show that people do not completely recalibrate left and right rotations independently when different visual-biomechanical discrepancies are used for left and right rotations during a recalibration phase. Finally, since the majority of participants did not notice mismatched physical and visual rotation rates, we discuss the implications of using such mismatches to enable IVE users to explore a virtual space larger than the physical space they are in.
International Conference on Computer Graphics and Interactive Techniques | 2015
Bochao Li; Ruimin Zhang; Anthony Nordman; Scott A. Kuhl
Distance perception is important for many virtual reality applications, and numerous studies have found underestimated egocentric distances in head-mounted display (HMD) based virtual environments. Applying minification to imagery displayed in HMDs is a method that can reduce or eliminate the underestimation [Kuhl et al. 2009; Zhang et al. 2012]. In a previous study, we measured distance judgments via direct blind walking through an Oculus Rift DK1 HMD and found that participants judged distance accurately in a calibrated condition, and that minification caused subjects to overestimate distances [Li et al. 2014]. This article describes two experiments that build on the previous study to examine distance judgments and minification with the Oculus Rift DK2 HMD (Experiment 1) and in the real world with a simulated HMD (Experiment 2). From the results, we found statistically significant distance underestimation with the DK2, but the judgments were more accurate than results typically reported in HMD studies. In addition, we found that participants made similar distance judgments with the DK2 and the simulated HMD. Finally, we found for the first time that minification had a similar impact on distance judgments in both virtual and real-world environments.
IEEE Virtual Reality Conference | 2013
Ruimin Zhang; Scott A. Kuhl
Traditional head-mounted display (HMD) locomotion interfaces map real-world movements to equivalent virtual-world movements. As a result, people cannot naturally explore a virtual world that is larger than the available real-world space. Joysticks, treadmills, redirected walking, and other techniques have been proposed by others to relax this restriction. In this work, we propose a new "general redirected walking" interface that works with arbitrarily shaped and sized real and virtual worlds by injecting dynamic translation or rotation gains into the virtual world and steering the user along the best real-world walking direction, identified in real time by three heuristics. We tested our algorithm with 10 virtual world models and 5 real-world (i.e., tracking system) models using software designed to simulate a user walking on a random path. We have also developed a prototype implementation for our HMD system.
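The steering step above can be pictured with a simple clearance heuristic: among candidate real-world walking directions, prefer the one with the most open space before the tracking boundary. This is an illustrative stand-in, assuming a rectangular tracking space; it is not the paper's actual three heuristics, which are not detailed here:

```python
import math

def steer_direction(user_pos, room_half_w, room_half_h, n_candidates=8):
    """Pick, among evenly spaced candidate directions, the real-world
    walking direction with the largest clearance before the user would
    reach a wall of a rectangular tracking space centered at the origin.
    Returns (best_angle_radians, clearance)."""
    x, y = user_pos
    best_dir, best_clear = None, -1.0
    for i in range(n_candidates):
        ang = 2 * math.pi * i / n_candidates
        dx, dy = math.cos(ang), math.sin(ang)
        # Distance along (dx, dy) until hitting a vertical or horizontal wall.
        tx = ((room_half_w - x) / dx if dx > 0 else
              (-room_half_w - x) / dx if dx < 0 else float("inf"))
        ty = ((room_half_h - y) / dy if dy > 0 else
              (-room_half_h - y) / dy if dy < 0 else float("inf"))
        clear = min(tx, ty)
        if clear > best_clear:
            best_dir, best_clear = ang, clear
    return best_dir, best_clear
```

A redirected walking controller would then inject small rotation or translation gains to nudge the user's real-world path toward the chosen direction while their virtual path is unchanged.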
Human Factors in Computing Systems | 2017
James W. Walker; Bochao Li; Keith Vertanen; Scott A. Kuhl
The rise of affordable head-mounted displays (HMDs) has raised questions about how to best design user interfaces for this technology. This paper focuses on the use of HMDs for home and office applications that require substantial text input. A physical keyboard is a familiar and effective text input device in normal desktop computing. But without additional camera technology, an HMD occludes all visual feedback about a user's hand position over the keyboard. We describe a system that assists HMD users in typing on a physical keyboard. Our system has a virtual keyboard assistant that provides visual feedback inside the HMD about a user's actions on the physical keyboard. It also provides powerful automatic correction of typing errors by extending a state-of-the-art touchscreen decoder. In a study with 24 participants, we found our virtual keyboard assistant enabled users to type more accurately on a visually occluded keyboard. We found users wearing an HMD could type at over 40 words per minute while obtaining an error rate of less than 5%.
International Conference on Distributed, Ambient, and Pervasive Interactions | 2014
Myounghoon Jeon; Michael T. Smith; James W. Walker; Scott A. Kuhl
For decades, researchers have pursued research on sonification, the use of non-speech audio to convey information [1]. With 'interaction' and 'user experience' becoming pervasive concerns, interactive sonification [2], an emerging interdisciplinary area, has been introduced, and its role and importance have rapidly increased in the auditory display community. Against this background, we have devised a novel platform, "iISoP" (immersive Interactive Sonification Platform), for location-, movement-, and gesture-based interactive sonification research, by leveraging the existing Immersive Visualization Studio (IVS) at Michigan Tech. Projects in each developmental phase and planned research are discussed with a focus on "design research" and "interactivity".