Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Kenneth R. Moser is active.

Publication


Featured research published by Kenneth R. Moser.


IEEE Transactions on Visualization and Computer Graphics | 2015

Subjective Evaluation of a Semi-Automatic Optical See-Through Head-Mounted Display Calibration Technique

Kenneth R. Moser; Yuta Itoh; Kohei Oshima; J. Edward Swan; Gudrun Klinker; Christian Sandor

With the growing availability of optical see-through (OST) head-mounted displays (HMDs), there is a present need for robust, uncomplicated, and automatic calibration methods suited for non-expert users. This work presents the results of a user study which both objectively and subjectively examines registration accuracy produced by three OST HMD calibration methods: (1) SPAAM, (2) Degraded SPAAM, and (3) Recycled INDICA, a recently developed semi-automatic calibration method. Accuracy metrics used for evaluation include subject-provided quality values and error between perceived and absolute registration coordinates. Our results show that all three calibration methods produce very accurate registration in the horizontal direction but cause subjects to perceive the distance of virtual objects as closer than intended. Surprisingly, the semi-automatic calibration method produced more accurate registration vertically and in perceived object distance overall. User-assessed quality values were also the highest for Recycled INDICA, particularly when objects were shown at distance. The results of this study confirm that Recycled INDICA is capable of producing equal or superior on-screen registration compared to common OST HMD calibration methods. We also identify a potential hazard in using reprojection error as a quantitative analysis technique to predict registration accuracy. We conclude by discussing the further need for examining INDICA calibration in binocular HMD systems, and the present possibility of creating a closed-loop continuous calibration method for OST Augmented Reality.
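SPAAM, compared throughout this study, reduces OST HMD calibration to a Direct Linear Transform: the user repeatedly aligns an on-screen reticle with a tracked world point, and the collected 2D-3D correspondences are solved for a 3x4 projection matrix. A minimal sketch of that solve, assuming at least six well-distributed, non-coplanar alignments (the function and variable names are illustrative, not from the paper):

```python
import numpy as np

def spaam_projection(world_pts, screen_pts):
    """Estimate a 3x4 projection matrix G from n >= 6 alignments of
    3D tracker-space points with 2D screen points (DLT formulation)."""
    rows = []
    for (X, Y, Z), (u, v) in zip(world_pts, screen_pts):
        rows.append([X, Y, Z, 1, 0, 0, 0, 0, -u*X, -u*Y, -u*Z, -u])
        rows.append([0, 0, 0, 0, X, Y, Z, 1, -v*X, -v*Y, -v*Z, -v])
    A = np.asarray(rows, float)
    # The solution is the right singular vector for the smallest
    # singular value; G is recovered up to an arbitrary scale.
    _, _, Vt = np.linalg.svd(A)
    return Vt[-1].reshape(3, 4)
```

With noisy human alignments, more correspondences and a point-conditioning step (e.g. Hartley normalization) would be advisable before the SVD.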


IEEE Virtual Reality Conference | 2014

Baseline SPAAM calibration accuracy and precision in the absence of human postural sway error

Kenneth R. Moser; Magnus Axholt; J. Edward Swan

We conducted an experiment in an attempt to generate baseline accuracy and precision values for optical see-through (OST) head-mounted display (HMD) calibration without the inclusion of human postural sway error. This preliminary work will act as a control condition for future studies into postural error reduction. An experimental apparatus was constructed to allow performance of a SPAAM calibration using 25 alignments taken with one of three distance distribution patterns: static, sequential, and magic square. The accuracy of the calibrations was determined by calculating the extrinsic X, Y, Z translation values from the resulting projection matrix. The standard deviation for each translation component was also calculated. The results show that the magic square distribution produced the most accurate parameter estimation as well as the smallest standard deviation for each extrinsic translation component.
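The extrinsic translation values mentioned above can be obtained by factoring the estimated projection matrix G = K[R | t] into intrinsic and extrinsic parts. A hedged sketch of that factorization, implementing the RQ decomposition via numpy's QR and assuming a nondegenerate, positively oriented camera (the function name is mine, not the paper's):

```python
import numpy as np

def decompose_projection(G):
    """Split a 3x4 projection G = K [R | t] into the intrinsic matrix K,
    rotation R, and extrinsic translation t (tracker frame -> eye)."""
    M = G[:, :3]
    P = np.flipud(np.eye(3))            # row-reversal permutation
    Q, U = np.linalg.qr((P @ M).T)      # RQ factorization via flipped QR
    K = P @ U.T @ P                     # upper-triangular intrinsics
    R = P @ Q.T                         # orthonormal rotation
    D = np.diag(np.sign(np.diag(K)))    # force positive focal lengths
    K, R = K @ D, D @ R                 # D @ D = I, so K R is unchanged
    t = np.linalg.solve(K, G[:, 3])     # extrinsic X, Y, Z translation
    return K / K[2, 2], R, t            # conventional K[2,2] = 1
```

The eye position in tracker coordinates, if needed, follows as C = -R.T @ t.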


Symposium on 3D User Interfaces | 2016

Evaluation of user-centric optical see-through head-mounted display calibration using a leap motion controller

Kenneth R. Moser; J. Edward Swan

Advances in optical see-through head-mounted display technology have yielded a number of consumer-accessible options, such as the Google Glass and Epson Moverio BT-200, and have paved the way for promising next generation hardware, including the Microsoft HoloLens and Epson Pro BT-2000. The release of consumer devices, though, has also been accompanied by an ever increasing need for standardized optical see-through display calibration procedures easily implemented and performed by researchers, developers, and novice users alike. Automatic calibration techniques offer the possibility of ubiquitous, environment-independent solutions requiring no user interaction. These processes, however, require additional eye tracking hardware and algorithms not natively present in current display offerings. User-dependent approaches, therefore, remain the only viable option for effective calibration of current generation optical see-through hardware. The inclusion of depth sensors and hand tracking cameras, promised in forthcoming consumer models, offers further potential to improve these manual methods and provide practical, intuitive calibration options accessible to a wide user base. In this work, we evaluate the accuracy and precision of manual optical see-through head-mounted display calibration performed using a Leap Motion controller. Both hand- and stylus-based methods for monocular and stereo procedures are examined, along with several on-screen reticle designs for improving alignment context during calibration. Our study shows that, while enhancing the context of reticles for hand-based alignments does yield improved results, Leap Motion calibrations performed with a stylus offer the most accurate and consistent performance, comparable to that found in previous studies for environment-centric routines. In addition, we found that stereo calibration further improved precision in every case. We believe that our findings not only validate the potential of hand and gesture based trackers in facilitating optical see-through calibration methodologies, but also provide a suitable benchmark to help guide future efforts in standardizing calibration practices for user-friendly consumer systems.


IEEE Transactions on Visualization and Computer Graphics | 2018

A Survey of Calibration Methods for Optical See-Through Head-Mounted Displays

Jens Grubert; Yuta Itoh; Kenneth R. Moser; J. Edward Swan

Optical see-through head-mounted displays (OST HMDs) are a major output medium for Augmented Reality, and have seen significant growth in popularity and usage among the general public due to the growing release of consumer-oriented models, such as the Microsoft HoloLens. Unlike Virtual Reality headsets, OST HMDs inherently support the addition of computer-generated graphics directly into the light path between a user's eyes and their view of the physical world. As with most Augmented and Virtual Reality systems, the physical position of an OST HMD is typically determined by an external or embedded 6-Degree-of-Freedom tracking system. However, in order to properly render virtual objects so that they are perceived as spatially aligned with the physical environment, it is also necessary to accurately measure the position of the user's eyes within the tracking system's coordinate frame. For over 20 years, researchers have proposed various calibration methods to determine this needed eye position. However, to date, there has not been a comprehensive overview of these procedures and their requirements. Hence, this paper surveys the field of calibration methods for OST HMDs. Specifically, it provides insights into the fundamentals of calibration techniques, and presents an overview of both manual and automatic approaches, as well as evaluation methods and metrics. Finally, it also identifies opportunities for future research.
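As a small illustration of why the eye position matters, rendering spatially aligned content amounts to chaining the tracker-reported head pose with the calibrated eye offset. A toy sketch, assuming the tracker reports a world-to-head transform as a 4x4 matrix and the calibrated eye offset is a pure translation in the head frame (both names are mine, not from the survey):

```python
import numpy as np

def world_to_eye(T_head_world, eye_pos_head):
    """Chain the tracker's world->head transform with the calibrated
    eye position (expressed in the head frame) to obtain the
    world->eye view transform used for rendering."""
    T_eye_head = np.eye(4)
    T_eye_head[:3, 3] = -np.asarray(eye_pos_head, float)  # shift into eye frame
    return T_eye_head @ T_head_world
```

An eye-position error here shifts every rendered object by the same amount in the view, which is exactly the registration error the surveyed calibration methods try to remove.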


Symposium on 3D User Interfaces | 2016

SharpView: Improved clarity of defocused content on optical see-through head-mounted displays

Kohei Oshima; Kenneth R. Moser; Damien Constantine Rompapas; J. Edward Swan; Sei Ikeda; Goshiro Yamamoto; Takafumi Taketomi; Christian Sandor; Hirokazu Kato

Augmented Reality (AR) systems which utilize optical see-through head-mounted displays are becoming more commonplace, with several consumer-level options already available and the promise of additional, more advanced devices on the horizon. A common factor among current generation optical see-through devices, though, is a fixed focal distance to virtual content. While fixed focus is not a concern for video see-through AR, since both virtual and real world imagery are combined into a single image by the display, unequal distances between real world objects and the virtual display screen are unavoidable in optical see-through AR. In this work, we investigate the issue of focus blur: in particular, the blurring caused by simultaneously viewing virtual content and physical objects in the environment at differing focal distances. We additionally examine the application of dynamic sharpening filters as a straightforward, system-independent means for mitigating this effect and improving the clarity of defocused AR content. We assess the utility of this method, termed SharpView, by employing an adjustment experiment in which users actively apply varying amounts of sharpening to reduce the perception of blur in AR content shown at four focal disparity levels relative to real world imagery. Our experimental results confirm that dynamic correction schemes are required for adequately addressing the presence of blur in optical see-through AR. Furthermore, we validate the ability of our SharpView model to improve the perceived visual clarity of focus-blurred content, with optimal performance at focal differences well suited for near-field AR applications.
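The user-adjusted sharpening described above can be illustrated with generic unsharp masking, where a gain parameter controls how strongly the image is pre-sharpened before display. Note this is a simplified stand-in rather than the paper's actual SharpView model; the kernel and parameter names are mine:

```python
import numpy as np

def unsharp_mask(img, sigma=2.0, amount=1.0):
    """Generic unsharp masking: out = img + amount * (img - blur(img)).
    'amount' stands in for the user-adjusted sharpening level."""
    r = int(3 * sigma)                        # kernel radius
    x = np.arange(-r, r + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    k /= k.sum()                              # normalized 1-D Gaussian
    pad = np.pad(np.asarray(img, float), r, mode="edge")
    # Separable blur: convolve columns, then rows (numpy only).
    blur = np.apply_along_axis(lambda m: np.convolve(m, k, "valid"), 0, pad)
    blur = np.apply_along_axis(lambda m: np.convolve(m, k, "valid"), 1, blur)
    return img + amount * (img - blur)
```

Raising `amount` boosts edge contrast (with over- and undershoot around edges), which is what lets pre-sharpened content survive subsequent defocus blur better.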


IEEE Virtual Reality Conference | 2015

Evaluating optical see-through head-mounted display calibration via frustum visualization

Kenneth R. Moser; J. Edward Swan

Summary form only given. Effectively evaluating optical see-through (OST) head-mounted display (HMD) calibration is problematic and largely relies on feedback from the user. Studies evaluating OST HMD calibration, such as those by McGarrity, Tang, and Navab et al. [2, 3, 1], utilize user interaction methods, such as touch pads, to facilitate on-line evaluation and correction of calibration results. In all of these studies, however, only the users themselves receive any visual feedback related to the calibration quality or the corrective actions taken to improve it. In this video, we present the use of standard frustum visualization to provide calibration quality information to the researcher in real time. We use a standard Single Point Active Alignment Method (SPAAM) calibration [4], after which both the eye location estimate and the resulting intrinsic values are displayed superimposed onto the user. Presenting the eye position relative to the user's head benefits studies on system error sources, and rendering on-screen visuals also allows outside observers to identify calibration issues and offer corrective suggestions. We believe that techniques such as frustum visualization will expand the amount of information available for evaluating calibration results, and will greatly aid those investigating new and improved calibration procedures.
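The frustum being visualized can be built directly from the calibration output: back-projecting the four image corners through the estimated intrinsic matrix gives the frustum's cross-section at any depth, which an outside observer's view can then draw anchored at the estimated eye position. A minimal sketch, assuming an undistorted pinhole intrinsic matrix (the function name is illustrative):

```python
import numpy as np

def frustum_corners(K, width, height, depth):
    """Back-project the four image corners to the plane z = depth in
    the eye's coordinate frame, giving one cross-section of the
    viewing frustum for rendering."""
    Kinv = np.linalg.inv(K)
    corners = []
    for u, v in [(0, 0), (width, 0), (width, height), (0, height)]:
        ray = Kinv @ np.array([u, v, 1.0])   # viewing ray through the corner
        corners.append(ray / ray[2] * depth) # scale the ray to z = depth
    return np.array(corners)
```

Connecting cross-sections at two depths to the eye position estimate yields the familiar wireframe frustum.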


IEEE Virtual Reality Conference | 2014

Quantification of error from system and environmental sources in Optical See-Through head mounted display calibration methods

Kenneth R. Moser

A common problem with Optical See-Through (OST) Augmented Reality (AR) is misalignment, or registration error, with the amount of acceptable error being heavily dependent upon the type of application. Approximation methods, driven by user feedback, have been developed to estimate the necessary corrections for alignment errors. These calibration methods, however, are susceptible to induced error from system and environmental sources, such as human alignment error. The proposed research plan is intended to further the development of accurate and robust calibration methods for OST AR systems by quantifying the impact of specific factors shown to contribute to calibration error. An important aspect of this research will be to develop methods for examining each factor in isolation in order to determine the independent error contribution of each source. This will facilitate the establishment of acceptable thresholds for each type of error and be a meaningful step toward defining quality metrics for OST AR calibration techniques.


IEEE Virtual Reality Conference | 2016

Calibration and interaction in optical see-through augmented reality using leap motion

Kenneth R. Moser; Sujan Anreddy; J. Edward Swan

The growing prevalence of hand and gesture tracking technology has led to an increased availability of consumer-level devices, such as the Leap Motion controller, and has also facilitated the inclusion of similar hardware into forthcoming head-mounted display offerings, including the Microsoft HoloLens and Moverio Pro BT-2000. In this video, we demonstrate the utility of the Leap Motion for calibrating optical see-through augmented reality systems by employing a variation on Tuceryan and Navab's Single Point Active Alignment Method [3]. We also showcase a straightforward method for calibrating the coordinate frame of the Leap Motion to a secondary tracking system by employing absolute orientation algorithms [2, 1, 4], allowing us to properly transform and visualize hand and finger tracking data from the user's viewpoint. Our combined display and coordinate frame calibration techniques produce a viable mechanism not only for intuitive interaction with virtual objects but also for the creation of natural occlusion between computer-generated content and the users themselves. We believe that these techniques will be pivotal in the development of novel consumer applications for next generation head-mounted display hardware.
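The coordinate-frame calibration described here is the classic absolute orientation problem: given the same 3D points measured in both the Leap Motion frame and the secondary tracker's frame, solve for the rigid transform between them. A sketch of the SVD (Kabsch) variant, one of several closed-form solutions in the cited literature (scale is assumed to be 1, and the function name is mine):

```python
import numpy as np

def absolute_orientation(src, dst):
    """Find rotation R and translation t with dst ~= R @ src_i + t,
    given matched 3D points in two coordinate frames (SVD / Kabsch
    variant of the absolute orientation problem)."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    cs, cd = src.mean(0), dst.mean(0)       # centroids of each point set
    H = (src - cs).T @ (dst - cd)           # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cd - R @ cs
    return R, t
```

With the transform in hand, Leap Motion fingertip positions can be mapped into the tracker's frame and rendered from the calibrated eye viewpoint.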


IEEE Virtual Reality Conference | 2016

Evaluation of hand and stylus based calibration for optical see-through head-mounted displays using leap motion

Kenneth R. Moser; J. Edward Swan

Next generation OST HMDs promise the inclusion of a variety of integrated and on-board sensors. In particular, hand tracking cameras, such as the Leap Motion, show potential for facilitating intuitive OST calibration procedures accessible to researchers, developers, and novice users alike. In this work, we evaluate hand- and stylus-based OST calibration utilizing tracking data from a Leap Motion. Our findings show that the performance of both methods is comparable to results from prior studies using standard environment-centric methods. Also, while our hand-based calibration improved through the use of more contextual reticle designs, calibrations performed with a stylus yielded the most accurate and precise results overall.


IEEE Virtual Reality Conference | 2016

Spatial consistency perception in optical and video see-through head-mounted augmentations

Alexander Plopski; Kenneth R. Moser; Kiyoshi Kiyokawa; J. Edward Swan; Haruo Takemura

Correct spatial alignment is an essential requirement for convincing augmented reality experiences. Registration error, caused by a variety of systematic, environmental, and user influences, decreases the realism and utility of head-mounted display AR applications. Focus is often given to rigorous calibration and prediction methods seeking to entirely remove misalignment error between virtual and real content. Unfortunately, producing perfect registration is often simply not possible. Our goal is to quantify the sensitivity of users to registration error in these systems, and to identify acceptability thresholds at which users can no longer distinguish between the spatial positioning of virtual and real objects. We simulate both video see-through and optical see-through environments using a projector system and experimentally measure user perception of virtual content misalignment. Our results indicate that users are less sensitive to rotational errors overall, and that translational accuracy is less important in optical see-through systems than in video see-through.

Collaboration


Dive into Kenneth R. Moser's collaborations.

Top Co-Authors

J. Edward Swan, Mississippi State University
Christian Sandor, Nara Institute of Science and Technology
Chunya Hua, Mississippi State University
Damien Constantine Rompapas, Nara Institute of Science and Technology
Goshiro Yamamoto, Nara Institute of Science and Technology
Hirokazu Kato, Nara Institute of Science and Technology
Kohei Oshima, Nara Institute of Science and Technology
Sei Ikeda, Ritsumeikan University
Takafumi Taketomi, Nara Institute of Science and Technology