Publications


Featured research published by Yuta Itoh.


IEEE Transactions on Visualization and Computer Graphics | 2015

Corneal-Imaging Calibration for Optical See-Through Head-Mounted Displays

Alexander Plopski; Yuta Itoh; Christian Nitschke; Kiyoshi Kiyokawa; Gudrun Klinker; Haruo Takemura

In recent years, optical see-through head-mounted displays (OST-HMDs) have moved from conceptual research to a market of mass-produced devices, with new models and applications being released continuously. It remains challenging to deploy augmented reality (AR) applications that require consistent spatial visualization, for example in maintenance, training, and medical tasks, as the view of the attached scene camera is shifted from the user's view. A calibration step can compute the relationship between the HMD screen and the user's eye to align the digital content. However, this alignment is only viable as long as the display does not move, an assumption that rarely holds for an extended period of time. As a consequence, continuous recalibration is necessary. Manual calibration methods are tedious and rarely suited to practical applications. Existing automated methods do not account for user-specific parameters and are error prone. We propose the combination of a pre-calibrated display with a per-frame estimation of the user's cornea position to estimate the individual eye center and continuously recalibrate the system. With this, we also obtain the gaze direction, which allows for instantaneous, uncalibrated eye-gaze tracking without the need for additional hardware or complex illumination. Contrary to existing methods, we use simple image processing and do not rely on iris tracking, which is typically noisy and can be ambiguous. Evaluation with simulated and real data shows that our approach achieves a more accurate and stable eye-pose estimation, which results in an improved and practical calibration with a largely improved distribution of projection error.
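
The continuous recalibration described above amounts to turning per-frame cornea estimates into a stable eye-center estimate. The following is a minimal illustrative sketch, not the authors' algorithm: it assumes per-frame cornea-curvature centers and gaze directions are already available in HMD coordinates (e.g., from corneal imaging), steps back along the optical axis by a nominal anatomical offset, and smooths across frames.

    import numpy as np

    CORNEA_TO_EYE_CENTER_MM = 5.3   # assumed nominal anatomical offset, not a measured value

    def eye_center_estimate(cornea_centers, gaze_dirs, alpha=0.1):
        """Turn per-frame cornea-center estimates into a smoothed eye-center estimate.
        cornea_centers, gaze_dirs: sequences of 3-vectors in the HMD coordinate frame."""
        smoothed = None
        for c, g in zip(cornea_centers, gaze_dirs):
            g = g / np.linalg.norm(g)
            e = c - CORNEA_TO_EYE_CENTER_MM * g          # step back along the optical axis
            smoothed = e if smoothed is None else (1 - alpha) * smoothed + alpha * e
        return smoothed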


Neural Networks | 2011

Least-squares two-sample test

Masashi Sugiyama; Taiji Suzuki; Yuta Itoh; Takafumi Kanamori; Manabu Kimura

The goal of the two-sample test (a.k.a. the homogeneity test) is, given two sets of samples, to judge whether the probability distributions behind the samples are the same or not. In this paper, we propose a novel non-parametric two-sample test based on a least-squares density-ratio estimator. Through various experiments, we show that the proposed method overall produces a smaller type-II error (i.e., the probability of judging the two distributions to be the same when they are actually different) than a state-of-the-art method, at the cost of a slightly larger type-I error (i.e., the probability of judging the two distributions to be different when they are actually the same).
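
For readers unfamiliar with the approach, the sketch below shows the general shape of a least-squares density-ratio two-sample test: fit the ratio of the two densities with Gaussian kernels by regularized least squares, use the resulting divergence-like score as the test statistic, and calibrate it with a permutation test. The kernel centers, bandwidth, regularization, and permutation count are illustrative choices, not the paper's settings.

    import numpy as np

    def density_ratio_stat(x_nu, x_de, sigma=1.0, lam=0.1, n_centers=50, rng=None):
        """Fit r(x) = sum_l alpha_l * k(x, c_l) by regularized least squares and return
        a plug-in divergence-like score between the two sample distributions."""
        rng = np.random.default_rng(rng)
        idx = rng.choice(len(x_nu), min(n_centers, len(x_nu)), replace=False)
        centers = x_nu[idx]
        kern = lambda a, b: np.exp(-np.sum((a[:, None, :] - b[None, :, :]) ** 2, axis=2)
                                   / (2 * sigma ** 2))
        Phi_de = kern(x_de, centers)            # basis functions on denominator samples
        Phi_nu = kern(x_nu, centers)            # basis functions on numerator samples
        H = Phi_de.T @ Phi_de / len(x_de)
        h = Phi_nu.mean(axis=0)
        alpha = np.linalg.solve(H + lam * np.eye(len(centers)), h)
        return h @ alpha - 0.5

    def permutation_test(x, y, n_perm=200, seed=0):
        """Two-sample test: small p-values suggest the two distributions differ."""
        rng = np.random.default_rng(seed)
        observed = density_ratio_stat(x, y, rng=rng)
        pooled = np.vstack([x, y])
        exceed = 0
        for _ in range(n_perm):
            perm = rng.permutation(len(pooled))
            xs, ys = pooled[perm[:len(x)]], pooled[perm[len(x):]]
            exceed += density_ratio_stat(xs, ys, rng=rng) >= observed
        return (exceed + 1) / (n_perm + 1)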


IEEE Transactions on Visualization and Computer Graphics | 2015

Light-Field Correction for Spatial Calibration of Optical See-Through Head-Mounted Displays

Yuta Itoh; Gudrun Klinker

A critical requirement for AR applications with Optical See-Through Head-Mounted Displays (OST-HMD) is to project 3D information correctly into the current viewpoint of the user, more specifically, according to the user's eye position. Recently proposed interaction-free calibration methods [16], [17] automatically estimate this projection by tracking the user's eye position, thereby freeing users from tedious manual calibrations. However, these methods are still prone to systematic calibration errors. Such errors stem from eye- and HMD-related factors and are not represented in the conventional eye-HMD model used for HMD calibration. This paper investigates one of these factors: the fact that the optical elements of OST-HMDs distort incoming world-light rays before they reach the eye, just as corrective glasses do. Any OST-HMD requires an optical element to display a virtual screen, and each such optical element has different distortions. Since users see a distorted world through the element, ignoring this distortion degrades the projection quality. We propose a light-field correction method, based on a machine learning technique, which compensates for the world-scene distortion caused by OST-HMD optics. We demonstrate that our method reduces the systematic error and significantly increases the calibration accuracy of the interaction-free calibration.
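
As a rough illustration of the kind of correction involved (not the paper's light-field model), one could fit a smooth 2D displacement field over the display from measured pixel/ray correspondences and subtract it at run time; the polynomial basis and degree below are assumptions.

    import numpy as np

    def fit_distortion_map(pixels, displacements, degree=3):
        """Fit a polynomial 2D displacement field over screen pixels by least squares.
        pixels: (n, 2) screen coordinates; displacements: (n, 2) measured offsets of
        world rays seen through the OST-HMD optics versus the direct view."""
        def basis(p):
            x, y = p[:, 0], p[:, 1]
            cols = [x ** i * y ** j for i in range(degree + 1) for j in range(degree + 1 - i)]
            return np.stack(cols, axis=1)
        coeffs, *_ = np.linalg.lstsq(basis(pixels), displacements, rcond=None)
        return lambda p: basis(np.atleast_2d(np.asarray(p, dtype=float))) @ coeffs

    # Illustrative use: undo the optics-induced shift at query pixels.
    # corrector = fit_distortion_map(train_pixels, train_displacements)
    # corrected = query_pixels - corrector(query_pixels)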


IEEE Transactions on Visualization and Computer Graphics | 2015

Subjective Evaluation of a Semi-Automatic Optical See-Through Head-Mounted Display Calibration Technique

Kenneth R. Moser; Yuta Itoh; Kohei Oshima; J. Edward Swan; Gudrun Klinker; Christian Sandor

With the growing availability of optical see-through (OST) head-mounted displays (HMDs), there is a pressing need for robust, uncomplicated, and automatic calibration methods suited to non-expert users. This work presents the results of a user study which both objectively and subjectively examines the registration accuracy produced by three OST HMD calibration methods: (1) SPAAM, (2) Degraded SPAAM, and (3) Recycled INDICA, a recently developed semi-automatic calibration method. Accuracy metrics used for evaluation include subject-provided quality values and the error between perceived and absolute registration coordinates. Our results show that all three calibration methods produce very accurate registration in the horizontal direction but cause subjects to perceive the distance of virtual objects as closer than intended. Surprisingly, the semi-automatic calibration method produced more accurate registration vertically and in perceived object distance overall. User-assessed quality values were also highest for Recycled INDICA, particularly when objects were shown at a distance. The results of this study confirm that Recycled INDICA is capable of producing equal or superior on-screen registration compared to common OST HMD calibration methods. We also identify a potential hazard in using reprojection error as a quantitative analysis technique to predict registration accuracy. We conclude by discussing the further need to examine INDICA calibration in binocular HMD systems, and the present possibility of creating a closed-loop continuous calibration method for OST Augmented Reality.


international symposium on mixed and augmented reality | 2014

Performance and Sensitivity Analysis of INDICA: INteraction-Free DIsplay CAlibration for Optical See-Through Head-Mounted Displays

Yuta Itoh; Gudrun Klinker

An issue in AR applications with an Optical See-Through Head-Mounted Display (OST-HMD) is to correctly project 3D information to the current viewpoint of the user. Manual calibration methods give the projection as a black box that explains observed 2D-3D relationships well (Fig. 1). Recently, we proposed an INteraction-free DIsplay CAlibration method (INDICA) for OST-HMDs that utilizes camera-based eye tracking [7]. It reformulates the projection in two ways: as a black box combined with an actual eye model (Recycle Setup), and as a combination of an explicit display model and an eye model (Full Setup). Although we have shown that the former performs more stably than a repeated SPAAM calibration, we could not yet prove whether the same holds for the Full Setup. More importantly, it is still unclear how errors in the calibration parameters affect the final results. Thus, users cannot know how accurately they need to estimate each parameter in practice. We provide: (1) evidence that the Full Setup performs as accurately as the Recycle Setup under a marker-based display calibration, (2) an error sensitivity analysis for both SPAAM and INDICA over the on-/offline parameters, and (3) an investigation of the theoretical sensitivity on an OST-HMD validated by real measurements.
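
The Full Setup replaces the black box with an explicit display model plus an eye model. The geometric core of such an eye-position-dependent projection can be sketched as below, assuming a planar virtual screen whose origin and (orthogonal) per-pixel axis vectors in HMD coordinates are known from a display calibration; this illustrates the idea only and is not INDICA's exact parameterization.

    import numpy as np

    def project_to_screen(eye, world_pt, screen_origin, ex, ey):
        """Project a 3D point (HMD frame) onto the virtual screen plane with the eye
        position as the center of projection. ex, ey: orthogonal vectors spanning one
        pixel step along the screen axes; screen_origin: 3D location of pixel (0, 0)."""
        n = np.cross(ex, ey)                                  # screen plane normal (unnormalized)
        d = np.asarray(world_pt, dtype=float) - eye
        s = np.dot(screen_origin - eye, n) / np.dot(d, n)     # ray parameter at the plane
        hit = eye + s * d                                     # intersection with the screen
        rel = hit - screen_origin
        return np.dot(rel, ex) / np.dot(ex, ex), np.dot(rel, ey) / np.dot(ey, ey)  # (u, v)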


augmented human international conference | 2015

Vision enhancement: defocus correction via optical see-through head-mounted displays

Yuta Itoh; Gudrun Klinker

Vision is our primary, essential sense for perceiving the real world. Human beings have long been keen to push the limits of eye function by inventing various vision devices such as corrective glasses, sunglasses, telescopes, and night-vision goggles. Recently, Optical See-Through Head-Mounted Displays (OST-HMDs) have penetrated the commercial market. While traditional devices have improved our vision by altering or replacing it, OST-HMDs can augment and mediate it. We believe that future OST-HMDs, combined with wearable sensing systems including image sensors, will dramatically improve our vision capability. As a step toward this future, this paper investigates Vision Enhancement (VE) techniques via OST-HMDs. We aim at correcting optical defects of human eyes, especially defocus, by overlaying a compensation image on the user's actual view so that the filter cancels the aberration. Our contributions are threefold. Firstly, we formulate our method by taking the optical relationships between the OST-HMD and the human eye into consideration. Secondly, we demonstrate the method in proof-of-concept experiments. Lastly, and most importantly, we provide a thorough analysis of the results, including limitations of the current system, research issues that must be addressed to realize practical VE systems, and possible solutions to these issues for future research.
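
The core idea, overlaying a pre-distorted image so that the eye's defocus blur cancels out, can be sketched with a standard Wiener-style inverse filter; the Gaussian PSF and the regularization constant are stand-in assumptions, and the paper's formulation additionally models the OST-HMD/eye optics.

    import numpy as np

    def gaussian_psf(shape, sigma):
        """Gaussian approximation of a defocus point-spread function (an assumption)."""
        h, w = shape
        y, x = np.mgrid[:h, :w]
        g = np.exp(-(((x - w // 2) ** 2 + (y - h // 2) ** 2) / (2 * sigma ** 2)))
        return g / g.sum()

    def precompensate(image, sigma=3.0, eps=1e-2):
        """Wiener-style inverse filter for a single-channel image in [0, 1]: the returned
        overlay, once blurred by the assumed defocus PSF, approximates the original."""
        psf_f = np.fft.fft2(np.fft.ifftshift(gaussian_psf(image.shape, sigma)))
        comp_f = np.fft.fft2(image) * np.conj(psf_f) / (np.abs(psf_f) ** 2 + eps)
        return np.clip(np.real(np.fft.ifft2(comp_f)), 0.0, 1.0)   # keep within display range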


IEEE Transactions on Visualization and Computer Graphics | 2015

Semi-Parametric Color Reproduction Method for Optical See-Through Head-Mounted Displays

Yuta Itoh; Maksym Dzitsiuk; Toshiyuki Amano; Gudrun Klinker

A fundamental issue in Augmented Reality (AR) is how to naturally mediate reality with virtual content as seen by users. In AR applications with Optical See-Through Head-Mounted Displays (OST-HMD), this issue often takes the form of rendering colors on the OST-HMD consistently with the input colors. However, due to various display constraints and eye properties, it is still a challenging task to reproduce colors on OST-HMDs indistinguishably. One approach to this problem is to pre-process the input color so that the user perceives the output color on the display to be the same as the input. We propose a color calibration method for OST-HMDs. We start by modeling the physical optics in the rendering and perception process between the HMD and the eye. We treat the color distortion as a semi-parametric model which separates the non-linear color distortion from the linear color shift. We demonstrate that calibrated images regain their original appearance on two OST-HMD setups with both synthetic and real datasets. Furthermore, we analyze the limitations of the proposed method and the remaining problems of color reproduction in OST-HMDs, and then discuss how to realize more practical color reproduction methods for future HMD-eye systems.
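
The separation into a linear color shift and a non-linear distortion can be illustrated as below, where the non-linear part is collapsed to a fixed gamma (the paper treats it non-parametrically) and the linear 3x3 shift is fitted from input/measured color pairs, then inverted to precompensate.

    import numpy as np

    GAMMA = 2.2   # assumed display non-linearity; a stand-in for the non-parametric part

    def fit_linear_shift(inputs, measured):
        """Fit the 3x3 linear color shift M with measured_lin ≈ inputs_lin @ M.
        inputs, measured: (n, 3) RGB pairs in [0, 1] (rendered color vs. perceived color)."""
        M, *_ = np.linalg.lstsq(inputs ** GAMMA, measured ** GAMMA, rcond=None)
        return M

    def precompensate(target, M):
        """Choose the input color whose displayed appearance should match the target."""
        lin_in = np.clip((target ** GAMMA) @ np.linalg.inv(M), 0.0, 1.0)   # clamp to gamut
        return lin_in ** (1.0 / GAMMA)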


international symposium on mixed and augmented reality | 2016

Automated Spatial Calibration of HMD Systems with Unconstrained Eye-cameras

Alexander Plopski; Jason Orlosky; Yuta Itoh; Christian Nitschke; Kiyoshi Kiyokawa; Gudrun Klinker

Properly calibrating an optical see-through head-mounted display (OST-HMD) and maintaining a consistent calibration over time can be a very challenging task. Automated methods need an accurate model of both the OST-HMD screen and the user's constantly changing eye position to correctly project virtual information. While some automated methods exist, they often have restrictions, including fixed eye-cameras that cannot be adjusted for different users. To address this problem, we have developed a method that automatically determines the position of an adjustable eye-tracking camera and its unconstrained pose relative to the display. Unlike methods that require a fixed pose between the HMD and the eye camera, our framework allows for automatic calibration even after the camera is adjusted to a particular individual's eye and even after the HMD moves on the user's face. Using two sets of IR-LEDs rigidly attached to the camera and the OST-HMD frame, we can calculate the correct projection for different eye positions in real time and for changes in HMD position within several frames. To verify the accuracy of our method, we conducted two experiments with a commercial HMD by calibrating a number of different eye and camera positions. Ground truth was measured through markers on both the camera and HMD screens, and we achieve a viewing accuracy of 1.66 degrees for the eyes of 5 different experiment participants.
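
One building block of such a system is recovering the eye camera's pose relative to the HMD frame from the detected IR-LEDs. A minimal sketch using OpenCV's PnP solver is shown below; the LED geometry and camera intrinsics are made-up example values, and the actual method also handles the display-side LED set and eye-position estimation.

    import numpy as np
    import cv2

    # Example values only: 3D LED positions on the HMD frame (HMD coordinates, mm)
    # and the eye camera's intrinsics, both assumed known in advance.
    LED_POINTS_HMD = np.array([[0, 0, 0], [30, 0, 0], [30, 20, 0], [0, 20, 0]], dtype=np.float64)
    CAMERA_MATRIX = np.array([[600.0, 0.0, 320.0], [0.0, 600.0, 240.0], [0.0, 0.0, 1.0]])
    DIST_COEFFS = np.zeros(5)

    def eye_camera_pose(led_pixels):
        """led_pixels: (4, 2) detected LED centroids in the eye-camera image.
        Returns (R, t) mapping HMD coordinates into the eye-camera frame."""
        ok, rvec, tvec = cv2.solvePnP(LED_POINTS_HMD,
                                      np.asarray(led_pixels, dtype=np.float64),
                                      CAMERA_MATRIX, DIST_COEFFS)
        if not ok:
            raise RuntimeError("PnP failed for the detected LED pattern")
        R, _ = cv2.Rodrigues(rvec)
        return R, tvec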


augmented human international conference | 2016

Laplacian Vision: Augmenting Motion Prediction via Optical See-Through Head-Mounted Displays

Yuta Itoh; Jason Orlosky; Kiyoshi Kiyokawa; Gudrun Klinker

Naïve physics [7], or folk physics, is our ability to understand physical phenomena. We regularly use this ability to avoid collisions in traffic, to follow a tennis ball and time the return shot, or while working in dynamic industrial settings. Though this skill improves with practice, it is still imperfect, which leads to mistakes and misjudgments in time-critical tasks. People still often miss a tennis shot, which might cause them to lose the match, or fail to avoid a car or pedestrian, which can lead to injury or even death. As a step towards reducing these errors in human judgement, we present Laplacian Vision (LV), a vision augmentation system which assists the human ability to predict future trajectory information. By tracking real-world objects and estimating their trajectories, we can improve a user's prediction of the landing spot of a ball or the path of an oncoming car. We have designed a system that can track a flying ball in real time, predict its future trajectory, and visualize it in the user's field of view. The system is also calibrated to account for end-to-end delays so that the trajectory appears to emanate forward from the moving object. We also conduct a user study in which 29 subjects predict an object's landing spot, and show that prediction accuracy improves 3-fold using LV.
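
Trajectory prediction for a tracked ball can be sketched with a constant-acceleration least-squares fit and an extrapolation to ground height; the timestamps, the ground plane, and the absence of drag are simplifying assumptions, not the system's actual predictor.

    import numpy as np

    def fit_ballistic(times, positions):
        """Least-squares fit of p(t) = p0 + v0*t + 0.5*a*t^2 per axis.
        times: (n,) seconds since the first observation; positions: (n, 3) tracked points."""
        A = np.stack([np.ones_like(times), times, 0.5 * times ** 2], axis=1)
        coeffs, *_ = np.linalg.lstsq(A, positions, rcond=None)
        return coeffs                                    # rows: p0, v0, a

    def predict_landing(coeffs, ground_z=0.0):
        """Solve 0.5*a_z*t^2 + v_z*t + (p_z - ground_z) = 0 and evaluate the trajectory."""
        p0, v0, a = coeffs
        roots = np.roots([0.5 * a[2], v0[2], p0[2] - ground_z])
        t_land = max(r.real for r in roots if abs(r.imag) < 1e-9 and r.real > 0)
        return p0 + v0 * t_land + 0.5 * a * t_land ** 2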


international symposium on mixed and augmented reality | 2015

Simultaneous Direct and Augmented View Distortion Calibration of Optical See-Through Head-Mounted Displays

Yuta Itoh; Gudrun Klinker

In Augmented Reality (AR) with an Optical See-Through Head-Mounted Display (OST-HMD), the spatial calibration between a user's eye and the display screen is a crucial issue in realizing seamless AR experiences. A successful calibration hinges upon proper modeling of the display system, which is conceptually broken down into an eye part and an HMD part. This paper breaks the HMD part down even further to investigate optical aberration issues. The display optics cause two different optical aberrations that degrade the calibration quality: the distortion of incoming light from the physical world, and that of light from the image source of the HMD. While methods exist for correcting either of the two distortions independently, there is, to our knowledge, no method which corrects for both simultaneously. This paper proposes a calibration method that corrects both distortions simultaneously for an arbitrary eye position given an OST-HMD system. We expand a light-field (LF) correction approach [8] originally designed for the former distortion. Our method is camera-based and has an offline learning step and an online correction step. We verify our method in exemplary calibrations of two different OST-HMDs: a professional and a consumer OST-HMD. The results show that our method significantly improves the calibration quality compared to a conventional method, with accuracy comparable to 20/50 visual acuity. The results also indicate that the quality improves only when both distortions are corrected simultaneously.
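
Purely as an illustration of the two-distortion structure (and not the paper's formulation), the online step can be pictured as composing two learned 2D correction maps at render time, one for the direct view and one for the HMD's own image path.

    import numpy as np

    def corrected_pixel(pixel, direct_view_corr, augmented_view_corr):
        """direct_view_corr / augmented_view_corr: callables mapping a pixel to a learned
        2D displacement (fitted offline). Both corrections are applied before rendering."""
        p = np.asarray(pixel, dtype=float)
        p = p - direct_view_corr(p)         # compensate distortion of incoming world light
        p = p - augmented_view_corr(p)      # compensate distortion of the image-source path
        return p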
