Network

Latest external collaborations at the country level.

Hotspot

Research topics where Alexander Plopski is active.

Publication


Featured research published by Alexander Plopski.


IEEE Transactions on Visualization and Computer Graphics | 2015

Corneal-Imaging Calibration for Optical See-Through Head-Mounted Displays

Alexander Plopski; Yuta Itoh; Christian Nitschke; Kiyoshi Kiyokawa; Gudrun Klinker; Haruo Takemura

In recent years, optical see-through head-mounted displays (OST-HMDs) have moved from conceptual research to a market of mass-produced devices, with new models and applications being released continuously. It remains challenging to deploy augmented reality (AR) applications that require consistent spatial visualization, for example maintenance, training, and medical tasks, as the view of the attached scene camera is shifted from the user's view. A calibration step can compute the relationship between the HMD screen and the user's eye to align the digital content. However, this alignment is only viable as long as the display does not move, an assumption that rarely holds for an extended period of time. As a consequence, continuous recalibration is necessary. Manual calibration methods are tedious and rarely support practical applications. Existing automated methods do not account for user-specific parameters and are error prone. We propose the combination of a pre-calibrated display with a per-frame estimation of the user's cornea position to estimate the individual eye center and continuously recalibrate the system. With this, we also obtain the gaze direction, which allows for instantaneous uncalibrated eye-gaze tracking without the need for additional hardware and complex illumination. Contrary to existing methods, we use simple image processing and do not rely on iris tracking, which is typically noisy and can be ambiguous. Evaluation with simulated and real data shows that our approach achieves a more accurate and stable eye-pose estimation, which results in an improved and practical calibration with a largely improved distribution of projection error.
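
One sub-step of this kind of approach can be sketched in code. The snippet below is a minimal illustration, not the paper's implementation: it assumes that per-frame cornea-center estimates lie approximately on a sphere around the eyeball rotation center as the eye rotates, and recovers that center with a linear least-squares sphere fit; the synthetic data and the 5.3 mm offset are illustrative values only.

    # Minimal sketch (not the paper's implementation): estimate the eyeball
    # rotation center from per-frame cornea-center estimates, assuming those
    # centers lie approximately on a sphere around the rotation center.
    import numpy as np

    def fit_sphere_center(points):
        """Least-squares sphere fit; returns (center, radius)."""
        P = np.asarray(points, dtype=float)              # N x 3 cornea centers
        A = np.hstack([2.0 * P, np.ones((len(P), 1))])   # linearized sphere equation
        b = np.sum(P * P, axis=1)
        x, *_ = np.linalg.lstsq(A, b, rcond=None)
        center = x[:3]
        radius = np.sqrt(x[3] + center @ center)
        return center, radius

    # Synthetic cornea centers on a 5.3 mm sphere with small measurement noise.
    rng = np.random.default_rng(1)
    dirs = rng.normal(size=(200, 3))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    true_center = np.array([10.0, 20.0, 30.0])
    samples = true_center + 5.3 * dirs + rng.normal(scale=0.05, size=(200, 3))
    print(fit_sphere_center(samples))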


International Symposium on Mixed and Augmented Reality | 2013

An outdoor ground truth evaluation dataset for sensor-aided visual handheld camera localization

Daniel Kurz; Peter Meier; Alexander Plopski; Gudrun Klinker

We introduce the first publicly available test dataset for outdoor handheld camera localization, comprising over 45,000 real camera images of an urban environment captured under natural camera motions and different illumination settings. For all these images the dataset contains not only readings of the sensors attached to the camera, but also ground-truth information on the geometry and texture of the environment and the full 6DoF ground-truth camera pose. This poster describes the extensive process of creating this comprehensive dataset that we have made available to the public. We hope this not only enables researchers to objectively evaluate their camera localization and tracking algorithms and frameworks on realistic data, but also stimulates further research.


International Symposium on Mixed and Augmented Reality | 2016

Automated Spatial Calibration of HMD Systems with Unconstrained Eye-cameras

Alexander Plopski; Jason Orlosky; Yuta Itoh; Christian Nitschke; Kiyoshi Kiyokawa; Gudrun Klinker

Properly calibrating an optical see-through head-mounted display (OST-HMD) and maintaining a consistent calibration over time can be a very challenging task. Automated methods need an accurate model of both the OST-HMD screen and the user's constantly changing eye position to correctly project virtual information. While some automated methods exist, they often have restrictions, including fixed eye-cameras that cannot be adjusted for different users. To address this problem, we have developed a method that automatically determines the position of an adjustable eye-tracking camera and its unconstrained position relative to the display. Unlike methods that require a fixed pose between the HMD and eye camera, our framework allows for automatic calibration even after adjustments of the camera to a particular individual's eye and even after the HMD moves on the user's face. Using two sets of IR-LEDs rigidly attached to the camera and OST-HMD frame, we can calculate the correct projection for different eye positions in real time and account for changes in HMD position within several frames. To verify the accuracy of our method, we conducted two experiments with a commercial HMD by calibrating a number of different eye and camera positions. Ground truth was measured through markers on both the camera and HMD screens, and we achieve a viewing accuracy of 1.66 degrees for the eyes of 5 different experiment participants.
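
To illustrate the kind of geometric step involved (a sketch under assumed data, not the paper's pipeline), the pose of a camera relative to a frame marked with IR-LEDs at known 3D positions could be recovered from their 2D detections with a standard PnP solve; the LED layout, detections, and intrinsics below are hypothetical values.

    # Hedged sketch: recover the camera pose relative to an LED-marked frame with
    # a standard PnP solve, assuming known 3D LED positions (metres, frame
    # coordinates), their detected 2D image positions, and camera intrinsics.
    import numpy as np
    import cv2

    led_points_3d = np.array([[0.00, 0.00, 0.0],    # hypothetical LED layout
                              [0.06, 0.00, 0.0],
                              [0.06, 0.03, 0.0],
                              [0.00, 0.03, 0.0]], dtype=np.float32)
    led_points_2d = np.array([[320.0, 240.0],        # hypothetical detections
                              [420.0, 238.0],
                              [422.0, 300.0],
                              [321.0, 302.0]], dtype=np.float32)
    K = np.array([[800.0, 0.0, 320.0],
                  [0.0, 800.0, 240.0],
                  [0.0, 0.0, 1.0]])
    dist = np.zeros(5)  # assume an undistorted image

    ok, rvec, tvec = cv2.solvePnP(led_points_3d, led_points_2d, K, dist)
    R, _ = cv2.Rodrigues(rvec)   # rotation: frame coordinates -> camera coordinates
    print(ok, R, tvec)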


International Symposium on Mixed and Augmented Reality | 2014

Corneal imaging in localization and HMD interaction

Alexander Plopski; Kiyoshi Kiyokawa; Haruo Takemura; Christian Nitschke

The human eyes perceive our surroundings and are one of our most important, if not the most important, sensory organs. Unlike our other senses, the eyes not only perceive but also provide information to a keen observer. However, thus far this has mainly been used to detect the reflection of infrared light sources to estimate the user's gaze. The reflection of the visible spectrum, on the other hand, has rarely been utilized. In this dissertation we explore how the analysis of the corneal image can improve currently available eye-related solutions, such as calibration of optical see-through head-mounted devices or eye-gaze tracking and point-of-regard estimation in arbitrary environments. We also aim to study how corneal imaging can become an alternative for established augmented reality tasks such as tracking and localization.


International Symposium on Ubiquitous Virtual Reality | 2017

Estimating Gaze Depth Using Multi-Layer Perceptron

Youngho Lee; Choonsung Shin; Alexander Plopski; Yuta Itoh; Thammathip Piumsomboon; Arindam Dey; Gun A. Lee; Seungwon Kim; Mark Billinghurst

In this paper we describe a new method for determining gaze depth in a head-mounted eye tracker. Eye trackers are being incorporated into head-mounted displays (HMDs), and eye gaze is being used for interaction in Virtual and Augmented Reality. For some interaction methods it is important to accurately measure not only the x- and y-direction of the eye gaze but especially the focal depth. Generally, eye-tracking technology has high accuracy in the x- and y-directions, but not in depth. We used a binocular gaze tracker with two eye cameras, and the gaze vector was input to an MLP neural network for training and estimation. For the performance evaluation, data was obtained from 13 people gazing at fixed points at distances from 1 m to 5 m. The gaze classification into fixed distances produced an average classification error of nearly 10% and an average error distance of 0.42 m. This is sufficient for some Augmented Reality applications, but more research is needed to provide an estimate of a user's gaze moving in continuous space.
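
The overall recipe (binocular gaze vectors in, fixation-distance class out via a multi-layer perceptron) can be sketched as follows. This is a minimal illustration with placeholder data and an assumed feature layout (two 3D gaze direction vectors per sample), not the authors' network or dataset.

    # Minimal sketch (placeholder data, assumed feature layout): classify gaze
    # depth from binocular gaze vectors with a multi-layer perceptron.
    import numpy as np
    from sklearn.neural_network import MLPClassifier
    from sklearn.model_selection import train_test_split

    # X: one row per fixation, [lx, ly, lz, rx, ry, rz] from the eye tracker.
    # y: fixation distance label in metres (1..5). Random placeholder data here.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(1300, 6))
    y = rng.integers(1, 6, size=1300)

    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

    clf = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0)
    clf.fit(X_train, y_train)
    print("classification accuracy:", clf.score(X_test, y_test))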


International Conference on Artificial Reality and Telexistence | 2015

Hybrid eye tracking: combining iris contour and corneal imaging

Alexander Plopski; Christian Nitschke; Kiyoshi Kiyokawa; Dieter Schmalstieg; Haruo Takemura

Passive eye-pose estimation methods that recover the eye pose from natural images generally suffer from low accuracy, a result of using a static eye model and of recovering the eye model from the estimated iris contour. Active eye-pose estimation methods use precisely calibrated light sources to estimate a user-specific eye model. These methods recover an accurate eye pose at the cost of complex setups and additional hardware. A common application of eye-pose estimation is the recovery of the point of gaze (PoG) given a 3D model of the scene. We propose a novel method that exploits this 3D model to recover the eye pose and the corresponding PoG from natural images. Our hybrid approach combines active and passive eye-pose estimation methods to recover an accurate eye pose from natural images. We track the corneal reflection of the scene to estimate an accurate position of the eye and then determine its orientation. The positional constraint allows us to estimate user-specific eye-model parameters and improve the orientation estimation. We compare our method with standard iris-contour tracking and show that our method is more robust and accurate than eye-pose estimation from the detected iris with a static iris size. Accurate passive eye-pose and PoG estimation allows users to naturally interact with the scene, e.g., augmented reality content, without the use of infrared light sources.


International Symposium on Mixed and Augmented Reality | 2016

EyeAR: Refocusable Augmented Reality Content through Eye Measurements

Damien Constantine Rompapas; Aitor Rovira; Sei Ikeda; Alexander Plopski; Takafumi Taketomi; Christian Sandor; Hirokazu Kato

The human visual system always focuses at a distinct depth. Therefore, objects that lie at different depths appear blurred, a phenomenon known as Depth of Field (DoF); as the user's focus depth changes, different objects come in and out of focus. Augmented Reality (AR) is a technology that superimposes computer graphics (CG) images onto a user's view of the real world. A commonly used AR display device is an Optical See-Through Head-Mounted Display (OST-HMD), enabling users to observe the real world directly, with CG added to it. A common problem in such systems is the mismatch between the DoF properties of the user's eyes and the virtual camera used to generate CG. In this demonstration, we present an improved version of the system presented in [11] as two implementations: the first as a high-quality tabletop system, the second as a component integrated into the Microsoft HoloLens [18].
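
The DoF mismatch the demo addresses can be illustrated with the thin-lens circle-of-confusion formula. The sketch below is an illustration only, not the EyeAR implementation; the pupil diameter and eye focal length are rough assumed values.

    # Illustrative sketch (not the EyeAR implementation): circle-of-confusion
    # diameter from the thin-lens model, assuming rough human-eye values for
    # pupil diameter (4 mm) and focal length (17 mm).
    def circle_of_confusion(d_object, d_focus, aperture=0.004, focal_length=0.017):
        """All distances in metres; returns the blur-disc diameter (metres)."""
        return aperture * abs(d_object - d_focus) / d_object * focal_length / (d_focus - focal_length)

    # An object 1 m away appears noticeably blurred when the eye focuses at 3 m.
    for d in (0.5, 1.0, 3.0, 5.0):
        print(d, circle_of_confusion(d, d_focus=3.0))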


International Conference on Artificial Reality and Telexistence | 2016

Simulation based camera localization under a variable lighting environment

Tomohiro Mashita; Alexander Plopski; Akira Kudo; Tobias Höllerer; Kiyoshi Kiyokawa; Haruo Takemura

Localizing the user from a feature database of a scene is a basic and necessary step for the presentation of localized augmented reality (AR) content. Commonly such a database depicts a single appearance of the scene, due to the time and effort required to prepare it. However, the appearance depends on various factors, e.g., the position of the sun and cloudiness. Observing the scene under different lighting conditions therefore reduces the success rate and accuracy of the localization. To address this, we propose generating the feature database from a simulated appearance of the scene model under a number of different lighting conditions. We also propose extending the feature descriptors used in the localization with a parametric representation of their changes under varying lighting conditions. We compare our method with a standard representation and matching based on the L2-norm in simulation and real-world experiments. Our results show that our simulated environment is a satisfactory representation of the scene's appearance and improves feature matching over a single database. The proposed feature descriptor achieves a higher localization rate with fewer feature points and a lower processing cost.
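
As a toy illustration of the multi-lighting database idea (hypothetical arrays, not the authors' descriptor or matcher), a query descriptor can be matched by L2 nearest neighbour against a database that stacks descriptors simulated under several lighting conditions per landmark:

    # Toy sketch (hypothetical data): nearest-neighbour matching by L2 distance
    # against a database that stacks feature descriptors simulated under several
    # lighting conditions for each landmark.
    import numpy as np

    rng = np.random.default_rng(2)
    n_landmarks, n_lightings, dim = 100, 8, 64
    # database[i, j] = descriptor of landmark i rendered under lighting condition j
    database = rng.normal(size=(n_landmarks, n_lightings, dim)).astype(np.float32)
    flat = database.reshape(-1, dim)

    def match(query):
        """Return the landmark whose closest simulated descriptor is nearest to the query."""
        d = np.linalg.norm(flat - query, axis=1)    # L2 distance to every database entry
        return int(np.argmin(d) // n_lightings)     # map flat index back to landmark id

    # A query captured under yet another lighting: perturb one landmark's descriptor.
    query = database[42, 3] + rng.normal(scale=0.1, size=dim).astype(np.float32)
    print(match(query))  # expected: 42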


IEEE Virtual Reality Conference | 2016

Spatial consistency perception in optical and video see-through head-mounted augmentations

Alexander Plopski; Kenneth R. Moser; Kiyoshi Kiyokawa; J. Edward Swan; Haruo Takemura

Correct spatial alignment is an essential requirement for convincing augmented reality experiences. Registration error, caused by a variety of systematic, environmental, and user influences, decreases the realism and utility of head-mounted display AR applications. Focus is often given to rigorous calibration and prediction methods seeking to entirely remove misalignment error between virtual and real content. Unfortunately, producing perfect registration is often simply not possible. Our goal is to quantify the sensitivity of users to registration error in these systems and to identify acceptability thresholds at which users can no longer distinguish between the spatial positioning of virtual and real objects. We simulate both video see-through and optical see-through environments using a projector system and experimentally measure user perception of virtual content misalignment. Our results indicate that users are less sensitive to rotational errors overall and that translational accuracy is less important in optical see-through systems than in video see-through systems.


International Symposium on Mixed and Augmented Reality | 2013

In-situ lighting and reflectance estimations for indoor AR systems

Tomohiro Mashita; Alexander Plopski; Kiyoshi Kiyokawa; Haruo Takemura

We introduce an in-situ lighting and reflectance estimation method that does not require specific light probes and/or preliminary scanning. Our method uses images taken from multiple viewpoints, while data accumulation and the lighting and reflectance estimations run in the background of the primary AR system. As a result, our method requires little in the way of manipulation for image collection, because it consists primarily of image processing and optimization. In use, lighting directions and initial optimization values are estimated via image processing. Eventually, the full parameters are obtained by optimizing the differences between real images. The system always uses the current best parameters because the parameter estimation and the input-image updates run independently.
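
A heavily simplified sketch of the underlying optimization idea (an assumed Phong shading model fit to synthetic observations, not the paper's formulation or data): reflectance parameters can be recovered by minimizing the difference between modelled and observed intensities seen from multiple viewpoints.

    # Heavily simplified sketch (assumed Phong model, synthetic data; not the
    # paper's formulation): fit diffuse/specular reflectance and shininess by
    # minimizing the difference between modelled and observed intensities.
    import numpy as np
    from scipy.optimize import least_squares

    rng = np.random.default_rng(3)
    n = np.array([0.0, 0.0, 1.0])                             # surface normal
    l = np.array([0.3, 0.2, 0.9]); l /= np.linalg.norm(l)     # estimated light direction
    views = rng.normal(size=(50, 3))                          # viewing directions, 50 viewpoints
    views /= np.linalg.norm(views, axis=1, keepdims=True)
    views[:, 2] = np.abs(views[:, 2])                         # keep viewpoints above the surface

    def phong(params, v):
        kd, ks, shininess = params
        r = 2.0 * (n @ l) * n - l                             # mirror reflection of the light
        return kd * max(n @ l, 0.0) + ks * max(v @ r, 0.0) ** shininess

    true_params = (0.6, 0.3, 20.0)
    observed = np.array([phong(true_params, v) for v in views]) + rng.normal(scale=0.01, size=50)

    def residuals(params):
        return np.array([phong(params, v) for v in views]) - observed

    fit = least_squares(residuals, x0=[0.5, 0.5, 10.0], bounds=([0, 0, 1], [1, 1, 200]))
    print(fit.x)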

Collaboration


Dive into Alexander Plopski's collaboration.

Top Co-Authors

Christian Sandor (Nara Institute of Science and Technology)
Takafumi Taketomi (Nara Institute of Science and Technology)
Haruo Takemura (Nara Institute of Science and Technology)
Hirokazu Kato (Nara Institute of Science and Technology)
Sei Ikeda (Nara Institute of Science and Technology)