Publication


Featured research published by Christian Nitschke.


International Conference on Computer Graphics and Interactive Techniques | 2005

Enabling view-dependent stereoscopic projection in real environments

Oliver Bimber; Gordon Wetzstein; Andreas Emmerling; Christian Nitschke; Anselm Grundhöfer

We show how view-dependent image-based and geometric warping, radiometric compensation, and multi-focal projection enable a view-dependent stereoscopic visualization on ordinary (geometrically complex, colored and textured) surfaces within everyday environments. Special display configurations for immersive or semi-immersive AR/VR applications that require permanent and artificial projection canvases might become unnecessary. We demonstrate several ad-hoc visualization examples in a real architectural and museum application context.
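The radiometric-compensation step can be pictured as a per-pixel inversion of a simplified image-formation model. The sketch below is illustrative only and assumes the surface response and ambient term have already been captured with a camera; the paper's full pipeline additionally performs view-dependent geometric warping and multi-focal projection, which are not shown here.

```python
# Minimal sketch of per-pixel radiometric compensation for projection onto a
# colored, textured surface. The variable names are illustrative placeholders,
# not the paper's notation.
import numpy as np

def compensate(desired, ambient, surface_response, eps=1e-3):
    """Compute the projector input that should make the surface appear as `desired`.

    desired          : HxWx3 target image in [0, 1]
    ambient          : HxWx3 camera image of the surface with the projector off
    surface_response : HxWx3 camera image under full white projection minus `ambient`
                       (per-channel reflectance times form factor)
    """
    # Invert the simplified model  captured = projected * response + ambient
    projector_input = (desired - ambient) / np.maximum(surface_response, eps)
    # Out-of-range values cannot be produced physically; clamp them
    return np.clip(projector_input, 0.0, 1.0)

if __name__ == "__main__":
    h, w = 4, 4
    desired = np.full((h, w, 3), 0.6)
    ambient = np.full((h, w, 3), 0.05)
    response = np.full((h, w, 3), 0.8)
    print(compensate(desired, ambient, response)[0, 0])  # ~[0.6875 0.6875 0.6875]
```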


IEEE Transactions on Visualization and Computer Graphics | 2015

Corneal-Imaging Calibration for Optical See-Through Head-Mounted Displays

Alexander Plopski; Yuta Itoh; Christian Nitschke; Kiyoshi Kiyokawa; Gudrun Klinker; Haruo Takemura

In recent years, optical see-through head-mounted displays (OST-HMDs) have moved from conceptual research to a market of mass-produced devices, with new models and applications being released continuously. It remains challenging to deploy augmented reality (AR) applications that require consistent spatial visualization, for example in maintenance, training and medical tasks, as the view of the attached scene camera is shifted from the user's view. A calibration step can compute the relationship between the HMD screen and the user's eye to align the digital content. However, this alignment is only viable as long as the display does not move, an assumption that rarely holds for an extended period of time. As a consequence, continuous recalibration is necessary. Manual calibration methods are tedious and rarely support practical applications. Existing automated methods do not account for user-specific parameters and are error prone. We propose the combination of a pre-calibrated display with a per-frame estimation of the user's cornea position to estimate the individual eye center and continuously recalibrate the system. With this, we also obtain the gaze direction, which allows for instantaneous uncalibrated eye gaze tracking, without the need for additional hardware and complex illumination. Contrary to existing methods, we use simple image processing and do not rely on iris tracking, which is typically noisy and can be ambiguous. Evaluation with simulated and real data shows that our approach achieves a more accurate and stable eye pose estimation, which results in an improved and practical calibration with a largely improved distribution of projection error.
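As a rough illustration of how per-frame cornea positions can yield the eye (rotation) center, the sketch below fits a sphere to tracked cornea-sphere centers, exploiting the fact that they rotate about a fixed point. This is an assumption-laden stand-in for intuition, not the paper's recalibration procedure.

```python
# Minimal sketch: recover an eyeball rotation center as the fixed point that
# per-frame cornea centers rotate about, via a linear sphere fit. Illustrative
# only; the 5.3 mm offset used in the demo is a typical anatomical value.
import numpy as np

def fit_rotation_center(cornea_centers):
    """cornea_centers: Nx3 cornea-sphere centers in HMD/camera coordinates."""
    P = np.asarray(cornea_centers, dtype=float)
    # |p - c|^2 = r^2  ->  2 p.c + (r^2 - |c|^2) = |p|^2, linear in (c, d)
    A = np.hstack([2.0 * P, np.ones((len(P), 1))])
    b = np.sum(P * P, axis=1)
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    center, d = x[:3], x[3]
    radius = np.sqrt(d + center @ center)
    return center, radius

if __name__ == "__main__":
    # Synthetic cornea centers on a sphere of radius 5.3 mm around (0, 0, 30) mm
    rng = np.random.default_rng(0)
    dirs = rng.normal(size=(50, 3))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    pts = np.array([0.0, 0.0, 30.0]) + 5.3 * dirs
    print(fit_rotation_center(pts))  # ~((0, 0, 30), 5.3)
```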


Symposium on Spatial User Interaction | 2014

Fisheye vision: peripheral spatial compression for improved field of view in head mounted displays

Jason Orlosky; Qifan Wu; Kiyoshi Kiyokawa; Haruo Takemura; Christian Nitschke

A current problem with many video see-through displays is the lack of a wide field of view, which can make them dangerous to use in real-world augmented reality applications since peripheral vision is severely limited. Existing wide-field-of-view displays are often bulky, lack stereoscopy, or require complex setups. To solve this problem, we introduce a prototype that utilizes fisheye lenses to expand a user's peripheral vision inside a video see-through head mounted display. Our system provides an undistorted central field of view, so that natural stereoscopy and depth judgment can occur. The peripheral areas of the display show content through the curvature of each of two fisheye lenses using a modified compression algorithm, so that objects outside of the inherent viewing angle of the display become visible. We first test an initial prototype with 180° field of view lenses, and then build an improved version with 238° lenses. We also describe solutions to several problems associated with aligning undistorted binocular vision and the compressed periphery, and finally compare our prototype to natural human vision in a series of visual acuity experiments. Results show that users can effectively see objects up to 180°, and that the overall detection rate is 62.2% for the display versus 89.7% for the naked eye.
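The peripheral compression can be pictured as a radial remapping of viewing angles: the central field of view passes through 1:1, while the remaining fisheye angles are squeezed into the leftover display angles. The sketch below uses made-up angle values, not the prototype's parameters.

```python
# Minimal sketch of a radial angle remapping for peripheral compression.
# All angle values are illustrative placeholders.
import numpy as np

def source_angle(display_angle_deg,
                 undistorted_fov_deg=40.0,   # half-angle shown 1:1 in the center
                 display_fov_deg=50.0,       # half-angle of the display
                 lens_fov_deg=90.0):         # half-angle captured by the fisheye lens
    """Map a viewing angle on the display to the scene angle sampled from the fisheye image."""
    a = np.asarray(display_angle_deg, dtype=float)
    inner = a <= undistorted_fov_deg
    # Linear compression of the remaining lens angles into the remaining display angles
    scale = (lens_fov_deg - undistorted_fov_deg) / (display_fov_deg - undistorted_fov_deg)
    compressed = undistorted_fov_deg + (a - undistorted_fov_deg) * scale
    return np.where(inner, a, compressed)

if __name__ == "__main__":
    for d in (10, 40, 45, 50):
        print(d, "->", float(source_angle(d)))  # 50 deg on screen samples 90 deg in the scene
```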


International Conference on Control, Automation and Systems | 2014

A Quadrocopter Automatic Control Contest as an Example of Interdisciplinary Design Education

Christian Nitschke; Yuki Minami; Masayuki Hiromoto; Hiroaki Ohshima; Takashi Sato

Unmanned aerial vehicles (UAVs) have many applications and quickly gain popularity with the availability of low-cost micro aerial vehicles (MAVs). Robotics is a popular target for interdisciplinary education, as it involves understanding of and collaboration between several disciplines. Thus, UAVs can serve as an ideal study platform. However, as robotics requires technical background, skills and initial effort, it is commonly applied in long-term courses. In this paper we successfully exploit the opposite case of robotics in short-term education for students without background, in the form of a one-day contest on automatic visual UAV navigation. We provide an extensive survey and show that existing material and tools do not fit the task and fall short in technical aspects. We introduce a novel open-source programming library that comprises programs to guide learning by experience and allow rapid development. It makes contributions to marker-based tracking, with a novel nested-marker design and accurate calibration parameters estimated from 14 Parrot AR.Drone 2.0 front cameras. We provide a detailed discussion of the contest results, which represent an extensive user study regarding robotics in education and the effectiveness of the library. The achievement of a steep learning curve for a complex subject has important implications in interdisciplinary design education, as it allows deep understanding of potentials and limitations to facilitate decision-making, unconventional problem solutions and novel applications.
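For a feel of the marker-based tracking component, the following is a generic sketch of square-marker pose estimation with a calibrated front camera using OpenCV. The intrinsics and marker size are placeholders, not the calibration data or the nested-marker design shipped with the library described in the paper.

```python
# Minimal sketch of square-marker pose estimation with a calibrated camera.
# Intrinsics and marker size below are illustrative placeholders.
import numpy as np
import cv2

MARKER_SIZE = 0.10  # marker edge length in meters (illustrative)

# 3D corner coordinates of a square marker in its own frame (z = 0 plane),
# ordered as required by SOLVEPNP_IPPE_SQUARE
object_points = 0.5 * MARKER_SIZE * np.array(
    [[-1, 1, 0], [1, 1, 0], [1, -1, 0], [-1, -1, 0]], dtype=np.float64)

# Placeholder pinhole intrinsics (fx, fy, cx, cy) for a 640x360 image
K = np.array([[560.0, 0.0, 320.0],
              [0.0, 560.0, 180.0],
              [0.0, 0.0, 1.0]])
dist = np.zeros(5)  # assume lens distortion already corrected

def marker_pose(corners_px):
    """corners_px: 4x2 detected marker corners in pixels (same order as object_points)."""
    ok, rvec, tvec = cv2.solvePnP(object_points,
                                  np.asarray(corners_px, dtype=np.float64),
                                  K, dist, flags=cv2.SOLVEPNP_IPPE_SQUARE)
    if not ok:
        raise RuntimeError("pose estimation failed")
    R, _ = cv2.Rodrigues(rvec)  # rotation: marker frame -> camera frame
    return R, tvec.ravel()      # tvec: marker origin in camera coordinates (meters)
```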


International Conference on Multimedia and Expo | 2015

Non-calibrated and real-time human view estimation using a mobile corneal imaging camera

Atsushi Nakazawa; Christian Nitschke; Toyoaki Nishida

We present a mobile human view estimation system using a corneal imaging technique. Compared to current eye gaze tracking (EGT) systems, our system does not require per-session calibrations or a frontal view (scene) camera, making it suitable for wearable glasses systems because it is easier to use and more socially acceptable due to the lack of a frontal scene camera. Our glasses system consists of a glasses frame and a micro eye camera that captures the eye (corneal) reflections of a user. 3D corneal pose tracking is performed on the captured images using a particle filter-based real-time tracking method leveraging a 3D eye model and weak perspective projection. We then compute the gaze reflection point (GRP), where the light from the point of gaze (PoG) is reflected, enabling us to identify where a user is looking in the scene image reflected on the corneal surface. We conducted experiments using a standard computer display setup and several real-world scenes, and found that the proposed method performs with considerable accuracy under non-calibrated setups. This demonstrates its potential for various purposes such as the user interface of glasses systems and the analysis of human perception in actual scenes for marketing, environmental design, and quality-of-life applications.
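The real-time tracking stage can be sketched as a standard particle filter over a corneal pose state. The state layout, motion model and likelihood interface below are illustrative assumptions, not the paper's implementation.

```python
# Minimal particle-filter skeleton for tracking 3D corneal pose across eye-camera
# frames. The 5D state (cornea center x, y, z plus two gaze angles), the
# random-walk motion model and the likelihood interface are placeholders.
import numpy as np

def track_cornea_pose(frames, likelihood, init_state,
                      motion_std=(0.5, 0.5, 0.5, 0.02, 0.02),
                      n_particles=500, seed=0):
    """Yield one weighted-mean pose estimate per frame.

    likelihood(state, frame) -> float: scores how well the iris/limbus ellipse
    predicted by `state` matches the observed eye image (supplied by the caller).
    """
    rng = np.random.default_rng(seed)
    particles = np.asarray(init_state, float) + rng.normal(scale=motion_std,
                                                           size=(n_particles, 5))
    weights = np.full(n_particles, 1.0 / n_particles)
    for frame in frames:
        # Predict: perturb each hypothesis with the motion model
        particles = particles + rng.normal(scale=motion_std, size=particles.shape)
        # Update: reweight hypotheses by their image likelihood
        weights = weights * np.array([likelihood(p, frame) for p in particles])
        weights = weights / weights.sum()
        # Resample when the effective sample size collapses
        if 1.0 / np.sum(weights ** 2) < n_particles / 2:
            idx = rng.choice(n_particles, size=n_particles, p=weights)
            particles, weights = particles[idx], np.full(n_particles, 1.0 / n_particles)
        yield weights @ particles  # weighted mean pose for this frame
```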


Asian Conference on Pattern Recognition | 2013

I See What You See: Point of Gaze Estimation from Corneal Images

Christian Nitschke; Atsushi Nakazawa; Toyoaki Nishida

Eye-gaze tracking (EGT) is an important problem with a long history and various applications. However, state-of-the-art geometric vision-based techniques still suffer from major limitations, especially (1) the requirement for calibration of a static relationship between eye camera and scene, and (2) a parallax error that occurs when the depth of the scene varies. This paper introduces a novel concept for EGT that overcomes these limitations using corneal imaging. Based on the observation that the cornea reflects the surrounding scene over a wide field of view, it is shown how to extract that information and determine the point of gaze (PoG) directly in an eye image. To realize this, a closed-form solution is developed to obtain the gaze-reflection point (GRP), where light from the PoG reflects at the corneal surface into a camera. This includes compensation for the individual offset between optical and visual axes. Quantitative and qualitative evaluation shows that the strategy achieves considerable accuracy and successfully supports depth-varying environments. The novel approach provides important practical advantages, including reduced intrusiveness and complexity, and support for flexible dynamic setups, non-planar scenes and outdoor application.
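To illustrate the GRP geometry (though not the paper's closed-form derivation), the sketch below finds the reflection point numerically by bisection in the plane through camera, point of gaze and cornea center, assuming a spherical cornea and illustrative coordinates.

```python
# Minimal numerical sketch of the gaze-reflection point (GRP): the point on a
# spherical cornea where light from the point of gaze reflects into the camera.
# Coordinates are illustrative (millimeters, camera at the origin).
import numpy as np

CORNEA_RADIUS = 7.8  # typical corneal sphere radius in mm

def gaze_reflection_point(camera, gaze_point, cornea_center, r=CORNEA_RADIUS):
    S, C, P = (np.asarray(x, float) for x in (cornea_center, camera, gaze_point))
    u = (C - S) / np.linalg.norm(C - S)                    # toward the camera
    w = (P - S) - ((P - S) @ u) * u
    v = w / np.linalg.norm(w)                              # completes the reflection plane
    alpha = np.arccos(np.clip((P - S) @ u / np.linalg.norm(P - S), -1, 1))

    def imbalance(theta):
        n = np.cos(theta) * u + np.sin(theta) * v          # surface normal at candidate point
        X = S + r * n
        a = np.arccos(np.clip(n @ (C - X) / np.linalg.norm(C - X), -1, 1))
        b = np.arccos(np.clip(n @ (P - X) / np.linalg.norm(P - X), -1, 1))
        return a - b                                       # zero when incidence == reflection

    lo, hi = 1e-9, alpha - 1e-9                            # bisect over the arc between C and P
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if imbalance(lo) * imbalance(mid) <= 0:
            hi = mid
        else:
            lo = mid
    theta = 0.5 * (lo + hi)
    return S + r * (np.cos(theta) * u + np.sin(theta) * v)

if __name__ == "__main__":
    # Eye ~60 mm in front of the camera; gaze point in the scene on the camera's side
    grp = gaze_reflection_point(camera=[0, 0, 0], gaze_point=[200, 0, -400],
                                cornea_center=[0, 0, 60])
    print(grp)  # a point on the corneal sphere between the camera and gaze directions
```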


British Machine Vision Conference | 2012

Super-Resolution from Corneal Images

Christian Nitschke; Atsushi Nakazawa

The cornea of the human eye reflects the light from a person's environment. Modeling corneal reflections from an image of the eye enables a number of applications, including the computation of a scene panorama and 3D model, together with the person's field of view and point of gaze [4]. The obtained environment map enables general applications in vision and graphics, such as face reconstruction, relighting [3] and recognition [5]. In reality, however, even if we use a carefully adjusted high-resolution camera in front of the eye, the quality of corneal reflections is limited due to low resolution and contrast, iris texture and geometric distortion. This paper introduces an approach to overcome these issues through a super-resolution (SR) [6] strategy for corneal imaging that reconstructs a high-resolution (HR) scene image from a series of lower-resolution (LR) corneal images, such as those occurring in surveillance or personal videos. The process comprises (1) single-image environment map recovery, (2) multiple-image registration, and (3) HR image reconstruction. This is also the first non-central catadioptric approach for multiple-image SR.

Corneal reflection modeling. We apply a common geometric eye model, where eyeball and cornea (Figure 1 (a)) are approximated as two overlapping spherical surfaces. A simple strategy assuming weak perspective projection recovers the pose of the model by reconstructing the pose of the circular iris from its elliptical projected contour (Figure 1 (b)). A corneal image is transformed into a spherical environment map by calculating the intersection and reflection at the corneal surface. Since the eye model only approximates the true corneal geometry, it is not possible to obtain an accurate registration for the whole environment map. Instead, we assume spherical curvature for a user-defined region of interest, where we project the environment map onto a local tangent plane (Figure 1 (c)). Registration further requires the forward projection from the tangent plane into the image. As common iterative methods are not feasible for handling the large number of re-projections, we apply a recent analytic method that requires solving a 4th-order polynomial equation (for the case of a spherical mirror), which is computed in closed form [1].
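The weak-perspective pose recovery mentioned above can be sketched as follows: the ellipse's axis ratio gives the tilt of the limbus plane and its apparent size gives the depth. The focal length, limbus radius and the choice between the two mirror-symmetric tilt solutions below are illustrative simplifications, not the paper's exact procedure.

```python
# Minimal sketch of weak-perspective eye-pose recovery from the iris (limbus)
# contour ellipse. Default values are illustrative, not the paper's.
import numpy as np

LIMBUS_RADIUS_MM = 5.6   # typical human limbus radius

def limbus_pose(ellipse_center_px, major_px, minor_px, phi_rad,
                focal_px, principal_point_px):
    """Return (3D limbus center in camera coords [mm], unit optical-axis direction).

    ellipse_center_px : (u, v) center of the fitted iris ellipse
    major_px, minor_px: semi-axes of the ellipse in pixels
    phi_rad           : rotation of the major axis in the image plane
    """
    # Depth from apparent size under weak perspective
    depth = focal_px * LIMBUS_RADIUS_MM / major_px
    # Back-project the ellipse center to 3D
    u, v = ellipse_center_px
    cx, cy = principal_point_px
    center = depth * np.array([(u - cx) / focal_px, (v - cy) / focal_px, 1.0])
    # Tilt of the limbus plane from the axis ratio (two mirror solutions exist;
    # only one is kept here for brevity)
    tau = np.arccos(np.clip(minor_px / major_px, 0.0, 1.0))
    axis = np.array([np.sin(tau) * np.sin(phi_rad),
                     -np.sin(tau) * np.cos(phi_rad),
                     -np.cos(tau)])                 # points from the eye toward the camera side
    return center, axis

if __name__ == "__main__":
    c, g = limbus_pose((700, 420), major_px=60, minor_px=48, phi_rad=0.1,
                       focal_px=2400, principal_point_px=(640, 400))
    print(c, g)   # limbus ~224 mm from the camera, optical axis tilted ~37 degrees
```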


International Symposium on Mixed and Augmented Reality | 2016

Automated Spatial Calibration of HMD Systems with Unconstrained Eye-cameras

Alexander Plopski; Jason Orlosky; Yuta Itoh; Christian Nitschke; Kiyoshi Kiyokawa; Gudrun Klinker

Properly calibrating an optical see-through head-mounted display (OST-HMD) and maintaining a consistent calibration over time can be a very challenging task. Automated methods need an accurate model of both the OST-HMD screen and the user's constantly changing eye position to correctly project virtual information. While some automated methods exist, they often have restrictions, including fixed eye-cameras that cannot be adjusted for different users. To address this problem, we have developed a method that automatically determines the position of an adjustable eye-tracking camera and its unconstrained position relative to the display. Unlike methods that require a fixed pose between the HMD and eye camera, our framework allows for automatic calibration even after adjustments of the camera to a particular individual's eye and even after the HMD moves on the user's face. Using two sets of IR-LEDs rigidly attached to the camera and OST-HMD frame, we can calculate the correct projection for different eye positions in real time, and account for changes in HMD position within several frames. To verify the accuracy of our method, we conducted two experiments with a commercial HMD by calibrating a number of different eye and camera positions. Ground truth was measured through markers on both the camera and HMD screens, and we achieve a viewing accuracy of 1.66 degrees for the eyes of 5 different experiment participants.
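A minimal sketch of the transform chain such a calibration maintains is given below: an eye position estimated in the adjustable eye-camera frame is carried through the camera-to-HMD pose (recovered from the rigidly attached LED sets) into display coordinates. All matrix values are placeholders.

```python
# Minimal sketch of carrying an eye position from eye-camera coordinates into
# display coordinates via rigid transforms. Poses below are placeholders.
import numpy as np

def to_homogeneous(R, t):
    """Build a 4x4 rigid transform from a 3x3 rotation and a translation."""
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t
    return T

def eye_in_display(eye_in_camera, T_display_from_hmd, T_hmd_from_camera):
    """Express a 3D eye position (eye-camera coords) in display/screen coordinates."""
    p = np.append(eye_in_camera, 1.0)
    return (T_display_from_hmd @ T_hmd_from_camera @ p)[:3]

if __name__ == "__main__":
    # Placeholder poses: the eye camera is rotated 180 deg about Y (it faces the eye)
    # and offset from the HMD frame; the display frame coincides with the HMD frame here.
    R_flip = np.diag([-1.0, 1.0, -1.0])
    T_hmd_from_camera = to_homogeneous(R_flip, [0.03, -0.02, 0.01])
    T_display_from_hmd = np.eye(4)
    print(eye_in_display(np.array([0.0, 0.0, 0.04]),
                         T_display_from_hmd, T_hmd_from_camera))
```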


International Symposium on Mixed and Augmented Reality | 2014

Corneal imaging in localization and HMD interaction

Alexander Plopski; Kiyoshi Kiyokawa; Haruo Takemura; Christian Nitschke

The human eyes perceive our surroundings and are among our most important sensory organs. Contrary to our other senses, the eyes not only perceive but also provide information to a keen observer. However, thus far this has mainly been used to detect reflections of infrared light sources to estimate the user's gaze; the reflection of the visible spectrum, on the other hand, has rarely been utilized. In this dissertation we explore how the analysis of the corneal image can improve currently available eye-related solutions, such as calibration of optical see-through head-mounted devices or eye-gaze tracking and point-of-regard estimation in arbitrary environments. We also aim to study how corneal imaging can become an alternative for established augmented reality tasks such as tracking and localization.


Databases in Networked Information Systems | 2015

Synthetic Evidential Study as Primordial Soup of Conversation

Toyoaki Nishida; Atsushi Nakazawa; Yoshimasa Ohmoto; Christian Nitschke; Yasser F. O. Mohammad; Sutasinee Thovuttikul; Divesh Lala; Masakazu Abe; Takashi Ookaki

Synthetic evidential study (SES for short) is a novel technology-enhanced methodology that combines theatrical role play and group discussion to help people spin stories by bringing together partial thoughts and evidence. SES not only serves as a methodology for authoring stories and games but also exploits the game framework to help people sustain in-depth learning. In this paper, we present the conceptual framework of SES, a computational platform that supports SES workshops, and advanced technologies for increasing the utility of SES. SES is currently under development. We discuss conceptual issues and technical details to delineate how much of the idea we can implement with our technology and how many challenges are left for future work.

Collaboration


Dive into Christian Nitschke's collaboration.

Top Co-Authors

Alexander Plopski

Nara Institute of Science and Technology
