
Publication


Featured research published by Ingo Schiller.


Computer Vision and Image Understanding | 2010

Time-of-Flight sensor calibration for accurate range sensing

Marvin Lindner; Ingo Schiller; Andreas Kolb; Reinhard Koch

Over the past years, Time-of-Flight (ToF) sensors have become a considerable alternative to conventional distance sensing techniques such as laser scanners or image-based stereo vision. Due to their ability to provide full-range distance information at high frame rates, ToF sensors have a significant impact on current research areas such as online object recognition, collision prevention, and scene and object reconstruction. Nevertheless, ToF cameras such as the Photonic Mixer Device (PMD) still exhibit a number of error sources that affect the accuracy of the measured distance information. For this reason, the major error sources of ToF cameras are discussed, along with a new calibration approach that combines intrinsic, distance-related, and reflectivity-related error calibration in a single, easy-to-use system and thus significantly reduces the number of necessary reference images. The main contribution in this context is a new intensity-based calibration model that requires less input data than other models and thus significantly reduces the amount of calibration data needed.
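
To make the idea concrete, the sketch below fits a simple correction that depends on both the measured distance and the measured intensity to reference data, in the spirit of the intensity-based calibration described above. This is a minimal illustration, assuming a low-order polynomial distance term and a linear intensity term rather than the paper's actual model; the function names and the synthetic data are invented for the example.

    # Illustrative sketch (not the paper's exact model): correct a ToF distance
    # measurement with a low-order polynomial in the measured distance plus a
    # term depending on the measured intensity, fitted to reference data.
    import numpy as np

    def fit_distance_correction(d_measured, intensity, d_true, poly_degree=3):
        """Least-squares fit of d_true - d_measured as a function of
        measured distance (polynomial terms) and intensity (linear term)."""
        error = d_true - d_measured
        cols = [d_measured ** k for k in range(poly_degree + 1)]  # 1, d, d^2, d^3
        cols.append(intensity)                                    # intensity term
        A = np.stack(cols, axis=1)
        coeffs, *_ = np.linalg.lstsq(A, error, rcond=None)
        return coeffs

    def apply_correction(d_measured, intensity, coeffs, poly_degree=3):
        cols = [d_measured ** k for k in range(poly_degree + 1)]
        cols.append(intensity)
        return d_measured + np.stack(cols, axis=1) @ coeffs

    # Example with synthetic reference data (distances in metres)
    rng = np.random.default_rng(0)
    d_true = rng.uniform(0.5, 4.5, 500)
    intensity = rng.uniform(0.1, 1.0, 500)
    d_measured = d_true + 0.05 * np.sin(d_true) - 0.02 * intensity  # synthetic bias
    coeffs = fit_distance_correction(d_measured, intensity, d_true)
    print(np.abs(apply_correction(d_measured, intensity, coeffs) - d_true).mean())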


International Journal of Intelligent Systems Technologies and Applications | 2008

Pose estimation and map building with a Time-Of-Flight-camera for robot navigation

A. Prusak; O. Melnychuk; H. Roth; Ingo Schiller; Reinhard Koch

In this paper, we describe a joint approach to robot navigation with collision avoidance, pose estimation, and map building using a 2.5D Photonic Mixer Device (PMD) camera combined with a high-resolution spherical camera. The cameras are mounted at the front of the robot at a certain inclination angle. Navigation and map building consist of two steps: when entering new terrain, the robot first scans its surroundings, and a 3D panorama is simultaneously generated from the PMD images. In the second step, the robot moves along the pre-defined path, using the PMD camera for collision avoidance and a combined Structure-from-Motion (SfM) and model-tracking approach for self-localisation. The computed robot poses are simultaneously used for map building with new measurements from the PMD camera.
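
As a rough illustration of the collision-avoidance part only, the sketch below checks whether any depth measurement inside a central region of a PMD depth image falls below a clearance threshold. The thresholds, region of interest, and function name are assumptions made for the example, not details from the paper.

    # Minimal sketch of a depth-based collision check, assuming a PMD depth
    # image in metres; thresholds and the region of interest are illustrative.
    import numpy as np

    def obstacle_ahead(depth_image, min_clearance=0.5, roi_fraction=0.5):
        """Return True if any valid depth inside a central region of interest
        is closer than min_clearance (metres)."""
        h, w = depth_image.shape
        dh, dw = int(h * roi_fraction / 2), int(w * roi_fraction / 2)
        roi = depth_image[h // 2 - dh:h // 2 + dh, w // 2 - dw:w // 2 + dw]
        valid = roi[roi > 0]                  # 0 marks invalid measurements here
        return valid.size > 0 and valid.min() < min_clearance

    depth = np.full((200, 200), 3.0)
    depth[90:110, 95:105] = 0.4               # synthetic obstacle at 0.4 m
    print(obstacle_ahead(depth))              # True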


Scandinavian Conference on Image Analysis | 2011

Improved video segmentation by adaptive combination of depth keying and Mixture-of-Gaussians

Ingo Schiller; Reinhard Koch

Video segmentation or matting, the separation of foreground objects from the background in video sequences, is a demanding task and is needed for a broad range of applications. The most widespread method for video segmentation is chroma keying using a known background color, which requires a controlled environment. Recently, a different method of keying foreground and background has been proposed, in which chroma keying is replaced by depth keying using a Time-of-Flight (ToF) camera. Current ToF cameras suffer from noise and low-resolution sensors, which leads to unsatisfactory segmentation results. We propose to combine the segmentation of dynamic objects in depth with a segmentation in the color domain using adaptive background models. We weight the two measures depending on the actual depth values, using either the variance of the ToF camera's depth images or its amplitude image as a reliability measure. We show that both methods significantly improve the segmentation results.
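
The core idea of the adaptive combination can be pictured as a per-pixel blend of two foreground probabilities, trusting depth keying where the ToF measurement is reliable and the color background model elsewhere. The sketch below assumes the color probability comes from some mixture-of-Gaussians background subtractor and uses the amplitude image as the reliability weight; the blending rule, threshold, and names are illustrative, not the paper's exact formulation.

    # Minimal sketch of adaptively combining depth keying with a colour
    # background model, weighted by the ToF amplitude as a reliability measure.
    import numpy as np

    def combine_segmentations(p_depth, p_color, amplitude, amp_max=1.0):
        """Blend depth-keying and colour-based foreground probabilities,
        trusting depth more where the ToF amplitude (reliability) is high."""
        w = np.clip(amplitude / amp_max, 0.0, 1.0)     # depth reliability in [0, 1]
        p = w * p_depth + (1.0 - w) * p_color
        return p > 0.5                                  # final foreground mask

    # Synthetic example: depth keying marks the centre, colour model is noisy
    h, w = 120, 160
    p_depth = np.zeros((h, w)); p_depth[40:80, 60:100] = 1.0
    p_color = np.random.default_rng(1).uniform(0.0, 1.0, (h, w))
    amplitude = np.full((h, w), 0.9)                    # mostly reliable depth
    mask = combine_segmentations(p_depth, p_color, amplitude)
    print(mask.sum())                                   # pixels kept as foreground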


Dyn3D '09 Proceedings of the DAGM 2009 Workshop on Dynamic 3D Imaging | 2009

MixIn3D: 3D Mixed Reality with ToF-Camera

Reinhard Koch; Ingo Schiller; Bogumil Bartczak; Falko Kellner; Kevin Köser

This work discusses an approach to seamlessly integrate real and virtual scene content by on-the-fly 3D scene modeling and dynamic scene interaction. The key element is a ToF depth camera, accompanied by color cameras, mounted on a pan-tilt head. The system scans the environment for easy 3D reconstruction and tracks and models dynamically moving objects such as human actors in 3D. This allows mutual occlusions between real and virtual objects to be computed, together with correct light and shadow generation with mutual light interaction. No dedicated studio is required, as virtually any room can be turned into a virtual studio with this approach. Since the complete process operates in 3D and produces consistent color and depth sequences, the system can be used for full 3D TV production.
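
The mutual-occlusion step can be illustrated by a simple per-pixel depth test between registered real and virtual color/depth images: whichever sample is closer to the camera wins. The sketch below assumes the images are already aligned and of equal size; it ignores the lighting and shadow interaction described above.

    # Minimal sketch of depth-based mutual occlusion between real and virtual
    # content, assuming registered colour and depth images of equal size.
    import numpy as np

    def composite(real_rgb, real_depth, virt_rgb, virt_depth):
        """Per pixel, keep whichever of the real or virtual sample is closer."""
        virtual_in_front = virt_depth < real_depth
        return np.where(virtual_in_front[..., None], virt_rgb, real_rgb)

    # Synthetic example: a virtual box at 1.5 m in front of a real wall at 3 m
    real_rgb = np.full((100, 100, 3), 128, dtype=np.uint8)
    real_depth = np.full((100, 100), 3.0)
    virt_rgb = np.zeros((100, 100, 3), dtype=np.uint8)
    virt_depth = np.full((100, 100), np.inf)            # empty by default
    virt_depth[40:60, 40:60] = 1.5
    virt_rgb[40:60, 40:60] = (255, 0, 0)
    out = composite(real_rgb, real_depth, virt_rgb, virt_depth)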


Intelligent Vehicles Symposium | 2005

An airborne Bayesian color tracking system

Felix Woelk; Ingo Schiller; Reinhard Koch

Mobile tracking systems are of wide interest. For instance, sports photographers often hire helicopters to obtain close-range images of, e.g., aquatic athletes. Besides the high price of hiring a helicopter, good helicopter pilots are not available everywhere. One way to circumvent these problems is the use of a cheap model helicopter. Since flying a model helicopter is a demanding task, either two persons are needed to operate the system (one to steer the helicopter and the other to operate the camera), or an automatic tracking system can be used. In this work, an automatic, image-based tracking system is described. The algorithm combines color histograms as similarity measures with a particle filter and achieves fast and robust tracking results. A novel method for fast histogram computation is proposed. The system automatically keeps the object of interest in the camera's field of view, so that the only task left to the camera operator is to initiate the imaging process. This simple task can additionally be managed by the helicopter pilot, so that a single person is sufficient to steer the helicopter and take the images.
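
A color-histogram particle filter of the kind described above can be sketched as a predict-weight-resample loop in which each particle is scored by the similarity between a reference histogram and the histogram of the image patch around it. The sketch below uses a joint RGB histogram and the Bhattacharyya coefficient; the motion model, parameters, and function names are simplifications assumed for the example, and the paper's fast histogram computation is not reproduced.

    # Minimal sketch of colour-histogram particle filter tracking.
    import numpy as np

    def colour_histogram(patch, bins=8):
        """Joint RGB histogram of an image patch, normalised to sum to one."""
        hist, _ = np.histogramdd(patch.reshape(-1, 3), bins=(bins,) * 3,
                                 range=((0, 256),) * 3)
        return hist.ravel() / max(hist.sum(), 1e-9)

    def bhattacharyya(h1, h2):
        return np.sum(np.sqrt(h1 * h2))

    def track_step(frame, particles, ref_hist, patch_size=20, motion_std=5.0,
                   rng=np.random.default_rng()):
        """One predict-weight-resample cycle; particles are (N, 2) pixel positions."""
        h, w = frame.shape[:2]
        particles = particles + rng.normal(0, motion_std, particles.shape)
        particles[:, 0] = np.clip(particles[:, 0], patch_size, w - patch_size - 1)
        particles[:, 1] = np.clip(particles[:, 1], patch_size, h - patch_size - 1)
        weights = np.empty(len(particles))
        for i, (x, y) in enumerate(particles.astype(int)):
            patch = frame[y - patch_size:y + patch_size, x - patch_size:x + patch_size]
            weights[i] = bhattacharyya(colour_histogram(patch), ref_hist)
        weights = weights + 1e-12
        weights /= weights.sum()
        idx = rng.choice(len(particles), size=len(particles), p=weights)
        return particles[idx], particles[idx].mean(axis=0)   # resampled set, estimate

    # Synthetic usage: a red target and particles initialised near it
    rng = np.random.default_rng(0)
    frame = np.zeros((240, 320, 3), dtype=np.uint8)
    frame[100:140, 150:190] = (200, 30, 30)
    ref_hist = colour_histogram(frame[100:140, 150:190])
    particles = np.tile([160.0, 115.0], (100, 1))
    particles, estimate = track_step(frame, particles, ref_hist, rng=rng)
    print(estimate)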


International Symposium on 3D Data Processing, Visualization and Transmission | 2006

A Mobile Augmented Reality System with Distributed Tracking

Jan-Friso Evers-Senne; Ingo Schiller; Arne Petersen; Reinhard Koch

For AR applications, the 3D position and direction of the user's view have to be determined in real time. At the same time, the augmentation, i.e. one or more 3D objects, has to be rendered. This article describes a distributed mobile AR system designed for industrial service applications. By distributing the different tasks involved in AR, the computational load of the portable system can be minimised such that a small and lightweight computer can be used. The portable computer is connected to a backend server via WLAN. Minimisation of the system's bandwidth usage ensures that 20 or more systems can operate on one standard WLAN access point. The main focus of this article is to minimise the computational load of the mobile system and the network load simultaneously.


Dyn3D '09 Proceedings of the DAGM 2009 Workshop on Dynamic 3D Imaging | 2009

Datastructures for Capturing Dynamic Scenes with a Time-of-Flight Camera

Ingo Schiller; Reinhard Koch

To capture dynamic 3D scenes, a suitable 3D data structure is needed that can represent the dynamic scene content. In this contribution, we analyse and evaluate different data structures for capturing time-varying depth and color data of a dynamic scene obtained with Time-of-Flight and color cameras. The comparison of depth panoramas, layered depth images, and volumetric structures shows that a volumetric octree is best suited to fuse time-varying 3D scene data. We exploit the octree's data fusion capabilities for different application scenarios, such as 3D environment building, volumetric object reconstruction, and geometric interaction.
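
As a rough sketch of what a volumetric octree for fusing such data can look like, the code below subdivides axis-aligned cubic cells down to a minimum size and averages the color measurements that fall into each leaf. The node layout, fusion rule, and cell sizes are assumptions made for illustration, not the structure evaluated in the paper.

    # Minimal sketch of a volumetric octree that fuses point measurements by
    # averaging colour per leaf cell; all sizes and names are illustrative.
    import numpy as np

    class OctreeNode:
        def __init__(self, center, size):
            self.center = np.asarray(center, dtype=float)
            self.size = float(size)            # edge length of the cubic cell
            self.children = None               # list of 8 children once subdivided
            self.color_sum = np.zeros(3)
            self.count = 0

        def _child_index(self, point):
            # One bit per axis: which half of the cell the point falls into.
            return sum((1 << a) for a in range(3) if point[a] >= self.center[a])

        def insert(self, point, color, min_size=0.05):
            if self.size <= min_size:          # leaf: fuse measurement by averaging
                self.color_sum += color
                self.count += 1
                return
            if self.children is None:
                half, quarter = self.size / 2.0, self.size / 4.0
                self.children = [OctreeNode(self.center + quarter *
                                            np.array([(i >> a & 1) * 2 - 1 for a in range(3)]),
                                            half)
                                 for i in range(8)]
            self.children[self._child_index(point)].insert(point, color, min_size)

        def leaf_color(self):
            return self.color_sum / self.count if self.count else None

    # Example: two measurements of (nearly) the same surface point are inserted
    # and end up fused in one leaf cell.
    root = OctreeNode(center=(0, 0, 0), size=2.0)
    root.insert(np.array([0.300, 0.2, -0.1]), np.array([200.0, 10.0, 10.0]))
    root.insert(np.array([0.301, 0.2, -0.1]), np.array([180.0, 30.0, 10.0]))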


Joint Pattern Recognition Symposium | 2008

Photoconsistent Relative Pose Estimation between a PMD 2D3D-Camera and Multiple Intensity Cameras

Christian Beder; Ingo Schiller; Reinhard Koch

Active range cameras based on the Photonic Mixer Device (PMD) can capture low-resolution depth images of dynamic scenes at high frame rates. To use such devices together with high-resolution optical cameras (e.g. in media production), the relative pose of the cameras with respect to each other has to be determined. This task becomes even more challenging if the camera is to be moved and the scene is highly dynamic. We present an efficient algorithm for estimating the relative pose of a single 2D3D-camera with respect to several optical cameras. The camera geometry, together with an intensity consistency criterion, is used to derive a suitable cost function, which is optimized using gradient descent. It is shown how the gradient of the cost function can be computed efficiently from the gradient images of the high-resolution optical cameras. We show that the proposed method allows the pose of a moving 2D3D-camera to be tracked and refined for fully dynamic scenes.
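
The cost function can be pictured as follows: back-project each 2D3D-camera pixel to 3D using its depth, project it into an optical camera with the candidate relative pose, and penalise the difference between the sampled optical intensity and the 2D3D-camera's own intensity image. The sketch below assumes pinhole intrinsics, a small-angle rotation parameterisation, and nearest-neighbour sampling, and leaves the minimisation to a generic optimiser (e.g. numerical gradient descent over the six pose parameters) instead of the analytic gradients derived in the paper; all names are assumptions.

    # Minimal sketch of a photoconsistency cost between a depth+intensity
    # (2D3D) camera and one optical camera; parameters and names are illustrative.
    import numpy as np

    def small_rotation(rx, ry, rz):
        """First-order rotation matrix for small angles (illustration only)."""
        return np.array([[1.0, -rz,  ry],
                         [ rz, 1.0, -rx],
                         [-ry,  rx, 1.0]])

    def photoconsistency_cost(pose, depth, pmd_intensity, cam_image, K_pmd, K_cam):
        """Mean squared difference between the 2D3D-camera intensity image and
        the optical image sampled at the projected pixel positions."""
        R, t = small_rotation(*pose[:3]), np.asarray(pose[3:], dtype=float)
        h, w = depth.shape
        u, v = np.meshgrid(np.arange(w), np.arange(h))
        # Back-project every 2D3D-camera pixel to a 3D point using its depth.
        rays = np.linalg.inv(K_pmd) @ np.stack([u.ravel(), v.ravel(), np.ones(u.size)])
        X = rays * depth.ravel()
        # Transform into the optical camera frame and project.
        p = K_cam @ (R @ X + t[:, None])
        pu, pv = p[0] / p[2], p[1] / p[2]
        ch, cw = cam_image.shape
        valid = (p[2] > 0) & (pu >= 0) & (pu < cw) & (pv >= 0) & (pv < ch)
        if not np.any(valid):
            return np.inf
        sampled = cam_image[pv[valid].astype(int), pu[valid].astype(int)]
        return np.mean((sampled - pmd_intensity.ravel()[valid]) ** 2)

    # The six pose parameters (rx, ry, rz, tx, ty, tz) would then be found by
    # minimising this cost, e.g. with gradient descent or scipy.optimize.minimize.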


DAGM Conference on Pattern Recognition | 2010

High-resolution object deformation reconstruction with active range camera

Andreas Jordt; Ingo Schiller; Johannes Bruenger; Reinhard Koch

This contribution discusses the 3D reconstruction of deformable freeform surfaces with high spatial and temporal resolution. These are conflicting requirements, since high-resolution surface scanners typically cannot achieve high temporal resolution, while high-speed range cameras such as Time-of-Flight (ToF) cameras capture depth at 25 fps but have limited spatial resolution. We propose to combine a high-resolution surface scan with a ToF camera and a color camera to meet both requirements. The 3D surface deformation is modeled by a NURBS surface that approximates the object surface; the 3D object motion and local 3D deformation are estimated from the ToF and color camera data. A small set of NURBS control points can faithfully model the motion and deformation and is estimated from the ToF and color data with high accuracy. The contribution focuses on the estimation of the 3D deformation NURBS from the ToF and color data.
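
As a rough picture of how a control-point surface represents local deformation, the sketch below evaluates a surface from a grid of control points using uniform cubic B-spline blending, a simplified, non-rational stand-in for the NURBS surface used in the paper. Estimating the deformation would then amount to optimising these control points so the evaluated surface matches the ToF depth and color observations; only the evaluation step is sketched, and all names and grid sizes are assumptions.

    # Minimal sketch: evaluate a surface from a control-point grid with uniform
    # cubic B-spline blending (a simplified stand-in for a NURBS surface).
    import numpy as np

    def cubic_bspline_basis(t):
        """The four uniform cubic B-spline blending weights for local t in [0, 1]."""
        return np.array([(1 - t) ** 3,
                         3 * t ** 3 - 6 * t ** 2 + 4,
                         -3 * t ** 3 + 3 * t ** 2 + 3 * t + 1,
                         t ** 3]) / 6.0

    def evaluate_surface(control_points, u, v):
        """control_points: (nu, nv, 3) grid; u, v in [0, 1] span the patch range."""
        nu, nv, _ = control_points.shape
        su, sv = u * (nu - 3), v * (nv - 3)       # patch index + local parameter
        iu, iv = min(int(su), nu - 4), min(int(sv), nv - 4)
        bu, bv = cubic_bspline_basis(su - iu), cubic_bspline_basis(sv - iv)
        patch = control_points[iu:iu + 4, iv:iv + 4]   # 4x4 local control points
        return np.einsum('i,j,ijk->k', bu, bv, patch)

    # Example: a flat 6x6 control grid with one displaced control point (a bump)
    cp = np.zeros((6, 6, 3))
    cp[..., 0], cp[..., 1] = np.meshgrid(np.linspace(0, 1, 6), np.linspace(0, 1, 6),
                                         indexing='ij')
    cp[3, 3, 2] = 0.2                              # local deformation in z
    print(evaluate_surface(cp, 0.5, 0.5))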


Archive | 2008

Integration of a Time-of-Flight Camera into a Mixed Reality System for Handling Dynamic Scenes, Moving Viewpoints and Occlusions in Real-Time

Bogumil Bartczak; Ingo Schiller; Christian Beder; Reinhard Koch

Collaboration


Dive into Ingo Schiller's collaboration.

Top Co-Authors: H. Roth (University of Siegen)