
Publication


Featured research published by Bogumil Bartczak.


Computer Vision and Pattern Recognition | 2007

A Comparison of PMD-Cameras and Stereo-Vision for the Task of Surface Reconstruction using Patchlets

Christian Beder; Bogumil Bartczak; Reinhard Koch

Recently, real-time active 3D range cameras based on time-of-flight technology (PMD) have become available. These cameras can be considered a competing technique for stereo-vision-based surface reconstruction. Since such systems directly yield accurate 3D measurements, they can be used for benchmarking vision-based approaches, especially in highly dynamic environments. A comparative study of the two approaches is therefore relevant. In this work the achievable accuracy of the two techniques, PMD and stereo, is compared on the basis of patchlet estimation. As a patchlet we define an oriented small planar 3D patch with an associated surface normal. Least-squares estimation schemes for estimating patchlets from PMD range images as well as from a pair of stereo images are derived. It is shown how the achievable accuracy can be estimated for both systems. Experiments under optimal conditions for both systems are performed and the achievable accuracies are compared. The PMD system outperformed the stereo system in terms of achievable accuracy for distance measurements, while the estimation of the normal direction is comparable for both systems.
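The paper's actual estimation schemes are not reproduced in the abstract; as a rough illustration of the underlying idea, an oriented planar patch can be fitted to a neighborhood of 3D points (from either sensor) by a least-squares plane fit via SVD. This is a generic sketch, not the authors' derivation:

```python
import numpy as np

def fit_patchlet(points):
    """Least-squares fit of an oriented planar patch (patchlet) to 3D points.

    Returns the patch centroid and a unit surface normal. The normal is the
    right-singular vector of the centered points with the smallest singular
    value, i.e. the direction of least variance.
    """
    points = np.asarray(points, dtype=float)
    centroid = points.mean(axis=0)
    # SVD of the centered point cloud; rows of vt are sorted by
    # decreasing singular value, so the last row is the plane normal.
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]
    return centroid, normal
```

In practice the per-point measurement uncertainties of the two sensors would enter as weights, which is what makes the accuracy comparison in the paper meaningful.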


International Symposium on Visual Computing | 2009

Dense Depth Maps from Low Resolution Time-of-Flight Depth and High Resolution Color Views

Bogumil Bartczak; Reinhard Koch

In this paper a systematic approach to the processing and combination of high-resolution color images and low-resolution time-of-flight depth maps is described. The purpose is the calculation of a dense depth map for one of the high-resolution color images. Special attention is paid to the different nature of the input data and their large difference in resolution. In this way the low-resolution time-of-flight measurements are exploited without sacrificing the high-resolution observations in the color data.
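The abstract does not spell out the combination scheme; joint bilateral upsampling is a common technique for exactly this kind of low-resolution depth / high-resolution guide fusion, sketched below on a grayscale guide image. The function, its parameters, and the brute-force loops are illustrative only, not the authors' implementation:

```python
import numpy as np

def joint_bilateral_upsample(depth_lo, guide_hi, scale, sigma_s=1.0, sigma_r=0.1):
    """Upsample a low-resolution depth map guided by a high-resolution
    grayscale image.

    Each high-resolution pixel averages nearby low-resolution depth samples,
    weighted by spatial distance (sigma_s, in low-res pixels) and by
    intensity similarity in the guide image (sigma_r).
    """
    h_hi, w_hi = guide_hi.shape
    depth_hi = np.zeros((h_hi, w_hi))
    radius = 2  # neighborhood radius in low-resolution pixels
    for y in range(h_hi):
        for x in range(w_hi):
            yl, xl = y / scale, x / scale  # position in low-res coordinates
            num, den = 0.0, 0.0
            for j in range(int(yl) - radius, int(yl) + radius + 1):
                for i in range(int(xl) - radius, int(xl) + radius + 1):
                    if 0 <= j < depth_lo.shape[0] and 0 <= i < depth_lo.shape[1]:
                        # spatial term: distance to the low-res sample
                        ds = (j - yl) ** 2 + (i - xl) ** 2
                        # range term: guide intensity difference
                        gy = min(int(j * scale), h_hi - 1)
                        gx = min(int(i * scale), w_hi - 1)
                        dr = (guide_hi[y, x] - guide_hi[gy, gx]) ** 2
                        w = np.exp(-ds / (2 * sigma_s ** 2) - dr / (2 * sigma_r ** 2))
                        num += w * depth_lo[j, i]
                        den += w
            depth_hi[y, x] = num / den if den > 0 else 0.0
    return depth_hi
```

The range term suppresses depth samples from across intensity edges in the guide, so depth discontinuities align with color edges rather than being blurred by the upsampling.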


DAGM Conference on Pattern Recognition | 2007

A combined approach for estimating patchlets from PMD depth images and stereo intensity images

Christian Beder; Bogumil Bartczak; Reinhard Koch

Real-time active 3D range cameras based on time-of-flight technology using the Photonic Mixer Device (PMD) can be considered a complementary technique to stereo-vision-based depth estimation. Since these systems directly yield 3D measurements, they can also be used for initializing vision-based approaches, especially in highly dynamic environments. Fusion of PMD depth images with passive intensity-based stereo is a promising approach for obtaining reliable surface reconstructions even in weakly textured surface regions. In this work a PMD-stereo fusion algorithm for the estimation of patchlets from a combined PMD-stereo camera rig is presented. As a patchlet we define an oriented small planar 3D patch with an associated surface normal. Least-squares estimation schemes for estimating patchlets from PMD range images as well as from a pair of stereo images are derived. It is shown how these two approaches can be fused into a single estimation that yields results even if either of the two individual approaches fails.
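A standard way to fuse two independent least-squares estimates of the same quantity, as the abstract describes for the PMD and stereo patchlets, is inverse-covariance (information) weighting: the more certain estimate dominates, and either one alone still yields a result if the other fails. This is a generic sketch of that principle, not the paper's exact formulation:

```python
import numpy as np

def fuse_estimates(x_a, cov_a, x_b, cov_b):
    """Fuse two independent estimates of the same parameter vector by
    information-weighted averaging.

    Each estimate contributes its information matrix (inverse covariance);
    the fused covariance is the inverse of the summed information.
    """
    info_a = np.linalg.inv(cov_a)
    info_b = np.linalg.inv(cov_b)
    cov = np.linalg.inv(info_a + info_b)
    x = cov @ (info_a @ x_a + info_b @ x_b)
    return x, cov
```

With equal covariances this reduces to the plain mean; as one sensor's covariance grows, the fused estimate smoothly falls back to the other sensor alone.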


Dyn3D '09 Proceedings of the DAGM 2009 Workshop on Dynamic 3D Imaging | 2009

MixIn3D: 3D Mixed Reality with ToF-Camera

Reinhard Koch; Ingo Schiller; Bogumil Bartczak; Falko Kellner; Kevin Köser

This work discusses an approach to seamlessly integrate real and virtual scene content through on-the-fly 3D scene modeling and dynamic scene interaction. The key element is a ToF depth camera, accompanied by color cameras, mounted on a pan-tilt head. The system allows the environment to be scanned for easy 3D reconstruction, and tracks and models dynamically moving objects such as human actors in 3D. This makes it possible to compute mutual occlusions between real and virtual objects and to generate correct light and shadows with mutual light interaction. No dedicated studio is required, as virtually any room can be turned into a virtual studio with this approach. Since the complete process operates in 3D and produces consistent color and depth sequences, the system can be used for full 3D TV production.


SMPTE Motion Imaging Journal | 2007

Realtime Camera Tracking in the MATRIS Project

Jigna Chandaria; Graham Thomas; Bogumil Bartczak; Reinhard Koch; Mario Becker; Gabriele Bleser; Didier Stricker; Cedric Wohlleber; Michael Felsberg; Fredrik Gustafsson; Jeroen D. Hol; Thomas B. Schön; Johan Skoglund; Per Johan Slycke; S. Smeitz

In order to insert a virtual object into a TV image, the graphics system needs to know precisely how the camera is moving, so that the virtual object can be rendered in the correct place in every frame. Nowadays this can be achieved relatively easily in postproduction, or in a studio equipped with a special tracking system. However, for live shooting on location, or in a studio that is not specially equipped, installing such a system can be difficult or uneconomic. To overcome these limitations, the MATRIS project is developing a real-time system for measuring the movement of a camera. The system uses image analysis to track naturally occurring features in the scene, and data from an inertial sensor. No additional sensors, special markers, or camera mounts are required. This paper gives an overview of the system and presents some results.


Journal of Real-Time Image Processing | 2007

Robust GPU-assisted camera tracking using free-form surface models

Bogumil Bartczak; Reinhard Koch

We propose a marker-less model-based camera tracking approach, which makes use of GPU-assisted analysis-by-synthesis methods on a very wide field of view (e.g. fish-eye) camera. After an initial registration based on a learned database of robust features, the synthesis part of the tracking is performed on graphics hardware, which simulates the internal and external parameters of the camera, thereby minimizing lens and viewpoint differences between a model view and a real camera image. Based on an automatically reconstructed free-form surface model, we analyze the sensitivity of the tracking to the model accuracy, in particular for the case when curved surfaces are represented by planar patches. We also examine accuracy and show on synthetic and on real data that the system does not suffer from drift accumulation. The wide field of view of the camera and the subdivision of our reference model into many textured free-form surface patches make the system robust against illumination changes, moving persons, and other occlusions within the environment, and provide a camera pose estimate in a fixed and known coordinate system.
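The analysis-by-synthesis idea can be reduced to a toy sketch: render the scene model at candidate camera poses and keep the pose whose synthetic view best matches the real image photometrically. The actual system refines the pose continuously on the GPU rather than searching discrete candidates; `render` here is a hypothetical placeholder for that synthesis step:

```python
import numpy as np

def photometric_error(synthetic, real):
    """Sum of squared intensity differences between a synthesized model
    view and the real camera image."""
    return float(np.sum((np.asarray(synthetic, float) - np.asarray(real, float)) ** 2))

def best_pose(render, candidate_poses, real_image):
    """Analysis-by-synthesis pose search: synthesize a view for each
    candidate pose and return the pose minimizing the photometric error."""
    errors = [photometric_error(render(p), real_image) for p in candidate_poses]
    return candidate_poses[int(np.argmin(errors))]
```

A real tracker would replace the exhaustive candidate loop with gradient-based minimization of the same photometric error, which is what makes GPU-side synthesis of lens distortion and viewpoint so valuable.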


3DTV-Conference: The True Vision - Capture, Transmission and Display of 3D Video | 2009

Generation of 3D-TV LDV-content with Time-Of-Flight Camera

A. Frick; Falko Kellner; Bogumil Bartczak; Reinhard Koch

In this paper we describe an approach for 3D-TV Layered Depth Video (LDV) content creation using a capturing system of four CCD cameras and a Time-of-Flight sensor (ToF camera). We demonstrate a whole video production chain, from calibration of the camera rig to the generation of reliable depth maps for a single view of one of the CCD cameras, using only the estimated depth provided by the ToF camera. We additionally show that we are able to generate proper occlusion layers for LDV content through a straightforward approach based on depth background extrapolation and backward texture mapping.


Scandinavian Conference on Image Analysis | 2007

Supporting structure from motion with a 3D-range-camera

Birger Streckel; Bogumil Bartczak; Reinhard Koch; Andreas Kolb

Tracking a camera pose in all six degrees of freedom is a task with many applications in 3D imaging, such as augmentation or robot navigation. Structure from motion is a well-known approach for this task, with several well-known restrictions, namely the scale ambiguity of the calculated relative pose and the need for a certain camera movement (preferably lateral) to initiate the tracking. In the last few years, time-of-flight imaging sensors have been developed that allow metric depth to be measured over a whole region at a frame rate similar to that of a standard CCD camera. In this work a camera rig consisting of a standard 2D CCD camera and a time-of-flight 3D camera is used. Structure from motion is calculated on the 2D image, aided by the depth measurements from the time-of-flight camera, to overcome the restrictions named above. It is shown how the additional 3D information can be used to improve the accuracy of the camera pose estimation.
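The first restriction named above, the scale ambiguity, is the one metric ToF depth resolves most directly: an SfM reconstruction is correct only up to a global scale factor, which can be recovered by comparing reconstructed depths with metric ToF depths of the same points. A minimal sketch, assuming point correspondences between the two sensors are already established (not the paper's exact method):

```python
import numpy as np

def estimate_scale(sfm_depths, tof_depths):
    """Recover the global metric scale of an up-to-scale SfM reconstruction
    from metric time-of-flight depths of corresponding scene points.

    Uses the median of per-point depth ratios for robustness to outlier
    correspondences and noisy ToF measurements.
    """
    sfm_depths = np.asarray(sfm_depths, dtype=float)
    tof_depths = np.asarray(tof_depths, dtype=float)
    return float(np.median(tof_depths / sfm_depths))
```

Multiplying the reconstructed structure and translations by this factor places the SfM result in metric units, which also removes the initialization problem of choosing an arbitrary baseline length.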


Journal of Real-Time Image Processing | 2007

Extraction of 3D freeform surfaces as visual landmarks for real-time tracking

Bogumil Bartczak; Felix Woelk; Reinhard Koch

This work presents a system for the generation of a free-form surface model from video sequences. Although any single-centered camera can be used in the proposed system, the approach is demonstrated using fish-eye lenses because of their good properties for tracking. The system is designed to operate automatically and to be flexible with respect to the size and shape of the reconstructed scene. To minimize geometric assumptions, a statistical fusion of dense depth maps is utilized. Special attention is paid to the necessary rectification of the spherical images and the resulting iso-disparity surfaces, which can be exploited in the fusion approach. Before dense depth estimation can be performed, the cameras' pose parameters are extracted by means of a structure-from-motion (SfM) scheme. In this respect, automation of the system is achieved by a thorough decision model based on robust statistics and error propagation of projective measurement uncertainties. This leads to a scene-independent set of only a few parameters. All system components are formulated in a general way, making it possible to cope with any single-centered projection model, in particular with spherical cameras. Using wide field-of-view cameras, the presented system is able to reliably retrieve poses and consistently reconstruct large scenes. A textured triangle mesh, constructed on the basis of the scene's reconstructed depth, makes the system's results suitable to serve as reference models in a GPU-driven analysis-by-synthesis framework for real-time tracking.


DAGM Conference on Pattern Recognition | 2007

An analysis-by-synthesis camera tracking approach based on free-form surfaces

Bogumil Bartczak; Reinhard Koch

We propose a model-based camera pose estimation approach, which makes use of GPU-assisted analysis-by-synthesis methods on a very wide field of view (e.g. fish-eye) camera. After an initial registration, the synthesis part of the tracking is performed on graphics hardware, which simulates the internal and external parameters of the camera, thereby minimizing lens and perspective differences between a model view and a real camera image. We show how such a model is automatically created from a scene and analyze the sensitivity of the tracking to the model accuracy, in particular for the case when free-form surfaces are represented by planar patches. We also examine accuracy and show on synthetic and on real data that the system does not suffer from drift accumulation. The wide field of view of the camera and the subdivision of our reference model into many textured free-form surfaces make the system robust against moving persons and other occlusions within the environment, and provide a camera pose estimate in a fixed and known coordinate system.

Collaboration


Dive into Bogumil Bartczak's collaboration.

Top Co-Authors


Gabriele Bleser

Kaiserslautern University of Technology
