Michal Jancosek
Czech Technical University in Prague
Publications
Featured research published by Michal Jancosek.
International Conference on Computer Vision | 2011
Jan Smisek; Michal Jancosek; Tomas Pajdla
We analyze Kinect as a 3D measuring device, experimentally investigate its depth measurement resolution and error properties, and make a quantitative comparison of Kinect accuracy with stereo reconstruction from SLR cameras and with a 3D-TOF camera. We propose a geometrical model of Kinect and a calibration procedure that provides an accurate calibration of Kinect 3D measurements and Kinect cameras. We demonstrate the functionality of the Kinect calibration by integrating it into an SfM pipeline in which 3D measurements from a moving Kinect are transformed into a common coordinate system by computing relative poses from matches in the color camera.
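The core of such a geometrical model — converting raw disparity to depth and back-projecting pixels to 3D — can be sketched in a few lines. All numeric constants below (the inverse-depth coefficients and the IR-camera intrinsics) are illustrative placeholders, not the calibrated values from the paper:

```python
# Hedged sketch of a Kinect-style back-projection model. The inverse-depth
# relation 1/z = c1*d + c0 is a common parameterization of Kinect raw
# disparity; every constant here is hypothetical, not a calibrated value.
C0, C1 = 3.33, -0.00307                       # disparity-to-depth coefficients (assumed)
FX, FY, CX, CY = 585.6, 585.6, 316.0, 247.6   # assumed IR-camera intrinsics

def disparity_to_depth(d):
    """Convert a raw Kinect disparity value d to metric depth."""
    return 1.0 / (C1 * d + C0)

def backproject(u, v, d):
    """Back-project pixel (u, v) with raw disparity d to a 3D point (x, y, z)."""
    z = disparity_to_depth(d)
    return ((u - CX) * z / FX, (v - CY) * z / FY, z)
```

An SfM pipeline would then apply a rigid transform (the relative pose estimated from color-camera matches) to bring such points into a common coordinate system.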
Computer Vision and Pattern Recognition | 2011
Michal Jancosek; Tomas Pajdla
We propose a novel method for the multi-view reconstruction problem. Surfaces which do not have direct support in the input 3D point cloud, and hence need not be photo-consistent, but which represent real parts of the scene (e.g. low-textured walls, windows, cars) are important for achieving complete reconstructions. We augment the existing Labatut CGF 2009 method with the ability to cope with these difficult surfaces simply by changing the t-edge weights in the construction of surfaces by a minimal s-t cut. Our method uses the Visual Hull to reconstruct the difficult surfaces which are not sampled densely enough by the input 3D point cloud. We demonstrate the importance of these surfaces on several real-world data sets. We compare our improvement to our implementation of the Labatut CGF 2009 method and show that our method reconstructs difficult surfaces considerably better while preserving thin structures and details at the same quality and computational cost.
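The role of the t-edge weights can be illustrated on a toy version of the s-t cut formulation. The cell weights and adjacency below are invented for the example; real systems run a max-flow solver over millions of tetrahedra rather than the brute force used here:

```python
from itertools import product

# Toy illustration (not the authors' implementation): each cell of a
# tetrahedralization is labelled free (1, source side) or full (0, sink
# side); the reconstructed surface is the set of faces between a free and
# a full cell. Raising a cell's t-edge weight pushes it toward "full",
# which is the kind of change that recovers weakly supported surfaces.
cells = ["c0", "c1", "c2"]
s_w = {"c0": 5.0, "c1": 0.5, "c2": 0.2}    # s-edges: evidence of free space
t_w = {"c0": 0.1, "c1": 0.4, "c2": 4.0}    # t-edges: evidence of occupancy
neighbours = [("c0", "c1"), ("c1", "c2")]  # adjacency, smoothness weight 1.0

def cut_cost(labels):
    """Cost of the s-t cut induced by a free(1)/full(0) labelling."""
    cost = 0.0
    for c in cells:
        cost += t_w[c] if labels[c] else s_w[c]   # severed unary edge
    for a, b in neighbours:
        if labels[a] != labels[b]:                # severed smoothness edge
            cost += 1.0
    return cost

# Brute-force the minimal cut (fine for three cells).
best = min((dict(zip(cells, bits)) for bits in product([0, 1], repeat=3)),
           key=cut_cost)
```

Here the large t-weight on `c2` forces it to the full side, so the surface lands on the face between `c1` and `c2` even though `c2` itself has little point support.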
International Conference on Computer Vision | 2009
Michal Jancosek; Alexander Shekhovtsov; Tomas Pajdla
This paper presents a scalable multi-view stereo reconstruction method which can deal with a large number of large unorganized images in affordable time and effort. The computational effort of our technique is a linear function of the surface area of the observed scene, which is conveniently discretized to represent sufficient but not excessive detail. Our technique works as a filter on a limited number of images at a time and can thus process arbitrarily large data sets using limited memory. By building reconstructions gradually, we avoid unnecessary processing of data which bring little improvement. In experiments on the Middlebury and Strecha databases, we demonstrate that we achieve results comparable to the state of the art with considerably smaller effort than previous methods. We also present large-scale experiments in which we processed 294 unorganized images of an outdoor scene to reconstruct its 3D model, as well as 1000 images from the Google Street View Pittsburgh Experimental Data Set.
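The filter-style processing can be caricatured as a fixed-size sliding window over the image stream. The window size, gain measure, and threshold below are made up for the illustration; the point is only that memory stays bounded and low-gain data is skipped:

```python
from collections import deque

# Schematic sketch (assumed structure, not the authors' pipeline): stream
# images through a fixed-size window so only `window` of them are ever in
# memory, and skip windows whose estimated improvement to the running
# model is below a threshold.
def reconstruct_stream(images, window=5, min_gain=0.01):
    buf = deque(maxlen=window)   # bounded working set of images
    model = set()                # running surface representation
    for img in images:
        buf.append(img)
        if len(buf) < window:
            continue
        candidate = set(buf)     # stand-in for a local reconstruction
        gain = len(candidate - model) / max(len(candidate), 1)
        if gain >= min_gain:     # avoid processing data that adds little
            model |= candidate
    return model
```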
International Scholarly Research Notices | 2014
Michal Jancosek; Tomas Pajdla
We present a novel method for 3D surface reconstruction from an input cloud of 3D points augmented with visibility information. We observe that it is possible to reconstruct surfaces that do not contain input points. Instead of modeling the surface from the input points, we model free space from the visibility information of the input points. The complement of the modeled free space is considered full space. The surface occurs at the interface between the free and the full space. We show that under certain conditions a part of the full space surrounded by the free space must contain a real object even when the real object does not contain any input points; that is, an occluder reveals itself through occlusion. Our key contribution is a new interface classifier that can detect the occluder interface just from the visibility of the input points. We use the interface classifier to modify a state-of-the-art surface reconstruction method so that it gains the ability to reconstruct weakly supported surfaces. We evaluate the proposed method on datasets augmented with different levels of noise, undersampling, and amounts of outliers, and show that it outperforms other methods in accuracy and in its ability to reconstruct weakly supported surfaces.
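The free-space-from-visibility idea can be sketched on a 2D grid (an assumed discretization; the paper works on a 3D tetrahedralization). Rays from a camera to its measured points carve out free space, and everything else stays full:

```python
# 2D sketch of free-space carving from visibility. Cells start "full";
# the segment from the camera to each measured point is marked free.
# Cells behind the points stay full, so the free/full interface appears
# even where no input point was measured.
GRID = 10
full = [[True] * GRID for _ in range(GRID)]   # start with everything full

def carve(cam, pt, steps=100):
    """Mark grid cells on the open segment cam -> pt as free space."""
    (cx, cy), (px, py) = cam, pt
    for i in range(steps):
        t = i / steps
        x, y = cx + t * (px - cx), cy + t * (py - cy)
        full[int(x)][int(y)] = False

# One camera on the left observes points on a vertical wall near x = 7.
for y in range(2, 8):
    carve((0.5, 5.0), (7.0, y + 0.5))
```

The surface is placed at the interface between carved (free) and remaining (full) cells; the region behind the wall reveals itself as full through occlusion alone.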
Computer Vision and Pattern Recognition | 2009
Michal Havlena; Andreas Ess; Wim Moreau; Akihiko Torii; Michal Jancosek; Tomas Pajdla; Luc Van Gool
We present a wearable audio-visual capturing system, termed AWEAR 2.0, along with its underlying vision components that allow robust self-localization, multi-body pedestrian tracking, and dense scene reconstruction. Designed as a backpack, the system is aimed at supporting the cognitive abilities of the wearer. In this paper, we focus on the design issues for the hardware platform and on the performance of the current state-of-the-art computer vision methods on the acquired sequences. We describe the calibration procedure of the two omni-directional cameras present in the system as well as a structure-from-motion pipeline that allows for stable multi-body tracking even from rather shaky video sequences thanks to ground plane stabilization. Furthermore, we show how a dense scene reconstruction can be obtained from the data acquired with the platform.
European Conference on Computer Vision | 2010
Michal Jancosek; Tomas Pajdla
We present a multi-view stereo method that avoids producing hallucinated surfaces which do not correspond to real surfaces. Our approach to 3D reconstruction is based on the minimal s-t cut of the graph derived from the Delaunay tetrahedralization of a dense 3D point cloud, which produces water-tight meshes. This is often a desirable property, but it can hallucinate surfaces in complicated scenes with multiple objects and free open space. For example, a sequence of images obtained from a moving vehicle often produces meshes in which the sky is hallucinated because there are no images looking from above at the ground plane. We present a method for detecting and removing such surfaces. The method is based on removing perturbation-sensitive parts of the reconstruction using multiple reconstructions of perturbed input data. We demonstrate our method on several standard datasets often used to benchmark multi-view stereo and show that it outperforms state-of-the-art techniques.
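The perturbation test can be sketched as a simple voting scheme (an assumed simplification of the idea, with a stand-in reconstruction function): reconstruct several times from jittered input and keep only the surface elements that persist across almost all runs.

```python
import random

# Sketch of perturbation-based filtering. `reconstruct` is a placeholder
# for a real multi-view reconstruction; here each input point snapped to
# a 0.1 grid yields one "surface element". All parameters are invented.
def reconstruct(points):
    return {(round(x, 1), round(y, 1)) for x, y in points}

def stable_surface(points, runs=20, sigma=0.001, keep=0.9):
    """Keep elements appearing in at least `keep` fraction of perturbed runs."""
    counts = {}
    for _ in range(runs):
        jittered = [(x + random.gauss(0, sigma), y + random.gauss(0, sigma))
                    for x, y in points]
        for f in reconstruct(jittered):
            counts[f] = counts.get(f, 0) + 1
    return {f for f, c in counts.items() if c >= keep * runs}
```

Elements that flip between runs — the perturbation-sensitive ones, such as a hallucinated sky surface — fall below the voting threshold and are removed.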
International Conference on Machine Vision | 2015
Jan Heller; Michal Havlena; Michal Jancosek; Akihiko Torii; Tomas Pajdla
Archive | 2009
Michal Jancosek; Tomas Pajdla
Archive | 2008
Michal Jancosek; Tomas Pajdla
Archive | 2005
Michal Jancosek