Publications


Featured research published by Branislav Micusik.


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2006

Structure from motion with wide circular field of view cameras

Branislav Micusik; Tomas Pajdla

This paper presents a method for fully automatic and robust estimation of two-view geometry, autocalibration, and 3D metric reconstruction from point correspondences in images taken by cameras with a wide circular field of view. We focus on cameras which have more than a 180° field of view and for which the standard perspective camera model is not sufficient, e.g., cameras equipped with circular fish-eye lenses such as the Nikon FC-E8 (183°) or Sigma 8 mm-f4-EX (180°), or with curved conical mirrors. We assume a circular field of view and axially symmetric image projection to autocalibrate the cameras. Many wide-field-of-view cameras can still be modeled by a central projection followed by a nonlinear image mapping; examples are the above-mentioned fish-eye lenses and properly assembled catadioptric cameras with conical mirrors. We show that the epipolar geometry of these cameras can be estimated from a small number of correspondences by solving a polynomial eigenvalue problem. This allows the use of efficient RANSAC robust estimation to find the image projection model, the epipolar geometry, and the selection of true point correspondences from tentative correspondences contaminated by mismatches. Real catadioptric cameras are often slightly noncentral. We show that the proposed autocalibration with approximate central models is usually good enough to get correct point correspondences, which can then be used with accurate noncentral models in a bundle adjustment to obtain an accurate 3D scene reconstruction. Noncentral camera models are dealt with, and results are shown for catadioptric cameras with parabolic and spherical mirrors.
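The polynomial eigenvalue problem mentioned in the abstract can be handled with standard linear algebra. As a minimal illustration (not the paper's actual solver), a quadratic eigenvalue problem (λ²M + λC + K)x = 0 reduces to a generalized eigenproblem via companion linearization; `solve_qep` is a hypothetical name:

```python
import numpy as np
from scipy.linalg import eig

def solve_qep(M, C, K):
    """Solve the quadratic eigenvalue problem (lam^2*M + lam*C + K) x = 0
    via companion linearization to a generalized eigenproblem A z = lam B z,
    where z = [x; lam*x]."""
    n = M.shape[0]
    I, Z = np.eye(n), np.zeros((n, n))
    A = np.block([[Z, I], [-K, -C]])
    B = np.block([[I, Z], [Z, M]])
    lams, zs = eig(A, B)
    return lams, zs[:n, :]   # top block of each z is a QEP eigenvector
```

Solvers of this kind make minimal-sample RANSAC hypotheses cheap, which is what the abstract's robust-estimation pipeline relies on.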


Computer Vision and Pattern Recognition | 2003

Estimation of omnidirectional camera model from epipolar geometry

Branislav Micusik; Tomas Pajdla

We generalize the method of simultaneous linear estimation of multiple-view geometry and lens distortion, introduced by Fitzgibbon at CVPR 2001, to an omnidirectional camera (angle of view larger than 180°). The perspective camera is replaced by a linear camera with a spherical retina and a nonlinear mapping of the sphere into the image plane. Unlike previous distortion-based models, the new camera model is capable of describing a camera with an angle of view larger than 180° at the cost of introducing only one extra parameter. A suitable linearization of the camera model and of the epipolar constraint is developed in order to arrive at a quadratic eigenvalue problem, for which efficient algorithms are known. The lens calibration is done from automatically established image correspondences only. Besides rigidity, no assumptions about the scene are made (e.g., the presence of a calibration object). We demonstrate the method in experiments with the Nikon FC-E8 fish-eye converter for COOLPIX digital cameras. In practical situations, the proposed method allows the new omnidirectional camera model to be incorporated into RANSAC, a robust estimation technique.
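The Fitzgibbon-style construction that the paper generalizes can be sketched with the one-parameter division model: lifting a distorted point to (x, y, 1 + λr²) makes each epipolar constraint quadratic in λ and linear in F, so stacking correspondences yields a quadratic eigenvalue problem. The division model below stands in for the paper's omnidirectional model, and the helper names are illustrative:

```python
import numpy as np

def lift(u, lam):
    """Division model: distorted point u = (x, y) back-projects to the
    homogeneous undistorted point (x, y, 1 + lam*(x^2 + y^2))."""
    x, y = u
    return np.array([x, y, 1.0 + lam * (x * x + y * y)])

def qep_matrices(U1, U2):
    """Build n-by-9 matrices D1, D2, D3 such that all epipolar constraints
    read (D1 + lam*D2 + lam^2*D3) @ vec(F) = 0 (column-major vec)."""
    rows = ([], [], [])
    for (x1, y1), (x2, y2) in zip(U1, U2):
        p1, q1 = np.array([x1, y1, 1.0]), np.array([0.0, 0.0, x1 * x1 + y1 * y1])
        p2, q2 = np.array([x2, y2, 1.0]), np.array([0.0, 0.0, x2 * x2 + y2 * y2])
        rows[0].append(np.kron(p1, p2))
        rows[1].append(np.kron(q1, p2) + np.kron(p1, q2))
        rows[2].append(np.kron(q1, q2))
    return tuple(np.array(r) for r in rows)
```

The matrices define a quadratic eigenvalue problem whose eigenvalue is the distortion parameter and whose eigenvector stacks the entries of F, which is the structure the abstract exploits.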


Computer Vision and Pattern Recognition | 2009

Piecewise planar city 3D modeling from street view panoramic sequences

Branislav Micusik; Jana Kosecka

City environments often lack textured areas, contain repetitive structures and strong lighting changes, and are therefore very difficult for standard 3D modeling pipelines. We present a novel unified framework for creating 3D city models which overcomes these difficulties by exploiting image segmentation cues as well as the presence of dominant scene orientations and piecewise planar structures. Given panoramic street-view sequences, we first demonstrate how to robustly estimate camera poses without the need for bundle adjustment and propose a multi-view stereo method which operates directly on panoramas while enforcing piecewise planarity constraints in the sweeping stage. Finally, we propose a new depth fusion method which exploits the constraints of urban environments and combines the advantages of volumetric and viewpoint-based fusion methods. Our technique avoids expensive voxelization of space, operates directly on 3D reconstructed points through an effective kd-tree representation, and obtains the final surface by tessellating backprojections of those points into the reference image.
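The kd-tree-based handling of reconstructed points can be illustrated with a simple greedy radius fusion built on `scipy.spatial.cKDTree`. This is a much-simplified stand-in for the paper's depth fusion, with hypothetical names and thresholds:

```python
import numpy as np
from scipy.spatial import cKDTree

def fuse_points(points, radius):
    """Greedy radius-based fusion: each unvisited point absorbs every
    neighbour within `radius` and the group is replaced by its centroid."""
    tree = cKDTree(points)
    used = np.zeros(len(points), dtype=bool)
    fused = []
    for i in range(len(points)):
        if used[i]:
            continue
        idx = [j for j in tree.query_ball_point(points[i], radius) if not used[j]]
        used[idx] = True
        fused.append(points[idx].mean(axis=0))
    return np.array(fused)
```

The kd-tree makes each neighbourhood query logarithmic in the number of points, which is why such representations avoid voxelizing space.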


Computer Vision and Pattern Recognition | 2004

Autocalibration & 3D reconstruction with non-central catadioptric cameras

Branislav Micusik; Tomas Pajdla

We present a technique for modeling non-central catadioptric cameras consisting of perspective cameras and curved mirrors. Real catadioptric cameras have to be treated as non-central cameras, since they do not possess a single viewpoint. We present a method for solving the correspondence problem, auto-calibrating the cameras, and computing a 3D metric reconstruction automatically from two uncalibrated non-central catadioptric images. The method is demonstrated on spherical, parabolic, and hyperbolic mirrors. We observed that reconstruction and auto-calibration with non-central catadioptric cameras is as easy (or as difficult) as with central catadioptric cameras, provided that the correspondence problem can be solved with a suitable approximate central model. It turns out that it is the number of parameters of the camera model that matters, rather than the exact centrality of the projection. Our technique allows autocalibration of catadioptric cameras even with genuinely non-central mirrors such as spheres (simple model, low blur, easy to manufacture) or uniform-resolution mirrors (optimized projection).
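The non-centrality of a spherical mirror can be seen by tracing camera rays to the mirror and reflecting them: the reflected rays do not all pass through one point. A small geometric sketch, assuming a perspective camera at the origin and illustrative mirror parameters:

```python
import numpy as np

def reflect_off_sphere(d, c, r):
    """Trace a camera ray (origin 0, unit direction d) to a spherical mirror
    with centre c and radius r; return the hit point and reflected direction."""
    b = d @ c
    disc = b * b - (c @ c - r * r)
    if disc < 0:
        return None                      # ray misses the mirror
    t = b - np.sqrt(disc)                # nearer intersection
    p = t * d
    n = (p - c) / r                      # outward surface normal
    d_ref = d - 2.0 * (d @ n) * n        # mirror reflection of the direction
    return p, d_ref
```

Collecting the reflected rays for many pixels and checking their pairwise closest-approach points shows there is no common viewpoint, which is why an approximate central model is only a starting point for correspondence search.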


International Journal of Computer Vision | 2010

Multi-view Superpixel Stereo in Urban Environments

Branislav Micusik; Jana Kosecka

Urban environments possess many regularities which can be efficiently exploited for dense 3D reconstruction from multiple widely separated views. We present an approach utilizing the properties of piecewise planarity and a restricted number of plane orientations to suppress the reconstruction and matching ambiguities that cause failures of standard dense stereo methods. We formulate the 3D reconstruction problem in an MRF framework built on an image pre-segmented into superpixels. Using this representation, we propose novel photometric and superpixel boundary consistency terms explicitly derived from superpixels and show that they overcome many difficulties of standard pixel-based formulations and handle favorably problematic scenarios containing many repetitive structures and untextured or weakly textured regions. We demonstrate our approach on several wide-baseline scenes, showing superior performance compared to previously proposed methods.
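The MRF formulation can be illustrated on a toy instance: each superpixel picks one of a few plane labels, a unary term scores photometric consistency, and a Potts term penalises label changes across superpixel boundaries. The exhaustive solver below is only viable for tiny instances and is not the paper's inference method; all costs are made up:

```python
import itertools
import numpy as np

def mrf_map_bruteforce(unary, edges, smooth_w):
    """Exact MAP labelling of a tiny superpixel MRF by exhaustive search:
    energy = sum of unary costs + Potts penalty smooth_w per disagreeing edge."""
    n, k = unary.shape
    best, best_e = None, np.inf
    for labels in itertools.product(range(k), repeat=n):
        e = sum(unary[i, l] for i, l in enumerate(labels))
        e += sum(smooth_w for i, j in edges if labels[i] != labels[j])
        if e < best_e:
            best, best_e = labels, e
    return best, best_e
```

Real instances with thousands of superpixels need graph-cut or message-passing inference instead of enumeration, but the energy being minimised has this shape.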


International Conference on Computer Vision | 2009

Semantic segmentation of street scenes by superpixel co-occurrence and 3D geometry

Branislav Micusik; Jana Kosecka

We present a novel approach for semantic segmentation of street-scene images into coherent regions, while simultaneously categorizing each region as one of the predefined categories representing commonly encountered object and background classes. We formulate the segmentation on small blob-based superpixels and exploit a visual vocabulary tree as an intermediate image representation. The main novelty of this generative approach is the introduction of an explicit model of spatial co-occurrence of visual words associated with superpixels and the utilization of appearance, geometry, and contextual cues in a probabilistic framework. We demonstrate how individual cues contribute towards global segmentation accuracy and how their combination yields performance superior to the best known method on a challenging benchmark dataset which exhibits a diversity of street scenes with varying viewpoints and a large number of categories, captured in daylight and at dusk.
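The spatial co-occurrence model can be illustrated by its simplest ingredient: counting how often pairs of visual words appear on adjacent superpixels. A hypothetical sketch, not the paper's full probabilistic model:

```python
from collections import Counter

def cooccurrence_counts(word_of_superpixel, adjacency):
    """Count unordered co-occurrences of visual words across adjacent
    superpixel pairs; word_of_superpixel maps superpixel id -> visual word."""
    counts = Counter()
    for i, j in adjacency:
        a, b = word_of_superpixel[i], word_of_superpixel[j]
        counts[tuple(sorted((a, b)))] += 1
    return counts
```

Normalised over a training set, such counts become the contextual prior that biases, say, a "road" superpixel to sit next to "sidewalk" rather than "sky".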


Computer Vision and Pattern Recognition | 2008

Detection and matching of rectilinear structures

Branislav Micusik; Horst Wildenauer; Jana Kosecka

Indoor and outdoor urban environments possess many regularities which can be efficiently exploited and used for general image parsing tasks. We present a novel approach for detecting rectilinear structures and demonstrate their use for wide-baseline stereo matching, planar 3D reconstruction, and computation of geometric context. Assuming the presence of dominant orthogonal vanishing directions, we proceed by formulating the detection of rectilinear structures as a labeling problem on detected line segments. The line segment labels, respecting the proposed grammar rules, are established as the MAP assignment of the corresponding MRF. The proposed framework allows the detection of full as well as partial rectangles, rectangle-in-rectangle structures, and rectangles sharing edges. The use of detected rectangles is demonstrated in the context of difficult wide-baseline matching tasks in the presence of repetitive structures and large appearance changes.
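A simplified version of the data term for such a labeling problem assigns each segment to the vanishing point its supporting line passes closest to, or to an outlier label. This sketch omits the grammar rules and the MRF inference; the names and pixel threshold are illustrative:

```python
import numpy as np

def label_segments(segments, vps, thresh=3.0):
    """segments: list of endpoint pairs (p, q); vps: finite vanishing points
    (x, y). Score = pixel distance from the VP to the segment's supporting
    line; return the index of the best VP per segment, or -1 for outliers."""
    labels = []
    for p, q in segments:
        l = np.asarray(np.cross([p[0], p[1], 1.0], [q[0], q[1], 1.0]), dtype=float)
        l /= np.linalg.norm(l[:2])       # normalise so l @ (x, y, 1) is a distance
        d = [abs(l @ np.array([v[0], v[1], 1.0])) for v in vps]
        k = int(np.argmin(d))
        labels.append(k if d[k] < thresh else -1)
    return labels
```

In the paper these per-segment scores would be only the unary part of an MRF whose pairwise terms encode the rectangle grammar.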


European Conference on Computer Vision | 2006

Automatic image segmentation by positioning a seed

Branislav Micusik; Allan Hanbury

We present a method that automatically partitions a single image into non-overlapping regions coherent in texture and colour. It relies on the assumption that each textured or coloured region can be represented by a small template, called the seed. Positioning the seed across the input image gives many possible sub-segmentations of the image having the same texture and colour properties as the pixels behind the seed. A probability map constructed during the sub-segmentations helps to assign each pixel to the single most probable region and to produce the final pyramid representing segmentations at various levels of detail. Each sub-segmentation is obtained as the min-cut/max-flow in a graph built from the image and the seed. One segment may consist of several isolated parts. Compared to other methods, our approach needs no learning process or a priori information about the textures in the image. Performance of the method is evaluated on images from the Berkeley database.
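The min-cut/max-flow step can be illustrated on a 1-D "image": source edges tie each pixel to the seed's appearance model, sink edges penalise keeping it, neighbour edges add smoothness, and the source side of the minimum cut is the segment. A self-contained Edmonds-Karp sketch with made-up weights, not the paper's graph construction:

```python
import numpy as np
from collections import deque

def min_cut_segment(cap, s, t):
    """Edmonds-Karp max flow on a dense capacity matrix; returns the
    source-reachable side of the resulting minimum cut."""
    n = cap.shape[0]
    flow = np.zeros_like(cap, dtype=float)
    while True:
        # breadth-first search for an augmenting path in the residual graph
        parent = np.full(n, -1)
        parent[s] = s
        q = deque([s])
        while q and parent[t] == -1:
            u = q.popleft()
            for v in range(n):
                if parent[v] == -1 and cap[u, v] - flow[u, v] > 1e-12:
                    parent[v] = u
                    q.append(v)
        if parent[t] == -1:
            break
        path, v = [], t
        while v != s:
            path.append((parent[v], v))
            v = parent[v]
        push = min(cap[u, v] - flow[u, v] for u, v in path)
        for u, v in path:
            flow[u, v] += push
            flow[v, u] -= push
    # pixels still reachable from the source form the foreground segment
    reach = np.zeros(n, dtype=bool)
    reach[s] = True
    q = deque([s])
    while q:
        u = q.popleft()
        for v in range(n):
            if not reach[v] and cap[u, v] - flow[u, v] > 1e-12:
                reach[v] = True
                q.append(v)
    return reach

def seed_segment(intensity, seed_mean, lam=2.0):
    """Toy seed segmentation of a 1-D signal: source edges encode affinity
    to the seed's intensity, sink edges the cost of keeping a pixel, and
    neighbour edges a smoothness prior (all weights are made up)."""
    n = len(intensity)
    s, t = n, n + 1
    cap = np.zeros((n + 2, n + 2))
    for i, v in enumerate(intensity):
        diff = abs(v - seed_mean)
        cap[s, i] = max(0.0, 20.0 - 2.0 * diff)
        cap[i, t] = 2.0 * diff
        if i + 1 < n:
            cap[i, i + 1] = cap[i + 1, i] = lam
    return min_cut_segment(cap, s, t)[:n]
```

Production graph-cut segmentation uses specialised solvers (e.g. Boykov-Kolmogorov) on 2-D grids, but the cut structure is the same.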


british machine vision conference | 2007

Sparse MRF Appearance Models for Fast Anatomical Structure Localisation

René Donner; Branislav Micusik; Georg Langs; Horst Bischof

Image segmentation methods like active shape models, active appearance models, or snakes require an initialisation that guarantees a considerable overlap with the object to be segmented. In this paper we present an approach that localises anatomical structures in a global manner by means of Markov Random Fields (MRFs). It needs no initialisation, but finds the most plausible match of the query structure in the image. It provides precise, reliable, and fast detection of the structure and can serve as initialisation for more detailed segmentation steps. Sparse MRF Appearance Models (SAMs) encode a priori information about the geometric configurations of interest points, local features at these points, and local features along the edges of adjacent points. This information is used to formulate a Markov Random Field, and the mapping of the modeled object (e.g. a sequence of vertebrae) to the query-image interest points is performed by the MAX-SUM algorithm. The local image information is captured by novel symmetry-based interest points and local descriptors derived from Gradient Vector Flow. Experimental results are reported for two datasets, showing the applicability of the approach to complex medical data.
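When the modeled object is a sequence (e.g. vertebrae), the MAX-SUM problem is a chain and can be solved exactly by dynamic programming (Viterbi). A generic sketch, assuming hypothetical unary (appearance) and pairwise (geometry) score arrays rather than the paper's actual features:

```python
import itertools
import numpy as np

def chain_max_sum(unary, pair):
    """Exact MAX-SUM on a chain: unary[t, k] scores part t at candidate k,
    pair[t, k, l] scores consecutive parts t -> t+1 at candidates (k, l).
    Returns the best total score and the maximising assignment."""
    T, K = unary.shape
    score = unary[0].copy()
    back = np.zeros((T, K), dtype=int)
    for t in range(1, T):
        cand = score[:, None] + pair[t - 1] + unary[t][None, :]
        back[t] = np.argmax(cand, axis=0)
        score = np.max(cand, axis=0)
    best = np.zeros(T, dtype=int)
    best[-1] = int(np.argmax(score))
    for t in range(T - 1, 0, -1):
        best[t - 1] = back[t, best[t]]
    return float(np.max(score)), best
```

For loopy interest-point graphs the paper's general MAX-SUM solver is needed, but on a chain this O(T·K²) recursion is exact.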


Computer Vision and Pattern Recognition | 2010

Simultaneous surveillance camera calibration and foot-head homology estimation from human detections

Branislav Micusik; Tomas Pajdla

We propose a novel method for automatic camera calibration and foot-head homology estimation by observing persons standing at several positions in the camera's field of view. We demonstrate that the human body can be considered a calibration target, thus avoiding special calibration objects or manually established fiducial points. First, by assuming roughly parallel human poses, we derive a new constraint which allows the calibration of internal and external camera parameters to be formulated as a Quadratic Eigenvalue Problem. Second, we couple the calibration with an improved, effective integral-contour-based human detector and use projected 3D models to capture a large variety of mutual person and camera positions. The resulting camera auto-calibration method is very robust and efficient, and thus well suited for surveillance applications where the camera calibration process cannot use special calibration targets and must be simple.
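One geometric ingredient of such people-based calibration can be sketched directly: under the roughly-parallel-poses assumption, the image lines through each person's foot and head points all meet in the vertical vanishing point, which a least-squares fit recovers. This is only one ingredient, not the paper's Quadratic Eigenvalue formulation; the function name is hypothetical:

```python
import numpy as np

def vertical_vp(feet, heads):
    """Least-squares vanishing point of the vertical direction: each standing
    person contributes the image line through their foot and head points; the
    VP is the smallest right singular vector of the stacked lines."""
    L = []
    for f, h in zip(feet, heads):
        l = np.asarray(np.cross([f[0], f[1], 1.0], [h[0], h[1], 1.0]), dtype=float)
        L.append(l / np.linalg.norm(l))
    _, _, Vt = np.linalg.svd(np.array(L))
    v = Vt[-1]
    return v / v[2] if abs(v[2]) > 1e-9 else v   # dehomogenise when finite
```

Together with the horizon induced by the foot-head homology, such vanishing geometry constrains the focal length and camera tilt.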

Collaboration


Dive into Branislav Micusik's collaborations.

Top Co-Authors

Horst Wildenauer (Vienna University of Technology)
Roman P. Pflugfelder (Austrian Institute of Technology)
Tomas Pajdla (Czech Technical University in Prague)
Martin Kampel (Vienna University of Technology)
Michael Hödlmoser (Vienna University of Technology)
Jana Kosecka (George Mason University)
Allan Hanbury (Vienna University of Technology)
Cristina Picus (Austrian Institute of Technology)
Georg Langs (Medical University of Vienna)