Oliver Demetz
Saarland University
Publications
Featured research published by Oliver Demetz.
international conference on scale space and variational methods in computer vision | 2013
David Hafner; Oliver Demetz; Joachim Weickert
The census transform is becoming increasingly popular in the context of optic flow computation in image sequences. Since it is invariant under monotonically increasing grey value transformations, it forms the basis of an illumination-robust constancy assumption. However, its underlying mathematical concepts have not been studied so far. The goal of our paper is to provide this missing theoretical foundation. We study the continuous limit of the inherently discrete census transform and embed it into a variational setting. Our analysis shows two surprising results: The census-based technique enforces matchings of extrema, and it induces an anisotropy in the data term by acting along level lines. Last but not least, we establish links to the widely used gradient constancy assumption and present experiments that confirm our findings.
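The discrete transform that the paper takes as its starting point is simple to state in code. The following minimal sketch is our own illustrative implementation, not the authors' code; the patch radius, wrap-around border handling, and Hamming cost are common conventions we assume here.

```python
import numpy as np

def census_transform(img, radius=1):
    """Census signature per pixel: one bit per neighbour in the
    (2r+1)x(2r+1) patch, set if that neighbour is brighter than the
    centre pixel.  The signature is invariant under monotonically
    increasing grey value transformations.  Borders wrap around,
    which keeps this sketch short."""
    bits = []
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            if dy == 0 and dx == 0:
                continue
            # value at (y, x) is the neighbour at offset (dy, dx)
            neighbour = np.roll(np.roll(img, -dy, axis=0), -dx, axis=1)
            bits.append((neighbour > img).astype(np.uint8))
    return np.stack(bits, axis=-1)  # shape (h, w, n_neighbours)

def census_cost(sig1, sig2):
    """Hamming distance between census signatures: the usual matching
    cost for census-based constancy assumptions."""
    return np.sum(sig1 != sig2, axis=-1)
```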
european conference on computer vision | 2014
Oliver Demetz; Michael Stoll; Sebastian Volz; Joachim Weickert; Andrés Bruhn
The increasing importance of outdoor applications such as driver assistance systems or video surveillance tasks has recently triggered the development of optical flow methods that aim at performing robustly under uncontrolled illumination. Most of these methods rely on patch-based features such as the normalized cross correlation, the census transform or the rank transform. They achieve their robustness by locally discarding both absolute brightness and contrast. In this paper, we follow an alternative strategy: Instead of discarding potentially important image information, we propose a novel variational model that jointly estimates both illumination changes and optical flow. The key idea is to parametrize the illumination changes in terms of basis functions that are learned from training data. While such basis functions allow for a meaningful representation of illumination effects, they also help to distinguish real illumination changes from motion-induced brightness variations if supplemented by additional smoothness constraints. Experiments on the KITTI benchmark show the clear benefits of our approach. They not only demonstrate that it is possible to obtain meaningful basis functions, but also show state-of-the-art results for robust optical flow estimation.
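To make the key idea concrete, here is a hedged sketch of how such a basis could be learned. The function name, the PCA-based construction, and the representation of illumination changes as brightness transfer curves are our illustrative assumptions, not the paper's exact training procedure.

```python
import numpy as np

def learn_illumination_basis(transfer_curves, n_basis=5):
    """Hedged sketch: learn a low-dimensional basis for illumination
    changes via PCA.  `transfer_curves` is an (n_samples, n_levels)
    array of brightness transfer curves observed between differently
    illuminated training image pairs -- our stand-in for the paper's
    training data.  Returns the mean curve and the leading principal
    components as basis functions."""
    mean = transfer_curves.mean(axis=0)
    _, _, vt = np.linalg.svd(transfer_curves - mean, full_matrices=False)
    return mean, vt[:n_basis]

# An illumination change at a pixel is then represented by a small
# coefficient vector c, i.e. curve(x) ~ mean + c @ basis, and such
# coefficient fields can be estimated jointly with the flow under
# additional smoothness constraints.
```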
british machine vision conference | 2013
Oliver Demetz; David Hafner; Joachim Weickert
Most researchers agree that invariances are desirable in computer vision systems. However, one always has to keep in mind that this comes at the expense of accuracy: By construction, all invariances inevitably discard information. The concept of morphological invariance is a good example of this trade-off and is the focus of this paper. Our goal is to develop a descriptor of local image structure that carries the maximally possible amount of local image information under this invariance. To fulfill this requirement, our descriptor has to encode the full ordering of the pixel intensities in the local neighbourhood. As a solution, we introduce the complete rank transform, which stores the intensity rank of every pixel in the local patch. As a proof of concept, we embed our novel descriptor in a prototypical TV-L1-type energy functional for optical flow computation, which we minimise with a traditional coarse-to-fine warping scheme. In this straightforward framework, we demonstrate that our descriptor is preferable to related features that exhibit the same invariance. Finally, we show by means of public benchmark systems that our method, in spite of its simplicity, produces results of competitive quality.
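The descriptor itself is easy to illustrate. The sketch below is our own; scipy's `rankdata` and the edge padding are assumed implementation details rather than the paper's discretisation.

```python
import numpy as np
from scipy.stats import rankdata

def complete_rank_transform(img, radius=1):
    """Illustrative sketch: for every pixel, store the intensity rank
    of each pixel in its local (2r+1)x(2r+1) patch.  The descriptor
    encodes the full ordering of the patch intensities and is hence
    invariant under monotonically increasing intensity rescalings."""
    h, w = img.shape
    size = 2 * radius + 1
    out = np.empty((h, w, size * size), dtype=np.float32)
    padded = np.pad(img, radius, mode='edge')
    for y in range(h):
        for x in range(w):
            patch = padded[y:y + size, x:x + size].ravel()
            # rank = number of strictly smaller values in the patch
            out[y, x] = rankdata(patch, method='min') - 1
    return out
```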
34th Symposium of the German Association for Pattern Recognition; 36th Annual Austrian Association for Pattern Recognition Conference | 2012
Christopher Schroers; Henning Zimmer; Levi Valgaerts; Andrés Bruhn; Oliver Demetz; Joachim Weickert
Obtaining high-quality 3D models of real world objects is an important task in computer vision. A very promising approach to achieve this is given by variational range image integration methods: They are able to deal with a substantial amount of noise and outliers, while regularising and thus creating smooth surfaces at the same time. Our paper extends the state-of-the-art approach of Zach et al. (2007) in several ways: (i) We replace the isotropic space-variant smoothing behaviour by an anisotropic (direction-dependent) one. Due to the directional adaptation, a better control of the smoothing with respect to the local structure of the signed distance field can be achieved. (ii) In order to keep data and smoothness term in balance, a normalisation factor is introduced. As a result, oversmoothing of locations that are seldom seen is prevented. This allows high quality reconstructions in uncontrolled capture setups, where the camera positions are unevenly distributed around an object. (iii) Finally, we use the more accurate closest signed distances instead of directional signed distances when converting range images into 3D signed distance fields. Experiments demonstrate that each of our three contributions leads to clearly visible improvements in the reconstruction quality.
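In simplified notation of our own (the regularisation weight λ and the tensor D are placeholder symbols, not the paper's exact quantities), the underlying model can be sketched as follows:

```latex
% Zach et al. (2007): integrate per-view signed distance fields f_i
% into one field u with a robust data term and total variation:
E(u) = \int_{\Omega} \sum_i \bigl| u(x) - f_i(x) \bigr|
       + \lambda \, \lvert \nabla u(x) \rvert \, dx
% Anisotropic extension (contribution (i)): replace the isotropic
% penaliser by a direction-dependent one steered by a tensor D(x)
% adapted to the local structure of the distance field:
\lvert \nabla u \rvert \;\longrightarrow\;
       \sqrt{ \nabla u^{\top} D(x) \, \nabla u }
% Contribution (ii) additionally normalises the data term so that
% seldom-seen locations are not oversmoothed.
```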
international conference on pattern recognition | 2014
David Hafner; Oliver Demetz; Joachim Weickert
Camera shakes and moving objects pose a severe problem in the high dynamic range (HDR) reconstruction from differently exposed images. We present the first approach that simultaneously computes the aligned HDR composite as well as accurate displacement maps. In this way, we can not only cope with dynamic scenes but even precisely represent the underlying motion. We design our fully coupled model transparently in a well-founded variational framework. The proposed joint optimisation has beneficial effects, such as intrinsic ghost removal or HDR-coupled smoothing. Both the HDR images and the optic flows benefit substantially from these features and the induced mutual feedback. We demonstrate this with synthetic and real-world experiments.
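For context, the classical compositing step alone can be sketched as below. The joint alignment and the HDR-coupled smoothing that constitute the paper's actual contribution are deliberately omitted; the hat-shaped weighting and the assumption of linearised images are our illustrative choices.

```python
import numpy as np

def hdr_composite(images, exposure_times):
    """Minimal sketch of HDR compositing from *already aligned*,
    linearised exposures with values in [0, 1].  Radiance is a
    weighted average of the exposure-normalised images."""
    num = np.zeros_like(images[0], dtype=np.float64)
    den = np.zeros_like(images[0], dtype=np.float64)
    for img, t in zip(images, exposure_times):
        # Trust well-exposed pixels, distrust under-/over-exposed ones.
        w = 1.0 - np.abs(2.0 * img - 1.0) + 1e-6
        num += w * img / t   # exposure-normalised radiance estimate
        den += w
    return num / den
```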
international conference on scale space and variational methods in computer vision | 2011
Oliver Demetz; Joachim Weickert; Andrés Bruhn; Henning Zimmer
While image scale spaces are well understood, it is undeniable that the regularisation parameter in variational optic flow methods plays a similar role to the scale parameter in scale space evolutions. However, no thorough analysis of this optic flow scale-space exists to date. Our paper closes this gap by interpreting variational optic flow methods as Whittaker-Tikhonov regularisations of the normal flow, evaluated in a constraint-specific norm. The transition from this regularisation framework to an optic flow evolution creates novel vector-valued scale-spaces that are not in divergence form and act in a highly anisotropic way. From a practical viewpoint, the deep structure in the optic flow scale-space allows the automatic selection of the most accurate scale by means of an optimal prediction principle. Moreover, we show that our general class of optic flow scale-spaces incorporates novel methods that outperform classical variational approaches.
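In our own simplified notation (the norm subscript A and the symbol w_n are placeholders), the interpretation can be sketched as:

```latex
% A variational optic flow energy with regularisation weight \alpha,
% read as a Whittaker--Tikhonov regularisation of the normal flow w_n
% in a constraint-specific norm \lVert \cdot \rVert_A :
E_{\alpha}(w) = \int_{\Omega} \lVert w - w_n \rVert_{A}^{2}
                + \alpha \, \lvert \nabla w \rvert^{2} \, dx
% Letting the weight act as an evolution time t ~ \alpha yields a
% vector-valued scale-space with the normal flow as initial state,
% w|_{t=0} = w_n, in which increasing t produces ever smoother
% flow fields.
```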
International Journal of Computer Vision | 2015
Oliver Demetz; David Hafner; Joachim Weickert
Invariances are one of the key concepts to render computer vision algorithms robust against severe illumination changes. However, there is no free lunch: with any invariance comes an unavoidable loss of information. The goal of our paper is to introduce two novel descriptors which minimise this loss: the complete rank transform and the complete census transform. They are invariant under monotonically increasing intensity rescalings, while containing the maximum possible amount of information. To analyse our descriptors, we embed them as constancy assumptions into a variational framework for optic flow computation. As a suitable regularisation term, we choose total generalised variation, which favours piecewise affine solutions. Our experiments focus on the KITTI benchmark, where robustness w.r.t. illumination changes is one of the main issues. The results demonstrate that our descriptors yield state-of-the-art accuracy.
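As a rough illustration of how such descriptors enter a data term, the following sketch evaluates a descriptor constancy cost for a given flow field. The nearest-neighbour warping and the L1 norm are our simplifications, not the paper's discretisation.

```python
import numpy as np

def descriptor_constancy_cost(desc1, desc2, flow):
    """Hedged sketch of a descriptor constancy data term
    ||d(I1)(x) - d(I2)(x + w(x))||_1 for dense descriptor fields
    (e.g. complete rank transform signatures) of shape (h, w, d)
    and a flow field of shape (h, w, 2)."""
    h, w, _ = desc1.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(np.float64)
    xq = np.clip(np.rint(xs + flow[..., 0]), 0, w - 1).astype(int)
    yq = np.clip(np.rint(ys + flow[..., 1]), 0, h - 1).astype(int)
    warped = desc2[yq, xq]            # d(I2) sampled at x + w(x)
    return np.abs(desc1 - warped).sum(axis=-1)
```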
asian conference on computer vision | 2012
Vladislav Kramarev; Oliver Demetz; Christopher Schroers; Joachim Weickert
We study an advanced method for supervised multi-label image segmentation. To this end, we adopt a classic framework which has recently been revitalised by Rhemann et al. (2011). Instead of the usual global energy minimisation step, it relies on a mere evaluation of a cost function for every solution label, followed by a spatial smoothing step of these costs. While Rhemann et al. concentrate on efficiency, the goal of this paper is to equip the general framework with sophisticated subcomponents in order to develop a high-quality method for multi-label image segmentation: First, we present a substantially improved cost computation scheme which incorporates texture descriptors as well as an automatic feature selection strategy. This leads to a high-dimensional feature space, from which we extract the label costs using a support vector machine. Second, we present a novel anisotropic diffusion scheme for the filtering step. In this PDE-based process, the smoothing of the cost volume is steered along the structures of the previously computed feature space. Experiments on widely used image databases show that our scheme produces segmentations of clearly superior quality.
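The evaluate-then-smooth structure of this framework is easy to sketch. In the toy version below, an isotropic Gaussian stands in for the paper's anisotropic, feature-steered diffusion of the cost volume.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def filtered_winner_takes_all(cost_volume, sigma=2.0):
    """Sketch of the general framework: per-label costs of shape
    (h, w, n_labels) are spatially smoothed, then the cheapest label
    is picked pointwise -- no global energy minimisation involved."""
    smoothed = np.stack(
        [gaussian_filter(cost_volume[..., k], sigma)
         for k in range(cost_volume.shape[-1])], axis=-1)
    return smoothed.argmin(axis=-1)  # per-pixel label decision
```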
Archive | 2015
Oliver Demetz
The robust estimation of correspondences in image sequences is one of the fundamental problems in computer vision. One of the biggest challenges in this context is appearance changes, because the traditional brightness constancy assumption only holds under idealised conditions. In realistic scenarios, more advanced concepts are necessary to estimate correspondences reliably. Thus, in the context of dense optic flow estimation, the main goal of this thesis is to develop, analyse, and compare strategies and constancy assumptions that are able to handle uncontrolled lighting situations robustly. To this end, we contribute two solutions to this problem that follow very different strategies.

First, we consider invariances, which today are the dominating concept in the literature for tackling appearance changes and uncontrolled lighting conditions. Invariant features are properties derived from the input images that remain unaffected under certain classes of intensity rescalings. We present a systematic and broad overview of available invariances from the literature and introduce two novel ordering-based features, the complete rank transform and the complete census transform. We analyse important properties and suitable metrics for these signatures and present a generic variational framework that allows the incorporation of any of the discussed signatures.

Our second contribution is motivated by the biggest disadvantage of invariance-based constancy assumptions: the most challenging appearance changes are local phenomena, but invariances act globally. In the case of drop shadows, for instance, only parts of the image change. Thus, in regions without appearance changes, the invariance blindly discards potentially valuable information. This fact motivates us to develop a method that integrates illumination changes explicitly into the data model and locally compensates the constancy assumption for occurring changes.

Finally, we consider another aspect of optic flow, where variational methods are prevailing today. All such functionals share some type of regularisation term, whose weight is the crucial parameter in practice: the larger its value, the smoother the solution. Evidently, there is scale space behaviour in this parameter, which, however, is so far only well understood in the context of signal regularisation and image scale spaces. We perform this missing analysis of the optic flow scale space as the third contribution of this thesis.
Journal of Mathematical Imaging and Vision | 2015
David Hafner; Oliver Demetz; Joachim Weickert; Martin Reißel
In recent years, the popularity of the census transform has grown rapidly. It provides features that are invariant under monotonically increasing intensity transformations. Therefore, it is exploited as a key ingredient of various computer vision problems, in particular for illumination-robust optic flow models. However, despite being extensively applied, its underlying mathematical foundations have so far not been well understood. The goal of our paper is to provide these missing insights and, in this way, to generalise the concept of the census transform. To this end, we transfer the inherently discrete transform to the continuous setting and embed it into a variational framework for optic flow estimation. This uncovers two important properties: the strong reliance on local extrema, and the anisotropy induced in the data term by acting along isolines of the image. These new findings open the door to generalisations of the census transform that are not obvious in the discrete formulation. To illustrate this, we introduce and analyse second order census models that are based on thresholding the second order directional derivatives. Last but not least, we establish links between census-based approaches and established data terms such as gradient constancy, Hessian constancy, and Laplacian constancy. We confirm our findings by means of experiments.
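A second order census signature of the kind described can be sketched as follows. The central-difference derivatives, the Gaussian presmoothing, and the set of orientations are our illustrative choices, not the paper's exact discretisation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def second_order_census(img, sigma=1.0, n_dirs=8):
    """Illustrative sketch: binarise the sign of the second order
    directional derivatives of a presmoothed image in a set of
    orientations.  The second directional derivative along the unit
    vector (c, s) is c^2 f_xx + 2 c s f_xy + s^2 f_yy."""
    f = gaussian_filter(img, sigma)
    fxx = np.gradient(np.gradient(f, axis=1), axis=1)
    fyy = np.gradient(np.gradient(f, axis=0), axis=0)
    fxy = np.gradient(np.gradient(f, axis=1), axis=0)
    sigs = []
    for theta in np.linspace(0, np.pi, n_dirs, endpoint=False):
        c, s = np.cos(theta), np.sin(theta)
        d2 = c * c * fxx + 2 * c * s * fxy + s * s * fyy
        sigs.append((d2 > 0).astype(np.uint8))
    return np.stack(sigs, axis=-1)  # (h, w, n_dirs) binary signature
```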