David Hafner
Saarland University
Publications
Featured research published by David Hafner.
International Conference on Scale Space and Variational Methods in Computer Vision | 2013
David Hafner; Oliver Demetz; Joachim Weickert
The census transform is becoming increasingly popular in the context of optic flow computation in image sequences. Since it is invariant under monotonically increasing grey value transformations, it forms the basis of an illumination-robust constancy assumption. However, its underlying mathematical concepts have not been studied so far. The goal of our paper is to provide this missing theoretical foundation. We study the continuous limit of the inherently discrete census transform and embed it into a variational setting. Our analysis shows two surprising results: The census-based technique enforces matchings of extrema, and it induces an anisotropy in the data term by acting along level lines. Last but not least, we establish links to the widely-used gradient constancy assumption and present experiments that confirm our findings.
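As a quick illustration of the descriptor studied here, the following NumPy sketch computes the classical discrete census transform on a square neighbourhood. The neighbourhood size and bit ordering are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch of the classical discrete census transform: each pixel is
# described by the binary signs of its comparisons with the central pixel,
# which makes the descriptor invariant under monotonically increasing
# grey value transformations.
import numpy as np

def census_transform(img, radius=1):
    """Return one bit string (stored as an integer) per pixel for a (2r+1)^2 patch."""
    h, w = img.shape
    padded = np.pad(img, radius, mode='edge')
    codes = np.zeros((h, w), dtype=np.uint64)
    bit = 0
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            if dy == 0 and dx == 0:
                continue  # skip the centre pixel itself
            neighbour = padded[radius + dy : radius + dy + h,
                               radius + dx : radius + dx + w]
            codes |= (neighbour < img).astype(np.uint64) << np.uint64(bit)
            bit += 1
    return codes

# The code only depends on the ordering of grey values, so any monotonically
# increasing remapping leaves it unchanged:
img = np.random.rand(32, 32)
print(np.array_equal(census_transform(img), census_transform(img ** 3 + 5)))  # True
```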
British Machine Vision Conference | 2013
Oliver Demetz; David Hafner; Joachim Weickert
Most researchers agree that invariances are desirable in computer vision systems. However, one always has to keep in mind that this is at the expense of accuracy: By construction, all invariances inevitably discard information. The concept of morphological invariance is a good example of this trade-off and is the focus of this paper. Our goal is to develop a descriptor of local image structure that carries the maximum possible amount of local image information under this invariance. To fulfill this requirement, our descriptor has to encode the full ordering of the pixel intensities in the local neighbourhood. As a solution, we introduce the complete rank transform, which stores the intensity rank of every pixel in the local patch. As a proof of concept, we embed our novel descriptor in a prototypical TV-L1-type energy functional for optical flow computation, which we minimise with a traditional coarse-to-fine warping scheme. In this straightforward framework, we demonstrate that our descriptor is preferable to related features that exhibit the same invariance. Finally, we show by means of public benchmark systems that, in spite of its simplicity, our method produces results of competitive quality.
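The following minimal sketch illustrates the descriptor on a single patch; the tie handling (counting strictly smaller pixels) is an assumption for illustration only, not taken from the paper.

```python
# Sketch of the complete rank transform of one local patch: every pixel is
# replaced by its rank within the patch, so the full intensity ordering is kept
# while any monotonically increasing grey value change leaves the result untouched.
import numpy as np

def complete_rank_transform(patch):
    flat = patch.ravel()
    # rank of each pixel = number of strictly smaller pixels in the patch
    ranks = np.array([(flat < v).sum() for v in flat])
    return ranks.reshape(patch.shape)

patch = np.array([[12, 40,  7],
                  [33, 40, 99],
                  [ 5, 21, 60]])
print(complete_rank_transform(patch))
# [[2 5 1]
#  [4 5 8]
#  [0 3 7]]
```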
International Conference on Pattern Recognition | 2014
David Hafner; Oliver Demetz; Joachim Weickert
Camera shakes and moving objects pose a severe problem in the high dynamic range (HDR) reconstruction from differently exposed images. We present the first approach that simultaneously computes the aligned HDR composite as well as accurate displacement maps. In this way, we can not only cope with dynamic scenes but even precisely represent the underlying motion. We design our fully coupled model transparently in a well-founded variational framework. The proposed joint optimisation has beneficial effects, such as intrinsic ghost removal or HDR-coupled smoothing. Both the HDR images and the optic flows benefit substantially from these features and the induced mutual feedback. We demonstrate this with synthetic and real-world experiments.
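For context, the sketch below shows the classical, non-joint merging step that such approaches build on: averaging already aligned, linearised exposures with a per-pixel confidence weight. The hat-shaped weighting and the assumption of known exposure times are illustrative; the paper's contribution is to couple this reconstruction with the displacement estimation, which is not reproduced here.

```python
# Not the paper's coupled model: a minimal sketch of classical HDR merging from
# aligned, linearised exposures with known exposure times.
import numpy as np

def merge_hdr(images, exposure_times, eps=1e-6):
    """images: list of aligned linear images in [0, 1]; returns an irradiance estimate."""
    num = np.zeros_like(images[0], dtype=np.float64)
    den = np.zeros_like(images[0], dtype=np.float64)
    for img, t in zip(images, exposure_times):
        w = 1.0 - np.abs(2.0 * img - 1.0)   # trust well-exposed pixels most
        num += w * img / t                   # per-image irradiance estimate
        den += w
    return num / (den + eps)
```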
German Conference on Pattern Recognition | 2015
David Hafner; Christopher Schroers; Joachim Weickert
On the one hand, anisotropic diffusion is a well-established concept that has improved numerous computer vision approaches by permitting direction-dependent smoothing. On the other hand, recent applications have uncovered the importance of second order regularisation. The goal of this work is to combine the benefits of both worlds. To this end, we propose a second order regulariser that allows us to penalise both jumps and kinks in a direction-dependent way. We start with an isotropic coupling model, and systematically introduce anisotropic concepts from first order approaches. We demonstrate the benefits of our model by experiments, and apply it to improve an existing focus fusion method.
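As a rough sketch of the starting point described above, an isotropic first/second order coupling model in the spirit of total generalised variation can be written as follows. The penaliser Psi and the weights alpha_0, alpha_1 are placeholders; this is not the paper's anisotropic regulariser.

```latex
% Isotropic coupling model (illustrative): an auxiliary vector field w absorbs
% the gradient of u, so that deviations |grad u - w| penalise jumps and
% variations of w penalise kinks.
R(u) \;=\; \min_{w} \;
      \alpha_1 \int_\Omega \Psi\!\left( |\nabla u - w|^2 \right) \mathrm{d}x
   \;+\;
      \alpha_0 \int_\Omega \Psi\!\left( |\nabla w|^2 \right) \mathrm{d}x
```

The paper's contribution, as described in the abstract, is to make both terms direction-dependent by carrying over anisotropic concepts from first order regularisers.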
Pattern Recognition | 2015
Madina Boshtayeva; David Hafner; Joachim Weickert
Focus fusion is the task of combining a set of images focused at different depths into a single image that is entirely in-focus. The crucial point of all focus fusion methods is the decision about the in-focus areas. To this end, we present a general framework for focus fusion that introduces a modern regularisation strategy on these per-pixel decisions. We assume that neighbouring pixels in the fused image belong to similar depth layers. Following this assumption, we smooth the depth map with a sophisticated anisotropic diffusion process combined with a robust data fidelity term. The experiments with synthetic and real-world data demonstrate that our new model yields a better quality than several existing focus fusion methods. Moreover, our methodology is general and can be applied to improve many fusion approaches.

Highlights:
- We show that depth map regularisation is an important tool for focus fusion.
- We incorporate modern concepts such as a coupled anisotropic diffusion term.
- We substantially improve the runtime with a fast GPU implementation.
- We evaluate different in-focus measures.
- We compare the overall performance to several methods from the literature.
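The unregularised baseline that such a framework improves on can be sketched as a per-pixel argmax over a focus measure; the Laplacian-magnitude measure and the window size below are illustrative assumptions, and the paper's anisotropic regularisation of the depth map is not reproduced here.

```python
# Naive focus fusion baseline: pick, per pixel, the focal slice with the
# strongest local contrast, without any regularisation of the depth map.
import numpy as np
from scipy.ndimage import laplace, uniform_filter

def naive_focus_fusion(stack):
    """stack: array of shape (n_slices, h, w), greyscale focal stack."""
    focus = np.stack([uniform_filter(np.abs(laplace(img)), size=7) for img in stack])
    depth = np.argmax(focus, axis=0)                       # per-pixel in-focus decision
    fused = np.take_along_axis(stack, depth[None], axis=0)[0]
    return fused, depth
```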
International Journal of Computer Vision | 2015
Oliver Demetz; David Hafner; Joachim Weickert
Invariances are one of the key concepts to render computer vision algorithms robust against severe illumination changes. However, there is no free lunch: With any invariance comes an unavoidable loss of information. The goal of our paper is to introduce two novel descriptors which minimise this loss: the complete rank transform and the complete census transform. They are invariant under monotonically increasing intensity rescalings, while containing a maximum possible amount of information. To analyse our descriptors, we embed them as constancy assumptions into a variational framework for optic flow computation. As a suitable regularisation term, we choose total generalised variation that favours piecewise affine solutions. Our experiments focus on the KITTI benchmark where robustness w.r.t. illumination changes is one of the main issues. The results demonstrate that our descriptors yield state-of-the-art accuracy.
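A hedged sketch of a descriptor in the spirit of the complete census transform is given below: it records all pairwise intensity comparisons within a patch, so it encodes the full ordering and is invariant under monotonically increasing intensity rescalings. The exact bit layout and tie handling are assumptions, not taken from the paper.

```python
# Illustrative descriptor: one bit per ordered pixel pair (i, j) of a patch,
# recording whether pixel i is darker than pixel j.
import numpy as np

def complete_census_descriptor(patch):
    flat = patch.ravel()
    n = flat.size
    bits = [int(flat[i] < flat[j]) for i in range(n) for j in range(n) if i != j]
    return np.array(bits, dtype=np.uint8)

patch = np.array([[3, 8],
                  [1, 5]])
print(complete_census_descriptor(patch))   # 12 bits for a 2x2 patch
```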
Computer Graphics Forum | 2016
David Hafner; Joachim Weickert
In this paper, we present a general variational method for image fusion. In particular, we combine different images of the same subject into a single composite that offers optimal exposedness, saturation and local contrast. Previous research approaches this task by first pre-computing application-specific weights based on the input, and then combining these weights with the images into the final composite later on. In contrast, we design our model assumptions directly on the fusion result. To this end, we formulate the output image as a convex combination of the input and incorporate concepts from perceptually inspired contrast enhancement such as a local and non-linear response. This output-driven approach is the key to the versatility of our general image fusion model. In this regard, we demonstrate the performance of our fusion scheme with several applications such as exposure fusion, multispectral imaging and decolourization. For all application domains, we conduct thorough validations that illustrate the improvements compared to state-of-the-art approaches that are tailored to the individual tasks.
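The output-driven formulation mentioned in the abstract can be summarised as follows; the notation is ours and the concrete energy terms of the paper are not reproduced.

```latex
% The fused image u is modelled pointwise as a convex combination of the n
% input images f_i, and the energy is then formulated directly in terms of u.
u(x) \;=\; \sum_{i=1}^{n} w_i(x)\, f_i(x),
\qquad w_i(x) \ge 0,
\qquad \sum_{i=1}^{n} w_i(x) = 1 .
```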
International Conference on Scale Space and Variational Methods in Computer Vision | 2015
David Hafner; Joachim Weickert
In this paper, we present a variational method for exposure fusion. In particular, we combine differently exposed images into a single composite that offers optimal exposedness, saturation, and local contrast. To this end, we formulate the output image as a convex combination of the input, and design an energy functional that implements important perceptually inspired concepts from contrast enhancement such as a local and nonlinear response. Several experiments demonstrate the quality of our technique and show improvements w.r.t. state-of-the-art methods.
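For comparison, the sketch below computes simple per-pixel exposedness, saturation, and contrast measures of the kind such an energy rewards, and combines them in the classical precomputed-weight fashion; this is the baseline strategy, not the paper's variational, output-driven formulation, and all constants are assumptions.

```python
# Classical weight-based exposure fusion baseline with illustrative quality measures.
import numpy as np
from scipy.ndimage import laplace

def fuse_exposures(images, sigma=0.2, eps=1e-12):
    """images: list of float RGB images in [0, 1] with identical shape (h, w, 3)."""
    weights = []
    for img in images:
        grey = img.mean(axis=2)
        exposedness = np.exp(-((grey - 0.5) ** 2) / (2 * sigma ** 2))  # prefer mid-tones
        saturation = img.std(axis=2)                                   # colourfulness
        contrast = np.abs(laplace(grey))                               # local contrast
        weights.append(exposedness * (saturation + eps) * (contrast + eps))
    weights = np.stack(weights)
    weights /= weights.sum(axis=0, keepdims=True)      # convex combination per pixel
    return np.sum(weights[..., None] * np.stack(images), axis=0)
```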
International Conference on Scale Space and Variational Methods in Computer Vision | 2015
Christopher Schroers; David Hafner; Joachim Weickert
In this paper we consider the problem of estimating depth maps from multiple views within a variational framework. Previous work has demonstrated that multiple views improve the depth reconstruction, and that higher order regularisers model a good prior for typical real-world 3D scenes. We build on these findings and stress an important aspect that has not been considered in variational multiview depth estimation so far: We investigate several parameterisations of the unknown depth. This allows us to show, both analytically and experimentally, that directly working with depth values introduces an undesirable bias. As a remedy, we reveal that an inverse depth parameterisation is generally preferable. Our analysis clearly points out its benefits w.r.t. the data and the smoothness term. We verify these theoretical findings by means of experiments.
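A standard piece of multi-view geometry (not an equation from the paper) illustrates why the parameterisation matters: written in terms of the inverse depth, the warp of a reference pixel into another view becomes affine in the unknown.

```latex
% For a reference pixel \tilde{x} (homogeneous coordinates) with inverse depth
% \rho = 1/Z, intrinsics K, and a second view with intrinsics K_i and relative
% pose (R_i, t_i), the warped position is (up to the homogeneous scale)
\tilde{x}_i \;\simeq\; K_i \left( R_i K^{-1} \tilde{x} \;+\; \rho(x)\, t_i \right),
\qquad \rho(x) = \frac{1}{Z(x)} ,
% whereas the same warp expressed directly in the depth Z is nonlinear in Z.
```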
Journal of Mathematical Imaging and Vision | 2015
David Hafner; Oliver Demetz; Joachim Weickert; Martin Reißel
In recent years, the popularity of the census transform has grown rapidly. It provides features that are invariant under monotonically increasing intensity transformations. Therefore, it is exploited as a key ingredient of various computer vision problems, in particular for illumination-robust optic flow models. However, despite being extensively applied, its underlying mathematical foundations are not yet well understood. The goal of our paper is to provide these missing insights, and in this way to generalise the concept of the census transform. To this end, we transfer the inherently discrete transform to the continuous setting and embed it into a variational framework for optic flow estimation. This uncovers two important properties: the strong reliance on local extrema and the anisotropy induced in the data term by acting along isolines of the image. These new findings open the door to generalisations of the census transform that are not obvious in the discrete formulation. To illustrate this, we introduce and analyse second order census models that are based on thresholding the second order directional derivatives. Last but not least, we relate census-based approaches to established data terms such as gradient constancy, Hessian constancy, and Laplacian constancy. We confirm our findings by means of experiments.
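The idea behind such a second order census model can be sketched as follows: binarise the signs of the second order directional derivatives d'Hd in a few sample directions. The chosen directions, the finite-difference approximations, and the zero threshold are illustrative assumptions, not the paper's exact construction.

```python
# Sketch: per pixel, record the sign of the second derivative along several
# sample directions, computed from the Hessian entries via d^T H d.
import numpy as np

def second_order_census(img, n_directions=4):
    gy, gx = np.gradient(img)
    gyy, gyx = np.gradient(gy)
    gxy, gxx = np.gradient(gx)
    codes = []
    for k in range(n_directions):
        theta = np.pi * k / n_directions
        dx, dy = np.cos(theta), np.sin(theta)
        # second derivative of img along direction (dx, dy)
        second_deriv = dx * dx * gxx + 2 * dx * dy * gxy + dy * dy * gyy
        codes.append((second_deriv > 0).astype(np.uint8))
    return np.stack(codes, axis=-1)   # one sign bit per direction and pixel
```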