Publication


Featured research published by Michel Bätz.


Signal Processing: Image Communication | 2014

High dynamic range video reconstruction from a stereo camera setup

Michel Bätz; Thomas Richter; Jens-Uwe Garbas; Anton Papst; Jürgen Seiler; André Kaup

To overcome the dynamic range limitations in images taken with regular consumer cameras, several methods exist for creating high dynamic range (HDR) content. Current low-budget solutions apply a temporal exposure bracketing which is not applicable for dynamic scenes or HDR video. In this article, a framework is presented that utilizes two cameras to realize a spatial exposure bracketing, for which the different exposures are distributed among the cameras. Such a setup allows for HDR images of dynamic scenes and HDR video due to its frame-by-frame operating principle, but faces challenges in the stereo matching and HDR generation steps. Therefore, the modules in this framework are selected to alleviate these challenges and to properly handle under- and oversaturated regions. In comparison to existing work, the camera response calculation is shifted to an offline process and a masking with a saturation map before the actual HDR generation is proposed. The first aspect enables the use of more complex camera setups with different sensors and provides robust camera responses. The second ensures that only the necessary pixel values are taken from the additional camera view and thus reduces errors in the final HDR image. The resulting HDR images are evaluated using the quality metric HDR-VDP-2, and numerical results are given for the first time. For the Middlebury test images, an average gain of 52 points on a 0-100 mean opinion score is achieved in comparison to temporal exposure bracketing with camera motion. Finally, HDR video results are provided.
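
As an illustration of the saturation masking described above, the following Python sketch merges two linearized, differently exposed views using a binary saturation map. The function name, thresholds, and the assumption that the second view has already been warped and exposure-aligned are illustrative; the paper's offline camera response calibration and stereo matching steps are omitted.

```python
import numpy as np

def merge_with_saturation_mask(primary, secondary_warped, exposure_ratio,
                               low=0.02, high=0.98):
    """Illustrative merge of two differently exposed views (values in [0, 1]).

    primary          : linearized image from the reference camera
    secondary_warped : linearized image from the second camera, already warped
                       to the reference view by stereo matching (assumed given)
    exposure_ratio   : exposure of the secondary view relative to the primary
    low, high        : thresholds defining under-/over-saturation (assumed)
    """
    # Saturation map: 1 where the primary view is unreliable.
    mask = (primary < low) | (primary > high)

    # Bring the secondary exposure to the radiance scale of the primary one.
    secondary_scaled = secondary_warped / exposure_ratio

    # Use the second view only where the mask demands it; everywhere else the
    # primary view alone determines the result, limiting warping errors.
    return np.where(mask, secondary_scaled, primary)
```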


International Conference on Image Processing | 2015

Hybrid super-resolution combining example-based single-image and interpolation-based multi-image reconstruction approaches

Michel Bätz; Andrea Eichenseer; Jürgen Seiler; Markus Jonscher; André Kaup

Achieving a higher spatial resolution is of particular interest in many applications such as video surveillance and can be realized by employing higher resolution sensors or applying super-resolution methods. Traditional super-resolution algorithms are based on either a single low resolution image or on multiple low resolution frames. In this paper, a hybrid super-resolution method is proposed which combines both a single-image and a multi-image approach using a soft decision mask. The mask is computed from the motion information utilized in the multi-image super-resolution part. This concept is shown to work for one particular setup but is also extensible toward other combinations of single-image and multi-image super-resolution algorithms as well as other merging metrics. Simulation results show an average luminance PSNR gain of up to 0.85 dB and 0.59 dB for upscaling factors of 2 and 4, respectively. Visual results substantiate the objective results.
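
A minimal sketch of the soft decision mask idea, assuming the two intermediate reconstructions and a per-pixel motion confidence are already available; the paper derives its mask from the motion information of the multi-image stage, and the function and parameter names here are assumptions.

```python
import numpy as np

def hybrid_super_resolution(sr_single, sr_multi, motion_confidence):
    """Blend a single-image and a multi-image super-resolution result.

    sr_single         : high resolution estimate from the example-based
                        single-image method
    sr_multi          : high resolution estimate from the interpolation-based
                        multi-image reconstruction
    motion_confidence : per-pixel value in [0, 1], high where the motion
                        information of the multi-image part is trusted
    """
    # Soft decision mask: favour the multi-image result where motion is
    # reliable and fall back to the single-image result elsewhere.
    mask = np.clip(motion_confidence, 0.0, 1.0)
    return mask * sr_multi + (1.0 - mask) * sr_single
```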


International Conference on Image Processing | 2015

A hybrid motion estimation technique for fisheye video sequences based on equisolid re-projection

Andrea Eichenseer; Michel Bätz; Jürgen Seiler; André Kaup

Capturing large fields of view with only one camera is important in surveillance and automotive applications, but the wide-angle fisheye imagery thus obtained exhibits special radial characteristics that are not well suited to typical image and video processing methods such as motion estimation. This paper introduces a motion estimation method that adapts to the typical radial characteristics of fisheye video sequences by making use of an equisolid re-projection after moving part of the motion vector search into the perspective domain via a corresponding back-projection. By combining this approach with conventional translational motion estimation and compensation, average gains in luminance PSNR of up to 1.14 dB are achieved for synthetic fisheye sequences and up to 0.96 dB for real-world data. Maximum gains for selected frame pairs amount to 2.40 dB and 1.39 dB for synthetic and real-world data, respectively.
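
The re-projection hinges on the difference between the perspective and the equisolid-angle mapping of the incident angle. The following are the standard textbook forms of these two projections; the paper's exact notation and normalization may differ.

```latex
% Radial image distance r of a ray with incident angle \theta and focal length f:
r_{\mathrm{persp}}     = f \tan\theta                       % perspective (rectilinear) projection
r_{\mathrm{equisolid}} = 2 f \sin\!\left(\tfrac{\theta}{2}\right)  % equisolid-angle fisheye projection
```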


Visual Communications and Image Processing | 2015

Centroid adapted frequency selective extrapolation for reconstruction of lost image areas

Wolfgang Schnurrer; Markus Jonscher; Jürgen Seiler; Thomas Richter; Michel Bätz; André Kaup

Lost image areas of different sizes and arbitrary shapes can occur in many scenarios such as error-prone communication, depth-based image rendering, or motion-compensated wavelet lifting. The goal of image reconstruction is to restore these lost image areas as closely to the original as possible. Frequency selective extrapolation is a block-based method for efficiently reconstructing lost areas in images. So far, the actual shape of the lost area has not been considered directly. We propose a centroid adaptation that takes the shape of the lost areas into account to enhance the existing frequency selective extrapolation algorithm. To enlarge the test set for evaluation, we further propose a method to generate arbitrarily shaped lost areas. On our large test set, we obtain an average reconstruction gain of 1.29 dB.
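
As a small illustration of the centroid idea, the sketch below computes the centroid of a binary loss mask. How the centroid is then used to adapt the frequency selective extrapolation is not reproduced here, and the function name and mask convention are assumptions.

```python
import numpy as np

def lost_area_centroid(loss_mask):
    """Centroid (row, column) of the lost pixels in a binary mask (True = lost).

    Purely illustrative: the adaptation of the extrapolation itself is not shown.
    """
    rows, cols = np.nonzero(loss_mask)
    return rows.mean(), cols.mean()
```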


International Conference on Image Processing | 2016

Motion estimation for fisheye video sequences combining perspective projection with camera calibration information

Andrea Eichenseer; Michel Bätz; André Kaup

Fisheye cameras prove a convenient means in surveillance and automotive applications as they provide a very wide field of view for capturing their surroundings. Contrary to typical rectilinear imagery, however, fisheye video sequences follow a different mapping from the world coordinates to the image plane, which is not considered in standard video processing techniques. In this paper, we present a motion estimation method for real-world fisheye videos by combining perspective projection with knowledge about the underlying fisheye projection. The latter is obtained by camera calibration, since actual lenses rarely follow exact models. Furthermore, we introduce a re-mapping for ultra-wide angles which would otherwise lead to wrong motion compensation results at the fisheye boundary. Both concepts extend an existing hybrid motion estimation method for equisolid fisheye video sequences that decides between traditional and fisheye block matching in a block-based manner. Compared to that method, the proposed calibration and re-mapping extensions yield gains of up to 0.58 dB in luminance PSNR for real-world fisheye video sequences. Overall gains reach up to 3.32 dB compared to traditional block matching.
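
A hedged sketch of the calibration-based mapping: a calibrated radius-to-angle function replaces the ideal equisolid model, and very large incident angles are clamped as a simple stand-in for the ultra-wide-angle re-mapping mentioned above. The callable interface, the clamping angle, and the function names are assumptions, not the paper's formulation.

```python
import numpy as np

def fisheye_to_perspective_radius(r_fisheye, theta_of_r, focal_length,
                                  theta_max=np.deg2rad(85.0)):
    """Map a calibrated fisheye radius to a perspective (rectilinear) radius.

    theta_of_r : callable returning the incident angle for a given fisheye
                 radius, obtained from camera calibration (assumed interface;
                 real lenses rarely follow the ideal equisolid model exactly)
    theta_max  : incident angles are clamped to this value before the
                 perspective mapping so tan() stays bounded (assumed value)
    """
    theta = np.minimum(theta_of_r(r_fisheye), theta_max)
    return focal_length * np.tan(theta)
```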


International Conference on Image Processing | 2016

Multi-image super-resolution using a dual weighting scheme based on Voronoi tessellation

Michel Bätz; Andrea Eichenseer; André Kaup

Increasing spatial resolution is often required in many applications such as entertainment systems or video surveillance. Apart from using higher resolution sensors, it is also possible to apply super-resolution algorithms to realize an increased resolution. These methods can be divided into approaches that rely on only a single low resolution image or on multiple low resolution video frames. While incorporating more frames into the super-resolution is beneficial for the resolution enhancement in principle, it is also likely to introduce more artifacts from inaccurate motion estimation. To alleviate this problem, various weightings have been proposed in the literature. In this paper, we propose an extended dual weighting scheme for an interpolation-based super-resolution method based on Voronoi tessellation that relies on both a motion confidence weight and a distance weight. Compared to non-weighted super-resolution, the proposed method yields an average gain in luminance PSNR of up to 1.29 dB and 0.61 dB for upscaling factors of 2 and 4, respectively. Visual comparisons substantiate the objective results.
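
The following sketch illustrates one plausible form of the dual weighting, combining a distance weight with a per-sample motion confidence. The Gaussian form of the distance weight and the parameter names are assumptions, and the Voronoi-based interpolation itself is not reproduced.

```python
import numpy as np

def dual_weight(sample_distances, motion_confidences, sigma_d=1.0):
    """Combined weight of motion-compensated samples contributing to a pixel.

    sample_distances   : distances of the samples to the high resolution
                         pixel position being reconstructed
    motion_confidences : per-sample confidence of the estimated motion in [0, 1]
    sigma_d            : decay of the distance weight (assumed Gaussian form)
    """
    distance_weight = np.exp(-(np.asarray(sample_distances) ** 2)
                             / (2.0 * sigma_d ** 2))
    return distance_weight * np.asarray(motion_confidences)
```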


European Signal Processing Conference | 2016

Multi-image super-resolution for fisheye video sequences using subpixel motion estimation based on calibrated re-projection

Michel Bätz; Andrea Eichenseer; André Kaup

Super-resolution techniques are a means for reconstructing a higher spatial resolution from low resolution content, which is especially important for automotive or surveillance systems. Furthermore, capturing a large area with a single camera can be realized by using ultra-wide-angle lenses, as employed in so-called fisheye cameras. However, the underlying non-perspective projection function of fisheye cameras introduces significant radial distortion, which is not considered by conventional super-resolution techniques. In this paper, we therefore propose the integration of a fisheye-adapted motion estimation approach that is based on a calibrated re-projection into a multi-image super-resolution framework. The proposed method is capable of taking the fisheye characteristics into account, thus improving the reconstruction quality. Simulation results show an average gain in luminance PSNR of up to 0.3 dB for upscaling factors of 2 and 4. Visual examples substantiate the objective results.


European Signal Processing Conference | 2015

Temporal error concealment for fisheye video sequences based on equisolid re-projection

Andrea Eichenseer; Jürgen Seiler; Michel Bätz; André Kaup

Wide-angle video sequences obtained by fisheye cameras exhibit characteristics that do not comply well with standard image and video processing techniques such as error concealment. This paper introduces a temporal error concealment technique designed for the inherent characteristics of equisolid fisheye video sequences by applying a re-projection into the equisolid domain after conducting part of the error concealment in the perspective domain. Combining this technique with conventional decoder motion vector estimation achieves average gains of 0.71 dB compared with pure decoder motion vector estimation for the test sequences used. Maximum gains reach 2.04 dB for selected frames.


International Conference on Image Processing | 2014

Reconstruction of images taken by a pair of non-regular sampling sensors using correlation based matching

Markus Jonscher; Jürgen Seiler; Thomas Richter; Michel Bätz; André Kaup

Multi-view image acquisition systems with two or more cameras can be rather costly due to the number of high resolution image sensors that are required. Recently, it has been shown that by covering a low resolution sensor with a non-regular sampling mask and by using an efficient algorithm for image reconstruction, a high resolution image can be obtained. In this paper, a stereo image reconstruction setup for multi-view scenarios is proposed. A scene is captured by a pair of non-regular sampling sensors and by incorporating information from the adjacent view, the reconstruction quality can be increased. Compared to a state-of-the-art single-view reconstruction algorithm, this leads to a visually noticeable average gain in PSNR of 0.74 dB.
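
To make the non-regular sampling idea concrete, the sketch below simulates a sensor that measures only a random subset of high resolution pixel positions. The keep fraction and the uniform random mask are illustrative assumptions; the actual reconstruction algorithm and the use of the adjacent stereo view are not reproduced here.

```python
import numpy as np

def non_regular_sampling(image_hr, keep_fraction=0.25, seed=0):
    """Simulate a low resolution sensor covered by a non-regular sampling mask.

    Only a random subset of the high resolution pixel positions is measured;
    the reconstruction of the unmeasured positions is left to a separate step.
    """
    rng = np.random.default_rng(seed)
    mask = rng.random(image_hr.shape) < keep_fraction  # True = measured pixel
    samples = np.where(mask, image_hr, 0.0)            # unmeasured pixels set to 0
    return samples, mask
```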


Proceedings of SPIE | 2014

Cost-effective multi-camera array for high quality video with very high dynamic range

Joachim Keinert; Marcus Wetzel; Michael Schöberl; Peter Schäfer; Frederik Zilly; Michel Bätz; Siegfried Fößel; André Kaup

Temporal bracketing can create images with a higher dynamic range than the underlying sensor. Unfortunately, moving objects cause disturbing artifacts. Moreover, the combination with high frame rates is almost unachievable, since a single video frame requires multiple sensor readouts. The combination of multiple synchronized side-by-side cameras equipped with different attenuation filters promises a remedy, since all exposures can be performed at the same time with the same duration using the playout video frame rate. However, a disparity correction is needed to compensate for the spatial displacement of the cameras. Unfortunately, the requirements for a high quality disparity correction contradict the goal of increasing the dynamic range. When using two cameras, disparity correction needs objects to be properly exposed in both cameras. In contrast, a dynamic range increase needs the cameras to capture different luminance ranges. As this contradiction has not been addressed in the literature so far, this paper proposes a novel solution based on a three camera setup. It enables accurate determination of the disparities and an increase of the dynamic range by nearly a factor of two while still limiting costs. Compared to a two camera solution, the mean opinion score (MOS) is improved by 13.47 units on average for the Middlebury images.

Collaboration


Dive into Michel Bätz's collaborations.

Top Co-Authors

André Kaup, University of Erlangen-Nuremberg
Andrea Eichenseer, University of Erlangen-Nuremberg
Jürgen Seiler, University of Erlangen-Nuremberg
Markus Jonscher, University of Erlangen-Nuremberg
Thomas Richter, University of Erlangen-Nuremberg
Wolfgang Schnurrer, University of Erlangen-Nuremberg
Jan Koloda, University of Erlangen-Nuremberg
Andreas K. Maier, University of Erlangen-Nuremberg
Christian Riess, University of Erlangen-Nuremberg
Farzad Naderi, University of Erlangen-Nuremberg