
Publication


Featured research published by Jürgen Seiler.


IEEE Signal Processing Letters | 2010

Complex-Valued Frequency Selective Extrapolation for Fast Image and Video Signal Extrapolation

Jürgen Seiler; André Kaup

Signal extrapolation tasks arise in many forms in image and video signal processing. However, due to the widespread use of low-power and mobile devices, the computational complexity of an algorithm plays a crucial role in selecting it for a given problem. Within this contribution, we introduce complex-valued Frequency Selective Extrapolation for fast image and video signal extrapolation. The algorithm iteratively generates a generic complex-valued model of the signal to be extrapolated as a weighted superposition of Fourier basis functions. We further show that this algorithm is up to 10 times faster than the existing real-valued Frequency Selective Extrapolation, which takes the real-valued nature of the input signals into account during model generation. At the same time, the quality achievable by the complex-valued model generation is similar to that of the real-valued model generation.
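The model generation described above can be illustrated with a minimal sketch. The Python version below assumes a simple binary support mask and a fixed damping factor; the function name and parameter values are illustrative and not taken from the paper, which uses an isotropic weighting function and additional refinements.

```python
import numpy as np

def fse_complex(block, known_mask, n_iter=200, gamma=0.5):
    """Minimal complex-valued Frequency Selective Extrapolation sketch.

    block      : 2-D array; values in the unknown region are ignored.
    known_mask : boolean array, True where samples are known.
    Returns a complex-valued model defined over the whole block."""
    M, N = block.shape
    w = known_mask.astype(float)               # binary weighting of the support area
    f = np.where(known_mask, block, 0).astype(complex)
    model = np.zeros((M, N), dtype=complex)    # signal model built up iteratively
    norm = w.sum()                             # |phi_k| = 1 for Fourier basis functions

    for _ in range(n_iter):
        residual = (f - model) * w
        # One FFT projects the weighted residual onto all Fourier basis
        # functions simultaneously -- this is what makes the method fast.
        proj = np.fft.fft2(residual)
        # Pick the basis function that removes the most residual energy.
        k = np.unravel_index(np.argmax(np.abs(proj)), proj.shape)
        c = gamma * proj[k] / norm             # damped expansion coefficient update
        # Evaluate the selected basis function on the whole block and add it.
        u = np.exp(2j * np.pi * k[0] * np.arange(M) / M)[:, None]
        v = np.exp(2j * np.pi * k[1] * np.arange(N) / N)[None, :]
        model += c * (u * v)

    return model
```

The extrapolated samples are then read from the real part of the returned model.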


International Conference on Acoustics, Speech, and Signal Processing | 2008

Fast orthogonality deficiency compensation for improved frequency selective image extrapolation

Jürgen Seiler; André Kaup

The purpose of this paper is to introduce a very efficient algorithm for signal extrapolation. It can be widely used in image and video communication applications, e.g., for concealing block errors caused by transmission errors or for prediction in video coding. The signal extrapolation is performed by extending a signal from a limited number of known samples into areas beyond these samples. To this end, a finite set of orthogonal basis functions is used and the known part of the signal is projected onto them. Since the basis functions are not orthogonal over the region of the known samples, the projection does not yield the true portion each basis function contributes to the signal. The proposed algorithm efficiently copes with this non-orthogonality, resulting in very good objective and visual extrapolation results for edges, smooth areas, and structured areas alike. Compared to an existing implementation, the algorithm has a significantly lower computational complexity without any degradation in quality: the processing time can be reduced by a factor larger than 100.
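The non-orthogonality the abstract refers to can be seen in a few lines of Python: two Fourier basis functions that are orthogonal over the complete block are no longer orthogonal when the inner product is restricted to the known samples. The sizes below are arbitrary illustration values.

```python
import numpy as np

N = 16
x = np.arange(N)
phi = lambda k: np.exp(2j * np.pi * k * x / N)      # 1-D Fourier basis function

known = np.zeros(N, dtype=bool)
known[:10] = True                                    # only 10 of 16 samples are known

full    = np.vdot(phi(3), phi(5))                    # inner product over the full block
partial = np.vdot(phi(3)[known], phi(5)[known])      # restricted to the known samples

print(abs(full))     # ~0: orthogonal over the complete block
print(abs(partial))  # clearly non-zero: the orthogonality deficiency
```

Because of this, a projection coefficient estimated on the known area mixes contributions from other basis functions; the compensation proposed in the paper corrects for exactly this effect (the fast scheme itself is not reproduced here).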


International Conference on Image Processing | 2008

Spatio-temporal prediction in video coding by spatially refined motion compensation

Jürgen Seiler; André Kaup

The purpose of this contribution is to introduce a new method of signal prediction in video coding. Unlike most existing prediction methods, which use either temporal or spatial correlations to generate the prediction signal, the proposed method uses spatial and temporal correlations at the same time. The spatio-temporal prediction is obtained by first performing motion compensation for a macroblock, followed by a refinement step that exploits the correlations between the macroblock and its surroundings. The refinement step can be performed in the same manner at the decoder, so no additional side information has to be transmitted. Integrating the spatial refinement step into the H.264/AVC video codec yields a data rate reduction of up to nearly 15% and a PSNR increase of up to 0.75 dB compared to pure motion-compensated prediction.
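A rough sketch of the two-stage structure is given below, assuming integer motion vectors and a macroblock with reconstructed samples above and to its left. The actual refinement operator of the paper, which models the correlations between the macroblock and its surroundings, is left as a plug-in; all names and the margin value are illustrative.

```python
import numpy as np

def motion_compensate(ref_frame, mv, top, left, size=16):
    """Temporal part: fetch the motion-compensated macroblock (integer MV)."""
    dy, dx = mv
    return ref_frame[top + dy : top + dy + size, left + dx : left + dx + size]

def spatio_temporal_prediction(ref_frame, cur_recon, mv, top, left,
                               size=16, margin=8, refine=None):
    """Sketch: temporal prediction first, then a spatial refinement that also
    sees the already reconstructed neighborhood of the macroblock."""
    mc_block = motion_compensate(ref_frame, mv, top, left, size)

    # Extended area: reconstructed samples above/left of the macroblock
    # together with the motion-compensated prediction inside it.
    area = cur_recon[top - margin : top + size, left - margin : left + size].copy()
    area[margin:, margin:] = mc_block

    # The decoder can run the same refinement, so no side information is needed.
    refined = refine(area) if refine is not None else area
    return refined[margin:, margin:]
```

A model-based extrapolator such as the FSE sketch above could serve as the `refine` step, regenerating the macroblock jointly with its surroundings.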


Signal Processing: Image Communication | 2014

High dynamic range video reconstruction from a stereo camera setup

Michel Bätz; Thomas Richter; Jens-Uwe Garbas; Anton Papst; Jürgen Seiler; André Kaup

To overcome the dynamic range limitations of images taken with regular consumer cameras, several methods exist for creating high dynamic range (HDR) content. Current low-budget solutions apply temporal exposure bracketing, which is not applicable to dynamic scenes or HDR video. In this article, a framework is presented that utilizes two cameras to realize spatial exposure bracketing, where the different exposures are distributed among the cameras. Such a setup allows HDR images of dynamic scenes and HDR video due to its frame-by-frame operating principle, but faces challenges in the stereo matching and HDR generation steps. The modules in this framework are therefore selected to alleviate these challenges and to properly handle under- and oversaturated regions. In comparison to existing work, the camera response calculation is shifted to an offline process and a masking with a saturation map before the actual HDR generation is proposed. The first aspect enables the use of more complex camera setups with different sensors and provides robust camera responses. The second ensures that only necessary pixel values are taken from the additional camera view and thus reduces errors in the final HDR image. The resulting HDR images are compared with the quality metric HDR-VDP-2, and numerical results are given for the first time. For the Middlebury test images, an average gain of 52 points on a 0-100 mean opinion score is achieved in comparison to temporal exposure bracketing with camera motion. Finally, HDR video results are provided.
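The saturation-map masking can be sketched as follows, assuming both views are already linearized (camera response inverted offline, as proposed) and the auxiliary view has already been stereo-matched and warped to the main view. The thresholds, names, and the hard selection are illustrative simplifications, not the framework's actual weighting.

```python
import numpy as np

def merge_with_saturation_mask(main_lin, aux_warped_lin, t_main, t_aux,
                               low=0.02, high=0.98):
    """Use the differently exposed auxiliary view only where the main view is
    under- or oversaturated, keeping stereo-matching errors out of the
    well-exposed regions. Inputs are linear images normalized to [0, 1];
    t_main / t_aux are the exposure times."""
    rad_main = main_lin / t_main                  # radiance estimate, main view
    rad_aux = aux_warped_lin / t_aux              # radiance estimate, auxiliary view

    # Saturation map: True where the main view's pixels are unreliable.
    saturated = (main_lin < low) | (main_lin > high)

    return np.where(saturated, rad_aux, rad_main)
```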


International Conference on Image Processing | 2015

Hybrid super-resolution combining example-based single-image and interpolation-based multi-image reconstruction approaches

Michel Bätz; Andrea Eichenseer; Jürgen Seiler; Markus Jonscher; André Kaup

Achieving a higher spatial resolution is of particular interest in many applications such as video surveillance and can be realized by employing higher resolution sensors or applying super-resolution methods. Traditional super-resolution algorithms are based on either a single low resolution image or on multiple low resolution frames. In this paper, a hybrid super-resolution method is proposed which combines both a single-image and a multi-image approach using a soft decision mask. The mask is computed from the motion information utilized in the multi-image super-resolution part. This concept is shown to work for one particular setup but is also extensible toward other combinations of single-image and multi-image super-resolution algorithms as well as other merging metrics. Simulation results show an average luminance PSNR gain of up to 0.85 dB and 0.59 dB for upscaling factors of 2 and 4, respectively. Visual results substantiate the objective results.
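The soft-decision merging itself reduces to a per-pixel convex combination of the two reconstructions. How the mask is derived from the motion information is not reproduced here, so the confidence input below is an assumption.

```python
import numpy as np

def hybrid_sr_blend(single_sr, multi_sr, motion_confidence):
    """Blend the example-based single-image result and the interpolation-based
    multi-image result with a soft decision mask in [0, 1] (high where the
    motion estimates of the multi-image part are trusted)."""
    mask = np.clip(motion_confidence, 0.0, 1.0)
    return mask * multi_sr + (1.0 - mask) * single_sr
```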


International Conference on Image Processing | 2012

High dynamic range video by spatially non-regular optical filtering

Michael Schöberl; Alexander Belz; Jürgen Seiler; Siegfried Foessel; André Kaup

We present a new method for capturing high dynamic range video (HDRV). Our method is based on spatially varying exposures, where individual pixels are covered with filters of different optical attenuation. To prevent a loss in resolution, we use a new non-regular arrangement of the attenuation pattern. Subsequent image reconstruction based on a sparsity assumption then allows the reconstruction of natural images with high detail.
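A sketch of the acquisition model only: each pixel either carries an attenuation filter or not, arranged non-regularly (here simply a random mask). The attenuation value, the fraction of filtered pixels, and the function name are assumptions; the sparsity-based reconstruction of the two irregular sub-lattices is not shown.

```python
import numpy as np

def simulate_nonregular_exposure(radiance, attenuation=0.25,
                                 fraction_filtered=0.5, full_well=1.0, seed=0):
    """Simulate a sensor whose pixels are covered non-regularly with optical
    attenuation filters. Returns the clipped raw image, the filter mask and
    the per-pixel gain needed to linearize both pixel populations."""
    rng = np.random.default_rng(seed)
    filtered = rng.random(radiance.shape) < fraction_filtered   # non-regular pattern
    gain = np.where(filtered, attenuation, 1.0)
    raw = np.clip(radiance * gain, 0.0, full_well)              # sensor saturation
    return raw, filtered, gain
```

Each of the two pixel populations then forms a non-regularly sampled image of the scene at a different effective exposure; the HDR frame is obtained by reconstructing both to full resolution under the sparsity assumption mentioned in the abstract.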


Multimedia Signal Processing | 2008

Adaptive joint spatio-temporal error concealment for video communication

Jürgen Seiler; André Kaup

In the past years, video communication has found its application in an increasing number of environments. Unfortunately, some of them are error-prone, and the risk of block losses caused by transmission errors is ubiquitous. To reduce the effects of these block losses, a new spatio-temporal error concealment algorithm is presented. The algorithm uses spatial as well as temporal information for extrapolating the signal into the lost areas. The extrapolation is carried out in two steps: first, a preliminary temporal extrapolation is performed, which is then used together with the spatial neighborhood of the lost block to generate a model of the original signal. Applying this spatial refinement achieves a significantly higher concealment quality, resulting in a gain of up to 5.2 dB in PSNR compared to the underlying unrefined, purely temporal extrapolation.
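One way the two information sources could be assembled before model generation is sketched below: the correctly received spatial neighborhood keeps full weight, while the preliminary temporal extrapolation fills the lost block as an initial estimate. The lower weight for the temporal part and the margin are assumed values for illustration; a weighted variant of the FSE sketch shown earlier (replacing its binary mask by these weights) would then regenerate the lost block from this area.

```python
import numpy as np

def build_concealment_area(cur_frame, prev_frame, top, left,
                           size=16, margin=8, w_temporal=0.5):
    """Assemble the extrapolation area and its sample weights for a lost block.
    Received spatial neighbors get full weight; the co-located block from the
    previous frame serves as the preliminary temporal extrapolation and is
    trusted less (w_temporal is an illustrative value)."""
    h = size + 2 * margin
    area = cur_frame[top - margin : top - margin + h,
                     left - margin : left - margin + h].astype(float).copy()
    weights = np.ones((h, h))

    # Preliminary temporal extrapolation for the lost block.
    area[margin:margin + size, margin:margin + size] = \
        prev_frame[top:top + size, left:left + size]
    weights[margin:margin + size, margin:margin + size] = w_temporal
    return area, weights
```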


Multimedia Signal Processing | 2012

Robust super-resolution in a multiview setup based on refined high-frequency synthesis

Thomas Richter; Jürgen Seiler; Wolfgang Schnurrer; André Kaup

Increasing image sharpness and thus improving visual quality is an important task in multiview image and video processing. We propose a novel super-resolution approach for multiview images in a mixed-resolution setup that is robust to various depth map distortions. The considered distortion scenarios may be caused by an inaccurate calibration of the depth camera or a limitation of the depth range. Our method is based on a refined high-frequency synthesis that relies on a blockwise and depth-dependent low-frequency registration. The refinement step efficiently adapts the high-frequency content from a neighboring high-resolution camera to a low-resolution view and thereby compensates the displacement caused by depth inaccuracies. For undistorted depth maps, the results show that our algorithm yields a PSNR gain of up to 1.33 dB with respect to a comparable unrefined super-resolution approach for a mixed-resolution multiview video plus depth format. Compared to the initial low-resolution view, a PSNR gain of up to 2.61 dB is obtained. For distorted depth maps, a PSNR gain of as much as 4.78 dB is achieved with respect to the reference super-resolution algorithm. The PSNR gains are confirmed by the corresponding SSIM values, which show a similar behaviour, and the improvement in visual quality is convincing for all considered scenarios.
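The basic synthesis step can be sketched as below, assuming the high-resolution neighbor has already been warped to the target view using the depth map. The depth-dependent blockwise registration and the displacement-compensating refinement, which are the actual contribution, are omitted, and the filter choices are illustrative.

```python
import numpy as np
from scipy import ndimage

def synthesize_high_frequency(lr_view, hr_neighbor_warped, scale=2, sigma=1.5):
    """Basic high-frequency synthesis for a mixed-resolution view pair.
    lr_view            : low-resolution target view.
    hr_neighbor_warped : high-resolution neighbor, already warped to the
                         target position via the depth map (same size as
                         the upscaled target)."""
    # Interpolation of the low-resolution view supplies the low frequencies.
    lr_up = ndimage.zoom(lr_view, scale, order=3)

    # High-frequency part of the warped high-resolution neighbor.
    hf = hr_neighbor_warped - ndimage.gaussian_filter(hr_neighbor_warped, sigma)

    # Synthesis: add the borrowed high frequencies to the upscaled view.
    return lr_up + hf
```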


IEEE Transactions on Circuits and Systems for Video Technology | 2016

Robust Super-Resolution for Mixed-Resolution Multiview Image Plus Depth Data

Thomas Richter; Jürgen Seiler; Wolfgang Schnurrer; André Kaup

Increasing spatial resolution and thus improving image quality is a key issue in mixed-resolution multiview image and video processing. Given adjacent camera perspectives with different spatial resolutions and their corresponding depth information, the high-frequency part of a high-resolution view can be used to increase the image quality of a neighboring low-resolution camera perspective. However, a reasonable projection of high-frequency information onto the image plane of a neighboring low-resolution view typically requires pixel-wise error-free depth data for the high-resolution reference image. Starting from this, a novel image super-resolution approach is proposed that is robust against both inaccurate depth acquisition and imperfect calibration of spatially low-resolution depth sensors. The algorithm is based on displacement-compensated high-frequency synthesis and aims at correcting the projection errors introduced by inaccurate depth information. The proposed approach is further extended by a signal extrapolation technique. For a wide range of scenarios, the proposed framework achieves substantial objective and visual gains compared with the considered reference approaches. The improvement in quality is shown for both simulated and self-recorded experimental data.
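The signal-extrapolation extension mentioned in the abstract can be illustrated as follows: wherever the depth-based projection produced no usable high-frequency samples (disocclusions, depth errors), the missing samples are extrapolated from the valid surroundings. The helper name is hypothetical, and the block-wise processing of a real implementation is ignored.

```python
import numpy as np

def fill_projection_holes(hf_image, valid_mask, extrapolate):
    """Replace invalid high-frequency samples by extrapolated ones.
    extrapolate(block, known_mask) can be any model-based extrapolator,
    e.g. the FSE sketch given earlier in this list."""
    model = extrapolate(np.where(valid_mask, hf_image, 0.0), valid_mask)
    filled = hf_image.copy()
    filled[~valid_mask] = np.real(model)[~valid_mask]
    return filled
```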


International Conference on Acoustics, Speech, and Signal Processing | 2014

Frequency selective extrapolation with residual filtering for image error concealment

Ján Koloda; Jürgen Seiler; André Kaup; Victoria E. Sánchez; Antonio M. Peinado

The purpose of signal extrapolation is to estimate unknown signal parts from known samples. This task is especially important for error concealment in image and video communication. To obtain a high-quality reconstruction, assumptions have to be made about the underlying signal in order to solve this underdetermined problem. Among existing reconstruction algorithms, frequency selective extrapolation (FSE) achieves high performance by assuming that image signals can be sparsely represented in the frequency domain. However, FSE does not take the low-pass behaviour of natural images into account. In this paper, we propose a modified FSE that incorporates this prior knowledge into the modelling, yielding significant PSNR gains.
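One way to bring such a low-pass prior into the FSE iteration is to weight the residual spectrum before the basis selection; the concrete residual filter of the paper is not reproduced, so the radially decaying weight below is only a stand-in.

```python
import numpy as np

def lowpass_prior(shape, strength=4.0):
    """Frequency-domain weight that favours low-frequency basis functions
    (a generic radial decay; the paper's actual residual filter differs)."""
    M, N = shape
    fy = np.minimum(np.arange(M), M - np.arange(M)) / M     # wrapped frequencies
    fx = np.minimum(np.arange(N), N - np.arange(N)) / N
    radius = np.sqrt(fy[:, None] ** 2 + fx[None, :] ** 2)
    return 1.0 / (1.0 + strength * radius)

# Inside the FSE iteration sketched for the 2010 paper, the selection step
#   k = np.unravel_index(np.argmax(np.abs(proj)), proj.shape)
# would become
#   k = np.unravel_index(np.argmax(np.abs(proj) * lowpass_prior(proj.shape)),
#                        proj.shape)
```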

Collaboration


Dive into Jürgen Seiler's collaborations.

Top Co-Authors

André Kaup | University of Erlangen-Nuremberg
Thomas Richter | University of Erlangen-Nuremberg
Wolfgang Schnurrer | University of Erlangen-Nuremberg
Markus Jonscher | University of Erlangen-Nuremberg
Michel Bätz | University of Erlangen-Nuremberg
Huynh Van Luong | University of Erlangen-Nuremberg
Søren Forchhammer | Technical University of Denmark
Nils Genser | University of Erlangen-Nuremberg
Nikos Deligiannis | Vrije Universiteit Brussel