Eugen Wige
University of Erlangen-Nuremberg
Publications
Featured research published by Eugen Wige.
International Conference on Image Processing | 2013
Eugen Wige; Gilbert Yammine; Peter Amon; Andreas Hutter; André Kaup
This paper presents an intra-frame prediction scheme designed for lossless coding with HEVC. The proposed method is a pixel-wise prediction based on original samples, realized as a separate intra prediction mode that replaces the PLANAR mode. To perform the prediction, a four-sample template around the pixel to be predicted is compared to the respective template of each pixel in a four-pixel neighborhood. For each reference template, the sum of absolute differences (SAD) is determined, and a table look-up of the SAD value gives the weighting factor for the corresponding neighborhood pixel. The predictor for the current pixel is then calculated as the weighted average of the neighborhood pixels. Compared to the unmodified HEVC Test Model HM-9.1 configured for lossless coding by disabling/bypassing transformation, quantization, and in-loop filters, the proposed method provides average bitrate savings of up to 10.88% for intra-only coding at similar computational complexity.
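The per-pixel prediction step described above can be sketched as follows. This is a hypothetical simplification: the causal four-sample template follows the description, but the SAD-to-weight table is replaced by an assumed exponential decay rather than the paper's actual look-up values.

```python
import numpy as np

def swp_predict(img, y, x):
    """Sample-based weighted prediction sketch (assumed weights, not HM's table).

    Requires y >= 2 and 2 <= x <= img.shape[1] - 3 so all templates exist.
    """
    # Causal 4-sample template: left, top-left, top, top-right
    def template(yy, xx):
        return np.array([img[yy, xx - 1], img[yy - 1, xx - 1],
                         img[yy - 1, xx], img[yy - 1, xx + 1]], dtype=np.float64)

    t_cur = template(y, x)
    # Reference pixels: the four causal neighbors of the current pixel
    refs = [(y, x - 1), (y - 1, x - 1), (y - 1, x), (y - 1, x + 1)]
    weights, values = [], []
    for ry, rx in refs:
        sad = float(np.abs(template(ry, rx) - t_cur).sum())
        # Table look-up replaced by an assumed exponential decay in the SAD
        weights.append(2.0 ** (-sad / 4.0))
        values.append(float(img[ry, rx]))
    weights = np.array(weights)
    # Predictor: weighted average of the neighborhood pixels
    return float(np.dot(weights, values) / weights.sum())
```

On a flat region all templates match (SAD = 0), so every neighbor gets full weight and the predictor reproduces the local value exactly.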
Picture Coding Symposium | 2013
Eugen Wige; Gilbert Yammine; Peter Amon; Andreas Hutter; André Kaup
The recently introduced High Efficiency Video Coding (HEVC) standard is currently being investigated further for use in professional applications. The Range Extensions under consideration introduce higher bit depths and additional color formats on the one hand, and aim to improve the coding efficiency of HEVC for high-fidelity as well as lossless compression on the other. In this paper we investigate and improve the recently introduced Sample-based Weighted Prediction (SWP) for HEVC lossless coding. Although very efficient for natural video content, the SWP algorithm can be further improved for screen content by using a directional template predictor in cases where SWP yields a poor prediction. The newly introduced predictor improves the lossless coding results by up to 9.9% compared to the unmodified HEVC reference software for lossless compression.
International Conference on Image Processing | 2010
Eugen Wige; Peter Amon; Andreas Hutter; André Kaup
The major gain of video coding over single-image coding comes from temporal prediction, which exploits the correlation between adjacent frames. In high-quality video coding, however, and especially in lossless video coding, the compression gain of P-frames over I-frames becomes very small. The reason is that the reference frame for inter prediction is not accurate enough, so the number of inter-predicted blocks in a P-frame becomes small relative to the number of intra-predicted blocks. In order to generate a better predictor for inter prediction, we propose to remove additive noise from the reference frame using an adaptive Wiener filter. This way, we achieve a maximum compression gain of 4.6% and an average compression gain of 3.3% compared to the H.264/AVC standard for lossless coding of high-quality image sequences, without noticeably affecting the encoding time.
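The reference-frame denoising idea can be illustrated with a standard pixel-wise adaptive Wiener filter. This is a generic textbook sketch, not the paper's exact filter; the window size and the assumption of a known noise variance are simplifications.

```python
import numpy as np

def wiener_denoise(frame, noise_var, k=3):
    """Pixel-wise adaptive Wiener filter (sketch; noise variance assumed known).

    out = m + max(v - noise_var, 0) / v * (x - m),
    where m and v are the local mean and variance over a k x k window.
    """
    pad = k // 2
    f = np.pad(frame.astype(np.float64), pad, mode='edge')
    h, w = frame.shape
    mean = np.zeros((h, w))
    sq = np.zeros((h, w))
    # Accumulate window sums by shifting the padded frame
    for dy in range(k):
        for dx in range(k):
            win = f[dy:dy + h, dx:dx + w]
            mean += win
            sq += win * win
    mean /= k * k
    var = sq / (k * k) - mean ** 2
    # Attenuate toward the local mean where the signal variance is low
    gain = np.maximum(var - noise_var, 0.0) / np.maximum(var, 1e-12)
    return mean + gain * (frame - mean)
```

On a flat area corrupted by additive noise, the gain is close to zero and the filter suppresses most of the noise, which is exactly what makes the denoised frame a better inter-prediction reference.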
Visual Communications and Image Processing | 2012
Wolfgang Schnurrer; Jürgen Seiler; Eugen Wige; André Kaup
A major advantage of the wavelet transform in image and video compression is its scalability, and wavelet-based coding of medical computed tomography (CT) data is becoming increasingly popular. While much effort has been spent on the encoding of wavelet coefficients, extending the transform with a compensation method, as done in video coding, has not gained much attention so far. We analyze two compensation methods for medical CT data and compare the characteristics of the displacement-compensated wavelet transform with those of video data. We show that for thorax CT data the transform coding gain can be improved by a factor of 2 and the quality of the lowpass band by 8 dB in terms of PSNR compared to the original transform without compensation.
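The benefit of placing compensation inside the lifting structure can be illustrated with a displacement-compensated Haar transform between two slices. The global integer shift used here is an assumed stand-in for the compensation methods analyzed in the paper, not their actual displacement model.

```python
import numpy as np

def haar_mctf(a, b, shift):
    """Displacement-compensated Haar lifting between two slices (sketch).

    `shift` is an assumed global integer displacement taking slice a onto b.
    Prediction step: H = b - comp(a); update step: L = a + comp_inv(H) / 2.
    """
    a = a.astype(np.float64)
    b = b.astype(np.float64)
    comp = np.roll(a, shift, axis=(0, 1))                 # warp a toward b
    H = b - comp                                          # highpass band
    back = np.roll(H, (-shift[0], -shift[1]), axis=(0, 1))  # warp H back to a
    L = a + back / 2.0                                    # lowpass band
    return L, H
```

When the displacement is compensated exactly, the highpass band collapses to zero and the lowpass band equals the slice itself, which is the source of the transform coding gain reported above.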
International Conference on Image Processing | 2010
Gilbert Yammine; Eugen Wige; André Kaup
In this paper, we present a new low-complexity no-reference metric that assesses the visibility of blocking artifacts in DCT-coded images. The metric works in the spatial domain as well as in the gradient image domain without modification. Our simulations show that the metric is highly robust across several image-distortion databases and very consistent with subjective mean opinion scores (MOS). The metric also provides a blocking visibility map that can be used for adaptive filtering of blocking artifacts.
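A minimal blockiness measure in the spirit of this abstract compares luminance steps across 8x8 block boundaries with steps inside the blocks. This is an illustrative sketch, not the paper's metric; the boundary/interior ratio and the fixed block size are assumptions.

```python
import numpy as np

def blockiness(img, block=8):
    """No-reference blockiness score (sketch, not the paper's exact metric).

    Ratio of the mean absolute luminance step across vertical block
    boundaries to the mean step inside blocks; values near 1 mean no
    visible blocking, large values mean strong blocking.
    """
    img = img.astype(np.float64)
    dh = np.abs(np.diff(img, axis=1))        # horizontal neighbor differences
    cols = np.arange(dh.shape[1])
    at_edge = (cols % block) == (block - 1)  # differences crossing a boundary
    edge = dh[:, at_edge].mean()
    inner = dh[:, ~at_edge].mean()
    return edge / max(inner, 1e-12)
```

On a smooth ramp the boundary and interior steps agree (score near 1), while an image of constant 8x8 blocks concentrates all steps on the boundaries (large score); the per-column contributions could likewise be kept as a visibility map.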
Picture Coding Symposium | 2012
Gilbert Yammine; Eugen Wige; Franz Simmet; Dieter Niederkorn; André Kaup
In multimedia systems, system errors and artifacts should be avoided in order to keep the user's Quality of Experience (QoE) as high as possible. To this end, the system should be tested and monitored over a long period to ensure normal operation. In this paper, we present a new algorithm that automatically detects freezing artifacts in coded videos. The system output is captured and encoded, and the error/artifact detection is run later on the decoded video. One major constraint on our detection algorithm is that every freeze must be detected so that the reason for its occurrence can be analyzed. Such a constraint entails a high number of false alarms when using standard freeze detection algorithms. We therefore present a new algorithm that maximizes the number of true positives while minimizing the number of false positives.
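The basic freeze-detection idea, before any of the false-alarm handling this paper contributes, can be sketched as a run-length test on frame differences. The difference threshold and minimum run length below are assumptions for illustration.

```python
import numpy as np

def detect_freezes(frames, diff_thresh=0.5, min_len=3):
    """Freeze detection sketch (hypothetical thresholds).

    A freeze is reported as (start, end) frame indices whenever at least
    min_len consecutive frames have a mean absolute difference to their
    predecessor below diff_thresh.
    """
    frozen = [np.abs(b.astype(np.float64) - a).mean() < diff_thresh
              for a, b in zip(frames, frames[1:])]
    events, run_start = [], None
    for i, f in enumerate(frozen):
        if f and run_start is None:
            run_start = i
        elif not f and run_start is not None:
            if i - run_start >= min_len:
                events.append((run_start + 1, i))
            run_start = None
    if run_start is not None and len(frozen) - run_start >= min_len:
        events.append((run_start + 1, len(frozen)))
    return events
```

A plain threshold like this catches every freeze when the threshold is set high enough, but that is exactly the regime where still scenes trigger false alarms, which motivates the refined algorithm in the paper.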
International Conference on Vehicular Electronics and Safety | 2012
Gilbert Yammine; Eugen Wige; Franz Simmet; Dieter Niederkorn; André Kaup
In this paper, we present an automated error/artifact detection framework that monitors the infotainment system of a car. Specifically, the paper focuses on the detection of freezing artifacts that can occur in the car's navigation system during field tests. Since the motion in the navigation display also stops when the car stops, it is crucial to differentiate between this situation and a real freezing artifact. We propose an algorithm that reliably detects the map jumps which occur after freezing artifacts. The proposed algorithm extracts lines from the possibly frozen frame and the frame following the freeze event, and matches the extracted lines in order to obtain a geometric transformation matrix that describes the motion between the two frames. If the motion is larger than normal, a map jump is detected and a freeze event is signaled.
Picture Coding Symposium | 2010
Gilbert Yammine; Eugen Wige; André Kaup
In this paper, we provide a simple method for analyzing the GOP structure of an MPEG-2 or H.264/AVC decoded video without access to the bitstream. Noise estimation is applied to the decoded frames, and the variance of the noise in the different I-, P-, and B-frames is measured. After encoding, the noise variance in the video sequence shows a periodic pattern, which allows the extraction of the GOP period as well as the frame types. This algorithm can be used alongside other algorithms to blindly analyze the encoding history of a video sequence. The method has been tested on several MPEG-2 DVB and DVD streams as well as on H.264/AVC encoded sequences and shows successful results in both cases.
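The periodicity argument can be illustrated as follows: given a per-frame noise-variance sequence, the GOP period shows up as the dominant nonzero-lag autocorrelation peak. This is a sketch of the principle only, not the paper's estimator, and it assumes the variance sequence is already available.

```python
import numpy as np

def gop_period(noise_vars):
    """Estimate the GOP period from a per-frame noise-variance sequence (sketch).

    I-frames retain more noise than P-/B-frames, so the variance sequence is
    periodic with the GOP length; the largest autocorrelation value at a
    nonzero lag gives the period.
    """
    v = np.asarray(noise_vars, dtype=np.float64)
    v = v - v.mean()
    ac = np.correlate(v, v, mode='full')[len(v) - 1:]  # lags 0 .. N-1
    # Ignore lag 0 and implausibly long lags
    lags = np.arange(1, len(v) // 2)
    return int(lags[np.argmax(ac[lags])])
```

A sequence whose noise variance spikes on every I-frame, e.g. every 12th frame, yields its GOP length directly from the autocorrelation peak.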
IEEE Transactions on Circuits and Systems for Video Technology | 2014
Gilbert Yammine; Eugen Wige; Franz Simmet; Dieter Niederkorn; André Kaup
We present a new and fast line descriptor and matching algorithm to geometrically register video frames that are deformed by a similarity transformation and contain line segments but very little texture detail, such as navigation maps. Line segments are extracted from the frames, and each line is described with a novel line descriptor that does not depend on pixel intensities. The described lines are then matched, and the matches are fed to an outlier removal algorithm in order to estimate the parameters of the transformation describing the global motion between the frames. We propose a method for fast parameter estimation of the transformation using line segments instead of points. Additionally, we apply this algorithm to testing and error detection for navigation systems, where we show how to detect map jump artifacts that can occur during the development of these systems, leading to jerky, unsmooth motion between frames. The proposed descriptor and its matching algorithm are shown to be fast enough for online use and very robust against a wide range of translation, rotation, and scale changes. Furthermore, the error detection algorithm detects almost all map jump artifacts while maintaining a very low number of false alarms.
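The final parameter-estimation stage, recovering a similarity transform from already-matched line segments, can be sketched as follows. The median-based estimation is an assumed simplification standing in for the paper's fast estimator, and the descriptor and outlier-removal stages are omitted entirely.

```python
import numpy as np

def similarity_from_lines(src, dst):
    """Estimate a similarity transform from matched line segments (sketch).

    src, dst: arrays of shape (N, 4) holding (x1, y1, x2, y2) per segment,
    with consistent endpoint ordering. Rotation comes from the median angle
    difference, scale from the median length ratio, and translation from
    the segment midpoints. Returns (R, t) with R including the scale.
    """
    src = np.asarray(src, dtype=np.float64)
    dst = np.asarray(dst, dtype=np.float64)
    d_s = src[:, 2:] - src[:, :2]
    d_d = dst[:, 2:] - dst[:, :2]
    ang = np.arctan2(d_d[:, 1], d_d[:, 0]) - np.arctan2(d_s[:, 1], d_s[:, 0])
    theta = np.median(np.arctan2(np.sin(ang), np.cos(ang)))  # wrap to [-pi, pi]
    scale = np.median(np.linalg.norm(d_d, axis=1) / np.linalg.norm(d_s, axis=1))
    R = scale * np.array([[np.cos(theta), -np.sin(theta)],
                          [np.sin(theta),  np.cos(theta)]])
    m_s = (src[:, :2] + src[:, 2:]) / 2.0
    m_d = (dst[:, :2] + dst[:, 2:]) / 2.0
    t = np.median(m_d - m_s @ R.T, axis=0)
    return R, t
```

With the global motion recovered this way, a map jump would show up as a translation or rotation far outside the normal per-frame range.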
International Conference on Image Processing | 2013
Gilbert Yammine; Eugen Wige; Franz Simmet; Dieter Niederkorn; André Kaup
In this paper we present a new line descriptor that can be used for tracking 2D navigation maps that are low in texture detail. Tracking this kind of map is very important in the development and testing phases of a navigation system. In order to track the motion of the map, which can be a combination of translation, rotation, and scaling (a similarity transform), line segments are used as features and are matched between two consecutive frames using the proposed descriptor in order to determine the motion between the frames. The descriptor is shown to be very robust against typical translation, rotation, and scale transformations, while requiring an acceptable processing time.