
Publication


Featured research published by Matthias Narroschke.


IEEE Circuits and Systems Magazine | 2004

Video coding with H.264/AVC: tools, performance, and complexity

Jörn Ostermann; Jan Bormans; Peter List; Detlev Marpe; Matthias Narroschke; Fernando Pereira; Thomas Stockhammer; Thomas Wedi

H.264/AVC, the result of the collaboration between the ISO/IEC Moving Picture Experts Group and the ITU-T Video Coding Experts Group, is the latest standard for video coding. The goals of this standardization effort were enhanced compression efficiency and a network-friendly video representation for interactive (video telephony) and non-interactive (broadcast, streaming, storage, video on demand) applications. H.264/AVC provides gains in compression efficiency of up to 50% over a wide range of bit rates and video resolutions compared to previous standards. This gain comes at the cost of higher decoder complexity: about four times that of MPEG-2 and twice that of MPEG-4 Visual Simple Profile. This paper provides an overview of the new tools, features, and complexity of H.264/AVC.


IEEE Transactions on Circuits and Systems for Video Technology | 2012

HEVC Deblocking Filter

Andrey Norkin; Gisle Bjontegaard; Arild Fuldseth; Matthias Narroschke; Masaru Ikeda; Kenneth Andersson; Minhua Zhou; G. Van der Auwera

This paper describes the in-loop deblocking filter used in the upcoming High Efficiency Video Coding (HEVC) standard to reduce visible artifacts at block boundaries. The deblocking filter performs detection of the artifacts at the coded block boundaries and attenuates them by applying a selected filter. Compared to the H.264/AVC deblocking filter, the HEVC deblocking filter has lower computational complexity and better parallel processing capabilities while still achieving significant reduction of the visual artifacts.
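The detection-and-attenuation behaviour described above can be sketched for a single 1-D boundary. This is a simplified form of the HEVC normal filter, not the full standard: `p0..p2` and `q0..q2` are the three samples on either side of the block boundary, `beta` the smoothness threshold, and `tc` the clipping limit.

```python
def deblock_boundary(p2, p1, p0, q0, q1, q2, beta, tc):
    # Simplified 1-D sketch of HEVC normal deblocking: filter only when
    # the local activity indicates a blocking artifact, i.e. the content
    # is smooth on both sides of the boundary.
    activity = abs(p2 - 2 * p1 + p0) + abs(q2 - 2 * q1 + q0)
    if activity >= beta:
        return p0, q0                       # natural edge: leave untouched
    delta = (9 * (q0 - p0) - 3 * (q1 - p1) + 8) >> 4
    delta = max(-tc, min(tc, delta))        # clip to limit the correction
    return p0 + delta, q0 - delta           # pull boundary samples together
```

Flat content with a step of 10 across the boundary is attenuated (limited by `tc`), while a strong natural edge passes through unfiltered.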


IEEE Transactions on Circuits and Systems for Video Technology | 2010

Video Coding Using a Simplified Block Structure and Advanced Coding Techniques

Frank Jan Bossen; Virginie Drugeon; Edouard Francois; Joel Jung; Sandeep Kanumuri; Matthias Narroschke; Hisao Sasai; Joel Sole; Yoshinori Suzuki; Thiow Keng Tan; Thomas Wedi; Steffen Wittmann; Peng Yin; Yunfei Zheng

This paper describes a new video coding scheme based on a simplified block structure that significantly outperforms the coding efficiency of the ISO/IEC 14496-10 ITU-T H.264 advanced video coding (AVC) standard. Its conceptual design is similar to a typical block-based hybrid coder applying prediction and subsequent prediction error coding. The basic coding unit is an 8 × 8 block for inter, and an 8 × 8 or a 16 × 16 block for intra, instead of the usual 16 × 16 macroblock. No larger block sizes are considered for prediction and transform. Based on this simplified block structure, the coding scheme uses simple and fundamental coding tools with optimized encoding algorithms. In particular, the motion representation is based on a minimum partitioning with blocks sharing motion borders. In addition, compared to AVC, the new and improved coding techniques include: block-based intensity compensation, motion vector competition, adaptive motion vector resolution, adaptive interpolation filters, edge-based intra prediction and enhanced chrominance prediction, intra template matching, larger transforms and adaptive switchable transform selection for intra and inter blocks, and nonlinear and frame-adaptive de-noising loop filters. Finally, the entropy coder uses a generic flexible zero-tree representation applied to both motion and texture data. Attention has also been given to algorithm designs that facilitate parallelization. Compared to AVC, the new coding scheme offers clear benefits in terms of subjective video quality at the same bit rate. Objective quality improvements are equally significant. At the same quality, an average bit-rate reduction of 31% compared to AVC is reported.


picture coding symposium | 2013

Extending HEVC by an affine motion model

Matthias Narroschke; Robin Swoboda

Digital video coding standards apply hybrid coding. Each frame is partitioned into blocks of various sizes. For each block, motion compensated prediction with a translational motion model and prediction error coding is applied. The maximum block size of standards preceding HEVC is 16 × 16 samples; for HEVC it is increased to 64 × 64 samples. For non-translational motion, the translational model leads to a high data rate due to inaccurate prediction. Although an affine motion model can describe non-translational motion, the data rate of its additional parameters is often larger than the data rate reduction achieved by more accurate prediction, especially if small blocks are used. This paper investigates whether HEVC's increased block size benefits the affine motion model. HEVC is extended such that for each block, either the translational or the affine motion model is used, selected by minimization of the Lagrangian costs of data rate and summed absolute values of prediction error coefficients. For a set of ten test sequences with non-translational motion, the data rate is reduced by 6.3% on average and by up to 23.7% compared to HEVC at the same PSNR. When the maximum block size is limited to 16 × 16 samples, the average data rate reduction drops to 0.1%, which demonstrates the synergy between the increased block size and the affine motion model.
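The per-block model selection described above can be sketched as follows. Function names and the fixed-rate arguments are illustrative assumptions, not taken from the paper.

```python
def lagrangian_cost(sad, rate_bits, lam):
    # Cost criterion from the paper: J = D + lambda * R, where D is the
    # summed absolute values of the prediction error coefficients and R
    # the data rate of the motion parameters.
    return sad + lam * rate_bits

def affine_mv(params, x, y):
    # Affine model: a per-sample motion vector from six parameters
    # params = (a, b, c, d, e, f); unlike the translational model it can
    # additionally describe rotation, zoom, and shear.
    a, b, c, d, e, f = params
    return (a * x + b * y + c, d * x + e * y + f)

def select_model(sad_trans, bits_trans, sad_affine, bits_affine, lam):
    # Per-block decision: pick the motion model with the smaller
    # Lagrangian cost. The affine model pays for six parameters instead
    # of two, so it only wins when its prediction is much more accurate.
    j_t = lagrangian_cost(sad_trans, bits_trans, lam)
    j_a = lagrangian_cost(sad_affine, bits_affine, lam)
    return "affine" if j_a < j_t else "translational"
```

With `a = e = 1` and the other linear terms zero, the affine model degenerates to a pure translation, which is why the extension never loses to plain HEVC by more than the signalling overhead.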


asilomar conference on signals, systems and computers | 2002

Functionalities and costs of scalable video coding for streaming services

Matthias Narroschke

Functionalities and costs of scalable video coding techniques are analyzed for streaming services. Temporal, spatial and amplitude scalability, and combinations of them, are considered. Functionalities are: reduction of netload for multicast transmission; reduction of the server storage capacity; graceful degradation in case of transmission errors. Costs are: an increase of netload for unicast transmission; an increase of computational expense in the decoder. The result is that presently only temporal scalability has acceptable costs. Costs of known spatial scalability techniques are too large to be economically attractive. If costs of amplitude scalability can be reduced by future research, the use of this technique combined with temporal scalability will allow improved functionalities.


IEEE Journal of Selected Topics in Signal Processing | 2013

Coding Efficiency of the DCT and DST in Hybrid Video Coding

Matthias Narroschke

Standardized hybrid video coding algorithms, e.g., HEVC, apply intra-frame prediction or motion compensated prediction and subsequent integer-approximated DCT or DST transform coding. The coding efficiency of the transforms depends on the statistical moments and probability distribution of the input signals. For a Gaussian distribution, the DCT always leads to a data rate reduction. However, for Laplacian distributed prediction errors, the transforms sometimes increase the data rate. This paper presents a theoretical analysis explaining this increase: for Laplacian distributed input signals, the DCT or DST generates higher statistical moments in the coefficients. The data rate can increase by up to 0.10 bit per sample for blocks with low correlation. For screen content, the transform increases the data rate in about 20% of the blocks. By skipping the transform for these blocks, HEVC achieves a 7% data rate reduction.
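A toy per-block transform-skip decision in the spirit of the analysis above. The bit estimate is a crude code-length proxy, not the actual HEVC entropy coder, and the function names are illustrative.

```python
import numpy as np

def dct_matrix(n):
    # Orthonormal DCT-II basis (rows are frequencies, columns samples).
    k = np.arange(n)
    m = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    m[0] /= np.sqrt(2)
    return m * np.sqrt(2.0 / n)

def estimated_bits(samples, step=1.0):
    # Crude rate estimate: sum of code lengths of quantized magnitudes;
    # a stand-in for the real entropy coder, used only for comparison.
    q = np.round(np.abs(samples) / step).astype(int)
    return np.sum(np.log2(1 + q) + 1)

def choose_transform_skip(block, step=1.0):
    # Per-block decision analogous to HEVC transform skip: keep the
    # representation (transformed vs. spatial) that needs fewer bits.
    t = dct_matrix(block.shape[0])
    coeffs = t @ block @ t.T
    return estimated_bits(coeffs, step) >= estimated_bits(block, step)
```

A flat (highly correlated) block compacts into a single DC coefficient and keeps the transform, whereas a sparse, uncorrelated block spreads its energy over all coefficients and is cheaper to code untransformed.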


picture coding symposium | 2010

Quantization noise reduction in hybrid video coding by a system of three adaptive filters

Matthias Narroschke

Hybrid video coding algorithms, e.g. H.264/MPEG-4 AVC [1], apply prediction and subsequent prediction error coding, which introduces quantization noise. The quantized prediction error signal and the prediction signal are added for reconstruction. Deblocking filters reduce quantization noise of the reconstructed signal at block boundaries. To further reduce quantization noise, adaptive Wiener filters are applied to the deblocked signal [2, 3, 4]. In this paper, the adaptive Wiener filter is extended to a system of three adaptive filters in order to improve the quantization noise reduction. A first filter is applied to the deblocked signal, a second filter to the quantized prediction error signal, and a third filter to the prediction signal. The three filtered signals are added for reconstruction. For a set of thirteen test sequences, the system of three adaptive filters achieves an average bit rate reduction at the same quality of 1.9% compared to the adaptive Wiener filter and of 4.9% compared to no Wiener filter. For particular sequences, a bit rate reduction of 6.1% is achieved compared to the adaptive Wiener filter and of 17.1% compared to no Wiener filter.
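The three-filter reconstruction can be sketched in 1-D as follows. The FIR convolution stands in for the 2-D adaptive Wiener filters of the paper, whose coefficients would be estimated per frame at the encoder; all names here are illustrative.

```python
import numpy as np

def wiener_fir(signal, kernel):
    # 1-D FIR filtering as a stand-in for a 2-D adaptive Wiener filter;
    # the kernel coefficients would be estimated at the encoder and
    # transmitted to the decoder.
    return np.convolve(signal, kernel, mode="same")

def reconstruct(deblocked, quant_pred_error, prediction, k1, k2, k3):
    # System of three adaptive filters: each of the three signals is
    # filtered separately, and the filtered signals are added to form
    # the reconstruction.
    return (wiener_fir(deblocked, k1)
            + wiener_fir(quant_pred_error, k2)
            + wiener_fir(prediction, k3))
```

Setting `k1 = [1.0]` and `k2 = k3 = [0.0]` recovers the conventional single-filter case, so the three-filter system can never be worse than the plain adaptive Wiener filter in this formulation.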


visual communications and image processing | 2005

Extending the prediction error coder of H.264/AVC by a vector quantizer

Matthias Narroschke

The standardized video coding algorithms are based on hybrid coding using blockwise motion compensated prediction and transform coding of the resulting prediction error. For the purpose of transform coding, the recent standard H.264/AVC applies an integer transform. For each block, the Lagrangian costs are analyzed, which are measured by the sum of the squared reconstruction errors and the bit rate weighted by a Lagrange multiplier. It is observed that the costs of blocks with marginally or diagonally correlated samples are frequently higher than the costs theoretically required, because the transform coder of H.264/AVC is unadjusted for these blocks. In this paper, it is investigated whether the coding efficiency can be improved by extending the prediction error coder by a vector quantizer which is optimized for the coding of these blocks. For each block of the prediction error, either standardized transform coding or vector quantization is applied, and the algorithm with the lower costs is chosen. For broadcast quality at 34 dB PSNR, the bit rate is reduced by 7-10% compared to H.264/AVC using CAVLC, with slightly reduced computational expense in the decoder. Compared to H.264/AVC using CABAC, almost the same coding efficiency is achieved with significantly lower computational expense in the decoder.
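The per-block choice between the two coders can be sketched as follows. The nearest-neighbour search and the fixed-length index code are simplifying assumptions for illustration, not the paper's trained vector quantizer.

```python
import numpy as np

def vq_encode(block, codebook):
    # Nearest-neighbour vector quantization: pick the codebook entry
    # with the smallest squared error to the prediction-error block.
    errs = np.sum((codebook - block.ravel()) ** 2, axis=1)
    idx = int(np.argmin(errs))
    return idx, errs[idx]

def choose_coder(block, codebook, transform_sse, transform_bits, lam):
    # Per-block mode decision as described in the abstract:
    # J = SSE + lambda * R; the coder with the lower cost is chosen.
    idx, vq_sse = vq_encode(block, codebook)
    vq_bits = np.log2(len(codebook))        # fixed-length index code
    j_vq = vq_sse + lam * vq_bits
    j_tc = transform_sse + lam * transform_bits
    return ("vq", idx) if j_vq < j_tc else ("transform", None)
```

Blocks the transform handles poorly (high SSE or many bits) fall through to the vector quantizer, which matches the paper's observation about marginally or diagonally correlated blocks.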


visual communications and image processing | 2011

Parallelized deblocking filter for hybrid video coding

Matthias Narroschke

State-of-the-art hybrid video coding algorithms, e.g. H.264 / MPEG-4 AVC [1] and the currently emerging HEVC video coding standard [2], apply block-based prediction and subsequent block-based prediction error coding, which includes quantization. To reduce annoying block artifacts typically resulting from the combination of block-based processing and coarse quantization, adaptive deblocking filters [3] are applied. In H.264 / MPEG-4 AVC as well as in HEVC, all vertical edges of a current block are deblocked before all horizontal edges. For each edge, it is decided whether to filter or not. If filtering is decided, it is applied subsequently. The result of each filtering step is used as an input to all subsequent decision and filter operations. This paper consists of three parts. First, an implementation of the HEVC deblocking filter is presented exploiting parallel processing capabilities, which is important for today's hardware and software implementations. Second, an analysis of this implementation is performed, showing that the critical path consists of 42 sequential operations. In order to shorten the critical path, the parallel processing capabilities are increased in the third part of this paper. This is achieved by dissolving dependencies between consecutive decision and filter operations. The dissolved dependencies enable a shorter critical path of 30 sequential operations, which is a reduction of 30%. Experiments show that the coding efficiency stays unchanged with respect to both subjective and objective quality. As a consequence of the achieved implementation benefits, this technique has been adopted into the HEVC working draft.
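The dependency-dissolving idea can be illustrated abstractly. Here `edges`, `decide`, and `filt` are placeholders for per-edge sample data, the filter/no-filter decision, and the filter operation; they are not the paper's actual operations.

```python
def deblock_parallel(edges, decide, filt):
    # All filter/no-filter decisions are taken first, on the signal as
    # it is before any deblocking, so no decision depends on the output
    # of a previous filtering step. Every edge can then be filtered
    # independently (i.e. in parallel), shortening the critical path.
    decisions = [decide(e) for e in edges]   # no cross-edge dependency
    return [filt(e) if d else e for e, d in zip(edges, decisions)]
```

In the baseline design, by contrast, each decision reads samples that an earlier filter step may already have modified, so the operations must run in sequence.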


Journal of Visual Communication and Image Representation | 2005

Benefits and costs of scalable video coding for internet streaming

Matthias Narroschke

Benefits and costs of scalable hybrid video coding techniques are analyzed with respect to internet streaming. Temporal, spatial, amplitude scalability, and combinations as described in MPEG-4 are considered. Benefits are a reduction of the server storage capacity, a reduction of the netload for multicast delivery, and a graceful degradation in case of transmission errors. Costs are an increased netload for unicast delivery and an increased computational expense in the decoder. The result of an evaluation shows that temporal scalability has the minimum costs among all analyzed techniques. It increases the netload for unicast only marginally, with no additional computational expense in the decoder. Temporal scalability provides a reduction of the server storage capacity and netload for multicast by about 30% and two steps of graceful degradation. All other known standardized and nonstandardized techniques of spatial and amplitude scalability are associated with costs that appear too high to be attractive for internet streaming. Therefore, only temporal scalability is used at present. Some of the scalable video coding techniques may become of interest for other applications where the investigated costs are less relevant.
