Publication


Featured research published by Marko Esche.


IEEE Transactions on Circuits and Systems for Video Technology | 2012

Adaptive Temporal Trajectory Filtering for Video Compression

Marko Esche; Alexander Glantz; Andreas Krutz; Thomas Sikora

Most in-loop filters currently being employed in video compression algorithms use spatial information from a single frame of the video sequence only. In this paper, a new filter is introduced and investigated that combines both spatial and temporal information to provide subjective and objective quality improvement. The filter only requires a small overhead on slice level while using the temporal information conveyed in the bit stream to reconstruct the individual motion trajectory of every pixel in a frame at both encoder and decoder. This information is then used to perform pixel-wise adaptive motion-compensated temporal filtering. It is shown that the filter performs better than the state-of-the-art codec H.264/AVC over a large range of sequences and bit rates. Additionally, the filter is compared with another, Wiener-based in-loop filtering approach and a complexity analysis of both algorithms is conducted.
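
The trajectory construction can be pictured with a small sketch: every pixel of the current frame is traced backwards through per-frame motion information and the samples along its trajectory are averaged. The dense integer-pel motion fields, fixed trajectory depth, and fixed blending strength below are simplifying assumptions for illustration, not the adaptive, slice-level controlled filter of the paper.

```python
import numpy as np

def trajectory_filter(frames, motion_fields, depth=4, strength=0.5):
    """Minimal sketch of pixel-wise motion-compensated temporal filtering.

    Hypothetical interface: `frames` is a list of H x W luma arrays, oldest
    first; `motion_fields[t]` is an (H, W, 2) integer-pel field of (dy, dx)
    vectors mapping pixels of frames[t] back to frames[t-1].
    """
    cur = frames[-1].astype(np.float64)
    h, w = cur.shape
    ys, xs = np.mgrid[0:h, 0:w]          # trajectory position of every pixel
    acc = cur.copy()                     # sum of samples along each trajectory
    n = 1

    t = len(frames) - 1
    for _ in range(min(depth, t)):
        mv = motion_fields[t]            # motion of frame t towards frame t-1
        dy = mv[ys, xs, 0]
        dx = mv[ys, xs, 1]
        ys = np.clip(ys + dy, 0, h - 1).astype(np.intp)
        xs = np.clip(xs + dx, 0, w - 1).astype(np.intp)
        t -= 1
        acc += frames[t][ys, xs]         # sample one frame further back
        n += 1

    filtered = acc / n
    # blend filtered and unfiltered samples; the real filter adapts this per pixel
    return (1.0 - strength) * cur + strength * filtered
```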


IEEE Transactions on Circuits and Systems for Video Technology | 2012

Adaptive Global Motion Temporal Filtering for High Efficiency Video Coding

Andreas Krutz; Alexander Glantz; Michael Tok; Marko Esche; Thomas Sikora

Coding artifacts in video codecs can be reduced using several spatial in-loop filters that are part of the emerging video coding standard High Efficiency Video Coding (HEVC). In this paper, we introduce the concept of global motion temporal filtering. A theoretical framework is presented for combining several temporally overlapped, noisy versions of the same signal, including a model of the motion estimation error. As an important result, it is shown that an optimum number of frames N for filtering exists. An implementation of the concept in several versions of the HEVC test model using global motion-compensated temporal filtering shows that significant gains can be achieved.
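
As a rough illustration of the concept, the sketch below warps a few neighbouring reconstructed frames onto the current frame with a global motion model and averages them; the parameter n_frames plays the role of the optimum number of frames N discussed in the paper. The 3x3 projective models, the nearest-neighbour warping, and the plain average are assumptions made for brevity.

```python
import numpy as np

def warp_global(frame, H_mat):
    """Nearest-neighbour warp of `frame` with a 3x3 global (projective) motion
    model mapping current-frame coordinates to coordinates in `frame`."""
    h, w = frame.shape
    ys, xs = np.mgrid[0:h, 0:w]
    coords = np.stack([xs, ys, np.ones_like(xs)]).reshape(3, -1).astype(np.float64)
    src = H_mat @ coords
    sx = np.clip(np.round(src[0] / src[2]), 0, w - 1).astype(np.intp)
    sy = np.clip(np.round(src[1] / src[2]), 0, h - 1).astype(np.intp)
    return frame[sy, sx].reshape(h, w)

def gmc_temporal_filter(current, neighbours, models, n_frames):
    """Average the current frame with up to n_frames globally
    motion-compensated neighbours (one 3x3 model per neighbour)."""
    acc = current.astype(np.float64)
    used = 0
    for frame, H_mat in zip(neighbours, models):
        if used >= n_frames:
            break
        acc += warp_global(frame, H_mat)
        used += 1
    return acc / (1 + used)
```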


Data Compression Conference | 2013

A Parametric Merge Candidate for High Efficiency Video Coding

Michael Tok; Marko Esche; Alexander Glantz; Andreas Krutz; Thomas Sikora

Block-based motion-compensated prediction is still the main technique used for temporal redundancy reduction in modern hybrid video codecs. However, the resulting motion vector fields are highly redundant as well, so motion vector prediction and difference coding are used to compress them. A drawback of common motion vector prediction techniques is their inability to predict complex motion such as rotation and zoom efficiently. To overcome this issue, we present a novel merge candidate, based on higher-order motion models, that improves existing vector prediction techniques. An efficient compression scheme is used to transmit the required models. The improvement results in bit rate savings of 1.7% on average and up to 4%.
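
A sketch of the idea, using an affine model as one possible higher-order motion model (the paper is not tied to this exact parameterisation): the transmitted model is evaluated at the block centre and the resulting vector is appended to the merge candidate list.

```python
def affine_mv(params, x, y):
    """Evaluate an illustrative affine motion model at position (x, y).
    params = (a0, a1, a2, b0, b1, b2)."""
    a0, a1, a2, b0, b1, b2 = params
    return (a0 + a1 * x + a2 * y, b0 + b1 * x + b2 * y)

def add_parametric_merge_candidate(merge_list, params, block_x, block_y, block_size):
    """Derive one motion vector for the block from the transmitted model and
    append it as an additional merge candidate.  A real codec would also
    enforce candidate list size limits and pruning rules."""
    cx = block_x + block_size / 2.0
    cy = block_y + block_size / 2.0
    candidate = affine_mv(params, cx, cy)
    if candidate not in merge_list:      # skip trivially redundant candidates
        merge_list.append(candidate)
    return merge_list
```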


Picture Coding Symposium | 2012

Weighted temporal long trajectory filtering for video compression

Marko Esche; Alexander Glantz; Andreas Krutz; Michael Tok; Thomas Sikora

In the context of the HEVC standardization activity, in-loop filters such as the adaptive loop filter and the deblocking filter are currently under investigation. Both filters work in the spatial domain only, despite the temporal correlation within video sequences. In this work a previously introduced filter that instead uses temporal information for deblocking and denoising is integrated into the HEVC test model HM 3.0. It is shown how the filter has to be adapted to work in combination with the adaptive loop filter for the HEVC low-delay profile. In addition, an optimal weighting function for the filtered luma samples based on the quantization parameter is derived. Bit rate reductions of up to 7.6% are reported for individual sequences.
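
The derived weighting can be pictured as a blend between unfiltered and trajectory-filtered luma samples whose weight grows with the quantization parameter. The linear ramp and the constants in qp_weight below are placeholders for illustration, not the weighting function derived in the paper.

```python
import numpy as np

def qp_weight(qp, w_min=0.2, w_max=0.8, qp_lo=20, qp_hi=45):
    """Hypothetical QP-dependent weight: stronger temporal filtering at high
    QP (noisier reconstruction), weaker at low QP."""
    t = np.clip((qp - qp_lo) / float(qp_hi - qp_lo), 0.0, 1.0)
    return w_min + t * (w_max - w_min)

def weighted_trajectory_output(original, filtered, qp):
    """Blend unfiltered and trajectory-filtered luma samples with a
    QP-dependent weight."""
    w = qp_weight(qp)
    return (1.0 - w) * original + w * filtered
```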


Data Compression Conference | 2013

Efficient Quadtree Compression for Temporal Trajectory Filtering

Marko Esche; Michael Tok; Alexander Glantz; Andreas Krutz; Thomas Sikora

Spatial in-loop filters are a well-established tool for improving the compression performance of today's video codecs. Temporal denoising and deblocking filters have recently also received some attention because of their ability to stabilize pictures and to reduce flickering artifacts. One such filter, the previously introduced Quadtree-based Temporal Trajectory Filter, can produce good results provided that the associated quadtree is sufficiently detailed. In this paper a novel, generally applicable scheme to compress such quadtree information is presented. In addition, the performance of the filter within the current HEVC test model HM 8.0 is investigated.
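
As a baseline for comparison, quadtree side information can be serialized as depth-first split flags plus one flag per leaf and then bit-packed, as in the sketch below. This flat flag coding only illustrates what needs to be transmitted; it is not the more efficient compression scheme proposed in the paper.

```python
def serialize_quadtree(node, bits):
    """Depth-first traversal: emit 1 for a split node followed by its four
    children, or 0 for a leaf followed by its filter flag.  `node` is either
    ('leaf', flag) or ('split', [c0, c1, c2, c3])."""
    kind, payload = node
    if kind == 'leaf':
        bits.append(0)
        bits.append(1 if payload else 0)
    else:
        bits.append(1)
        for child in payload:
            serialize_quadtree(child, bits)
    return bits

def pack_bits(bits):
    """Pack a list of 0/1 flags into bytes (MSB first)."""
    out = bytearray()
    for i in range(0, len(bits), 8):
        chunk = bits[i:i + 8]
        byte = 0
        for b in chunk:
            byte = (byte << 1) | b
        out.append(byte << (8 - len(chunk)) % 8)
    return bytes(out)

# Example: one split into four leaves, two of them filtered.
tree = ('split', [('leaf', True), ('leaf', False), ('leaf', True), ('leaf', False)])
print(pack_bits(serialize_quadtree(tree, [])))
```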


International Conference on Image Processing | 2013

A dynamic model buffer for parametric motion vector prediction in random-access coding scenarios

Michael Tok; Marko Esche; Thomas Sikora

Motion-compensated inter prediction is a powerful tool used in modern hybrid video codecs to reduce the temporal redundancy of video sequences. However, the motion information needed for motion compensation is highly redundant as well, so motion vector prediction and difference coding are common methods in modern video codecs. During the standardization of HEVC, new methods for motion prediction, such as temporal motion vector prediction, have been analyzed. This paper presents a method for motion vector prediction from perspective motion models in random-access scenarios with hierarchical group-of-pictures structures. To enable this kind of prediction, a dynamic buffer system for generating, compressing, and transmitting the underlying motion models is introduced. Bit rate reductions of up to 5% underline the performance of the complete system.
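
A minimal sketch of such a buffer, assuming that models are indexed by (current POC, reference POC) pairs and evicted in picture order; these indexing and eviction rules are illustrative assumptions rather than the exact scheme of the paper.

```python
class MotionModelBuffer:
    """Dynamic buffer for parametric motion models in a hierarchical-GOP
    (random access) setting."""

    def __init__(self, max_entries=16):
        self.max_entries = max_entries
        self.models = {}            # (cur_poc, ref_poc) -> model parameters

    def store(self, cur_poc, ref_poc, params):
        if len(self.models) >= self.max_entries:
            # drop the entry with the smallest current POC to bound memory
            oldest = min(self.models, key=lambda key: key[0])
            del self.models[oldest]
        self.models[(cur_poc, ref_poc)] = params

    def predict(self, cur_poc, ref_poc):
        """Return a stored model for this picture pair, or None so the codec
        can fall back to conventional motion vector prediction."""
        return self.models.get((cur_poc, ref_poc))
```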


International Conference on Image Processing | 2012

Quadtree-based temporal trajectory filtering

Marko Esche; Alexander Glantz; Andreas Krutz; Michael Tok; Thomas Sikora

In both the HEVC draft and H.264/AVC, in-loop filters are employed to improve the subjective and objective quality of compressed video sequences. These filters use spatial information from a single frame only. Temporal Trajectory Filtering (TTF) constitutes an alternative approach which performs filtering in the temporal domain instead. In this work, a combination of the TTF with a quadtree partitioning algorithm that applies different filter parameters to different image regions is proposed and investigated. Experiments were conducted in the environment of the HEVC test model HM 3.0. Bit rate reductions of up to 9% for the low-delay high-efficiency setting of HEVC are reported.
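
The region-wise decision can be sketched as a recursive choice per block between keeping the unfiltered samples, taking the trajectory-filtered samples, or splitting further. The plain SSD comparison against the original below stands in for the encoder's actual rate-distortion decision and is an assumption for illustration.

```python
import numpy as np

def build_filter_quadtree(orig, unfiltered, filtered, x, y, size, min_size=8):
    """Return (quadtree node, cost) for one block.  Nodes are ('leaf', use_filter)
    or ('split', [four children]), chosen by lowest sum of squared differences."""
    o = orig[y:y + size, x:x + size].astype(np.float64)
    ssd_unf = np.sum((o - unfiltered[y:y + size, x:x + size]) ** 2)
    ssd_fil = np.sum((o - filtered[y:y + size, x:x + size]) ** 2)
    best_leaf = ('leaf', bool(ssd_fil < ssd_unf))
    best_cost = min(ssd_unf, ssd_fil)

    if size <= min_size:
        return best_leaf, best_cost

    half = size // 2
    children, split_cost = [], 0.0
    for dy in (0, half):
        for dx in (0, half):
            child, cost = build_filter_quadtree(orig, unfiltered, filtered,
                                                x + dx, y + dy, half, min_size)
            children.append(child)
            split_cost += cost
    if split_cost < best_cost:
        return ('split', children), split_cost
    return best_leaf, best_cost
```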


International Conference on Image Processing | 2011

Temporal trajectory filtering for bi-directional predicted frames

Marko Esche; Andreas Krutz; Alexander Glantz; Thomas Sikora

In this work the application of a temporal in-loop filtering approach for B-frames in video compression based on the Temporal Trajectory Filter (TTF) is investigated. The TTF constructs temporal pixel trajectories for individual image points in the P-frames of a video sequence, which can be utilized to improve the quality of the reconstructed frames used for prediction. It is shown how this concept can be adapted to B-frames despite the fact that these already exploit temporal motion information to a great extent through the flexible choice of reference frames and prediction modes. The proposed filter has been integrated into the H.264/AVC encoder using the extended profile with hierarchical B-frames and was tested on a wide range of sequences. The filter produces bit rate reductions of up to 4% with an average of 1.6% over all tested sequences while also improving the subjective quality of the decoded video.


Picture Coding Symposium | 2010

A novel inloop filter for video compression based on temporal pixel trajectories

Marko Esche; Andreas Krutz; Alexander Glantz; Thomas Sikora

The objective of this work is to investigate the performance of a new in-loop filter for video compression, which uses temporal rather than spatial information to improve the quality of reference frames used for prediction. The new filter has been integrated into the H.264/AVC baseline encoder and tested on a wide range of sequences. Experimental results show that the filter achieves a bit rate reduction of up to 12%, and of more than 4% on average, without significantly increasing the complexity of either encoder or decoder.


Data Compression Conference | 2014

Theoretical Considerations Concerning Pixelwise Temporal Filtering

Marko Esche; Michael Tok; Thomas Sikora

Temporal in-loop filters present one possible way to reduce noise introduced in compressed video sequences at low bit rates. Some of these filtering approaches make use of the quantized and generally noisy motion information conveyed in the bit stream generated by the encoder. One key feature of such filters is an adaptive filter length depending on the image content and the quality of the motion field. This paper derives mathematical equations to model the behaviour of one such filter in the presence of noisy motion vectors. The predicted filter performance is shown to have a global optimum over the filter length, and the predicted optimal filter lengths correlate strongly with a real-world implementation of the previously introduced Temporal Trajectory Filter based on the HEVC main profile.
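
The paper's derivation is not reproduced here, but a minimal model with the same qualitative behaviour already shows why an interior optimum in the filter length N exists. Assume, purely for illustration, that each sample along the trajectory carries independent quantization noise of variance sigma_q^2 plus a motion error that drifts like a random walk with step variance sigma_m^2:

```latex
% Illustrative model (not the paper's derivation):
% sample k along the trajectory: s_k = s + n_k + m_k, with
%   n_k iid quantization noise, Var(n_k) = \sigma_q^2,
%   m_k = \sum_{j=1}^{k} e_j a random-walk motion error, Var(e_j) = \sigma_m^2.
\[
  \hat{s}_N = \frac{1}{N}\sum_{k=1}^{N} s_k,
  \qquad
  \mathrm{E}\!\left[(\hat{s}_N - s)^2\right]
  = \frac{\sigma_q^2}{N} + \sigma_m^2\,\frac{(N+1)(2N+1)}{6N}
  \approx \frac{\sigma_q^2}{N} + \frac{\sigma_m^2 N}{3}.
\]
% The approximation is minimised at N^* \approx \sqrt{3}\,\sigma_q/\sigma_m:
% longer filters pay off when quantization noise dominates, shorter ones when
% the motion field is unreliable, matching the qualitative behaviour reported.
```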

Collaboration


Dive into Marko Esche's collaborations.

Top Co-Authors

Thomas Sikora (Technical University of Berlin)
Alexander Glantz (Technical University of Berlin)
Andreas Krutz (Technical University of Berlin)
Michael Tok (Technical University of Berlin)
Mustafa Karaman (Technical University of Berlin)