Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Thomas Rusert is active.

Publication


Featured research published by Thomas Rusert.


Visual Communications and Image Processing | 2003

Transition filtering and optimized quantization in interframe wavelet video coding

Thomas Rusert; Konstantin Hanke; Jens-Rainer Ohm

In interframe wavelet video coding, wavelet-based motion-compensated temporal filtering (MCTF) is combined with spatial wavelet decomposition, allowing for efficient spatio-temporal decorrelation and temporal, spatial and SNR scalability. Contemporary interframe wavelet video coding concepts employ block-based motion estimation (ME) and compensation (MC) to exploit temporal redundancy between successive frames. Due to occlusion effects and imperfect motion modeling, block-based MCTF may generate temporal high frequency subbands with block-wise varying coefficient statistics, and low frequency subbands with block edges. Both effects may cause declined spatial transform gain and blocking artifacts. As a modification to MCTF, we present spatial highpass transition filtering (SHTF) and spatial lowpass transition filtering (SLTF), introducing smooth transitions between motion blocks in the high and low frequency subbands, respectively. Additionally, we analyze the propagation of quantization noise in MCTF and present an optimized quantization strategy to compensate for variations in synthesis filtering for different block types. Combining these approaches leads to a reduction of blocking artifacts, smoothed temporal PSNR performance, and significantly improved coding efficiency.
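
The smooth transitions between motion blocks can be pictured as a crossfade across the block boundary in the affected subband. A minimal one-dimensional sketch, assuming a simple linear weight ramp rather than the paper's actual transition filters:

import numpy as np

def transition_filter_1d(signal, boundary, width=4):
    # Crossfade the samples on either side of 'boundary' so that the block
    # edge decays smoothly instead of producing a step.
    out = signal.astype(float).copy()
    left = float(signal[boundary - 1])    # last sample of the left block
    right = float(signal[boundary])       # first sample of the right block
    for k in range(width):
        # weight ramps from ~0.5 right at the edge to ~1 deep inside the block
        w = 0.5 + 0.5 * (k + 0.5) / width
        out[boundary - 1 - k] = w * signal[boundary - 1 - k] + (1 - w) * right
        out[boundary + k] = w * signal[boundary + k] + (1 - w) * left
    return out

blocky = np.concatenate([np.full(8, 10.0), np.full(8, 30.0)])
print(transition_filter_1d(blocky, boundary=8))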


IEEE Transactions on Circuits and Systems for Video Technology | 2008

Enhanced MC-EZBC Scalable Video Coder

Yongjun Wu; Konstantin Hanke; Thomas Rusert; John W. Woods

In this paper, we present recent extensions to the scalable subband/wavelet video coder MC-EZBC. The enhanced MC-EZBC employs an adaptive motion-compensated temporal filtering (MCTF) framework. Directional I-BLOCKs and overlapped block motion compensation (OBMC) further improve MCTF efficiency. A scalable motion vector coder based on CABAC is shown to improve the overall performance at low bit rates and resolutions. Frequency rolloff is incorporated to reduce spatial aliasing at low resolution without PSNR loss at full resolution. Experimental results show that the new features significantly improve the performance of MC-EZBC. We provide several comparisons to other recent coders.
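
Overlapped block motion compensation blends predictions formed with neighbouring blocks' motion vectors, which suppresses block-edge discontinuities. A toy sketch assuming a single right-hand neighbour and an invented linear weight ramp (real OBMC uses all neighbours and 2-D window weights):

import numpy as np

def obmc_predict(ref, top_left, mv_center, mv_right, block=4):
    # Blend two motion-compensated candidates for one block x block region.
    y, x = top_left
    def mc(mv):
        dy, dx = mv
        return ref[y + dy : y + dy + block, x + dx : x + dx + block]
    # trust the right neighbour's motion vector more towards the right edge
    w = np.linspace(0.9, 0.5, block)[None, :]
    return w * mc(mv_center) + (1 - w) * mc(mv_right)

ref = np.arange(64, dtype=float).reshape(8, 8)
print(obmc_predict(ref, top_left=(2, 2), mv_center=(0, 0), mv_right=(0, 1)))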


International Conference on Image Processing | 2003

Improvements to the MC-EZBC scalable video coder

Peisong Chen; Konstantin Hanke; Thomas Rusert; John W. Woods

MC-EZBC is a fully scalable video coder using a motion-compensated 3-D subband/wavelet decomposition. Temporal filtering is performed along estimated motion trajectories to remove temporal redundancy. To get greatly increased motion compensation (MC) accuracy and increased coding efficiency, we use the lifting filter implementation. Additionally, we introduce unconnected blocks and spatial lowpass transition filtering (SLTF). This results in improved lower frame rate data for the scalable MC-EZBC coder. Finally, an optimized quantization strategy is introduced to compensate for variations in synthesis filtering for connected and unconnected pixels.
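
The lifting implementation factors the temporal Haar wavelet into a predict step (producing the highpass frame) and an update step (producing the lowpass frame), which guarantees perfect reconstruction even when prediction uses motion-compensated warping. A minimal sketch, with motion compensation reduced to an integer shift purely for illustration:

import numpy as np

def mctf_haar_lift(frame_a, frame_b, mv=(0, 0)):
    # One temporal decomposition level for a frame pair (A, B).
    warped_a = np.roll(frame_a, shift=mv, axis=(0, 1))   # stand-in for MC warping
    high = (frame_b - warped_a) / np.sqrt(2)             # predict: temporal detail
    low = np.sqrt(2) * warped_a + high                   # update: temporal average
    return low, high

def mctf_haar_inverse(low, high, mv=(0, 0)):
    warped_a = (low - high) / np.sqrt(2)
    frame_a = np.roll(warped_a, shift=(-mv[0], -mv[1]), axis=(0, 1))
    frame_b = np.sqrt(2) * high + warped_a
    return frame_a, frame_b

a = np.random.rand(4, 4)
b = np.random.rand(4, 4)
low, high = mctf_haar_lift(a, b)
ra, rb = mctf_haar_inverse(low, high)
assert np.allclose(a, ra) and np.allclose(b, rb)   # perfect reconstruction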


Signal Processing: Image Communication | 2004

Enhanced interframe wavelet video coding considering the interrelation of spatio-temporal transform and motion compensation

Thomas Rusert; Konstantin Hanke; Claudia Mayer

High coding performance and temporal, spatial, and SNR scalability are key elements required in future video applications. Within hybrid video coding schemes, scalability cannot be efficiently achieved due to the recursive coder structure. In motion-compensated interframe wavelet video coding, temporal filtering using wavelet filters and spatial wavelet decomposition are successively combined in an arbitrary manner, allowing for efficient spatio-temporal decorrelation and full scalability. Depending on the particular succession of spatio-temporal decomposition, typical artifacts are induced, resulting in reduced subjective and objective coding performance. In this paper, these problems are addressed within the scope of a t+2D scheme, where temporal filtering is followed by spatial transform coding, and possible solutions are discussed. Techniques for deblocking, adaptation of temporal and spatial filters, as well as for optimization of rate allocation are presented.


International Conference on Acoustics, Speech, and Signal Processing | 2007

Backward Drift Estimation with Application to Quality Layer Assignment in H.264/AVC Based Scalable Video Coding

Thomas Rusert; Jens-Rainer Ohm

We present an approach for accurate estimation of the reconstruction distortion in SNR scalable video coding with drift. Based on a linear model of predictive video coding, we derive an algorithm to quantify spatio-temporal drift properties subject to prediction structure and motion information. This allows for low-complexity estimation of the reconstruction distortion on a per-block basis. The accuracy of the distortion estimation is experimentally verified. We then utilize the method for quality layer assignment within the framework of H.264/AVC scalable video coding (SVC), which is currently under standardization. The quality layers allow for bit stream truncation in a rate-distortion optimized sense. Compared to the quality layer assignment as implemented in the SVC test model, backward drift estimation achieves equivalent coding efficiency with reduced complexity.
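
The idea of estimating drift with a linear model can be pictured as a per-frame recursion: quantization error injected by truncating the enhancement layer is carried forward by prediction and attenuated or reset depending on the coding structure. The recursion, leakage factor, and numbers below are illustrative assumptions, not the paper's derivation:

def estimate_drift(q, alpha=0.9, intra_frames=()):
    # Return accumulated reconstruction error per frame for residual errors q.
    # Prediction carries a fraction 'alpha' of the accumulated error forward;
    # intra-coded frames reset it.
    drift, acc = [], 0.0
    for t, qt in enumerate(q):
        acc = 0.0 if t in intra_frames else alpha * acc
        acc += qt
        drift.append(acc)
    return drift

print(estimate_drift([1.0] * 8, alpha=0.9, intra_frames={0, 4}))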


IEEE Transactions on Circuits and Systems for Video Technology | 2009

Corrections to “Enhanced MC-EZBC Scalable Video Coder” [Oct 08 1432-1436]

Yongjun Wu; Konstantin Hanke; Thomas Rusert; John W. Woods

In the above-titled paper (ibid., vol. 18, no. 10, pp. 1432-1436, Oct 08), Figs. 3 and 4 were printed incorrectly. The correct figures are presented here.


Conference on Image and Video Communications and Processing | 2003

Motion-compensated 3D video coding using smooth transitions

Konstantin Hanke; Thomas Rusert; Jens-Rainer Ohm

Existing 3-D wavelet video coding concepts employ blockwise motion estimation (ME) and compensation (MC) to exploit temporal interdependencies between consecutive frames. Because of local object motion, rotation, or scaling, the processing of occlusion areas is problematic. In these regions, correct motion vectors (MV) cannot always be calculated, and blocking artifacts may appear at the motion boundaries to the connected areas, for which uniquely referenced MVs could be estimated. To avoid this, smooth transitions can be introduced around the occlusion pixels, blurring out the blocking artifacts. The proposed algorithm is based on the MC-EZBC 3-D wavelet video coder (Motion-Compensated Embedded video coding algorithm using ZeroBlocks of subband/wavelet coefficients and Context modeling), which employs a lifting approach for temporal filtering.
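
A prerequisite of this occlusion handling is knowing which reference pixels are left without a unique motion trajectory. A small sketch of identifying such unconnected pixels under a toy block motion field (block size and motion vectors are invented for illustration):

import numpy as np

def unconnected_mask(ref_shape, motion_field, block=4):
    # motion_field[(by, bx)] = (dy, dx) for each block of the current frame.
    hits = np.zeros(ref_shape, dtype=int)
    for (by, bx), (dy, dx) in motion_field.items():
        y, x = by * block + dy, bx * block + dx
        hits[max(y, 0):y + block, max(x, 0):x + block] += 1
    return hits == 0    # True where no motion vector references the pixel

mvs = {(0, 0): (0, 0), (0, 1): (0, 2), (1, 0): (0, 0), (1, 1): (0, 2)}
print(unconnected_mask((8, 8), mvs, block=4).astype(int))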


International Conference on Image Processing | 2006

Macroblock Based Bit Allocation for SNR Scalable Video Coding with Hierarchical B Pictures

Thomas Rusert; Jens-Rainer Ohm

We investigate SNR scalable video coding based on motion compensated temporal prediction with hierarchical B pictures. This structure is a fundamental part of the scalable extension of H.264/AVC (SVC), which is currently under standardization. Due to SNR scalability, reconstruction of inter blocks may cause error accumulation within the B picture hierarchy, which is called the drift effect. In this paper we investigate the error propagation considering coding control, motion information, and target quality. We first consider open loop coding control and generalize to the case of one or multiple loops. We present a practical algorithm to quantify locally varying drift properties, which is utilized for performing rate-distortion optimized macroblock based bit allocation. We compare the approach to picture based bit allocation within the SVC test model and observe coding gains up to 0.4 dB.
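
The macroblock-based allocation can be read as a Lagrangian choice per macroblock: pick the rate-distortion point minimizing D + lambda * R, with distortion weighted by how far the block's error would propagate through the B-picture hierarchy. The weights and candidate points below are invented for illustration and are not the paper's algorithm or numbers:

def allocate_bits(macroblocks, lam=0.02):
    # macroblocks: list of dicts with per-MB candidate (rate, distortion) points
    # and a 'drift_weight' reflecting how far the MB's error propagates.
    choices = []
    for mb in macroblocks:
        cost = lambda rd: mb["drift_weight"] * rd[1] + lam * rd[0]
        choices.append(min(mb["candidates"], key=cost))
    return choices

mbs = [
    {"drift_weight": 3.0, "candidates": [(100, 4.0), (250, 1.5), (500, 0.6)]},
    {"drift_weight": 1.0, "candidates": [(100, 4.0), (250, 1.5), (500, 0.6)]},
]
print(allocate_bits(mbs))   # the drift-sensitive macroblock receives more bits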


International Symposium on Intelligent Signal Processing and Communication Systems | 2006

H.264/AVC Compatible Scalable Multiple Description Video Coding with RD Optimization

Thomas Rusert; Martin Spiertz; Jens-Rainer Ohm

We present a multiple description video coding scheme based on H.264/AVC. Each description is independently decodable by a standard-compliant decoder, and mutually refining decoding of more than one description is possible with moderate pre-processing at the bit stream level. The scheme allows for fine-granular control of redundancy. Scalability is added by enabling the tools defined in the scalable extension of H.264/AVC. The redundancy can then be dynamically adjusted according to the network conditions. Furthermore, based on a formulation of the error propagation in the proposed system, we demonstrate that the redundancy can be generated in a rate-distortion optimized way.


Acta Polytechnica | 2006

Central Decoding for Multiple Description Codes based on Domain Partitioning

Martin Spiertz; Thomas Rusert

Multiple Description Codes (MDC) can be used to trade redundancy against packet loss resistance when transmitting data over lossy diversity networks. In this work we focus on MD transform coding based on domain partitioning. Compared to Vaishampayan's quantizer-based MDC, domain-based MD coding is a simple approach for generating different descriptions by using a different quantizer for each description. Commonly, only the highest rate quantizer is used for reconstruction. In this paper we investigate the benefit of using the lower rate quantizers to enhance the reconstruction quality at the decoder side. The comparison is carried out on artificial source data and on image data.
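
Using the lower rate quantizer at the central decoder amounts to intersecting the quantization cells implied by the two descriptions and reconstructing inside that narrower interval. A toy sketch with two uniform quantizers whose step sizes and offsets are assumptions for illustration, not the paper's configuration:

def interval(value, step, offset=0.0):
    # Quantization cell [lo, hi) that 'value' falls into for a uniform quantizer.
    idx = (value - offset) // step
    lo = idx * step + offset
    return lo, lo + step

def central_decode(x, fine_step=1.0, coarse_step=2.5, coarse_offset=0.7):
    lo1, hi1 = interval(x, fine_step)
    lo2, hi2 = interval(x, coarse_step, coarse_offset)
    lo, hi = max(lo1, lo2), min(hi1, hi2)   # both descriptions must agree
    return 0.5 * (lo + hi)

x = 3.15
fine_only = 0.5 * sum(interval(x, 1.0))     # reconstruct from the fine cell alone
print(fine_only, central_decode(x))         # the intersection narrows the cell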

Collaboration


Dive into Thomas Rusert's collaborations.

Top Co-Authors

John W. Woods
Rensselaer Polytechnic Institute

Peisong Chen
Rensselaer Polytechnic Institute