
Publication


Featured research published by Jürgen Slowack.


Signal Processing: Image Communication | 2010

Rate-distortion driven decoder-side bitplane mode decision for distributed video coding

Jürgen Slowack; Stefaan Mys; Jozef Škorupa; Nikos Deligiannis; Peter Lambert; Adrian Munteanu; Rik Van de Walle

Distributed video coding (DVC) features simple encoders but complex decoders, which lies in contrast to conventional video compression solutions such as H.264/AVC. This shift in complexity is realized by performing motion estimation at the decoder side instead of at the encoder, which brings a number of problems that need to be dealt with. One of these problems is that, while employing different coding modes yields significant coding gains in classical video compression systems, it is still difficult to fully exploit this in DVC without increasing the complexity at the encoder side. Therefore, in this paper, instead of using an encoder-side approach, techniques for decoder-side mode decision are proposed. A rate-distortion model is derived that takes into account the position of the side information in the quantization bin. This model is then used to perform mode decision at the coefficient level and bitplane level. Average rate gains of 13-28% over the state-of-the-art DISCOVER codec are reported, for a GOP of size four, for several test sequences.
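
As a rough illustration of what a decoder-side, rate-distortion driven bitplane decision could look like, the Python sketch below weighs an estimated Slepian-Wolf rate against the distortion expected from leaving a bitplane uncorrected, using the position of the side information inside its quantization bin. The Laplacian crossover approximation, the function names and the threshold rule are illustrative assumptions, not the model derived in the paper.

```python
# Hypothetical sketch of decoder-side, rate-distortion driven mode decision.
# 'side_info' holds reconstructed coefficients of the side information,
# 'alpha' is the Laplacian correlation-noise parameter, 'q_step' the
# quantization step size; the crossover approximation and 'lam' are made up.
import numpy as np

def bitplane_mode_decision(side_info, alpha, q_step, lam=0.85):
    """Choose per coefficient between requesting parity bits ('wz') and
    keeping the side information as-is ('skip')."""
    # Distance of the side information from the centre of its quantization
    # bin, normalised to the bin width (0.5 = right at a bin edge).
    bin_pos = np.abs(side_info / q_step - np.round(side_info / q_step))
    # Crude Laplacian-based guess of the bit-error (crossover) probability.
    p_err = 0.5 * np.exp(-alpha * q_step * (0.5 - bin_pos))
    # Expected Slepian-Wolf rate ~ binary entropy of the crossover probability.
    rate = -(p_err * np.log2(p_err + 1e-12)
             + (1.0 - p_err) * np.log2(1.0 - p_err + 1e-12))
    # Expected distortion if the bitplane is left uncorrected.
    d_skip = p_err * q_step ** 2
    # Request parity bits only when the rate spent is worth the distortion saved.
    return np.where(lam * rate < d_skip, 'wz', 'skip')
```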


IEEE Transactions on Circuits and Systems for Video Technology | 2012

Efficient Low-Delay Distributed Video Coding

Jozef Škorupa; Jürgen Slowack; Stefaan Mys; Nikos Deligiannis; Jan De Cock; Peter Lambert; Christos Grecos; Adrian Munteanu; Rik Van de Walle

Distributed video coding (DVC) is a video coding paradigm that allows for a low-complexity encoding process by exploiting the temporal redundancies in a video sequence at the decoder side. State-of-the-art DVC systems exhibit a structural coding delay since exploiting the temporal redundancies through motion-compensated interpolation requires the frames to be decoded out of order. To alleviate this problem, we propose a system based on motion-compensated extrapolation that allows for efficient low-delay video coding with low complexity at the encoder. The proposed extrapolation technique first estimates the motion field between the two most recently decoded frames using the Lucas-Kanade algorithm. The obtained motion field is then extrapolated to the current frame using an extrapolation grid. The proposed techniques are implemented into a novel architecture featuring hybrid block-frequency Wyner-Ziv coding as well as mode decision. Results show that having references from both temporal directions in interpolation provides superior rate-distortion performance over a single temporal direction in extrapolation, as expected. However, the proposed extrapolation method is particularly suitable for low-delay coding as it performs better than H.264/AVC intra, and it is even able to outperform the interpolation-based DVC codec from DISCOVER for several sequences.
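
A minimal sketch of the extrapolation idea, using OpenCV's pyramidal Lucas-Kanade tracker as a stand-in; the paper's extrapolation grid and codec integration are not reproduced, and the feature-tracking parameters are assumptions.

```python
# Sketch of motion-compensated extrapolation using OpenCV's pyramidal
# Lucas-Kanade tracker; frames are expected as 8-bit grayscale arrays.
# The extrapolation-grid step of the paper is not reproduced here.
import cv2
import numpy as np

def extrapolated_positions(frame_tm2, frame_tm1):
    """Track features from frame t-2 to t-1 and linearly extrapolate their
    positions to the (not yet decoded) frame t, assuming constant motion."""
    pts_tm2 = cv2.goodFeaturesToTrack(frame_tm2, maxCorners=500,
                                      qualityLevel=0.01, minDistance=7)
    pts_tm1, status, _err = cv2.calcOpticalFlowPyrLK(frame_tm2, frame_tm1,
                                                     pts_tm2, None)
    good = status.ravel() == 1
    motion = pts_tm1[good] - pts_tm2[good]   # flow from t-2 to t-1
    pts_t = pts_tm1[good] + motion           # constant-motion extrapolation to t
    return pts_tm1[good], pts_t
```

A full decoder would then scatter these extrapolated vectors onto a grid and motion-compensate frame t-1 accordingly to form the side information for frame t.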


International Conference on Image Processing | 2011

Distributed coding of endoscopic video

Nikos Deligiannis; Frederik Verbist; Joeri Barbarien; Jürgen Slowack; Rik Van de Walle; Peter Schelkens; Adrian Munteanu

Triggered by the challenging prerequisites of wireless capsule endoscopic video technology, this paper presents a novel distributed video coding (DVC) scheme, which employs an original hash-based side-information creation method at the decoder. In contrast to existing DVC schemes, the proposed codec generates high quality side-information at the decoder, even under the strenuous motion conditions encountered in endoscopic video. Performance evaluation using broad endoscopic video material shows that the proposed approach brings notable and consistent compression gains over various state-of-the-art video codecs at the additional benefit of vastly reduced encoding complexity.
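
To make the hash-based idea concrete, here is a toy sketch in which the encoder sends a very coarse block summary ("hash") and the decoder searches the previous decoded frame for the candidate whose hash matches best. The hash definition, block size and search range are simplified stand-ins, not the codec's actual hash.

```python
# Toy hash-aided side-information generation. The "hash" is a downsampled,
# coarsely quantized 8x8 block summary; pooling, quantization and search
# range are illustrative choices only.
import numpy as np

def block_hash(block, q=32):
    """4x4 average-pooled, coarsely quantized summary of an 8x8 block."""
    pooled = block.reshape(4, 2, 4, 2).mean(axis=(1, 3))
    return (pooled // q).astype(np.uint8)

def best_match(hash_from_encoder, prev_frame, y, x, search=8, block=8):
    """Decoder-side search in the previous frame for the candidate block
    whose hash best matches the hash received from the encoder."""
    best, best_cost = None, np.inf
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            yy, xx = y + dy, x + dx
            if (yy < 0 or xx < 0 or yy + block > prev_frame.shape[0]
                    or xx + block > prev_frame.shape[1]):
                continue
            cand = prev_frame[yy:yy + block, xx:xx + block]
            cost = np.abs(block_hash(cand).astype(int)
                          - hash_from_encoder.astype(int)).sum()
            if cost < best_cost:
                best, best_cost = cand, cost
    return best  # used as the side-information block
```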


EURASIP Journal on Wireless Communications and Networking | 2012

Wyner-Ziv video coding for wireless lightweight multimedia applications

Nikos Deligiannis; Frederik Verbist; Athanassios C. Iossifides; Jürgen Slowack; Rik Van de Walle; Peter Schelkens; Adrian Munteanu

Wireless video communications promote promising opportunities involving commercial applications on a grand scale as well as highly specialized niche markets. In this regard, the design of efficient video coding systems, meeting such key requirements as low power, mobility and low complexity, is a challenging problem. The solution can be found in fundamental information theoretic results, which gave rise to the distributed video coding (DVC) paradigm, under which lightweight video encoding schemes can be engineered. This article presents a new hash-based DVC architecture incorporating a novel motion-compensated multi-hypothesis prediction technique. The presented method is able to adapt to the regional variations in temporal correlation in a frame. The proposed codec enables scalable Wyner-Ziv video coding and provides state-of-the-art distributed video compression performance. The key novelty of this article is the expansion of the application domain of DVC from conventional video material to medical imaging. Wireless capsule endoscopy in particular, which is essentially wireless video recording in a pill, is proven to be an important application field. The low complexity encoding characteristics, the ability of the novel motion-compensated multi-hypothesis prediction technique to adapt to regional degrees of temporal correlation (which is of crucial importance in the context of endoscopic video content), and the high compression performance make the proposed distributed video codec a strong candidate for future lightweight (medical) imaging applications.
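
The following sketch illustrates, in a generic way, how several motion-compensated hypotheses could be fused with weights that reflect local reliability. The inverse-MSE weighting and the use of a hash or co-located block as the local reference are illustrative assumptions, not the prediction rule of the paper.

```python
# Generic multi-hypothesis fusion: candidate predictions for the same block
# are combined with weights reflecting how well each one matches a locally
# available reference; inverse-MSE weighting is an illustrative choice.
import numpy as np

def fuse_hypotheses(hypotheses, local_reference):
    """hypotheses: list of equally sized candidate blocks (e.g. forward and
    backward motion-compensated predictions); local_reference: whatever the
    decoder can compare against locally (e.g. a received hash block)."""
    errors = np.array([np.mean((h.astype(float) - local_reference) ** 2) + 1e-6
                       for h in hypotheses])
    weights = (1.0 / errors) / np.sum(1.0 / errors)
    return np.tensordot(weights, np.stack(hypotheses).astype(float), axes=1)
```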


Picture Coding Symposium | 2009

Accounting for quantization noise in online correlation noise estimation for Distributed Video Coding

Jürgen Slowack; Stefaan Mys; Jozef Škorupa; Peter Lambert; Rik Van de Walle; Christos Grecos

In Distributed Video Coding (DVC), compression is achieved by exploiting correlation between frames at the decoder, instead of at the encoder. More specifically, the decoder uses already decoded frames to generate side information Y for each Wyner-Ziv frame X, and corrects errors in Y using error-correcting bits received from the encoder. For efficient use of these bits, the decoder needs information about the correlation between X available at the encoder and Y at the decoder. While several techniques for online estimation of the correlation noise X - Y have been proposed, the quantization noise in Y has not been taken into account. As a solution, in this paper, we calculate the quantization noise of intra frames at the encoder and use this information at the decoder to improve the accuracy of the correlation noise estimation. Results indicate average Wyner-Ziv bit rate reductions of up to 19.5% (Bjøntegaard delta) for coarse quantization.
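
A minimal numeric sketch of the underlying idea, assuming a Laplacian correlation-noise model and independent noise terms so that variances add; the variable names and the variance-addition shortcut are assumptions, not the exact estimator of the paper.

```python
# Derive the Laplacian correlation-noise parameter from a motion-compensated
# residual, adding the quantization-noise variance of the intra reference
# that the encoder computes and signals. Treating the two noise terms as
# independent (variances add) is a simplifying assumption.
import numpy as np

def laplacian_alpha(residual_mc, sigma2_quant):
    sigma2_corr = np.var(residual_mc.astype(float))   # proxy for Var(X - Y)
    sigma2_total = sigma2_corr + sigma2_quant         # include quantization noise
    return np.sqrt(2.0 / max(sigma2_total, 1e-6))     # Laplacian: var = 2 / alpha^2
```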


Picture Coding Symposium | 2009

Stopping criterions for turbo coding in a Wyner-Ziv video codec

Jozef Škorupa; Jürgen Slowack; Stefaan Mys; Peter Lambert; Rik Van de Walle; Christos Grecos

Distributed video coding (DVC) targets video coding applications with low encoding complexity by generating a prediction of the video signal at the decoder. One of the most common architectures uses turbo codes to correct errors in this prediction. Unfortunately, a rigorous analysis of turbo coding in the context of DVC is missing. We have targeted one particular aspect of turbo coding: the stopping criterion. The stopping criterion indicates whether decoding was successful, i.e., whether the errors in the prediction signal have been corrected. In this paper we describe and compare several stopping criteria known from the field of channel coding as well as criteria currently used in DVC. As our results suggest, the choice of the stopping criterion has a significant impact on the overall video-coding performance. Moreover, we have found that there are criteria that perform even better than those currently used in DVC.
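
As an example of the kind of criterion compared in the paper, the sketch below implements the classical sign-change-ratio (SCR) rule on the decoder's LLRs; the threshold value is an assumed number, not one taken from the paper.

```python
# Illustrative sign-change-ratio (SCR) stopping rule for an iterative turbo
# decoder: stop when almost no extrinsic LLRs change sign between iterations.
import numpy as np

def scr_stop(llr_prev, llr_curr, threshold=0.005):
    changed = np.sign(llr_prev) != np.sign(llr_curr)
    return changed.mean() < threshold
```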


Signal Processing: Image Communication | 2010

Exploiting quantization and spatial correlation in virtual-noise modeling for distributed video coding

Jozef Škorupa; Jürgen Slowack; Stefaan Mys; Nikos Deligiannis; Jan De Cock; Peter Lambert; Adrian Munteanu; Rik Van de Walle

Video coders based on Wyner-Ziv theory aim for low-complexity encoding, but they have yet to match the performance of predictive video coders. One of the most important factors in the coding performance of distributed coders is modeling and estimating the correlation between the original video signal and its temporal prediction generated at the decoder. A problem with state-of-the-art correlation estimators is that their performance is not consistent across a wide range of video content and different coding settings. To address this problem, we have developed a correlation model able to adapt to changes in the content and the coding parameters by exploiting the spatial correlation of the video signal and the quantization distortion. In this paper we describe our model and present experiments showing that it provides average bit rate gains of up to 12% and average PSNR gains of up to 0.5 dB compared to state-of-the-art models. The experiments suggest that the performance of distributed coders can be significantly improved by taking video content and coding parameters into account.
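
A loose per-block sketch of a content- and quantization-aware noise-variance estimate; the blending of the temporal residual with a spatial-activity floor is purely illustrative, while the q^2/12 term is the standard distortion of a uniform quantizer.

```python
# Per-block noise-variance estimate that adapts to the local spatial activity
# of the side information and to the quantization distortion of the current
# coding settings; the blending rule is a stand-in, not the paper's model.
import numpy as np

def block_noise_variance(side_block, residual_block, q_step):
    spatial_activity = np.var(side_block.astype(float))          # content-dependent term
    temporal_error = np.mean(residual_block.astype(float) ** 2)  # temporal mismatch
    sigma2_quant = q_step ** 2 / 12.0                            # quantization distortion
    return max(temporal_error, 0.1 * spatial_activity) + sigma2_quant
```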


Signal Processing: Image Communication | 2009

Introducing skip mode in distributed video coding

Stefaan Mys; Jürgen Slowack; Jozef Škorupa; Peter Lambert; Rik Van de Walle

Although Wyner and Ziv, and Slepian and Wolf, proved as early as the 1970s that, under certain conditions, the same rate-distortion bounds hold for distributed video coding (DVC) systems as for traditional predictive systems, no practical DVC system developed so far comes close to the rate-distortion performance of state-of-the-art video codecs such as H.264/AVC. Some important factors for this are the lower accuracy of the motion estimation performed at the decoder, the inaccurate modeling of the correlation between the side information and the original frame, and the absence in most state-of-the-art DVC systems of anything conceptually similar to the notion of skipped macroblocks in predictive coding systems. This paper proposes an extension of a state-of-the-art transform-domain residual DVC system with an implementation of skip mode. The skip mode has an impact at two different places: in the turbo decoder, more specifically the soft-input, soft-output (SISO) convolutional decoder, and in the puncturing of the parity bits. Results show average bitrate gains of up to 39% (depending on the sequence) achieved by combining both approaches. Furthermore, a hybrid video codec is presented in which the motion estimation task is shifted back to the encoder. This results in a drastic increase in encoder complexity, but also in a drastic performance gain in terms of rate-distortion, with average bitrate savings of up to 60% relative to the distributed video codec. In the hybrid video codec, smaller but still substantial average bitrate gains of up to 24% are achieved by implementing skip mode.
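
A toy version of a block-level skip test, made at the encoder (which has access to the original block): a block is skipped when its mismatch with the co-located reference is already below the distortion the quantizer would introduce anyway. The criterion shown is a hypothetical simplification, not the paper's rule, and the decoder-side handling in the SISO decoder and the puncturing is not shown.

```python
# Hypothetical block-level skip decision: skip (i.e. let the decoder copy the
# co-located reference block) when the mismatch is already below the
# distortion of a uniform quantizer with step q_step.
import numpy as np

def skip_block(original_block, co_located_block, q_step):
    mse = np.mean((original_block.astype(float)
                   - co_located_block.astype(float)) ** 2)
    return mse <= q_step ** 2 / 12.0
```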


Pacific Rim Conference on Multimedia | 2008

Accurate Correlation Modeling for Transform-Domain Wyner-Ziv Video Coding

Jozef Škorupa; Jürgen Slowack; Stefaan Mys; Peter Lambert; Rik Van de Walle

In Wyner-Ziv (WZ) video coding, low-complexity encoding is achieved by generating the prediction signal only at the decoder. An accurate model of the correlation between the original frame and its prediction is necessary for efficient coding. First, we propose an improvement for the pixel-domain correlation estimation. In transform-domain WZ video coding, current models estimate the necessary correlation parameters directly in the transform domain. We propose an alternative approach, in which an accurate estimation in the pixel domain is followed by a novel method for transforming the pixel-domain estimate into the transform domain. The experimental results show that our model leads to average bit-rate gains of 3.5-8%.
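
One standard way to map a pixel-domain noise estimate into the transform domain, shown here as an illustration rather than the paper's exact method: assuming independent pixel noise with per-pixel variances V, the variance of each 2-D DCT coefficient follows from weighting V with the squared orthonormal DCT basis.

```python
# Pixel-to-transform mapping of a noise-variance estimate: for independent
# pixel noise with per-pixel variances V (8x8), the variance of each 2-D DCT
# coefficient is A2 @ V @ A2.T, where A2 is the element-wise square of the
# orthonormal DCT matrix. This illustrates the mapping step only.
import numpy as np
from scipy.fft import dct

def dct_matrix(n=8):
    # Columns are the orthonormal type-II DCT of the unit vectors.
    return dct(np.eye(n), norm='ortho', axis=0)

def coefficient_variances(pixel_variances):
    A2 = dct_matrix(pixel_variances.shape[0]) ** 2
    # Var(c_uv) = sum_{x,y} A2[u, x] * A2[v, y] * Var(p_xy)
    return A2 @ pixel_variances @ A2.T
```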


ACM Transactions on Sensor Networks | 2014

Progressively refined Wyner-Ziv video coding for visual sensors

Nikos Deligiannis; Frederik Verbist; Jürgen Slowack; Rik Van de Walle; Peter Schelkens; Adrian Munteanu

Wyner-Ziv video coding constitutes an alluring paradigm for visual sensor networks, offering efficient video compression with low complexity encoding characteristics. This work presents a novel hash-driven Wyner-Ziv video coding architecture for visual sensors, implementing the principles of successively refined Wyner-Ziv coding. To this end, so-called side-information refinement levels are constructed for a number of grouped frequency bands of the discrete cosine transform. The proposed codec creates side information by means of an original overlapped block motion estimation and pixel-based multi-hypothesis prediction technique, specifically built around the pursued refinement strategy. The quality of the side information generated at every refinement level is successively improved, leading to gradually enhanced Wyner-Ziv coding performance. Additionally, this work explores several temporal prediction structures, including a new hierarchical unidirectional prediction structure, providing both temporal scalability and low-delay coding. Experimental results include a thorough evaluation of our novel Wyner-Ziv codec, assessing the impact of the proposed successive refinement scheme and the supported temporal prediction structures for a wide range of hash configurations and group-of-pictures sizes. The results report significant compression gains with respect to benchmark systems in Wyner-Ziv video coding (e.g., up to 42.03% over DISCOVER) as well as versus alternative state-of-the-art schemes refining the side information.
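
For illustration, one plausible way to group the 16 frequency bands of a 4x4 DCT into successive side-information refinement levels (zig-zag order, lowest frequencies first); the number of levels and the grouping are assumptions, not the configuration used in the paper.

```python
# Illustrative grouping of the 16 bands of a 4x4 DCT into side-information
# refinement levels, in zig-zag order (lowest frequencies decoded first).
ZIGZAG_4x4 = [(0, 0), (0, 1), (1, 0), (2, 0), (1, 1), (0, 2), (0, 3), (1, 2),
              (2, 1), (3, 0), (3, 1), (2, 2), (1, 3), (2, 3), (3, 2), (3, 3)]

def refinement_levels(n_levels=4):
    per_level = len(ZIGZAG_4x4) // n_levels
    return [ZIGZAG_4x4[i * per_level:(i + 1) * per_level]
            for i in range(n_levels)]
```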

Collaboration


Dive into Jürgen Slowack's collaboration.

Top Co-Authors

Adrian Munteanu (Vrije Universiteit Brussel)
Nikos Deligiannis (Vrije Universiteit Brussel)
Peter Schelkens (Vrije Universiteit Brussel)
Frederik Verbist (Vrije Universiteit Brussel)