Peter Schelkens
Katholieke Universiteit Leuven
Publications
Featured research published by Peter Schelkens.
Signal Processing: Image Communication | 2005
Luk Overmeire; Lode Nachtergaele; Fabio Verdicchio; Joeri Barbarien; Peter Schelkens
In the literature, several rate control techniques have been proposed that aim at optimal quality of digitally encoded video under given bit-budget, channel-rate and buffer-size constraints. Typically, these approaches are group-of-pictures (GOP) based. For longer, heterogeneous sequences, they become unacceptably complex or suffer from model mismatches. In this paper, an off-line segment-based rate control approach is proposed for controlling the distortion variation across successive shots of a video sequence when encoding with single-layer (MPEG-4 baseline, MPEG-4 AVC) and scalable (wavelet) video codecs. Consistent quality is achieved by optimally distributing the available bits among the different segments, based on efficient rate-distortion (R-D) modelling of each segment. The individual segments are defined using shot segmentation and activity analysis techniques. The algorithm is formulated for three distribution models: download, progressive download and streaming. The results indicate that the proposed technique improves quality consistency significantly, while the processing overhead compared to classical two-pass variable bit-rate (VBR) encoding remains limited.
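The consistent-quality allocation described above can be illustrated with a small sketch: assuming a hypothetical power-law R-D model D_i(R) = a_i * R**(-b_i) fitted per segment (the paper's actual models and data are not reproduced here), a common distortion target is found by bisection so that the per-segment rates exactly spend the bit budget.

    def rate_for_distortion(a, b, d):
        # Invert D = a * R**(-b): the rate needed to reach distortion d.
        return (a / d) ** (1.0 / b)

    def allocate_bits(segments, total_bits):
        # Bisection on a common distortion target d so that the per-segment
        # rates R_i(d) together spend exactly the available bit budget.
        lo, hi = 1e-12, 1e12
        for _ in range(200):
            d = 0.5 * (lo + hi)
            spent = sum(rate_for_distortion(a, b, d) for a, b in segments)
            if spent > total_bits:
                lo = d   # target too ambitious: relax the distortion
            else:
                hi = d   # budget left over: tighten the distortion
        return [rate_for_distortion(a, b, hi) for a, b in segments], hi

    # Example: three shots; a steeper exponent b marks easier-to-code content.
    rates, d_star = allocate_bits([(4.0, 0.9), (7.5, 0.7), (2.0, 1.1)], total_bits=3000)
    print([round(r) for r in rates], d_star)

Equalizing distortion across segments is one simple way to obtain the consistent quality the abstract refers to; other objectives (e.g. minimizing distortion variance) fit the same bisection skeleton.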
International Conference on Acoustics, Speech, and Signal Processing | 2005
Eric Salemi; Claude Desset; Jan Cornelis; Peter Schelkens
Owing to improvements in compression technology, limited-bandwidth channels can convey more data than before. However, since compressed streams are very sensitive to transmission errors, joint optimization of source and channel coding is required. Unequal error protection (UEP) is the most pragmatic way to achieve this; it exploits the embedded nature of the bitstreams produced by scalable coders. However, depending on the number of substreams and protection levels, optimizing UEP can be extremely complex. A large simplification comes from considering the contribution of each substream to the global distortion as independent of the others. This paper explores this distortion-additivity assumption for JPEG2000 and shows that, although the assumption does not fully hold, it has no visible consequence on UEP performance, while significantly reducing the complexity of the optimization process.
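A minimal sketch of what the additivity assumption buys, with made-up numbers: each substream is treated as reducing the overall distortion by its own gain whenever it survives the channel, so its expected contribution depends only on its own protection level. The channel model, substream sizes and protection levels below are illustrative, not taken from the paper.

    import itertools

    def expected_distortion(levels, gains, p_ok):
        # Additivity assumption: E[D] = D0 - sum_i gain_i * P(substream i decoded).
        d0 = sum(gains)                     # distortion when nothing is decoded
        return d0 - sum(g * p_ok[l] for g, l in zip(gains, levels))

    def rate_cost(levels, sizes, overhead):
        return sum(s * overhead[l] for s, l in zip(sizes, levels))

    gains = [40.0, 12.0, 5.0, 2.0]          # distortion reduction per substream
    sizes = [100, 200, 400, 800]            # substream sizes (bytes)
    p_ok = {0: 0.80, 1: 0.95, 2: 0.999}     # survival probability per protection level
    overhead = {0: 1.0, 1: 1.25, 2: 1.6}    # rate expansion per protection level
    budget = 2000                           # total byte budget after protection

    best = min((lv for lv in itertools.product(p_ok, repeat=len(gains))
                if rate_cost(lv, sizes, overhead) <= budget),
               key=lambda lv: expected_distortion(lv, gains, p_ok))
    print(best, expected_distortion(best, gains, p_ok))

The exhaustive search above is exponential in the number of substreams; the point made in the paper is that, once per-substream distortions are treated as additive, much cheaper allocators reach essentially the same allocation.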
Global Communications Conference | 2006
E. Salemi; Claude Desset; Antoine Dejonghe; J. Cornelis; Peter Schelkens
Unequal error protection (UEP) is the most practical way to achieve joint source-channel coding while keeping a clean separation between the source and channel coders. In this paper, a generic UEP allocation methodology is proposed. The methodology is based on a run-time algorithm exploiting design-time models, and is applicable to bit-level as well as packet-based transmission scenarios. It relies on source-independent modeling and has linear complexity with respect to the number of substreams in the source-coded stream and the number of available protection levels, which makes it an attractive solution for practical scenarios. We obtain a substantial complexity reduction compared to a classical Lagrangian optimization while keeping the allocation of protection levels optimal. For a typical source and channel granularity, we obtain a factor-of-six complexity reduction.
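One plausible way to reach linear-ish complexity in (substreams x protection levels), sketched below with illustrative numbers, is a greedy allocator that repeatedly upgrades whichever substream offers the best distortion reduction per extra protection byte. This is a hedged illustration of the complexity argument only, not the allocation algorithm of the paper.

    import heapq

    def greedy_uep(gains, sizes, p_ok, overhead, budget):
        levels = [0] * len(gains)                    # start at weakest protection
        spent = sum(s * overhead[0] for s in sizes)
        heap = []
        def push(i):
            l = levels[i]
            if l + 1 in p_ok:
                dd = gains[i] * (p_ok[l + 1] - p_ok[l])          # distortion gained
                dr = sizes[i] * (overhead[l + 1] - overhead[l])  # rate paid
                heapq.heappush(heap, (-dd / dr, i))
        for i in range(len(gains)):
            push(i)
        while heap:
            _, i = heapq.heappop(heap)
            l = levels[i]
            extra = sizes[i] * (overhead[l + 1] - overhead[l])
            if spent + extra <= budget:
                spent += extra
                levels[i] = l + 1
                push(i)                              # consider the next upgrade of i
        return levels

    print(greedy_uep(gains=[40.0, 12.0, 5.0], sizes=[100, 200, 400],
                     p_ok={0: 0.80, 1: 0.95, 2: 0.999},
                     overhead={0: 1.0, 1: 1.25, 2: 1.6}, budget=1000))

Each substream is examined at most once per protection level, which is where the linear scaling comes from, in contrast to a Lagrangian search over multiplier values.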
Applications of Digital Image Processing XL | 2017
Dale Stolitzka; Tim Bruylants; Peter Schelkens
Visually lossless image coding in isochronous display streaming or plesiochronous networks reduces link complexity and power consumption and increases available link bandwidth. A new set of codecs developed within the last four years promises a new level of coding quality, but requires evaluation techniques that are sufficiently sensitive to the small artifacts or color variations induced by this new breed of codecs. This paper begins with a summary of the new ISO/IEC 29170-2, a procedure for the evaluation of lossless coding, and reports new work by JPEG to extend the procedure in two important ways: for HDR content, and for evaluating the differences between still images, panning images and image sequences. ISO/IEC 29170-2 relies on processing test images through a well-defined process chain for subjective, forced-choice psychophysical experiments. The procedure sets the acceptable quality level equal to one just noticeable difference. Traditional image and video coding evaluation techniques, such as those used for television evaluation, have not proven sufficiently sensitive to the small artifacts that may be induced by these codecs. In 2015, JPEG received new requirements to expand the evaluation of visually lossless coding to high dynamic range images, slowly moving (panning) images, and image sequences. These requirements are the basis for the new amendments to the ISO/IEC 29170-2 procedure described in this paper. These amendments promise to be highly useful for new content in television and cinema mezzanine networks. The amendments passed the final ballot in April 2017 and are on track to be published in 2018.
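To make the forced-choice idea concrete, the sketch below scores one stimulus from a hypothetical two-alternative forced-choice trial set, using a 75% correct-detection rate as a stand-in for the one-JND criterion; the exact pass/fail rules and confidence bounds of ISO/IEC 29170-2 are not reproduced here, and the response counts are invented.

    from math import comb

    def p_value_above_chance(correct, trials, chance=0.5):
        # One-sided binomial test: probability of scoring >= `correct` by guessing.
        return sum(comb(trials, k) * chance**k * (1 - chance)**(trials - k)
                   for k in range(correct, trials + 1))

    correct, trials = 19, 30                       # made-up observer responses
    rate = correct / trials
    print(f"detection rate = {rate:.2f}, p(guessing) = {p_value_above_chance(correct, trials):.3f}")
    print("below one JND" if rate < 0.75 else "at or above one JND")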
Computer Graphics and Imaging | 2013
Jan Lievens; Ruxandra Florea; Donny Tytgat; Patrice Rondao Alface; Peter Schelkens; Adrian Munteanu
A 3D human head scan contains both texture and mesh data to convey the illusion of a human head. We perform subjective tests with a set of observers to distil the relative importance of texture resolution versus mesh resolution for human head scans, which is important for bandwidth-limited applications such as video conferencing. The mean observer scores (MOS) obtained from the tests allow us to propose a numeric model for estimating user perception in the tested conditions.
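As an illustration of what such a numeric model could look like, the sketch below fits a log-linear surface MOS ~ c0 + c1*log2(texture) + c2*log2(mesh) by least squares; the functional form and the sample points are assumptions made for the sketch, not the paper's actual model or data.

    import numpy as np

    # (texture resolution in pixels, mesh resolution in vertices, observed MOS) -- made-up points
    samples = np.array([
        (512,  5000, 2.1), (1024,  5000, 2.9), (2048,  5000, 3.4),
        (512, 20000, 2.6), (1024, 20000, 3.5), (2048, 20000, 4.1),
        (512, 80000, 2.8), (1024, 80000, 3.8), (2048, 80000, 4.5),
    ])
    X = np.column_stack([np.ones(len(samples)),
                         np.log2(samples[:, 0]),      # texture term
                         np.log2(samples[:, 1])])     # mesh term
    coef, *_ = np.linalg.lstsq(X, samples[:, 2], rcond=None)
    print("MOS ~ %.2f + %.2f*log2(texture) + %.2f*log2(mesh)" % tuple(coef))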
Archive | 2012
Shahid M. Satti; Leon Denis; Ruxandra Florea; Jan P.H. Cornelis; Peter Schelkens; Adrian Munteanu
3D graphics applications make use of polygonal 3D meshes to represent object shapes. The recent introduction of high-performance laser scanners and fast microcomputer systems has given rise to high-definition graphics applications. In such applications, objects with complex textures are represented using dense 3D meshes consisting of hundreds of thousands of vertices. Due to their enormous data size, such highly detailed 3D meshes are rather intricate to store, costly to transmit over bandwidth-limited transmission media, and hard to display on end-user terminals with diverse display capabilities. Scalable compression, wherein the source representation can be adapted to the user's requests, the available bandwidth and the computational capabilities, is thus of paramount importance in order to make efficient use of the available resources to process, store and transmit high-resolution meshes.
Archive | 2002
Geert Van der Auwera; Ioannis Andreopoulos; Adrian Munteanu; Peter Schelkens; Jan P.H. Cornelis
Proc. of the International Workshop on Video Processing and Quality Metrics for Consumer Electronics | 2012
Geert Braeckman; Adriaan Barri; Gabor Fodor; Ann Dooms; Joeri Barbarien; Peter Schelkens; András Bohó; Li Weng
Archive | 2002
Alin Alecu; Adrian Munteanu; Peter Schelkens; Jan P.H. Cornelis; Steven Dewitte
Archive | 2001
Yiannis Andreopoulos; Adrian Munteanu; Geert Van der Auwera; Peter Schelkens; Jan P.H. Cornelis