Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Tom Paridaens is active.

Publication


Featured research published by Tom Paridaens.


Journal of Visual Communication and Image Representation | 2009

Moving object detection in the H.264/AVC compressed domain for video surveillance applications

Chris Poppe; Sarah De Bruyne; Tom Paridaens; Peter Lambert; Rik Van de Walle

In this paper, a novel method is presented to detect moving objects in H.264/AVC [T. Wiegand, G. Sullivan, G. Bjontegaard, A. Luthra, Overview of the H.264/AVC video coding standard, IEEE Transactions on Circuits and Systems for Video Technology, 13 (7) (2003) 560-576] compressed video surveillance sequences. Related work within the H.264/AVC compressed domain analyses the motion vector field to find moving objects. However, motion vectors are created from a coding perspective, and additional complexity is needed to clean up the noisy field. Hence, an alternative approach is presented here, based on the size (in bits) of the blocks and transform coefficients used within the video stream. The system is restricted to the syntax level and achieves high execution speeds, up to 20 times faster than the related work. To demonstrate the detection quality, a detailed comparison with related work is presented for several challenging video sequences. Finally, the influence of different encoder settings is investigated to show the robustness of our system.
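The core idea, thresholding per-macroblock bit counts instead of analysing a noisy motion vector field, can be illustrated with a minimal sketch. The threshold value and the toy bit-count grid below are invented for illustration; the actual method operates on H.264/AVC syntax elements and is more elaborate.

```python
import numpy as np

def detect_moving_blocks(bit_sizes, threshold=64):
    """Flag macroblocks whose coded size (in bits) exceeds a threshold.

    bit_sizes: 2-D array of per-macroblock bit counts parsed from the
    stream (hypothetical input; the paper derives these at the syntax
    level without full decoding). Returns a boolean mask of candidate
    moving-object blocks.
    """
    return bit_sizes > threshold

# Toy example: a 4x4 grid of macroblocks with one "busy" region.
frame_bits = np.array([
    [10,  12,   9, 11],
    [13, 250, 240, 10],
    [11, 235, 245, 12],
    [ 9,  10,  11, 10],
])
print(detect_moving_blocks(frame_bits).astype(int))
```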


International Symposium on Broadband Multimedia Systems and Broadcasting | 2008

Statistical multiplexing using SVC

Marc Jacobs; Joeri Barbarien; S. Tondeur; R. Van de Walle; Tom Paridaens; Peter Schelkens

In this paper, we present a new system for real-time statistical multiplexing of video streams in DVB-H broadcasting using the scalable video coding (SVC) extension of H.264/AVC. Compared to statistical multiplexing systems employing classical, non-scalable video codecs, the proposed system does not require computationally expensive re-encoding or transcoding to adapt the bitrate of each video stream at play-out time. Additionally, a new joint rate control algorithm is proposed to dynamically distribute the channel bandwidth among the different video streams. Experimental results show that the proposed multiplexer achieves better global rate-distortion performance and lower quality differences between the different channels.
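As a rough illustration of what such a joint rate controller does, the sketch below splits a shared channel proportionally to a per-stream complexity measure and then snaps each stream to its highest affordable SVC extraction point. All numbers and the complexity metric are hypothetical; the paper's actual algorithm differs in detail.

```python
def allocate_bandwidth(complexities, total_kbps):
    """Split a shared channel proportionally to stream complexity."""
    total = sum(complexities)
    return [total_kbps * c / total for c in complexities]

def snap_to_layer(target_kbps, layer_rates_kbps):
    """Pick the highest extractable SVC layer not exceeding the target."""
    feasible = [r for r in layer_rates_kbps if r <= target_kbps]
    return max(feasible) if feasible else min(layer_rates_kbps)

# Three streams sharing a 6000 kbit/s DVB-H channel (toy numbers).
complexities = [1.0, 2.5, 1.5]      # e.g. recent distortion estimates
layers = [500, 1000, 2000, 3000]    # extraction points per stream
targets = allocate_bandwidth(complexities, 6000)
print([snap_to_layer(t, layers) for t in targets])
```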


Multimedia Tools and Applications | 2010

NinSuna: a fully integrated platform for format-independent multimedia content adaptation and delivery using Semantic Web technologies

Davy Van Deursen; Wim Van Lancker; Wesley De Neve; Tom Paridaens; Erik Mannens; Rik Van de Walle

The current multimedia landscape is characterized by a significant heterogeneity in terms of coding and delivery formats, usage environments, and user preferences. The main contribution of this paper is a discussion of the design and functioning of a fully integrated platform for multimedia adaptation and delivery, called NinSuna. This platform is able to efficiently deal with the aforementioned heterogeneity in the present-day multimedia ecosystem, thanks to the use of format-agnostic adaptation engines (i.e., engines independent of the underlying coding format) and format-agnostic packaging engines (i.e., engines independent of the underlying delivery format). Moreover, NinSuna also provides a seamless integration between metadata standards and adaptation processes. Both our format-independent adaptation and packaging techniques rely on a model for multimedia bitstreams, describing the structural, semantic, and scalability properties of these multimedia streams. News sequences were used as a test case for our platform, enabling the user to select news fragments matching his/her specific interests and usage environment characteristics.


Workshop on Image Analysis for Multimedia Interactive Services | 2007

XML-driven Bitrate Adaptation of SVC Bitstreams

Tom Paridaens; Davy De Schrijver; W. De Neve; R. Van de Walle

Thanks to technological evolutions, the number of devices capable of playing video bitstreams is growing. The heterogeneity of these devices grows in terms of screen resolution, processing power, and available bandwidth. In this paper, we describe an MPEG-21 Bitstream Syntax Description Language-based (BSDL-based) adaptation framework that allows providers to easily adapt scalable bitstreams without having to recode the original bitstream. We describe the steps necessary to adapt bitstreams through BSDL. The main contribution of this paper is an optimized adaptation framework using a Bitstream Syntax Schema developed to minimize the size of the Bitstream Syntax Descriptions (BSDs). Furthermore, we created a Streaming Transformations for XML (STX) stylesheet that exploits the advantages of Fine Grain Scalability to adapt the bitrate of Scalable Video Coding bitstreams as accurately as possible. Our results show that BSDL-based adaptation can compete with binary adaptation tools: target bitrates are reached within a margin of 2%, which is comparable to the reference software, which uses binary adaptation.
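Conceptually, BSDL-based adaptation transforms an XML description of the bitstream and then regenerates the binary stream from the pruned description. The minimal sketch below drops NAL-unit descriptions above a chosen quality level from a toy BSD; the element and attribute names are invented and do not match the actual Bitstream Syntax Schema.

```python
import xml.etree.ElementTree as ET

# Toy bitstream syntax description: one element per NAL unit,
# tagged with a (hypothetical) FGS quality level.
BSD = """<bitstream>
  <nalUnit level="0" bytes="1200"/>
  <nalUnit level="1" bytes="800"/>
  <nalUnit level="2" bytes="600"/>
  <nalUnit level="0" bytes="1150"/>
  <nalUnit level="1" bytes="790"/>
</bitstream>"""

def adapt_bsd(xml_text, max_level):
    """Remove NAL-unit descriptions above max_level (the adaptation step)."""
    root = ET.fromstring(xml_text)
    for nal in list(root):
        if int(nal.get("level")) > max_level:
            root.remove(nal)
    return root

pruned = adapt_bsd(BSD, max_level=1)
kept = sum(int(n.get("bytes")) for n in pruned)
# The adapted binary bitstream would be regenerated from this pruned BSD.
print(f"{len(pruned)} NAL units kept, {kept} bytes")
```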


International Symposium on Multimedia | 2008

NinSuna: A Format-Independent Multimedia Content Adaptation Platform Based on Semantic Web Technologies

D. Van Deursen; W. Van Lancker; Tom Paridaens; W. De Neve; Erik Mannens; R. Van de Walle

Multimedia content adaptation is gaining importance because of the growing amount of multimedia content on the one hand and the growing diversity in usage environments on the other. Furthermore, to deal with the growing number of coding formats for multimedia content, format-independent adaptation systems are desired. These systems support the exploitation of scalability to meet the usage environment, as well as semantic adaptations to meet user preferences. In this demonstration, we present NinSuna, a fully integrated multimedia content adaptation platform based on Semantic Web technologies. It aims at being deployable in streaming environments and relies on format-independent, semantic-aware adaptation engines.


Data Compression Conference | 2016

Leveraging CABAC for No-Reference Compression of Genomic Data with Random Access Support

Tom Paridaens; Jens Panneel; Wesley De Neve; Peter A. Lambert; Rik Van de Walle

In previous work, the authors developed a modular no-reference framework that compresses FASTA files by applying a predict-and-residue method, as used in video coding. We extended this framework with support for Context-Adaptive Binary Arithmetic Coding (CABAC), while preserving random access functionality and offering support for the full IUB/IUPAC nucleic acid code alphabet.
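A minimal sketch of the predict-and-residue idea: predict each nucleotide from a small context and emit a hit flag, plus a residue on a miss, as symbols for a downstream entropy coder such as CABAC. The order-1 frequency model below is a deliberate simplification of the framework's prediction tools.

```python
from collections import defaultdict

def predict_and_residue(sequence):
    """Order-1 predictor: guess the base that most often followed the
    previous base so far; emit (hit, residue) symbols for a later
    entropy-coding stage."""
    counts = defaultdict(lambda: defaultdict(int))
    prev, symbols = None, []
    for base in sequence:
        if prev is not None and counts[prev]:
            guess = max(counts[prev], key=counts[prev].get)
        else:
            guess = "A"                      # arbitrary cold-start guess
        symbols.append((base == guess, None if base == guess else base))
        if prev is not None:
            counts[prev][base] += 1
        prev = base
    return symbols

syms = predict_and_residue("ACGACGACGT")
hits = sum(1 for hit, _ in syms if hit)
print(f"{hits}/{len(syms)} predictions correct")
```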


IEEE Transactions on Consumer Electronics | 2016

Simultaneous encoder for high-dynamic-range and low-dynamic-range video

Johan De Praeter; Antonio Jesús Díaz-Honrubia; Tom Paridaens; Glenn Van Wallendael; Peter Lambert

High-dynamic-range (HDR) technology is an emerging video technology that allows displays to produce a higher range of luminance to better approximate the range of brightness perceived by the human eye. However, during the transition to this new technology, not all consumer devices will support the full range of luminance values offered by HDR. To also support these devices with lower dynamic ranges, content providers have to supply multiple dynamic-range versions to provide the best experience to all viewers. This means that the processing cost to compress these versions is multiplied by the number of versions. As a solution, this paper proposes a simultaneous encoder based on High Efficiency Video Coding. This encoder reuses parts of the coding information generated during compression of an HDR video to accelerate the encoding of a low-dynamic-range (LDR) version of the same video. The proposed method speeds up the encoder 299 times with a bit rate increase of 12.4% compared to a non-accelerated encode of the LDR version. This is more than 90 times faster than state-of-the-art fast encoding algorithms and allows simultaneous encoding of the two versions for approximately the computational cost of a single encoder.
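The acceleration rests on reusing coding decisions (such as block partitioning and mode choices) from the HDR encode when encoding the LDR version, instead of repeating the full mode search. A control-flow sketch with entirely hypothetical encoder hooks:

```python
# Control-flow sketch of decision reuse between the HDR and LDR encodes.
# encode_full() and encode_with_hints() are hypothetical stand-ins for an
# HEVC encoder's mode search; only the reuse pattern is the point here.

def encode_full(frame):
    """Full rate-distortion search: slow, returns bitstream + decisions."""
    decisions = {"cu_splits": "quadtree found by searching",
                 "modes": "intra/inter choices"}
    return b"hdr-bitstream", decisions

def encode_with_hints(frame, decisions):
    """Reuse the HDR decisions instead of re-running the search: fast."""
    return b"ldr-bitstream"

def simultaneous_encode(hdr_frame, ldr_frame):
    hdr_stream, decisions = encode_full(hdr_frame)        # pay the search once
    ldr_stream = encode_with_hints(ldr_frame, decisions)  # cheap second encode
    return hdr_stream, ldr_stream

print(simultaneous_encode("hdr-frame", "ldr-frame"))
```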


IEEE Global Conference on Signal and Information Processing | 2014

Towards block-based compression of genomic data with random access functionality

Tom Paridaens; Yves Van Stappen; Wesley De Neve; Peter Lambert; Rik Van de Walle

Current algorithms for compressing genomic data mostly focus on achieving high levels of effectiveness and reasonable levels of efficiency, ignoring the need for features such as random access and stream processing. Therefore, in this paper, we introduce a novel framework for compressing genomic data, with the aim of allowing for a better trade-off between effectiveness, efficiency, and functionality. To that end, we draw upon concepts taken from the area of media data processing. In particular, we propose to compress genomic data as small blocks of data, using encoding tools that predict the nucleotides and correct the prediction made by storing a residue. We also propose two techniques that facilitate random access. Our experimental results demonstrate that the compression effectiveness of the proposed approach is as low as 1.91 bits per nucleotide, significantly better than binary encoding (3 bits per nucleotide) and Huffman coding (2.21 bits per nucleotide).
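Random access follows from compressing the sequence as independently decodable blocks and keeping an index of their byte offsets, so a region can be decoded without touching the rest of the file. A minimal sketch using zlib as a stand-in block coder; the block size and index layout are illustrative, not the paper's actual format.

```python
import zlib

BLOCK_SIZE = 1000  # nucleotides per independently decodable block

def compress_blocks(sequence):
    """Compress fixed-size blocks independently; return data + offsets."""
    blobs, offsets, pos = [], [], 0
    for i in range(0, len(sequence), BLOCK_SIZE):
        blob = zlib.compress(sequence[i:i + BLOCK_SIZE].encode())
        offsets.append(pos)
        blobs.append(blob)
        pos += len(blob)
    return b"".join(blobs), offsets

def read_region(data, offsets, start, end):
    """Decode only the blocks overlapping [start, end): random access."""
    out = []
    for b in range(start // BLOCK_SIZE, (end - 1) // BLOCK_SIZE + 1):
        blob_end = offsets[b + 1] if b + 1 < len(offsets) else len(data)
        out.append(zlib.decompress(data[offsets[b]:blob_end]).decode())
    joined = "".join(out)
    base = (start // BLOCK_SIZE) * BLOCK_SIZE
    return joined[start - base:end - base]

data, idx = compress_blocks("ACGT" * 2500)   # 10,000-nucleotide toy genome
print(read_region(data, idx, 4998, 5006))    # decodes 2 blocks, not all 10
```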


bioRxiv | 2018

An introduction to MPEG-G, the new ISO standard for genomic information representation

Claudio Alberti; Tom Paridaens; Jan Voges; Daniel Naro; Junaid Jameel Ahmad; Massimo Ravasi; Daniele Renzi; Giorgio Zoia; Idoia Ochoa; Marco Mattavelli; Jaime Delgado; Mikel Hernaez

The MPEG-G standardization initiative is a coordinated international effort to specify a compressed data format that enables large-scale genomic data to be processed, transported and shared. The standard consists of a set of specifications (i.e., a book) describing: i) a normative format syntax, and ii) a normative decoding process to retrieve the information coded in a compliant file or bitstream. Such a decoding process enables the use of leading-edge compression technologies that have exhibited significant compression gains over currently used formats for storage of unaligned and aligned sequencing reads. Additionally, the standard provides a wealth of much-needed functionality, such as selective access, data aggregation, application programming interfaces to the compressed data, standard interfaces to support data protection mechanisms, support for streaming, and a procedure to assess the conformance of implementations. ISO/IEC is engaged in supporting the maintenance and availability of the standard specification, which guarantees the longevity of applications using MPEG-G. Finally, the standard ensures interoperability and integration with existing genomic information processing pipelines by providing support for conversion from the FASTQ/SAM/BAM file formats. In this paper we provide an overview of the MPEG-G specification, with particular focus on the main advantages and novel functionality it offers. As the standard only specifies the decoding process, encoding performance, both in terms of speed and compression ratio, can vary depending on specific encoder implementations, and will likely improve during the lifetime of MPEG-G. Hence, the performance statistics provided here are only indicative baseline examples of the technologies included in the standard.


Bioinformatics | 2018

AQUa: an adaptive framework for compression of sequencing quality scores with random access functionality

Tom Paridaens; Glenn Van Wallendael; Wesley De Neve; Peter Lambert

Motivation: The past decade has seen the introduction of new technologies that significantly lowered the cost of genome sequencing. As a result, the amount of genomic data that must be stored and transmitted is increasing exponentially. To mitigate storage and transmission issues, we introduce a framework for lossless compression of quality scores. Results: This article proposes AQUa, an adaptive framework for lossless compression of quality scores. To compress these quality scores, AQUa makes use of a configurable set of coding tools, extended with a Context-Adaptive Binary Arithmetic Coding scheme. When benchmarking AQUa against generic single-pass compressors, file sizes are reduced by up to 38.49% compared with GNU Gzip and by up to 6.48% compared with 7-Zip at the Ultra setting, while still providing support for random access. When comparing AQUa with the purpose-built, single-pass, state-of-the-art compressor SCALCE, which does not support random access, file sizes are reduced by up to 21.14%. When comparing AQUa with the purpose-built, dual-pass, state-of-the-art compressor QVZ, which does not support random access, file sizes are larger by 6.42-33.47%. However, for one test file, the file size is 0.38% smaller, illustrating the strength of our single-pass compression framework. This work has been spurred by the current activity on genomic information representation (MPEG-G) within the ISO/IEC SC29/WG11 technical committee. Availability and implementation: The software is available on GitHub: https://github.com/tparidae/AQUa.
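The flavour of the approach, adaptive context modelling of quality scores ahead of a binary arithmetic coder, can be sketched by estimating each score's probability from the previous score and tallying the ideal code length. The order-1 context, Laplace smoothing, and alphabet size below are illustrative simplifications of AQUa's configurable coding tools.

```python
import math
from collections import defaultdict

def estimate_compressed_bits(scores, alphabet_size=42):
    """Adaptive order-1 model: cost of each quality score given the
    previous one, in ideal arithmetic-coding bits (-log2 p)."""
    counts = defaultdict(lambda: defaultdict(int))
    bits, prev = 0.0, 0
    for q in scores:
        ctx = counts[prev]
        total = sum(ctx.values()) + alphabet_size  # Laplace smoothing
        bits += -math.log2((ctx[q] + 1) / total)
        ctx[q] += 1
        prev = q
    return bits

# Toy Phred-like quality track: smooth runs compress well under this model.
track = [30, 30, 31, 30, 29, 30, 30, 31, 30, 30] * 20
print(f"{estimate_compressed_bits(track) / len(track):.2f} bits/score vs 8 raw")
```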

Collaboration


Dive into Tom Paridaens's collaborations.

Top Co-Authors

Peter Schelkens

Vrije Universiteit Brussel
