
Publication


Featured research published by Antonin Descampe.


IEEE Transactions on Circuits and Systems for Video Technology | 2006

A Flexible Hardware JPEG 2000 Decoder for Digital Cinema

Antonin Descampe; François-Olivier Devaux; Gaël Rouvroy; Jean-Didier Legat; Jean-Jacques Quisquater; Benoît Macq

The image compression standard JPEG 2000 offers a large set of features useful for today's multimedia applications. Unfortunately, it is much more complex than older standards. Real-time applications, such as digital cinema, require a specific, secure, and scalable hardware implementation. In this paper, a decoding scheme is proposed with two main characteristics. First, the complete scheme fits in a field-programmable gate array without accessing any external memory, allowing integration in a secured system. Second, a customizable level of parallelization makes it possible to satisfy a broad range of constraints, depending on the signal resolution. The resulting architecture is therefore ready to meet upcoming digital cinema specifications.
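The customizable parallelization level described above can be sketched in software terms: a pool of identical code-block decoders whose count trades hardware area for throughput. This is a minimal illustration only; `decode_codeblock` is a placeholder, not the actual JPEG 2000 entropy decoder.

```python
from concurrent.futures import ThreadPoolExecutor

def decode_codeblock(block):
    # Placeholder for entropy-decoding one JPEG 2000 code-block;
    # reversing the samples just stands in for real work.
    return list(reversed(block))

def decode_tile(codeblocks, n_parallel=4):
    # n_parallel mirrors the customizable number of parallel decoding
    # units: more units, higher throughput, larger hardware footprint.
    with ThreadPoolExecutor(max_workers=n_parallel) as pool:
        return list(pool.map(decode_codeblock, codeblocks))

print(decode_tile([[1, 2, 3], [4, 5, 6]], n_parallel=2))
# [[3, 2, 1], [6, 5, 4]]
```

In the hardware design this trade-off is fixed at synthesis time; here the pool size simply models it.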


IEEE Transactions on Image Processing | 2007

Prefetching and Caching Strategies for Remote and Interactive Browsing of JPEG2000 Images

Antonin Descampe; C. De Vleeschouwer; Marcela Iregui; Benoît Macq; F. Marques

This paper considers the issues of scheduling and caching JPEG2000 data in client/server interactive browsing applications, under memory and channel bandwidth constraints. It analyzes how the conveyed data have to be selected at the server and managed within the client cache so as to maximize the reactivity of the browsing application. Formally, to render the dynamic nature of the browsing session, we assume the existence of a reaction model that defines when the user launches a new command as a function of the image quality displayed at the client. As a main outcome, our work demonstrates that, due to the latency inherent to client/server exchanges, a priori expectation about future navigation commands may help to improve the overall reactivity of the system. In our study, the browsing session is defined by the evolution of a rectangular window of interest (WoI) over time. At any given time, the WoI defines the position and the resolution of the image data to display at the client. The expectation about future navigation commands is then formalized based on a stochastic navigation model, which defines the probability that a given WoI is requested next, knowing previous WoI requests. Based on that knowledge, several scheduling scenarios are considered. The first scenario is conventional and transmits all the data corresponding to the current WoI before prefetching the most promising data outside the current WoI. Alternative scenarios are then proposed to anticipate prefetching, by scheduling data expected to be requested in the future before all the current WoI data have been sent out. Our results demonstrate that, for predictable navigation commands, anticipated prefetching improves the overall reactivity of the system by up to 30% compared to the conventional scheduling approach. They also reveal that an accurate knowledge of the reaction model is not required to obtain these significant improvements.
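The conventional and anticipated scheduling scenarios can be contrasted in a toy form. Packet identifiers, the probability map, and the promotion threshold below are hypothetical names introduced for illustration; the real system ranks JPEG2000 data under the stochastic navigation model.

```python
def schedule_packets(current_woi, candidates, anticipate=False, promote_at=0.6):
    """current_woi: packet ids of the displayed window of interest.
    candidates: dict packet id -> probability that its WoI is requested
    next (from the navigation model). Conventional scheduling sends all
    current-WoI data first; anticipated scheduling promotes very likely
    future packets ahead of the tail of the current-WoI data."""
    by_prob = sorted(candidates, key=candidates.get, reverse=True)
    if not anticipate:
        return current_woi + by_prob
    promoted = [p for p in by_prob if candidates[p] >= promote_at]
    deferred = [p for p in by_prob if candidates[p] < promote_at]
    half = len(current_woi) // 2
    return current_woi[:half] + promoted + current_woi[half:] + deferred

probs = {"east_woi": 0.8, "west_woi": 0.1}
print(schedule_packets(["c1", "c2"], probs))                   # conventional
print(schedule_packets(["c1", "c2"], probs, anticipate=True))  # anticipated
```

With a predictable user (high next-request probability), the anticipated order delivers the likely future data sooner, which is the source of the reported reactivity gain.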


international conference on multimedia and expo | 2005

Data prefetching for smooth navigation of large scale JPEG 2000 images

Antonin Descampe; Jihong Ou; Philippe Chevalier; Benoît Macq

Remote access to large-scale images has attracted growing interest in fields such as medical imaging and remote sensing. This raises the need for algorithms that guarantee navigation smoothness while minimizing the network resources used. In this paper, we present a model that takes advantage of JPEG 2000 scalability combined with a prefetching policy. The model uses the last user action to efficiently manage the cache and to prefetch the data most likely to be used next. Three different network configurations are considered. In each case, comparison with two more classical policies shows the improvement brought by our approach.
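The "last user action" heuristic can be illustrated with a constant-velocity guess of the next window of interest. The (x, y, w, h) tuple layout and the function name are assumptions made for this sketch, not the paper's notation.

```python
def predict_next_woi(prev, cur):
    """prev, cur: (x, y, w, h) windows of interest from the last two
    user actions. Constant-velocity assumption: the user keeps panning
    in the same direction, so the client prefetches the data covering
    the predicted window before it is actually requested."""
    dx, dy = cur[0] - prev[0], cur[1] - prev[1]
    return (cur[0] + dx, cur[1] + dy, cur[2], cur[3])

# The user panned right and slightly down; prefetch one more step ahead.
print(predict_next_woi((0, 0, 64, 64), (16, 8, 64, 64)))  # (32, 16, 64, 64)
```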


acm multimedia | 2006

Coarse-to-fine textures retrieval in the JPEG 2000 compressed domain for fast browsing of large image databases

Antonin Descampe; Pierre Vandergheynst; Christophe De Vleeschouwer; Benoît Macq

In many applications, the amount and resolution of digital images have increased significantly over the past few years. For this reason, there is growing interest in techniques for efficiently browsing and searching such huge data spaces. JPEG 2000, the latest compression standard from the JPEG committee, has several interesting features for handling very large images. In this paper, these features are used in a coarse-to-fine approach to retrieve specific information in a JPEG 2000 code-stream while minimizing the computational load required by such processing. Practically, a cascade of classifiers exploits the bit-depth and resolution scalability features intrinsically present in JPEG 2000 to progressively refine the classification process. Comparison with existing techniques in a texture-retrieval task shows the efficiency of this approach.
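The coarse-to-fine cascade amounts to early rejection: cheap classifiers working on coarsely decoded data discard most candidates before the expensive stages run. A toy sketch, with stage classifiers passed in as plain functions (the stages here are arbitrary examples, not the paper's texture classifiers):

```python
def cascade_classify(candidate, stages):
    # stages: list of (classifier, reject_threshold), ordered from the
    # cheapest (coarse resolution, few bit-planes) to the most expensive.
    # A candidate is rejected as soon as one stage scores below its
    # threshold, so most of the database never reaches the costly stages.
    score = 0
    for classify, threshold in stages:
        score = classify(candidate)
        if score < threshold:
            return False, score
    return True, score

stages = [(lambda x: x % 10, 3), (lambda x: x // 10, 5)]
print(cascade_classify(12, stages))  # (False, 2) -- rejected by the cheap stage
print(cascade_classify(84, stages))  # (True, 8)  -- survived all stages
```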


Applications of Digital Image Processing XL | 2017

JPEG XS, a new standard for visually lossless low-latency lightweight image compression

Antonin Descampe; Joachim Keinert; Thomas Richter; Siegfried Fößel; Gaël Rouvroy

JPEG XS is an upcoming standard from the JPEG Committee (formally known as ISO/IEC SC29 WG1). It aims to provide an interoperable visually lossless low-latency lightweight codec for a wide range of applications including mezzanine compression in broadcast and Pro-AV markets. This requires optimal support of a wide range of implementation technologies such as FPGAs, CPUs and GPUs. Targeted use cases are professional video links, IP transport, Ethernet transport, real-time video storage, video memory buffers, and omnidirectional video capture and rendering. In addition to the evaluation of the visual transparency of the selected technologies, a detailed analysis of the hardware and software complexity as well as the latency has been done to make sure that the new codec meets the requirements of the above-mentioned use cases. In particular, the end-to-end latency has been constrained to a maximum of 32 lines. Concerning the hardware complexity, neither encoder nor decoder should require more than 50% of an FPGA similar to Xilinx Artix-7 or 25% of an FPGA similar to Altera Cyclone V. This process resulted in a coding scheme made of an optional color transform, a wavelet transform, the entropy coding of the highest magnitude level of groups of coefficients, and the raw inclusion of the truncated wavelet coefficients. This paper presents the details and status of the standardization process, a technical description of the future standard, and the latest performance evaluation results.
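The last two elements of the coding scheme (entropy coding of a coefficient group's highest magnitude level, plus raw inclusion of the truncated coefficients) can be illustrated with a toy group coder. The layout below is illustrative only and is not the JPEG XS bitstream syntax.

```python
def code_group(coeffs, truncation=0):
    """Toy coder for one group of wavelet coefficients: signal the
    highest significant bit-plane of the group (its magnitude level),
    then include each coefficient's remaining bit-planes raw, down to
    the rate-control truncation point. Signs are omitted for brevity."""
    mags = [abs(c) for c in coeffs]
    top = max(mags).bit_length()        # highest magnitude level of the group
    planes = max(top - truncation, 0)   # bit-planes actually transmitted
    raw = [(m >> truncation) & ((1 << planes) - 1) for m in mags]
    return top, raw

print(code_group([5, 2, 0, 9]))                # (4, [5, 2, 0, 9])
print(code_group([5, 2, 0, 9], truncation=1))  # (4, [2, 1, 0, 4])
```

Coding only the group's top magnitude level keeps the entropy coder trivial, which is what makes the scheme lightweight enough for the FPGA budgets quoted above.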


IEEE Transactions on Image Processing | 2011

Scalable Feature Extraction for Coarse-to-Fine JPEG 2000 Image Classification

Antonin Descampe; C. De Vleeschouwer; Pierre Vandergheynst; Benoît Macq

In this paper, we address the issues of analyzing and classifying JPEG 2000 code-streams. An original representation, called integral volume, is first proposed to compute local image features progressively from the compressed code-stream, on any spatial image area, regardless of the code-block borders. Then, a JPEG 2000 classifier is presented that uses integral volumes to learn an ensemble of randomized trees. Several classification tasks are performed on various JPEG 2000 image databases and results are in the same range as the ones obtained in the literature with noncompressed versions of these databases. Finally, a cascade of such classifiers is considered, in order to specifically address the image retrieval issue, i.e., bi-class problems characterized by a highly skewed distribution. An efficient way to learn and optimize such a cascade is proposed. We show that staying in a JPEG 2000 framework, initially seen as a constraint to avoid heavy decoding operations, is actually an advantage as it can benefit from the multiresolution and multilayer paradigms inherently present in this compression standard. In particular, unlike other existing cascaded retrieval systems, the features used along our cascade are increasingly discriminant and lead therefore to a better tradeoff of complexity versus performance.
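The integral-volume representation extends the classic integral-image trick: after one cumulative-sum pass, the feature sum over any rectangular region costs four table lookups, independent of region size or of where code-block borders fall. A 2-D sketch of that underlying mechanism (the paper's actual structure adds a third, bit-plane/layer dimension):

```python
def integral_image(img):
    # ii[y][x] holds the sum of img over the rectangle [0..y) x [0..x),
    # built in a single pass over the data.
    h, w = len(img), len(img[0])
    ii = [[0] * (w + 1) for _ in range(h + 1)]
    for y in range(h):
        for x in range(w):
            ii[y + 1][x + 1] = img[y][x] + ii[y][x + 1] + ii[y + 1][x] - ii[y][x]
    return ii

def region_sum(ii, y0, x0, y1, x1):
    # Sum of img[y0:y1][x0:x1] in O(1): four lookups, any region.
    return ii[y1][x1] - ii[y0][x1] - ii[y1][x0] + ii[y0][x0]

ii = integral_image([[1, 2], [3, 4]])
print(region_sum(ii, 0, 0, 2, 2))  # 10 (whole image)
print(region_sum(ii, 1, 0, 2, 2))  # 7  (bottom row)
```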


IEEE Journal on Emerging and Selected Topics in Circuits and Systems | 2016

Quality and Error Robustness Assessment of Low-Latency Lightweight Intra-Frame Codecs for Screen Content Compression

Alexandre Willème; Antonin Descampe; Sébastien Lugan; Benoît Macq

Today, many existing types of video transmission and storage infrastructure are not able to handle UHD uncompressed video in real time. To reduce the required bit rates, a low-latency lightweight compression scheme is needed. To this end, several standardization efforts, such as Display Stream Compression, Advanced DSC, and JPEG XS, are currently being made. Focusing on screen content use cases, this paper provides a comparison of existing codecs suited for this field of application. In particular, the performance of DSC, VC-2, JPEG 2000 (in low-latency and low-complexity configurations), JPEG, and HEVC Screen Content Coding Extension (SCC) in intra mode is evaluated. First, quality is assessed in single and multiple generations. Then, error robustness is evaluated by inserting one-bit errors at random positions in the compressed bitstreams. Unsurprisingly, the most complex algorithm, HEVC SCC intra, achieves the highest compression efficiency on screen content. JPEG 2000 performs well in the three experiments while HEVC SCC does not provide multi-generation robustness. DSC guarantees quality preservation in single generation at high bit rates and VC-2 provides very high error resilience. This work gives the reader an overview of the objective quality assessment that will be conducted as part of JPEG XS evaluation procedures.
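Multi-generation robustness can be illustrated with a toy quantizer standing in for a full codec: an idempotent scheme loses quality only in the first encode/decode cycle, while a non-idempotent one keeps drifting. This is a deliberately simplified model, not any of the evaluated codecs.

```python
def quantize_roundtrip(samples, step=4):
    # Toy stand-in for one encode/decode generation: uniform quantization.
    return [round(s / step) * step for s in samples]

def generations(samples, n=3, step=4):
    """Apply the toy codec n times in a row and record each generation.
    Pure uniform quantization is idempotent: all loss happens in the
    first pass, the property probed by the multi-generation experiment."""
    history, out = [], samples
    for _ in range(n):
        out = quantize_roundtrip(out, step)
        history.append(list(out))
    return history

print(generations([3, 10, 18], n=3))
# first generation changes the samples; later generations are identical
```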


Applications of Digital Image Processing XL SPIE Optical Engineering + Applications | 2017

JPEG XS-based frame buffer compression inside HEVC for power-aware video compression

Alexandre Willème; Antonin Descampe; Gaël Rouvroy; Pascal Pellegrin; Benoît Macq

With the emergence of Ultra-High Definition video, reference frame buffers (FBs) inside HEVC-like encoders and decoders have to sustain huge bandwidth. The power consumed by these external memory accesses accounts for a significant share of the codec's total consumption. This paper describes a solution to significantly decrease the FB's bandwidth, making the HEVC encoder more suitable for use in power-aware applications. The proposed prototype consists of integrating an embedded lightweight, low-latency and visually lossless codec at the FB interface inside HEVC in order to store each reference frame as several compressed bitstreams. As opposed to previous works, our solution compresses large picture areas (ranging from a CTU to a frame stripe) independently in order to better exploit the spatial redundancy found in the reference frame. This work investigates two data reuse schemes, namely Level-C and Level-D. Our approach is made possible thanks to simplified motion estimation mechanisms further reducing the FB's bandwidth and inducing very low quality degradation. In this work, we integrated JPEG XS, the upcoming standard for lightweight low-latency video compression, inside HEVC. In practice, the proposed implementation is based on HM 16.8 and on XSM 1.1.2 (JPEG XS Test Model). Through this paper, the architecture of our HEVC with JPEG XS-based frame buffer compression is described. Then its performance is compared to the HM encoder. Compared to previous works, our prototype provides significant external memory bandwidth reduction. Depending on the reuse scheme, one can expect bandwidth and FB size reduction ranging from 50% to 83.3% without significant quality degradation.
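The reported 50% to 83.3% reductions correspond to compression ratios of 2:1 and 6:1, under the simplifying assumption that all reference-frame traffic passes through the embedded codec:

```python
def bandwidth_reduction(compression_ratio):
    # Fraction of frame-buffer traffic saved if every reference-frame
    # access goes through a codec with the given compression ratio.
    return 1 - 1 / compression_ratio

print(f"{bandwidth_reduction(2):.1%}")  # 50.0%
print(f"{bandwidth_reduction(6):.1%}")  # 83.3%
```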


Physica Medica | 2015

Impact of motion induced artifacts on automatic registration of lung tumors in Tomotherapy

Samuel Goossens; Antonin Descampe; Jonathan Orban de Xivry; John Aldo Lee; Antoine Delor; Guillaume Janssens; Xavier Geets

PURPOSE: Tomotherapy MV-CT acquisitions of lung tumors lead to artifacts due to breathing-related motion, which could compromise the reliability of tumor-based positioning. We investigate the effect of these artifacts on automatic registration and determine the conditions under which correct positioning can be achieved.

MATERIALS AND METHODS: MV-CT and 4D-CT scans of a dynamic thorax phantom were acquired with various motion amplitudes, directions, and periods. For each acquisition, the average kV-CT image was reconstructed from the 4D-CT data and rigidly registered with the corresponding MV-CT scan in a region of interest. Different kV-MV registration strategies were assessed.

RESULTS: All tested registration methods led to acceptable registration errors (within 1.3 ± 1.2 mm) for motion periods of 3 and 6 s, regardless of the motion amplitude, direction, and phase difference. However, a motion period of 5 s, equal to half the Tomotherapy gantry period, induced asymmetric artifacts within MV-CT and significantly degraded the registration accuracy.

CONCLUSIONS: As long as the breathing period differs from 5 s, positioning based on averaged images of the tumor provides information about its daily baseline shift, and might therefore contribute to reducing margins, regardless of the registration method.


Archive | 2009

How does digital cinema compress images?

Antonin Descampe; C. De Vleeschouwer; Laurent Jacques; Ferran Marqués

The development of digital technologies has drastically modified the requirements and constraints that a good image representation format should meet. Originally, the requirements were to achieve good compression efficiency while keeping the computational complexity low. This led in 1992 to the standardization of the JPEG format, which is still widely used today (see Chapter 8). Over the years, however, many things have evolved: more computing power is available, and the development of the Internet has required image representation formats to be more flexible and network-oriented, enabling efficient access to images through heterogeneous devices.

Collaboration


Dive into Antonin Descampe's collaborations.

Top Co-Authors (all at Université catholique de Louvain):

Benoît Macq
Gaël Rouvroy
Alexandre Willème
Christophe De Vleeschouwer
François-Olivier Devaux
Guillaume Janssens
Jean-Didier Legat
Jonathan Orban de Xivry