Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Christophe Parisot is active.

Publication


Featured research published by Christophe Parisot.


International Conference on Image Processing | 2005

Scene analysis for reducing motion JPEG 2000 video surveillance delivery bandwidth and complexity

Jerome Meessen; Christophe Parisot; Xavier Desurmont; Jean-Francois Delaigle

In this paper, we propose a new object-based video coding/transmission system using the emerging Motion JPEG 2000 standard for the efficient storage and delivery of video surveillance over low-bandwidth channels. Some recent papers deal with JPEG 2000 coding/transmission based on the region-of-interest (ROI) feature and the multi-layer capability provided by this coding system. Those approaches deliver higher quality for moving objects (ROIs) than for the background when the bandwidth is too narrow for sufficient overall video quality. The method proposed here provides the same features while significantly improving the average bitrate/quality ratio of the delivered video when cameras are static. We transmit only the ROIs of each frame, together with an automatic estimation of the background at a lower frame rate, in two separate Motion JPEG 2000 streams. The frames are then reconstructed at the client side without the need for any external data. Our method provides both better video quality and reduced client CPU usage, with negligible storage overhead, and the video surveillance streams stored on the server remain fully compliant with existing Motion JPEG 2000 decoders.
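
The client-side reconstruction described above can be illustrated independently of the Motion JPEG 2000 codec itself: the decoder keeps the most recently received background frame and pastes each decoded ROI back into it at its original position. Below is a minimal sketch in Python assuming already-decoded numpy arrays and hypothetical ROI bounding boxes; it illustrates the compositing principle only, not the authors' implementation.

    import numpy as np

    def reconstruct_frame(background, rois):
        """Composite decoded ROIs over the latest decoded background frame.

        background : HxWx3 uint8 array, decoded from the low-frame-rate stream
        rois       : list of (y, x, patch), each patch a decoded ROI image
        """
        frame = background.copy()               # start from the static background
        for y, x, patch in rois:
            h, w = patch.shape[:2]
            frame[y:y + h, x:x + w] = patch     # paste the moving object back in place
        return frame

    # Hypothetical usage: a grey background with one 8x8 "object" patch.
    bg = np.full((64, 64, 3), 128, dtype=np.uint8)
    obj = np.zeros((8, 8, 3), dtype=np.uint8)
    out = reconstruct_frame(bg, [(10, 20, obj)])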


Multimedia Signal Processing | 2001

3D scan based wavelet transform for video coding

Christophe Parisot; Marc Antonini; Michel Barlaud

Wavelet coding has been shown to outperform DCT coding: it surpasses the DCT-based JPEG codec and, moreover, allows scalability. The 2D DWT can easily be extended to 3D and thus applied to video coding. However, 3D subband coding of video suffers from two drawbacks. The first is the memory required for coding 3D blocks. The second is the lack of temporal quality, resulting in temporal blocking artifacts or flickering when the DWT is performed on temporal blocks. We propose a new temporal scan-based wavelet transform method for video coding that combines the usual advantages of wavelet coding (performance, scalability) with reduced memory requirements and no additional CPU complexity.
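
The memory argument can be made concrete with a toy temporal transform: a scan-based scheme holds only a couple of frames in memory at a time instead of buffering an entire 3D block. The following minimal sketch applies a one-level temporal Haar transform to consecutive frame pairs in streaming fashion; it illustrates the scan-based idea only and is not the paper's actual lifting scheme.

    import numpy as np

    def temporal_haar_stream(frames):
        """Yield (lowpass, highpass) temporal subbands for consecutive frame pairs.

        Only two frames are held in memory at any time, which is the point of
        scan-based (streaming) processing compared with buffering a full 3D block.
        """
        previous = None
        for frame in frames:
            frame = frame.astype(np.float32)
            if previous is None:
                previous = frame
                continue
            low = (previous + frame) / np.sqrt(2.0)   # temporal average (lowpass)
            high = (previous - frame) / np.sqrt(2.0)  # temporal detail (highpass)
            yield low, high
            previous = None

    # Hypothetical usage on four random 16x16 frames.
    video = np.random.rand(4, 16, 16) * 255
    subbands = list(temporal_haar_stream(video))      # two (low, high) pairs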


International Conference on Image Processing | 2002

Stripe-based MSE control in image coding

Christophe Parisot; Marc Antonini; Michel Barlaud

It is well known that the compression of very large images (e.g. medical imaging, microscopy, satellite images) requires stripe-based or tiled processing. In some applications, the compressed data are transmitted over a rate-constrained channel, so rate allocation and control procedures have to be used to fit the channel characteristics. On the other hand, most applications require high-quality image coding without any real-time rate constraint (e.g. off-line compression for storage or for broadcasting over IP, ADSL, HDTV, etc.). We therefore propose a new stripe-based compression algorithm based on quality control. Our method first computes an optimal subband MSE allocation and then the corresponding quantization steps. The proposed algorithm provides both accurate local MSE control and a global rate-distortion improvement compared with a rate-constrained compression scheme. Furthermore, it performs better than JPEG2000.
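
Under the usual high-resolution model, the MSE of a uniform scalar quantizer with step size Δ is approximately Δ²/12, so once a target MSE has been allocated to a subband the corresponding quantization step follows directly. The sketch below illustrates only this last stage (target MSE to quantization step to quantized coefficients); the allocation procedure itself, which is the paper's contribution, is not reproduced.

    import numpy as np

    def step_from_target_mse(target_mse):
        """High-resolution model: MSE ~= step**2 / 12, hence step = sqrt(12 * MSE)."""
        return np.sqrt(12.0 * target_mse)

    def quantize(coeffs, step):
        """Uniform scalar quantization with mid-point reconstruction."""
        indices = np.round(coeffs / step)
        return indices, indices * step

    # Hypothetical subband with an allocated target MSE of 4.0.
    subband = np.random.randn(1000) * 10.0
    step = step_from_target_mse(4.0)
    _, reconstructed = quantize(subband, step)
    print("measured MSE:", np.mean((subband - reconstructed) ** 2))  # close to 4.0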


Proceedings of SPIE, the International Society for Optical Engineering | 2000

On-board optical image compression for future high-resolution remote sensing systems

Catherine Lambert-Nebout; Christophe Latry; Gilles A. Moury; Christophe Parisot; Marc Antonini; Michel Barlaud

Future high-resolution instruments planned by CNES to succeed SPOT5 will lead to higher bit rates because of the increase in both resolution and number of bits per pixel, not compensated by the reduced swath. Data compression is therefore needed, with compression-ratio goals higher than the 2.81 SPOT5 value obtained with a JPEG-like algorithm. The compression ratio should typically rise to values of 4 to 6, with artifacts remaining unnoticeable: the SPOT5 algorithm's performance clearly has to be outdone. On the other hand, in the framework of optimized, low-cost instruments, the noise level will increase. Furthermore, the Modulation Transfer Function (MTF) and the sampling grid will be fitted together to satisfy, at least roughly, the Shannon requirements. As with the Supermode sampling scheme of the SPOT5 panchromatic band, the images will have to be restored (deconvolution and denoising), which makes the assessment of the compression impact much more complex. This paper is a synthesis of numerous studies evaluating several data compression algorithms, some of them assuming that the adaptation between sampling grid and MTF is obtained by the quincunx Supermode scheme. The following points are analyzed: the compression decorrelator (DCT, LOT, wavelet, lifting), the comparison with JPEG2000 for images acquired on a square grid, the fitting of the compression to the quincunx sampling, and on-board restoration (before compression) versus on-ground restoration. For each of them, we describe the proposed solutions, underlining the associated complexity and comparing them from a quantitative and qualitative point of view, giving the results of expert analyses.
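
Among the points listed above, the quincunx sampling scheme is the easiest to illustrate: the image is kept on a checkerboard lattice, retaining only the pixels whose row and column indices have an even sum, which halves the number of samples. The snippet below is a minimal illustration of that subsampling under this simplifying assumption; it is unrelated to the actual SPOT5/Supermode processing chain.

    import numpy as np

    def quincunx_subsample(image):
        """Keep pixels where (row + column) is even: a checkerboard (quincunx) grid."""
        rows, cols = np.indices(image.shape)
        mask = (rows + cols) % 2 == 0
        return image * mask, mask       # zero out discarded sites, return the mask too

    # Hypothetical usage on a tiny 4x4 image.
    img = np.arange(16, dtype=float).reshape(4, 4)
    sampled, mask = quincunx_subsample(img)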


Visual Communications and Image Processing | 2002

Optimal nearly uniform scalar quantizer design for wavelet coding

Christophe Parisot; Marc Antonini; Michel Barlaud

Uniform scalar quantizers are widely used in image coding. They are known to be optimal entropy-constrained scalar quantizers under the high-resolution assumption. In this paper, we focus on the design of nearly uniform scalar quantizers for high-performance coding of wavelet coefficients at any bitrate. Some codecs, such as JPEG2000, use uniform scalar quantizers whose zero-quantization bin (deadzone) is twice as wide as the other quantization bins. We address the problem of optimizing the deadzone size using rate-distortion considerations. The advantage of the proposed method is that the quantizer design is adapted to both the source statistics and the compression ratio. Our method is based on statistical information about the distribution of the wavelet coefficients. It provides experimental gains of up to 0.19 dB.
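
The quantizer family considered here is a uniform scalar quantizer whose central (zero) bin is widened by an adjustable deadzone factor; a factor of 2 corresponds to the JPEG2000-style deadzone. The sketch below implements such a quantizer with the deadzone width as a free parameter, which is the quantity the paper optimizes; the optimization itself is not reproduced.

    import numpy as np

    def deadzone_quantize(x, step, deadzone_ratio=2.0):
        """Nearly uniform scalar quantizer with a zero bin of width deadzone_ratio*step.

        deadzone_ratio = 1.0 gives a plain uniform quantizer;
        deadzone_ratio = 2.0 matches the JPEG2000-style deadzone.
        """
        half_dz = deadzone_ratio * step / 2.0
        magnitude = np.abs(x)
        indices = np.where(magnitude < half_dz, 0.0,
                           np.floor((magnitude - half_dz) / step) + 1.0)
        return np.sign(x) * indices

    def dequantize(indices, step, deadzone_ratio=2.0):
        """Reconstruct each coefficient at the middle of its quantization bin."""
        half_dz = deadzone_ratio * step / 2.0
        magnitude = np.where(indices == 0, 0.0,
                             half_dz + (np.abs(indices) - 0.5) * step)
        return np.sign(indices) * magnitude

    # Hypothetical Laplacian-like wavelet coefficients.
    coeffs = np.random.laplace(scale=5.0, size=1000)
    reconstructed = dequantize(deadzone_quantize(coeffs, step=4.0), step=4.0)
    print("MSE:", np.mean((coeffs - reconstructed) ** 2))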


EURASIP Journal on Advances in Signal Processing | 2003

3D scan-based wavelet transform and quality control for video coding

Christophe Parisot; Marc Antonini; Michel Barlaud


IWDC | 2002

Motion-Compensated Scan Based Wavelet Transform for Video Coding

Christophe Parisot; Marc Antonini; Michel Barlaud


International Conference on Image Processing | 2001

Optimization of the joint coding/decoding structure

Christophe Parisot; Marc Antonini; Michel Barlaud; Stephane Tramini; Christophe Latry; Catherine Lambert-Nebout


European Signal Processing Conference | 2002

High performance coding using a model-based bit allocation with EBCOT

Christophe Parisot; Marc Antonini; Michel Barlaud


Electronic Imaging | 2008

Real-time road traffic classification using mobile video cameras

Agnès Lapeyronnie; Christophe Parisot; Jerome Meessen; Xavier Desurmont; Jean-Francois Delaigle

Collaboration


Dive into Christophe Parisot's collaborations.

Top Co-Authors

Catherine Lambert-Nebout

University of Nice Sophia Antipolis


Christophe Latry

University of Nice Sophia Antipolis


Jerome Meessen

Université catholique de Louvain


Stephane Tramini

Centre national de la recherche scientifique


Xavier Desurmont

Université catholique de Louvain
