Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Christine Guillemot is active.

Publication


Featured research published by Christine Guillemot.


International Conference on Image Processing | 2009

Sparse approximation with adaptive dictionary for image prediction

Mehmet Turkan; Christine Guillemot

The paper presents a dictionary construction method for spatial texture prediction based on sparse approximations. Sparse approximations have been recently considered for image prediction using static dictionaries such as a DCT or DFT dictionary. These approaches rely on the assumption that the texture is periodic, hence the use of a static dictionary formed by pre-defined waveforms. However, in real images, there are more complex and non-periodic textures. The main idea underlying the proposed spatial prediction technique is instead to consider a locally adaptive dictionary, A, formed by atoms derived from texture patches present in a causal neighborhood of the block to be predicted. The sparse spatial prediction method is assessed against the sparse prediction method based on a static DCT dictionary. The spatial prediction method is then assessed in a complete image coding scheme where the prediction residue is encoded using a coding approach similar to JPEG.
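To make the idea concrete, here is a minimal Python/NumPy sketch of sparse prediction with a locally adaptive dictionary — an illustration of the principle, not the authors' implementation. Atoms are vectorized patches taken from the causal (already decoded) region above the block, and a simple Orthogonal Matching Pursuit greedily selects a few of them. All function names and the patch-collection scheme are illustrative assumptions.

```python
import numpy as np

def causal_atoms(img, y, x, bs, step=1):
    """Collect vectorized bs x bs patches from the causal region
    (rows fully above the block at (y, x)) as unit-norm dictionary atoms.
    Assumes at least one full patch row exists above the block."""
    atoms = []
    for py in range(0, y - bs + 1, step):
        for px in range(0, img.shape[1] - bs + 1, step):
            atoms.append(img[py:py + bs, px:px + bs].ravel())
    A = np.array(atoms, dtype=float).T          # columns are atoms
    norms = np.linalg.norm(A, axis=0)
    return A / np.maximum(norms, 1e-12)

def omp_predict(A, target, k):
    """Orthogonal Matching Pursuit: greedily pick up to k atoms, then
    least-squares refit on the selected support; returns the sparse
    approximation of `target`."""
    residual, support = target.astype(float), []
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], target, rcond=None)
        residual = target - A[:, support] @ coef
    return A[:, support] @ coef
```

In the actual prediction scheme the sparse approximation is fit on the known template surrounding the block and the coefficients are extrapolated to the unknown pixels; the sketch only shows the adaptive dictionary construction and the greedy selection step.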


International Conference on Image Processing | 2010

Image prediction: Template matching vs. sparse approximation

Mehmet Turkan; Christine Guillemot

The paper compares a sparse-approximation-based spatial texture prediction method with template-matching-based prediction. Template matching algorithms have been widely considered for image prediction. These approaches rely on the assumption that the texture to be predicted is similar to the template in the sense of a simple distance metric between template and candidate. However, real images contain more complex textured areas where template matching fails. The basic idea is instead to consider sparse approximation algorithms. The proposed sparse spatial prediction is assessed against prediction based on template matching with both static and optimized dynamic templates. The spatial prediction method is then assessed in a coding scheme where the prediction residue is encoded with a coding approach similar to JPEG. Experimental observations show that the proposed method outperforms conventional template-matching-based prediction.
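For reference, the template-matching baseline can be sketched in a few lines of Python/NumPy — a toy version under simplifying assumptions (integer-pel search, SSD metric, L-shaped template), not the paper's implementation. The helper names are hypothetical.

```python
import numpy as np

def template_match_predict(img, y, x, bs, tw=2):
    """Predict the bs x bs block at (y, x): scan the causal region for
    the candidate whose L-shaped template (tw pixels above and to the
    left) best matches the block's own template under SSD, and copy the
    candidate block. Assumes the search region is non-empty."""
    def template(py, px):
        top = img[py - tw:py, px - tw:px + bs].ravel()
        left = img[py:py + bs, px - tw:px].ravel()
        return np.concatenate([top, left]).astype(float)

    target_tpl = template(y, x)
    best, best_ssd = None, np.inf
    for py in range(tw, y - bs + 1):            # candidates fully above the block
        for px in range(tw, img.shape[1] - bs + 1):
            ssd = np.sum((template(py, px) - target_tpl) ** 2)
            if ssd < best_ssd:
                best_ssd, best = ssd, (py, px)
    py, px = best
    return img[py:py + bs, px:px + bs].copy()
```

On highly periodic textures this simple distance metric succeeds; the paper's point is that sparse approximation handles the more complex, non-periodic areas where it breaks down.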


Multimedia Signal Processing | 2011

Epitome-based image compression using translational sub-pel mapping

Safa Cherigui; Christine Guillemot; Dominique Thoreau; Philippe Guillotel; Patrick Pérez

This paper addresses the problem of epitome construction for image compression. An optimized epitome construction method is first described, in which the epitome and the associated image reconstruction are both successively performed at full-pel and sub-pel accuracy. The resulting complete still-image compression scheme is then discussed, with details on some innovative tools. The PSNR-rate performance achieved with this epitome-based compression method is significantly higher than that obtained with H.264 Intra and with a state-of-the-art epitome construction method. A bit-rate saving of up to 16% compared to H.264 Intra is achieved.


International Conference on Image Processing | 2010

Hidden Markov Model for distributed video coding

Velotiaray Toto-Zarasoa; Aline Roumy; Christine Guillemot

This paper addresses the problem of asymmetric distributed coding of correlated binary Hidden Markov Sources, modeled as a Gilbert-Elliott process. The model parameters are estimated with an estimation-decoding Expectation-Maximization algorithm. The rate gain obtained by accounting for the memory of the sources is first assessed theoretically. The method is then shown to improve the PSNR versus rate performance of a Distributed Video Coding system, based on Low-Density Parity-Check codes.
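The Gilbert-Elliott process mentioned above is a two-state hidden Markov chain (a "good" and a "bad" state) in which each state emits bits with its own probability, producing bursty memory. A minimal simulation sketch in plain Python — parameter names and values are illustrative, and the EM parameter estimation of the paper is not shown:

```python
import random

def gilbert_elliott(n, p_gb, p_bg, p1_good, p1_bad, seed=0):
    """Simulate n bits of a binary Gilbert-Elliott source: a hidden
    two-state Markov chain (good/bad) where the current state emits a 1
    with probability p1_good or p1_bad, and p_gb / p_bg are the
    good-to-bad and bad-to-good transition probabilities."""
    rng = random.Random(seed)
    state, bits = 'good', []
    for _ in range(n):
        p1 = p1_good if state == 'good' else p1_bad
        bits.append(1 if rng.random() < p1 else 0)
        flip = p_gb if state == 'good' else p_bg
        if rng.random() < flip:
            state = 'bad' if state == 'good' else 'good'
    return bits
```

The memory shows up as bursts of ones while the chain sits in the bad state; exploiting that memory, rather than treating the bits as i.i.d., is what yields the rate gain assessed in the paper.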


IEEE Transactions on Signal Processing | 2006

Error-resilient first-order multiplexed source codes: performance bounds, design and decoding algorithms

Hervé Jégou; Christine Guillemot

This paper describes a new family of error-resilient variable-length source codes (VLCs). The codes introduced can be regarded as generalizations of the multiplexed codes described by Jégou and Guillemot. They make it possible to exploit first-order source statistics while, at the same time, being resilient to transmission errors. The design principle consists of creating a codebook of fixed-length codewords (FLCs) for high-priority information and using the inherent codebook redundancy to describe low-priority information. The FLC codebook is partitioned into equivalence classes according to the conditional probabilities of the high-priority source. The error-propagation phenomenon, which may result from the memory of the source coder, is controlled by choosing appropriate codebook partitions and index-assignment strategies. In particular, a context-dependent index-assignment strategy, called crossed-index assignment, is described. For the symbol error rate criterion, the approach turns out to maximize the cardinality of a set of codewords, called the code kernel, offering synchronization properties. The decoder's resynchronization capability is shown to be further increased by periodic use of memoryless multiplexed codes. Theoretical and practical performance in terms of compression efficiency and error resilience is analyzed.
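The codebook-partitioning principle can be illustrated with a zeroth-order toy in plain Python (the paper's codes use first-order conditional probabilities and carefully designed index assignments; this sketch ignores both, and all names are hypothetical):

```python
def partition_codebook(n_bits, probs):
    """Partition the 2**n_bits fixed-length codewords into equivalence
    classes, one per high-priority symbol, with class sizes roughly
    proportional to the symbol probabilities."""
    total = 2 ** n_bits
    sizes = [max(1, round(p * total)) for p in probs]
    # adjust the largest class so the sizes sum to exactly `total`
    sizes[sizes.index(max(sizes))] += total - sum(sizes)
    classes, start = [], 0
    for sz in sizes:
        classes.append(list(range(start, start + sz)))
        start += sz
    return classes

def encode(classes, hp_symbol, lp_index):
    """The high-priority symbol selects the class; the low-priority
    payload selects the codeword inside it (the 'inherent redundancy')."""
    return classes[hp_symbol][lp_index % len(classes[hp_symbol])]
```

Because every transmitted word has the same fixed length, a channel error corrupts at most the current symbol pair and cannot desynchronize the bitstream — the resilience property the paper builds on.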


International Symposium on Turbo Codes and Iterative Information Processing | 2010

Non-asymmetric Slepian-Wolf coding of non-uniform Bernoulli sources

Velotiaray Toto-Zarasoa; Aline Roumy; Christine Guillemot

We address the problem of non-asymmetric Slepian-Wolf (SW) coding of two correlated non-uniform Bernoulli sources. We first show that the problem is not symmetric in the two sources, contrary to the case of uniform sources, due to the asymmetry induced by the two underlying channel models, namely the additive and predictive Binary Symmetric Channels (BSC). That asymmetry has to be accounted for during decoding. In view of that result, we describe the implementation of a joint non-asymmetric decoder of the two sources based on Low-Density Parity-Check (LDPC) codes and Message Passing (MP) decoding. We also give a necessary and sufficient condition for the recovery of the two sources, which imposes a triangular structure on a sub-part of the equivalent matrix representation of the code.
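The asymmetry between the two channel models is easy to demonstrate empirically. In the additive model, Y = X xor Z with the noise Z independent of X; when X is non-uniform, Z is then *not* independent of Y, so swapping the roles of the sources changes the problem. A small plain-Python sketch (parameter values illustrative):

```python
import random

def correlated_pair(n, p_x, p_z, seed=0):
    """Additive BSC model: X ~ Bern(p_x), Z ~ Bern(p_z) drawn
    independently of X, and Y = X xor Z."""
    rng = random.Random(seed)
    xs = [1 if rng.random() < p_x else 0 for _ in range(n)]
    zs = [1 if rng.random() < p_z else 0 for _ in range(n)]
    ys = [x ^ z for x, z in zip(xs, zs)]
    return xs, ys, zs

def cond_prob_z1(zs, cond_bits, value):
    """Empirical P(Z = 1 | conditioning bit = value)."""
    sel = [z for z, b in zip(zs, cond_bits) if b == value]
    return sum(sel) / len(sel)
```

With, say, p_x = p_z = 0.1, the estimates P(Z=1|X=0) and P(Z=1|X=1) both come out near 0.1, while P(Z=1|Y=1) is far larger than P(Z=1|Y=0): the channel is additive with respect to X but only predictive with respect to Y, which is exactly the asymmetry the decoder must account for.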


Journal of Visual Communication and Image Representation | 2011

Sparse representations for spatial prediction and texture refinement

Aurélie Martin; Jean-Jacques Fuchs; Christine Guillemot; Dominique Thoreau

In this work, we propose a novel approach for signal prediction based on sparse signal representations and Matching Pursuit (MP) techniques. The paper first focuses on spatial texture prediction in a conventional block-based hybrid coding scheme, and secondly addresses inter-layer prediction in a scalable video coding (SVC) framework. For spatial prediction, the signal reconstruction of the block to be predicted is based on basis functions selected with the MP iterative algorithm to best match a causal neighborhood. Inter-layer MP-based prediction employs upsampled base-layer components in addition to the causal neighborhood in order to improve the representation of high frequencies. New solutions are proposed for efficiently deriving and exploiting the dictionary of atoms through phase refinement and mono-dimensional basis functions. Experimental results indicate a noticeable improvement in rate-distortion performance compared to the standard prediction methods specified in H.264/AVC and its extension SVC.
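The MP iterative algorithm at the core of the approach is short enough to sketch in full. This is the textbook greedy loop (not the paper's phase-refined variant), assuming a dictionary D with unit-norm columns:

```python
import numpy as np

def matching_pursuit(D, y, n_iter):
    """Plain Matching Pursuit: at each iteration, pick the unit-norm
    atom most correlated with the current residual, accumulate its
    coefficient, and subtract its contribution from the residual."""
    residual = y.astype(float).copy()
    coeffs = np.zeros(D.shape[1])
    for _ in range(n_iter):
        corr = D.T @ residual
        j = int(np.argmax(np.abs(corr)))
        coeffs[j] += corr[j]
        residual -= corr[j] * D[:, j]
    return coeffs, residual
```

For prediction, D would contain the basis functions evaluated over the causal neighborhood (plus, in the inter-layer case, the upsampled base-layer components), and the selected atoms are then extended into the block to be predicted.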


Archive | 2018

Fast Light Field Inpainting Propagation Using Angular Warping and Color-Guided Disparity Interpolation

Pierre Allain; Laurent Guillo; Christine Guillemot

This paper describes a method for fast and efficient inpainting of light fields. We first revisit disparity estimation based on smoothed structure tensors and analyze typical artefacts and their impact on the inpainting problem. We then propose an approach which is computationally fast while giving more coherent disparity in the masked region. This disparity is then used to propagate, by angular warping, the inpainted texture of one view to the entire light field. Experiments show the ability of our approach to yield appealing results while running considerably faster.
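The angular-warping step follows the standard light-field geometry: a pixel seen at disparity d in one view shifts by d times the angular offset in any other view. A much-simplified Python/NumPy sketch (forward warping with nearest-neighbor rounding, single channel; function name and interface are illustrative):

```python
import numpy as np

def warp_view(view, disparity, ds, dt):
    """Warp a view to angular offset (ds, dt): each pixel moves by
    disparity * offset. `disparity` is a per-pixel array. Forward
    warping with nearest-neighbor rounding; unfilled pixels remain 0
    (holes, which in practice still need filling)."""
    h, w = view.shape
    out = np.zeros_like(view)
    ys, xs = np.mgrid[0:h, 0:w]
    ty = np.rint(ys + disparity * dt).astype(int)
    tx = np.rint(xs + disparity * ds).astype(int)
    ok = (ty >= 0) & (ty < h) & (tx >= 0) & (tx < w)
    out[ty[ok], tx[ok]] = view[ys[ok], xs[ok]]
    return out
```

In the paper, the texture inpainted in one view is propagated this way to all other angular positions, which is why a coherent disparity map inside the masked region matters so much.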


IEEE Transactions on Communications | 2009

Computation of posterior marginals on aggregated state models for soft source decoding

Simon Malinowski; Hervé Jégou; Christine Guillemot

Optimum soft decoding of sources compressed with variable length codes and quasi-arithmetic codes, transmitted over noisy channels, can be performed on a bit/symbol trellis. However, the number of states of the trellis is a quadratic function of the sequence length, leading to a decoding complexity which is not tractable for practical applications. The decoding complexity can be significantly reduced by using an aggregated state model, while still achieving close-to-optimum performance in terms of bit error rate and frame error rate. However, symbol a posteriori probabilities cannot be directly derived on these models, and the symbol error rate (SER) may not be minimized. This paper describes a two-step decoding algorithm that achieves close-to-optimal decoding performance in terms of SER on aggregated state models. A performance and complexity analysis of the proposed algorithm is given.


Archive | 2004

Scalable encoding and decoding of interlaced digital video data

Gwenaelle Marquant; Guillaume Boisson; Edouard Francois; Jerome Vieron; Philippe Robert; Christine Guillemot

Collaboration


Dive into Christine Guillemot's collaborations.

Top Co-Authors


Aline Roumy

French Institute for Research in Computer Science and Automation
