Aline Roumy
French Institute for Research in Computer Science and Automation
Publications
Featured research published by Aline Roumy.
British Machine Vision Conference | 2012
Marco Bevilacqua; Aline Roumy; Christine Guillemot; Marie Line Alberi-Morel
This paper describes a single-image super-resolution (SR) algorithm based on nonnegative neighbor embedding. It belongs to the family of single-image example-based SR algorithms, since it uses a dictionary of low-resolution (LR) and high-resolution (HR) trained patch pairs to infer the unknown HR details. Each LR feature vector in the input image is expressed as the weighted combination of its K nearest neighbors in the dictionary; the corresponding HR feature vector is reconstructed under the assumption that the local LR embedding is preserved. Three key aspects are introduced in order to build a low-complexity competitive algorithm: (i) a compact but efficient representation of the patches (feature representation); (ii) an accurate estimation of the patches by their nearest neighbors (weight computation); (iii) a compact and already built (therefore external) dictionary, which allows a one-step upscaling. The neighbor-embedding SR algorithm so designed is shown to give good visual results, comparable to other state-of-the-art methods, while presenting an appreciable reduction of the computational time.
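As a rough illustration of the nonnegative neighbor-embedding step described in this abstract, the Python sketch below reconstructs one HR patch: the K nearest LR dictionary atoms are found, nonnegative combination weights are obtained by nonnegative least squares, and the same weights are applied to the paired HR atoms. The function name, the use of scipy.optimize.nnls, and the data layout are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.optimize import nnls

def reconstruct_hr_patch(lr_feat, lr_dict, hr_dict, k=12):
    """Nonnegative neighbor-embedding step for a single patch (illustrative).

    lr_feat : (d,)    LR feature vector extracted from the input image
    lr_dict : (N, d)  LR atoms of the trained dictionary
    hr_dict : (N, D)  paired HR atoms
    """
    # 1. K nearest neighbors of the LR feature in the LR dictionary
    nn = np.argsort(np.linalg.norm(lr_dict - lr_feat, axis=1))[:k]

    # 2. Nonnegative weights that best reconstruct the LR feature
    #    from its neighbors (least squares with w >= 0)
    w, _ = nnls(lr_dict[nn].T, lr_feat)
    w = w / (w.sum() + 1e-12)            # normalize the embedding weights

    # 3. Assume the local LR embedding is preserved in HR space:
    #    combine the paired HR atoms with the same weights
    return hr_dict[nn].T @ w
```

Typically, such a step would be repeated for every overlapping patch of the image to upscale and the reconstructed HR patches averaged where they overlap.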
International Symposium on Information Theory | 2003
Aline Roumy; Souad Guemghar; Giuseppe Caire; Sergio Verdú
We optimize the random-like ensemble of irregular repeat-accumulate (IRA) codes for binary-input symmetric channels in the large block-length limit. Our optimization technique is based on approximating the density evolution (DE) of the messages exchanged by the belief-propagation (BP) message-passing decoder by a one-dimensional dynamical system. In this way, the code ensemble optimization can be solved by linear programming. We propose four such DE approximation methods, and compare the performance of the obtained code ensembles over the binary symmetric channel (BSC) and the binary-antipodal input additive white Gaussian noise channel (BIAWGNC). Our results clearly identify the best among the proposed methods and show that the IRA codes obtained by these methods are competitive with respect to the best known irregular low-density parity-check (LDPC) codes. In view of this and the very simple encoding structure of IRA codes, they emerge as attractive design choices.
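The abstract's key point is that, once the density evolution is approximated by a one-dimensional recursion, the ensemble optimization becomes a linear program in the degree-distribution coefficients. The toy sketch below shows this mechanism for a plain LDPC ensemble over the binary erasure channel, where the one-dimensional DE is exact; it only illustrates the LP structure, not the four approximation methods (or the IRA ensemble) studied in the paper.

```python
import numpy as np
from scipy.optimize import linprog

def optimize_lambda(eps=0.42, check_degree=6, max_dv=20, n_grid=200):
    """Toy LP: optimize the edge-perspective variable-degree distribution
    lambda(x) of an LDPC ensemble over the BEC, for a fixed check degree.

    DE over the BEC: x_{l+1} = eps * lambda(1 - rho(1 - x_l)).
    Decoding succeeds when eps * lambda(1 - rho(1 - x)) <= x on (0, eps],
    and these constraints are linear in the coefficients lambda_2..lambda_dv.
    """
    degs = np.arange(2, max_dv + 1)                  # variable degrees 2..max_dv
    xs = np.linspace(1e-3, eps, n_grid)              # grid of erasure fractions
    y = 1.0 - (1.0 - xs) ** (check_degree - 1)       # 1 - rho(1 - x), regular rho

    # One linear constraint per grid point: eps * sum_i lambda_i y^(i-1) <= x
    A_ub = eps * np.stack([y ** (d - 1) for d in degs], axis=1)
    b_ub = xs

    # Normalization sum_i lambda_i = 1; maximize sum_i lambda_i / i
    # (proportional to the design rate), i.e. minimize its negative.
    res = linprog(c=-1.0 / degs, A_ub=A_ub, b_ub=b_ub,
                  A_eq=np.ones((1, len(degs))), b_eq=[1.0],
                  bounds=[(0, 1)] * len(degs), method="highs")
    return dict(zip(degs, res.x))
```

For IRA codes over the BSC or BIAWGNC the one-dimensional recursion is only an approximation (this is precisely what the paper's DE approximation methods provide), but the resulting optimization keeps the same linear-programming form.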
IEEE Transactions on Information Theory | 2004
Giuseppe Caire; Souad Guemghar; Aline Roumy; Sergio Verdú
We investigate the spectral efficiency achievable by random synchronous code-division multiple access (CDMA) with quaternary phase-shift keying (QPSK) modulation and binary error-control codes, in the large system limit where the number of users, the spreading factor, and the code block length go to infinity. For given codes, we maximize spectral efficiency assuming a minimum mean-square error (MMSE) successive stripping decoder for the cases of equal rate and equal power users. In both cases, the maximization of spectral efficiency can be formulated as a linear program and admits a simple closed-form solution that can be readily interpreted in terms of power and rate control. We provide examples of the proposed optimization methods based on off-the-shelf low-density parity-check (LDPC) codes and we investigate by simulation the performance of practical systems with finite code block length.
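For intuition on the equal-power, equal-rate case, the sketch below iterates the standard large-system fixed point for the SINR at the output of a linear MMSE receiver and accounts for successive stripping by shrinking the load as users are cancelled. It assumes Gaussian-like rate expressions and perfect stripping, so it is a back-of-the-envelope illustration rather than the QPSK/binary-code optimization of the paper.

```python
import numpy as np

def mmse_sinr(beta, snr, iters=200):
    """Large-system SINR of the linear MMSE receiver for random CDMA with
    load beta = K/N and equal-power users: fixed point of
        sinr = snr / (1 + beta * snr / (1 + sinr)).
    """
    sinr = snr
    for _ in range(iters):
        sinr = snr / (1.0 + beta * snr / (1.0 + sinr))
    return sinr

def stripping_rates(K=64, N=32, snr_db=10.0):
    """Per-user rates (bits/symbol) under MMSE successive stripping:
    the user decoded at stage k only sees the users not yet cancelled."""
    snr = 10 ** (snr_db / 10)
    rates = []
    for k in range(K):
        beta_left = (K - k - 1) / N          # load of remaining interferers
        rates.append(np.log2(1 + mmse_sinr(beta_left, snr)))
    return rates                             # sum(rates) / N ~ spectral efficiency
```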
EURASIP Journal on Wireless Communications and Networking | 2007
Aline Roumy; David Declercq
We address the problem of designing good LDPC codes for the Gaussian multiple access channel (MAC). The framework we choose is to design multiuser LDPC codes with joint belief-propagation decoding on the joint graph of the 2-user case. Compared to existing work, our main contribution is to express analytically the EXIT functions of the multiuser decoder with two different approximations of the density evolution. This allows us to propose a very simple linear programming optimization for the complicated problem of LDPC code design with joint multiuser decoding. The stability condition for our case is derived and used in the optimization constraints. The codes that we obtain for the 2-user case are quite good for various rates, especially considering the very simple optimization procedure.
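The design target of such multiuser codes is the two-user Gaussian MAC capacity region. The small helper below, written for real Gaussian signalling, simply checks whether a rate pair lies inside that region; it is background for the abstract rather than part of the paper's EXIT-based code optimization.

```python
import numpy as np

def c(x):
    """AWGN capacity in bits per real dimension."""
    return 0.5 * np.log2(1.0 + x)

def in_gaussian_mac_region(R1, R2, snr1, snr2):
    """Capacity region of the 2-user Gaussian MAC:
       R1 <= C(snr1), R2 <= C(snr2), R1 + R2 <= C(snr1 + snr2)."""
    return R1 <= c(snr1) and R2 <= c(snr2) and R1 + R2 <= c(snr1 + snr2)

# Example: the symmetric point on the sum-rate boundary at 3 dB per user
snr = 10 ** (3.0 / 10)
sum_rate = c(2 * snr)
print(in_gaussian_mac_region(sum_rate / 2, sum_rate / 2, snr, snr))  # True
```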
IEEE Transactions on Image Processing | 2014
Marco Bevilacqua; Aline Roumy; Christine Guillemot; Marie-Line Alberi Morel
This paper presents a novel example-based single-image super-resolution procedure that upscales a given low-resolution (LR) input image to high resolution (HR) without relying on an external dictionary of image examples. The dictionary is instead built from the LR input image itself, by generating a double pyramid of recursively scaled, and subsequently interpolated, images, from which self-examples are extracted. The upscaling procedure is multi-pass, i.e., the output image is constructed by means of gradual increases, and consists of learning, on this double pyramid, one linear mapping function per patch of the current image to upscale. More precisely, for each LR patch, similar self-examples are found and, from them, a linear function is learned to directly map the patch into its HR version. Iterative back-projection is also employed to ensure consistency at each pass of the procedure. Extensive experiments and comparisons with other state-of-the-art methods, based on both external and internal dictionaries, show that our algorithm can produce visually pleasant upscalings, with sharp edges and well-reconstructed details. Moreover, when considering objective metrics, such as peak signal-to-noise ratio (PSNR) and structural similarity (SSIM), our method turns out to give the best performance.
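Two ingredients of the procedure lend themselves to a compact sketch: learning, for one LR patch, a direct linear LR-to-HR map from its nearest self-examples, and the iterative back-projection pass that enforces consistency with the LR input. The code below is a simplified, hypothetical rendering (ridge-regularized least squares, user-supplied downscale/upscale operators), not the authors' exact pipeline.

```python
import numpy as np

def map_patch(lr_patch, lr_examples, hr_examples, k=9, reg=1e-3):
    """Learn a linear map M with M @ lr ~ hr from the k self-examples
    closest to lr_patch, then apply it to the patch (illustrative).

    lr_examples : (N, d)  LR self-example features from the double pyramid
    hr_examples : (N, D)  their HR counterparts
    """
    nn = np.argsort(np.linalg.norm(lr_examples - lr_patch, axis=1))[:k]
    L, H = lr_examples[nn], hr_examples[nn]            # (k, d), (k, D)
    # Ridge-regularized least squares: M = H^T L (L^T L + reg I)^{-1}
    M = H.T @ L @ np.linalg.inv(L.T @ L + reg * np.eye(L.shape[1]))
    return M @ lr_patch

def back_project(hr_img, lr_img, downscale, upscale, n_iter=10):
    """Iterative back-projection: push the HR estimate towards an image
    whose downscaled version matches the LR input."""
    x = hr_img.copy()
    for _ in range(n_iter):
        x = x + upscale(lr_img - downscale(x))
    return x
```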
ACM Multimedia | 2011
Olivier Le Meur; Thierry Baccino; Aline Roumy
This paper proposes an automatic method for predicting the inter-observer visual congruency (IOVC). The IOVC reflects the congruence or the variability among different subjects looking at the same image. Predicting this congruence is of interest for image processing applications where the visual perception of a picture matters, such as website design, advertisement, etc. This paper makes several new contributions. First, a computational model of the IOVC is proposed. This new model is a mixture of low-level visual features extracted from the input picture, whose parameters are learned from a large eye-tracking database. Once the parameters have been learned, the model can be used on any new picture. Second, regarding low-level visual feature extraction, we propose a new scheme to compute the depth of field of a picture. Finally, once the training and the feature extraction have been carried out, a score ranging from 0 (minimal congruency) to 1 (maximal congruency) is computed. A value of 1 indicates that observers would focus on the same locations and suggests that the picture presents strong locations of interest. A second database of eye movements is used to assess the performance of the proposed model. Results show that our IOVC criterion outperforms the Feature Congestion measure (Rosenholtz et al., 2007). To illustrate the interest of the proposed model, we use it to automatically rank personal photographs.
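The abstract does not spell out the model, so the sketch below should be read as a loose analogue under stated assumptions: the ground-truth congruency of a training image is summarized by a leave-one-out correlation between observers' fixation maps (one possible definition, not necessarily the paper's), and the predictor is a linear mixture of per-image low-level features fit by least squares.

```python
import numpy as np

def inter_observer_congruency(fix_maps):
    """Illustrative congruency score for one image: mean leave-one-out
    correlation between each observer's fixation map and the average
    map of the other observers, rescaled to [0, 1]."""
    fix_maps = np.asarray(fix_maps, dtype=float)       # (n_obs, H, W)
    n = len(fix_maps)
    corrs = []
    for i in range(n):
        others = fix_maps[np.arange(n) != i].mean(axis=0)
        corrs.append(np.corrcoef(fix_maps[i].ravel(), others.ravel())[0, 1])
    return (np.mean(corrs) + 1.0) / 2.0

def fit_iovc_predictor(features, congruencies):
    """Fit a linear mixture of low-level features (one row per image) to the
    measured congruency scores by least squares; predictions on a new
    image would be clipped to [0, 1]."""
    X = np.hstack([features, np.ones((len(features), 1))])    # add a bias term
    w, *_ = np.linalg.lstsq(X, congruencies, rcond=None)
    return w
```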
International Workshop on Machine Learning for Signal Processing | 2012
Raúl Martínez-Noriega; Aline Roumy; Gilles Blanchard
Greedy exemplar-based algorithms for inpainting face two main problems: deciding the filling-in order and selecting good exemplars from which the missing region is synthesized. We propose an algorithm that tackles these problems with improvements in the preservation of linear edges and a reduction of error propagation compared to well-known algorithms from the literature. Our improvement in the filling-in order is based on a combination of priority terms, previously defined by Criminisi, that better encourages the early synthesis of linear structures. The second contribution reduces error propagation thanks to a better detection of outliers among the candidate patches. This is obtained with a new metric that incorporates the whole information of the candidate patches. Moreover, our proposal has a significantly lower computational load than most of the algorithms used for comparison in this paper.
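As background, here is a minimal sketch of the two steps the abstract builds on: a Criminisi-style filling-order priority combining a confidence and a data term, and a robust selection among candidate patches meant to discard outliers before synthesis. The combination rule and the rejection metric below are placeholders; the paper's actual priority combination and whole-patch metric differ in their details.

```python
import numpy as np

def priority(confidence, data_term, mode="product"):
    """Filling-order priority of a patch on the fill front.  Criminisi's
    rule multiplies the two terms; a weighted sum is one alternative way
    to keep encouraging linear structures when confidence decays."""
    if mode == "product":
        return confidence * data_term
    return 0.3 * confidence + 0.7 * data_term          # illustrative variant

def select_exemplar(target, known_mask, candidates, n_keep=10):
    """Choose a source patch for a partially known target patch.

    target     : (h, w)     patch to fill (valid where known_mask == 1)
    candidates : (N, h, w)  candidate source patches
    The best candidates on the known pixels are shortlisted, then outliers
    are rejected by comparing each shortlisted patch, over its whole
    support, to the median of the shortlist."""
    dist_known = ((candidates - target) ** 2 * known_mask).sum(axis=(1, 2))
    keep = np.argsort(dist_known)[:n_keep]
    ref = np.median(candidates[keep], axis=0)          # robust reference patch
    full_err = ((candidates[keep] - ref) ** 2).sum(axis=(1, 2))
    return candidates[keep][np.argmin(full_err)]
```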
International Conference on Acoustics, Speech, and Signal Processing | 2000
Inbar Fijalkow; Aline Roumy; S. Ronger; Didier Pirez; Pierre Vila
Sub-optimal joint equalization and decoding is performed by iterated equalization and decoding. This processing is named turbo-equalization, with reference to the turbo-decoding of serially concatenated codes. We propose to optimize the equalizer structure (an interference canceler) using the training sequence that is available for estimating the channel impulse response. The interference canceler, optimized at each iteration, reduces the number of iterations needed to achieve a given performance. The gain is all the more significant when the channel is hard to equalize.
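A single turbo-equalization iteration with a soft interference canceler can be sketched as follows: soft symbol estimates derived from the decoder's LLRs are used to rebuild and subtract the intersymbol interference, and the cleaned samples are converted back to extrinsic LLRs. The sketch assumes BPSK, a known channel, and cancels only the ISI visible in the current sample; the paper's point, not reproduced here, is to optimize the canceler's filters from the training sequence at each iteration.

```python
import numpy as np

def soft_ic_equalizer(r, h, llr_prior, noise_var):
    """One soft interference-cancellation pass for BPSK over an ISI channel
    (illustrative; a full canceler would also exploit later samples in
    which each symbol appears).

    r         : received samples, r[n] = sum_l h[l] * s[n-l] + noise
    h         : channel impulse response (h[0] is the main tap)
    llr_prior : a-priori LLRs of the symbols, from the decoder
    Returns extrinsic LLRs to feed back to the decoder.
    """
    s_bar = np.tanh(llr_prior / 2.0)                    # soft symbols E[s]
    N, L = len(llr_prior), len(h)
    llr_ext = np.empty(N)
    for n in range(N):
        # rebuild the ISI affecting sample n from the *other* soft symbols
        isi = sum(h[l] * s_bar[n - l] for l in range(1, L) if n - l >= 0)
        z = r[n] - isi                                   # interference-cancelled sample
        llr_ext[n] = 2.0 * h[0] * z / noise_var          # extrinsic LLR (prior of s[n] unused)
    return llr_ext
```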
IEEE Communications Letters | 2011
Velotiaray Toto-Zarasoa; Aline Roumy; Christine Guillemot
In the context of distributed source coding, we propose a low-complexity algorithm for the estimation of the crossover probability p of the binary symmetric channel (BSC) modeling the correlation between two binary sources. The coding is done with linear block codes. We propose a novel method to estimate p prior to decoding and show that it is the maximum-likelihood estimator of p with respect to the syndromes of the correlated sources. The method can also be used for parameter estimation when channel coding binary sources over the BSC.
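For a regular code, the flavor of such a pre-decoding estimator is easy to reproduce: the syndrome of z = x XOR y is the XOR of the two received syndromes, each of its bits is the XOR of d i.i.d. Bernoulli(p) bits, so P(syndrome bit = 1) = (1 - (1 - 2p)^d) / 2, and inverting the empirical frequency of ones yields an estimate of p. The sketch below implements this moment-style inversion; the paper's exact maximum-likelihood derivation may differ.

```python
import numpy as np

def estimate_crossover(syndrome_x, syndrome_y, row_degree):
    """Estimate the BSC crossover probability p from the syndromes of two
    correlated binary sources, before any decoding (illustrative, for a
    parity-check matrix with constant row weight row_degree).

    syndrome_x, syndrome_y : 0/1 integer arrays of the same length
    """
    s = np.bitwise_xor(syndrome_x, syndrome_y)           # syndrome of z = x XOR y
    q = np.clip(s.mean(), 1e-9, 0.5 - 1e-9)              # fraction of unsatisfied checks
    # Invert q = (1 - (1 - 2p)^d) / 2 for p
    return 0.5 * (1.0 - (1.0 - 2.0 * q) ** (1.0 / row_degree))
```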
International Conference on Acoustics, Speech, and Signal Processing | 2008
Velotiaray Toto-Zarasoa; Aline Roumy; Christine Guillemot
In this paper, we focus on the design of distributed source codes that can achieve any point in the Slepian-Wolf (SW) region and, at the same time, adapt to any correlation between the sources. A practical solution based on punctured accumulated LDPC codes, extended to the non-asymmetric case, is described. The approach allows flexible rate allocation between the two sources, with a gap of 0.0677 bits with respect to the minimum achievable rate.
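For two uniform binary sources whose correlation is modeled by a BSC with crossover probability p, the Slepian-Wolf region referred to in the abstract has a simple closed form, sketched below; the non-asymmetric codes of the paper aim at arbitrary points of its sum-rate boundary. The example rate split is illustrative.

```python
import numpy as np

def h2(p):
    """Binary entropy in bits."""
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -p * np.log2(p) - (1 - p) * np.log2(1 - p)

def in_slepian_wolf_region(Rx, Ry, p):
    """Slepian-Wolf region for uniform binary X and Y = X XOR Z,
    Z ~ Bernoulli(p):
        Rx >= H(X|Y) = h2(p), Ry >= H(Y|X) = h2(p), Rx + Ry >= 1 + h2(p)."""
    return Rx >= h2(p) and Ry >= h2(p) and Rx + Ry >= 1 + h2(p)

# A non-asymmetric point on the sum-rate boundary for p = 0.1
p, Rx = 0.1, 0.7
Ry = 1 + h2(p) - Rx                        # stay on Rx + Ry = H(X, Y)
print(in_slepian_wolf_region(Rx, Ry, p))   # True, since both rates exceed h2(0.1)
```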