François-Xavier Dupé
Centre national de la recherche scientifique
Publication
Featured research published by François-Xavier Dupé.
IEEE Transactions on Image Processing | 2009
François-Xavier Dupé; Jalal M. Fadili; Jean-Luc Starck
We propose an image deconvolution algorithm when the data is contaminated by Poisson noise. The image to restore is assumed to be sparsely represented in a dictionary of waveforms such as the wavelet or curvelet transforms. Our key contributions are as follows. First, we handle the Poisson noise properly by using the Anscombe variance stabilizing transform leading to a nonlinear degradation equation with additive Gaussian noise. Second, the deconvolution problem is formulated as the minimization of a convex functional with a data-fidelity term reflecting the noise properties, and a nonsmooth sparsity-promoting penalty over the image representation coefficients (e.g., ℓ1-norm). An additional term is also included in the functional to ensure positivity of the restored image. Third, a fast iterative forward-backward splitting algorithm is proposed to solve the minimization problem. We derive existence and uniqueness conditions of the solution, and establish convergence of the iterative algorithm. Finally, a GCV-based model selection procedure is proposed to objectively select the regularization parameter. Experimental results are carried out to show the striking benefits gained from taking into account the Poisson statistics of the noise. These results also suggest that using sparse-domain regularization may be tractable in many deconvolution applications with Poisson noise such as astronomy and microscopy.
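The pipeline described in the abstract (Anscombe stabilisation of the Poisson counts, then a forward-backward iteration alternating a gradient step on the data-fidelity term with a soft-thresholding step and a positivity projection) can be sketched as follows. This is a minimal illustration, not the paper's implementation: sparsity is imposed directly on the pixel values rather than on wavelet or curvelet coefficients, the blur is given as an explicit matrix `H`, and the function names, step size, and regularization value are all assumptions for the sketch.

```python
import numpy as np

def anscombe(y):
    # Variance-stabilising transform: Poisson counts -> approx. unit-variance Gaussian.
    return 2.0 * np.sqrt(y + 3.0 / 8.0)

def forward_backward(y, H, lam, step, n_iter=200):
    """Minimise 0.5*||H x - z||^2 + lam*||x||_1 subject to x >= 0,
    where z = anscombe(y) and H is a linear blur operator (a matrix here)."""
    z = anscombe(y)
    x = np.zeros(H.shape[1])
    for _ in range(n_iter):
        grad = H.T @ (H @ x - z)        # forward step: gradient of the smooth term
        x = x - step * grad
        # backward step: prox of lam*||.||_1 + positivity constraint
        # (soft-threshold followed by projection onto the nonnegative orthant)
        x = np.maximum(np.abs(x) - step * lam, 0.0) * np.sign(x)
        x = np.maximum(x, 0.0)
    return x
```

For convergence the step must satisfy `step < 2 / ||H||^2`; for the soft-threshold plus positivity projection used here, the composition is exactly the prox of the combined penalty.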
Astronomy and Astrophysics | 2012
Adrienne Leonard; François-Xavier Dupé; Jean-Luc Starck
(Abridged) Weak gravitational lensing is an ideal probe of the dark universe. In recent years, several linear methods have been developed to reconstruct the density distribution in the Universe in three dimensions, making use of photometric redshift information to determine the radial distribution of lensed sources. In this paper, we aim to address three key issues seen in these methods; namely, the bias in the redshifts of detected objects, the line of sight smearing seen in reconstructions, and the damping of the amplitude of the reconstruction relative to the underlying density. We consider the problem under the framework of compressed sensing (CS). Under the assumption that the data are sparse in an appropriate dictionary, we construct a robust estimator and employ state-of-the-art convex optimisation methods to reconstruct the density contrast. For simplicity in implementation, and as a proof of concept of our method, we reduce the problem to one-dimension, considering the reconstruction along each line of sight independently. Despite the loss of information this implies, we demonstrate that our method is able to accurately reproduce cluster haloes up to a redshift of z=1, deeper than state-of-the-art linear methods. We directly compare our method with these linear methods, and demonstrate minimal radial smearing and redshift bias in our reconstructions, as well as a reduced damping of the reconstruction amplitude as compared to the linear methods. In addition, the CS framework allows us to consider an underdetermined inverse problem, thereby allowing us to reconstruct the density contrast at finer resolution than the input data.
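In its simplest form, the sparse recovery step of the CS framework described above reduces to an ℓ1-penalised least-squares problem over an underdetermined linear system. The sketch below uses plain iterative soft-thresholding (ISTA) on a generic operator `A`; the paper's actual lensing operator, dictionary, and convex solver are not reproduced, and all names are illustrative.

```python
import numpy as np

def ista_l1(A, y, lam, n_iter=500):
    # Iterative soft-thresholding for min_x 0.5*||A x - y||^2 + lam*||x||_1.
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1/L, L = Lipschitz const. of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = x - step * A.T @ (A @ x - y)               # gradient step
        x = np.sign(x) * np.maximum(np.abs(x) - step * lam, 0.0)  # soft-threshold
    return x
```

Because `A` may have more columns than rows, the ℓ1 penalty is what makes the underdetermined problem well posed, which is what allows reconstruction at finer resolution than the input data.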
international conference on image processing | 2011
François-Xavier Dupé; Mohamed-Jalal Fadili; J.-L. Starck
In this paper, we propose two algorithms for solving linear inverse problems when the observations are corrupted by Poisson noise. A proper data fidelity term (log-likelihood) is introduced to reflect the Poisson statistics of the noise. On the other hand, as a prior, the images to restore are assumed to be positive and sparsely represented in a dictionary of waveforms. Piecing together the data fidelity and the prior terms, the solution to the inverse problem is cast as the minimization of a non-smooth convex functional. We establish the well-posedness of the optimization problem, characterize the corresponding minimizers, and solve it by means of primal and primal-dual proximal splitting algorithms originating from the field of non-smooth convex optimization theory. Experimental results on deconvolution and comparison to prior methods are also reported.
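The Poisson negative log-likelihood used as the data-fidelity term admits a closed-form proximal operator, which is what makes primal-dual proximal splitting applicable without variance stabilisation. A minimal Chambolle-Pock-style sketch, under the assumption of an explicit matrix operator `H` and with sparsity and positivity imposed directly on the pixels (not on dictionary coefficients, as in the paper), might look like:

```python
import numpy as np

def prox_poisson(v, tau, y):
    # Prox of the Poisson neg. log-likelihood u -> sum(u - y*log(u)),
    # solved per component from the optimality condition (a quadratic in u).
    return 0.5 * (v - tau + np.sqrt((v - tau) ** 2 + 4.0 * tau * y))

def primal_dual_poisson(H, y, lam, n_iter=300):
    L = np.linalg.norm(H, 2)
    tau = sigma = 0.9 / L            # step sizes satisfying tau*sigma*L^2 < 1
    x = np.zeros(H.shape[1]); x_bar = x.copy()
    u = np.zeros(H.shape[0])
    for _ in range(n_iter):
        # dual step via the Moreau identity: prox of the conjugate of the NLL
        v = u + sigma * (H @ x_bar)
        u = v - sigma * prox_poisson(v / sigma, 1.0 / sigma, y)
        # primal step: prox of lam*||.||_1 + positivity
        x_new = np.maximum(x - tau * (H.T @ u) - tau * lam, 0.0)
        x_bar = 2.0 * x_new - x
        x = x_new
    return x
```

The dual update relies on the Moreau identity `prox_{sF*}(v) = v - s*prox_{F/s}(v/s)`, so only the primal prox of the Poisson likelihood is ever computed explicitly.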
international symposium on biomedical imaging | 2008
François-Xavier Dupé; Mohamed-Jalal Fadili; Jean-Luc Starck
We propose a deconvolution algorithm for images blurred and degraded by Poisson noise. The algorithm uses a fast proximal backward-forward splitting iteration. This iteration minimizes an energy which combines a non-linear data fidelity term, adapted to Poisson noise, and a non-smooth sparsity-promoting regularization (e.g., ℓ1-norm) over the image representation coefficients in some dictionary of transforms (e.g., wavelets, curvelets). Our results on simulated microscopy images of neurons and cells are compared with some state-of-the-art algorithms. They show that our approach is very competitive, and, as expected, the importance of the non-linearity due to Poisson noise is more salient at low and medium intensities. Finally, an experiment on real fluorescent confocal microscopy data is reported.
international conference on high performance computing and simulation | 2010
Amal Mahboubi; Luc Brun; François-Xavier Dupé
Automatic object recognition plays a central role in numerous applications, such as image retrieval and robot navigation. A now-classical strategy consists of computing a bag of features within a sliding window and comparing this bag with precomputed models. One main drawback of this approach is the use of an unstructured bag of features, which does not take into account the relationships that may be defined on structured objects. Graphs are natural data structures for modeling such relationships, with nodes representing features and edges encoding relationships between them. However, the usual distances between graphs, such as the graph edit distance, do not satisfy all the properties of a metric, and classifiers defined on these distances are mainly restricted to the K-nearest-neighbors method. This article describes an image object classification method based on a positive-definite graph kernel inducing a metric between graphs. This kernel may thus be combined with numerous classification algorithms.
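As an illustration of what a positive-definite graph kernel looks like (this is not the kernel of the paper), the dot product of node-label histograms is positive definite because it is an explicit inner product between feature vectors; the resulting Gram matrix can then be handed to any kernel classifier such as an SVM. The graph representation below is a hypothetical one chosen for the sketch.

```python
import numpy as np
from collections import Counter

def node_label_kernel(g1, g2):
    # A simple positive-definite kernel between labelled graphs:
    # the dot product of their node-label histograms.
    c1, c2 = Counter(g1["labels"]), Counter(g2["labels"])
    return float(sum(c1[k] * c2[k] for k in c1))

def gram_matrix(graphs):
    # Kernel matrix over a set of graphs; symmetric and positive semidefinite
    # by construction, so it is valid input for kernel machines.
    n = len(graphs)
    K = np.zeros((n, n))
    for i in range(n):
        for j in range(i, n):
            K[i, j] = K[j, i] = node_label_kernel(graphs[i], graphs[j])
    return K
```

Because the kernel is an inner product in an explicit feature space, the induced distance `d(g1,g2)^2 = k(g1,g1) - 2k(g1,g2) + k(g2,g2)` is a genuine metric, which is the property the abstract contrasts with the graph edit distance.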
Archive | 2012
Adrienne Leonard; François-Xavier Dupé; Jean-Luc Starck
Weak gravitational lensing is a powerful tool, which allows us to map the distribution of dark matter in the Universe. With the advent of large, high-resolution and multi-wavelength surveys, it has recently become possible to use photometric redshift information to reconstruct the matter distribution in three dimensions, rather than a two-dimensional projection. This is no easy task, as the inverse problem is ill posed, the data are noise-dominated, and the lensing efficiency kernel is very broad along the line of sight. State-of-the-art linear methods to recover the density distribution typically exhibit a line-of-sight bias in the location of detected peaks, and a broad smearing of the density distribution along the line of sight. We present here a non-linear proximal minimization method incorporating a sparse prior, which allows us to recover the underlying density distribution from lensing measurements with greatly reduced bias and smearing, thus allowing for more accurate mapping of the three-dimensional density distribution.
international conference on image processing | 2011
François-Xavier Dupé; Mohamed-Jalal Fadili; J.-L. Starck
The matter density is a key quantity in modern cosmology, as many phenomena are linked to matter fluctuations. However, this density is not directly available; it is estimated through lensing maps or galaxy surveys. In this article, we focus on galaxy surveys, which are incomplete and noisy observations of the galaxy density: incomplete, because part of the sky is unobserved or unreliable, and noisy, because they are count maps degraded by Poisson noise. Using a data augmentation method, we propose a two-step method for recovering the density map: one step infers the missing data and the other estimates the density. The results show that the missing areas are efficiently inferred and the statistical properties of the maps are preserved.
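The two-step scheme (infer the missing data, then estimate the density) can be caricatured in one dimension as follows. The paper uses a statistically principled data-augmentation sampler, whereas this sketch simply fills masked cells with a smoothed local estimate before forming the density contrast; all function names are illustrative.

```python
import numpy as np

def inpaint_counts(counts, mask, n_iter=50):
    # Step 1 (illustrative): fill unobserved cells (mask == 0) with a local
    # mean so the large-scale statistics of the map are roughly preserved.
    filled = counts.astype(float).copy()
    filled[mask == 0] = filled[mask == 1].mean()   # crude initial guess
    for _ in range(n_iter):
        # 3-cell moving average; only the masked cells are updated.
        smoothed = np.convolve(np.pad(filled, 1, mode="edge"),
                               np.ones(3) / 3.0, mode="valid")
        filled[mask == 0] = smoothed[mask == 0]
    return filled

def density_contrast(counts):
    # Step 2: convert a (filled) galaxy count map into a density contrast map.
    nbar = counts.mean()
    return counts / nbar - 1.0
```

Observed cells are never modified, which mirrors the requirement that inference only replaces the missing or unreliable parts of the survey.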
Statistical Methodology | 2012
François-Xavier Dupé; Mohamed-Jalal Fadili; Jean-Luc Starck
performance evaluation methodologies and tools | 2011
François-Xavier Dupé; Jalal M. Fadili; Jean-Luc Starck
Archive | 2012
Adrienne Leonard; Jean-Luc Starck; Sandrine Pires; François-Xavier Dupé