
Publication


Featured research published by Agnès Desolneux.


International Journal of Computer Vision - Special issue on statistical and computational theories of vision: modeling, learning, sampling and computing, Part I | 2000

Meaningful Alignments

Agnès Desolneux; Lionel Moisan; Jean-Michel Morel

We propose a method for detecting geometric structures in an image, without any a priori information. Roughly speaking, we say that an observed geometric event is “meaningful” if the expectation of its occurrences would be very small in a random image. We discuss the aporias of this definition and solve several of them by introducing “maximal meaningful events” and analyzing their structure. This methodology is applied to the detection of alignments in images.
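
The test behind this definition reduces to a binomial tail. Below is a minimal sketch of the number-of-false-alarms (NFA) computation for one candidate segment, assuming the usual a contrario setup for an N x N image (on the order of N^4 candidate segments) and an angular precision p = 1/16; treat these constants as illustrative choices rather than a definitive account of the paper.

```python
from scipy.stats import binom

def nfa_alignment(N, l, k, p=1.0 / 16.0):
    """Number of false alarms of a candidate alignment.

    N: image side length; N**4 bounds the number of candidate segments.
    l: number of pixels sampled along the segment.
    k: pixels among the l whose gradient orientation agrees with the
       segment direction up to precision p.
    p: probability of an accidental agreement at one pixel.
    """
    # B(l, k, p) = P(at least k aligned pixels among l) under the
    # a contrario hypothesis of i.i.d. uniform orientations.
    return N ** 4 * binom.sf(k - 1, l, p)

# A segment is epsilon-meaningful when NFA <= epsilon (typically 1).
print(nfa_alignment(N=512, l=100, k=40))  # far below 1: meaningful
```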


Journal of Mathematical Imaging and Vision | 2001

Edge Detection by Helmholtz Principle

Agnès Desolneux; Lionel Moisan; Jean-Michel Morel

We apply to edge detection a recently introduced method for computing geometric structures in a digital image, without any a priori information. According to a basic principle of perception due to Helmholtz, an observed geometric structure is perceptually “meaningful” if its number of occurrences would be very small in a random situation: in this context, geometric structures are characterized as large deviations from randomness. This leads us to define and compute edges and boundaries (closed edges) in an image by a parameter-free method. Maximal detectable boundaries and edges are defined and computed, and the results are compared with those obtained by classical algorithms.
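
The “very small number of occurrences” is again quantified by a number of false alarms. A schematic form of the boundary criterion, with notation assumed for illustration (N_ll candidate level lines, mu the minimal contrast along a curve C, l the number of roughly independent samples on C, and H the empirical tail distribution of the gradient norm):

```latex
\mathrm{NFA}(C) \;=\; N_{ll}\,\bigl[H(\mu)\bigr]^{l},
\qquad
H(\mu) \;=\; \frac{\#\{x : |\nabla I(x)| \ge \mu\}}{\#\{x\}}
```

A curve is then kept as an ε-meaningful boundary when NFA(C) ≤ ε; fixing ε = 1 is what makes the method parameter-free.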


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2003

Vanishing point detection without any a priori information

Andrés Almansa; Agnès Desolneux; Sébastien Vamech

Even though vanishing points in digital images result from parallel lines in the 3D scene, most of the proposed detection algorithms are forced to rely heavily either on additional properties of the underlying 3D lines (such as orthogonality, or coplanarity and equal spacing), or on knowledge of the camera calibration parameters, in order to avoid spurious responses. In this work, we develop a new detection algorithm that relies on the Helmholtz principle recently proposed for computer vision by Desolneux et al. (2001, 2003), both at the line detection and line grouping stages. This leads to a vanishing point detector with a low false-alarm rate and a high precision level, which does not rely on any a priori information on the image or calibration parameters, and does not require any parameter tuning.
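
A schematic view of the grouping stage, with the construction details assumed rather than taken from the paper: the image plane, extended by the line at infinity, is partitioned into M vanishing regions R_j, and a region met by k of the N detected lines is declared a meaningful vanishing point when

```latex
\mathrm{NFA}(R_j) \;=\; M \cdot \mathcal{B}\bigl(N,\, k,\, p_j\bigr) \;\le\; \varepsilon
```

where p_j is the probability that a random line meets R_j and B is the binomial tail; this mirrors the alignment test, applied to line-region incidences instead of pixel orientations.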


IEEE Transactions on Image Processing | 2007

A Nonparametric Approach for Histogram Segmentation

Julie Delon; Agnès Desolneux; Jose Luis Lisani; Ana Belén Petro

In this work, we propose a method to segment a 1-D histogram without a priori assumptions about the underlying density function. Our approach is based on a rigorous definition of an admissible segmentation, avoiding over- and under-segmentation problems. A fast algorithm leading to such a segmentation is proposed. The approach is tested on both synthetic and real data. An application to the segmentation of written documents is also presented. We shall see that this application requires the detection of very small histogram modes, which can be accurately detected with the proposed method.
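
The admissibility test compares each candidate segment of the histogram against its best unimodal fit; the natural reference here is the Grenander estimator, computed by pool-adjacent-violators. A minimal sketch of the decreasing half of that fit (the increasing half is symmetric), with the a contrario acceptance test itself omitted:

```python
import numpy as np

def decreasing_grenander(h):
    """Least-squares non-increasing fit of a histogram segment h
    (pool-adjacent-violators). The full unimodal fit combines an
    increasing fit left of the mode with a decreasing fit right of it."""
    blocks = []  # (sum, count) pairs with non-increasing block means
    for v in h:
        blocks.append([float(v), 1])
        # merge while a later block mean exceeds the previous one
        while len(blocks) > 1 and \
                blocks[-1][0] / blocks[-1][1] > blocks[-2][0] / blocks[-2][1]:
            s, c = blocks.pop()
            blocks[-1][0] += s
            blocks[-1][1] += c
    return np.array([s / c for s, c in blocks for _ in range(int(c))])

print(decreasing_grenander(np.array([5, 7, 3, 3, 4, 1])))
# [6.   6.   3.33 3.33 3.33 1.  ]  (non-increasing fit)
```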


Journal of Mathematical Imaging and Vision | 2007

A Unified Framework for Detecting Groups and Application to Shape Recognition

Frédéric Cao; Julie Delon; Agnès Desolneux; Pablo Musé; Frédéric Sur

A unified a contrario detection method is proposed to solve three classical problems in cluster analysis. The first is to evaluate the validity of a cluster candidate. The second is that meaningful clusters can contain or be contained in other meaningful clusters; a rule is needed to define locally optimal clusters by inclusion. The third is the definition of a correct merging rule between meaningful clusters, making it possible to decide whether they should stay separate or be merged. The motivation of this theory is shape recognition. Matching algorithms usually compute correspondences between more or less local features (called shape elements) in the images to be compared. Each pair of matching shape elements leads to a unique transformation (a similarity or affine map). The present theory is used to group these shape elements into shapes by detecting clusters in the transformation space.
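
The cluster-validity question again reduces to a binomial tail under a background model. Schematically, and with the notation introduced here only for illustration: if M data points are drawn i.i.d. from a background law and a candidate group G of k points fits in a region of prior probability π(G), then

```latex
\mathrm{NFA}(G) \;=\; N_{\mathrm{tests}} \cdot \mathcal{B}\bigl(M,\, k,\, \pi(G)\bigr)
```

with G meaningful when NFA(G) ≤ ε; maximality by inclusion and the merging rule are then settled by comparing the NFA values of nested or overlapping candidates.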


IEEE Transactions on Image Processing | 2002

Dequantizing image orientation

Agnès Desolneux; Saïd Ladjal; Lionel Moisan; Jean-Michel Morel

We address the problem of computing a local orientation map in a digital image. We show that standard image gray-level quantization causes a strong bias in the distribution of orientations, hindering any accurate geometric analysis of the image. We then propose a simple dequantization algorithm, which preserves all of the image information and transforms the quantization noise into approximately Gaussian white noise (we actually prove that only Gaussian noise can maintain isotropy of orientations). Mathematical arguments are used to show that this restores a high-quality image isotropy. In contrast with other classical methods, it turns out that this property can be obtained without smoothing the image or increasing the signal-to-noise ratio (SNR). As an application, it is shown in the experimental section that, thanks to this dequantization of orientations, geometric algorithms such as the detection of nonlocal alignments can be performed efficiently. We also point out similar improvements of orientation quality when our dequantization method is applied to aliased images.
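
One classical way to realize such a dequantization is a half-pixel Shannon translation, sketched below in the Fourier domain; take this as an illustration of the idea rather than the paper's exact algorithm (in particular, the Nyquist component is handled crudely by discarding the imaginary part).

```python
import numpy as np

def half_pixel_shannon_translate(img):
    """Resample img on a grid shifted by (1/2, 1/2) via its discrete
    Fourier interpolation. Sampling between the original pixels
    decorrelates the uniform quantization noise, which then behaves
    approximately like Gaussian white noise."""
    H, W = img.shape
    fy = np.fft.fftfreq(H)[:, None]  # vertical frequencies, cycles/pixel
    fx = np.fft.fftfreq(W)[None, :]  # horizontal frequencies
    shift = np.exp(2j * np.pi * 0.5 * (fy + fx))  # shift-theorem phase
    return np.real(np.fft.ifft2(np.fft.fft2(img) * shift))
```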


Journal of Mathematical Imaging and Vision | 2005

Image Denoising by Statistical Area Thresholding

David Coupier; Agnès Desolneux; Bernard Ycart

Area openings and closings are morphological filters which efficiently suppress impulse noise from an image, by removing small connected components of level sets. The problem of an objective choice of threshold for the area remains open. Here, a mathematical model for random images will be considered. Under this model, a Poisson approximation for the probability of appearance of any local pattern can be computed. In particular, the probability of observing a component with size larger than k in pure impulse noise has an explicit form. This permits the definition of a statistical test on the significance of connected components, thus providing an explicit formula for the area threshold of the denoising filter, as a function of the impulse noise probability parameter. Finally, using threshold decomposition, a denoising algorithm for grey level images is proposed.
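
A sketch of the resulting filter using scikit-image's area openings and closings; the area threshold k is left as a parameter here, whereas the paper derives it in closed form from the impulse-noise probability through the Poisson approximation.

```python
from skimage.morphology import area_closing, area_opening

def area_threshold_denoise(img, k):
    """Remove connected components of area < k from the upper level
    sets (bright impulse specks), then from the lower level sets
    (dark specks). k should come from the statistical test on
    component sizes; it is a free parameter in this sketch."""
    out = area_opening(img, area_threshold=k)
    return area_closing(out, area_threshold=k)
```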


SIAM Journal on Imaging Sciences | 2013

A Patch-Based Approach for Removing Impulse or Mixed Gaussian-Impulse Noise

Julie Delon; Agnès Desolneux

In this paper, we address the problem of restoring images which have been affected by impulse noise or by a mixture of Gaussian and impulse noise. We rely on a patch-based approach, which requires careful choices both for the distance between patches and for the statistical estimator of the original patch. Experiments are run in the case of pure impulse noise and in the case of a mixture. The method proves particularly powerful for the restoration of textured regions, and compares favorably to recent restoration methods.
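
The two careful choices named in the abstract can be illustrated as follows: a patch distance that impulses cannot dominate, and an outlier-rejecting estimator of the original patch. Both the trimming fraction and the median aggregation below are illustrative stand-ins, not the paper's actual estimator.

```python
import numpy as np

def trimmed_patch_distance(p, q, keep=0.75):
    """Squared distance that discards the largest per-pixel
    differences, so a few impulse-corrupted pixels cannot dominate
    the comparison (the trimming fraction is illustrative)."""
    d = np.sort(((p - q) ** 2).ravel())
    return d[: int(keep * d.size)].mean()

def robust_center_estimate(similar_patches):
    """Estimate the clean patch as the per-pixel median of a stack of
    similar patches; the median rejects impulse outliers where a
    mean would not."""
    return np.median(np.stack(similar_patches), axis=0)
```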


IEEE Transactions on Image Processing | 2012

Bayesian Technique for Image Classifying Registration

Mohamed Hachama; Agnès Desolneux; Frédéric J. P. Richard

In this paper, we address a complex image registration issue that arises when the dependencies between the intensities of the images to be registered are not spatially homogeneous. Such a situation is frequently encountered in medical imaging when a pathology present in one of the images locally modifies the intensity dependencies observed on normal tissue. Usual image registration models, which are based on a single global intensity similarity criterion, fail to register such images, as they are blind to local deviations of intensity dependencies. The same limitation is encountered in contrast-enhanced images, where multiple pixel classes have different properties of contrast agent absorption. In this paper, we propose a new model in which the similarity criterion is adapted locally to the images by classification of image intensity dependencies. Defined in a Bayesian framework, the similarity criterion is a mixture of probability distributions describing the dependencies on two classes. The model also includes a class map which locates the pixels of the two classes and weights the two mixture components. The registration problem is formulated both as an energy minimization problem and as a maximum a posteriori estimation problem, and is solved using a gradient descent algorithm. In the problem formulation and resolution, the image deformation and the class map are estimated simultaneously, leading to an original combination of registration and classification that we call image classifying registration. Whenever sufficient information about class location is available in applications, the registration can also be performed on its own by fixing a given class map. Finally, we illustrate the benefits of our model on two real applications from medical imaging: template-based segmentation of contrast-enhanced images and lesion detection in mammograms. We also conduct an evaluation of our model on simulated medical data and show its ability to take into account spatial variations of intensity dependencies while keeping good registration accuracy.
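
Schematically, the MAP formulation combines a two-class mixture likelihood with priors on the deformation and on the class map. The energy below is a sketch with placeholder priors, not the paper's exact functional:

```latex
E(\varphi, L) \;=\; -\sum_{x} \log \Bigl( \sum_{c \in \{1,2\}} \pi_c\bigl(L(x)\bigr)\, p_c\bigl(I_1(x),\, I_2(\varphi(x))\bigr) \Bigr) \;+\; R(\varphi) \;+\; S(L)
```

Here p_c models the intensity dependencies of class c, the class map L weights the two mixture components through π_c, and R, S are regularizing priors; gradient descent on E estimates φ and L simultaneously.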


SIAM Journal on Imaging Sciences | 2010

Stabilization of Flicker-Like Effects in Image Sequences through Local Contrast Correction

Julie Delon; Agnès Desolneux

In this paper, we address the problem of restoring image sequences which have been affected by local intensity modifications (local contrast changes). Such artifacts are encountered particularly in biological or archive film sequences, and are usually due to inconsistent exposures or sparse time sampling. In order to reduce such local artifacts, we introduce a local stabilization operator, called LStab, which acts as a time filter on image patches and relies on a similarity measure that is robust to contrast changes. This operator thereby takes motion into account without relying on a sophisticated motion estimation procedure. The efficiency of the stabilization is shown on various sequences, and the experimental results compare favorably with state-of-the-art approaches.
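
One simple similarity measure that is robust to contrast changes, of the kind the operator relies on, is an affine-normalized patch distance; the paper's actual measure and its temporal weighting are not reproduced in this sketch.

```python
import numpy as np

def affine_invariant_distance(p, q, eps=1e-8):
    """Compare two patches up to an affine contrast change by
    normalizing each to zero mean and unit variance first (one
    simple contrast-invariant choice)."""
    pn = (p - p.mean()) / (p.std() + eps)
    qn = (q - q.mean()) / (q.std() + eps)
    return np.mean((pn - qn) ** 2)
```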

Collaboration


Dive into Agnès Desolneux's collaboration.

Top Co-Authors

Julie Delon

Paris Descartes University

Hermine Biermé

Paris Descartes University


Lara Raad

École normale supérieure de Cachan


Mohamed Hachama

Paris Descartes University
