Andrés Almansa
Télécom ParisTech
Publication
Featured research published by Andrés Almansa.
IEEE Transactions on Image Processing | 2000
Andrés Almansa; Tony Lindeberg
This work presents two mechanisms for processing fingerprint images: shape-adapted smoothing based on second-moment descriptors, and automatic scale selection based on normalized derivatives. The shape adaptation procedure adapts the smoothing operation to the local ridge structures, which allows interrupted ridges to be joined without destroying essential singularities such as branching points, and enforces continuity of their directional fields. The scale selection procedure estimates local ridge width and adapts the amount of smoothing to the local amount of noise. In addition, a ridgeness measure is defined, which reflects how well the local image structure agrees with a qualitative ridge model, and is used for spreading the results of shape adaptation into noisy areas. The combined approach makes it possible to resolve fine scale structures in clear areas while reducing the risk of enhancing noise in blurred or fragmented areas. The result is a reliable and adaptively detailed estimate of the ridge orientation field and ridge width, as well as a smoothed grey-level version of the input image. We propose that these general techniques should be of interest to developers of automatic fingerprint identification systems as well as in other applications of processing related types of imagery.
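The second-moment (structure tensor) machinery behind the shape-adapted smoothing can be sketched in a few lines. This is an illustrative reimplementation, not the paper's code; the function name and the scales `sigma_d`/`sigma_i` are arbitrary choices made here:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def ridge_orientation(image, sigma_d=1.0, sigma_i=4.0):
    """Estimate local orientation from the second-moment matrix.

    Smooth the outer products of the image gradient (the structure
    tensor) and read the dominant orientation off its eigenstructure.
    sigma_d is the differentiation scale, sigma_i the integration scale.
    """
    gy, gx = np.gradient(gaussian_filter(image, sigma_d))
    # Integrated second-moment (structure tensor) components.
    jxx = gaussian_filter(gx * gx, sigma_i)
    jyy = gaussian_filter(gy * gy, sigma_i)
    jxy = gaussian_filter(gx * gy, sigma_i)
    # Dominant gradient orientation; ridges run perpendicular to it.
    theta = 0.5 * np.arctan2(2.0 * jxy, jxx - jyy)
    return theta
```

On a synthetic pattern of vertical ridges, the dominant gradient orientation comes out horizontal (theta near 0), as expected.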
IEEE Transactions on Pattern Analysis and Machine Intelligence | 2003
Andrés Almansa; Agnès Desolneux; Sébastien Vamech
Even though vanishing points in digital images result from parallel lines in the 3D scene, most of the proposed detection algorithms are forced to rely heavily either on additional properties (like orthogonality, or coplanarity and equal distance) of the underlying 3D lines, or on knowledge of the camera calibration parameters, in order to avoid spurious responses. In this work, we develop a new detection algorithm that relies on the Helmholtz principle recently proposed for computer vision by Desolneux et al. (2001, 2003), at both the line detection and line grouping stages. This leads to a vanishing point detector with a low false alarm rate and a high precision level, which does not rely on any a priori information about the image or the calibration parameters, and does not require any parameter tuning.
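For context, the a contrario detection rule underlying this approach declares a group meaningful when its expected number of accidental occurrences is small; in the notation of Desolneux et al.'s meaningful alignments (a sketch of that earlier work, not this paper's exact formulation), for a segment of length $l$ containing $k$ aligned points in an $N \times N$ image:

```latex
\mathrm{NFA}(l,k) \;=\; N^4 \, B(l,k,p),
\qquad
B(l,k,p) \;=\; \sum_{i=k}^{l} \binom{l}{i}\, p^{i} (1-p)^{l-i},
```

where $p$ is the angular precision (e.g. $p = 1/16$), $N^4$ bounds the number of candidate segments, and a segment is declared 1-meaningful when $\mathrm{NFA} \le 1$.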
Journal of Scientific Computing | 2008
Andrés Almansa; Coloma Ballester; Vicent Caselles; Gloria Haro
We propose in this paper a total variation based restoration model which incorporates the image acquisition model z=h*U+n (where z represents the observed sampled image, U is the ideal undistorted image, h denotes the blurring kernel and n is a white Gaussian noise) as a set of local constraints. These constraints, one for each pixel of the image, express the fact that the variance of the noise can be estimated from the residuals z−h*U if we use a neighborhood of each pixel. This is motivated by the fact that the usual inclusion of the image acquisition model as a single constraint expressing a bound for the variance of the noise does not give satisfactory results if we wish to simultaneously recover textured regions and obtain a good denoising of the image. We use Uzawa’s algorithm to minimize the total variation subject to the proposed family of local constraints and we display some experiments using this model.
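The local-constraint formulation and the Uzawa multiplier update can be summarized as follows (a sketch in the abstract's notation; $B_i$ denotes the neighborhood of pixel $i$, $\lambda_i \ge 0$ the multipliers, and $\rho > 0$ a step size chosen here for illustration):

```latex
\min_{U} \int |\nabla U|
\quad \text{s.t.} \quad
\frac{1}{|B_i|} \sum_{j \in B_i} \big(z_j - (h*U)_j\big)^2 \le \sigma^2
\quad \forall i,
\\[4pt]
U^{k+1} \in \arg\min_{U}\; \int |\nabla U|
  + \sum_i \lambda_i^{k}
    \Big(\tfrac{1}{|B_i|} \textstyle\sum_{j \in B_i} (z_j - (h*U)_j)^2 - \sigma^2\Big),
\\[4pt]
\lambda_i^{k+1} = \max\!\Big(0,\;
  \lambda_i^{k} + \rho \Big(\tfrac{1}{|B_i|} \textstyle\sum_{j \in B_i}
  \big(z_j - (h*U^{k+1})_j\big)^2 - \sigma^2\Big)\Big).
```

The projection onto $\lambda_i \ge 0$ in the last line is the standard Uzawa ascent step for inequality constraints: multipliers grow only where the local residual variance exceeds the noise bound.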
Workshop on Applications of Computer Vision | 2000
Andrés Almansa; Laurent D. Cohen
A common approach in fingerprint matching algorithms consists of minimizing a similarity measure between feature vectors of both images, over a set of linear transformations of one image to the other. In this work we propose the thin-plate spline as a more accurate model for the geometric transformations that arise in fingerprint images. In addition, we show how such a model can be integrated into a matching algorithm by means of a two-step iterative minimization with auxiliary variables. Such a method makes it possible to correct many of the false pairings of minutiae commonly found by matching algorithms based on linear transforms.
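SciPy ships a thin-plate spline kernel in its radial basis interpolator, so the geometric model can be sketched directly. This is a toy illustration of the transformation model only, not the paper's two-step minimization; `tps_warp` and its arguments are names chosen here:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

def tps_warp(src_pts, dst_pts):
    """Fit a thin-plate-spline mapping on matched minutiae.

    src_pts and dst_pts are (n, 2) arrays of corresponding minutiae
    coordinates. The returned callable maps points from the source
    fingerprint's frame into the destination frame; with the default
    zero smoothing it interpolates the control points exactly.
    """
    return RBFInterpolator(src_pts, dst_pts, kernel='thin_plate_spline')
```

Unlike a rigid or affine fit, the spline bends smoothly between control points, which is what absorbs the skin-deformation residuals that linear transforms leave behind.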
IEEE Transactions on Geoscience and Remote Sensing | 2002
Andrés Almansa; Frédéric Cao; Yann Gousseau; Bernard Rougé
Interpolation of digital elevation models becomes necessary in many situations, for instance, when constructing them from contour lines (available e.g., from nondigital cartography), or from disparity maps based on pairs of stereoscopic views, which often leaves large areas where point correspondences cannot be found reliably. The absolutely minimizing Lipschitz extension (AMLE) model is singled out as the simplest interpolation method satisfying a set of natural requirements. In particular, a maximum principle is proven, which guarantees that no unnatural oscillations are introduced, a major problem with many classical methods. The authors then discuss the links between the AMLE and other existing methods. In particular, they show its relation with the geodesic distance transformation. They also relate the AMLE to the thin-plate method, which can be obtained by extending the axiomatic arguments leading to the AMLE, and which addresses the major disadvantage of the AMLE model, namely its inability to interpolate slopes as it does values. Nevertheless, in order to interpolate slopes, they have to give up the maximum principle and allow the appearance of oscillations. They also discuss the possible link between the AMLE and Kriging methods, which are the most widely used in the geoscience literature.
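For intuition, the AMLE can be computed by iterating a midpoint scheme for the infinity Laplacian (Oberman's scheme): each unknown sample is replaced by the average of the maximum and minimum of its neighbors. The sketch below is a 1-D toy (the paper works on 2-D elevation grids, where the max/min is taken over a pixel neighborhood); all names here are illustrative:

```python
import numpy as np

def amle_interpolate(values, known, n_iter=2000):
    """Toy 1-D AMLE interpolation via the max/min midpoint scheme.

    values: samples, valid only where known is True.
    known:  boolean mask of fixed (Dirichlet) samples.
    Each iteration replaces every unknown interior sample by the
    average of the max and min of its two immediate neighbors,
    keeping known samples fixed.
    """
    u = values.astype(float).copy()
    for _ in range(n_iter):
        left, right = u[:-2], u[2:]
        mid = 0.5 * (np.maximum(left, right) + np.minimum(left, right))
        interior = ~known[1:-1]
        u[1:-1][interior] = mid[interior]
    return u
```

In 1-D this converges to the piecewise linear interpolant of the known samples, which is exactly the AMLE between two Dirichlet values; the maximum principle is visible in the update, since each new value lies between the neighboring ones.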
IEEE Transactions on Pattern Analysis and Machine Intelligence | 2012
Neus Sabater; Andrés Almansa; Jean-Michel Morel
This paper introduces a statistical method to decide whether two blocks in a pair of images match reliably. The method ensures that the selected block matches are unlikely to have occurred “just by chance.” The new approach is based on the definition of a simple but faithful statistical background model for image blocks learned from the image itself. A theorem guarantees that under this model, not more than a fixed number of wrong matches occurs (on average) for the whole image. This fixed number (the number of false alarms) is the only method parameter. Furthermore, the number of false alarms associated with each match measures its reliability. This a contrario block-matching method, however, cannot rule out false matches due to the presence of periodic objects in the images. But it is successfully complemented by a parameterless self-similarity threshold. Experimental evidence shows that the proposed method also detects occlusions and incoherent motions due to vehicles and pedestrians in nonsimultaneous stereo.
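The acceptance rule can be sketched with an empirical background model. This toy version is illustrative only (the function name is hypothetical, and the paper learns a block model from the image itself rather than using a raw sample of distances), but it shows the number-of-false-alarms threshold at work:

```python
import numpy as np

def nfa_matches(distances, null_distances, n_tests, eps=1.0):
    """A contrario acceptance of block matches (sketch).

    For each candidate match distance, estimate the probability of an
    equal-or-smaller distance under the background model (here, an
    empirical sample of distances between unrelated blocks), then
    threshold the number of false alarms NFA = n_tests * p at eps.
    Returns a boolean array: True where the match is accepted.
    """
    null = np.sort(np.asarray(null_distances, dtype=float))
    # Empirical P(D <= d) under the background model.
    p = np.searchsorted(null, distances, side='right') / len(null)
    nfa = n_tests * p
    return nfa <= eps
```

With eps = 1, at most one accepted match is expected to be accidental over all n_tests comparisons, which is why eps is the method's single parameter.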
International Journal of Computer Vision | 2012
Mauricio Delbracio; Pablo Musé; Andrés Almansa; Jean-Michel Morel
Most medium to high quality digital cameras (DSLRs) acquire images at a spatial rate which is several times below the ideal Nyquist rate. For this reason only aliased versions of the camera's point-spread function (PSF) can be directly observed. Yet, it can be recovered, at a sub-pixel resolution, by a numerical method. Since the acquisition system is only locally stationary, this PSF estimation must be local. This paper presents a theoretical study proving that the sub-pixel PSF estimation problem is well-posed even with a single well chosen observation. Indeed, theoretical bounds show that a near-optimal accuracy can be achieved with a calibration pattern mimicking a Bernoulli(0.5) random noise. The physical realization of this PSF estimation method is demonstrated in many comparative experiments. We use an algorithm to accurately estimate the pattern position and its illumination conditions. Once this accurate registration is obtained, the local PSF can be directly computed by inverting a well conditioned linear system. The PSF estimates reach stringent accuracy levels with a relative error of the order of 2% to 5%. To the best of our knowledge, such a regularization-free and model-free sub-pixel PSF estimation scheme is the first of its kind.
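Once the pattern is registered, the PSF estimate reduces to a linear least-squares solve. The following 1-D toy (ignoring subsampling, registration, and illumination, with names chosen here for illustration) shows why a Bernoulli(0.5) pattern keeps the system well conditioned:

```python
import numpy as np

def estimate_psf(pattern, observed, k):
    """Least-squares PSF estimate from a known calibration pattern.

    Assumes observed[j] = sum_i pattern[j + i] * h[i] (+ noise) for a
    length-k kernel h. The matrix A stacks shifted windows of the
    pattern; a Bernoulli(0.5) pattern makes A well conditioned, so the
    kernel is recovered by an ordinary least-squares solve.
    """
    n = len(observed)
    A = np.column_stack([pattern[i:i + n] for i in range(k)])
    h, *_ = np.linalg.lstsq(A, observed, rcond=None)
    return h
```

In the noiseless case the recovery is exact; with noise, the conditioning of A governs how much the estimate degrades, which is the role the random pattern plays in the paper's analysis.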
Siam Journal on Imaging Sciences | 2011
Neus Sabater; Jean-Michel Morel; Andrés Almansa
This article explores the subpixel accuracy attainable for the disparity computed from a rectified stereo pair of images with small baseline. In this framework we consider translations as the local deformation model between patches in the images. A mathematical study first shows how discrete block-matching can be performed with arbitrary precision under Shannon-Whittaker conditions. This study leads to the specification of a block-matching algorithm which is able to refine disparities with subpixel accuracy. Moreover, a formula for the variance of the disparity error caused by the noise is introduced and proved. Several simulated and real experiments show a decent agreement between this theoretical error variance and the observed root mean squared error in stereo pairs with good signal-to-noise ratio and low baseline. A practical consequence is that under realistic sampling and noise conditions in optical imaging, the disparity map in stereo-rectified images can be computed for the majority of pixels (but only for those pixels with meaningful matches) with a 1/20 pixel precision.
Conference on Visual Media Production | 2013
Alasdair Newson; Andrés Almansa; Matthieu Fradet; Yann Gousseau; Patrick Pérez
Multiscale Modeling & Simulation | 2006
Andrés Almansa; Vicent Caselles; Gloria Haro; Bernard Rougé