Sylvain Fischer
Spanish National Research Council
Publications
Featured research published by Sylvain Fischer.
Information Fusion | 2009
Rafael Redondo; Filip Sroubek; Sylvain Fischer; Gabriel Cristóbal
Today, multiresolution (MR) transforms are a widespread tool for image fusion. They decorrelate the image into several scaled and oriented sub-bands, which are usually averaged over a certain neighborhood (window) to obtain a measure of saliency. This paper first evaluates log-Gabor filters, which have been successfully applied to other image processing tasks, as an appealing candidate for MR image fusion compared with other wavelet families. Consequently, the paper also sheds further light on appropriate values for MR settings such as the number of orientations, the number of scales, overcompleteness and noise robustness. Additionally, we revisit the novel Multisize Windows (MW) technique as a general approach for MR frameworks that exploits the advantages of different window sizes. For all of these purposes, the proposed techniques are first assessed on simulated noisy multifocus fusion experiments and then on a real microscopy scenario.
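The fusion rule the abstract describes, selecting per pixel the source whose sub-band energy, averaged over a window, is largest, can be sketched as follows. This is a minimal illustration, not the paper's implementation: a single Laplacian high-pass band stands in for the full log-Gabor sub-band set, and `fuse_multifocus` and its parameters are hypothetical names.

```python
import numpy as np
from scipy.ndimage import uniform_filter, laplace

def fuse_multifocus(img_a, img_b, window=7):
    """Fuse two multifocus images by per-pixel saliency selection.

    Saliency is the local energy of a high-pass (Laplacian) response,
    averaged over a square window, standing in for the MR sub-band
    energy described in the paper.
    """
    # High-pass energy approximates the detail sub-bands of an MR transform.
    sal_a = uniform_filter(laplace(img_a) ** 2, size=window)
    sal_b = uniform_filter(laplace(img_b) ** 2, size=window)
    mask = sal_a >= sal_b          # choose the sharper source per pixel
    return np.where(mask, img_a, img_b), mask
```

Feeding a sharp image and a defocused copy of it into this routine recovers the sharp source almost everywhere, since blurring removes exactly the high-pass energy the saliency measure looks for.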
International Journal of Computer Vision | 2007
Sylvain Fischer; Filip Sroubek; Laurent Perrinet; Rafael Redondo; Gabriel Cristóbal
Orthogonal and biorthogonal wavelets have become very popular image processing tools but exhibit major drawbacks, namely poor resolution in orientation and a lack of translation invariance due to aliasing between subbands. Alternative multiresolution transforms that specifically address these drawbacks have been proposed. These transforms are generally overcomplete and consequently offer large degrees of freedom in their design; at the same time, their optimization becomes a challenging task. We propose here the construction of log-Gabor wavelet transforms that allow exact reconstruction while preserving the excellent mathematical properties of Gabor filters. Two major improvements over previous Gabor wavelet schemes are proposed: first, the highest-frequency bands are covered by narrowly localized oriented filters; second, the set of filters covers the Fourier domain uniformly, including the highest and lowest frequencies, so that exact reconstruction is achieved using the same filters in both the direct and the inverse transforms (i.e., the transform is self-invertible). The present transform not only achieves important mathematical properties, it also follows as closely as possible the known receptive-field properties of simple cells in the primary visual cortex (V1) and the statistics of natural images. Compared to the state of the art, log-Gabor wavelets show an excellent ability to segregate the image information (e.g. the contrast edges) from spatially incoherent Gaussian noise by hard thresholding, and then to represent image features through a reduced set of large-magnitude coefficients. Such characteristics make the transform a promising tool for processing natural images.
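A single log-Gabor filter of the kind this transform is built from can be written directly in the Fourier domain: a Gaussian on a log-frequency axis times an angular Gaussian. The sketch below uses common textbook parameter values (`f0`, `sigma_ratio`, `sigma_theta` are illustrative, not the paper's settings); note the filter has no DC component by construction, one of the defining properties of log-Gabors.

```python
import numpy as np

def log_gabor(shape, f0=0.25, sigma_ratio=0.65, theta0=0.0, sigma_theta=np.pi / 6):
    """Single log-Gabor filter defined directly in the 2D Fourier domain.

    f0 is the centre frequency (cycles/pixel); sigma_ratio = sigma_f / f0.
    The radial profile is Gaussian on a log-frequency axis, so the filter
    cannot pass f = 0 (no DC component).
    """
    rows, cols = shape
    fy = np.fft.fftfreq(rows)[:, None]
    fx = np.fft.fftfreq(cols)[None, :]
    f = np.hypot(fx, fy)
    theta = np.arctan2(fy, fx)
    radial = np.zeros(shape)
    nz = f > 0
    radial[nz] = np.exp(-np.log(f[nz] / f0) ** 2 /
                        (2 * np.log(sigma_ratio) ** 2))
    # Angular Gaussian; wrap the angle difference into [-pi, pi].
    dtheta = np.angle(np.exp(1j * (theta - theta0)))
    angular = np.exp(-dtheta ** 2 / (2 * sigma_theta ** 2))
    return radial * angular
```

Self-invertibility in the paper's sense then amounts to choosing scales and orientations so that the squared filters sum to one over the whole Fourier plane.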
IEEE Transactions on Image Processing | 2006
Sylvain Fischer; Gabriel Cristóbal; Rafael Redondo
Gabor representations present a number of interesting properties despite the fact that the basis functions are nonorthogonal and provide an overcomplete representation or a nonexact reconstruction. Overcompleteness involves an expansion of the number of coefficients in the transform domain and induces a redundancy that can be further reduced through computationally costly iterative algorithms such as Matching Pursuit. Here, a biologically plausible algorithm based on competition between neighboring coefficients is employed to adaptively represent any source image by a selected subset of Gabor functions. This scheme yields sharper edge localization and a significant reduction of the information redundancy while preserving the reconstruction quality. The method is characterized by its biological plausibility and promising results, but it still requires a more in-depth theoretical analysis to complete its validation.
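The competition idea can be illustrated with a much simpler stand-in rule than the paper's: a winner-take-all step in which a coefficient survives only if its magnitude is the local maximum in its neighborhood. The function name and the reduction of competition to one non-maximum-suppression pass are assumptions for illustration.

```python
import numpy as np
from scipy.ndimage import maximum_filter

def local_competition(coeffs, radius=1):
    """Winner-take-all competition among neighbouring transform coefficients.

    A coefficient survives only if its magnitude is the maximum within a
    (2*radius+1)^2 neighbourhood; the rest are suppressed to zero. This is
    a drastically simplified stand-in for the paper's competition rule.
    """
    mag = np.abs(coeffs)
    winners = mag == maximum_filter(mag, size=2 * radius + 1)
    return np.where(winners, coeffs, 0.0)
```

On a small patch dominated by one strong response, only that response survives, which is the redundancy-reduction effect the abstract describes, obtained here without any iterative Matching Pursuit.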
international conference on image analysis and processing | 1999
H. du Buf; Micha M. Bayer; Stephen J. M. Droop; R. Head; Steve Juggins; Sylvain Fischer; Horst Bunke; Michael H. F. Wilkinson; Jos B. T. M. Roerdink; Jose Luis Pech-Pacheco; Gabriel Cristóbal; H. Shahbazkia; A. Ciobanu
This paper introduces the ADIAC project (Automatic Diatom Identification and Classification), which started in May 1998 and is financed by the European MAST (Marine Science and Technology) programme. The main goal is to develop algorithms for the automatic identification of diatoms using image information, both valve shape (contour) and ornamentation. The paper presents the goals of the project as well as first results on shape modeling and contour extraction. Public data are available for creating student projects beyond the ADIAC partnership.
EURASIP Journal on Advances in Signal Processing | 2007
Sylvain Fischer; Rafael Redondo; Laurent Perrinet; Gabriel Cristóbal
Several drawbacks of critically sampled wavelets can be solved by overcomplete multiresolution transforms and sparse approximation algorithms. Facing the difficulty of optimizing such nonorthogonal and nonlinear transforms, we implement a sparse approximation scheme inspired by the functional architecture of the primary visual cortex. The scheme models simple- and complex-cell receptive fields through log-Gabor wavelets, and also incorporates inhibition and facilitation interactions between neighboring cells. Functionally, these interactions allow the extraction of edges and ridges, providing an edge-based approximation of the visual information. The edge coefficients are shown to be sufficient for closely reconstructing the images, while contour representations by means of chains of edges reduce the information redundancy, a step toward image compression. Additionally, the ability to segregate the edges from the noise is employed for image restoration.
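The restoration step relies on hard thresholding in a self-invertible transform. As a minimal sketch, the FFT stands in below for the log-Gabor transform (an assumption; the paper's transform is oriented and multiscale), and the median-based noise estimate is the standard robust rule, not necessarily the paper's.

```python
import numpy as np

def hard_threshold_denoise(img, k=3.0):
    """Denoise by hard-thresholding coefficients of a self-invertible transform.

    The FFT stands in for the log-Gabor transform of the paper. Coefficients
    whose magnitude falls below k times a robust (median-based) estimate of
    the noise floor are zeroed before inverting.
    """
    C = np.fft.fft2(img)
    # Median-based estimate of the noise floor in the transform domain.
    sigma = np.median(np.abs(C)) / 0.6745
    C[np.abs(C) < k * sigma] = 0
    return np.real(np.fft.ifft2(C))
```

For a signal whose energy concentrates in few coefficients, as the abstract argues edges do under log-Gabor wavelets, the surviving large-magnitude coefficients reconstruct the image while the incoherent noise is discarded.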
Journal of The Optical Society of America A-optics Image Science and Vision | 2012
Anthony Bichler; Sylvain Lecler; Bruno Serio; Sylvain Fischer; Pierre Pfeiffer
A step-index multimode optical fiber with a micrometer-scale perturbation, inducing a periodic deformation of the fiber section along its propagation axis, is theoretically investigated. The studied microperturbation is mechanically achieved using two microstructured jaws squeezing the straight fiber. As opposed to optical fiber microbend sensors, the optical axis of the proposed transducer is not bent; only the optical fiber section is deformed. Further, the strain applied to the fiber produces a periodic elliptical modification of the core and a modulation of the refractive index. As a consequence of the micrometer-scale perturbation period, the resulting mode coupling occurs directly between guided and radiated modes. To simulate the transmission induced by such perturbations, simplified models considering only total mode couplings are often used. In order to investigate the range of validity of this approximation, results are compared to the electromagnetic mode couplings rigorously computed, for the first time to our knowledge, for a large multimode fiber (more than 6000 linearly polarized modes) using the Marcuse model. In addition, to achieve a more complete modeling of the proposed transducer, the anisotropic elasto-optic effects in the stressed multimode fiber are considered. In this way, the transmission of the microperturbed optical fiber, and therefore the behavior of the transducer, is physically explained, and its application as a future stretching sensor is discussed.
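The coupled-mode formalism behind the Marcuse model can be illustrated on the smallest possible case: two modes exchanging power under a constant coupling coefficient and a phase mismatch. This is a textbook sketch (function name, parameters and the restriction to two modes are assumptions, not the paper's 6000-mode computation); the phase-matched case transfers all power, while a mismatch caps the transfer at kappa^2 / (kappa^2 + (dbeta/2)^2).

```python
import numpy as np

def two_mode_coupling(kappa, dbeta, z_max, steps=4000):
    """Integrate the coupled-mode equations for two modes (RK4).

        dA1/dz = 1j*kappa*A2*exp(+1j*dbeta*z)
        dA2/dz = 1j*kappa*A1*exp(-1j*dbeta*z)

    All power is launched in mode 1; returns |A1|^2 and |A2|^2 along z.
    """
    def rhs(z, a):
        a1, a2 = a
        return np.array([1j * kappa * a2 * np.exp(1j * dbeta * z),
                         1j * kappa * a1 * np.exp(-1j * dbeta * z)])

    a = np.array([1.0 + 0j, 0.0 + 0j])
    h = z_max / steps
    p1, p2 = [1.0], [0.0]
    z = 0.0
    for _ in range(steps):
        # Classical fourth-order Runge-Kutta step on the complex amplitudes.
        k1 = rhs(z, a)
        k2 = rhs(z + h / 2, a + h / 2 * k1)
        k3 = rhs(z + h / 2, a + h / 2 * k2)
        k4 = rhs(z + h, a + h * k3)
        a = a + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        z += h
        p1.append(abs(a[0]) ** 2)
        p2.append(abs(a[1]) ** 2)
    return np.array(p1), np.array(p2)
```

Total guided power is conserved along the fiber in this lossless two-mode case; coupling into radiated modes, the mechanism the paper studies, would instead appear as a loss term.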
Journal of Visual Communication and Image Representation | 2008
Rafael Redondo; Sylvain Fischer; Filip Sroubek; Gabriel Cristóbal
We present a scheme for image fusion based on a 2D implementation of the Wigner Distribution (WD) combined with a multisize-windows technique. The joint space-frequency distribution provided by the WD can be used as a measure of saliency that indicates which regions among the different sources (channels) should be preserved. However, such a saliency measure varies significantly with the local analysis window in which the WD is calculated: large windows provide high resolution and robustness against noise present in the channels, while small windows provide accurate localization. The multisize-windows technique combines the saliency measures of different windows, taking advantage of the benefit contributed by each size. The performance assessment was conducted on artificial multifocus images under different noise levels as well as on real multifocus scenarios.
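The multisize-windows combination can be sketched with a simple rule: compute the per-pixel selection decision at several window sizes and merge them by majority vote. The vote is an assumed stand-in for the paper's MW combination, and a Laplacian energy replaces the Wigner Distribution saliency for brevity.

```python
import numpy as np
from scipy.ndimage import uniform_filter, laplace

def multisize_fusion(img_a, img_b, windows=(3, 9, 27)):
    """Multisize-windows fusion: merge saliency decisions across window sizes.

    Small windows localise focus boundaries precisely, large windows resist
    noise; a majority vote over the window sizes combines both advantages
    (a simplified version of the MW combination rule).
    """
    energy_a, energy_b = laplace(img_a) ** 2, laplace(img_b) ** 2
    votes = sum(
        (uniform_filter(energy_a, w) >= uniform_filter(energy_b, w)).astype(int)
        for w in windows)
    mask = votes * 2 > len(windows)   # majority of window sizes prefer A
    return np.where(mask, img_a, img_b)
```

With a sharp source and a defocused copy, every window size votes for the sharp source, so the majority rule recovers it; the rule only matters near focus boundaries, where the sizes disagree.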
international conference on image analysis and processing | 2001
Sylvain Fischer; Gabriel Cristóbal
Most image compression methods are based on the DCT or (bi)orthogonal wavelets. However, in many cases improved performance in terms of visual quality can be expected from a model based on the human visual system (HVS). The aim of this paper is to explore the potential of image compression techniques based on nonorthogonal filters such as Gabor wavelets. The compression scheme consists of a linear wavelet transform with filters similar to 2D Gabor functions, followed by a quantizer based on measurements of the contrast sensitivity function of the HVS. The compression performance is evaluated by entropy and error measures. Because of the non-orthogonality, different image decompositions can yield the same reconstruction. Among all possible decompositions, one is therefore specifically interested in a minimum-entropy wavelet transform that minimizes the information redundancy. This process can be considered a nonlinear Gabor-wavelet transform suitable for compression applications. The overall optimization procedure has been implemented as an iterative algorithm producing a significant reduction in the information redundancy.
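A CSF-driven quantizer of the kind described can be sketched as follows: each sub-band gets a quantization step inversely proportional to the contrast sensitivity at its center frequency. The Mannos-Sakrison curve is used here as an illustrative CSF fit; the function names and the specific step rule are assumptions, not the paper's measured quantizer.

```python
import numpy as np

def csf_weight(f, a=2.6, b=0.0192, c=0.114, d=1.1):
    """Mannos-Sakrison-style contrast sensitivity curve (illustrative fit).

    f is spatial frequency in cycles/degree; sensitivity peaks at mid
    frequencies and falls off at both very low and very high frequencies.
    """
    return a * (b + c * f) * np.exp(-(c * f) ** d)

def quantize_band(coeffs, f_band, base_step=1.0):
    """Quantize a sub-band with a step inversely proportional to the CSF:
    bands the HVS is sensitive to get fine steps, the rest coarse steps."""
    step = base_step / max(csf_weight(f_band), 1e-3)
    return np.round(coeffs / step) * step
```

The effect is that perceptually unimportant bands are quantized coarsely (lower entropy, larger numerical error that the eye tolerates), while the mid-frequency bands the HVS sees best are preserved finely.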
european conference on computer vision | 2004
Sylvain Fischer; Pierre Bayerl; Heiko Neumann; Gabriel Cristóbal; Rafael Redondo
Tensor voting is an efficient algorithm for perceptual grouping and feature extraction, particularly for contour extraction. In this paper two studies on tensor voting are presented: first, the use of iterations is investigated; second, a new method for integrating curvature information is evaluated. Unlike other grouping methods, tensor voting claims the advantage of being non-iterative. Although non-iterative tensor voting provides good results in many cases, the algorithm can be iterated to deal with more complex data configurations. The experiments conducted demonstrate that iterations substantially improve the process of feature extraction and help to overcome limitations of the original algorithm. As a further contribution we propose a curvature improvement for tensor voting. In contrast to the curvature-augmented tensor voting proposed by Tang and Medioni, our method takes advantage of the curvature calculation already performed by classical tensor voting and evaluates the full curvature, both sign and amplitude. Some new curvature-modified voting fields are also proposed. Results show a lower degree of artifacts, smoother curves, a high tolerance to scale-parameter changes and improved noise robustness.
Signal Processing | 2007
Sylvain Fischer; Pierre Bayerl; Heiko Neumann; Rafael Redondo; Gabriel Cristóbal
Tensor voting (TV) methods have been developed in a series of papers by Medioni and coworkers in recent years. The method has proven efficient for feature extraction and grouping and has been applied successfully in a diversity of applications such as contour and surface inference, motion analysis, etc. We present here two studies on improvements of the method: the first iterates the TV process, and the second integrates curvature information. In contrast to other grouping methods, TV claims the advantage of being non-iterative. Although non-iterative TV methods provide good results in many cases, the algorithm can be iterated to deal with more complex or more ambiguous data configurations. We present experiments demonstrating that iterations substantially improve the process of feature extraction and help to overcome limitations of the original algorithm. As a further contribution, we propose a curvature improvement for TV. Unlike the curvature-augmented TV proposed by Tang and Medioni, our method evaluates the full curvature, both sign and amplitude, in the 2D case. Another advantage of the method is that it reuses part of the curvature calculation already performed by classical TV, limiting the computational costs. Curvature-modified voting fields are also proposed. Results show smoother curves, a lower degree of artifacts and a high tolerance to scale variations of the input. The methods are finally tested under noisy conditions, showing that the proposed improvements preserve the noise robustness of the TV method.
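The core second-order stick vote of classical TV can be sketched compactly: an oriented token casts onto each neighbor a rank-one tensor whose direction follows the osculating-circle rule (tangent rotated by twice the angle to the receiver), attenuated with distance and cut off at wide angles. This is a simplified field (chord length instead of arc length, no curvature penalty), and the function names are illustrative.

```python
import numpy as np

def stick_vote(voter_pos, voter_dir, receiver_pos, sigma=5.0):
    """Second-order stick vote cast from one oriented 2D token to another.

    Osculating-circle rule: the received tangent is the voter tangent
    rotated by 2*theta, where theta is the angle between the voter tangent
    and the join vector; the strength decays as a Gaussian in (chord)
    distance and is cut off beyond 45 degrees.
    """
    v = np.asarray(receiver_pos, float) - np.asarray(voter_pos, float)
    r = np.linalg.norm(v)
    t = np.asarray(voter_dir, float) / np.linalg.norm(voter_dir)
    if r == 0:
        return np.outer(t, t)
    theta = np.arctan2(t[0] * v[1] - t[1] * v[0], t @ v)
    if abs(theta) > np.pi / 4:
        return np.zeros((2, 2))        # receiver outside the voting field
    c, s = np.cos(2 * theta), np.sin(2 * theta)
    d = np.array([c * t[0] - s * t[1], s * t[0] + c * t[1]])
    return np.exp(-r ** 2 / sigma ** 2) * np.outer(d, d)

def saliency(tensor):
    """Stick saliency lambda1 - lambda2 of an accumulated 2x2 tensor."""
    lam = np.linalg.eigvalsh(tensor)
    return lam[1] - lam[0]
```

Summing such votes at every token and reading off lambda1 - lambda2 gives the curve saliency map; the curvature-modified fields discussed in the paper reshape this vote using the sign and amplitude of the estimated curvature.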