
Publication


Featured research published by François Deschênes.


Computer Vision and Image Understanding | 2001

Depth from Defocus Estimation in Spatial Domain

Djemel Ziou; François Deschênes

This paper presents an algorithm for a dense computation of the difference in blur between two images. The two images are acquired by varying the intrinsic parameters of the camera. The image formation system is assumed to be passive. Estimation of depth from the blur difference is straightforward. The algorithm is based on a local image decomposition technique using the Hermite polynomial basis. We show that any coefficient of the Hermite polynomial computed using the more blurred image is a function of the partial derivatives of the other image and the blur difference. Hence, the blur difference is computed by resolving a system of equations. The resulting estimation is dense and involves simple local operations carried out in the spatial domain. The mathematical developments underlying estimation of the blur in both 1D and 2D images are presented. The behavior of the algorithm is studied for constant images, step edges, line edges, and junctions. The selection of its parameters is discussed. The proposed algorithm is tested using synthetic and real images. The results obtained are accurate and dense. They are compared with those obtained using an existing algorithm.
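The central relation, in which the more blurred image is expressed through derivatives of the sharper one, can be illustrated with a minimal 1D sketch based on the diffusion approximation of Gaussian blur. This is an illustrative stand-in rather than the paper's Hermite-polynomial algorithm, and all names are hypothetical:

```python
import math

def gaussian_blur(signal, sigma):
    """Convolve a 1D signal with a truncated, normalized Gaussian kernel."""
    radius = max(1, int(3 * sigma))
    kernel = [math.exp(-0.5 * (i / sigma) ** 2) for i in range(-radius, radius + 1)]
    total = sum(kernel)
    kernel = [k / total for k in kernel]
    n = len(signal)
    out = []
    for i in range(n):
        acc = 0.0
        for j, k in enumerate(kernel):
            idx = min(max(i + j - radius, 0), n - 1)  # clamp at the borders
            acc += k * signal[idx]
        out.append(acc)
    return out

def blur_difference(img1, img2, i):
    """Estimate sigma2^2 - sigma1^2 at pixel i from the first-order relation
    img2 ~= img1 + 0.5 * d * img1''  (img2 is the more blurred image)."""
    second = img1[i - 1] - 2.0 * img1[i] + img1[i + 1]
    if abs(second) < 1e-12:
        return None  # constant region: the blur difference is unobservable
    return 2.0 * (img2[i] - img1[i]) / second
```

On a step edge blurred with sigmas 2.0 and 2.2, the estimate near the edge comes out close to the true variance difference of 0.84; as the abstract notes for constant images, flat regions carry no blur information.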


Pattern Recognition Letters | 2000

Detection of line junctions and line terminations using curvilinear features

François Deschênes; Djemel Ziou

This paper describes an efficient approach for the detection of line junctions and line terminations. The algorithm is divided into two steps. First, given the lines extracted from the original image, a local measure of line curvature is estimated. Two different measures of curvature are evaluated: the rate of change of direction of the orientation vector along the line, and the mean of the dot products of orientation vectors within a given neighborhood. The second step involves the localization of junctions and terminations. The algorithm is validated on several synthetic and real images.
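The second curvature measure, the mean of dot products of orientation vectors within a neighborhood, is simple to sketch. The code below is an illustrative reconstruction under assumed conventions (orientations in radians sampled along an extracted line), not the paper's implementation:

```python
import math

def curvature_by_dot_products(orientations, i, radius=2):
    """Curvature measure at line point i: mean absolute dot product of the
    local unit orientation vector with its neighbors. Close to 1 on
    straight segments, lower where the orientation turns, as at
    junctions and corners."""
    vx, vy = math.cos(orientations[i]), math.sin(orientations[i])
    dots = []
    for j in range(max(0, i - radius), min(len(orientations), i + radius + 1)):
        if j == i:
            continue
        dots.append(abs(vx * math.cos(orientations[j]) + vy * math.sin(orientations[j])))
    return sum(dots) / len(dots)
```

A straight line yields a measure of 1.0, while a right-angle corner drops well below it, which is what makes thresholding this measure a junction detector.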


Image and Vision Computing | 2004

An unified approach for a simultaneous and cooperative estimation of defocus blur and spatial shifts

François Deschênes; Djemel Ziou; Philippe Fuchs

This paper presents an algorithm for a cooperative and simultaneous estimation of depth cues: defocus blur and spatial shifts (stereo disparities, two-dimensional (2D) motion, and/or zooming disparities). These cues are estimated from two images of the same scene acquired by a camera evolving in time and/or space and for which the intrinsic parameters are known. This algorithm is based on generalized moment expansion. We show that the more blurred image may be expressed as a function of the partial derivatives of the two images, the blur difference and the horizontal and vertical shifts. Hence, these depth cues can be computed by resolving a system of equations. The behavior of the algorithm is studied for constant and linear images, step edges, lines and junctions. The rules governing the choice of its parameters are then discussed. The proposed algorithm is tested using synthetic and real images. The results obtained are accurate and dense. They confirm that defocus blurs and spatial shifts (stereo disparities, 2D motion, and/or zooming disparities) can be simultaneously computed without using the epipolar geometry. They thus implicitly show that the unified approach allows: (1) blur estimation even if the spatial locations of corresponding pixels do not match perfectly; (2) spatial shift estimation even if some of the intrinsic parameters of the camera have been modified during the capture.
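The joint estimation of shift and blur from one system of equations can be sketched in 1D with a first-order model and a least-squares solve over a window. This is a simplified stand-in under assumed conventions, not the paper's generalized-moment algorithm:

```python
import math

def shift_and_blur(img1, img2, lo, hi):
    """Least-squares fit of  img2(x) ~= img1(x) - s*img1'(x) + 0.5*d*img1''(x)
    over the window [lo, hi). Returns (s, d): the 1D shift and the blur
    (variance) difference. Derivatives are central finite differences."""
    a11 = a12 = a22 = b1 = b2 = 0.0
    for i in range(lo, hi):
        g1 = 0.5 * (img1[i + 1] - img1[i - 1])          # img1'
        g2 = img1[i + 1] - 2.0 * img1[i] + img1[i - 1]  # img1''
        r = img2[i] - img1[i]
        a11 += g1 * g1; a12 += g1 * g2; a22 += g2 * g2
        b1 += g1 * r; b2 += g2 * r
    det = a11 * a22 - a12 * a12
    if abs(det) < 1e-12:
        return None  # degenerate window (e.g. constant image)
    c1 = (b1 * a22 - a12 * b2) / det  # coefficient of img1'
    c2 = (a11 * b2 - a12 * b1) / det  # coefficient of img1''
    return -c1, 2.0 * c2
```

On a smooth edge shifted by a known fraction of a pixel, the fit recovers the shift without any epipolar constraint, which is the property the abstract emphasizes.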


International Conference on Computer Vision | 2007

A New Convolution Kernel for Atmospheric Point Spread Function Applied to Computer Vision

Samy Metari; François Deschênes

In this paper we introduce a new filter to approximate multiple scattering of light rays within a participating medium. This filter is derived from the generalized Gaussian distribution (GGD). It characterizes the Atmospheric Point Spread Function (APSF) and thus makes it possible to introduce three new approaches. First, it allows us to accurately simulate various weather conditions that induce multiple scattering, including fog, haze, and rain. Second, it allows us to propose a new method for a cooperative and simultaneous estimation of visual cues, i.e., the identification of weather degradations and the estimation of optical thickness between two images of the same scene acquired under unknown weather conditions. Third, by combining this filter with two new sets of invariant features we recently developed, we obtain invariant features that can be used for the matching of atmospheric degraded images. The first set leads to atmospheric invariant features while the second one simultaneously provides atmospheric and geometric invariance.
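A one-dimensional sketch of a generalized-Gaussian smoothing kernel of the kind the filter builds on. The parameter roles shown here (scale alpha, shape beta) follow the generic GGD convention and are an assumption, not the paper's exact APSF parametrization:

```python
import math

def ggd_kernel(alpha, beta, radius):
    """1D generalized Gaussian kernel  k(x) ~ exp(-(|x|/alpha)**beta),
    normalized to sum to 1. beta = 2 recovers the Gaussian; beta < 2
    gives heavier tails, the regime used to model the wide glow that
    multiple scattering spreads around light sources."""
    k = [math.exp(-((abs(x) / alpha) ** beta)) for x in range(-radius, radius + 1)]
    total = sum(k)
    return [v / total for v in k]
```

Convolving an image with such a kernel simulates a scattering medium; sweeping beta moves between weather conditions of different scattering severity.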


Pattern Recognition | 2003

Improved Estimation of Defocus Blur and Spatial Shifts in Spatial Domain: A Homotopy-Based Approach

François Deschênes; Djemel Ziou; Philippe Fuchs

This paper presents a homotopy-based algorithm for the recovery of depth cues in the spatial domain. The algorithm specifically deals with defocus blur and spatial shifts, that is 2D motion, stereo disparities and/or zooming disparities. These cues are estimated from two images of the same scene acquired by a camera evolving in time and/or space. We show that they can be simultaneously computed by resolving a system of equations using a homotopy method. The proposed algorithm is tested using synthetic and real images. The results confirm that the use of a homotopy method leads to a dense and accurate estimation of depth cues. This approach has been integrated into an application for relief estimation from remotely sensed images.
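The homotopy idea, deforming an easy problem into the target system while tracking its solution, can be sketched for a single scalar equation. This toy continuation illustrates the numerical method only and is unrelated to the paper's actual depth-cue equations:

```python
def homotopy_solve(f, df, x0, steps=50, newton_iters=5):
    """Solve f(x) = 0 by continuation on H(x, t) = t*f(x) + (1-t)*(x - x0).
    At t = 0 the root is trivially x0; t is stepped toward 1 and the
    tracked root is refined with a few Newton corrections at each step."""
    x = x0
    for k in range(1, steps + 1):
        t = k / steps
        for _ in range(newton_iters):
            h = t * f(x) + (1.0 - t) * (x - x0)
            dh = t * df(x) + (1.0 - t)
            x -= h / dh
    return x
```

For example, solving x**3 + x - 10 = 0 from the poor starting point x0 = 0 still converges to the root x = 2, which is the robustness that motivates homotopy methods over plain Newton iteration.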


IEEE Transactions on Image Processing | 2008

New Classes of Radiometric and Combined Radiometric-Geometric Invariant Descriptors

Samy Metari; François Deschênes

Real images can contain geometric distortions as well as photometric degradations. Analysis and characterization of those images without recourse to either restoration or geometric standardization is of great importance for the computer vision community, as those two processes are often ill-posed problems. To this end, it is necessary to implement image descriptors that make it possible to identify the original image in a simple way, independently of the imaging system and imaging conditions. Ideally, descriptors that capture image characteristics must be invariant to the whole range of geometric distortions and photometric degradations, such as blur, that may affect the image. In this paper, we introduce two new classes of radiometric and/or geometric invariant descriptors. The first class contains two types of radiometric invariant descriptors: the first is based on the Mellin transform and the second on central moments. Both descriptors are invariant to contrast changes and to convolution with any kernel having a symmetric form with respect to the diagonals. The second class contains two subclasses of combined invariant descriptors. The first subclass includes central-moment-based descriptors invariant simultaneously to horizontal and vertical translations, to uniform and anisotropic scaling, to stretching, to convolution, and to contrast changes. The second subclass contains central-complex-moment-based descriptors that are simultaneously invariant to similarity transformation and to contrast changes. We apply these invariant descriptors to the matching of geometric transformed and/or blurred images. Experimental results confirm both the robustness and the effectiveness of the proposed invariants.
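The contrast-invariance half of the central-moment idea is easy to verify: every central moment scales linearly with a global contrast factor, so a ratio of central moments cancels it. A minimal sketch of this principle (illustrative only, not the paper's descriptors, which also handle convolution and geometric changes):

```python
def central_moment(img, p, q):
    """Central moment mu_pq of a 2D intensity image given as a list of rows."""
    m00 = sum(sum(row) for row in img)
    xbar = sum(x * v for row in img for x, v in enumerate(row)) / m00
    ybar = sum(y * v for y, row in enumerate(img) for v in row) / m00
    return sum((x - xbar) ** p * (y - ybar) ** q * v
               for y, row in enumerate(img) for x, v in enumerate(row))

def contrast_invariant(img):
    """Ratio of two central moments. Multiplying the image by a contrast
    factor c scales every mu_pq by c, so the ratio is unchanged."""
    return central_moment(img, 2, 0) / central_moment(img, 0, 2)
```

The same cancellation argument is what lets moment-based descriptors match images taken under different exposures without restoring them first.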


International Conference on Image and Graphics | 2007

A New Image Scaling Algorithm Based on the Sampling Theorem of Papoulis and Application to Color Images

Alain Horé; Djemel Ziou; François Deschênes

We present in this paper a new image scaling algorithm based on the generalized sampling theorem of Papoulis. The main idea consists in using the first and second derivatives of an image in the scaling process. The derivatives contain information about edges and discontinuities that should be preserved during resizing, and the sampling theorem of Papoulis is used to combine this information. We compare our algorithm with nine of the most common scaling algorithms using two measures of quality: the standard deviation for evaluation of blur, and the curvature for evaluation of aliasing. The results presented here show that our algorithm gives the best images, with very little aliasing, good contrast, good edge preservation, and little blur. We also show how our algorithm applies to color images.
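The idea of letting derivative samples participate in reconstruction can be sketched in 1D with cubic Hermite interpolation, which blends each sample value with its first derivative. This is a simplified stand-in for the Papoulis generalized-sampling machinery used in the paper:

```python
def hermite_upsample(values, derivs, factor):
    """Upsample a 1D signal given sample values and first derivatives,
    using cubic Hermite interpolation between consecutive samples.
    The derivative terms preserve edge slopes that value-only linear
    interpolation would flatten."""
    out = []
    for i in range(len(values) - 1):
        p0, p1 = values[i], values[i + 1]
        m0, m1 = derivs[i], derivs[i + 1]
        for k in range(factor):
            t = k / factor
            h00 = 2 * t ** 3 - 3 * t ** 2 + 1   # Hermite basis functions
            h10 = t ** 3 - 2 * t ** 2 + t
            h01 = -2 * t ** 3 + 3 * t ** 2
            h11 = t ** 3 - t ** 2
            out.append(h00 * p0 + h10 * m0 + h01 * p1 + h11 * m1)
    out.append(values[-1])
    return out
```

On samples of x**2 with exact derivatives 2x, the midpoints are reconstructed exactly, since cubic Hermite interpolation reproduces polynomials up to degree three.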


International Conference on Computer Graphics and Interactive Techniques | 2005

Extraction of a representative tile from a near-periodic texture

Khalid Djado; Richard Egli; François Deschênes

Near-periodic textures are all around us, in brick walls, fabrics, mosaics, and many other manifestations. They are made up of a basic motif, which can be extracted, yielding a small image known as a tile. In particular, such textures are used in 3D scenes in virtual environments like games. However, the memory allocated for textures on a video card is limited. Generating texture by tiling allows scenes to be rendered in a realistic manner while using very little memory. This paper presents a simple method for extracting a representative tile from a near-periodic texture, working from a photo. The period of the texture is calculated to determine the size of the tile. Then the representative tile is chosen based on two criteria: avoiding color discontinuities at the junction of tiles, and recreating a texture that is as faithful as possible to the original.
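The period-detection step can be sketched in 1D with a normalized autocorrelation, where the maximizing lag is a multiple of the texture period. This helper is hypothetical; the paper works on 2D photographs, where the same idea applies per axis:

```python
def texture_period(signal, min_period=2):
    """Estimate the dominant period of a 1D intensity profile as the lag
    maximizing the autocorrelation of the mean-centered signal,
    normalized by the number of overlapping samples so long lags are
    not penalized. Peaks occur at multiples of the true period."""
    n = len(signal)
    mean = sum(signal) / n
    centered = [v - mean for v in signal]
    best_lag, best_score = None, float("-inf")
    for lag in range(min_period, n // 2):
        score = sum(centered[i] * centered[i + lag] for i in range(n - lag)) / (n - lag)
        if score > best_score:
            best_lag, best_score = lag, score
    return best_lag
```

Once the period (tile size) is known, candidate tiles of that size can be ranked by the paper's two criteria: color continuity at tile junctions and fidelity of the regenerated texture.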


International Conference on Pattern Recognition | 2000

Detection of line junctions in gray-level images

François Deschênes; Djemel Ziou

This paper describes an efficient approach for the detection of line junctions in gray-level images. The algorithm is divided into two steps. First, given the lines extracted from the original image, local line curvature is estimated. For this purpose, two different measures of curvature are proposed: the rate of change of direction of the orientation vector along the line, and the mean of the dot products of orientation vectors within a given neighborhood. The second step involves the localization of junctions. Examples are provided based on experiments with synthetic and real images.


International Conference on Pattern Recognition | 2008

Image mosaicing using local optical flow registration

Nadège Rebière; Marie Flavie Auclair-Fortier; François Deschênes

Cylindrical panoramic mosaics can be created by aligning and stitching a series of images captured by a camera rotating around its optical center. The transformation between two images must then be found. Existing methods compute a global transformation for the whole image from the pixels in the overlapping region. Due to local distortions, this global transformation often produces ghosting in the overlapping region, which must then be handled by a deghosting algorithm. This paper proposes to use optical flow directly in order to find a pixelwise registration in the overlapping region, reducing ghosting. The optical flow is computed using a multiresolution computational algebraic topology approach.
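The notion of a pixelwise registration in the overlap can be illustrated in 1D with simple block matching, which assigns each pixel its own displacement instead of one global transform. This is a deliberately crude stand-in for the paper's multiresolution optical-flow computation, and all names are hypothetical:

```python
def pixelwise_registration(img1, img2, overlap_start, max_disp=3, win=2):
    """For each pixel of img1 in the overlapping region, find the integer
    displacement into img2 that minimizes the sum of squared differences
    over a small window. Returns a list of (pixel index, displacement)
    pairs: a per-pixel flow rather than one global shift."""
    flow = []
    n = len(img1)
    for i in range(overlap_start + win + max_disp, n - win - max_disp):
        best_d, best_err = 0, float("inf")
        for d in range(-max_disp, max_disp + 1):
            err = sum((img1[i + k] - img2[i + d + k]) ** 2
                      for k in range(-win, win + 1))
            if err < best_err:
                best_d, best_err = d, err
        flow.append((i, best_d))
    return flow
```

Because each pixel carries its own displacement, locally distorted regions can be warped independently, which is what removes the need for a separate deghosting pass.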

Collaboration


Dive into François Deschênes's collaborations.

Top Co-Authors

Djemel Ziou
Université de Sherbrooke

Alain Horé
Université de Sherbrooke

Samy Metari
Université de Sherbrooke

Wei Pan
Université de Sherbrooke

Jonathan Dupont
Université de Sherbrooke

Khalid Djado
Université de Sherbrooke