Julien Michel
Centre National d'Études Spatiales
Publication
Featured research published by Julien Michel.
IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing | 2011
Emmanuel Christophe; Julien Michel; Jordi Inglada
As the amount of data and the complexity of the processing rise, the demand for processing power in remote sensing applications is increasing. Processing speed is critical to enable a productive interaction between the human operator and the machine for ever more complex tasks. Graphics processing units (GPUs) are good candidates to speed up some of these tasks, and recent developments have made programming these devices much simpler. However, one source of complexity lies at the boundaries of this hardware: how does one handle an image whose size is not a convenient power of two, or an image too large to fit in GPU memory? This paper presents a framework that has proven efficient with standard implementations of image processing algorithms, and it is demonstrated that it also enables rapid development of GPU adaptations. Several cases, from the simplest to the more complex, are detailed and illustrate speedups of up to 400 times.
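The tiling idea described in the abstract can be sketched as follows. This is a simplified, hypothetical stand-in for the paper's framework (which targets GPU streaming): tiles are clipped at borders so image sizes need not be powers of two, and each tile carries an overlap (halo) so neighborhood-based operators see the context they need.

```python
import numpy as np

def tile_image(image, tile, halo):
    """Split a 2-D array into overlapping tiles.

    Yields (row_slice, col_slice, tile_array, core_offset), where the
    slices locate the halo-free core of each tile in the original image.
    Border tiles are simply clipped, so the image size need not be a
    power of two.
    """
    h, w = image.shape
    for r0 in range(0, h, tile):
        for c0 in range(0, w, tile):
            r1, c1 = min(r0 + tile, h), min(c0 + tile, w)
            # Expand the core by the halo, clipped to the image bounds.
            er0, ec0 = max(r0 - halo, 0), max(c0 - halo, 0)
            er1, ec1 = min(r1 + halo, h), min(c1 + halo, w)
            yield (slice(r0, r1), slice(c0, c1),
                   image[er0:er1, ec0:ec1], (r0 - er0, c0 - ec0))

def process_tiled(image, tile, halo, op):
    """Apply a local operator `op` tile by tile and reassemble the result."""
    out = np.empty_like(image, dtype=float)
    for rows, cols, block, (dr, dc) in tile_image(image, tile, halo):
        res = op(block)
        th, tw = rows.stop - rows.start, cols.stop - cols.start
        out[rows, cols] = res[dr:dr + th, dc:dc + tw]  # drop the halo
    return out
```

For a pointwise `op`, the reassembled output matches processing the whole image at once; for window-based operators, the halo must cover the operator's neighborhood.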
IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing | 2012
Nathan Longbotham; Fabio Pacifici; Taylor C. Glenn; Alina Zare; Michele Volpi; Devis Tuia; Emmanuel Christophe; Julien Michel; Jordi Inglada; Jocelyn Chanussot; Qian Du
The 2009-2010 Data Fusion Contest organized by the Data Fusion Technical Committee of the IEEE Geoscience and Remote Sensing Society focused on the detection of flooded areas using multi-temporal and multi-modal images. Both high-spatial-resolution optical and synthetic aperture radar data were provided. The goal was not only to identify the best algorithms (in terms of accuracy), but also to investigate the further improvement derived from decision fusion. This paper presents the four awarded algorithms and the conclusions of the contest, investigating both supervised and unsupervised methods and the use of multi-modal data for flood detection. Interestingly, a simple unsupervised change detection method provided accuracy similar to that of the supervised approaches, and a digital elevation model-based predictive method yielded a comparable projected change detection map without using post-event data.
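To make "a simple unsupervised change detection method" concrete, here is a minimal sketch in the same spirit, not the contest entry itself: threshold the per-pixel difference magnitude at a global statistic. The threshold rule and the value of `k` are illustrative assumptions.

```python
import numpy as np

def change_map(before, after, k=2.0):
    """Flag pixels whose absolute difference exceeds the global mean plus
    `k` standard deviations of the difference magnitude (illustrative
    unsupervised rule; no training data is needed)."""
    d = np.abs(after.astype(float) - before.astype(float))
    return d > d.mean() + k * d.std()
```

On a pair of co-registered images, the returned boolean map marks the pixels whose change magnitude stands out from the global difference statistics.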
IEEE Transactions on Geoscience and Remote Sensing | 2015
Flora Dellinger; Julie Delon; Yann Gousseau; Julien Michel; Florence Tupin
The scale-invariant feature transform (SIFT) algorithm and its many variants are widely used in computer vision and remote sensing to match features between images or to localize and recognize objects. However, mostly because of speckle noise, they do not perform well on synthetic aperture radar (SAR) images. In this paper, we introduce a SIFT-like algorithm specifically dedicated to SAR imaging, named SAR-SIFT. The algorithm includes both the detection of keypoints and the computation of local descriptors. A new gradient definition, yielding an orientation and a magnitude that are robust to speckle noise, is first introduced. It is then used to adapt several steps of the SIFT algorithm to SAR images. We study the improvement brought by this new algorithm compared with existing approaches, and present an application of SAR-SIFT to the registration of SAR images in different configurations, particularly with different incidence angles.
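The key ingredient, a gradient robust to multiplicative speckle, can be sketched with a ratio of local sums on opposite sides of each pixel. This is a simplified take on ratio-based edge detection; the window shape and size here are illustrative assumptions and do not reproduce the paper's exact exponentially weighted scheme.

```python
import numpy as np

def gradient_by_ratio(img, r=2):
    """Speckle-robust gradient sketch: the log-ratio of local sums on
    opposite sides of each pixel (the ratio of sums over equal counts
    equals the ratio of local means, so multiplicative noise largely
    cancels). Returns (magnitude, orientation)."""
    img = img.astype(float)
    h, w = img.shape
    pad = np.pad(img, r, mode='edge')
    right = np.zeros((h, w)); left = np.zeros((h, w))
    down = np.zeros((h, w)); up = np.zeros((h, w))
    for k in range(1, r + 1):
        # Accumulate the k-th neighbour in each of the four directions.
        right += pad[r:r + h, r + k:r + k + w]
        left += pad[r:r + h, r - k:r - k + w]
        down += pad[r + k:r + k + h, r:r + w]
        up += pad[r - k:r - k + h, r:r + w]
    eps = 1e-12
    gx = np.log((right + eps) / (left + eps))
    gy = np.log((down + eps) / (up + eps))
    return np.hypot(gx, gy), np.arctan2(gy, gx)
```

Because the response depends on a ratio rather than a difference, a homogeneous speckled area scaled by a gain factor yields the same gradient statistics.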
IEEE Transactions on Geoscience and Remote Sensing | 2009
Jordi Inglada; Julien Michel
High-resolution (HR) remote-sensing images give us access to new kinds of information. Classical techniques for image analysis, such as pixel-based classification or region-based segmentation, do not fully exploit the richness of such images. Indeed, for many applications we are interested in complex objects that can only be identified and analyzed by studying the relationships between the elementary objects that compose them. In this paper, the use of a spatial reasoning technique called region connection calculus (RCC) for the analysis of HR remote-sensing images is presented. A graph-based representation of the spatial relationships between the regions of an image is used within a graph-matching procedure to implement an object detection algorithm.
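The graph-based representation can be illustrated with a minimal stand-in for the richer RCC relations: from a label image, record which regions are 4-connected neighbours (the "externally connected" relation only; the function name and reduction to adjacency are assumptions for illustration).

```python
import numpy as np

def region_adjacency(labels):
    """Edges between 4-connected regions of a label image. Each edge is
    an (id, id) pair with ids sorted, so the relation is symmetric."""
    edges = set()
    for a, b in ((labels[:, :-1], labels[:, 1:]),   # horizontal neighbours
                 (labels[:-1, :], labels[1:, :])):  # vertical neighbours
        for u, v in zip(a.ravel(), b.ravel()):
            if u != v:
                edges.add((int(min(u, v)), int(max(u, v))))
    return edges
```

The resulting edge set is the skeleton on which a graph-matching procedure can search for configurations of regions that match an object model.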
IEEE Transactions on Geoscience and Remote Sensing | 2015
Julien Michel; David Youssefi; Manuel Grizonnet
Segmentation of real-world remote sensing images is challenging because of the large size of the data, particularly for very high resolution imagery. However, many high-level remote sensing methods rely on segmentation at some point and are therefore difficult to assess at full image scale for real remote sensing applications. In this paper, we define a new property of segmentation algorithms called stability and demonstrate that piece- or tile-wise computation of a stable segmentation algorithm yields results identical to processing the whole image at once. We also derive a technique to empirically estimate the stability of a given segmentation algorithm and apply it to four different algorithms. Among those algorithms, the mean-shift algorithm is found to be quite unstable. We propose a modified version of this algorithm that enforces stability and thus allows for tile-wise computation with identical results. Finally, we present results of this method and discuss various trends and applications.
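The notion of an empirical stability check can be sketched as follows. This probe is an illustrative assumption, not the paper's estimator: segment the whole image and a cropped sub-image, then compare the two results away from the crop borders, up to a relabelling of segment ids.

```python
import numpy as np

def stability_probe(segment, image, crop, margin):
    """Return True if `segment` gives the same partition (up to segment-id
    relabelling) on a crop as on the full image, ignoring a border of
    `margin` pixels inside the crop."""
    r0, r1, c0, c1 = crop
    full = segment(image)[r0 + margin:r1 - margin, c0 + margin:c1 - margin]
    part = segment(image[r0:r1, c0:c1])[margin:-margin, margin:-margin]
    # Label-invariant comparison: which pixel pairs share a segment.
    def pairs(lab):
        f = lab.ravel()
        return f[:, None] == f[None, :]
    return bool(np.array_equal(pairs(full), pairs(part)))
```

A pointwise rule (e.g. a fixed threshold) passes this probe, while a rule that depends on global image statistics fails it, since the statistics change when the image is cropped.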
IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing | 2015
Minh Tân Pham; Grégoire Mercier; Julien Michel
A new method for local texture characterization in very high resolution (VHR) multispectral imagery is proposed, based on a pointwise approach embedded in a graph model. Because increasing the spatial resolution of satellite sensors invalidates the stationarity hypothesis commonly assumed for optical images, a pointwise approach based on a set of interest pixels only, rather than on all image pixels, is relevant. Besides requiring no stationarity condition, this approach can also handle very large datasets such as VHR multispectral images. In this paper, our motivation is to exploit the radiometric, spectral, and spatial information of characteristic pixels to describe textural features of a multispectral image. A weighted graph is then constructed to link these feature points based on the similarity of their pointwise descriptors. Finally, textural features are characterized and extracted from the spectral domain of this graph. To evaluate the performance of the proposed method, a texture-based classification algorithm is implemented. Here, we investigate both spectral graph clustering and the spectral graph wavelet transform for unsupervised classification. Experimental results show the effectiveness of our method in terms of classification precision as well as low complexity.
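The "weighted graph linking feature points by descriptor similarity" step can be sketched with a Gaussian kernel on descriptor distances. The value of `sigma` and the fully connected topology are illustrative assumptions; the paper's actual graph construction may prune edges.

```python
import numpy as np

def descriptor_graph(desc, sigma=1.0):
    """Weighted adjacency matrix from an (n, d) array of descriptors:
    w[i, j] = exp(-||desc_i - desc_j||^2 / (2 sigma^2)), zero diagonal."""
    d2 = ((desc[:, None, :] - desc[None, :, :]) ** 2).sum(-1)
    w = np.exp(-d2 / (2.0 * sigma ** 2))
    np.fill_diagonal(w, 0.0)  # no self-loops
    return w
```

Similar descriptors receive weights near one and dissimilar ones near zero, so the spectral structure of this matrix reflects the texture similarity between interest pixels.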
IEEE Transactions on Geoscience and Remote Sensing | 2016
Minh-Tan Pham; Grégoire Mercier; Julien Michel
This paper investigates the problem of change detection in multitemporal synthetic aperture radar (SAR) images. Our motivation is to avoid the large, dense neighborhood around each pixel that classical methods usually require to measure its change level accurately. We therefore develop a pointwise approach that detects land-cover changes between two SAR images using the principles of signal processing on graphs. First, a set of characteristic points is extracted from one of the two images to capture its significant contextual information. A weighted graph is then constructed to encode the interaction among these keypoints and hence capture the local geometric structure of this first image. With respect to this graph, the coherence of the information carried by the two images is used to measure changes between them. In other words, the change level depends on how well the second image still conforms to the graph structure constructed from the first image. Additionally, because of the speckle noise present in SAR imaging, the log-ratio operator is used as the image comparison measure. Experimental results on real SAR images show the effectiveness of the proposed algorithm, in terms of detection performance and computational complexity, compared to classical methods.
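The log-ratio operator mentioned above is standard in SAR change detection and easy to state directly; the small `eps` guard is an implementation assumption to avoid division by zero on empty pixels.

```python
import numpy as np

def log_ratio(img1, img2, eps=1e-6):
    """Log-ratio comparison of two SAR intensity images: large absolute
    values indicate change, and multiplicative factors (speckle, gain)
    common to both images cancel out."""
    return np.abs(np.log((img2 + eps) / (img1 + eps)))
```

The operator's appeal for SAR is its invariance to a common multiplicative gain: scaling both images by the same factor leaves the change measure (nearly) unchanged.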
International Geoscience and Remote Sensing Symposium | 2012
Flora Dellinger; Julie Delon; Yann Gousseau; Julien Michel; Florence Tupin
The scale-invariant feature transform (SIFT) algorithm, commonly used in computer vision, does not perform well on synthetic aperture radar (SAR) images, in particular because of the strength and multiplicative nature of the noise. We present an improvement of this algorithm for SAR images. First, a robust yet simple way to compute gradients on radar images is introduced; it is used to develop a new keypoint extraction algorithm based on the Harris criterion. Second, we rely on this gradient definition to adapt the computation of both the main orientation and the geometric descriptor to the specificities of SAR images. We validate the new algorithm with several experiments and present an application of the resulting SAR-SIFT algorithm.
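The Harris criterion itself can be written down generically from precomputed gradient fields. The plain box window below is an illustrative assumption; the paper couples the criterion with its speckle-robust gradient and a multi-scale setting rather than with finite differences.

```python
import numpy as np

def harris_response(gx, gy, k=0.04, win=2):
    """Harris corner response det(M) - k*trace(M)^2, where M is the
    structure tensor of the gradient fields, smoothed with a box window
    of half-width `win` (edge-padded at image borders)."""
    def box(a):
        pad = np.pad(a, win, mode='edge')
        out = np.zeros_like(a, dtype=float)
        h, w = a.shape
        for dr in range(-win, win + 1):
            for dc in range(-win, win + 1):
                out += pad[win + dr:win + dr + h, win + dc:win + dc + w]
        return out
    ixx, iyy, ixy = box(gx * gx), box(gy * gy), box(gx * gy)
    return ixx * iyy - ixy ** 2 - k * (ixx + iyy) ** 2
```

Corners, where the gradient varies in both directions, produce positive responses; edges and flat areas do not, which is what makes the criterion a keypoint detector once a speckle-robust gradient is plugged in.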
International Geoscience and Remote Sensing Symposium | 2014
Minh Tân Pham; Grégoire Mercier; Julien Michel
This paper proposes a texture-based segmentation method for very high spatial resolution imagery. Our main objective is to build a sparse image representation modeled by a graph and then to exploit the graph wavelet transform for the final purpose of image segmentation. Here, a set of pixels of interest, called representative pixels, is first extracted from the image; these pixels serve as vertices for constructing a weighted graph. Once the graph wavelet transform is computed, its coefficients serve as textural features and are exploited for unsupervised segmentation. Experimental results show the effectiveness of the proposed method applied to very high spatial resolution multispectral images, in terms of good segmentation precision as well as low complexity.
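As a minimal stand-in for the graph wavelet machinery, here is the simplest spectral operation on such a weighted graph: a two-way partition by the sign of the Fiedler vector. This is an illustrative sketch of spectral graph analysis, not the paper's transform.

```python
import numpy as np

def spectral_bipartition(w):
    """Two-way spectral clustering of a weighted graph given its dense
    adjacency matrix `w`: split vertices by the sign of the Fiedler
    vector, the eigenvector of the graph Laplacian L = D - W associated
    with the second-smallest eigenvalue."""
    lap = np.diag(w.sum(axis=1)) - w
    _, vecs = np.linalg.eigh(lap)  # eigenvalues sorted ascending
    return (vecs[:, 1] > 0).astype(int)
```

On a graph with two weakly coupled groups of vertices, the Fiedler vector takes opposite signs on the two groups, so the sign split recovers them without supervision.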
IEEE Transactions on Geoscience and Remote Sensing | 2015
Pierre Lassalle; Jordi Inglada; Julien Michel; Manuel Grizonnet; Julien Malik
Processing large very high-resolution remote sensing images on resource-constrained devices is a challenging task because of the size of these data sets. For applications such as environmental monitoring or natural resources management, complex algorithms must be used to extract information from the images. The memory required to store the images and the data structures of such algorithms may be very high (hundreds of gigabytes), making processing infeasible on commonly available computers. Segmentation algorithms constitute an essential step in the extraction of objects of interest from a scene and are the focus of this paper. The objective of the present work is to adapt image segmentation algorithms to large amounts of data. To overcome the memory issue, large images are usually divided into smaller tiles, which are processed independently. Region-merging algorithms do not cope well with image tiling, since artifacts appear on tile edges in the final result due to incoherencies of the regions across tiles. In this paper, we propose a scalable tile-based framework for region-merging algorithms that segments large images while ensuring results identical to processing the whole image at once. We introduce the original concept of the stability margin of a tile, which guarantees results identical to those obtained if the whole image had been segmented without tiling. Finally, we discuss the benefits of this framework and demonstrate the scalability of the approach by applying it to real large images.