
Publication


Featured research published by Bertrand Le Saux.


Asian Conference on Computer Vision | 2016

Semantic Segmentation of Earth Observation Data Using Multimodal and Multi-scale Deep Networks

Nicolas Audebert; Bertrand Le Saux; Sébastien Lefèvre

This work investigates the use of deep fully convolutional neural networks (DFCNN) for pixel-wise scene labeling of Earth Observation images. In particular, we train a variant of the SegNet architecture on remote sensing data over an urban area and study different strategies for performing accurate semantic segmentation. Our contributions are the following: (1) we efficiently transfer a DFCNN from generic everyday images to remote sensing images; (2) we introduce a multi-kernel convolutional layer for fast aggregation of predictions at multiple scales; (3) we perform data fusion from heterogeneous sensors (optical and laser) using residual correction. Our framework improves state-of-the-art accuracy on the ISPRS Vaihingen 2D Semantic Labeling dataset.
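The multi-kernel convolutional layer lends itself to a compact sketch. The PyTorch snippet below is a minimal illustration of aggregating class predictions over several kernel sizes by averaging; the kernel sizes, channel counts and the SegNet backbone it would plug into are assumptions, not the paper's exact configuration.

    import torch
    import torch.nn as nn

    class MultiKernelConv(nn.Module):
        def __init__(self, in_channels, n_classes, kernel_sizes=(3, 5, 7)):
            super().__init__()
            # One prediction branch per kernel size; padding keeps all
            # outputs at the same spatial resolution.
            self.branches = nn.ModuleList(
                nn.Conv2d(in_channels, n_classes, k, padding=k // 2)
                for k in kernel_sizes
            )

        def forward(self, x):
            # Average the per-scale class maps into a single prediction.
            return torch.stack([b(x) for b in self.branches]).mean(dim=0)

    logits = MultiKernelConv(64, 6)(torch.randn(1, 64, 256, 256))  # (1, 6, 256, 256)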


IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing | 2016

Processing of Extremely High-Resolution LiDAR and RGB Data: Outcome of the 2015 IEEE GRSS Data Fusion Contest - Part A: 2-D Contest

Manuel Campos-Taberner; Adriana Romero-Soriano; Carlo Gatta; Gustau Camps-Valls; Adrien Lagrange; Bertrand Le Saux; Anne Beaupère; Alexandre Boulch; Adrien Chan-Hon-Tong; Stephane Herbin; Hicham Randrianarivo; Marin Ferecatu; Michal Shimoni; Gabriele Moser; Devis Tuia

In this paper, we discuss the scientific outcomes of the 2015 data fusion contest organized by the Image Analysis and Data Fusion Technical Committee (IADF TC) of the IEEE Geoscience and Remote Sensing Society (IEEE GRSS). As in previous years, the IADF TC organized a data fusion contest aiming at fostering new ideas and solutions for multisource studies. The 2015 edition of the contest proposed a multiresolution and multisensor challenge involving extremely high-resolution RGB images and a three-dimensional (3-D) LiDAR point cloud. The competition was framed in two parallel tracks, considering 2-D and 3-D products, respectively. In this paper, we discuss the scientific results obtained by the winners of the 2-D contest, which studied either the complementarity of RGB and LiDAR with deep neural networks (winning team) or provided a comprehensive benchmarking evaluation of new classification strategies for extremely high-resolution multimodal data (runner-up team). The data and the previously undisclosed ground truth will remain available for the community and can be obtained at http://www.grss-ieee.org/community/technical-committees/data-fusion/2015-ieee-grss-data-fusion-contest/. The 3-D part of the contest is discussed in the Part-B paper [1].


International Geoscience and Remote Sensing Symposium | 2015

Benchmarking classification of earth-observation data: From learning explicit features to convolutional networks

Adrien Lagrange; Bertrand Le Saux; Anne Beaupère; Alexandre Boulch; Adrien Chan-Hon-Tong; Stephane Herbin; Hicham Randrianarivo; Marin Ferecatu

In this paper, we address the task of semantic labeling of multisource earth-observation (EO) data. Precisely, we benchmark several competing methods from the last 15 years, from expert classifiers, spectral support-vector classification and high-level features to deep neural networks. We establish that (1) combining multisensor features is essential for retrieving some specific classes, (2) in the image domain, deep convolutional networks obtain significantly better overall performance, and (3) transfer learning from large general-purpose image sets is highly effective for building EO data classifiers.
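The transfer-learning baseline in point (3) can be sketched briefly: features from an ImageNet-pretrained network feed a linear SVM. This is an illustrative reconstruction, not the exact setup benchmarked in the paper; the VGG-16 backbone, patch size and dummy data are assumptions.

    import torch
    import torchvision.models as models
    from sklearn.svm import LinearSVC

    # ImageNet-pretrained convolutional layers as a fixed feature extractor.
    backbone = models.vgg16(weights="IMAGENET1K_V1").features.eval()

    def describe(patches):
        # patches: (N, 3, 224, 224) image patches -> one flat descriptor each
        with torch.no_grad():
            return backbone(patches).flatten(1).numpy()

    # Stand-ins for labelled EO patches; replace with a real dataset.
    X_train, y_train = torch.randn(8, 3, 224, 224), [0, 1] * 4
    clf = LinearSVC().fit(describe(X_train), y_train)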


Remote Sensing | 2017

Segment-before-Detect: Vehicle Detection and Classification through Semantic Segmentation of Aerial Images

Nicolas Audebert; Bertrand Le Saux; Sébastien Lefèvre

Like computer vision before it, remote sensing has been radically changed by the introduction of deep learning and, most notably, Convolutional Neural Networks. Land cover classification, object detection and scene understanding in aerial images rely more and more on deep networks to achieve new state-of-the-art results. Recent architectures such as Fully Convolutional Networks can even produce pixel-level annotations for semantic mapping. In this work, we present a deep-learning-based segment-before-detect method for segmentation and subsequent detection and classification of several varieties of wheeled vehicles in high-resolution remote sensing images. This allows us to investigate object detection and classification on a complex dataset made up of visually similar classes, and to demonstrate the relevance of such a subclass modeling approach. In particular, we want to show that deep learning is also suitable for object-oriented analysis of Earth Observation data, as effective object detection can be obtained as a byproduct of accurate semantic segmentation. First, we train a deep fully convolutional network on the ISPRS Potsdam and the NZAM/ONERA Christchurch datasets and show how the learnt semantic maps can be used to extract precise segmentations of vehicles. Then, we show that those maps are accurate enough to perform vehicle detection by simple connected component extraction. This allows us to study the spatial distribution of vehicles in the city. Finally, we train a Convolutional Neural Network to perform vehicle classification on the VEDAI dataset, and transfer its knowledge to classify the individual vehicle instances that we detected.
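The detection-as-a-byproduct step is simple enough to sketch: vehicle pixels in the predicted semantic map are grouped into connected components, each of which becomes a candidate detection. The class id and minimum-size threshold below are illustrative assumptions.

    import numpy as np
    from scipy import ndimage

    def detect_vehicles(semantic_map, vehicle_class=4, min_area=50):
        mask = semantic_map == vehicle_class
        labels, n_blobs = ndimage.label(mask)       # one id per connected blob
        boxes = []
        for sl in ndimage.find_objects(labels):
            h = sl[0].stop - sl[0].start
            w = sl[1].stop - sl[1].start
            if h * w >= min_area:                   # drop spurious small blobs
                boxes.append((sl[1].start, sl[0].start, w, h))  # x, y, w, h
        return boxes

    boxes = detect_vehicles(np.random.randint(0, 6, (512, 512)))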


IEEE Geoscience and Remote Sensing Magazine | 2017

2017 IEEE GRSS Data Fusion Contest: Open Data for Global Multimodal Land Use Classification [Technical Committees]

Devis Tuia; Gabriele Moser; Bertrand Le Saux; Benjamin Bechtel; Linda See

The 2017 Data Fusion Contest, organized by the Image Analysis and Data Fusion Technical Committee (IADF TC) of the IEEE Geoscience and Remote Sensing Society (GRSS), aims at providing a challenging image analysis opportunity, including multiresolution and multimodal fusion. The 2017 contest focuses on the classification of local climate zones (LCZs) [1] in various urban environments. LCZs are a generic, climate-based typology of urban and natural landscapes that deliver information on basic physical properties of an area that can be used by land-use planners or climate modelers [2]. They are used as first-order discretization of urban areas by the World Urban Database and Access Portal Tools (WUDAPT) initiative, which aims to collect, store, and disseminate data on the form and function of cities around the world [3].


International Geoscience and Remote Sensing Symposium | 2012

Boosting for interactive man-made structure classification

Nicolas Chauffert; Jonathan Israel; Bertrand Le Saux

We describe an interactive framework for man-made structure classification. Our system helps an image analyst define a query that is adapted to various image and geographic contexts. It offers a GIS-like interface for visually selecting the training region samples, and fast, efficient sample description by histograms of oriented gradients and local binary patterns. To learn a discrimination rule in this feature space, our system relies on the online gradient-boost learning algorithm, for which we defined a new family of loss functions. We chose non-convex loss functions in order to be robust to mislabelling, and proposed a generic way to incorporate prior information about the training data. We show that it achieves better performance than other state-of-the-art machine-learning methods on various man-made structure detection problems.
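The role of the non-convex loss can be made concrete with a small numerical example. The family of losses defined in the paper is not reproduced here; the bounded "Savage" loss below is a standard stand-in that shows the intended behaviour: the per-sample weight (the negative gradient) vanishes for strongly misclassified, likely mislabelled samples, instead of growing without bound as with the exponential loss.

    import numpy as np

    def savage_loss(margin):               # margin = y * f(x), y in {-1, +1}
        return 1.0 / (1.0 + np.exp(2.0 * margin)) ** 2

    def sample_weight(margin):             # -dL/dmargin, used by gradient boosting
        e = np.exp(2.0 * margin)
        return 4.0 * e / (1.0 + e) ** 3

    margins = np.array([-5.0, 0.0, 5.0])
    # ~0 for the outlier at -5, largest near the decision boundary.
    print(sample_weight(margins))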


Urban Remote Sensing Joint Event | 2017

Fusion of heterogeneous data in convolutional networks for urban semantic labeling

Nicolas Audebert; Bertrand Le Saux; Sébastien Lefèvre

In this work, we present a novel module to perform fusion of heterogeneous data using fully convolutional networks for semantic labeling. We introduce residual correction as a way to learn how to fuse predictions coming out of a dual-stream architecture. In particular, we perform fusion of DSM and IRRG optical data on the ISPRS Vaihingen dataset over an urban area and obtain new state-of-the-art results.
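A minimal PyTorch sketch of residual correction follows: the two stream predictions are averaged, and a small convolutional block learns a corrective residual from their concatenation. The layer sizes and the simple averaging baseline are assumptions for illustration.

    import torch
    import torch.nn as nn

    class ResidualCorrection(nn.Module):
        def __init__(self, n_classes):
            super().__init__()
            # Small convnet that predicts a correction from both streams.
            self.correct = nn.Sequential(
                nn.Conv2d(2 * n_classes, 32, 3, padding=1), nn.ReLU(),
                nn.Conv2d(32, n_classes, 3, padding=1),
            )

        def forward(self, p_optical, p_dsm):
            fused = (p_optical + p_dsm) / 2        # naive average fusion
            return fused + self.correct(torch.cat([p_optical, p_dsm], dim=1))

    p1, p2 = torch.randn(1, 6, 128, 128), torch.randn(1, 6, 128, 128)
    out = ResidualCorrection(6)(p1, p2)            # corrected class scores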


International Geoscience and Remote Sensing Symposium | 2016

How useful is region-based classification of remote sensing images in a deep learning framework?

Nicolas Audebert; Bertrand Le Saux; Sébastien Lefèvre

In this paper, we investigate the impact of segmentation algorithms as a preprocessing step for classification of remote sensing images in a deep learning framework. In particular, we address the issue of segmenting the image into regions to be classified using pre-trained deep neural networks as feature extractors for an SVM-based classifier. An efficient segmentation as a preprocessing step helps learning by adding a spatially coherent structure to the data. We therefore compare algorithms producing superpixels with more traditional remote sensing segmentation algorithms and measure the variation in terms of classification accuracy. We establish that superpixel algorithms allow for better classification accuracy, as a homogeneous and compact segmentation favors better generalization of the training samples.
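The region-based pipeline can be sketched in a few lines: superpixels define the units to classify, each region gets a descriptor, and an SVM predicts a class per region. Here a mean-colour descriptor stands in for the pretrained deep features used in the paper, and all sizes and labels are dummies.

    import numpy as np
    from skimage.segmentation import slic
    from sklearn.svm import SVC

    image = np.random.rand(256, 256, 3)            # stand-in for an EO tile
    segments = slic(image, n_segments=200, compactness=10.0)

    # One descriptor per superpixel (replace mean colour by deep features).
    ids = np.unique(segments)
    feats = np.stack([image[segments == i].mean(axis=0) for i in ids])

    labels = np.random.randint(0, 6, len(ids))     # dummy training labels
    region_pred = SVC().fit(feats, labels).predict(feats)  # class per region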


International Geoscience and Remote Sensing Symposium | 2012

GPU-accelerated one-class SVM for exploration of remote sensing data

Fabien Giannesini; Bertrand Le Saux

We present a machine-learning based method for the exploration of remote sensing data. Our framework mixes an intuitive interface and a one-class support-vector machine to look for rare patterns in satellite images. It benefits from a fast implementation on the Graphics Processing Unit (GPU) that allows reasonable times for system-user interactions. We validate our approach with ground-truth experiments and demonstrate the method on real-world datasets. We achieve faster computations when compared with sequential implementations of the same methods (up to 80 times faster for feature extraction) and with other classification methods (such as local distribution comparison).
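The principle is easy to illustrate with the CPU one-class SVM from scikit-learn; the paper's contribution, the GPU implementation that makes this interactive, is not reproduced here, and the descriptors below are random stand-ins.

    import numpy as np
    from sklearn.svm import OneClassSVM

    positives = np.random.rand(20, 64)     # descriptors of analyst-picked windows
    candidates = np.random.rand(1000, 64)  # descriptors of all image windows

    model = OneClassSVM(nu=0.1, kernel="rbf", gamma="scale").fit(positives)
    hits = candidates[model.predict(candidates) == 1]  # windows judged similar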


Computer Vision and Pattern Recognition | 2017

Joint Learning from Earth Observation and OpenStreetMap Data to Get Faster Better Semantic Maps

Nicolas Audebert; Bertrand Le Saux; Sébastien Lefèvre

In this work, we investigate the use of OpenStreetMap data for semantic labeling of Earth Observation images. Deep neural networks have been used in the past for remote sensing data classification from various sensors, including multispectral, hyperspectral, SAR and LiDAR data. While OpenStreetMap has already been used as ground truth data for training such networks, this abundant data source remains rarely exploited as an input information layer. In this paper, we study different use cases and deep network architectures to leverage OpenStreetMap data for semantic labeling of aerial and satellite images. In particular, we look into fusion-based architectures and coarse-to-fine segmentation to include the OpenStreetMap layer into multispectral-based deep fully convolutional networks. We illustrate how these methods can be successfully used on two public datasets: ISPRS Potsdam and DFC2017. We show that OpenStreetMap data can efficiently be integrated into vision-based deep learning models, and that it significantly improves both the accuracy and the convergence speed of the networks.
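A sketch of the simplest of these use cases, early fusion, is shown below: a rasterised OSM layer is stacked with the spectral bands so the first convolution sees both. The channel counts and the single-channel OSM raster are illustrative assumptions; the dual-stream and coarse-to-fine variants the paper also studies are not shown.

    import torch
    import torch.nn as nn

    irrg = torch.randn(1, 3, 256, 256)           # IRRG optical bands
    osm = torch.randn(1, 1, 256, 256)            # rasterised OSM class layer

    x = torch.cat([irrg, osm], dim=1)            # early fusion: 4-channel input
    first_conv = nn.Conv2d(4, 64, 3, padding=1)  # widened input convolution
    features = first_conv(x)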

Collaboration


Dive into Bertrand Le Saux's collaborations.

Top Co-Authors

Marin Ferecatu
Conservatoire national des arts et métiers

Frédéric Champagnat
Office National d'Études et de Recherches Aérospatiales

Anne Beaupère
École Normale Supérieure