Publication


Featured research published by Nicolas Audebert.


Asian Conference on Computer Vision (ACCV) | 2016

Semantic Segmentation of Earth Observation Data Using Multimodal and Multi-scale Deep Networks

Nicolas Audebert; Bertrand Le Saux; Sébastien Lefèvre

This work investigates the use of deep fully convolutional neural networks (DFCNNs) for pixel-wise scene labeling of Earth Observation images. In particular, we train a variant of the SegNet architecture on remote sensing data over an urban area and study different strategies for performing accurate semantic segmentation. Our contributions are the following: (1) we efficiently transfer a DFCNN from generic everyday images to remote sensing images; (2) we introduce a multi-kernel convolutional layer for fast aggregation of predictions at multiple scales; (3) we perform data fusion from heterogeneous sensors (optical and laser) using residual correction. Our framework improves state-of-the-art accuracy on the ISPRS Vaihingen 2D Semantic Labeling dataset.
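
The multi-kernel idea can be illustrated as parallel convolutions with several kernel sizes whose outputs are averaged, aggregating predictions at multiple scales. The sketch below is a minimal PyTorch illustration of the concept, not the authors' reference implementation; the class name, channel counts and kernel sizes are assumptions.

```python
import torch
import torch.nn as nn

class MultiKernelConv(nn.Module):
    """Parallel convolutions at several kernel sizes, averaged, to
    aggregate predictions at multiple scales (illustrative sketch;
    kernel sizes and channel counts are hypothetical)."""

    def __init__(self, in_channels, out_channels, kernel_sizes=(3, 5, 7)):
        super().__init__()
        # One branch per kernel size; padding preserves spatial dimensions.
        self.branches = nn.ModuleList(
            nn.Conv2d(in_channels, out_channels, k, padding=k // 2)
            for k in kernel_sizes
        )

    def forward(self, x):
        # Average the per-scale predictions.
        return torch.stack([branch(x) for branch in self.branches]).mean(0)
```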


3DOR | 2017

Unstructured Point Cloud Semantic Labeling Using Deep Segmentation Networks

Alexandre Boulch; Bertrand Le Saux; Nicolas Audebert

In this work, we describe a new, general, and efficient method for unstructured point cloud labeling. As the question of how to efficiently use deep Convolutional Neural Networks (CNNs) on 3D data is still open, we propose a framework which applies CNNs on multiple 2D image views (or snapshots) of the point cloud. The approach consists of three core ideas. (i) We pick many suitable snapshots of the point cloud and generate two types of images: a Red-Green-Blue (RGB) view and a depth composite view containing geometric features. (ii) We then perform a pixel-wise labeling of each pair of 2D snapshots using fully convolutional networks. Different architectures are tested to achieve a profitable fusion of our heterogeneous inputs. (iii) Finally, we perform a fast back-projection of the label predictions into 3D space, using efficient buffering to label every 3D point. Experiments show that our method is suitable for various types of point clouds, such as Lidar or photogrammetric data.
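
For intuition on step (iii), here is a simplified sketch of back-projecting 2D label predictions onto the 3D points: each point collects one class vote per snapshot in which it is visible, and the majority class wins. The function signature and vote-based fusion are illustrative assumptions; the paper's efficient depth buffering for occlusion handling is omitted.

```python
import numpy as np

def backproject_labels(points_px, label_maps, n_classes):
    """Fuse per-snapshot 2D label predictions into per-point 3D labels by
    majority vote. points_px: one (N, 2) array of (col, row) pixel
    coordinates per view, with -1 marking points not visible in that view.
    label_maps: one (H, W) integer label image per view. Simplified sketch:
    occlusion handling via a depth buffer is omitted here."""
    n_points = points_px[0].shape[0]
    votes = np.zeros((n_points, n_classes), dtype=np.int64)
    for px, labels in zip(points_px, label_maps):
        idx = np.nonzero(px[:, 0] >= 0)[0]    # points visible in this view
        lab = labels[px[idx, 1], px[idx, 0]]  # label at each projected pixel
        votes[idx, lab] += 1                  # one vote per view
    return votes.argmax(axis=1)
```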


Remote Sensing | 2017

Segment-before-Detect: Vehicle Detection and Classification through Semantic Segmentation of Aerial Images

Nicolas Audebert; Bertrand Le Saux; Sébastien Lefèvre

Like computer vision before it, remote sensing has been radically changed by the introduction of deep learning and, most notably, Convolutional Neural Networks. Land cover classification, object detection and scene understanding in aerial images rely more and more on deep networks to achieve new state-of-the-art results. Recent architectures such as Fully Convolutional Networks can even produce pixel-level annotations for semantic mapping. In this work, we present a deep-learning-based segment-before-detect method for segmentation and subsequent detection and classification of several varieties of wheeled vehicles in high-resolution remote sensing images. This allows us to investigate object detection and classification on a complex dataset made up of visually similar classes, and to demonstrate the relevance of such a subclass modeling approach. In particular, we want to show that deep learning is also suitable for object-oriented analysis of Earth Observation data, as effective object detection can be obtained as a byproduct of accurate semantic segmentation. First, we train a deep fully convolutional network on the ISPRS Potsdam and the NZAM/ONERA Christchurch datasets and show how the learnt semantic maps can be used to extract precise segmentations of vehicles. Then, we show that those maps are accurate enough to perform vehicle detection by simple connected component extraction. This allows us to study the distribution of vehicles in the city. Finally, we train a Convolutional Neural Network to perform vehicle classification on the VEDAI dataset, and transfer its knowledge to classify the individual vehicle instances that we detected.
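
The detection step of segment-before-detect can be sketched in a few lines: once a semantic map is available, vehicle instances fall out as connected components of the vehicle class. Below is a minimal SciPy sketch; the minimum-area threshold is a hypothetical filter for spurious blobs, not a value from the paper.

```python
import numpy as np
from scipy import ndimage

def vehicles_from_semantic_map(semantic_map, vehicle_class, min_area=20):
    """Extract vehicle detections as connected components of the vehicle
    class in a dense semantic map. Returns (row0, col0, row1, col1) boxes.
    min_area is a hypothetical threshold to reject spurious blobs."""
    mask = semantic_map == vehicle_class
    labeled, _ = ndimage.label(mask)  # 4-connectivity by default
    boxes = []
    for rows, cols in ndimage.find_objects(labeled):
        if (rows.stop - rows.start) * (cols.stop - cols.start) >= min_area:
            boxes.append((rows.start, cols.start, rows.stop, cols.stop))
    return boxes
```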


Urban Remote Sensing Joint Event | 2017

Fusion of heterogeneous data in convolutional networks for urban semantic labeling

Nicolas Audebert; Bertrand Le Saux; Sébastien Lefèvre

In this work, we present a novel module to perform fusion of heterogeneous data using fully convolutional networks for semantic labeling. We introduce residual correction as a way to learn how to fuse predictions coming out of a dual-stream architecture. In particular, we perform fusion of DSM and IRRG optical data on the ISPRS Vaihingen dataset over an urban area and obtain new state-of-the-art results.
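
The residual correction idea can be sketched as follows: the two streams' class scores are averaged, and a small convolutional block learns a corrective residual from their concatenation. This is a minimal PyTorch sketch assuming both streams output score maps of identical shape; the layer sizes are illustrative.

```python
import torch
import torch.nn as nn

class ResidualCorrection(nn.Module):
    """Fuses the predictions of two single-modality streams (e.g. IRRG
    and DSM) by learning a corrective residual on top of their average.
    Illustrative sketch; layer widths are hypothetical."""

    def __init__(self, n_classes):
        super().__init__()
        self.correction = nn.Sequential(
            nn.Conv2d(2 * n_classes, n_classes, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(n_classes, n_classes, 3, padding=1),
        )

    def forward(self, scores_a, scores_b):
        # Baseline fusion is the average; the learned residual corrects it.
        avg = (scores_a + scores_b) / 2
        return avg + self.correction(torch.cat([scores_a, scores_b], dim=1))
```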


International Geoscience and Remote Sensing Symposium (IGARSS) | 2016

How useful is region-based classification of remote sensing images in a deep learning framework?

Nicolas Audebert; Bertrand Le Saux; Sébastien Lefèvre

In this paper, we investigate the impact of segmentation algorithms as a preprocessing step for classification of remote sensing images in a deep learning framework. In particular, we address the issue of segmenting the image into regions to be classified using pre-trained deep neural networks as feature extractors for an SVM-based classifier. An efficient segmentation as a preprocessing step helps learning by adding a spatially coherent structure to the data. We therefore compare algorithms producing superpixels with more traditional remote sensing segmentation algorithms and measure the variation in terms of classification accuracy. We establish that superpixel algorithms allow for better classification accuracy, as a homogeneous and compact segmentation favors better generalization of the training samples.
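
A condensed sketch of this kind of pipeline: SLIC superpixels define the regions, a deep feature map (assumed here to be precomputed by a pre-trained network and upsampled to image resolution) is average-pooled per region, and an SVM classifies the pooled features. Function name and parameters are illustrative, not the paper's exact setup.

```python
import numpy as np
from skimage.segmentation import slic
from sklearn.svm import SVC

def pool_superpixel_features(image, feature_map, n_segments=500):
    """Segment with SLIC and average-pool a (H, W, D) deep feature map
    over each region. feature_map is a hypothetical input, assumed to be
    extracted by a pre-trained network at image resolution."""
    segments = slic(image, n_segments=n_segments, compactness=10)
    region_ids = np.unique(segments)
    feats = np.stack([feature_map[segments == r].mean(axis=0)
                      for r in region_ids])
    return segments, region_ids, feats

# Fit an SVM on labeled regions, then predict on new ones
# (train_feats / train_labels assumed to come from annotated tiles):
# clf = SVC(kernel="rbf").fit(train_feats, train_labels)
# predictions = clf.predict(test_feats)
```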


Computer Vision and Pattern Recognition (CVPR) | 2017

Joint Learning from Earth Observation and OpenStreetMap Data to Get Faster Better Semantic Maps

Nicolas Audebert; Bertrand Le Saux; Sébastien Lefèvre

In this work, we investigate the use of OpenStreetMap data for semantic labeling of Earth Observation images. Deep neural networks have been used in the past for remote sensing data classification from various sensors, including multispectral, hyperspectral, SAR and LiDAR data. While OpenStreetMap has already been used as ground truth data for training such networks, this abundant data source remains rarely exploited as an input information layer. In this paper, we study different use cases and deep network architectures to leverage OpenStreetMap data for semantic labeling of aerial and satellite images. In particular, we look into fusion-based architectures and coarse-to-fine segmentation to include the OpenStreetMap layer into multispectral-based deep fully convolutional networks. We illustrate how these methods can be successfully used on two public datasets: ISPRS Potsdam and DFC2017. We show that OpenStreetMap data can efficiently be integrated into vision-based deep learning models and that it significantly improves both the accuracy and the convergence speed of the networks.
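
The simplest of such fusion schemes, early fusion, amounts to stacking a rasterized OpenStreetMap layer with the spectral bands before the first convolution. A toy PyTorch sketch with hypothetical channel counts:

```python
import torch
import torch.nn as nn

# Early fusion: rasterized OSM layers become extra input channels
# alongside the multispectral bands. Channel counts are hypothetical.
n_spectral, n_osm, hidden = 4, 2, 64
fcn_input = nn.Conv2d(n_spectral + n_osm, hidden, 3, padding=1)

image = torch.randn(1, n_spectral, 256, 256)  # multispectral tile
osm = torch.randn(1, n_osm, 256, 256)         # rasterized OSM layers
features = fcn_input(torch.cat([image, osm], dim=1))
```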


Eurographics | 2017

Point-Cloud Shape Retrieval of Non-Rigid Toys

Frederico A. Limberger; Richard C. Wilson; Masaki Aono; Nicolas Audebert; Alexandre Boulch; Benjamin Bustos; Andrea Giachetti; Afzal Godil; B. Le Saux; Bo Li; Yijuan Lu; Hai-Dang Nguyen; Vinh-Tiep Nguyen; Viet-Khoi Pham; Ivan Sipiran; Atsushi Tatsuma; Minh-Triet Tran; Santiago Velasco-Forero

In this paper, we present the results of the SHREC'17 Track: Point-Cloud Shape Retrieval of Non-Rigid Toys. The aim of this track is to create a fair benchmark to evaluate the performance of methods on the non-rigid point-cloud shape retrieval problem. The database used in this task contains 100 3D point-cloud models classified into 10 different categories. All point clouds were generated by scanning each of the models in its final pose using a 3D scanner, i.e., all models were articulated before being scanned. Retrieval performance is evaluated using seven commonly used statistics (PR-plot, NN, FT, ST, E-measure, DCG, mAP). In total, 8 groups and 31 submissions took part in this contest. The evaluation results suggest that researchers are on the right track towards shape descriptors which can capture the main characteristics of 3D models; however, more tests still need to be conducted, since this is the first time non-rigid signatures have been compared for point-cloud shape retrieval.
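
For reference, two of these retrieval statistics can be computed directly from a pairwise distance matrix. The sketch below implements Nearest Neighbor (NN) accuracy and First Tier (FT) under their usual definitions, with self-matches excluded; it is a generic illustration, not the track's evaluation code.

```python
import numpy as np

def nn_and_first_tier(dist, classes):
    """NN: fraction of queries whose closest match shares their class.
    FT: fraction of a query's class retrieved in the top-k results,
    where k = class size minus one. dist: (N, N) pairwise distances,
    classes: (N,) integer class ids."""
    d = dist.astype(float).copy()
    np.fill_diagonal(d, np.inf)        # exclude self-matches
    order = d.argsort(axis=1)
    nn = (classes[order[:, 0]] == classes).mean()
    ft_scores = []
    for i in range(len(classes)):
        k = int((classes == classes[i]).sum()) - 1
        if k == 0:                     # singleton class: FT undefined
            continue
        retrieved = classes[order[i, :k]]
        ft_scores.append((retrieved == classes[i]).mean())
    return float(nn), float(np.mean(ft_scores))
```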


Computers & Graphics | 2017

SnapNet: 3D point cloud semantic labeling with 2D deep segmentation networks

Alexandre Boulch; Joris Guerry; Bertrand Le Saux; Nicolas Audebert

In this work, we describe a new, general, and efficient method for unstructured point cloud labeling. As the question of how to efficiently use deep Convolutional Neural Networks (CNNs) on 3D data is still open, we propose a framework which applies CNNs on multiple 2D image views (or snapshots) of the point cloud. The approach consists of three core ideas. (i) We pick many suitable snapshots of the point cloud and generate two types of images: a Red-Green-Blue (RGB) view and a depth composite view containing geometric features. (ii) We then perform a pixel-wise labeling of each pair of 2D snapshots using fully convolutional networks. Different architectures are tested to achieve a profitable fusion of our heterogeneous inputs. (iii) Finally, we perform a fast back-projection of the label predictions into 3D space, using efficient buffering to label every 3D point. Experiments show that our method is suitable for various types of point clouds, such as Lidar or photogrammetric data.


arXiv: Neural and Evolutionary Computing | 2016

On the usability of deep networks for object-based image analysis

Nicolas Audebert; Bertrand Le Saux; Sébastien Lefèvre

As with computer vision before it, remote sensing has been radically changed by the introduction of Convolutional Neural Networks. Land cover and land use classification, object detection and scene understanding in aerial images rely more and more on deep learning to achieve new state-of-the-art results. Recent architectures such as Fully Convolutional Networks (Long et al., 2015) can even produce pixel-level annotations for semantic mapping. In this work, we show how to use such deep networks to detect, segment and classify different varieties of wheeled vehicles in aerial images from the ISPRS Potsdam dataset. This allows us to tackle object detection and classification on a complex dataset made up of visually similar classes, and to demonstrate the relevance of such a subclass modeling approach. In particular, we want to show that deep learning is also suitable for object-oriented analysis of Earth Observation data. First, we train an FCN variant on the ISPRS Potsdam dataset and show how the learnt semantic maps can be used to extract precise segmentations of vehicles, which allows us to study the distribution of vehicles in the city. Second, we train a CNN to perform vehicle classification on the VEDAI (Razakarivony and Jurie, 2016) dataset, and transfer its knowledge to classify candidate segmented vehicles on the Potsdam dataset.
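
The final knowledge-transfer step can be sketched simply: each candidate vehicle segment is cropped from the image, resized to the classifier's input size, and passed through the classification CNN. A hypothetical sketch using torchvision; `classifier` stands in for the VEDAI-trained network, which is not reproduced here.

```python
import torch
import torchvision.transforms.functional as TF

def classify_instances(image_tensor, boxes, classifier, size=64):
    """Crop each detected vehicle chip from a (C, H, W) image tensor,
    resize it, and predict its subclass with a pre-trained CNN.
    `classifier` and `size` are illustrative assumptions."""
    preds = []
    for r0, c0, r1, c1 in boxes:
        chip = image_tensor[:, r0:r1, c0:c1]
        chip = TF.resize(chip, [size, size]).unsqueeze(0)
        with torch.no_grad():
            preds.append(classifier(chip).argmax(dim=1).item())
    return preds
```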


Urban Remote Sensing Joint Event | 2017

Deep learning for urban remote sensing

Nicolas Audebert; Alexandre Boulch; Hicham Randrianarivo; Bertrand Le Saux; Marin Ferecatu; Sébastien Lefèvre; Renaud Marlet

This work shows how deep learning techniques can benefit remote sensing. We focus on tasks which are recurrent in Earth Observation data analysis. For classification and semantic mapping of aerial images, we present various deep network architectures and show that context information and dense labeling lead to better performance. For the estimation of normals in point clouds, combining the Hough transform with convolutional networks also improves on the accuracy of previous frameworks by detecting hard configurations such as corners. This shows that deep learning makes it possible to revisit remote sensing problems and offers promising paths for urban modeling and monitoring.
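
For context on the normal-estimation task: the classical baseline is a PCA fit over each point's neighborhood, which is exactly what fails on corners and edges and what the Hough-transform-plus-CNN approach mentioned above aims to improve on. A sketch of the PCA baseline (explicitly not the paper's method):

```python
import numpy as np

def pca_normal(neighbors):
    """Classical PCA normal estimate for one point from its k nearest
    neighbors, given as an (N, 3) array: the eigenvector of the local
    covariance with the smallest eigenvalue. Degrades on corners and
    edges, where the neighborhood spans multiple surfaces."""
    centered = neighbors - neighbors.mean(axis=0)
    cov = centered.T @ centered / len(neighbors)
    eigvals, eigvecs = np.linalg.eigh(cov)  # ascending eigenvalues
    return eigvecs[:, 0]                    # smallest-eigenvalue direction
```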

Collaboration


Dive into Nicolas Audebert's collaborations.

Top Co-Authors

Minh-Triet Tran (Information Technology University)
Afzal Godil (National Institute of Standards and Technology)
Bo Li (Texas State University)
Yijuan Lu (Texas State University)
Frederico A. Limberger (Universidade Federal do Rio Grande do Sul)