Publication


Featured research published by Alexandre Boulch.


Computer Graphics Forum | 2012

Fast and Robust Normal Estimation for Point Clouds with Sharp Features

Alexandre Boulch; Renaud Marlet

This paper presents a new method for estimating normals on unorganized point clouds that preserves sharp features. It is based on a robust version of the Randomized Hough Transform (RHT). We consider the filled Hough transform accumulator as an image of the discrete probability distribution of possible normals. The normals we estimate correspond to the maximum of this distribution. We use a fixed-size accumulator for speed, statistical exploration bounds for robustness, and randomized accumulators to prevent discretization effects. We also propose various sampling strategies to deal with anisotropy, as produced by laser scans due to differences in incidence. Our experiments show that our approach offers an ideal compromise between precision, speed, and robustness: it is at least as precise and noise-resistant as state-of-the-art methods that preserve sharp features, while being almost an order of magnitude faster. Moreover, it can handle anisotropy with minor speed and precision losses.
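The voting scheme is easy to prototype. Below is a minimal sketch, not the authors' implementation: for a single query point, it votes with the normals of random neighbor triplets into a fixed-size spherical accumulator and returns the mean normal of the winning bin. The bin resolution and sample count are illustrative assumptions; the paper additionally derives statistical bounds on the number of samples and randomizes the accumulator to avoid discretization bias.

```python
import numpy as np

def hough_normal(neighbors, n_samples=200, n_bins=16, rng=None):
    """Estimate one normal from a (k, 3) array of neighbor points."""
    rng = np.random.default_rng(rng)
    k = len(neighbors)
    acc = np.zeros((n_bins, 2 * n_bins))             # (theta, phi) accumulator
    votes = []
    for _ in range(n_samples):
        i, j, l = rng.choice(k, size=3, replace=False)
        n = np.cross(neighbors[j] - neighbors[i], neighbors[l] - neighbors[i])
        norm = np.linalg.norm(n)
        if norm < 1e-12:                             # degenerate (collinear) triplet
            continue
        n = n / norm
        if n[2] < 0:                                 # fold antipodal directions together
            n = -n
        theta = np.arccos(np.clip(n[2], 0.0, 1.0))   # in [0, pi/2]
        phi = np.arctan2(n[1], n[0]) + np.pi         # in [0, 2*pi]
        ti = min(int(theta / (np.pi / 2) * n_bins), n_bins - 1)
        pj = min(int(phi / (2 * np.pi) * 2 * n_bins), 2 * n_bins - 1)
        acc[ti, pj] += 1
        votes.append((ti, pj, n))
    ti, pj = np.unravel_index(acc.argmax(), acc.shape)   # mode of the distribution
    mean = np.mean([n for t, p, n in votes if (t, p) == (ti, pj)], axis=0)
    return mean / np.linalg.norm(mean)
```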


IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing | 2016

Processing of Extremely High-Resolution LiDAR and RGB Data: Outcome of the 2015 IEEE GRSS Data Fusion Contest - Part A: 2-D Contest

Manuel Campos-Taberner; Adriana Romero-Soriano; Carlo Gatta; Gustau Camps-Valls; Adrien Lagrange; Bertrand Le Saux; Anne Beaupère; Alexandre Boulch; Adrien Chan-Hon-Tong; Stephane Herbin; Hicham Randrianarivo; Marin Ferecatu; Michal Shimoni; Gabriele Moser; Devis Tuia

In this paper, we discuss the scientific outcomes of the 2015 data fusion contest organized by the Image Analysis and Data Fusion Technical Committee (IADF TC) of the IEEE Geoscience and Remote Sensing Society (IEEE GRSS). As in previous years, the IADF TC organized a data fusion contest aimed at fostering new ideas and solutions for multisource studies. The 2015 edition of the contest proposed a multiresolution and multisensor challenge involving extremely high-resolution RGB images and a three-dimensional (3-D) LiDAR point cloud. The competition was framed in two parallel tracks, considering 2-D and 3-D products, respectively. In this paper, we discuss the scientific results obtained by the winners of the 2-D contest, who either studied the complementarity of RGB and LiDAR with deep neural networks (winning team) or provided a comprehensive benchmarking evaluation of new classification strategies for extremely high-resolution multimodal data (runner-up team). The data and the previously undisclosed ground truth will remain available to the community and can be obtained at http://www.grss-ieee.org/community/technical-committees/data-fusion/2015-ieee-grss-data-fusion-contest/. The 3-D part of the contest is discussed in the Part-B paper [1].


International Geoscience and Remote Sensing Symposium | 2015

Benchmarking classification of earth-observation data: From learning explicit features to convolutional networks

Adrien Lagrange; Bertrand Le Saux; Anne Beaupère; Alexandre Boulch; Adrien Chan-Hon-Tong; Stephane Herbin; Hicham Randrianarivo; Marin Ferecatu

In this paper, we address the task of semantic labeling of multisource earth-observation (EO) data. Specifically, we benchmark several competing methods from the last 15 years, ranging from expert classifiers and spectral support-vector classification to high-level features and deep neural networks. We establish that (1) combining multisensor features is essential for retrieving some specific classes, (2) in the image domain, deep convolutional networks obtain significantly better overall performance, and (3) transfer learning from large general-purpose image sets is highly effective for building EO data classifiers.
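Finding (3) describes what is now a standard pattern. As a hedged sketch (using a modern torchvision backbone for illustration, not one of the networks benchmarked in the paper, and a made-up class count), transfer learning amounts to freezing generic pretrained features and training a small EO-specific head:

```python
import torch.nn as nn
from torch.optim import Adam
from torchvision import models

NUM_EO_CLASSES = 8   # hypothetical: roads, buildings, vegetation, ...

# Start from ImageNet-pretrained weights (the "large general-purpose image set").
model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
for p in model.parameters():
    p.requires_grad = False                  # keep the generic features frozen
model.fc = nn.Linear(model.fc.in_features, NUM_EO_CLASSES)  # new EO head

optimizer = Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()
# Train only the head on EO patches; optionally unfreeze deeper layers
# afterwards and fine-tune them at a lower learning rate.
```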


3DOR | 2017

Unstructured Point Cloud Semantic Labeling Using Deep Segmentation Networks

Alexandre Boulch; Bertrand Le Saux; Nicolas Audebert

In this work, we describe a new, general, and efficient method for unstructured point cloud labeling. As the question of efficiently using deep Convolutional Neural Networks (CNNs) on 3D data is still a pending issue, we propose a framework which applies CNNs on multiple 2D image views (or snapshots) of the point cloud. The approach consists of three core ideas. (i) We pick many suitable snapshots of the point cloud. We generate two types of images: a Red-Green-Blue (RGB) view and a depth composite view containing geometric features. (ii) We then perform a pixel-wise labeling of each pair of 2D snapshots using fully convolutional networks. Different architectures are tested to achieve a profitable fusion of our heterogeneous inputs. (iii) Finally, we perform fast back-projection of the label predictions into 3D space using efficient buffering to label every 3D point. Experiments show that our method is suitable for various types of point clouds, such as LiDAR or photogrammetric data.
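The three ideas compose into a short pipeline. The sketch below is schematic: `render_snapshot` (returning an RGB image, a depth-composite image, and a pixel-to-point map) and the 2D segmentation network `fcn` are hypothetical caller-supplied functions, not the authors' code.

```python
import numpy as np

def label_point_cloud(points, views, render_snapshot, fcn, n_classes):
    """points: (N, 3) array; views: iterable of camera poses."""
    votes = np.zeros((len(points), n_classes))
    for view in views:                                # (i) many snapshots
        rgb, depth_composite, pix2point = render_snapshot(points, view)
        probs = fcn(rgb, depth_composite)             # (ii) pixel-wise 2D labeling
        for (u, v), idx in pix2point.items():         # (iii) back-projection to 3D
            votes[idx] += probs[u, v]                 # accumulate per-class scores
    return votes.argmax(axis=1)                       # final label per 3D point
```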


Symposium on Geometry Processing | 2016

Deep learning for robust normal estimation in unstructured point clouds

Alexandre Boulch; Renaud Marlet

Normal estimation in point clouds is a crucial first step for numerous algorithms, from surface reconstruction and scene understanding to rendering. A recurrent issue when estimating normals is making appropriate decisions close to sharp features, so as not to smooth edges, and where the sampling density is not uniform, so as to prevent bias. Rather than resorting to manually designed geometric priors, we propose to learn how to make these decisions, using ground-truth data made from synthetic scenes. For this, we project a discretized Hough space representing normal directions onto a structure amenable to deep learning. The resulting normal estimation method outperforms the state of the art most of the time regarding robustness to outliers, to noise, and to point-density variation in the presence of sharp edges, while remaining fast and scaling up to millions of points.
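The key trick is that a Hough accumulator over normal directions can be rasterized into an image and handed to a standard CNN. A minimal sketch of such a network follows; the layer sizes are illustrative assumptions, not the paper's architecture, and the two outputs stand for a 2D parameterization of the normal direction.

```python
import torch.nn as nn

class HoughNormalNet(nn.Module):
    """Regress a normal (as two parameters) from a Hough accumulator image."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4),
        )
        # Trained against ground-truth normals from synthetic scenes.
        self.head = nn.Linear(64 * 4 * 4, 2)

    def forward(self, acc_image):        # acc_image: (B, 1, H, W)
        return self.head(self.features(acc_image).flatten(1))
```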


Symposium on Geometry Processing | 2014

Piecewise-Planar 3D Reconstruction with Edge and Corner Regularization

Alexandre Boulch; Martin de La Gorce; Renaud Marlet

This paper presents a method for the 3D reconstruction of a piecewise-planar surface from range images, typically laser scans with millions of points. The reconstructed surface is a watertight polygonal mesh that conforms to observations at a given scale in the visible planar parts of the scene, and that is plausible in hidden parts. We formulate surface reconstruction as a discrete optimization problem based on detected and hypothesized planes. One of our major contributions, besides a treatment of data anisotropy and novel surface hypotheses, is a regularization of the reconstructed surface w.r.t. the length of edges and the number of corners. Compared to classical area-based regularization, it better captures surface complexity and is therefore better suited for man-made environments, such as buildings. To handle the underlying higher-order potentials, which are problematic for MRF optimizers, we formulate minimization as a sparse mixed-integer linear programming problem and obtain an approximate solution using a simple relaxation. Experiments show that it is fast and reaches near-optimal solutions.
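Stripped to a toy, the optimization pattern looks as follows: binary facet-selection variables, an auxiliary variable charging the edge penalty, and integrality relaxed to [0, 1]. The costs and constraints below are made up for illustration; the paper's potentials over edge lengths and corner counts are far richer.

```python
import numpy as np
from scipy.optimize import linprog

data_cost = np.array([1.0, 3.0, 2.0])   # fit cost of three candidate facets
edge_cost = 0.5                          # penalty if facets 0 and 1 disagree

# Variables: x0, x1, x2 (facet kept or not) and e01 >= |x0 - x1|,
# linearized as e01 >= x0 - x1 and e01 >= x1 - x0.
c = np.concatenate([data_cost, [edge_cost]])
A_ub = np.array([[ 1, -1, 0, -1],        #  x0 - x1 - e01 <= 0
                 [-1,  1, 0, -1]])       # -x0 + x1 - e01 <= 0
b_ub = np.zeros(2)
A_eq = np.array([[1, 1, 1, 0]])          # toy coverage: keep exactly two facets
b_eq = np.array([2])

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, 1)] * 4)       # the relaxation of x in {0, 1}
kept = res.x[:3].round()                 # here the relaxed optimum is integral
```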


Symposium on Geometry Processing | 2013

Semantizing complex 3D scenes using constrained attribute grammars

Alexandre Boulch; Simon Houllier; Renaud Marlet; Olivier Tournaire

We propose a new approach to automatically semantize complex objects in a 3D scene. For this, we define an expressive formalism combining the power of both attribute grammars and constraints. It offers a practical conceptual interface, which is crucial for writing large, maintainable specifications. As recursion is inadequate for expressing large collections of items, we introduce maximal operators, which are essential to reduce the parsing search space. Given a grammar in this formalism and a 3D scene, we show how to automatically compute a shared parse forest of all interpretations (in practice only a few, thanks to relevant constraints). We evaluate this technique for building-model semantization using CAD model examples as well as photogrammetric and simulated LiDAR data.
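To see why maximal operators matter, consider a toy rule: "a staircase is a maximal run of adjacent, rising steps". A recursive rule would make the parser consider every sub-run; a maximal operator commits to the longest one. The sketch below is a hedged illustration with made-up geometry, not the paper's formalism.

```python
from dataclasses import dataclass

@dataclass
class Box:
    x: float   # horizontal position
    z: float   # base height
    h: float   # step height

def maximal_staircases(boxes, dx=1.0, tol=0.05):
    """Greedily extend each run as far as the constraints allow,
    so shorter sub-runs are never enumerated."""
    if not boxes:
        return []
    boxes = sorted(boxes, key=lambda b: b.x)
    runs, run = [], [boxes[0]]
    for prev, cur in zip(boxes, boxes[1:]):
        adjacent = abs(cur.x - prev.x - dx) < tol       # spatial constraint
        rising = abs(cur.z - (prev.z + prev.h)) < tol   # attribute constraint
        if adjacent and rising:
            run.append(cur)
        else:
            runs.append(run)
            run = [cur]
    runs.append(run)
    return [r for r in runs if len(r) >= 3]             # a staircase needs >= 3 steps
```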


Eurographics | 2017

Point-Cloud Shape Retrieval of Non-Rigid Toys

Frederico A. Limberger; Richard C. Wilson; Masaki Aono; Nicolas Audebert; Alexandre Boulch; Benjamin Bustos; Andrea Giachetti; Afzal Godil; B. Le Saux; Bo Li; Yijuan Lu; Hai-Dang Nguyen; Vinh-Tiep Nguyen; Viet-Khoi Pham; Ivan Sipiran; Atsushi Tatsuma; Minh-Triet Tran; Santiago Velasco-Forero

In this paper, we present the results of the SHREC’17 Track: Point-Cloud Shape Retrieval of Non-Rigid Toys. The aim of this track is to create a fair benchmark to evaluate the performance of methods on the non-rigid point-cloud shape retrieval problem. The database used in this task contains 100 3D point-cloud models classified into 10 different categories. All point clouds were generated by scanning each model in its final pose using a 3D scanner, i.e., all models were articulated before being scanned. The retrieval performance is evaluated using seven commonly used statistics (PR-plot, NN, FT, ST, E-measure, DCG, mAP). In total, 8 groups and 31 submissions took part in this contest. The evaluation results of this work suggest that researchers are on the right track toward shape descriptors that can capture the main characteristics of 3D models; however, more tests still need to be made, since this is the first time non-rigid signatures have been compared for point-cloud shape retrieval.
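Two of the listed statistics are easy to state precisely. A minimal sketch, assuming `ranked` holds the class labels of retrieved models (best match first, query excluded) and `C` is the size of the query's class including the query itself:

```python
def first_tier(ranked, query_class, C):
    """FT: fraction of the query's class found in the top C - 1 results."""
    relevant = C - 1
    hits = sum(lbl == query_class for lbl in ranked[:relevant])
    return hits / relevant

def average_precision(ranked, query_class):
    """AP over the full ranked list: mean precision at each relevant hit."""
    hits, ap = 0, 0.0
    for rank, lbl in enumerate(ranked, start=1):
        if lbl == query_class:
            hits += 1
            ap += hits / rank
    return ap / hits if hits else 0.0

# Toy example with C = 10, i.e. FT looks at the top 9 results.
ranked = ['cat'] * 6 + ['dog'] * 3 + ['cat'] * 3
print(first_tier(ranked, 'cat', C=10))    # 6/9
print(average_precision(ranked, 'cat'))
```

mAP is then AP averaged over all queries.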


Computers & Graphics | 2017

SnapNet: 3D point cloud semantic labeling with 2D deep segmentation networks

Alexandre Boulch; Joris Guerry; Bertrand Le Saux; Nicolas Audebert

In this work, we describe a new, general, and efficient method for unstructured point cloud labeling. As the question of efficiently using deep Convolutional Neural Networks (CNNs) on 3D data is still a pending issue, we propose a framework which applies CNNs on multiple 2D image views (or snapshots) of the point cloud. The approach consists of three core ideas. (i) We pick many suitable snapshots of the point cloud. We generate two types of images: a Red-Green-Blue (RGB) view and a depth composite view containing geometric features. (ii) We then perform a pixel-wise labeling of each pair of 2D snapshots using fully convolutional networks. Different architectures are tested to achieve a profitable fusion of our heterogeneous inputs. (iii) Finally, we perform fast back-projection of the label predictions into 3D space using efficient buffering to label every 3D point. Experiments show that our method is suitable for various types of point clouds, such as LiDAR or photogrammetric data.


International Conference on Pattern Recognition | 2014

Statistical Criteria for Shape Fusion and Selection

Alexandre Boulch; Renaud Marlet

Surface reconstruction from point clouds often relies on a primitive extraction step, which may be followed by a merging step because of possible over-segmentation. We present two statistical criteria to decide whether or not two surfaces are to be considered the same, and thus can be merged. They are based on the Kolmogorov-Smirnov and Mann-Whitney statistical tests for comparing distributions. Moreover, computation time can be significantly cut down using reduced sampling based on the Dvoretzky-Kiefer-Wolfowitz inequality. The strength of our approach is that in practice it relies on a single intuitive parameter (homogeneous to a distance) and that it can be applied to any shape, including meshes, not just geometric primitives. It also enables the comparison of shapes of different kinds, providing a way to choose between different shape candidates. We show several applications of our method, experimenting with geometric primitive (plane and cylinder) detection, selection, and fusion on both precise laser scans and noisy photogrammetric 3D data.
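The decision rule maps directly onto standard library tests. A hedged sketch using scipy, where the choice of residuals to compare and the significance threshold are illustrative assumptions:

```python
import numpy as np
from scipy.stats import ks_2samp, mannwhitneyu

def same_surface(dists_a, dists_b, alpha=0.05):
    """Merge two primitives only if neither test rejects the hypothesis
    that their residual-distance samples follow the same distribution."""
    ks_p = ks_2samp(dists_a, dists_b).pvalue
    mw_p = mannwhitneyu(dists_a, dists_b).pvalue
    return ks_p > alpha and mw_p > alpha

def dkw_sample_size(eps, delta):
    """Dvoretzky-Kiefer-Wolfowitz bound: n samples suffice to estimate a
    CDF within eps with probability 1 - delta, enabling reduced sampling."""
    return int(np.ceil(np.log(2.0 / delta) / (2.0 * eps ** 2)))
```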

Collaboration


Dive into Alexandre Boulch's collaborations.

Top Co-Authors

Marin Ferecatu
Conservatoire national des arts et métiers

Minh-Triet Tran
Information Technology University

Afzal Godil
National Institute of Standards and Technology

Bo Li
Texas State University

Yijuan Lu
Texas State University

Anne Beaupère
École Normale Supérieure