Despite the popularity of deep neural networks in various domains, the
extraction of digital terrain models (DTMs) from airborne laser scanning
(ALS) point clouds is still challenging. This may be due to the lack of a dedicated large-scale annotated dataset and to the data-structure discrepancy between point clouds and DTMs. To promote data-driven DTM
extraction, this article collects from open sources a large-scale
dataset of ALS point clouds and corresponding DTMs with various urban,
forested, and mountainous scenes. A baseline method, coined DeepTerRa, is proposed as a first attempt to train a deep neural network to extract DTMs directly from ALS point clouds via rasterization techniques.
Extensive studies with well-established methods are performed to
benchmark the dataset and analyze the challenges in learning to extract
DTM from point clouds. The experimental results show the promise of this agnostic, data-driven approach, which reaches submetric error levels compared to methods specifically designed for DTM extraction. The data and source code are available online at https://lhoangan.github.io/deepterra/ for reproducibility and further research.
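The rasterization step mentioned above can be pictured with a minimal sketch (this is an illustrative assumption, not the DeepTerRa pipeline itself): an ALS point cloud is binned onto a regular 2D grid, and each cell keeps its lowest return as a crude proxy for the terrain surface. Cell size and the points are made up for illustration.

```python
def rasterize_min_z(points, cell=1.0):
    """points: iterable of (x, y, z) tuples; returns {(col, row): min z}."""
    grid = {}
    for x, y, z in points:
        key = (int(x // cell), int(y // cell))  # 2D cell index
        if key not in grid or z < grid[key]:
            grid[key] = z                       # keep the lowest return
    return grid

points = [(0.2, 0.3, 12.0),  # canopy return
          (0.7, 0.1, 3.1),   # ground return, same cell
          (1.5, 0.4, 3.4)]   # ground return, next cell
dtm = rasterize_min_z(points)  # {(0, 0): 3.1, (1, 0): 3.4}
```

A learned DTM extractor refines such a raster further, since the lowest return is not the ground under dense vegetation or buildings.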
This thesis evaluates the relevance of morphological hierarchies and
deep neural networks for analysing LiDAR data by means of several
discretization strategies. The quantity of available data is growing rapidly in both coverage and resolution. However, current datasets are not yet fully exploited, owing to the lack of efficient methodological tools for this specific type of data. Morphological structures are known to extract reliable multi-scale features while being extremely computationally efficient. Meanwhile, the breakthrough of deep learning in computer vision has shaken up the remote sensing community.
To this end we define and evaluate different discretization strategies
of LiDAR data. In the first part, we reorganise the point clouds into regular 2D grids and derive several LiDAR features, aiming to capture a complete elevation description together with spectral values and LiDAR-specific information. In the second part, we reorganise the point clouds into regular 3D grids. These regular grids are sufficient to
provide the neighboring context needed for the morphological
hierarchies, and the proposed grids are also adapted to the input layers
of state-of-the-art deep neural networks. The different methods are
systematically validated in remote sensing scenarios.
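The 3D discretization strategy described above can be sketched minimally (grid resolution and points are illustrative assumptions): points are assigned to regular voxels, and each voxel stores its point count, giving a density volume that both morphological hierarchies and 3D network input layers can consume.

```python
def voxelize(points, res=1.0):
    """points: iterable of (x, y, z); returns {(i, j, k): point count}."""
    grid = {}
    for x, y, z in points:
        key = (int(x // res), int(y // res), int(z // res))  # voxel index
        grid[key] = grid.get(key, 0) + 1                     # density attribute
    return grid

pts = [(0.1, 0.2, 0.3), (0.4, 0.5, 0.6), (1.2, 0.1, 2.5)]
vox = voxelize(pts)   # {(0, 0, 0): 2, (1, 0, 2): 1}
```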
Morphological attribute profiles (APs) are among the most prominent
methods for spatial–spectral pixel analysis of remote sensing images.
Since their introduction a decade ago to tackle land cover
classification, many studies have contributed to the state of the
art, focusing not only on their application to a wider range of tasks
but also on their performance improvement and extension to more complex
Earth observation data.
We introduce Fat Pad cages for posing facial meshes. Our approach combines a cage representation with facial anatomical elements, enabling users with no artistic skill to quickly sketch realistic facial expressions. The model
relies on one or several cage(s) that deform(s) the mesh following the
human fat pad map. We propose a new function to filter Green Coordinates using geodesic distances, preventing global deformation while ensuring smooth deformation at the borders. Lips, nostrils and eyelids
are processed slightly differently to allow folding up and opening.
Cages are automatically created and fit any new unknown facial mesh. To
validate our approach, we present a user study comparing our Fat Pad
cages to regular Green Coordinates. Results show that Fat Pad cages
bring a significant improvement in reproducing existing facial
expressions.
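The geodesic filtering idea can be pictured as a falloff weight on cage influence: full influence within an inner geodesic radius, none beyond an outer one, with a smooth blend in between. The smoothstep profile and the radii below are illustrative assumptions, not the paper's exact function.

```python
def geodesic_falloff(d, r_in, r_out):
    """Weight in [0, 1] for a vertex at geodesic distance d from a fat pad:
    1 inside r_in, 0 beyond r_out, smoothstep in between."""
    if d <= r_in:
        return 1.0
    if d >= r_out:
        return 0.0
    t = (d - r_in) / (r_out - r_in)
    return 1.0 - t * t * (3.0 - 2.0 * t)   # smoothstep, C1 at both ends

# Cage deformations weighted this way stay local: distant vertices get
# weight 0, so editing one fat pad does not move the whole face.
weights = [geodesic_falloff(d, 1.0, 3.0) for d in (0.5, 2.0, 4.0)]
```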
The use of high-resolution digital terrain models derived from airborne LiDAR systems is becoming increasingly prevalent. Effective multi-scale
structure characterization is of crucial importance for various domains
such as geosciences, archaeology and Earth observation. This paper deals
with structure detection in large datasets with little or no prior
knowledge. In a recent work, we have demonstrated the relevance of
hierarchical representations to enhance the description of digital
elevation models (Guiotte et al., 2019). In this paper, we go further and use the pattern spectrum, a multi-scale tool originating from mathematical morphology, here enhanced by hierarchical representations. Pattern spectra make it possible to compute, globally and efficiently, the distribution of sizes and shapes of the objects contained in a digital elevation model. The tree-based pattern spectra used in this paper allow us to analyse the data and extract features of interest. We report
experiments in a natural environment with two use cases, related to gold
panning and dikes respectively. The process is fast enough to allow
interactive analysis.
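A pattern spectrum can be illustrated compactly in 1D with flat openings (the paper uses the tree-based variant on 2D elevation models; this simplified sketch is an assumption for exposition): each bin records how much signal mass disappears between consecutive opening sizes, revealing the size distribution of the structures.

```python
def erode(s, k):
    """Flat erosion with a window of radius k."""
    return [min(s[max(0, i - k):i + k + 1]) for i in range(len(s))]

def dilate(s, k):
    """Flat dilation with a window of radius k."""
    return [max(s[max(0, i - k):i + k + 1]) for i in range(len(s))]

def opening(s, k):
    return dilate(erode(s, k), k)

def pattern_spectrum(s, sizes):
    """For increasing sizes, record the mass removed between consecutive
    openings: the size distribution of the structures in the signal."""
    prev, spec = sum(s), {}
    for k in sizes:
        cur = sum(opening(s, k))
        spec[k] = prev - cur
        prev = cur
    return spec

# a narrow peak (mass 5) and a wide plateau (mass 15)
signal = [0, 0, 5, 0, 0, 3, 3, 3, 3, 3, 0, 0]
spec = pattern_spectrum(signal, [1, 3])   # {1: 5, 3: 15}
```

The two bins separate the thin structure from the wide one, which is exactly the kind of global size/shape summary used for the gold-panning and dike use cases.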
LiDAR data are widely used in various domains related to geosciences
(flow, erosion, rock deformations, etc.), computer graphics (3D
reconstruction) or earth observation (detection of trees, roads,
buildings, etc.). Because of the unstructured nature of the resulting 3D points and the cost of acquisition, LiDAR data processing remains challenging (scarce training data, complex spatial neighborhood relationships, etc.). In practice, one can directly analyze the 3D
points using feature extraction and then classify the points via machine
learning techniques (Brodu, Lague, 2012, Niemeyer et al., 2014, Mallet
et al., 2011). In addition, recent neural network developments have
allowed precise point cloud segmentation, especially with the seminal PointNet network and its extensions (Qi et al., 2017a, Riegler et al., 2017). Other authors prefer to rasterize or voxelize the point cloud and use more conventional computer vision strategies to analyze structures (Lodha et al., 2006). In a recent work, we demonstrated that Digital Elevation Models (DEMs) fail to capture the vertical complexity of objects in urban environments (Guiotte et al., 2020). These results highlighted the necessity of preserving the 3D
structure of the point cloud as long as possible in the processing. In
this paper, we therefore rely on ortho-waveforms to compute a land cover
map. Ortho-waveforms are directly computed from the waveforms in a
regular 3D grid. This method provides volumes somewhat similar to hyperspectral data, where each pixel is associated with one ortho-waveform. We then exploit efficient neural networks adapted to
the classification of hyperspectral data when few samples are available.
Our results, obtained on the 2018 Data Fusion Contest dataset (DFC),
demonstrate the efficiency of the approach.
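The ortho-waveform construction can be sketched minimally (grid parameters and intensities below are illustrative assumptions): return intensities are accumulated into the vertical bins of a regular 3D grid, so each 2D cell ends up holding a waveform-like vector, analogous to a hyperspectral pixel.

```python
def ortho_waveforms(returns, cell=1.0, z0=0.0, dz=0.5, nbins=4):
    """returns: iterable of (x, y, z, intensity).
    Gives {(col, row): accumulated intensity per vertical bin}."""
    vol = {}
    for x, y, z, a in returns:
        b = int((z - z0) // dz)                 # vertical bin index
        if 0 <= b < nbins:
            key = (int(x // cell), int(y // cell))
            wf = vol.setdefault(key, [0.0] * nbins)
            wf[b] += a                          # accumulate intensity
    return vol

data = [(0.1, 0.1, 0.2, 1.0), (0.3, 0.2, 1.7, 2.0), (1.1, 0.1, 0.2, 0.5)]
cube = ortho_waveforms(data)
# {(0, 0): [1.0, 0.0, 0.0, 2.0], (1, 0): [0.5, 0.0, 0.0, 0.0]}
```

Each cell's vector can then be fed to a hyperspectral-style classifier, one "spectrum" per pixel.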
GRSL
Semantic Segmentation of LiDAR Points Clouds: Rasterisation beyond
Digital Elevation Models
Florent Guiotte, Minh-Tan Pham, Romain Dambreville, and 2 more authors
LiDAR point clouds are receiving a growing interest in remote sensing as
they provide rich information to be used independently or together with
optical data sources such as aerial imagery. However, their
unstructured and sparse nature makes them difficult to handle, in contrast to raster imagery, for which many efficient tools are available. To overcome this specific nature of LiDAR point clouds, standard approaches often rely on converting the point cloud into a digital elevation model (DEM), represented as a 2D raster. Such a raster can then be used similarly to optical images, e.g. with 2D convolutional neural
networks for semantic segmentation. In this letter, we show that LiDAR
point clouds provide more information than only the DEM, and that
considering alternative rasterization strategies helps to achieve better
semantic segmentation results. We illustrate our findings on the IEEE
DFC 2018 dataset.
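What "rasterization beyond the DEM" can mean is sketched below (the feature choice is an illustrative assumption): from the same point cloud, several per-cell channels are derived, here the highest return, the return density, and the mean intensity, instead of elevation alone.

```python
def multi_raster(points, cell=1.0):
    """points: (x, y, z, intensity).
    Returns {(col, row): (max z, point count, mean intensity)}."""
    acc = {}
    for x, y, z, a in points:
        key = (int(x // cell), int(y // cell))
        zmax, n, asum = acc.get(key, (float("-inf"), 0, 0.0))
        acc[key] = (max(zmax, z), n + 1, asum + a)   # running per-cell stats
    return {k: (zmax, n, asum / n) for k, (zmax, n, asum) in acc.items()}

pts = [(0.2, 0.1, 5.0, 10.0), (0.8, 0.4, 2.0, 30.0)]
rasters = multi_raster(pts)   # {(0, 0): (5.0, 2, 20.0)}
```

Stacking such channels gives a multi-band image that a 2D segmentation network can exploit, which is the spirit of the strategies compared in the letter.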
This paper deals with the morphological characterization of unstructured 3D point clouds derived from LiDAR data. A large majority of studies first
rasterize 3D point clouds onto regular 2D grids and then use standard 2D
image processing tools for characterizing data. In this paper, we
suggest instead to keep the 3D structure as long as possible in the
process. To this end, as raw LiDAR point clouds are unstructured, we first propose several voxelization strategies and then extract morphological features from the voxel data. The results obtained with attribute filtering show the ability of this process to efficiently extract useful information.
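An attribute filter on voxel data can be illustrated as follows (the 6-connectivity and the area attribute are assumptions chosen for this sketch): connected components of occupied voxels smaller than a size threshold are removed, which typically discards isolated noise returns while keeping solid structures.

```python
from collections import deque

def area_filter_3d(voxels, min_size):
    """voxels: set of (i, j, k) occupied cells.
    Keep only 6-connected components with at least min_size voxels."""
    remaining, kept = set(voxels), set()
    while remaining:
        seed = remaining.pop()
        comp, queue = {seed}, deque([seed])
        while queue:                              # BFS over one component
            i, j, k = queue.popleft()
            for di, dj, dk in ((1, 0, 0), (-1, 0, 0), (0, 1, 0),
                               (0, -1, 0), (0, 0, 1), (0, 0, -1)):
                nb = (i + di, j + dj, k + dk)
                if nb in remaining:
                    remaining.discard(nb)
                    comp.add(nb)
                    queue.append(nb)
        if len(comp) >= min_size:                 # area criterion
            kept |= comp
    return kept

bar = {(0, 0, 0), (0, 0, 1), (0, 0, 2)}   # a 3-voxel structure
noise = {(5, 5, 5)}                        # an isolated return
clean = area_filter_3d(bar | noise, 2)     # keeps only the bar
```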
ORASIS
Filtrage et classification de nuage de points sur la base d’attributs
morphologiques
This article deals with the analysis of LiDAR data through the morphological characterization of the resulting point clouds. While most works first perform a rasterization (converting the point cloud into 2D data structured as pixels) and then apply image analysis tools, we propose here to keep the 3D structure as long as possible (computing features on it) and to structure the data as late as possible. In practice, a voxelization step is applied to the raw data in order to use mathematical tools defined on regular volumes. We then use hierarchical representations to characterize these voxels. To illustrate the benefits of such an approach, several applications are presented, including denoising, filtering and classification of point clouds.
This paper evaluates rasterization strategies and the benefit of
hierarchical representations, in particular attribute profiles, to
classify urban scenes derived from multispectral LiDAR acquisitions. In
recent years it has been found that rasterized LiDAR provides a reliable
source of information on its own or for fusion with
multispectral/hyperspectral imagery. However, previous works using attribute profiles on LiDAR rely on elevation data only. Our approach focuses on several LiDAR features rasterized with a multilevel description to produce precise land cover maps over urban areas. Our experimental results, obtained with LiDAR data from the University of Houston, indicate good classification results with alternative rasters, and even better ones when multilevel image descriptions are used.
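An attribute profile stacks an image with attribute filterings at increasing thresholds, together with their duals. In 1D the area attribute coincides with opening by a flat segment, which allows a compact self-contained sketch; the real method operates on 2D rasters through a tree representation, so this is an illustrative simplification.

```python
def open_seg(s, length):
    """1D opening with a flat segment; equals a 1D area opening."""
    n = len(s)
    ero = [min(s[i:i + length]) for i in range(n - length + 1)]
    return [max(ero[max(0, i - length + 1):min(len(ero), i + 1)])
            for i in range(n)]

def attribute_profile(s, areas):
    """[closings (largest area first), original, openings (increasing area)]."""
    neg = [-v for v in s]
    closings = [[-v for v in open_seg(neg, a)] for a in sorted(areas, reverse=True)]
    openings = [open_seg(s, a) for a in sorted(areas)]
    return closings + [list(s)] + openings

ap = attribute_profile([0, 5, 0, 0], [2])
# the opening channel removes the width-1 peak: [0, 0, 0, 0]
```

Each channel of the stack becomes one feature plane; concatenating profiles of several LiDAR rasters yields the multilevel description used for classification.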
This article deals with the rasterization of 3D point clouds through hierarchical representations (in particular, morphological attribute profiles). When these data come from LiDAR devices, they are frequently rasterized into an elevation map (used alone or combined with multi- or hyperspectral images). Although some works apply attribute profiles to such elevation data, we focus here on several rasterized LiDAR features (related to echoes, waveform returns, etc.) and on a multi-scale description to produce precise land cover maps over urban areas. Our experimental results, obtained with LiDAR data from the University of Houston, show good classification results using our rasters.
This paper deals with strategies for LiDAR data analysis. While a large
majority of studies first rasterize 3D point clouds onto regular 2D
grids and then use 2D image processing tools for characterizing data,
our work instead suggests keeping the 3D structure as long as possible, computing features on the 3D data and rasterizing later in the process. In this way, the vertical component is still taken into account. In
practice, a voxelization step of raw data is performed in order to
exploit mathematical tools defined on regular volumes. More precisely,
we focus on attribute profiles that have been shown to be very efficient
features to characterize remote sensing scenes. They require the
computation of an underlying hierarchical structure (through a
Max-Tree). Experimental results obtained on urban LiDAR data classification confirm the performance of this strategy compared with an early rasterization process.
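The "rasterize late" strategy can be sketched minimally (the per-voxel feature and the aggregation function are illustrative assumptions): features are first computed in the 3D grid, and only at the end is the vertical axis collapsed into a 2D raster, so the vertical component informs the features before any projection.

```python
def late_rasterize(voxel_feat, agg=max):
    """voxel_feat: {(i, j, k): feature value}. The vertical axis k is
    collapsed last: each 2D cell (i, j) aggregates its column of features."""
    cols = {}
    for (i, j, k), v in voxel_feat.items():
        cols.setdefault((i, j), []).append(v)   # gather the vertical column
    return {ij: agg(vs) for ij, vs in cols.items()}

feat = {(0, 0, 0): 1.0, (0, 0, 3): 4.0, (1, 0, 1): 2.0}
raster = late_rasterize(feat)   # {(0, 0): 4.0, (1, 0): 2.0}
```

An early-rasterization baseline would instead project the points to 2D first and compute features on that flat image, losing the column structure.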