
Publications


Featured research published by Wenzhi Liao.


IEEE Geoscience and Remote Sensing Magazine | 2015

Hyperspectral Pansharpening: A Review

Laetitia Loncan; Luís B. Almeida; José M. Bioucas-Dias; Xavier Briottet; Jocelyn Chanussot; Nicolas Dobigeon; Sophie Fabre; Wenzhi Liao; Giorgio Licciardi; Miguel Simões; Jean-Yves Tourneret; Miguel Angel Veganzones; Gemine Vivone; Qi Wei; Naoto Yokoya

Pansharpening aims at fusing a panchromatic image with a multispectral one, to generate an image with the high spatial resolution of the former and the high spectral resolution of the latter. In the last decade, many algorithms have been presented in the literature for pansharpening using multispectral data. With the increasing availability of hyperspectral systems, these methods are now being adapted to hyperspectral images. In this work, we compare new pansharpening techniques designed for hyperspectral data with some of the state-of-the-art methods for multispectral pansharpening, which have been adapted for hyperspectral data. Eleven methods from different classes (component substitution, multiresolution analysis, hybrid, Bayesian, and matrix factorization) are analyzed. These methods are applied to three datasets, and their effectiveness and robustness are evaluated with widely used performance indicators. In addition, all the pansharpening techniques considered in this paper have been implemented in a MATLAB toolbox that is made available to the community.
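
Component substitution is one of the method classes compared in the review. As a hedged illustration of the general idea only (not the MATLAB toolbox or any specific method from the paper), the sketch below applies a Brovey-style band rescaling in Python; the array shapes and the band-averaged intensity estimate are illustrative assumptions.

# A minimal component-substitution pansharpening sketch (Brovey-style),
# not the toolbox implementation described in the paper.
import numpy as np

def brovey_pansharpen(ms_upsampled, pan, eps=1e-8):
    """Fuse an upsampled multispectral cube (H, W, B) with a panchromatic
    image (H, W) by rescaling each band with the ratio pan / intensity."""
    intensity = ms_upsampled.mean(axis=2)        # crude intensity estimate
    ratio = pan / (intensity + eps)              # spatial detail injection
    return ms_upsampled * ratio[..., None]       # apply the ratio per band

# Example with synthetic data: a 4-band multispectral image already
# resampled to the panchromatic grid.
ms = np.random.rand(64, 64, 4)
pan = np.random.rand(64, 64)
sharpened = brovey_pansharpen(ms, pan)
print(sharpened.shape)  # (64, 64, 4)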


IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing | 2014

Hyperspectral and LiDAR Data Fusion: Outcome of the 2013 GRSS Data Fusion Contest

Christian Debes; Andreas Merentitis; Roel Heremans; Jürgen T. Hahn; Nikolaos Frangiadakis; Tim Van Kasteren; Wenzhi Liao; Rik Bellens; Aleksandra Pizurica; Sidharta Gautama; Wilfried Philips; Saurabh Prasad; Qian Du; Fabio Pacifici

The 2013 Data Fusion Contest organized by the Data Fusion Technical Committee (DFTC) of the IEEE Geoscience and Remote Sensing Society aimed at investigating the synergistic use of hyperspectral and Light Detection And Ranging (LiDAR) data. The data sets distributed to the participants during the Contest, a hyperspectral image and the corresponding LiDAR-derived digital surface model (DSM), were acquired by the NSF-funded Center for Airborne Laser Mapping over the University of Houston campus and its neighboring area in the summer of 2012. This paper highlights the two awarded research contributions, which investigated different approaches for the fusion of hyperspectral and LiDAR data, including a combined unsupervised and supervised classification scheme, and a graph-based method for the fusion of spectral, spatial, and elevation information.


IEEE Transactions on Geoscience and Remote Sensing | 2013

Semisupervised Local Discriminant Analysis for Feature Extraction in Hyperspectral Images

Wenzhi Liao; Aleksandra Pizurica; Paul Scheunders; Wilfried Philips; Youguo Pi

We propose a novel semisupervised local discriminant analysis method for feature extraction in hyperspectral remote sensing imagery, with improved performance in both ill-posed and poorly posed conditions. The proposed method combines an unsupervised method (local linear feature extraction) and a supervised method (linear discriminant analysis) in a novel framework without any free parameters. The underlying idea is to design an optimal projection matrix that preserves the local neighborhood information inferred from unlabeled samples, while simultaneously maximizing the class discrimination of the data inferred from the labeled samples. Experimental results on four real hyperspectral images demonstrate that the proposed method compares favorably with conventional feature extraction methods.
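
A hedged sketch of this general idea (not the parameter-free formulation of the paper): combine LDA scatter matrices computed from the labeled samples with a locality-preserving graph Laplacian built from labeled and unlabeled samples, and solve a generalized eigenproblem for the projection. The blending weight alpha and the k-nearest-neighbor graph construction are illustrative assumptions.

# Semisupervised discriminant projection sketch: LDA scatter (labeled)
# traded off against a locality-preserving Laplacian term (all samples).
import numpy as np
from sklearn.neighbors import kneighbors_graph

def ssda_projection(X_lab, y_lab, X_unlab, n_components=10, k=5, alpha=0.5):
    # LDA within- and between-class scatter from the labeled samples
    mean_all = X_lab.mean(axis=0)
    Sw = np.zeros((X_lab.shape[1],) * 2)
    Sb = np.zeros_like(Sw)
    for c in np.unique(y_lab):
        Xc = X_lab[y_lab == c]
        mc = Xc.mean(axis=0)
        Sw += (Xc - mc).T @ (Xc - mc)
        Sb += len(Xc) * np.outer(mc - mean_all, mc - mean_all)

    # Graph Laplacian from labeled + unlabeled samples (locality preservation)
    X = np.vstack([X_lab, X_unlab])
    W = kneighbors_graph(X, k, mode='connectivity', include_self=False)
    W = 0.5 * (W + W.T).toarray()
    L = np.diag(W.sum(axis=1)) - W

    # Generalized eigenproblem: maximize discrimination, preserve locality
    A = Sb
    B = Sw + alpha * X.T @ L @ X + 1e-6 * np.eye(X.shape[1])
    eigvals, eigvecs = np.linalg.eig(np.linalg.solve(B, A))
    order = np.argsort(-eigvals.real)                # largest eigenvalues first
    return eigvecs[:, order[:n_components]].real     # projection matrix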


IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing | 2015

Processing of Multiresolution Thermal Hyperspectral and Digital Color Data: Outcome of the 2014 IEEE GRSS Data Fusion Contest

Wenzhi Liao; Xin Huang; Frieke Van Coillie; Sidharta Gautama; Aleksandra Pizurica; Wilfried Philips; Hui Liu; Tingting Zhu; Michal Shimoni; Gabriele Moser; Devis Tuia

This paper reports the outcomes of the 2014 Data Fusion Contest organized by the Image Analysis and Data Fusion Technical Committee (IADF TC) of the IEEE Geoscience and Remote Sensing Society (IEEE GRSS). As in previous years, the IADF TC organized a data fusion contest aimed at fostering new ideas and solutions for multisource remote sensing studies. In the 2014 edition, participants considered multiresolution and multisensor fusion between optical data acquired at 20-cm resolution and long-wave (thermal) infrared hyperspectral data at 1-m resolution. The Contest was proposed as a double-track competition: one track aiming at accurate land-cover classification and the other seeking innovation in the fusion of thermal hyperspectral and color data. In this paper, the results obtained by the winners of both tracks are presented and discussed.


IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing | 2012

Classification of Hyperspectral Data Over Urban Areas Using Directional Morphological Profiles and Semi-Supervised Feature Extraction

Wenzhi Liao; Rik Bellens; Aleksandra Pizurica; Wilfried Philips; Youguo Pi

When using morphological features for the classification of high-resolution hyperspectral images of urban areas, two important issues should be considered. The first is that classical morphological openings and closings degrade object boundaries and deform object shapes. Morphological openings and closings by reconstruction can avoid this problem, but they lead to some undesirable effects: objects expected to disappear at a certain scale remain present when openings and closings by reconstruction are used. The second is that morphological profiles (MPs) built with different structuring elements and a range of increasing operator sizes produce high-dimensional data. These high-dimensional data may contain redundant information and pose a new challenge for conventional classification methods, especially for classifiers that are not robust to the Hughes phenomenon. In this paper, we first investigate morphological profiles with partial reconstruction and directional MPs for the classification of high-resolution hyperspectral images of urban areas. Second, we develop a semi-supervised feature extraction method to reduce the dimensionality of the generated morphological profiles for classification. Experimental results on real urban hyperspectral images demonstrate the efficiency of the considered techniques.
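
As a hedged illustration of the baseline the paper builds on, the sketch below computes a standard morphological profile with openings and closings by reconstruction on a single image component using scikit-image. Partial reconstruction and the directional profiles introduced in the paper are not reproduced, and the disk structuring elements of increasing radius are an illustrative choice.

# Morphological profile with openings/closings by reconstruction.
import numpy as np
from skimage.morphology import disk, erosion, dilation, reconstruction

def morphological_profile(band, radii=(2, 4, 6, 8)):
    profile = [band]
    for r in radii:
        se = disk(r)
        # opening by reconstruction: erode, then reconstruct under the original
        opened = reconstruction(erosion(band, se), band, method='dilation')
        # closing by reconstruction: dilate, then reconstruct above the original
        closed = reconstruction(dilation(band, se), band, method='erosion')
        profile.extend([opened, closed])
    return np.stack(profile, axis=-1)   # (H, W, 1 + 2 * len(radii)) feature cube

band = np.random.rand(64, 64)           # e.g. one rescaled principal component
features = morphological_profile(band)
print(features.shape)                   # (64, 64, 9)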


IEEE Geoscience and Remote Sensing Letters | 2015

Generalized Graph-Based Fusion of Hyperspectral and LiDAR Data Using Morphological Features

Wenzhi Liao; Aleksandra Pizurica; Rik Bellens; Sidharta Gautama; Wilfried Philips

Nowadays, we have diverse sensor technologies and image processing algorithms that allow one to measure different aspects of objects on the Earth [e.g., spectral characteristics in hyperspectral images (HSIs), height in light detection and ranging (LiDAR) data, and geometry in image processing technologies, such as morphological profiles (MPs)]. It is clear that no single technology is sufficient for a reliable classification, but combining many of them can lead to problems such as the curse of dimensionality, excessive computation time, and so on. Applying feature reduction techniques to all the features together is not effective either, because it does not take into account the differences in structure of the feature spaces. Decision fusion, on the other hand, has difficulties with modeling correlations between the different data sources. In this letter, we propose a generalized graph-based fusion method to couple dimension reduction and feature fusion of the spectral information (of the original HSI) and MPs (built on both the HSI and LiDAR data). In the proposed method, the edges of the fusion graph are weighted by the distance between the stacked feature points. This yields a clear improvement over an older approach with binary edges in the fusion graph. Experimental results on real HSI and LiDAR data demonstrate the effectiveness of the proposed method both visually and quantitatively.
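
A hedged sketch in the spirit of this kind of graph-based fusion (not the exact generalized formulation of the letter): stack the spectral and morphological features, weight graph edges with a heat kernel on distances between the stacked feature points, and obtain a joint projection from an LPP-style generalized eigenproblem. The neighborhood size, kernel width, and use of plain locality-preserving projections are illustrative assumptions.

# Distance-weighted fusion graph and a joint low-dimensional projection.
import numpy as np
from sklearn.neighbors import kneighbors_graph

def graph_fusion(spectral, morpho, n_components=15, k=10, sigma=1.0):
    X = np.hstack([spectral, morpho])                 # stacked feature points
    D2 = kneighbors_graph(X, k, mode='distance').toarray() ** 2
    W = np.where(D2 > 0, np.exp(-D2 / (2 * sigma ** 2)), 0.0)  # heat-kernel edges
    W = np.maximum(W, W.T)                            # symmetrize
    D = np.diag(W.sum(axis=1))
    L = D - W
    # minimize a^T X^T L X a subject to a^T X^T D X a = 1
    A = X.T @ L @ X
    B = X.T @ D @ X + 1e-6 * np.eye(X.shape[1])
    eigvals, eigvecs = np.linalg.eig(np.linalg.solve(B, A))
    order = np.argsort(eigvals.real)                  # smallest eigenvalues first
    P = eigvecs[:, order[:n_components]].real
    return X @ P                                      # fused low-dimensional features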


IEEE Geoscience and Remote Sensing Letters | 2015

Improving Random Forest With Ensemble of Features and Semisupervised Feature Extraction

Junshi Xia; Wenzhi Liao; Jocelyn Chanussot; Peijun Du; Guanghan Song; Wilfried Philips

In this letter, we propose a novel approach for improving Random Forest (RF) classification of hyperspectral images. The proposed approach combines an ensemble of features with the semisupervised feature extraction (SSFE) technique. The main contribution of our approach is to construct an ensemble of RF classifiers: the feature space is divided into several disjoint feature subspaces, and the feature subspaces induced by the SSFE technique are then used as the input spaces of the individual RF classifiers. This method is compared with a regular RF and an RF trained on features reduced by the SSFE on two real hyperspectral data sets, showing improved performance in ill-posed, poorly posed, and well-posed conditions. An additional study shows that the proposed method is less sensitive to the parameters.
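
A minimal sketch of the ensemble idea under stated assumptions: the spectral bands are split into disjoint subsets, each subset is reduced, and one Random Forest is trained per subset, with class probabilities averaged at prediction time. PCA stands in here for the SSFE step described in the letter, and the split into contiguous band groups is an illustrative choice.

# Ensemble of Random Forests over disjoint, reduced feature subspaces.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier

def train_rf_ensemble(X, y, n_subspaces=4, n_components=10, n_trees=100):
    band_groups = np.array_split(np.arange(X.shape[1]), n_subspaces)
    ensemble = []
    for bands in band_groups:
        reducer = PCA(n_components=min(n_components, len(bands))).fit(X[:, bands])
        rf = RandomForestClassifier(n_estimators=n_trees)
        rf.fit(reducer.transform(X[:, bands]), y)
        ensemble.append((bands, reducer, rf))
    return ensemble

def predict_rf_ensemble(ensemble, X):
    # average the class probabilities of the individual forests
    proba = sum(rf.predict_proba(reducer.transform(X[:, bands]))
                for bands, reducer, rf in ensemble) / len(ensemble)
    return ensemble[0][2].classes_[np.argmax(proba, axis=1)]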


IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing | 2016

Fusion of Spectral and Spatial Information for Classification of Hyperspectral Remote-Sensed Imagery by Local Graph

Wenzhi Liao; Mauro Dalla Mura; Jocelyn Chanussot; Aleksandra Pižurica

Hyperspectral imagery contains a wealth of spectral and spatial information that can improve target detection and recognition performance. Conventional feature extraction methods cannot fully exploit both spectral and spatial information. Data fusion by simply stacking different feature sources together does not take into account the differences between feature sources. In this paper, a local graph-based fusion (LGF) method is proposed to couple dimension reduction and feature fusion of the spectral information (i.e., the spectra in the HS image) and the spatial information [extracted by morphological profiles (MPs)]. In the proposed method, the fusion graph is built on the full data by moving a sliding window from the first pixel to the last one. This yields a clear improvement over a previous approach in which the fusion graph was built on randomly selected samples. Experimental results on real hyperspectral images are very encouraging. Compared to methods using only a single feature source and to stacking all the features together, the proposed LGF method improves the overall classification accuracy on one of the data sets by more than 20% and 5%, respectively.
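
A hedged sketch of building a local fusion graph with a sliding spatial window: every pixel is connected to the pixels inside a small window around it, with edge weights given by a heat kernel on the distance between their stacked feature vectors. The window size and kernel width are illustrative assumptions, not the settings used in the paper.

# Sparse affinity graph from a sliding spatial window over the image.
import numpy as np
from scipy.sparse import lil_matrix

def local_window_graph(features, height, width, win=3, sigma=1.0):
    """features: (height * width, d) stacked spectral + spatial features,
    stored row by row. Returns a sparse symmetric affinity matrix."""
    n = height * width
    W = lil_matrix((n, n))
    half = win // 2
    for r in range(height):
        for c in range(width):
            i = r * width + c
            for dr in range(-half, half + 1):
                for dc in range(-half, half + 1):
                    rr, cc = r + dr, c + dc
                    if (dr or dc) and 0 <= rr < height and 0 <= cc < width:
                        j = rr * width + cc
                        d2 = np.sum((features[i] - features[j]) ** 2)
                        W[i, j] = np.exp(-d2 / (2 * sigma ** 2))  # heat-kernel weight
    W = W.tocsr()
    return W.maximum(W.T)   # symmetrize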


IEEE Transactions on Geoscience and Remote Sensing | 2016

Morphological Attribute Profiles With Partial Reconstruction

Wenzhi Liao; Mauro Dalla Mura; Jocelyn Chanussot; Rik Bellens; Wilfried Philips

Extended attribute profiles (EAPs) have been widely used for the classification of high-resolution hyperspectral images. EAPs are obtained by applying a sequence of attribute filters (AFs). AFs are connected operators, so they can modify an image only by merging its flat zones. These filters are effective when dealing with very high resolution images, since they preserve the geometrical characteristics of the regions that are not removed from the image. However, AFs, being connected filters, suffer from the problem of “leakage” (i.e., regions related to different structures in the image that happen to be connected by spurious links are treated as a single object). Objects expected to disappear at a certain threshold remain present when they are connected with other objects in the image, and the attributes of small objects are mixed with those of the larger objects they are connected to. In this paper, we propose a novel framework for morphological AFs with partial reconstruction and extend it to the classification of high-resolution hyperspectral images. The ultimate goal of the proposed framework is to extract spatial features that better model the attributes of different objects in remotely sensed imagery, which enables better classification performance. An important characteristic of the presented approach is that it is very robust to the ranges of the rescaled principal components, as well as to the selection of attribute values. Our experimental results, conducted on a variety of hyperspectral images, indicate that the proposed framework for AFs with partial reconstruction provides state-of-the-art classification results. Compared to methods using only a single EAP and to stacking all EAPs computed by existing attribute openings and closings, the proposed framework yields significant improvements in overall classification accuracy.
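
As a hedged point of reference, the sketch below builds a standard attribute profile with area openings and closings at increasing area thresholds on a single image component using scikit-image. These are fully connected attribute filters, so the leakage issue discussed in the paper is not addressed and partial reconstruction is not reproduced; the thresholds are illustrative.

# Attribute profile from area openings/closings on one grayscale component.
import numpy as np
from skimage.morphology import area_opening, area_closing

def attribute_profile(band, area_thresholds=(50, 200, 800)):
    profile = [band]
    for t in area_thresholds:
        profile.append(area_opening(band, area_threshold=t))   # remove bright regions < t pixels
        profile.append(area_closing(band, area_threshold=t))   # remove dark regions < t pixels
    return np.stack(profile, axis=-1)   # (H, W, 1 + 2 * len(area_thresholds))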


IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing | 2016

Feature Extraction of Hyperspectral Images With Semisupervised Graph Learning

Renbo Luo; Wenzhi Liao; Xin Huang; Youguo Pi; Wilfried Philips

We propose a semisupervised graph learning (SEGL) method for feature extraction of hyperspectral remote sensing imagery in this paper. The proposed SEGL method aims to build a semisupervised graph that maximizes class discrimination and preserves local neighborhood information by combining labeled and unlabeled samples. In our semisupervised graph, labeled samples are connected according to their label information, and unlabeled samples are connected according to their nearest-neighborhood information: by sorting the mean distances between an unlabeled sample and the labeled samples of each class, we connect the unlabeled sample with all labeled samples belonging to its nearest class. Moreover, the proposed SEGL better models the actual differences and similarities between samples by assigning different weights to the edges of connected samples. Experimental results on four real hyperspectral images (HSIs) demonstrate the advantages of our method compared to related feature extraction methods.
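
A hedged sketch of the graph construction described above: labeled samples of the same class are connected, and each unlabeled sample is connected to all labeled samples of the class whose labeled set has the smallest mean distance to it. The heat-kernel edge weights are an illustrative choice for the weighting scheme, not necessarily the weights used in the paper.

# Semisupervised affinity graph over labeled and unlabeled samples.
import numpy as np
from scipy.spatial.distance import cdist

def segl_graph(X_lab, y_lab, X_unlab, sigma=1.0):
    X = np.vstack([X_lab, X_unlab])
    n_lab, n = len(X_lab), len(X)
    W = np.zeros((n, n))
    heat = lambda d2: np.exp(-d2 / (2 * sigma ** 2))

    # connect labeled samples that share a label
    same = (y_lab[:, None] == y_lab[None, :]) & ~np.eye(n_lab, dtype=bool)
    d2_lab = cdist(X_lab, X_lab, 'sqeuclidean')
    W[:n_lab, :n_lab] = np.where(same, heat(d2_lab), 0.0)

    # connect each unlabeled sample to the labeled samples of its nearest class
    classes = np.unique(y_lab)
    d2 = cdist(X_unlab, X_lab, 'sqeuclidean')
    for i, row in enumerate(d2):
        mean_dist = [row[y_lab == c].mean() for c in classes]
        nearest = classes[int(np.argmin(mean_dist))]
        idx = np.where(y_lab == nearest)[0]
        W[n_lab + i, idx] = heat(row[idx])
        W[idx, n_lab + i] = heat(row[idx])
    return W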

Collaboration


Dive into Wenzhi Liao's collaborations.

Top Co-Authors

Youguo Pi
South China University of Technology

Jocelyn Chanussot
Centre national de la recherche scientifique

Bing Zhang
Chinese Academy of Sciences