Publication


Featured research published by Liangpei Zhang.


IEEE Transactions on Geoscience and Remote Sensing | 2013

An SVM Ensemble Approach Combining Spectral, Structural, and Semantic Features for the Classification of High-Resolution Remotely Sensed Imagery

Xin Huang; Liangpei Zhang

In recent years, the resolution of remotely sensed imagery has become increasingly high in both the spectral and spatial domains, simultaneously providing richer spectral and spatial information. Accordingly, the accurate interpretation of high-resolution imagery depends on the effective integration of the spectral, structural, and semantic features contained in the images. In this paper, we propose a new multifeature model that constructs a support vector machine (SVM) ensemble combining multiple spectral and spatial features at both the pixel and object levels. The features employed in this study include the gray-level co-occurrence matrix, differential morphological profiles, and an urban complexity index. Three algorithms are then proposed to integrate the multifeature SVMs: certainty voting, probabilistic fusion, and an object-based semantic approach. The proposed algorithms are compared with other multifeature SVM methods, including vector stacking, feature selection, and composite kernels. Experiments are conducted on the Hyperspectral Digital Imagery Collection Experiment (HYDICE) DC Mall data set and two WorldView-2 data sets. The multifeature model with semantic-based postprocessing is found to provide more accurate classification results (an accuracy improvement of 1-4% across the three experimental data sets) than the voting and probabilistic models.
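The probabilistic-fusion idea can be illustrated in a few lines: train one SVM per feature group and average their class-probability outputs. The sketch below is not the authors' algorithm; the two feature groups are synthetic stand-ins (e.g. for spectral bands and GLCM-style structural measures), and all parameters are illustrative.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Hypothetical stand-ins for two feature groups describing the same 200 pixels.
n = 200
y = rng.integers(0, 2, n)
spectral = y[:, None] + 0.8 * rng.standard_normal((n, 10))
structural = y[:, None] + 1.2 * rng.standard_normal((n, 5))

# One SVM per feature group; probability=True enables Platt-scaled soft outputs.
svms = [SVC(kernel="rbf", probability=True, random_state=0).fit(X, y)
        for X in (spectral, structural)]

def probabilistic_fusion(feature_sets):
    # Average the per-feature posterior estimates, then take the argmax class.
    probs = np.mean([m.predict_proba(X)
                     for m, X in zip(svms, feature_sets)], axis=0)
    return probs.argmax(axis=1)

pred = probabilistic_fusion((spectral, structural))
print("training accuracy:", (pred == y).mean())
```

Certainty voting would instead compare each member's maximum posterior and keep the most confident vote; the averaging above is the simplest soft-fusion variant.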


IEEE Transactions on Geoscience and Remote Sensing | 2012

On Combining Multiple Features for Hyperspectral Remote Sensing Image Classification

Lefei Zhang; Liangpei Zhang; Dacheng Tao; Xin Huang

In hyperspectral remote sensing image classification, multiple features, e.g., spectral, texture, and shape features, are employed to represent pixels from different perspectives. It has been widely acknowledged that properly combining multiple features generally results in good classification performance. In this paper, we introduce the patch alignment framework to linearly combine multiple features in an optimal way and obtain a unified low-dimensional representation of these multiple features for subsequent classification. Each feature makes a particular contribution to the unified representation, determined by simultaneously optimizing the weights in the objective function. This scheme considers the specific statistical properties of each feature to achieve a physically meaningful unified low-dimensional representation. Experiments on the classification of the Hyperspectral Digital Imagery Collection Experiment (HYDICE) and Reflective Optics System Imaging Spectrometer (ROSIS) hyperspectral data sets suggest that the scheme is effective.
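As a much-simplified illustration of the goal (one low-dimensional representation built from several weighted feature groups), the sketch below normalizes each synthetic feature block, applies fixed weights, and projects the concatenation with PCA. The paper's patch alignment framework learns the weights jointly with the embedding, which is not reproduced here; the data and weights are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n_pixels = 500
spectral = rng.standard_normal((n_pixels, 50))   # hypothetical spectra
texture  = rng.standard_normal((n_pixels, 16))   # hypothetical texture codes
shape    = rng.standard_normal((n_pixels, 8))    # hypothetical shape cues

def unify(features, weights, n_components=10):
    # z-score each feature block so scales are comparable, apply its weight,
    # concatenate, then project onto the top principal components.
    blocks = []
    for X, w in zip(features, weights):
        Xz = (X - X.mean(0)) / (X.std(0) + 1e-8)
        blocks.append(w * Xz)
    Z = np.hstack(blocks)
    Zc = Z - Z.mean(0)
    _, _, Vt = np.linalg.svd(Zc, full_matrices=False)   # PCA via SVD
    return Zc @ Vt[:n_components].T

low_dim = unify([spectral, texture, shape], weights=[0.5, 0.3, 0.2])
print(low_dim.shape)  # (500, 10)
```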


Remote Sensing | 2015

Transferring Deep Convolutional Neural Networks for the Scene Classification of High-Resolution Remote Sensing Imagery

Fan Hu; Gui-Song Xia; Jingwen Hu; Liangpei Zhang

Learning efficient image representations is at the core of the scene classification task of remote sensing imagery. The existing methods for solving the scene classification task, based on either feature coding approaches with low-level hand-engineered features or unsupervised feature learning, can only generate mid-level image features with limited representative ability, which essentially prevents them from achieving better performance. Recently, the deep convolutional neural networks (CNNs), which are hierarchical architectures trained on large-scale datasets, have shown astounding performance in object recognition and detection. However, it is still not clear how to use these deep convolutional neural networks for high-resolution remote sensing (HRRS) scene classification. In this paper, we investigate how to transfer features from these successfully pre-trained CNNs for HRRS scene classification. We propose two scenarios for generating image features via extracting CNN features from different layers. In the first scenario, the activation vectors extracted from fully-connected layers are regarded as the final image features; in the second scenario, we extract dense features from the last convolutional layer at multiple scales and then encode the dense features into global image features through commonly used feature coding approaches. Extensive experiments on two public scene classification datasets demonstrate that the image features obtained by the two proposed scenarios, even with a simple linear classifier, can result in remarkable performance and improve the state-of-the-art by a significant margin. The results reveal that the features from pre-trained CNNs generalize well to HRRS datasets and are more expressive than the low- and mid-level features. Moreover, we tentatively combine features extracted from different CNN models for better performance.
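The second scenario (dense local features encoded into a global descriptor) can be sketched with the simplest coding approach, a bag of visual words. The arrays below are random stand-ins for dense last-convolutional-layer CNN activations, which the paper assumes are extracted from a pre-trained network; no CNN is run here.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)

# Hypothetical dense features: 20 images, 64 local descriptors each, 32-D.
dense = rng.standard_normal((20, 64, 32))

# Learn a codebook over all local descriptors.
codebook = KMeans(n_clusters=16, n_init=4,
                  random_state=0).fit(dense.reshape(-1, 32))

def encode(image_feats):
    # Hard-assign each local descriptor to its nearest codeword and
    # accumulate an L1-normalized histogram as the global image feature.
    words = codebook.predict(image_feats)
    hist = np.bincount(words, minlength=16).astype(float)
    return hist / hist.sum()

global_feats = np.stack([encode(f) for f in dense])
print(global_feats.shape)  # (20, 16)
```

The resulting fixed-length vectors could then feed a linear classifier; richer coders (VLAD, Fisher vectors) follow the same extract-then-encode pattern.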


IEEE Transactions on Geoscience and Remote Sensing | 2012

Hyperspectral Image Denoising Employing a Spectral–Spatial Adaptive Total Variation Model

Qiangqiang Yuan; Liangpei Zhang; Huanfeng Shen

The amount of noise in a hyperspectral image limits its application and has a negative impact on hyperspectral image classification, unmixing, target detection, and so on. Because the noise intensity differs between bands, the denoising strength should be adaptively adjusted to the noise intensity of each band, so that noise is better suppressed in the high-noise-intensity bands while detailed information is preserved in the low-noise-intensity bands. Within a single band, there also exist regions with different spatial properties, such as homogeneous regions and edge or texture regions; to better reduce the noise in the homogeneous regions and preserve the edge and texture information, the denoising strength applied to pixels in these different regions should likewise differ. We therefore propose a hyperspectral image denoising algorithm employing a spectral-spatial adaptive total variation (TV) model, in which both the spectral noise differences and the spatial information differences are considered in the noise reduction process. To reduce the computational load, the split Bregman iteration algorithm is employed to optimize the spectral-spatial hyperspectral TV model and accelerate the denoising. A number of experiments illustrate that the proposed approach satisfactorily realizes the spectral-spatial adaptive mechanism in the denoising process and produces superior denoising results.
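The band-adaptive part of the idea can be shown with a toy two-band example: each band gets its own fidelity weight, so noisier bands are smoothed harder. This sketch uses plain gradient descent on a smoothed TV energy rather than the paper's split Bregman solver, and the adaptive rule tying the weight to the noise level is an assumption for illustration.

```python
import numpy as np

def tv_denoise_band(f, lam, n_iter=400, step=0.02, eps=1e-2):
    # Gradient descent on TV_eps(u) + (lam/2) * ||u - f||^2.
    u = f.copy()
    for _ in range(n_iter):
        ux = np.diff(u, axis=1, append=u[:, -1:])   # forward differences
        uy = np.diff(u, axis=0, append=u[-1:, :])
        mag = np.sqrt(ux**2 + uy**2 + eps)
        px, py = ux / mag, uy / mag
        # divergence of (px, py) via backward differences (periodic shortcut)
        div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
        u = u + step * (div - lam * (u - f))
    return u

rng = np.random.default_rng(3)
clean = np.zeros((32, 32)); clean[8:24, 8:24] = 1.0
sigmas = [0.05, 0.3]                      # a low-noise band and a high-noise band
bands = [clean + s * rng.standard_normal(clean.shape) for s in sigmas]

# Band-adaptive rule (illustrative assumption): the fidelity weight lam falls
# as the band's noise level rises, so noisy bands receive stronger smoothing.
denoised = [tv_denoise_band(b, lam=1.0 / (3 * s)) for b, s in zip(bands, sigmas)]
for b, d in zip(bands, denoised):
    print("MSE before/after:", np.mean((b - clean)**2), np.mean((d - clean)**2))
```

The spatially adaptive half of the model would further vary the weight per pixel (homogeneous vs. edge/texture regions), which this toy omits.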


IEEE Transactions on Geoscience and Remote Sensing | 2014

Hyperspectral Image Restoration Using Low-Rank Matrix Recovery

Hongyan Zhang; Wei He; Liangpei Zhang; Huanfeng Shen; Qiangqiang Yuan

Hyperspectral images (HSIs) are often degraded by a mixture of various kinds of noise in the acquisition process, which can include Gaussian noise, impulse noise, dead lines, stripes, and so on. This paper introduces a new HSI restoration method based on low-rank matrix recovery (LRMR), which can simultaneously remove the Gaussian noise, impulse noise, dead lines, and stripes. By lexicographically ordering a patch of the HSI into a 2-D matrix, the low-rank property of the hyperspectral imagery is explored, which suggests that a clean HSI patch can be regarded as a low-rank matrix. We then formulate the HSI restoration problem into an LRMR framework. To further remove the mixed noise, the “Go Decomposition” algorithm is applied to solve the LRMR problem. Several experiments were conducted in both simulated and real data conditions to verify the performance of the proposed LRMR-based HSI restoration method.
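The decomposition at the heart of LRMR can be sketched in GoDec style: alternately fit a rank-r term L by truncated SVD and a sparse term S by keeping the largest-magnitude residual entries, so that D ≈ L + S. The synthetic "HSI patch" below, and the rank and cardinality choices, are assumptions for illustration; the real method operates on lexicographically ordered image patches.

```python
import numpy as np

def lrmr(D, rank, card, n_iter=25):
    L = np.zeros_like(D)
    S = np.zeros_like(D)
    for _ in range(n_iter):
        # L-step: best rank-r approximation of D - S (truncated SVD)
        U, s, Vt = np.linalg.svd(D - S, full_matrices=False)
        L = (U[:, :rank] * s[:rank]) @ Vt[:rank]
        # S-step: keep only the `card` largest-magnitude residual entries
        R = D - L
        S = np.zeros_like(D)
        idx = np.unravel_index(np.argsort(np.abs(R), axis=None)[-card:], D.shape)
        S[idx] = R[idx]
    return L, S

rng = np.random.default_rng(4)
# Synthetic patch: a rank-3 "clean" matrix plus sparse impulse-like noise.
L_true = rng.standard_normal((60, 3)) @ rng.standard_normal((3, 40))
S_true = np.zeros((60, 40))
mask = rng.choice(60 * 40, size=120, replace=False)
S_true.flat[mask] = 10 * rng.standard_normal(120)
D = L_true + S_true

L_hat, S_hat = lrmr(D, rank=3, card=120)
print("relative error:",
      np.linalg.norm(L_hat - L_true) / np.linalg.norm(L_true))
```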


IEEE Geoscience and Remote Sensing Magazine | 2016

Deep Learning for Remote Sensing Data: A Technical Tutorial on the State of the Art

Liangpei Zhang; Lefei Zhang; Bo Du

Deep-learning (DL) algorithms, which learn the representative and discriminative features in a hierarchical manner from the data, have recently become a hotspot in the machine-learning area and have been introduced into the geoscience and remote sensing (RS) community for RS big data analysis. Considering the low-level features (e.g., spectral and texture) as the bottom level, the output feature representation from the top level of the network can be directly fed into a subsequent classifier for pixel-based classification. As a matter of fact, by carefully addressing the practical demands in RS applications and designing the input-output levels of the whole network, we have found that DL is actually everywhere in RS data analysis: from the traditional topics of image preprocessing, pixel-based classification, and target recognition, to the recent challenging tasks of high-level semantic feature extraction and RS scene understanding. In this technical tutorial, a general framework of DL for RS data is provided, and the state-of-the-art DL methods in RS are regarded as special cases of input-output data combined with various deep networks and tuning tricks. Although extensive experimental results confirm the excellent performance of the DL-based algorithms in RS big data analysis, even more exciting prospects can be expected for DL in RS. Key bottlenecks and potential directions are also indicated in this article, guiding further research into DL for RS data.


IEEE Transactions on Geoscience and Remote Sensing | 2015

Saliency-Guided Unsupervised Feature Learning for Scene Classification

Fan Zhang; Bo Du; Liangpei Zhang

Due to the rapid technological development of various satellite sensors, a huge volume of high-resolution image data sets can now be acquired. How to efficiently represent and recognize the scenes in such high-resolution image data has become a critical task. In this paper, we propose an unsupervised feature learning framework for scene classification. Using a saliency detection algorithm, we extract a representative set of patches from the salient regions in the image data set. These unlabeled patches are exploited by an unsupervised feature learning method to learn a set of feature extractors that are robust and efficient and do not require elaborately designed descriptors such as the scale-invariant feature transform (SIFT). We show that the statistics generated from the learned feature extractors can characterize a complex scene very well and produce excellent classification accuracy. To reduce overfitting in the feature learning step, we further employ a recently developed regularization method called “dropout,” which has proved very effective in image classification. In the experiments, the proposed method was applied to two challenging high-resolution data sets: the UC Merced data set, containing 21 aerial scene categories with submeter resolution, and the Sydney data set, containing seven land-use categories with 60-cm spatial resolution. The proposed method obtained results equal to or better than the previous best results on the UC Merced data set, and it also obtained the highest accuracy on the Sydney data set, demonstrating that the proposed unsupervised-feature-learning-based scene classification method provides more accurate results than the latent-Dirichlet-allocation-based methods and the sparse coding method.
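The unsupervised feature-learning step can be caricatured with k-means on image patches, a common stand-in for such learners: cluster centroids act as filters, and an image is described by its mean filter responses. This toy samples patches at random locations (the paper samples from salient regions) and omits the dropout regularization; the images and all sizes are hypothetical.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(6)
images = rng.random((10, 32, 32))        # hypothetical scene images

def extract_patches(img, size=6, n=50):
    # Randomly located patches; the paper draws them from salient regions.
    ys = rng.integers(0, img.shape[0] - size, n)
    xs = rng.integers(0, img.shape[1] - size, n)
    P = np.stack([img[y:y + size, x:x + size].ravel()
                  for y, x in zip(ys, xs)])
    # per-patch contrast normalization
    return (P - P.mean(1, keepdims=True)) / (P.std(1, keepdims=True) + 1e-8)

patches = np.vstack([extract_patches(im) for im in images])
filters = KMeans(n_clusters=8, n_init=4,
                 random_state=0).fit(patches).cluster_centers_

def describe(img):
    # Image-level feature: mean response of each learned filter over patches.
    responses = extract_patches(img) @ filters.T      # (n_patches, 8)
    return responses.mean(axis=0)

feats = np.stack([describe(im) for im in images])
print(feats.shape)  # (10, 8)
```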


IEEE Transactions on Geoscience and Remote Sensing | 2013

Tensor Discriminative Locality Alignment for Hyperspectral Image Spectral–Spatial Feature Extraction

Liangpei Zhang; Lefei Zhang; Dacheng Tao; Xin Huang

In this paper, we propose a method for the dimensionality reduction (DR) of spectral-spatial features in hyperspectral images (HSIs), under the umbrella of multilinear algebra, i.e., the algebra of tensors. The proposed approach is a tensor extension of conventional supervised manifold-learning-based DR. In particular, we define a tensor organization scheme for representing a pixel's spectral-spatial feature and develop tensor discriminative locality alignment (TDLA) for removing redundant information for subsequent classification. The optimal solution of TDLA is obtained by alternately optimizing each mode of the input tensors. The methods are tested on three public real HSI data sets collected by the Hyperspectral Digital Imagery Collection Experiment (HYDICE), the Reflective Optics System Imaging Spectrometer (ROSIS), and the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS). The classification results show significant improvements in classification accuracy while using a small number of features.
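The alternating per-mode structure can be sketched with a Tucker/HOSVD-style projection, which shares TDLA's shape (one projection matrix per tensor mode, refined in turn) but not its discriminative objective. Each pixel's feature is a small (window x window x bands) tensor; all dimensions below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)
X = rng.standard_normal((100, 5, 5, 12))   # 100 pixels, 5x5 window, 12 bands
out_dims = (3, 3, 4)                       # target size per mode

def mode_unfold(t, mode):
    # Unfold one sample tensor along `mode` (0-based, excluding sample axis).
    return np.moveaxis(t, mode, 0).reshape(t.shape[mode], -1)

# Initialize the per-mode projections with truncated identities.
U = [np.eye(d)[:, :k] for d, k in zip(X.shape[1:], out_dims)]

for _ in range(3):                          # a few alternating sweeps
    for m in range(3):
        # Project all other modes, then refit mode m's basis by SVD.
        Z = X
        for j, Uj in enumerate(U):
            if j == m:
                continue
            Z = np.moveaxis(np.tensordot(Z, Uj, axes=([j + 1], [0])), -1, j + 1)
        M = np.concatenate([mode_unfold(t, m) for t in Z], axis=1)
        u, _, _ = np.linalg.svd(M, full_matrices=False)
        U[m] = u[:, :out_dims[m]]

# Final low-dimensional tensor features for every pixel.
Y = X
for j, Uj in enumerate(U):
    Y = np.moveaxis(np.tensordot(Y, Uj, axes=([j + 1], [0])), -1, j + 1)
print(Y.shape)  # (100, 3, 3, 4)
```

TDLA would replace the SVD step with a supervised locality-alignment criterion; the alternation over modes is the same.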


IEEE Transactions on Geoscience and Remote Sensing | 2008

An Adaptive Mean-Shift Analysis Approach for Object Extraction and Classification From Urban Hyperspectral Imagery

Xin Huang; Liangpei Zhang

In this paper, an adaptive mean-shift (MS) analysis framework is proposed for object extraction and classification of hyperspectral imagery over urban areas. The basic idea is to apply MS to obtain an object-oriented representation of the hyperspectral data and then use a support vector machine to interpret the feature set. In order to employ MS for hyperspectral data effectively, a feature-extraction algorithm, nonnegative matrix factorization, is utilized to reduce the high-dimensional feature space. Furthermore, two bandwidth-selection algorithms are proposed for the MS procedure: one is based on local structures, and the other exploits separability analysis. Experiments are conducted on two hyperspectral data sets: the Hyperspectral Digital Imagery Collection Experiment (HYDICE) DC Mall image and the Purdue campus Hyperspectral Mapper (HyMap) images. We evaluate and compare the proposed approach with the well-known commercial software eCognition (an object-based analysis approach) and an effective spectral/spatial classifier for hyperspectral data, namely, the derivative of the morphological profile. Experimental results show that the proposed MS-based analysis system is robust and clearly outperforms the other methods.
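The first two stages of the pipeline (NMF dimensionality reduction, then mean-shift segmentation) can be sketched on a tiny synthetic cube. The "materials", the number of components, and the fixed bandwidth below are assumptions for illustration; the paper selects the bandwidth adaptively from local structure or separability analysis.

```python
import numpy as np
from sklearn.decomposition import NMF
from sklearn.cluster import MeanShift

rng = np.random.default_rng(5)
# Two hypothetical "materials" mixed into 30 spectral bands; 100 pure
# pixels of each, plus a little noise, stand in for an urban scene.
endmembers = rng.random((2, 30)) + 0.1
abund = np.repeat(np.eye(2), 100, axis=0)               # (200, 2)
cube = abund @ endmembers + 0.01 * rng.random((200, 30))

# Stage 1: NMF reduces the 30-band pixels to 2 nonnegative components.
reduced = NMF(n_components=2, init="nndsvda", max_iter=500,
              random_state=0).fit_transform(cube)

# Stage 2: mean shift groups pixels in the reduced feature space; the
# fixed bandwidth here replaces the paper's adaptive selection.
labels = MeanShift(bandwidth=float(np.ptp(reduced)) / 4).fit_predict(reduced)
print("number of segments:", len(set(labels)))
```

The resulting segments would then be fed, as object-level features, to an SVM classifier.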


EURASIP Journal on Advances in Signal Processing | 2007

A Total Variation Regularization Based Super-Resolution Reconstruction Algorithm for Digital Video

Michael K. Ng; Huanfeng Shen; Edmund Y. Lam; Liangpei Zhang

Super-resolution (SR) reconstruction techniques are capable of producing a high-resolution image from a sequence of low-resolution images. In this paper, we study an efficient SR algorithm for digital video. To effectively deal with the intractable problems in SR video reconstruction, such as inevitable motion estimation errors, noise, blurring, missing regions, and compression artifacts, total variation (TV) regularization is employed in the reconstruction model. We use the fixed-point iteration method and preconditioning techniques to efficiently solve the associated nonlinear Euler-Lagrange equations of the corresponding variational problem. The proposed algorithm has been tested under several cases of motion and degradation, and is compared with the Laplacian regularization-based SR algorithm and other TV-based SR algorithms. Experimental results illustrate the effectiveness of the proposed algorithm.
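A stripped-down single-frame version of TV-regularized SR can be written with a known 2x average-pooling observation model: minimize a data-fidelity term plus a smoothed TV term by gradient descent. The paper handles multi-frame video with motion, blur, and compression artifacts, and uses fixed-point/preconditioned solvers instead of the plain descent shown here; the operator and all weights are illustrative assumptions.

```python
import numpy as np

def down(u):                              # 2x2 average pooling (decimation)
    return u.reshape(u.shape[0] // 2, 2, u.shape[1] // 2, 2).mean(axis=(1, 3))

def up(r):                                # adjoint of `down`
    return np.repeat(np.repeat(r, 2, axis=0), 2, axis=1) / 4.0

def tv_grad(u, eps=1e-2):
    # gradient of a smoothed total-variation term
    ux = np.diff(u, axis=1, append=u[:, -1:])
    uy = np.diff(u, axis=0, append=u[-1:, :])
    mag = np.sqrt(ux**2 + uy**2 + eps)
    px, py = ux / mag, uy / mag
    return -((px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0)))

rng = np.random.default_rng(8)
clean = np.zeros((32, 32)); clean[8:24, 8:24] = 1.0
f = down(clean) + 0.02 * rng.standard_normal((16, 16))   # noisy LR observation

u = np.repeat(np.repeat(f, 2, axis=0), 2, axis=1)        # naive upsample init
for _ in range(200):
    # descend on (1/2)||down(u) - f||^2 + 0.02 * TV_eps(u)
    u -= 0.5 * (up(down(u) - f) + 0.02 * tv_grad(u))

print("reconstruction MSE:", np.mean((u - clean)**2))
```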
