Publication


Featured research published by Renlong Hang.


IEEE Transactions on Geoscience and Remote Sensing | 2016

Matrix-Based Discriminant Subspace Ensemble for Hyperspectral Image Spatial–Spectral Feature Fusion

Renlong Hang; Qingshan Liu; Huihui Song; Yubao Sun

Spatial-spectral feature fusion is well acknowledged as an effective method for hyperspectral (HS) image classification, and many previous studies have been devoted to this subject. However, these methods often regard the spatial-spectral high-dimensional data as a 1-D vector and then extract informative features for classification. In this paper, we propose a new HS image classification method. Specifically, a matrix-based spatial-spectral feature representation is designed for each pixel to capture the local spatial context and the spectral information of all the bands, which can well preserve the spatial-spectral correlation. Then, matrix-based discriminant analysis is adopted to learn the discriminative feature subspace for classification. To further improve the performance of the discriminative subspace, a random sampling technique is used to produce a subspace ensemble for the final HS image classification. Experiments are conducted on three HS remote sensing data sets acquired by different sensors, and the experimental results demonstrate the effectiveness of the proposed method.
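
A minimal sketch of the two main ingredients described above, not the authors' code: building one bands x neighbours matrix per pixel, and approximating the random-sampling subspace ensemble with LDA classifiers trained on random training subsets and combined by majority vote. The patch size, ensemble size, and the use of vectorized LDA instead of matrix-based discriminant analysis are simplifying assumptions.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from scipy.stats import mode

def pixel_matrices(cube, coords, w=5):
    """Return one (bands, w*w) matrix per pixel coordinate of an HxWxbands cube."""
    pad = w // 2
    padded = np.pad(cube, ((pad, pad), (pad, pad), (0, 0)), mode="reflect")
    mats = []
    for r, c in coords:
        patch = padded[r:r + w, c:c + w, :]              # (w, w, bands) neighbourhood
        mats.append(patch.reshape(-1, cube.shape[2]).T)  # (bands, w*w) matrix feature
    return np.stack(mats)

def ensemble_predict(train_feats, train_labels, test_feats, n_models=10, rate=0.7, seed=0):
    """Train LDA models on random subsamples and fuse them by majority vote."""
    rng = np.random.default_rng(seed)
    votes = []
    for _ in range(n_models):
        idx = rng.choice(len(train_feats), int(rate * len(train_feats)), replace=False)
        clf = LinearDiscriminantAnalysis()
        clf.fit(train_feats[idx].reshape(len(idx), -1), train_labels[idx])
        votes.append(clf.predict(test_feats.reshape(len(test_feats), -1)))
    return mode(np.stack(votes), axis=0, keepdims=False).mode
```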


IEEE Transactions on Geoscience and Remote Sensing | 2018

Learning Multiscale Deep Features for High-Resolution Satellite Image Scene Classification

Qingshan Liu; Renlong Hang; Huihui Song; Zhi Li

In this paper, we propose a multiscale deep feature learning method for high-resolution satellite image scene classification. Specifically, we first warp the original satellite image into multiple different scales. The images at each scale are employed to train a deep convolutional neural network (DCNN). However, simultaneously training multiple DCNNs is time-consuming. To address this issue, we explore a DCNN with spatial pyramid pooling (SPP-net). Since different SPP-nets have the same number of parameters and share identical initial values, only the parameters in the fully connected layers need to be fine-tuned to ensure the effectiveness of each network, which greatly accelerates the training process. Then, the multiscale satellite images are fed into their corresponding SPP-nets to extract multiscale deep features. Finally, a multiple kernel learning method is developed to automatically learn the optimal combination of such features. Experiments on two difficult data sets show that the proposed method achieves favorable performance compared with other state-of-the-art methods.
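
A minimal sketch of the multiscale idea, not the paper's SPP-net/MKL pipeline: a spatial pyramid pooling layer that turns feature maps from any input scale into a fixed-length vector, applied to the same backbone at several warped image sizes. The toy backbone, scale set, and the kernel-combination comment are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SpatialPyramidPooling(nn.Module):
    """Pool feature maps on a 1x1, 2x2, and 4x4 grid and concatenate the results."""
    def __init__(self, levels=(1, 2, 4)):
        super().__init__()
        self.levels = levels

    def forward(self, x):                        # x: (N, C, H, W) for any H, W
        pooled = [F.adaptive_max_pool2d(x, l).flatten(1) for l in self.levels]
        return torch.cat(pooled, dim=1)          # fixed length: C * sum(l*l)

backbone = nn.Sequential(                        # toy backbone; a pretrained DCNN is assumed in practice
    nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
)
spp = SpatialPyramidPooling()

scales = [128, 192, 256]                         # warped image sizes (illustrative)
image = torch.rand(1, 3, 256, 256)
features = [spp(backbone(F.interpolate(image, size=(s, s)))) for s in scales]
# Every entry has the same length, so per-scale kernels can then be combined,
# e.g. K = sum(beta_s * rbf_kernel(F_s)) with the weights beta_s learned on validation data.
```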


IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing | 2017

Robust Matrix Discriminative Analysis for Feature Extraction From Hyperspectral Images

Renlong Hang; Qingshan Liu; Yubao Sun; Xiaotong Yuan; Hucheng Pei; Javier Plaza; Antonio Plaza

Linear discriminant analysis (LDA) is an effective feature extraction method for hyperspectral image (HSI) classification. Most of the existing LDA-related methods are based on spectral features, ignoring spatial information. Recently, a matrix discriminant analysis (MDA) model has been proposed to incorporate spatial information into LDA. However, due to sensor interference, calibration errors, and other issues, HSIs can be noisy, and such corrupted data easily degrade the performance of the MDA. In this paper, a robust MDA (RMDA) model is proposed to address this important issue. Specifically, based on the prior knowledge that the pixels in a small spatial neighborhood of the HSI lie in a low-rank subspace, a denoising model is first employed to recover the intrinsic components from the noisy HSI. Then, the MDA model is used to extract discriminative spatial-spectral features from the recovered components. Moreover, different HSIs exhibit different spatial contextual structures, and even a single HSI may contain both large and small homogeneous regions simultaneously. To sufficiently describe these multiscale spatial structures, a multiscale RMDA model is further proposed. Experiments have been conducted using three widely used HSIs, and the obtained results show that the proposed method allows for a significant improvement in classification performance when compared to other LDA-based methods.
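
A minimal sketch of the denoising step only, under explicit assumptions: a rank-r truncated SVD of each unfolded spatial neighborhood stands in for the paper's low-rank recovery model, and the patch size and rank are illustrative. The recovered center pixels would then feed a discriminant analysis stage.

```python
import numpy as np

def denoise_patch(patch, rank=3):
    """patch: (w, w, bands) -> low-rank approximation of the unfolded pixels x bands matrix."""
    w, _, bands = patch.shape
    X = patch.reshape(-1, bands)                     # (w*w, bands)
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    X_lr = (U[:, :rank] * s[:rank]) @ Vt[:rank]      # keep only the dominant subspace
    return X_lr.reshape(w, w, bands)

def denoise_cube(cube, w=7, rank=3):
    """Replace every pixel by its value in the low-rank approximation of its neighborhood."""
    pad = w // 2
    padded = np.pad(cube, ((pad, pad), (pad, pad), (0, 0)), mode="reflect")
    out = np.empty_like(cube)
    H, W, _ = cube.shape
    for r in range(H):
        for c in range(W):
            out[r, c] = denoise_patch(padded[r:r + w, c:c + w], rank)[pad, pad]
    return out
```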


IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing | 2017

Graph Regularized Nonlinear Ridge Regression for Remote Sensing Data Analysis

Renlong Hang; Qingshan Liu; Huihui Song; Yubao Sun; Fuping Zhu; Hucheng Pei

In this paper, a graph regularized nonlinear ridge regression (RR) model is proposed for remote sensing data analysis, including hyperspectral image classification and atmospheric aerosol retrieval. RR is an efficient linear regression method, especially in handling cases with a small number of training samples or with correlated features. However, large amounts of unlabeled samples exist in remote sensing data analysis. To sufficiently explore the information in unlabeled samples, we propose a graph regularized RR (GRR) method, in which a graph is constructed whose vertices denote labeled or unlabeled samples and whose edges represent the similarities among different samples. A natural assumption is that the predicted values of neighboring samples are close to each other. To further address the nonlinear separability of remote sensing data caused by the complex acquisition process as well as the impacts of atmospheric and geometric distortions, we extend the proposed GRR into a kernelized nonlinear regression method, namely KGRR. To evaluate the proposed method, we apply it to both classification and regression tasks and compare it with representative methods. The experimental results show that KGRR achieves favorable performance in terms of predictability and computation time.
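
A minimal sketch, assuming the standard closed form of Laplacian-regularized kernel ridge regression, alpha = (J K + lam I + gam L K)^{-1} y, with an RBF kernel and a kNN similarity graph over labeled and unlabeled samples; lam, gam, k, and the kernel width are illustrative, not the paper's settings.

```python
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.neighbors import kneighbors_graph
from scipy.sparse.csgraph import laplacian

def kgrr_fit(X, y, labeled_mask, lam=1e-2, gam=1e-2, k=10, gamma_rbf=1.0):
    """X: (n, d) labeled + unlabeled samples; y: (n,) targets (ignored where unlabeled)."""
    n = X.shape[0]
    K = rbf_kernel(X, gamma=gamma_rbf)                # (n, n) kernel over all samples
    W = kneighbors_graph(X, k, mode="connectivity")   # kNN graph over labeled + unlabeled data
    W = 0.5 * (W + W.T)                               # symmetrize the adjacency
    L = laplacian(W).toarray()                        # graph Laplacian
    J = np.diag(labeled_mask.astype(float))           # selects the labeled samples
    y_full = np.where(labeled_mask, y, 0.0)
    alpha = np.linalg.solve(J @ K + lam * np.eye(n) + gam * L @ K, y_full)
    return alpha, K

def kgrr_predict(alpha, K):
    return K @ alpha                                  # predictions for all n samples
```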


Remote Sensing | 2017

Bidirectional-Convolutional LSTM Based Spectral-Spatial Feature Learning for Hyperspectral Image Classification

Qingshan Liu; Feng Zhou; Renlong Hang; Xiaotong Yuan

This paper proposes a novel deep learning framework named the bidirectional-convolutional long short-term memory (Bi-CLSTM) network to automatically learn the spectral-spatial features of hyperspectral images (HSIs). In the network, the issue of spectral feature extraction is considered as a sequence learning problem, and a recurrent connection operator across the spectral domain is used to address it. Meanwhile, inspired by the widely used convolutional neural network (CNN), a convolution operator across the spatial domain is incorporated into the network to extract the spatial feature. In addition, to sufficiently capture the spectral information, a bidirectional recurrent connection is proposed. In the classification phase, the learned features are concatenated into a vector and fed to a softmax classifier via a fully connected operator. To validate the effectiveness of the proposed Bi-CLSTM framework, we compare it with six state-of-the-art methods, including the popular 3D-CNN model, on three widely used HSIs (i.e., Indian Pines, Pavia University, and Kennedy Space Center). The obtained results show that Bi-CLSTM can improve the classification performance by almost 1.5% compared to 3D-CNN.
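
A minimal sketch illustrating the convolution-plus-bidirectional-recurrence idea, not the paper's Bi-CLSTM cell: a small convolution extracts a spatial feature from each band's patch, and a bidirectional LSTM then runs across the spectral dimension. Patch size, channel counts, and hidden sizes are assumptions.

```python
import torch
import torch.nn as nn

class SpectralSpatialNet(nn.Module):
    def __init__(self, n_classes, hidden=128):
        super().__init__()
        self.conv = nn.Sequential(                        # spatial feature per band
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.lstm = nn.LSTM(16, hidden, batch_first=True, bidirectional=True)
        self.fc = nn.Linear(2 * hidden, n_classes)        # softmax is applied inside the loss

    def forward(self, x):                                 # x: (N, bands, patch, patch)
        N, B, H, W = x.shape
        feats = self.conv(x.reshape(N * B, 1, H, W)).reshape(N, B, 16)
        out, _ = self.lstm(feats)                         # bidirectional recurrence over the band sequence
        return self.fc(out[:, -1])                        # logits for the center pixel

net = SpectralSpatialNet(n_classes=16)                    # e.g. Indian Pines has 16 classes
logits = net(torch.rand(4, 200, 9, 9))                    # 4 pixels, 200 bands, 9x9 patches
```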


Remote Sensing | 2017

Hypergraph Embedding for Spatial-Spectral Joint Feature Extraction in Hyperspectral Images

Yubao Sun; Sujuan Wang; Qingshan Liu; Renlong Hang; Guangcan Liu

The fusion of spatial and spectral information in hyperspectral images (HSIs) is useful for improving the classification accuracy. However, this approach usually results in features of higher dimension, and the curse of dimensionality may arise from the small ratio between the number of training samples and the dimensionality of the features. To ease this problem, we propose a novel algorithm for spatial-spectral feature extraction based on hypergraph embedding. First, each HSI pixel is regarded as a vertex, and the concatenation of extended morphological profiles (EMP) and spectral features is adopted as the feature associated with the vertex. A hypergraph is then constructed by the K-nearest-neighbor method, in which each pixel and its K most relevant pixels are linked as one hyperedge to represent the complex relationships between HSI pixels. Second, the hypergraph embedding model is designed to learn a low-dimensional feature that preserves the geometric structure of the HSI. An adaptive hyperedge weight estimation scheme is also introduced to preserve the prominent hyperedges via a regularization constraint on the weights. Finally, the learned low-dimensional features are fed to a support vector machine (SVM) for classification. Experimental results on three benchmark hyperspectral databases highlight the importance of joint spatial-spectral feature embedding for the accurate classification of HSI data, and show that the adaptive weight estimation further improves the classification accuracy, verifying the effectiveness of the proposed method.
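
A minimal sketch of the hypergraph construction and embedding, assuming the standard normalized hypergraph Laplacian Delta = I - Dv^{-1/2} H W De^{-1} H^T Dv^{-1/2}, one kNN hyperedge per pixel, and unit hyperedge weights; the paper's adaptive weight estimation is not reproduced here.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def hypergraph_embedding(X, k=8, dim=20):
    """X: (n, d) stacked EMP + spectral features; returns an (n, dim) embedding."""
    n = X.shape[0]
    nbrs = NearestNeighbors(n_neighbors=k + 1).fit(X)
    _, idx = nbrs.kneighbors(X)                     # each row: a pixel and its k nearest neighbours
    H = np.zeros((n, n))                            # vertices x hyperedges incidence matrix
    for e, members in enumerate(idx):
        H[members, e] = 1.0                         # one hyperedge per pixel
    w = np.ones(n)                                  # unit hyperedge weights (assumption)
    Dv = H @ w                                      # vertex degrees
    De = H.sum(axis=0)                              # hyperedge degrees
    Dv_inv_sqrt = np.diag(1.0 / np.sqrt(Dv))
    Theta = Dv_inv_sqrt @ H @ np.diag(w / De) @ H.T @ Dv_inv_sqrt
    Delta = np.eye(n) - Theta                       # normalized hypergraph Laplacian
    vals, vecs = np.linalg.eigh(Delta)              # eigenvalues in ascending order
    return vecs[:, 1:dim + 1]                       # smallest nontrivial eigenvectors as features
```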


IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing | 2017

Spatial–Spectral Locality-Constrained Low-Rank Representation with Semi-Supervised Hypergraph Learning for Hyperspectral Image Classification

Qingshan Liu; Yubao Sun; Renlong Hang; Huihui Song

In this paper, we propose a novel hyperspectral image classification method based on spatial-spectral locality-constrained low-rank representation (LRR) and semi-supervised hypergraph learning. Specifically, we first represent the hyperspectral data via LRR owing to its ability both to recover the low-rank structure of high-dimensional observations and to handle the noise introduced during imaging. Then, we incorporate a locality constraint based on spatial-spectral similarity into the LRR model to further preserve the spatial information and local manifold structure. Based on the LRR features, a semi-supervised hypergraph learning algorithm is designed for the final classification to fully exploit the rich information of unlabeled samples. Different from the conventional pairwise graph model, the hypergraph model can effectively capture high-order relationships among samples. Experiments are conducted on three benchmark hyperspectral datasets, and the results show that the proposed method achieves superior classification performance over other state-of-the-art methods and is robust to noise.
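
A minimal sketch of the semi-supervised step only: given a (hyper)graph Laplacian Delta over labeled and unlabeled pixels, the standard regularization framework F = (I + mu * Delta)^{-1} Y propagates labels to the unlabeled samples. The LRR feature extraction and the locality constraint are not reproduced here; mu is an illustrative regularization weight.

```python
import numpy as np

def propagate_labels(Delta, labels, labeled_mask, n_classes, mu=1.0):
    """Delta: (n, n) Laplacian; labels: (n,) ints (ignored where unlabeled)."""
    n = Delta.shape[0]
    Y = np.zeros((n, n_classes))
    Y[labeled_mask, labels[labeled_mask]] = 1.0        # one-hot targets for labeled pixels
    F = np.linalg.solve(np.eye(n) + mu * Delta, Y)     # closed-form label propagation
    return F.argmax(axis=1)                            # predicted class for every pixel
```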


International Journal of Remote Sensing | 2018

Correcting MODIS aerosol optical depth products using a ridge regression model

Renlong Hang; Qingshan Liu; Guiyu Xia; Huihui Song

Aerosol optical depth (AOD) is an important metric for the concentration of aerosols in the atmosphere. The dark target (DT) algorithm is a widely used physical model to retrieve AOD over land from Moderate Resolution Imaging Spectroradiometer (MODIS) data. However, it does not work well in regions and over surface types where dark-target surfaces are scarce. In this paper, we propose two hybrid frameworks based on ridge regression (RR), one serial and one parallel, to improve the retrieval accuracy. In both frameworks, the DT algorithm is used as a baseline to derive an initial result, and the bias between the derived AOD and the ground truth is corrected by the RR model. To validate the effectiveness of the proposed methods, we apply them to 3093 collocated MODIS and Aerosol Robotic Network (AERONET) observations, covering 10 stations in China over all available times. The obtained results demonstrate that the proposed methods improve the retrieval performance compared to the DT algorithm and the RR model alone.
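
A minimal sketch of the serial framework (assumptions: the DT retrievals and MODIS-derived features are already collocated with AERONET ground truth; the feature choice and ridge penalty are illustrative). The ridge model learns the bias of the DT retrieval, and the correction is added back at prediction time.

```python
import numpy as np
from sklearn.linear_model import Ridge

def fit_serial_correction(features, aod_dt, aod_aeronet, alpha=1.0):
    """Learn a ridge model for the bias between DT retrievals and AERONET AOD."""
    model = Ridge(alpha=alpha)
    model.fit(np.column_stack([features, aod_dt]), aod_aeronet - aod_dt)
    return model

def corrected_aod(model, features, aod_dt):
    """Serial scheme: initial DT retrieval plus the predicted bias correction."""
    return aod_dt + model.predict(np.column_stack([features, aod_dt]))
```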


Neurocomputing | 2018

Multi-component group sparse RPCA model for motion object detection under complex dynamic background

Min Wu; Yubao Sun; Renlong Hang; Qingshan Liu; Guangcan Liu

The robust PCA (RPCA) model and its variants are promising tools for motion object detection; they decompose a video or image-sequence matrix into a low-rank background component and a sparse moving-object component. Although they can handle a static background well, under complex dynamic backgrounds such as fountains, ripples, and shaking leaves the background motion is usually mixed into the sparse component, and the detected boundaries of foreground objects are usually inaccurate and incomplete. In this paper, a multi-component group sparse RPCA model is proposed to cope with these difficulties. Aiming to separate foreground motion objects from the dynamic background, our model represents the observed video or image sequence as three components, i.e., a low-rank static background, a group-sparse foreground, and a dynamic background. To integrate the object boundary prior, each frame is over-segmented into super-pixels, which provide the group information used to define a group sparse norm; accordingly, the group sparse norm treats each super-pixel as a whole when measuring the sparse foreground. Furthermore, an incoherence term is introduced to enhance the separability of the sparse foreground motion from the dynamic background component. We apply the alternating direction method of multipliers (ADMM) algorithm to solve the proposed model. Extensive experimental results demonstrate the superiority of our method over several representative methods.
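
A minimal sketch of the classical two-component RPCA decomposition solved with ADMM (singular-value thresholding for the low-rank background, soft thresholding for the sparse foreground). The paper's third dynamic-background component, super-pixel group sparsity, and incoherence term are not reproduced; lam, mu, and the iteration count are illustrative.

```python
import numpy as np

def soft_threshold(X, t):
    return np.sign(X) * np.maximum(np.abs(X) - t, 0.0)

def svt(X, t):
    """Singular-value thresholding: shrink the singular values of X by t."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U * np.maximum(s - t, 0.0)) @ Vt

def rpca(D, lam=None, mu=None, n_iter=100):
    """D: (pixels, frames) matrix of vectorized video frames -> (background, foreground)."""
    m, n = D.shape
    lam = lam or 1.0 / np.sqrt(max(m, n))
    mu = mu or 0.25 * m * n / np.abs(D).sum()
    S = np.zeros_like(D)
    Y = np.zeros_like(D)
    for _ in range(n_iter):
        L = svt(D - S + Y / mu, 1.0 / mu)                 # low-rank background update
        S = soft_threshold(D - L + Y / mu, lam / mu)      # sparse foreground update
        Y = Y + mu * (D - L - S)                          # dual variable update
    return L, S
```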


Neurocomputing | 2018

Hyperspectral image classification using spectral-spatial LSTMs

Feng Zhou; Renlong Hang; Qingshan Liu; Xiaotong Yuan

In this paper, we propose a hyperspectral image (HSI) classification method using spectral-spatial long short-term memory (LSTM) networks. Specifically, for each pixel, we feed its spectral values in different channels into the spectral LSTM one by one to learn the spectral feature. Meanwhile, we first use principal component analysis (PCA) to extract the first principal component from an HSI and then select local image patches centered at each pixel from it. After that, we feed the row vectors of each image patch into the spatial LSTM one by one to learn the spatial feature for the center pixel. In the classification stage, the spectral and spatial features of each pixel are fed into separate softmax classifiers to derive two different results, and a decision fusion strategy is further used to obtain a joint spectral-spatial result. Experimental results on three widely used HSIs (i.e., Indian Pines, Pavia University, and Kennedy Space Center) show that our method can improve the classification accuracy by at least 2.69%, 1.53%, and 1.08% compared to other state-of-the-art methods.
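
A minimal sketch, not the authors' network: a spectral LSTM over the band sequence of each pixel, a spatial LSTM over the rows of the first-principal-component patch, and a simple averaging of the two softmax outputs as a stand-in for the decision fusion. Hidden sizes, patch size, and the fusion rule are assumptions.

```python
import torch
import torch.nn as nn

class SpectralSpatialLSTM(nn.Module):
    def __init__(self, patch, n_classes, hidden=128):
        super().__init__()
        self.spectral = nn.LSTM(1, hidden, batch_first=True)      # one band value per step
        self.spatial = nn.LSTM(patch, hidden, batch_first=True)   # one patch row per step
        self.fc_spec = nn.Linear(hidden, n_classes)
        self.fc_spat = nn.Linear(hidden, n_classes)

    def forward(self, spectra, patches):
        # spectra: (N, bands); patches: (N, patch, patch) cut from the first principal component
        h_spec, _ = self.spectral(spectra.unsqueeze(-1))
        h_spat, _ = self.spatial(patches)
        p_spec = torch.softmax(self.fc_spec(h_spec[:, -1]), dim=1)
        p_spat = torch.softmax(self.fc_spat(h_spat[:, -1]), dim=1)
        return (p_spec + p_spat) / 2                               # decision fusion by averaging

net = SpectralSpatialLSTM(patch=27, n_classes=9)                   # e.g. Pavia University has 9 classes
probs = net(torch.rand(4, 103), torch.rand(4, 27, 27))             # 103 spectral bands
```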

Collaboration


Dive into Renlong Hang's collaborations.

Top Co-Authors

Qingshan Liu | Nanjing University of Information Science and Technology
Yubao Sun | Nanjing University of Information Science and Technology
Huihui Song | Nanjing University of Information Science and Technology
Xiaotong Yuan | Nanjing University of Information Science and Technology
Feng Zhou | Nanjing University of Information Science and Technology
Guangcan Liu | Nanjing University of Information Science and Technology
Guiyu Xia | Nanjing University of Information Science and Technology
Sujuan Wang | Nanjing University of Information Science and Technology
Antonio Plaza | University of Extremadura
Javier Plaza | University of Extremadura