Publication


Featured research published by Xiangtao Zheng.


IEEE Transactions on Systems, Man, and Cybernetics | 2017

Joint Dictionary Learning for Multispectral Change Detection

Xiaoqiang Lu; Yuan Yuan; Xiangtao Zheng

Change detection is one of the most important applications of remote sensing technology. It is a challenging task due to the obvious variations in the radiometric value of spectral signatures and the limited capability of utilizing spectral information. In this paper, an improved sparse coding method for change detection is proposed. The intuition of the proposed method is that unchanged pixels in different images can be well reconstructed by the joint dictionary, which corresponds to knowledge of unchanged pixels, while changed pixels cannot. First, a query image pair is projected onto the joint dictionary to constitute the knowledge of unchanged pixels. Then the reconstruction error is obtained to discriminate between the changed and unchanged pixels in the different images. To select proper thresholds for determining changed regions, an automatic threshold selection strategy is presented by minimizing the reconstruction errors of the changed pixels. Extensive experiments have been conducted on multispectral data, and the experimental results, compared with the state-of-the-art methods, prove the superiority of the proposed method. The contributions of the proposed method can be summarized as follows: 1) joint dictionary learning is proposed to explore the intrinsic information of different images for change detection; in this way, change detection can be transformed into a sparse representation problem, and to the authors' knowledge, few publications utilize joint dictionary learning for change detection; 2) an automatic threshold selection strategy is presented, which minimizes the reconstruction errors of the changed pixels without any prior assumption about the spectral signature; as a result, the threshold value provided by the proposed method can adapt to different data due to the characteristic of joint dictionary learning; and 3) the proposed method makes no prior assumption about the modeling and handling of the spectral signature, so it can be adapted to different data.
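
The core procedure, reconstructing each pixel pair over a joint dictionary and flagging pixels with a large residual, can be illustrated in a few lines. The snippet below is a minimal sketch using scikit-learn's dictionary learning, not the authors' implementation; the percentile fallback merely stands in for the paper's automatic threshold selection, and all function and variable names are hypothetical.

```python
# Minimal sketch of reconstruction-error-based change detection over a joint
# dictionary (illustrative only; not the authors' exact optimization).
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning

def detect_changes(img_t1, img_t2, n_atoms=64, threshold=None):
    """img_t1, img_t2: (H, W, B) multispectral images of the same scene."""
    h, w, b = img_t1.shape
    # Stack the two dates band-wise so each sample is a joint pixel pair.
    pairs = np.concatenate([img_t1.reshape(-1, b), img_t2.reshape(-1, b)], axis=1)
    # Learn a joint dictionary; ideally it is fit only on pixels believed unchanged.
    dico = MiniBatchDictionaryLearning(n_components=n_atoms, alpha=1.0, max_iter=50)
    codes = dico.fit_transform(pairs)
    recon = codes @ dico.components_
    error = np.linalg.norm(pairs - recon, axis=1)
    # Threshold the residual; the paper selects the threshold automatically,
    # here we simply fall back to a percentile if none is given.
    if threshold is None:
        threshold = np.percentile(error, 95)
    return (error > threshold).reshape(h, w)
```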


IEEE Transactions on Image Processing | 2017

Latent Semantic Minimal Hashing for Image Retrieval

Xiaoqiang Lu; Xiangtao Zheng; Xuelong Li

Hashing-based similarity search is an important technique for large-scale query-by-example image retrieval systems, since it provides fast search with computation and memory efficiency. However, it is challenging to design compact codes that represent the original features with good performance. Recently, many unsupervised hashing methods have been proposed that focus on preserving the geometric structure similarity of the data in the original feature space, but they have not yet fully refined image features and explored the latent semantic feature embedding in the data simultaneously. To address this problem, a novel joint binary code learning method is proposed in this paper to map image features to a latent semantic feature with minimum encoding loss, which is referred to as latent semantic minimal hashing. The latent semantic feature is learned based on matrix decomposition to refine the original features, which makes the learned features more discriminative. Moreover, a minimum encoding loss is combined with the latent semantic feature learning process so as to guarantee that the obtained binary codes are discriminative as well. Extensive experiments on several well-known large databases demonstrate that the proposed method outperforms most state-of-the-art hashing methods.
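
As a rough illustration of the general idea, the sketch below learns a latent feature by a simple matrix decomposition (truncated SVD) and binarizes a projection of it; the paper's joint objective with a minimal encoding loss is not reproduced here, and all names are hypothetical.

```python
# Illustrative hashing: low-rank latent feature + sign binarization.
import numpy as np

def learn_hash_codes(X, n_bits=32):
    """X: (n_samples, n_features) image features."""
    Xc = X - X.mean(axis=0)
    # Latent semantic feature via truncated SVD (a simple matrix decomposition).
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    latent = Xc @ Vt[:n_bits].T            # (n_samples, n_bits) latent feature
    codes = (latent > 0).astype(np.uint8)  # sign binarization -> compact codes
    return codes, Vt[:n_bits]

def hamming_search(query_code, db_codes, top_k=10):
    """Rank database items by Hamming distance to a query code."""
    dist = np.count_nonzero(db_codes != query_code, axis=1)
    return np.argsort(dist)[:top_k]
```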


IEEE Transactions on Image Processing | 2017

Discovering Diverse Subset for Unsupervised Hyperspectral Band Selection

Yuan Yuan; Xiangtao Zheng; Xiaoqiang Lu

Band selection, as a special case of the feature selection problem, tries to remove redundant bands and select a few important bands to represent the whole image cube. This has attracted much attention, since the selected bands provide discriminative information for further applications and reduce the computational burden. Though hyperspectral band selection has gained rapid development in recent years, it is still a challenging task because of the following requirements: 1) an effective model can capture the underlying relations between different high-dimensional spectral bands; 2) a fast and robust measure function can adapt to general hyperspectral tasks; and 3) an efficient search strategy can find the desired selected bands in reasonable computational time. To satisfy these requirements, a multigraph determinantal point process (MDPP) model is proposed to capture the full structure between different bands and efficiently find the optimal band subset in extensive hyperspectral applications. There are three main contributions: 1) a graphical model is naturally transferred to address the band selection problem by the proposed MDPP; 2) multiple graphs are designed to capture the intrinsic relationships between hyperspectral bands; and 3) a mixture determinantal point process (Mix-DPP) is proposed to model the multiple dependencies in the proposed multiple graphs and offers an efficient search strategy to select the optimal bands. To verify the superiority of the proposed method, experiments have been conducted on three hyperspectral applications: hyperspectral classification, anomaly detection, and target detection. The reliability of the proposed method in generic hyperspectral tasks is experimentally proved on four real-world hyperspectral data sets.
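
A minimal sketch of the selection step is given below, assuming a single DPP-style kernel built from absolute band correlations and greedy log-determinant maximization; the paper's multigraph construction and mixture DPP are considerably richer than this single-kernel illustration, and all names are hypothetical.

```python
# Greedy selection of diverse bands under a DPP-style similarity kernel.
import numpy as np

def greedy_dpp_band_selection(cube, k=20):
    """cube: (H, W, B) hyperspectral image; returns indices of k diverse bands."""
    bands = cube.reshape(-1, cube.shape[-1])            # (pixels, B)
    bands = (bands - bands.mean(0)) / (bands.std(0) + 1e-8)
    L = np.abs(np.corrcoef(bands.T))                    # band-similarity kernel (B, B)
    selected = []
    for _ in range(k):
        best, best_gain = None, -np.inf
        for j in range(L.shape[0]):
            if j in selected:
                continue
            idx = selected + [j]
            # Greedy MAP: pick the band that maximizes the DPP log-determinant,
            # i.e. the band least correlated with those already chosen.
            sign, logdet = np.linalg.slogdet(L[np.ix_(idx, idx)])
            if sign > 0 and logdet > best_gain:
                best, best_gain = j, logdet
        if best is None:
            break
        selected.append(best)
    return selected
```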


IEEE Transactions on Geoscience and Remote Sensing | 2017

Remote Sensing Scene Classification by Unsupervised Representation Learning

Xiaoqiang Lu; Xiangtao Zheng; Yuan Yuan

With the rapid development of satellite sensor technology, high spatial resolution remote sensing (HSR) data have attracted extensive attention in military and civilian applications. In order to make full use of these data, remote sensing scene classification becomes an important and necessary precedent task. In this paper, an unsupervised representation learning method is proposed to investigate deconvolution networks for remote sensing scene classification. First, a shallow weighted deconvolution network is utilized to learn a set of feature maps and filters for each image by minimizing the reconstruction error between the input image and the convolution result. The learned feature maps can capture the abundant edge and texture information of high spatial resolution images, which is definitely important for remote sensing images. After that, the spatial pyramid model (SPM) is used to aggregate features at different scales to maintain the spatial layout of the HSR image scene. A discriminative representation for the HSR image is obtained by combining the proposed weighted deconvolution model and SPM. Finally, the representation vector is input into a support vector machine to finish classification. We apply our method on two challenging HSR image data sets: the UC Merced data set with 21 scene categories and the Sydney data set with seven land-use categories. All the experimental results achieved by the proposed method outperform most state-of-the-art methods, which demonstrates the effectiveness of the proposed method.
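
The classification stage, pooling per-pixel feature maps with a spatial pyramid and feeding the result to a linear SVM, could look roughly like the sketch below; the weighted deconvolution network that produces the feature maps is assumed to exist elsewhere, and the pooling and classifier choices here are illustrative rather than the authors' exact configuration.

```python
# Spatial pyramid pooling over feature maps followed by a linear SVM.
import numpy as np
from sklearn.svm import LinearSVC

def spatial_pyramid_pool(feature_maps, levels=(1, 2, 4)):
    """feature_maps: (H, W, C). Max-pool each pyramid cell and concatenate."""
    h, w, c = feature_maps.shape
    pooled = []
    for l in levels:
        for i in range(l):
            for j in range(l):
                cell = feature_maps[i * h // l:(i + 1) * h // l,
                                    j * w // l:(j + 1) * w // l]
                pooled.append(cell.reshape(-1, c).max(axis=0))
    return np.concatenate(pooled)

def classify_scenes(train_maps, train_labels, test_maps):
    """train_maps/test_maps: lists of (H, W, C) feature maps from the deconvolution model."""
    X_train = np.stack([spatial_pyramid_pool(m) for m in train_maps])
    X_test = np.stack([spatial_pyramid_pool(m) for m in test_maps])
    clf = LinearSVC().fit(X_train, train_labels)
    return clf.predict(X_test)
```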


IEEE Transactions on Geoscience and Remote Sensing | 2015

Spectral–Spatial Kernel Regularized for Hyperspectral Image Denoising

Yuan Yuan; Xiangtao Zheng; Xiaoqiang Lu

Noise contamination is a ubiquitous problem in hyperspectral images (HSIs), and its removal is a challenging and promising theme in many remote sensing applications. A large number of methods have been proposed to remove noise. Unfortunately, most denoising methods fail to take full advantage of the high spectral correlation and to simultaneously consider the specific noise distributions in HSIs. Recently, a spectral-spatial adaptive hyperspectral total variation (SSAHTV) model was proposed and obtained promising results. However, the SSAHTV model is insensitive to image details, which makes the edges blur. To overcome these drawbacks, a spectral-spatial kernel method for HSI denoising is proposed in this paper. The proposed method is inspired by the observation that the spectral-spatial information in HSIs is highly redundant, which is sufficient to estimate the clean images. In this paper, a spectral-spatial kernel regularization is proposed to maintain the spectral correlations in the spectral dimension and to match the original structure across the two spatial dimensions. Moreover, an adaptive mechanism is developed to balance the fidelity term according to the different noise distributions in each band. Therefore, it can not only suppress noise in high-noise bands but also preserve information in low-noise bands. The reliability of the proposed method in removing noise is experimentally proved on both simulated data and real data.
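
The adaptive-fidelity idea, weighting the data term of each band by an estimate of its noise level, can be sketched as below; the simple quadratic spatial smoother here merely stands in for the paper's spectral-spatial kernel regularization, and the noise estimator and step sizes are illustrative assumptions.

```python
# Band-wise denoising with an adaptive fidelity weight: noisier bands get a
# smaller fidelity weight and are therefore smoothed more strongly.
import numpy as np
from scipy.ndimage import laplace

def estimate_noise_sigma(band):
    """Robust noise estimate from the high-frequency residual (MAD of the Laplacian)."""
    r = laplace(band)
    return 1.4826 * np.median(np.abs(r - np.median(r)))

def denoise_hsi(cube, lam=0.2, n_iter=100, step=0.2):
    """cube: (H, W, B) noisy hyperspectral image."""
    out = cube.astype(np.float64).copy()
    sigmas = np.array([estimate_noise_sigma(cube[..., b]) for b in range(cube.shape[-1])])
    weights = 1.0 / (sigmas + 1e-8)               # adaptive fidelity per band
    weights /= weights.max()
    for _ in range(n_iter):
        for b in range(cube.shape[-1]):
            x, y = out[..., b], cube[..., b]
            # Gradient of 0.5*w*||x - y||^2 + 0.5*lam*||grad x||^2.
            grad = weights[b] * (x - y) - lam * laplace(x)
            out[..., b] = x - step * grad
    return out
```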


IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing | 2017

Hyperspectral Image Superresolution by Transfer Learning

Yuan Yuan; Xiangtao Zheng; Xiaoqiang Lu

Hyperspectral image superresolution is a highly attractive topic in computer vision and has attracted much attention from researchers. However, nearly all existing methods assume that multiple observations of the same scene are available along with the observed low-resolution hyperspectral image, which limits the application of superresolution. In this paper, we propose a new framework to enhance the resolution of hyperspectral images by exploiting knowledge from natural images: the relationship between low- and high-resolution natural images is assumed to be the same as that between low- and high-resolution hyperspectral images. In the proposed framework, the mapping between low- and high-resolution images can be learned by a deep convolutional neural network and transferred to hyperspectral images by borrowing the idea of transfer learning. In addition, to study the spectral characteristics between the low- and high-resolution hyperspectral images, collaborative nonnegative matrix factorization (CNMF) is proposed to enforce collaboration between the low- and high-resolution hyperspectral images, which encourages the estimated solution to extract the same endmembers as the low-resolution hyperspectral image. The experimental results on ground-based and remote sensing data suggest that the proposed method achieves comparable performance without requiring any auxiliary images of the same scene.
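
A rough sketch of the transfer step is shown below: an SRCNN-style network, assumed to be pretrained on natural images, is applied band by band to a bicubically upsampled low-resolution cube. The CNMF spectral refinement described above is omitted, and the architecture details are generic assumptions rather than the authors' exact network.

```python
# Per-band superresolution with a CNN transferred from natural images.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SRCNN(nn.Module):
    """Small SRCNN-style network; weights are assumed to come from natural-image training."""
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(1, 64, kernel_size=9, padding=4)
        self.conv2 = nn.Conv2d(64, 32, kernel_size=1)
        self.conv3 = nn.Conv2d(32, 1, kernel_size=5, padding=2)

    def forward(self, x):
        x = F.relu(self.conv1(x))
        x = F.relu(self.conv2(x))
        return self.conv3(x)

def superresolve_hsi(lr_cube, model, scale=2):
    """lr_cube: (B, H, W) tensor; model: SRCNN pretrained on natural images."""
    sr_bands = []
    for band in lr_cube:
        x = band[None, None]                                   # (1, 1, H, W)
        x = F.interpolate(x, scale_factor=scale, mode='bicubic', align_corners=False)
        with torch.no_grad():
            sr_bands.append(model(x)[0, 0])
    return torch.stack(sr_bands)
```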


IEEE Transactions on Geoscience and Remote Sensing | 2017

Dimensionality Reduction by Spatial–Spectral Preservation in Selected Bands

Xiangtao Zheng; Yuan Yuan; Xiaoqiang Lu

Dimensionality reduction (DR) has attracted extensive attention, since it provides discriminative information from hyperspectral images (HSIs) and reduces the computational burden. Though DR has gained rapid development in recent years, it is difficult to achieve higher classification accuracy while preserving the relevant original information of the spectral bands. To relieve this limitation, a different DR framework is proposed in this paper to perform feature extraction on selected bands. The proposed method uses a determinantal point process to select the representative bands and to preserve the relevant original information of the spectral bands. The classification performance is further improved by performing multiple Laplacian eigenmaps (LEs) on the selected bands. Different from traditional LEs, the multiple Laplacian matrices in this paper are defined by encoding spatial–spectral proximity on each band. A common low-dimensional representation is generated to capture the joint manifold structure from the multiple Laplacian matrices. Experimental results on three real-world HSIs demonstrate that the proposed framework can lead to a significant advancement in HSI classification compared with state-of-the-art methods.
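
The combination of per-band graphs into one embedding could be sketched as below, using kNN affinity graphs and scikit-learn's Laplacian-eigenmaps implementation; the paper's spatial–spectral affinities and the joint treatment of multiple Laplacian matrices are simplified into a plain sum of affinities here, and all names are hypothetical.

```python
# One shared embedding from per-band affinity graphs.
import numpy as np
from sklearn.neighbors import kneighbors_graph
from sklearn.manifold import SpectralEmbedding

def multi_band_embedding(pixels, selected_bands, n_dims=10, k=10):
    """pixels: (n_samples, B) spectra; selected_bands: indices from band selection."""
    W_sum = None
    for b in selected_bands:
        # Affinity graph built from a single selected band.
        W = kneighbors_graph(pixels[:, [b]], n_neighbors=k, mode='connectivity')
        W = 0.5 * (W + W.T)                      # symmetrize each band's graph
        W_sum = W if W_sum is None else W_sum + W
    # Laplacian eigenmaps on the combined affinity -> shared manifold representation.
    embed = SpectralEmbedding(n_components=n_dims, affinity='precomputed')
    return embed.fit_transform(W_sum.toarray())
```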


Neurocomputing | 2016

A target detection method for hyperspectral image based on mixture noise model

Xiangtao Zheng; Yuan Yuan; Xiaoqiang Lu

Subpixel hyperspectral detection is a method that tries to locate targets in a hyperspectral image when the spectrum of the targets is given. Due to its subpixel nature, targets are often smaller than one pixel, which increases the difficulty of detection. Many algorithms have been proposed to tackle this problem, most of which model the noise in all spatial points of the hyperspectral image by a multivariate normal distribution. However, this model alone may not be an appropriate description of the noise distribution in hyperspectral images. After carefully studying the distribution of hyperspectral images, it is concluded that the gradient of the noise also obeys a normal distribution. In this paper, two detectors are proposed: the mixture gradient structured detector (MGSD) and the mixture gradient unstructured detector (MGUD). These detectors are based on a new model which takes advantage of the distribution of the gradient of the noise, making them more consistent with practical situations. To evaluate the performance of the proposed detectors, three different data sets, including one synthesized data set and two real-world data sets, are used in the experiments. The results show that the proposed detectors perform better than current subpixel detectors.
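
For orientation, the sketch below implements a classical matched filter under a single multivariate normal noise model, the family of detectors that the proposed MGSD and MGUD extend; the gradient-based mixture terms themselves are not reproduced here, and the names are hypothetical.

```python
# Baseline matched-filter detector under a multivariate normal noise model.
import numpy as np

def matched_filter_detector(cube, target_spectrum):
    """cube: (H, W, B); target_spectrum: (B,). Returns a detection score map."""
    h, w, b = cube.shape
    X = cube.reshape(-1, b)
    mu = X.mean(axis=0)
    cov = np.cov(X, rowvar=False) + 1e-6 * np.eye(b)   # regularized background covariance
    cov_inv = np.linalg.inv(cov)
    d = target_spectrum - mu
    # Score each pixel against the known target spectrum.
    scores = (X - mu) @ cov_inv @ d / (d @ cov_inv @ d)
    return scores.reshape(h, w)
```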


Pattern Recognition | 2016

A discriminative representation for human action recognition

Yuan Yuan; Xiangtao Zheng; Xiaoqiang Lu

Action recognition has been an active research topic over the past years. Many efforts have been made and many methods have been proposed. However, there are still challenges such as illumination conditions, viewpoint, camera motion, and cluttered background. In order to tackle these challenges, a discriminative representation is proposed by discovering the key information of the input data. This task is addressed by improving two major components: a parameterized representation and a discriminative classifier. The representation is parameterized with hidden variables and can be learned from training data, and the classifier can be trained to recognize actions based on the proposed representation. The contributions of this paper are as follows: (1) a novel probabilistic representation is utilized to capture the relatively significant information of low-level features; (2) a novel framework is proposed by combining the parameterized representation and the discriminative classifier; (3) an alternating strategy improves the performance of action recognition by updating the representation and the classifier alternately. Experimental results on five well-known datasets demonstrate that the proposed method significantly improves the performance of action recognition. Highlights: A discriminative representation is proposed by discovering the key information of the input data. The representation is parameterized with hidden variables and can be learned from training data. Human actions are recognized by combining the parameterized representation and the discriminative classifier. The performance of action recognition is improved by updating the representation and the classifier alternately.
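
The alternating strategy can be summarized as a short training loop, sketched below with hypothetical encode and update functions standing in for the parameterized probabilistic representation; the concrete representation and classifier from the paper are not reproduced.

```python
# Skeleton of alternating optimization of a representation and a classifier.
import numpy as np
from sklearn.svm import LinearSVC

def alternate_train(features, labels, encode, update_repr_params, repr_params, n_rounds=5):
    """features: list of per-video low-level feature arrays; encode(f, params) -> vector.
    encode and update_repr_params are hypothetical callables supplied by the caller."""
    clf = None
    for _ in range(n_rounds):
        # Step 1: fix the representation parameters and retrain the classifier.
        X = np.stack([encode(f, repr_params) for f in features])
        clf = LinearSVC().fit(X, labels)
        # Step 2: fix the classifier and update the representation parameters.
        repr_params = update_repr_params(repr_params, clf, features, labels)
    return clf, repr_params
```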


IEEE Transactions on Geoscience and Remote Sensing | 2018

Exploring Models and Data for Remote Sensing Image Caption Generation

Xiaoqiang Lu; Binqiang Wang; Xiangtao Zheng; Xuelong Li

Inspired by the recent development of artificial satellites, remote sensing images have attracted extensive attention. Recently, notable progress has been made in scene classification and target detection. However, it is still not clear how to describe the content of a remote sensing image with accurate and concise sentences. In this paper, we investigate how to describe remote sensing images with accurate and flexible sentences. First, some annotation instructions are presented to better describe remote sensing images, considering their special characteristics. Second, in order to exhaustively exploit the contents of remote sensing images, a large-scale aerial image data set is constructed for remote sensing image captioning. Finally, a comprehensive review is presented on the proposed data set to fully advance the task of remote sensing image captioning. Extensive experiments on the proposed data set demonstrate that the content of a remote sensing image can be completely described by generating language descriptions. The data set is available at https://github.com/201528014227051/RSICD_optimal.

Collaboration


Dive into Xiangtao Zheng's collaborations.

Top Co-Authors


Xiaoqiang Lu

Chinese Academy of Sciences


Xuelong Li

Chinese Academy of Sciences


Binqiang Wang

Chinese Academy of Sciences
