Lichao Mou
German Aerospace Center
Publication
Featured research published by Lichao Mou.
IEEE Geoscience and Remote Sensing Magazine | 2017
Xiao Xiang Zhu; Devis Tuia; Lichao Mou; Gui-Song Xia; Liangpei Zhang; Feng Xu; Friedrich Fraundorfer
Central to the looming paradigm shift toward data-intensive science, machine-learning techniques are becoming increasingly important. In particular, deep learning has proven to be both a major breakthrough and an extremely powerful tool in many fields. Shall we embrace deep learning as the key to everything? Or should we resist a black-box solution? These are controversial issues within the remote-sensing community. In this article, we analyze the challenges of using deep learning for remote-sensing data analysis, review recent advances, and provide resources we hope will make deep learning in remote sensing seem ridiculously simple. More importantly, we encourage remote-sensing scientists to bring their expertise into deep learning and use it as an implicit general model to tackle unprecedented, large-scale, influential challenges, such as climate change and urbanization.
IEEE Transactions on Geoscience and Remote Sensing | 2017
Lichao Mou; Pedram Ghamisi; Xiao Xiang Zhu
In recent years, vector-based machine learning algorithms, such as random forests, support vector machines, and 1-D convolutional neural networks, have shown promising results in hyperspectral image classification. Such methodologies, nevertheless, can lead to information loss in representing hyperspectral pixels, which intrinsically have a sequence-based data structure. A recurrent neural network (RNN), an important branch of the deep learning family, is mainly designed to handle sequential data. Can a sequence-based RNN be an effective method for hyperspectral image classification? In this paper, we propose a novel RNN model that can effectively analyze hyperspectral pixels as sequential data and then determine information categories via network reasoning. As far as we know, this is the first time that an RNN framework has been proposed for hyperspectral image classification. Specifically, our RNN makes use of a newly proposed activation function, parametric rectified tanh (PRetanh), for hyperspectral sequential data analysis instead of the popular tanh or rectified linear unit. The proposed activation function makes it possible to use fairly high learning rates without the risk of divergence during the training procedure. Moreover, a modified gated recurrent unit, which uses PRetanh for hidden representation, is adopted to construct the recurrent layer in our network to efficiently process hyperspectral data and reduce the total number of parameters. Experimental results on three airborne hyperspectral images suggest competitive performance of the proposed model. In addition, the proposed network architecture opens a new window for future research, showcasing the huge potential of deep recurrent networks for hyperspectral data analysis.
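The PRetanh activation described in this abstract can be sketched in a few lines of NumPy. The exact parameterization below is an assumption based on the abstract's description (tanh for non-negative inputs, a learnable coefficient scaling tanh for negative inputs, analogous to how PReLU generalizes ReLU), not the paper's definitive formula:

```python
import numpy as np

def pretanh(x, alpha=0.25):
    # Sketch of parametric rectified tanh: plain tanh for x >= 0,
    # tanh scaled by a learnable slope alpha for x < 0.
    # alpha=0.25 is an illustrative initial value, not from the paper.
    t = np.tanh(x)
    return np.where(x >= 0, t, alpha * t)
```

Bounding the negative branch by a small alpha keeps the output range asymmetric, which is one plausible way such an activation could tolerate high learning rates without diverging.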
international geoscience and remote sensing symposium | 2016
Lichao Mou; Xiao Xiang Zhu
Spaceborne remote sensing videos are becoming indispensable resources, opening up opportunities for new remote sensing applications. To exploit this new type of data, we need sophisticated algorithms for semantic scene interpretation. The main difficulties are: 1) Due to the relatively poor spatial resolution of the video acquired from space, moving objects, like cars, are very difficult to detect, not to mention track; 2) camera movement handicaps scene interpretation. To address these challenges, in this paper we propose a novel framework that fuses multispectral images and space videos for spatiotemporal analysis. Taking a multispectral image and a spaceborne video as input, an innovative deep neural network is proposed to fuse them in order to achieve a fine-resolution spatial scene labeling map. Moreover, a sophisticated approach is proposed to analyze activities and estimate traffic density from 150,000+ tracklets produced by a Kanade-Lucas-Tomasi keypoint tracker. The proposed framework is validated using data provided for the 2016 IEEE GRSS data fusion contest, including a video acquired from the International Space Station and a DEIMOS-2 multispectral image. Both visual and quantitative analysis of the experimental results demonstrates the effectiveness of our approach.
IEEE Transactions on Geoscience and Remote Sensing | 2018
Lichao Mou; Pedram Ghamisi; Xiao Xiang Zhu
Supervised approaches classify input data using a set of representative samples for each class, known as training samples. The collection of such samples is expensive and time demanding. Hence, unsupervised feature learning, which has quick access to arbitrary amounts of unlabeled data, is conceptually of high interest. In this paper, we propose a novel network architecture, the fully Conv–Deconv network, for unsupervised spectral–spatial feature learning of hyperspectral images, which can be trained in an end-to-end manner. Specifically, our network is based on the so-called encoder–decoder paradigm, i.e., the input 3-D hyperspectral patch is first transformed into a typically lower dimensional space via a convolutional subnetwork (encoder), and then expanded to reproduce the initial data by a deconvolutional subnetwork (decoder). However, during the experiment, we found that such a network is not easy to optimize. To address this problem, we refine the proposed network architecture by incorporating: 1) residual learning and 2) a new unpooling operation that can use memorized max-pooling indices. Moreover, to understand the “black box,” we make an in-depth study of the learned feature maps in the experimental analysis. A very interesting discovery is that some specific “neurons” in the first residual block of the proposed network exhibit good descriptive power for semantic visual patterns at the object level, which provides an opportunity to achieve “free” object detection. This paper, for the first time in the remote sensing community, proposes an end-to-end fully Conv–Deconv network for unsupervised spectral–spatial feature learning. Moreover, this paper also introduces an in-depth investigation of learned features. Experimental results on two widely used hyperspectral datasets, Indian Pines and Pavia University, demonstrate competitive performance obtained by the proposed methodology compared with other studied approaches.
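The unpooling operation with memorized max-pooling indices mentioned in this abstract can be illustrated with a toy NumPy sketch (a 2×2 pooling on a single-channel map; the paper's actual layer shapes and framework are not given here, so this is only a schematic of the mechanism):

```python
import numpy as np

def max_pool_with_indices(x, k=2):
    # k x k max pooling that also memorizes the flat index of each
    # maximum, so the decoder can later restore values to their
    # original spatial locations.
    h, w = x.shape
    out = np.zeros((h // k, w // k))
    idx = np.zeros((h // k, w // k), dtype=int)
    for i in range(h // k):
        for j in range(w // k):
            block = x[i * k:(i + 1) * k, j * k:(j + 1) * k]
            m = np.argmax(block)
            out[i, j] = block.flat[m]
            idx[i, j] = (i * k + m // k) * w + (j * k + m % k)
    return out, idx

def unpool_with_indices(pooled, idx, shape):
    # Decoder-side unpooling: write each pooled value back to its
    # memorized max location; every other position stays zero.
    out = np.zeros(shape)
    out.flat[idx.ravel()] = pooled.ravel()
    return out
```

Unlike naive upsampling, this preserves where each activation came from, which is what lets the deconvolutional decoder reconstruct spatial detail.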
urban remote sensing joint event | 2017
Lichao Mou; Michael Schmitt; Yuanyuan Wang; Xiao Xiang Zhu
In this paper we propose a convolutional neural network (CNN) that makes it possible to identify corresponding patches of very high resolution (VHR) optical and SAR imagery of complex urban scenes. Instead of a siamese architecture as conventionally used in CNNs designed for image matching, we resort to a pseudo-siamese configuration with no interconnection between the two streams for SAR and optical imagery. The network is trained with automatically generated training data and does not resort to any hand-crafted features. First evaluations show that the network is able to predict corresponding patches with high accuracy, thus indicating great potential for further development into a generalized multi-sensor matching procedure.
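The pseudo-siamese idea, i.e., two streams with identical architecture but independent weights, can be sketched with a toy NumPy forward pass. The layer sizes, dense (rather than convolutional) layers, and random weights below are illustrative assumptions, not the paper's actual network:

```python
import numpy as np

rng = np.random.default_rng(0)

def stream(x, w1, w2):
    # Stand-in for one CNN stream: two dense layers with ReLU,
    # mapping a flattened image patch to a feature vector.
    h = np.maximum(0, x @ w1)
    return np.maximum(0, h @ w2)

# Pseudo-siamese: SAR and optical streams share the architecture
# but NOT the weights (a classical siamese network would share both).
w1_sar, w2_sar = rng.normal(size=(64, 32)), rng.normal(size=(32, 16))
w1_opt, w2_opt = rng.normal(size=(64, 32)), rng.normal(size=(32, 16))

sar_patch = rng.normal(size=64)   # toy flattened SAR patch
opt_patch = rng.normal(size=64)   # toy flattened optical patch

# Features are only joined after the separate streams, where a final
# fully connected layer would produce a match / no-match score.
f = np.concatenate([stream(sar_patch, w1_sar, w2_sar),
                    stream(opt_patch, w1_opt, w2_opt)])
```

Keeping the weights separate lets each stream specialize in its own sensor statistics, which matters because SAR and optical patches of the same scene look radically different.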
urban remote sensing joint event | 2017
Jingliang Hu; Lichao Mou; Andreas Schmitt; Xiao Xiang Zhu
Urban scene classification using single-source data has been studied extensively in the remote sensing field. However, a single source provides only one perspective on a complicated urban scene, whereas the fusion of multimodal datasets can provide complementary knowledge. We aim to fuse the spectral information of a hyperspectral image with the scattering mechanisms of PolSAR data for urban scene classification. For the joint use of the two datasets, a simple concatenation would extract insufficient information and weaken the influence of the lower-dimensional data. In this work, an end-to-end convolutional neural network is utilized to automatically learn how to effectively extract features and to fuse the hyperspectral image and the PolSAR data. More specifically, we propose a novel two-stream convolutional network architecture. It creates an identical but separate convolutional stream for each dataset. Subsequently, the two streams are merged at comparable dimensionality within the fusion layer. This architecture ensures both the effective extraction of informative features from each dataset for the classification purpose and the fusion of the two datasets in a balanced manner. Experimental results suggest significantly superior performance of the proposed framework compared with other existing fusion methods. To our knowledge, this is the first time that a deep convolutional neural network has accomplished the fusion of hyperspectral imagery and SAR data.
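The balanced-fusion argument in this abstract, that raw concatenation lets the higher-dimensional modality dominate, can be illustrated with a toy NumPy sketch. The band counts and projection sizes below are illustrative assumptions, not the paper's actual layer dimensions:

```python
import numpy as np

rng = np.random.default_rng(1)

hsi = rng.normal(size=144)   # toy hyperspectral pixel (144 bands)
pol = rng.normal(size=6)     # toy PolSAR scattering features

# Naive early fusion: raw concatenation is dominated by the 144-D
# hyperspectral vector, drowning out the 6-D PolSAR vector.
naive = np.concatenate([hsi, pol])          # 150-D, imbalanced

# Two-stream idea (sketch): a separate learned projection per modality
# maps both inputs to the SAME dimensionality before the fusion layer,
# so each modality contributes a comparable share of the fused vector.
w_hsi = rng.normal(size=(144, 32))
w_pol = rng.normal(size=(6, 32))
fused = np.concatenate([np.maximum(0, hsi @ w_hsi),
                        np.maximum(0, pol @ w_pol)])  # 64-D, balanced
```

In the actual network the per-modality projections are convolutional streams learned end to end; the sketch only shows why equalizing dimensionality before merging keeps the fusion balanced.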
Remote Sensing | 2018
Chunping Qiu; Michael Schmitt; Lichao Mou; Pedram Ghamisi; Xiao Xiang Zhu
Global Local Climate Zone (LCZ) maps, indicating urban structures and land use, are crucial for Urban Heat Island (UHI) studies and also serve as starting points to better understand the spatio-temporal dynamics of cities worldwide. However, reliable LCZ maps are not available on a global scale, hindering scientific progress across a range of disciplines that study the functionality of sustainable cities. As a first step towards large-scale LCZ mapping, this paper aims to provide guidance on data and feature choice. To this end, we evaluate the spectral reflectance and spectral indices of the globally available Sentinel-2 and Landsat-8 imagery, as well as the Global Urban Footprint (GUF) dataset, the OpenStreetMap (OSM) layers buildings and land use, and the Visible Infrared Imaging Radiometer Suite (VIIRS)-based Nighttime Light (NTL) data, regarding their relevance for discriminating different LCZs. Using a residual convolutional neural network (ResNet), a systematic analysis of feature importance is performed on a manually labeled dataset containing nine cities located in Europe. Based on this investigation of data and feature choice, we propose a framework to fully exploit the available datasets. The results show that GUF, OSM, and NTL can contribute to the classification accuracy of some LCZs with relatively few samples, and that Landsat-8 and Sentinel-2 spectral reflectances should be used jointly for large-scale LCZ mapping, for example in a majority-voting manner, as demonstrated by the improvement achieved by the proposed framework.
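The majority-voting combination of classifiers mentioned in this abstract can be sketched per pixel in NumPy. The label values and two-source setup below are illustrative assumptions, not the paper's actual experiment:

```python
import numpy as np

# Toy per-pixel LCZ label predictions from three classifiers
# (e.g. one trained on Sentinel-2 reflectances, one on Landsat-8,
# plus one auxiliary source); each row is one pixel.
preds = np.array([
    [3, 3, 7],   # pixel 0: two of three classifiers vote LCZ 3
    [5, 2, 2],   # pixel 1: majority is LCZ 2
])

def majority_vote(votes):
    # Count votes per row with np.bincount and keep the most frequent
    # label; ties resolve toward the smaller label index.
    return np.array([np.bincount(row).argmax() for row in votes])

print(majority_vote(preds))  # -> [3 2]
```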
IEEE Transactions on Geoscience and Remote Sensing | 2018
Lichao Mou; Pedram Ghamisi; Xiao Xiang Zhu
Here, we correct some errors caused by a programming bug (a data type error) in overall accuracies (OAs) reported in [1]. The corrected OAs are underlined and shown in bold in Tables I–III.
Remote Sensing | 2018
Haobo Lyu; Hui Lu; Lichao Mou; Wenyu Li; Jonathon S. Wright; Xuecao Li; Xinlu Li; Xiao Xiang Zhu; Jie Wang; Le Yu; Peng Gong