Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Menglong Yan is active.

Publication


Featured research published by Menglong Yan.


Remote Sensing Letters | 2015

Object recognition in remote sensing images using sparse deep belief networks

Wenhui Diao; Xian Sun; Fangzheng Dou; Menglong Yan; Hongqi Wang; Kun Fu

Object recognition has been one of the hottest issues in the field of remote sensing image analysis. In this letter, a new pixel-wise learning method based on deep belief networks (DBNs) for object recognition is proposed. The method is divided into two stages, the unsupervised pre-training stage and the supervised fine-tuning stage. Given a training set of images, a pixel-wise unsupervised feature learning algorithm is utilized to train a mixed structural sparse restricted Boltzmann machine (RBM). After that, the outputs of this RBM are put into the next RBM as inputs. By stacking several layers of RBM, the deep generative model of DBNs is built. At the fine-tuning stage, a supervised layer is attached to the top of the DBN and labels of the data are put into this layer. The whole network is then trained using the back-propagation (BP) algorithm with sparse penalty. Finally, the deep model generates good joint distribution of images and their labels. Comparative experiments are conducted on our dataset acquired by QuickBird with 60 cm resolution and the recognition results demonstrate the accuracy and efficiency of our proposed method.
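
The letter itself includes no code; as a rough illustration of the two-stage pipeline it describes, the sketch below greedily pre-trains a small stack of RBMs with one-step contrastive divergence and a simple sparsity term, then unrolls them into a feed-forward network with a supervised top layer. The layer widths, sparsity target, learning rate and two-class head are assumptions for illustration, not the authors' settings.

```python
# Minimal sketch (assumed layer sizes, CD-1, sparsity target) of the two-stage DBN
# idea: greedy layer-wise RBM pre-training, then supervised fine-tuning of the
# stacked encoder with a classifier head (the BP training loop itself is omitted).
import torch
import torch.nn as nn

class RBM(nn.Module):
    def __init__(self, n_vis, n_hid, sparsity=0.05):
        super().__init__()
        self.W = nn.Parameter(torch.randn(n_hid, n_vis) * 0.01)
        self.b_v = nn.Parameter(torch.zeros(n_vis))
        self.b_h = nn.Parameter(torch.zeros(n_hid))
        self.sparsity = sparsity

    def hidden_prob(self, v):
        return torch.sigmoid(v @ self.W.t() + self.b_h)

    def visible_prob(self, h):
        return torch.sigmoid(h @ self.W + self.b_v)

    def cd1_step(self, v0, lr=1e-3):
        # One step of contrastive divergence (CD-1) with a simple sparsity term.
        h0 = self.hidden_prob(v0)
        v1 = self.visible_prob(torch.bernoulli(h0))
        h1 = self.hidden_prob(v1)
        with torch.no_grad():
            self.W += lr * (h0.t() @ v0 - h1.t() @ v1) / v0.size(0)
            self.b_v += lr * (v0 - v1).mean(0)
            self.b_h += lr * (h0 - h1).mean(0) + lr * (self.sparsity - h0.mean(0))

# Greedy layer-wise pre-training: each RBM learns on the previous layer's output.
sizes = [784, 512, 256]                      # assumed layer widths
rbms = [RBM(i, o) for i, o in zip(sizes, sizes[1:])]
data = torch.rand(64, sizes[0])              # stand-in for pixel patches
for rbm in rbms:
    for _ in range(10):
        rbm.cd1_step(data)
    data = rbm.hidden_prob(data).detach()

# Fine-tuning stage: unroll the stack into a feed-forward net with a supervised top layer.
layers = []
for rbm, o in zip(rbms, sizes[1:]):
    lin = nn.Linear(rbm.W.size(1), o)
    lin.weight.data, lin.bias.data = rbm.W.data.clone(), rbm.b_h.data.clone()
    layers += [lin, nn.Sigmoid()]
dbn = nn.Sequential(*layers, nn.Linear(sizes[-1], 2))   # assumed 2 classes: object / background
```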


Remote Sensing Letters | 2017

Vehicle detection in remote sensing images using denoizing-based convolutional neural networks

Hao Li; Kun Fu; Menglong Yan; Xian Sun; Hao Sun; Wenhui Diao

Vehicle detection in remote sensing images is a tough task and of great significance due to the fast-increasing number of vehicles in big cities. Recently, convolutional neural network (CNN)-based methods have achieved excellent performance in classification tasks due to their powerful abilities in high-level feature extraction. However, overfitting is a serious problem in CNNs with complicated fully-connected layers, especially when the quantity of training samples is limited. To tackle this problem, a denoizing-based CNN called DCNN is proposed in this letter. More specifically, a CNN with one fully-connected layer is pre-trained first for feature extraction. After that, features of this fully-connected layer are corrupted and used to pre-train a stacked denoizing autoencoder (SDAE) in an unsupervised way. Then, the pre-trained SDAE is added into the CNN as the fully-connected layer. After fine-tuning, DCNN makes the extracted features more robust and the detection rate higher. With the help of our proposed locating method, vehicles can be detected effectively even when they are parked in a residential area. Comparative experiments demonstrate that our method achieves state-of-the-art performance.
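
As a rough sketch of the idea described above, the code below pre-trains a denoising autoencoder on corrupted copies of the features feeding a fully-connected layer and then reuses its encoder as that layer before fine-tuning. The toy backbone, layer widths, masking-noise rate and optimizer settings are assumptions, not the paper's configuration.

```python
# Minimal sketch (assumed widths, corruption rate, toy backbone) of the DCNN idea:
# pre-train a denoising autoencoder on corrupted fully-connected features, then
# plug its encoder back in as the CNN's fully-connected layer before fine-tuning.
import torch
import torch.nn as nn

backbone = nn.Sequential(                      # stand-in convolutional feature extractor
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(4), nn.Flatten()
)
feat_dim, hid_dim = 16 * 4 * 4, 128

# Step 1: collect features at the input of the fully-connected layer.
images = torch.rand(32, 3, 48, 48)             # stand-in vehicle patches
with torch.no_grad():
    feats = backbone(images)

# Step 2: pre-train a denoising autoencoder on corrupted copies of those features.
encoder = nn.Sequential(nn.Linear(feat_dim, hid_dim), nn.ReLU())
decoder = nn.Linear(hid_dim, feat_dim)
opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)
for _ in range(50):
    noisy = feats * (torch.rand_like(feats) > 0.3).float()   # masking-noise corruption
    recon = decoder(encoder(noisy))
    loss = nn.functional.mse_loss(recon, feats)
    opt.zero_grad(); loss.backward(); opt.step()

# Step 3: the pre-trained encoder becomes the fully-connected layer of the detector,
# and the whole network is fine-tuned with labels (vehicle / background).
dcnn = nn.Sequential(backbone, encoder, nn.Linear(hid_dim, 2))
```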


Remote Sensing | 2018

Automatic Ship Detection in Remote Sensing Images from Google Earth of Complex Scenes Based on Multiscale Rotation Dense Feature Pyramid Networks

Xue Yang; Hao Sun; Kun Fu; Jirui Yang; Xian Sun; Menglong Yan; Zhi Guo

Ship detection has been playing a significant role in the field of remote sensing for a long time, but it is still full of challenges. The main limitations of traditional ship detection methods usually lie in the complexity of application scenarios, the difficulty of intensive object detection, and the redundancy of the detection region. In order to solve these problems above, we propose a framework called Rotation Dense Feature Pyramid Networks (R-DFPN) which can effectively detect ships in different scenes including ocean and port. Specifically, we put forward the Dense Feature Pyramid Network (DFPN), which is aimed at solving problems resulting from the narrow width of the ship. Compared with previous multiscale detectors such as Feature Pyramid Network (FPN), DFPN builds high-level semantic feature-maps for all scales by means of dense connections, through which feature propagation is enhanced and feature reuse is encouraged. Additionally, in the case of ship rotation and dense arrangement, we design a rotation anchor strategy to predict the minimum circumscribed rectangle of the object so as to reduce the redundant detection region and improve the recall. Furthermore, we also propose multiscale region of interest (ROI) Align for the purpose of maintaining the completeness of the semantic and spatial information. Experiments based on remote sensing images from Google Earth for ship detection show that our detection method based on R-DFPN representation has state-of-the-art performance.
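
The dense-connection idea behind DFPN can be sketched as each pyramid level fusing upsampled feature maps from all higher levels rather than only the level directly above it. The sketch below illustrates only that connectivity pattern; the channel counts and backbone outputs are assumed, and the rotation anchor strategy and multiscale ROI Align are not shown.

```python
# Minimal sketch of a densely connected top-down feature pyramid: every pyramid
# level fuses (via upsampling and concatenation) all higher-level feature maps.
# Channel counts and the stand-in backbone outputs are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DenseFPN(nn.Module):
    def __init__(self, in_channels=(256, 512, 1024, 2048), out_channels=256):
        super().__init__()
        self.lateral = nn.ModuleList(nn.Conv2d(c, out_channels, 1) for c in in_channels)
        # Level i fuses its own lateral map plus all higher levels -> (n_levels - i) maps.
        n = len(in_channels)
        self.smooth = nn.ModuleList(
            nn.Conv2d(out_channels * (n - i), out_channels, 3, padding=1) for i in range(n)
        )

    def forward(self, feats):                  # feats: low -> high level (decreasing size)
        laterals = [l(f) for l, f in zip(self.lateral, feats)]
        outs = []
        for i, lat in enumerate(laterals):
            upsampled = [
                F.interpolate(laterals[j], size=lat.shape[-2:], mode="nearest")
                for j in range(i + 1, len(laterals))
            ]
            dense = torch.cat([lat, *upsampled], dim=1)   # dense connection across levels
            outs.append(self.smooth[i](dense))
        return outs

# Stand-in backbone outputs at strides 4/8/16/32 for a 256 x 256 image.
feats = [torch.rand(1, c, s, s) for c, s in zip((256, 512, 1024, 2048), (64, 32, 16, 8))]
pyramid = DenseFPN()(feats)
```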


IEEE Geoscience and Remote Sensing Letters | 2017

Change Detection Based on Deep Siamese Convolutional Network for Optical Aerial Images

Yang Zhan; Kun Fu; Menglong Yan; Xian Sun; Hongqi Wang; Xiaosong Qiu

In this letter, we propose a novel supervised change detection method based on a deep siamese convolutional network for optical aerial images. We train a siamese convolutional network using the weighted contrastive loss. The novelty of the method is that the siamese network learns to extract features directly from the image pairs. Compared with hand-crafted features used by conventional change detection methods, the extracted features are more abstract and robust. Furthermore, because of the advantage of the weighted contrastive loss function, the features have a unique property: the feature vectors of a changed pixel pair are far away from each other, while those of an unchanged pixel pair are close. Therefore, we use the distance between the feature vectors to detect changes between the image pair. Simple threshold segmentation on the distance map can already obtain good performance. For further improvement, we use a k-nearest neighbor approach to update the initial result. Experimental results show that the proposed method produces results comparable with, and even better than, two state-of-the-art methods in terms of F-measure.
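
A minimal sketch of the weighted contrastive loss and distance-map thresholding described above is given below; the margin, class weights, toy feature extractor and threshold are assumptions for illustration.

```python
# Minimal sketch (assumed margin, class weights, toy siamese feature extractor) of
# the weighted contrastive loss idea: pull unchanged pixel pairs together in feature
# space, push changed pairs apart, then threshold the distance map to get changes.
import torch
import torch.nn as nn
import torch.nn.functional as F

def weighted_contrastive_loss(f1, f2, label, margin=2.0, w_unchanged=0.2, w_changed=1.0):
    """f1, f2: (B, C, H, W) per-pixel features; label: (B, H, W), 1 = changed."""
    d = torch.norm(f1 - f2, dim=1)                                 # per-pixel Euclidean distance
    loss_unchanged = w_unchanged * (1 - label) * d.pow(2)          # pull unchanged pairs together
    loss_changed = w_changed * label * F.relu(margin - d).pow(2)   # push changed pairs apart
    return (loss_unchanged + loss_changed).mean()

# Shared-weight (siamese) feature extractor applied to both images of the pair.
net = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                    nn.Conv2d(16, 16, 3, padding=1))
img1, img2 = torch.rand(2, 1, 3, 64, 64)
label = torch.randint(0, 2, (1, 64, 64)).float()
loss = weighted_contrastive_loss(net(img1), net(img2), label)
loss.backward()

# At test time: a pixel is marked changed where the feature distance exceeds a threshold.
with torch.no_grad():
    change_map = torch.norm(net(img1) - net(img2), dim=1) > 1.0
```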


IEEE Geoscience and Remote Sensing Letters | 2015

Building Reconstruction From High-Resolution Multiview Aerial Imagery

Bin Wu; Xian Sun; Qichang Wu; Menglong Yan; Hongqi Wang; Kun Fu

In this letter, we propose a novel method to reconstruct accurate building structures from high-resolution multiview aerial imagery, using layered contour fitting (LCF) with a density-based clustering algorithm. Initially, the complicated 3-D scene is reconstructed by a probabilistic volumetric modeling algorithm. Subsequently, the reconstructed 3-D scene model is projected into layer images based on the height information. Finally, we combine an extended layered density-based clustering approach with a generative LCF approach to remove noise and extract accurate building contours in every layer image at the same time. The final accurate 3-D building model is generated from these contours in the layer images with a smoothing operation. Experiments on aerial image sets demonstrate the effectiveness and precision of our method.
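
The layered, density-based part of the pipeline can be sketched as slicing the reconstructed scene by height, clustering each slice with a density-based method, and fitting a contour per cluster. The sketch below uses DBSCAN and a convex hull as stand-ins; the slice height, clustering parameters and toy point cloud are assumptions, and the probabilistic volumetric model and generative LCF step are not reproduced.

```python
# Minimal sketch (toy point cloud, assumed slice height and clustering parameters)
# of the layered idea: slice the scene by height, cluster each slice with a
# density-based method, and fit a simple contour (here a convex hull) per cluster.
import numpy as np
from sklearn.cluster import DBSCAN
from scipy.spatial import ConvexHull

points = np.random.rand(5000, 3) * [50.0, 50.0, 20.0]   # stand-in (x, y, z) scene points
slice_height = 2.0                                        # assumed layer thickness (m)

contours = []                                             # recovered contours per layer
for z0 in np.arange(0.0, points[:, 2].max(), slice_height):
    layer = points[(points[:, 2] >= z0) & (points[:, 2] < z0 + slice_height), :2]
    if len(layer) < 10:
        continue
    labels = DBSCAN(eps=3.0, min_samples=5).fit_predict(layer)   # density-based clustering
    for k in set(labels) - {-1}:                                  # -1 marks noise points
        cluster = layer[labels == k]
        if len(cluster) >= 3:
            hull = ConvexHull(cluster)                            # contour stand-in
            contours.append(cluster[hull.vertices])
```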


Remote Sensing Letters | 2018

Semantic pixel labelling in remote sensing images using a deep convolutional encoder-decoder model

Xin Wei; Kun Fu; Xin Gao; Menglong Yan; Xian Sun; Kaiqiang Chen; Hao Sun

In this letter, we propose a deep convolutional encoder-decoder model for semantic pixel labelling of remote sensing images. Specifically, the encoder network is employed to extract high-level semantic features of hyperspectral images, and the decoder network maps the low-resolution feature maps to full-input-resolution feature maps for pixel-wise labelling. Different from traditional convolutional layers, we use ‘dilated convolution’, which effectively enlarges the receptive field of the filters in order to incorporate more context information. The fully connected conditional random field (CRF) is also integrated into the model so that the network can be trained end-to-end; the CRF effectively improves localization performance. Experiments on the Vaihingen and Potsdam datasets demonstrate that our model achieves promising performance.
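
A minimal sketch of the two architectural pieces highlighted above, dilated convolutions in the encoder and an upsampling decoder for pixel-wise labelling, is shown below. The channel widths, depth and class count are assumptions, and the fully connected CRF refinement is omitted.

```python
# Minimal sketch (assumed channel widths and depths) of an encoder with dilated
# convolutions that enlarge the receptive field, and a decoder that upsamples back
# to full resolution for pixel-wise labelling. The CRF step is not shown.
import torch
import torch.nn as nn

num_classes = 6                                   # assumed number of labelling categories

encoder = nn.Sequential(
    nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),                              # 1/2 resolution
    nn.Conv2d(64, 128, 3, padding=2, dilation=2), nn.ReLU(),   # dilated: wider context,
    nn.Conv2d(128, 128, 3, padding=4, dilation=4), nn.ReLU(),  # no further downsampling
)
decoder = nn.Sequential(
    nn.ConvTranspose2d(128, 64, 2, stride=2), nn.ReLU(),       # back to full resolution
    nn.Conv2d(64, num_classes, 1),                             # per-pixel class scores
)

x = torch.rand(1, 3, 256, 256)                    # stand-in aerial tile
logits = decoder(encoder(x))                      # (1, num_classes, 256, 256)
labels = logits.argmax(dim=1)                     # pixel-wise labelling
```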


International Geoscience and Remote Sensing Symposium | 2017

Building extraction from remote sensing images with deep learning in a supervised manner

Kaiqiang Chen; Kun Fu; Xin Gao; Menglong Yan; Xian Sun; Huan Zhang

Building extraction from remote sensing images is a longstanding topic in land use analysis and applications of remote sensing. Variations in the shape and appearance of buildings, occlusions and other unpredictable factors make automatic building extraction difficult. Numerous methods have been proposed during the last several decades, but most of them are task-oriented and lack generalization. This paper applies deep learning to building extraction in a supervised manner. A deep deconvolution neural network with 27 convolution/deconvolution weight layers is designed to perform building extraction at the pixel level. As such a deep network is prone to overfitting, a data augmentation method suited to pixel-wise prediction tasks in remote sensing is suggested. Moreover, an overall training and inference architecture is proposed. Our method is applied to building extraction tasks and achieves results competitive with other published methods.
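
The data augmentation point above is the part most easily illustrated: for pixel-wise prediction, every random transform must be applied identically to the image and its label mask. The sketch below shows paired random cropping and flipping; the crop size and flip probabilities are hypothetical.

```python
# Minimal sketch (hypothetical crop size and flips) of data augmentation for
# pixel-wise prediction: each random transform is applied identically to the
# image and to its label mask so the per-pixel correspondence is preserved.
import torch

def augment_pair(image, mask, crop=256):
    """image: (C, H, W) float tensor; mask: (H, W) building / non-building labels."""
    _, h, w = image.shape
    top = torch.randint(0, h - crop + 1, (1,)).item()     # random crop, same window
    left = torch.randint(0, w - crop + 1, (1,)).item()
    image = image[:, top:top + crop, left:left + crop]
    mask = mask[top:top + crop, left:left + crop]
    if torch.rand(1) < 0.5:                                # horizontal flip, both tensors
        image, mask = image.flip(-1), mask.flip(-1)
    if torch.rand(1) < 0.5:                                # vertical flip, both tensors
        image, mask = image.flip(-2), mask.flip(-2)
    return image, mask

tile = torch.rand(3, 512, 512)                             # stand-in remote sensing tile
gt = torch.randint(0, 2, (512, 512))                       # stand-in building mask
img_aug, mask_aug = augment_pair(tile, gt)
```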


IEEE Geoscience and Remote Sensing Letters | 2017

Integrated Localization and Recognition for Inshore Ships in Large Scene Remote Sensing Images

Wenkai Li; Kun Fu; Hao Sun; Xian Sun; Zhi Guo; Menglong Yan; Xinwei Zheng

Automatic inshore ship recognition, which includes target localization and type recognition, is an important and challenging task. However, existing ship recognition methods mainly focus on the classification of ship samples or clips; these methods rely heavily on the detection algorithm to complete localization and recognition in large scene images. In this letter, we present an integrated framework to automatically locate and recognize inshore ships in large scene satellite images. Different from traditional object recognition methods that use the two steps of detection and classification, the proposed framework can locate inshore ships and identify their types without a separate detection step. Considering that ship size is a useful feature, a novel multimodel method is proposed to exploit it, and a Euclidean-distance-based fusion strategy is used to combine the candidates given by the models. This fusion strategy can effectively separate side-by-side ships. To handle large scene images efficiently, scale-invariant feature transform (SIFT) registration is also integrated into the framework to utilize geographic information. Together these make the framework end-to-end, automatically recognizing inshore ships in large scene satellite images. Experiments on QuickBird images show that the framework meets practical application requirements.
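
As a rough illustration of a Euclidean-distance-based fusion of candidates from several models, the sketch below merges candidate centres that fall within a distance threshold, keeping the strongest. The candidate format, threshold and greedy merge rule are assumptions; the paper's multimodel design and SIFT registration are not shown.

```python
# Minimal sketch (hypothetical distance threshold and candidate format) of
# Euclidean-distance-based fusion: candidates whose centres fall within the
# threshold are merged, keeping the highest-scoring one as the detection.
import math

def fuse_candidates(candidates, dist_thresh=30.0):
    """candidates: list of (x, y, score) centre points from all models."""
    fused = []
    for x, y, score in sorted(candidates, key=lambda c: -c[2]):   # strongest first
        if all(math.hypot(x - fx, y - fy) > dist_thresh for fx, fy, _ in fused):
            fused.append((x, y, score))                           # keep as a new ship
    return fused

# Candidates from two hypothetical models; the two side-by-side ships stay separate
# because their centres are farther apart than the threshold.
model_a = [(100.0, 200.0, 0.9), (135.0, 200.0, 0.8)]
model_b = [(102.0, 198.0, 0.7)]
print(fuse_candidates(model_a + model_b))
# [(100.0, 200.0, 0.9), (135.0, 200.0, 0.8)]
```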


AIP Advances | 2016

Determination of dynamic shear strength of 2024 aluminum alloy under shock compression

Haifei Zhang; Menglong Yan; Haiying Wang; Li Shen; L.H. Dai



IEEE Geoscience and Remote Sensing Letters | 2018

Cloud and Cloud Shadow Detection Using Multilevel Feature Fused Segmentation Network

Zhiyuan Yan; Menglong Yan; Hao Sun; Kun Fu; Jun Hong; Jun Sun; Yi Zhang; Xian Sun


Collaboration


Dive into Menglong Yan's collaborations.

Top Co-Authors

Xian Sun (Chinese Academy of Sciences)
Kun Fu (Chinese Academy of Sciences)
Hao Sun (Chinese Academy of Sciences)
Xin Gao (Chinese Academy of Sciences)
Hongqi Wang (Chinese Academy of Sciences)
Kaiqiang Chen (Chinese Academy of Sciences)
Xin Wei (Chinese Academy of Sciences)
Zhi Guo (Chinese Academy of Sciences)
Guangluan Xu (Chinese Academy of Sciences)
Wenhui Diao (Chinese Academy of Sciences)