IEEE Transactions on Geoscience and Remote Sensing | 2021

Network Pruning for Remote Sensing Images Classification Based on Interpretable CNNs


Abstract


Convolutional neural network (CNN)-based methods have been successfully applied to remote sensing image classification due to their powerful feature representation ability. However, these high-capacity networks bring heavy inference costs and are easily overparameterized, especially deep CNNs pretrained on natural image datasets. Network pruning is a prevalent approach for compressing networks, but most existing work ignores model interpretability when formulating the pruning criterion. To address these issues, a network pruning method for remote sensing image classification based on interpretable CNNs is proposed. More specifically, an original interpretable CNN with a predefined pruning ratio is trained first. The filters, namely the channels in the high convolutional layer, learn specific semantic meanings in proportion to the predefined pruning ratio, and the filters without interpretability are removed. For the other convolutional layers, a sensitivity function is designed to assess the risk of pruning channels in each layer, and the pruning ratio of each layer is corrected adaptively. The pruning method based on the proposed sensitivity function is effective and requires little computational cost to search for the channels to discard without damaging classification performance. To demonstrate its effectiveness, the proposed method is implemented on modern CNN models of different scales, including VGG-VD and AlexNet. Experimental results on the UC Merced and NWPU-RESISC45 datasets show that our method significantly reduces inference costs and improves the interpretability of the networks.
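The paper's exact sensitivity function and interpretability criterion are defined in the full text; the sketch below only illustrates the general per-layer workflow the abstract describes, namely scoring how risky it is to prune a given fraction of a layer's channels and shrinking the pruning ratio for sensitive layers. The function names (`layer_sensitivity`, `adjust_ratio`), the L1-norm channel-importance proxy, and the loss-increase threshold are our own assumptions, not the authors' formulation.

```python
import torch
import torch.nn as nn


def channel_importance(conv: nn.Conv2d) -> torch.Tensor:
    """L1 norm of each filter's weights; a simple stand-in for a pruning criterion."""
    return conv.weight.detach().abs().sum(dim=(1, 2, 3))


def layer_sensitivity(model, conv, loader, criterion, ratio, device="cpu"):
    """Loss increase caused by zeroing the `ratio` least important channels of `conv`
    (a proxy for a per-layer sensitivity score; hypothetical, not the paper's definition)."""
    importance = channel_importance(conv)
    n_prune = int(ratio * importance.numel())
    idx = torch.argsort(importance)[:n_prune]
    original = conv.weight.data.clone()

    def eval_loss():
        model.eval()
        total, n = 0.0, 0
        with torch.no_grad():
            for x, y in loader:
                x, y = x.to(device), y.to(device)
                total += criterion(model(x), y).item() * x.size(0)
                n += x.size(0)
        return total / max(n, 1)

    base = eval_loss()
    conv.weight.data[idx] = 0.0           # temporarily zero the candidate channels
    pruned = eval_loss()
    conv.weight.data.copy_(original)      # restore the layer
    return pruned - base                  # larger value -> more sensitive layer


def adjust_ratio(base_ratio, sensitivity, threshold=0.05):
    """Shrink the pruning ratio for layers whose loss increase exceeds `threshold`."""
    return base_ratio if sensitivity <= threshold else base_ratio * threshold / sensitivity


if __name__ == "__main__":
    # Toy demo with synthetic data; real use would plug in VGG-VD/AlexNet and a
    # remote sensing dataset such as UC Merced or NWPU-RESISC45.
    model = nn.Sequential(
        nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 10),
    )
    data = [(torch.randn(4, 3, 32, 32), torch.randint(0, 10, (4,))) for _ in range(2)]
    conv = model[0]
    s = layer_sensitivity(model, conv, data, nn.CrossEntropyLoss(), ratio=0.5)
    print("sensitivity:", s, "adjusted ratio:", adjust_ratio(0.5, s))
```

In this reading, layers whose loss barely changes under trial pruning keep the predefined ratio, while sensitive layers get a proportionally smaller one; the interpretability-based filter selection in the high convolutional layer is a separate step not shown here.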

Pages 1-15
DOI 10.1109/TGRS.2021.3077062
Language English
Journal IEEE Transactions on Geoscience and Remote Sensing
