Multimedia Tools and Applications | 2021
Lightweight multi-scale aggregated residual attention networks for image super-resolution
Abstract
Recently, single image super-resolution (SISR) based on convolutional neural networks (CNNs) has made great progress. However, the huge number of parameters makes these models impractical for many real-world applications, and most of them fail to exploit multi-scale features and hierarchical features for lightweight and accurate image SR. In this paper, a lightweight multi-scale aggregated residual attention network (MARAN) is proposed by exploring multi-scale contextual information and multi-level features. The network consists of a shallow feature extraction part, recursively stacked multi-scale aggregated residual attention groups (MARAGs), a multi-level feature fusion block (MLFFB), and a reconstruction part. Specifically, the MARAGs produce hierarchical multi-scale deep features, while the MLFFB effectively fuses these hierarchical features with multi-scale aggregated residual attention. Each MARAG is composed of cascaded multi-scale aggregated residual attention blocks (MARABs), and each MARAB contains a multi-scale aggregated unit and a dual-attention unit. The multi-scale aggregated unit expands group convolution with cross-path connections. The dual-attention unit adaptively modulates region-based information and channel-wise features. Qualitative and quantitative experiments on four benchmark datasets demonstrate that the proposed MARAN achieves better performance than state-of-the-art methods with fewer parameters.
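The channel-wise half of a dual-attention unit is typically a squeeze-and-excitation style gating over feature channels. The following is a minimal NumPy sketch of that idea only, not the paper's implementation: the bottleneck weights here are random stand-ins for what would be learned parameters, and the `reduction` ratio is an assumed hyperparameter.

```python
import numpy as np

def channel_attention(feat, reduction=4, rng=None):
    """Squeeze-and-excitation style channel attention on a (C, H, W) feature map.

    Illustrative sketch only: w1 and w2 are random stand-ins for learned weights.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    c = feat.shape[0]
    # Squeeze: global average pooling over spatial dims -> channel descriptor (C,)
    desc = feat.mean(axis=(1, 2))
    # Excitation: bottleneck MLP (C -> C/reduction -> C) with sigmoid gating
    w1 = rng.standard_normal((c // reduction, c)) * 0.1
    w2 = rng.standard_normal((c, c // reduction)) * 0.1
    hidden = np.maximum(w1 @ desc, 0.0)          # ReLU
    gate = 1.0 / (1.0 + np.exp(-(w2 @ hidden)))  # per-channel gate in (0, 1)
    # Rescale each channel of the feature map by its gate
    return feat * gate[:, None, None]

feat = np.ones((8, 4, 4))       # toy 8-channel feature map
out = channel_attention(feat)
print(out.shape)                # (8, 4, 4), each channel rescaled by its gate
```

The spatial (region-based) half of a dual-attention unit would analogously produce an H-by-W gating map; the two gated outputs are then combined within the block.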