Gao Dandan, Zhou Dengwen, Wang Wanjun, Ma Yu, Li Shanshan. Lightweight Super-Resolution via Grouping Fusion of Feature Frequencies[J]. Journal of Computer-Aided Design & Computer Graphics, 2023, 35(7): 1020-1031. DOI: 10.3724/SP.J.1089.2023.19524


Lightweight Super-Resolution via Grouping Fusion of Feature Frequencies


     

    Abstract: The larger the scale (deeper or wider) of a deep convolutional neural network, the better its performance, but also the greater its demand on computing and storage capacity, which limits its application on resource-constrained devices. Lightweight super-resolution networks (with small parameter counts) are therefore urgently needed. We propose a novel lightweight image super-resolution network based on grouping fusion of feature frequencies. First, residual concatenation blocks are used to transmit and fuse local features. Second, a hybrid attention block combines features from different cues to improve their expressiveness. Finally, a frequency feature grouping fusion block fuses high-frequency and low-frequency feature information to improve the restoration quality of the super-resolution image. The proposed network was trained on the DIV2K dataset in the PyTorch environment and tested on the standard Set5, Set14, B100, Urban100, and Manga109 test datasets. Experimental results show that the proposed network is significantly superior to other representative networks in both subjective visual quality and the objective measures PSNR, SSIM, and LPIPS.
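The frequency grouping idea above rests on separating a feature map into a smooth low-frequency component and a residual high-frequency component before fusing them. The sketch below is a minimal NumPy illustration of one common way to perform such a decomposition (average pooling for the low-frequency part, residual for the high-frequency part); the paper's actual block is learned convolutionally, and the function names and fusion weights here are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def split_frequencies(x, pool=2):
    """Split a 2-D feature map into low- and high-frequency parts.

    The low-frequency component is average pooling followed by
    nearest-neighbour upsampling; the high-frequency component is the
    residual, which carries edges and texture. (A common convention;
    the paper's learned grouping block may differ.)
    """
    h, w = x.shape
    # Average-pool by `pool` to keep only the smooth content.
    low_small = x.reshape(h // pool, pool, w // pool, pool).mean(axis=(1, 3))
    # Upsample back to the original resolution (nearest neighbour).
    low = np.repeat(np.repeat(low_small, pool, axis=0), pool, axis=1)
    high = x - low
    return low, high

def grouping_fusion(low, high, w_low=0.5, w_high=0.5):
    """Fuse the two frequency groups with (hypothetical) scalar weights.

    In the paper the fusion is learned; fixed weights stand in here.
    """
    return w_low * low + w_high * high

# By construction, low + high reconstructs the input exactly.
x = np.arange(16, dtype=float).reshape(4, 4)
low, high = split_frequencies(x)
assert np.allclose(low + high, x)
```

Note that with equal unit weights the fusion reduces to the identity; a learned block would instead weight (and transform) the two groups so that high-frequency detail is emphasized during reconstruction.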

     
