Lin Xiao, Zhou Yunxiang, Li Dazhi, Huang Wei, Sheng Bin. Image Inpainting Using Multi-Scale Feature Joint Attention Model[J]. Journal of Computer-Aided Design & Computer Graphics, 2022, 34(8): 1260-1271. DOI: 10.3724/SP.J.1089.2022.19172

Image Inpainting Using Multi-Scale Feature Joint Attention Model


    Abstract: Current deep-learning-based image inpainting methods lose information when extracting deep features, which hinders the restoration of texture details; they also tend to neglect the repair of semantic features, producing results with implausible structures. To address these problems, an image inpainting network based on a multi-scale feature joint attention model is proposed. First, a multi-scale fusion module based on dilated convolution is introduced: by fusing multi-scale features while extracting deep image features, it reduces the information lost during convolution. Second, a joint attention mechanism is proposed that both strengthens the model's ability to repair image semantics and ensures that the model generates inpainting results with clear textures. Finally, style loss and perceptual loss are introduced into the network to keep the details and style of the inpainted results consistent. Qualitative experiments on the CelebA-HQ and Places2 datasets, together with common evaluation metrics such as PSNR and SSIM, show that the proposed method outperforms existing image inpainting methods, improving PSNR by 0.4%-6% and SSIM by 0.4%-3% over the compared methods.
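The multi-scale fusion idea in the abstract can be illustrated with plain dilated convolutions: branches with different dilation rates see different receptive-field sizes at the same spatial resolution, and their outputs are then fused. The sketch below is a minimal single-channel NumPy toy under assumed details (hand-rolled loops, average fusion standing in for a learned fusion layer, dilation rates 1/2/4); it is not the authors' implementation.

```python
import numpy as np

def dilated_conv2d(x, kernel, dilation=1):
    """Single-channel 2-D cross-correlation with a dilation rate
    (zero padding, stride 1, 'same' output size)."""
    k = kernel.shape[0]
    eff = dilation * (k - 1) + 1      # effective receptive-field size
    pad = eff // 2
    xp = np.pad(x, pad)
    out = np.zeros_like(x, dtype=float)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            # Sample the neighborhood with gaps of size `dilation`.
            patch = xp[i:i + eff:dilation, j:j + eff:dilation]
            out[i, j] = np.sum(patch * kernel)
    return out

def multi_scale_fusion(x, kernel, rates=(1, 2, 4)):
    """Run parallel dilated branches and average them -- a stand-in
    for the learned fusion a real network would use."""
    branches = [dilated_conv2d(x, kernel, r) for r in rates]
    return np.mean(branches, axis=0)
```

With a 3x3 kernel, the branches cover 3x3, 5x5, and 9x9 neighborhoods respectively, so the fused feature mixes fine texture with wider context at no loss of spatial resolution, which is the point of fusing scales while extracting deep features.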
