Infrared and Visible Image Fusion Based on a Dual-Path, Dual-Discriminator Generative Adversarial Network
Abstract: To address the problems that existing image fusion algorithms preserve insufficient information from the source images and yield fused images lacking in detail, an infrared and visible image fusion method based on a dual-path, dual-discriminator generative adversarial network (GAN) is proposed. On the generator side, a gradient path and a contrast path built on the difference concatenation of the source images are constructed to enrich the detail and contrast of the fused image, and multi-scale decomposition is used to extract features from the infrared and visible images, overcoming the incompleteness of single-scale feature extraction. The source images are then introduced into every layer of the dual-path densely connected network, which improves feature-transmission efficiency while retaining more source-image information. On the discriminator side, two discriminators estimate the regional distributions of the infrared and visible images, avoiding the modal imbalance of a single-discriminator network, which loses the contrast information of the infrared image. Finally, main-auxiliary gradient and main-auxiliary intensity loss functions are constructed to strengthen the information-extraction capability of the network. Experiments on three standard fusion datasets show that the proposed method achieves the best scores on multiple objective evaluation metrics and offers better visual quality.
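To make the described pipeline concrete, the following is a minimal PyTorch sketch of a dual-path generator with source re-injection, a pair of patch discriminators, and main-auxiliary gradient/intensity losses. All names, channel sizes, the exact difference-concatenation scheme, and the loss weights are illustrative assumptions rather than the paper's published implementation; the multi-scale decomposition stage and the adversarial training loop are omitted for brevity.

# Minimal sketch under the assumptions stated above; not the authors' code.
import torch
import torch.nn as nn
import torch.nn.functional as F

def conv_block(in_ch, out_ch):
    # 3x3 convolution + LeakyReLU, a common building block in fusion GANs (assumed).
    return nn.Sequential(nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.LeakyReLU(0.2))

class DensePath(nn.Module):
    # One path of the dual-path generator: every layer sees all previous feature
    # maps plus the two source images, i.e. the sources are re-injected at each depth.
    def __init__(self, in_ch=2, growth=16, depth=4):
        super().__init__()
        self.layers = nn.ModuleList()
        ch = in_ch
        for _ in range(depth):
            self.layers.append(conv_block(ch + 2, growth))  # +2 for re-injected ir/vis
            ch += growth
        self.out_ch = ch

    def forward(self, x, ir, vis):
        feats = [x]
        for layer in self.layers:
            feats.append(layer(torch.cat(feats + [ir, vis], dim=1)))
        return torch.cat(feats, dim=1)

class Generator(nn.Module):
    # Dual-path generator: a gradient path and a contrast path, each fed by a
    # difference-based concatenation of the source images.
    def __init__(self):
        super().__init__()
        self.grad_path = DensePath()
        self.cont_path = DensePath()
        fused_ch = self.grad_path.out_ch + self.cont_path.out_ch
        self.fuse = nn.Sequential(conv_block(fused_ch, 32), nn.Conv2d(32, 1, 1), nn.Tanh())

    def forward(self, ir, vis):
        # One plausible reading of "difference concatenation": pair each source
        # image with the signed difference highlighting what the other lacks.
        grad_in = torch.cat([vis, vis - ir], dim=1)  # detail / gradient cues
        cont_in = torch.cat([ir, ir - vis], dim=1)   # contrast / intensity cues
        g = self.grad_path(grad_in, ir, vis)
        c = self.cont_path(cont_in, ir, vis)
        return self.fuse(torch.cat([g, c], dim=1))

class PatchDiscriminator(nn.Module):
    # One of the two discriminators; one instance scores the fused image against
    # the infrared distribution, the other against the visible distribution.
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 1, 4, padding=1))  # patch-level real/fake scores

    def forward(self, x):
        return self.net(x)

def image_gradients(x):
    # Simple finite-difference gradients, zero-padded to keep shapes comparable.
    gx = F.pad(x[..., :, 1:] - x[..., :, :-1], (0, 1))
    gy = F.pad(x[..., 1:, :] - x[..., :-1, :], (0, 0, 0, 1))
    return gx, gy

def main_aux_losses(fused, ir, vis, w_main=1.0, w_aux=0.5):
    # Hypothetical main-auxiliary content losses: gradients are matched mainly to
    # the visible image, intensities mainly to the infrared image, each with a
    # weaker auxiliary term for the other modality.
    fgx, fgy = image_gradients(fused)
    vgx, vgy = image_gradients(vis)
    igx, igy = image_gradients(ir)
    loss_grad = (w_main * (F.l1_loss(fgx, vgx) + F.l1_loss(fgy, vgy))
                 + w_aux * (F.l1_loss(fgx, igx) + F.l1_loss(fgy, igy)))
    loss_int = w_main * F.l1_loss(fused, ir) + w_aux * F.l1_loss(fused, vis)
    return loss_grad, loss_int

A full training objective would add adversarial terms from both discriminators on the fused output to these content losses; how the paper balances those terms is not specified in the abstract.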