Abstract:
To address the problem that existing image fusion methods fail to preserve enough information from the source images and lack rich detail, an infrared and visible image fusion method based on a dual-path, dual-discriminator generative adversarial network (GAN) is proposed. In the generator, a gradient path and a contrast path built on differential connections of the source images are constructed to improve the detail and contrast of the fused images. Multi-scale decomposition is used to extract feature information from the infrared and visible images, overcoming the incomplete feature extraction of a single scale. The two source images are then fed into each layer of the dual-path densely connected network, which improves the efficiency of feature transmission while retaining more source-image information. In the discriminator, to avoid the modal imbalance caused by the loss of contrast information under a single discriminator, two discriminators are used to estimate the regional distributions of the source images. Main-auxiliary gradient and main-auxiliary strength loss functions are constructed to improve the information-extraction capability of the network model. Experimental results on the TNO, RoadScene, and MSRS datasets show that the proposed method outperforms eight state-of-the-art image fusion methods on the average gradient, spatial frequency, structural similarity, and peak signal-to-noise ratio metrics.
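The dual-discriminator objective mentioned above can be illustrated with a minimal sketch: one discriminator evaluates the fused image against the infrared distribution and the other against the visible distribution, so neither modality dominates. The least-squares GAN formulation and the function names below are assumptions for illustration, not details taken from the paper.

```python
import numpy as np

def lsgan_d_loss(d_real, d_fake):
    # Least-squares discriminator loss: score real samples toward 1,
    # fused (fake) samples toward 0.
    return np.mean((d_real - 1.0) ** 2) + np.mean(d_fake ** 2)

def lsgan_g_loss(d_fake_ir, d_fake_vis):
    # The generator must fool BOTH discriminators (infrared and visible)
    # simultaneously, which counteracts modal imbalance.
    return np.mean((d_fake_ir - 1.0) ** 2) + np.mean((d_fake_vis - 1.0) ** 2)

# Toy discriminator scores for a batch of 4 image patches (hypothetical).
scores_real_ir = np.array([0.9, 0.8, 0.95, 0.85])   # D_ir on real infrared
scores_fake_ir = np.array([0.2, 0.3, 0.1, 0.25])    # D_ir on fused output
scores_fake_vis = np.array([0.4, 0.35, 0.3, 0.45])  # D_vis on fused output

d_loss = lsgan_d_loss(scores_real_ir, scores_fake_ir)
g_loss = lsgan_g_loss(scores_fake_ir, scores_fake_vis)
```

In the paper's design, each discriminator is additionally paired with a main-auxiliary loss term; the sketch shows only the adversarial component.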