Grayscale Image Colorization Method Combining the Pix2Pix Generative Adversarial Network

Pix2Pix-Based Grayscale Image Coloring Method

  • Abstract: To address the problems of unclear object boundaries and low colorization quality that arise when neural networks colorize images, a grayscale image colorization method combining the Pix2Pix generative adversarial network is proposed. First, the U-Net structure is improved: eight down-sampling layers and eight up-sampling layers are used for feature extraction and color prediction, strengthening the network's ability to extract deep image features. Second, L1 loss and smooth L1 loss are used to measure the gap between the generated image and the real image, and the colorization quality under the different loss functions is compared. Finally, a gradient penalty is added: a new data distribution is constructed between the generated and real image distributions, the gradient is penalized for each input sample, the way the discriminator's gradient is constrained is changed, and the stability of the network during training is improved. Under the same experimental environment, the method is compared with the Pix2Pix model on the summer2winter data. The experimental results show that the improved U-Net with smooth L1 loss as the generator loss produces better colorized images, while L1 loss better preserves image structure information; the gradient penalty accelerates model convergence and improves model stability and image quality. The proposed method learns deep image features better, reduces colorization blur, and improves colorization quality while effectively preserving the structural similarity of the image.
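The generator objective described above (an adversarial term plus an L1 or smooth L1 reconstruction term) can be sketched as follows. This is a minimal PyTorch illustration, not the authors' released code; the weight `lambda_rec=100.0` follows the common Pix2Pix setting and is an assumption, as are the names `generator_loss`, `disc_fake_logits`, `fake_rgb`, and `real_rgb`.

```python
import torch
import torch.nn as nn

# Loss terms named in the abstract; the composition below is a sketch, not the paper's code.
adv_criterion = nn.BCEWithLogitsLoss()   # adversarial term of the Pix2Pix objective
l1_criterion = nn.L1Loss()               # plain L1 reconstruction term
smooth_l1_criterion = nn.SmoothL1Loss()  # smooth L1 alternative compared in the paper

def generator_loss(disc_fake_logits, fake_rgb, real_rgb,
                   lambda_rec=100.0, use_smooth_l1=True):
    """Adversarial loss plus an L1 / smooth L1 reconstruction term.

    lambda_rec=100.0 follows the usual Pix2Pix setting; the paper's exact
    weight is not given in the abstract, so treat it as an assumption.
    """
    # Generator tries to make the discriminator output "real" on fake images.
    adv = adv_criterion(disc_fake_logits, torch.ones_like(disc_fake_logits))
    # Reconstruction term: smooth L1 or plain L1, as compared in the paper.
    rec = (smooth_l1_criterion if use_smooth_l1 else l1_criterion)(fake_rgb, real_rgb)
    return adv + lambda_rec * rec
```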

     

Abstract: In this study, a grayscale image coloring method combining the Pix2Pix model is proposed to solve the problem of unclear object boundaries and low image coloring quality in colorization neural networks. First, an improved U-Net structure, using eight down-sampling and eight up-sampling layers, is adopted to extract features and predict the image color, which improves the network model's ability to extract deep image features. Second, the coloring image quality is tested under different loss functions, L1 loss and smooth L1 loss, which measure the distance between the generated image and the ground truth. Finally, a gradient penalty is added to improve the network stability of the training process. The gradient of each input sample is penalized by constructing a new data distribution between the generated and real image distributions to limit the discriminator gradient. In the same experimental environment, the Pix2Pix model and summer2winter data are utilized for comparative analysis. The experiments demonstrate that the improved U-Net using smooth L1 loss as the generator loss generates better colored images, whereas L1 loss better maintains the structural information of the image. Furthermore, the gradient penalty accelerates model convergence and improves model stability and image quality. The proposed image coloring method learns deep image features and reduces image blur. The model raises image quality while effectively maintaining image structure similarity.
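The gradient penalty described in both abstracts (constructing a new data distribution between real and generated images and penalizing the discriminator's gradient on each interpolated sample) follows the WGAN-GP formulation. Below is a minimal PyTorch sketch under that assumption; the conditional discriminator signature `discriminator(gray_input, mixed)` and all variable names are illustrative, not taken from the paper.

```python
import torch

def gradient_penalty(discriminator, gray_input, real_rgb, fake_rgb, device="cuda"):
    """WGAN-GP style penalty on samples interpolated between real and generated images.

    The conditional discriminator signature (grayscale input plus a color image,
    as in conditional Pix2Pix) is an assumption; adapt it to the actual model.
    """
    batch_size = real_rgb.size(0)
    # A random interpolation point per sample defines the new data distribution.
    alpha = torch.rand(batch_size, 1, 1, 1, device=device)
    mixed = (alpha * real_rgb + (1 - alpha) * fake_rgb).requires_grad_(True)

    disc_out = discriminator(gray_input, mixed)
    # Gradient of the discriminator output with respect to the interpolated input.
    grads = torch.autograd.grad(
        outputs=disc_out, inputs=mixed,
        grad_outputs=torch.ones_like(disc_out),
        create_graph=True, retain_graph=True)[0]

    grads = grads.view(batch_size, -1)
    # Penalize deviation of the per-sample gradient norm from 1.
    return ((grads.norm(2, dim=1) - 1) ** 2).mean()
```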

     
