Pix2Pix-Based Grayscale Image Coloring Method
Abstract
In this study, a grayscale image coloring method based on the Pix2Pix model is proposed to address the problems of unclear object boundaries and low coloring quality in colorization neural networks. First, an improved U-Net structure with eight down-sampling and eight up-sampling layers is adopted to extract features and predict image colors, which improves the network model's ability to extract deep image features. Second, the quality of the colored images is evaluated under two loss functions, L1 loss and smooth L1 loss, which measure the distance between the generated image and the ground truth. Finally, a gradient penalty is added to stabilize training: the gradient at each input is penalized by constructing a new data distribution between the generated and real image distributions, thereby limiting the discriminator's gradient. In the same experimental environment, the Pix2Pix model and the summer2winter dataset are used for comparative analysis. The experiments demonstrate that the improved U-Net with smooth L1 loss as the generator loss produces better colored images, whereas the L1 loss better preserves the structural information of the image. Furthermore, the gradient penalty accelerates model convergence and improves both model stability and image quality. The proposed image coloring method learns deep image features and reduces image blur, raising image quality while effectively maintaining the structural similarity of the image.
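The gradient penalty described above, built on samples interpolated between the real and generated distributions, is commonly formulated (following the WGAN-GP line of work; the symbols here are assumptions, not notation from the paper) as:

$$\hat{x} = \epsilon\, x + (1-\epsilon)\,\tilde{x}, \qquad \epsilon \sim U[0,1],$$

$$L_{GP} = \lambda\, \mathbb{E}_{\hat{x}}\!\left[\big(\lVert \nabla_{\hat{x}} D(\hat{x}) \rVert_2 - 1\big)^2\right],$$

where $x$ is a real image, $\tilde{x}$ a generated image, $D$ the discriminator, and $\lambda$ a penalty weight. Penalizing the discriminator's gradient norm toward 1 on these interpolated inputs is what limits the discriminator gradient and stabilizes training.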
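The abstract compares L1 loss and smooth L1 loss as generator losses. As a minimal pure-Python sketch (the function names and the transition threshold `beta` are illustrative, not taken from the paper), the two losses over flattened pixel values can be written as:

```python
def l1_loss(pred, target):
    """Mean absolute error between two equal-length pixel sequences."""
    return sum(abs(p - t) for p, t in zip(pred, target)) / len(pred)

def smooth_l1_loss(pred, target, beta=1.0):
    """Smooth L1 (Huber-style) loss: quadratic for residuals smaller than
    beta, linear beyond it, so large errors are penalized less steeply."""
    def per_element(d):
        d = abs(d)
        return 0.5 * d * d / beta if d < beta else d - 0.5 * beta
    return sum(per_element(p - t) for p, t in zip(pred, target)) / len(pred)
```

The quadratic region near zero gives smooth L1 a smoother gradient around small residuals than plain L1, which is one plausible reason the abstract reports better-colored images with smooth L1 while L1 better preserves structure.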