Abstract:
With the rapid development of deep learning, image style transfer has become one of the most actively explored fields in computer vision. To address the difficulty existing methods have in transferring style to locally similar regions of the content image, a novel area-diversified style transfer method is proposed. First, image features are extracted by an encoder. Then, the content features, the style features, and style features sampled from the Gaussian distribution of the style image are fused in feature space. Finally, the stylized image is reconstructed by a decoder. Experiments are conducted on the WikiArt and Microsoft COCO datasets, with content loss and multi-scale style loss used as quantitative metrics. The results show that, compared with existing methods, the proposed approach effectively reduces the style loss of the generated images, makes their overall style more unified, and produces a better visual effect.
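The encode-fuse-decode pipeline outlined above can be sketched as follows. This is a minimal illustration only, assuming an AdaIN-style statistics fusion in PyTorch; the toy `encoder`/`decoder`, the `fuse` helper, and the mixing weight `alpha` are hypothetical stand-ins for the paper's actual modules, which are not specified in the abstract.

```python
import torch
import torch.nn as nn

def channel_stats(feat, eps=1e-5):
    """Per-channel mean and std over spatial dims of an (N, C, H, W) tensor."""
    mean = feat.mean(dim=(2, 3), keepdim=True)
    std = feat.var(dim=(2, 3), keepdim=True).add(eps).sqrt()
    return mean, std

def fuse(content_feat, style_feat, alpha=0.5):
    """Hypothetical fusion: normalize the content features, then re-color
    them with a blend of the style statistics and style statistics
    re-sampled from the Gaussian fitted to the style features."""
    c_mean, c_std = channel_stats(content_feat)
    s_mean, s_std = channel_stats(style_feat)
    # "Sampled style features": draw per-channel statistics from
    # N(s_mean, s_std^2), the Gaussian estimated on the style image,
    # so locally similar content regions can receive diversified styles.
    z_mean = s_mean + s_std * torch.randn_like(s_mean)
    mixed_mean = (1 - alpha) * s_mean + alpha * z_mean
    normalized = (content_feat - c_mean) / c_std
    return normalized * s_std + mixed_mean

# Toy encoder/decoder; a real system would use a pretrained VGG slice
# as the encoder and a trained mirror network as the decoder.
encoder = nn.Sequential(
    nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 64, 3, stride=2, padding=1), nn.ReLU(),
)
decoder = nn.Sequential(
    nn.Upsample(scale_factor=2, mode="nearest"),
    nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 3, 3, padding=1),
)

content = torch.rand(1, 3, 256, 256)
style = torch.rand(1, 3, 256, 256)
stylized = decoder(fuse(encoder(content), encoder(style)))
print(stylized.shape)  # torch.Size([1, 3, 256, 256])
```

Because the sampled statistics vary per draw, repeated runs yield diversified stylizations of the same content/style pair, which is the behavior the abstract attributes to the method.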