Abstract:
As a typical serif font, the Cuan font differs from sans-serif fonts such as Heiti, Microsoft YaHei, and Isoline, and its glyph structures are highly diverse. To overcome artifacts and blurring at stroke bends when generating Cuan characters, a Cuan font style transfer model based on a dense adaptive generative adversarial network is proposed. First, the generator uses dense adaptive convolution blocks to extract style features and content features more fully. Second, a pixel-level discriminator distinguishes real images from generated images. Third, adversarial loss, transfer loss, gradient loss, and edge loss are used to adjust the parameters of the generator network. Finally, a self-collected Cuan font dataset is used as the training set and fed to the model for training. Experimental results show that the proposed Cuan font style transfer model can effectively learn style features and achieve better generation quality. The generated results outperform the zi2zi model in preserving font size, and outperform the StarGANv2 and CycleGAN models in preserving stroke detail features, as verified on the SSIM and L1 loss metrics.
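
For illustration, the four loss terms named above would typically be combined into a single generator objective as a weighted sum; a minimal sketch of such an objective is given below, where the weighting coefficients $\lambda_{\mathrm{adv}}$, $\lambda_{\mathrm{trans}}$, $\lambda_{\mathrm{grad}}$, and $\lambda_{\mathrm{edge}}$ are illustrative assumptions and are not specified in the abstract:

$$
\mathcal{L}_{G} \;=\; \lambda_{\mathrm{adv}}\,\mathcal{L}_{\mathrm{adv}} \;+\; \lambda_{\mathrm{trans}}\,\mathcal{L}_{\mathrm{trans}} \;+\; \lambda_{\mathrm{grad}}\,\mathcal{L}_{\mathrm{grad}} \;+\; \lambda_{\mathrm{edge}}\,\mathcal{L}_{\mathrm{edge}}
$$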