Application of Improved PF-AFN in Virtual Try-on
Graphical Abstract
Abstract
An improved virtual try-on method is proposed to address the insufficient accuracy of the predicted appearance flow and the poor generalization ability of PF-AFN. First, to decouple the shape and style of the clothing, a human body prediction module synthesizes a human parsing map aligned with the person wearing the target clothes. Then, based on the collinearity-preserving property of affine transformations and the characteristics of the appearance flow, a collinearity loss term and a distance loss term are added to constrain the deformation globally and on local regions, respectively. Finally, the human parsing map is concatenated channel-wise with the original input and fed to a UNet++-like generation network built on ResNet to produce the final virtual try-on images. A comparative experiment on the VITON dataset against four other state-of-the-art methods shows that the proposed method improves SSIM, FID and LPIPS by 1.2%, 11.1% and 5.8%, respectively, over the best-performing baseline, while image clarity and the Inception Score remain comparable to current state-of-the-art methods. Overall, the proposed method resolves the original problems and achieves better results.
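To make the two added loss terms concrete, the sketch below shows one plausible way to realize them on a dense appearance-flow field; it is not the authors' exact formulation. It assumes the collinearity loss is a second-order finite difference of the flow (which vanishes when three adjacent warped points stay collinear, as under an affine transform) and the distance loss is a first-order difference that limits local stretching. The function names, tensor shape, and the 0.5 weight are illustrative assumptions.

```python
# Hedged sketch of the collinearity and distance loss terms (assumed formulation).
import torch


def collinearity_loss(flow: torch.Tensor) -> torch.Tensor:
    """Penalize deviation from collinearity of three neighboring flow vectors.

    flow: (B, 2, H, W) appearance-flow field. The second-order difference is
    zero when adjacent source points remain collinear after warping.
    """
    d2_x = flow[:, :, :, :-2] + flow[:, :, :, 2:] - 2 * flow[:, :, :, 1:-1]
    d2_y = flow[:, :, :-2, :] + flow[:, :, 2:, :] - 2 * flow[:, :, 1:-1, :]
    return d2_x.abs().mean() + d2_y.abs().mean()


def distance_loss(flow: torch.Tensor) -> torch.Tensor:
    """Penalize large differences between neighboring flow vectors (local stretching)."""
    d1_x = flow[:, :, :, 1:] - flow[:, :, :, :-1]
    d1_y = flow[:, :, 1:, :] - flow[:, :, :-1, :]
    return d1_x.abs().mean() + d1_y.abs().mean()


if __name__ == "__main__":
    flow = torch.randn(1, 2, 256, 192)  # VITON-sized flow field (assumption)
    total = collinearity_loss(flow) + 0.5 * distance_loss(flow)  # 0.5 is an illustrative weight
    print(total.item())
```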