Abstract:
To address the texture loss caused by head-pose variation and self-occlusion in 3D face reconstruction, and to effectively fuse structural and texture information for higher completion quality, a texture completion method based on structure guidance and dynamic feature fusion is proposed. First, the left-right symmetry of the face is introduced as a constraint: the texture visible under an offset pose is mapped into a symmetric texture space, and dilated convolutions are applied to extract multi-scale contextual semantic features. Second, a dual-branch generation network based on gated convolution is constructed, which encodes the facial geometric structure and local texture features and provides key structural guidance for texture generation. Third, a dynamic feature fusion module with a local cross-attention mechanism is designed to establish semantic associations between structural and texture features within local regions and to adaptively adjust the fusion weights according to regional features, strengthening the guidance that structural information provides for texture generation. Finally, a multi-scale discriminative network is constructed along three dimensions (global texture integrity, edge-structure rationality, and local detail continuity) to strengthen the discriminative constraints on the generated results. Experiments on the CelebA and FFHQ datasets show that the proposed method achieves a structural similarity index (SSIM) of 0.81 and a peak signal-to-noise ratio (PSNR) of 29.56 dB, improvements of approximately 3.8% and 5.2% respectively, and maintains stable texture completion quality in multi-pose scenarios. Texture mapping results on the FLAME model exhibit more realistic lighting effects and detail expression, verifying the method's effectiveness and robustness for 3D face reconstruction and rendering.
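The gated convolution at the heart of the dual-branch generator can be illustrated with a minimal NumPy sketch. This follows the standard free-form inpainting formulation (a feature branch modulated element-wise by a sigmoid gating branch); the kernel sizes, activations, and random weights below are illustrative assumptions, not the paper's actual configuration.

```python
import numpy as np

def conv2d(x, w):
    """Naive 'valid' 2D convolution of a single-channel map x with kernel w."""
    kh, kw = w.shape
    H, W = x.shape
    out = np.empty((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * w)
    return out

def gated_conv2d(x, w_feat, w_gate):
    """Gated convolution: out = tanh(conv(x, w_feat)) * sigmoid(conv(x, w_gate)).

    The learned soft gate in (0, 1) lets the network suppress activations
    coming from invalid (occluded or missing) texture regions, which is why
    gated convolutions are preferred over plain convolutions for completion.
    """
    feat = np.tanh(conv2d(x, w_feat))
    gate = 1.0 / (1.0 + np.exp(-conv2d(x, w_gate)))  # sigmoid
    return feat * gate

# Toy usage on an 8x8 feature map with 3x3 kernels.
rng = np.random.default_rng(0)
x = rng.normal(size=(8, 8))
y = gated_conv2d(x, rng.normal(size=(3, 3)), rng.normal(size=(3, 3)))
print(y.shape)  # valid convolution shrinks 8x8 to 6x6
```

In the full method this per-pixel gating is what allows the structure branch to steer where the texture branch synthesizes new content.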