
An Adversarial Unsupervised Domain Adaptation Image Classification Method Based on Contrastive Learning

  • Abstract: This paper proposes an adversarial unsupervised domain adaptation image classification method based on contrastive learning (CADA), which aims to preserve good generalization performance when a model trained on a well-labeled source domain is transferred to an unlabeled target domain. Previous adversarial unsupervised domain adaptation methods only align the features of the source and target domains globally, ignoring whether features belonging to the same class are aligned while the two global distributions are being matched, and they also make insufficient use of the unlabeled target-domain samples. This paper therefore introduces the idea of contrastive learning into the adversarial unsupervised domain adaptation framework: by continually pulling similar target-domain samples closer together in the feature space and pushing dissimilar samples apart, the classification boundaries of the unlabeled target-domain samples become clearer, so that the source and target domains are aligned within each class as well as globally. Feeding data-augmented target-domain samples into the contrastive learning module also makes fuller use of the unlabeled target-domain data. Compared with the original adversarial unsupervised domain adaptation methods, the proposed CADA improves the average accuracy on three datasets, including Office-31, by 2%-6%.
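Since the page contains no code, the following is a minimal PyTorch-style sketch, under our own assumptions, of how the two ingredients described in the abstract could be combined: domain-adversarial feature alignment via a gradient reversal layer, plus an NT-Xent contrastive loss on two augmented views of each unlabeled target image. The module names (`feat`, `clf`, `disc`, `proj`) and the loss weights are hypothetical placeholders for illustration, not the authors' implementation.

```python
# Minimal sketch (not the authors' code): adversarial domain alignment combined
# with an NT-Xent contrastive loss on augmented, unlabeled target-domain images.
import torch
import torch.nn.functional as F


class GradReverse(torch.autograd.Function):
    """Gradient reversal layer used for adversarial feature alignment."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Reverse (and scale) the gradient flowing back into the feature extractor.
        return -ctx.lambd * grad_output, None


def nt_xent(z1, z2, temperature=0.5):
    """NT-Xent loss: pull two views of the same target image together,
    push apart views of different images (SimCLR-style)."""
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)        # 2N x d
    sim = z @ z.t() / temperature                             # 2N x 2N similarities
    n = z1.size(0)
    mask = torch.eye(2 * n, dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(mask, float('-inf'))                # drop self-similarity
    # The positive of sample i is its other augmented view.
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(n)]).to(z.device)
    return F.cross_entropy(sim, targets)


def training_step(feat, clf, disc, proj, xs, ys, xt_v1, xt_v2, lambd=1.0, beta=1.0):
    """One step of the combined objective:
       source classification + adversarial domain alignment
       + contrastive loss on two augmented views of target images."""
    fs, ft1, ft2 = feat(xs), feat(xt_v1), feat(xt_v2)

    # 1. Supervised classification loss on the labeled source domain.
    loss_cls = F.cross_entropy(clf(fs), ys)

    # 2. Domain-adversarial loss via gradient reversal (source = 0, target = 1).
    f_all = torch.cat([fs, ft1], dim=0)
    d_logits = disc(GradReverse.apply(f_all, lambd))
    d_labels = torch.cat([torch.zeros(fs.size(0)),
                          torch.ones(ft1.size(0))]).long().to(f_all.device)
    loss_adv = F.cross_entropy(d_logits, d_labels)

    # 3. Contrastive loss on the two augmented target-domain views.
    loss_ctr = nt_xent(proj(ft1), proj(ft2))

    return loss_cls + loss_adv + beta * loss_ctr
```

In this sketch, `feat` is a shared feature extractor, `clf` a source classifier, `disc` a domain discriminator, and `proj` a projection head; `xt_v1` and `xt_v2` are two random augmentations of the same batch of unlabeled target images. The contrastive term is what sharpens the target-domain class boundaries described in the abstract, while the reversed gradient drives the global source/target alignment.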

     
