Xu Guoliang, Hou Zhendong, Luo Jiangtao, Liu Yang, Liu Lizhu. Joint Reliable Instance Mining and Feature Optimization for Unsupervised Person Re-ID[J]. Journal of Computer-Aided Design & Computer Graphics, 2024, 36(3): 368-378. DOI: 10.3724/SP.J.1089.2024.19835


Joint Reliable Instance Mining and Feature Optimization for Unsupervised Person Re-ID


Abstract: To address the problem that the pseudo labels produced by clustering in unsupervised person re-identification contain a large amount of noise, a person re-identification method combining reliable instance mining and feature optimization is proposed. Firstly, an indicator is designed to measure the reliability of pseudo labels, based on the stability of DBSCAN clustering results under different parameters. Secondly, a reliable instance mining strategy is proposed to denoise the pseudo labels: instances whose pseudo-label reliability exceeds a preset threshold keep their original pseudo labels, while the pseudo labels of the remaining instances are corrected. Thirdly, a dual momentum update strategy fusing global and local features is adopted: the features of the samples involved in each batch are updated instantly, and the features of all samples in the memory bank are updated once per epoch. Finally, a unified contrastive loss is used to train and optimize the backbone network. Experimental results on two large public datasets, Market-1501 and DukeMTMC-reID, show that mAP reaches 77.9% and 67.4%, and Rank-1 reaches 90.2% and 88.2%, respectively.
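
The abstract only names the reliability indicator and the mining strategy; the Python sketch below illustrates one plausible reading. The stability measure (Jaccard agreement of each instance's co-cluster neighbours across DBSCAN runs with different eps values), the nearest-centroid label correction, and every name and constant in it (pseudo_label_reliability, reliable_instance_mining, eps_list, threshold) are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np
from sklearn.cluster import DBSCAN


def pseudo_label_reliability(features, eps_list=(0.5, 0.6, 0.7), min_samples=4):
    """Cluster features with DBSCAN under several eps values and score each
    instance by how stable its co-cluster neighbourhood is across the runs
    (average pairwise Jaccard agreement; a hypothetical stand-in for the
    paper's reliability indicator)."""
    n = features.shape[0]
    runs = [DBSCAN(eps=e, min_samples=min_samples, metric="cosine").fit_predict(features)
            for e in eps_list]

    reliability = np.zeros(n)
    for i in range(n):
        agreements = []
        for a in range(len(runs)):
            for b in range(a + 1, len(runs)):
                la, lb = runs[a], runs[b]
                # Noise points (label -1) are treated as singletons.
                mates_a = set(np.where(la == la[i])[0]) if la[i] != -1 else {i}
                mates_b = set(np.where(lb == lb[i])[0]) if lb[i] != -1 else {i}
                agreements.append(len(mates_a & mates_b) / len(mates_a | mates_b))
        reliability[i] = np.mean(agreements)

    # Use the clustering under the middle eps value as the reference pseudo labels.
    return runs[len(runs) // 2], reliability


def reliable_instance_mining(features, labels, reliability, threshold=0.8):
    """Keep the pseudo labels of reliable instances; re-assign unreliable ones
    to the nearest cluster centroid (one simple way to 'correct' a label)."""
    labels = labels.copy()
    cluster_ids = [c for c in np.unique(labels) if c != -1]
    centroids = np.stack([features[labels == c].mean(axis=0) for c in cluster_ids])
    centroids /= np.linalg.norm(centroids, axis=1, keepdims=True)

    for i in np.where(reliability < threshold)[0]:
        sims = centroids @ (features[i] / np.linalg.norm(features[i]))
        labels[i] = cluster_ids[int(np.argmax(sims))]
    return labels
```

Raising the threshold keeps fewer original labels and corrects more instances, trading retention of clean labels for stronger denoising.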
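
The dual momentum update and the unified contrastive loss can likewise be sketched as a feature memory with a per-batch and a per-epoch update path. The class below is a minimal PyTorch sketch; the momentum coefficients, the temperature, and the cluster-centroid form of the loss are assumptions that match the abstract only at a high level.

```python
import torch
import torch.nn.functional as F


class DualMomentumMemory:
    """Feature memory with two update paths, sketching the dual momentum idea:
    an instant per-batch update for the samples just seen, and a global
    per-epoch refresh of every stored feature."""

    def __init__(self, init_features, m_batch=0.2, m_epoch=0.5, temperature=0.05):
        self.feats = F.normalize(init_features.detach(), dim=1)  # [N, D] memory bank
        self.m_batch = m_batch
        self.m_epoch = m_epoch
        self.t = temperature

    @torch.no_grad()
    def update_batch(self, indices, batch_feats):
        """Local update: momentum-blend the memory entries of the current mini-batch."""
        batch_feats = F.normalize(batch_feats, dim=1)
        blended = self.m_batch * self.feats[indices] + (1 - self.m_batch) * batch_feats
        self.feats[indices] = F.normalize(blended, dim=1)

    @torch.no_grad()
    def update_epoch(self, all_feats):
        """Global update: refresh the whole memory from newly extracted features."""
        all_feats = F.normalize(all_feats, dim=1)
        self.feats = F.normalize(
            self.m_epoch * self.feats + (1 - self.m_epoch) * all_feats, dim=1)

    def contrastive_loss(self, query_feats, query_labels, memory_labels):
        """Cluster-centroid contrastive loss over the memory (a stand-in for the
        unified contrastive loss named in the abstract)."""
        query_feats = F.normalize(query_feats, dim=1)
        classes = torch.unique(memory_labels[memory_labels >= 0])  # sorted cluster ids
        centroids = F.normalize(
            torch.stack([self.feats[memory_labels == c].mean(0) for c in classes]), dim=1)
        logits = query_feats @ centroids.t() / self.t              # [B, num_clusters]
        targets = torch.searchsorted(classes, query_labels)       # cluster id -> class index
        return F.cross_entropy(logits, targets)
```

In a training loop, contrastive_loss and update_batch would be called for every mini-batch, and update_epoch once the features of all samples have been re-extracted at the end of an epoch.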

     
