公绪超, 李宗民. 无参注意力结合自监督改善音频分类方法[J]. 计算机辅助设计与图形学学报, 2023, 35(3): 434-440. DOI: 10.3724/SP.J.1089.2023.19353
Gong Xuchao, and Li Zongmin. An Improved Audio Classification Method Based on Parameter-Free Attention Combined with Self-Supervision[J]. Journal of Computer-Aided Design & Computer Graphics, 2023, 35(3): 434-440. DOI: 10.3724/SP.J.1089.2023.19353


An Improved Audio Classification Method Based on Parameter-Free Attention Combined with Self-Supervision

  • Abstract: Transformer-based end-to-end audio classification has been shown to outperform two-dimensional convolution in many scenarios. Current transformer-based audio classification methods attend only to feature importance across time steps and insufficiently characterize feature importance within a time step. To address this, a method combining parameter-free attention with self-supervised feature construction is proposed to improve audio classification. A parameter-free multi-local-extremum attention mechanism is constructed over within-time-step features to fit their multi-local-extremum distribution and characterize feature importance within each time step; the input audio spectrogram is randomly masked in the time and frequency domains to inject self-supervision, so that spectral details and class information are learned effectively. Comparative experiments on the AudioSet, ESC-50, and Speech Commands datasets show that the proposed algorithm improves recognition accuracy over the baseline methods by 0.46% to 1.20%.


    Abstract: End-to-end audio classification based on the transformer has been shown to outperform two-dimensional convolution in many scenarios. Popular transformer-based audio classification methods model only the importance of features across time steps and insufficiently describe the importance of features within a time step. To address this, a method combining parameter-free attention with self-supervised feature construction is proposed to further improve audio classification. A parameter-free attention mechanism is constructed over within-time-step features to fit their multi-local-extremum distribution and characterize feature importance within each time step. During training, the input spectrogram is randomly masked in both the time and frequency domains, adding self-supervision so that spectral details and class information are learned effectively. Experiments on AudioSet, ESC-50, and Speech Commands show that the proposed algorithm improves accuracy by 0.46% to 1.20% over state-of-the-art baselines.
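    The random time- and frequency-domain masking of the input spectrogram described in the abstract can be sketched as follows. This is a minimal NumPy illustration of the general masking idea, not the paper's implementation; the function name, the number of masks, and the mask widths are assumptions chosen for the example.

    ```python
    import numpy as np

    def mask_spectrogram(spec, num_time_masks=2, num_freq_masks=2,
                         max_time_width=10, max_freq_width=8, rng=None):
        """Zero out random bands of a (freq_bins, time_steps) spectrogram
        along both the time axis and the frequency axis."""
        rng = rng or np.random.default_rng()
        masked = spec.copy()
        n_freq, n_time = masked.shape
        # Time-domain masks: zero a contiguous block of time steps.
        for _ in range(num_time_masks):
            width = int(rng.integers(1, max_time_width + 1))
            start = int(rng.integers(0, max(1, n_time - width + 1)))
            masked[:, start:start + width] = 0.0
        # Frequency-domain masks: zero a contiguous block of frequency bins.
        for _ in range(num_freq_masks):
            width = int(rng.integers(1, max_freq_width + 1))
            start = int(rng.integers(0, max(1, n_freq - width + 1)))
            masked[start:start + width, :] = 0.0
        return masked
    ```

    Applied to each training spectrogram, masking of this kind forces the model to reconstruct or classify from partial spectral evidence, which is the self-supervisory signal the method exploits.
    
    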
