Liu Ruhan, Xu Dan. Video Amplification and Deep Learning in Micro-Expression Recognition[J]. Journal of Computer-Aided Design & Computer Graphics, 2019, 31(9): 1535-1541. DOI: 10.3724/SP.J.1089.2019.17568

Video Amplification and Deep Learning in Micro-Expression Recognition


    Abstract: To address two problems, that micro-expression movements are too subtle for reliable recognition and that the mainstream practice of merging emotion categories limits real-world applicability, a video amplification method based on eye-interference elimination is proposed, and a convolutional neural network (CNN) is used to perform micro-expression recognition. First, the video data in the CASME and CASME II micro-expression datasets are amplified with phase-based video motion processing. Then, to eliminate eye interference, facial landmark detection is used to obtain eye coordinates, and the original eye region is blended back into the amplified video through image fusion. Finally, a CNN model following the design of VGG16 is built to classify the emotion categories of the amplified micro-expression data. Experiments compare recognition accuracy on the two datasets under different methods, and compare accuracy on the original and amplified datasets under several model tuning strategies. The results show that the proposed method improves micro-expression recognition accuracy under the real (unmerged) emotion categories.
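The eye-interference elimination step described above, replacing the amplified eye region with the original one, can be sketched as a simple array-blending operation. This is only an illustrative sketch, not the paper's implementation: the function name `replace_eye_region`, the `eye_box` rectangle (which in the paper would come from facial landmark detection), and the `feather` blending weight are all assumptions introduced here.

```python
import numpy as np

def replace_eye_region(magnified, original, eye_box, feather=1.0):
    """Paste the original (un-amplified) eye region back into a
    motion-magnified frame to suppress amplified blink artifacts.

    eye_box = (top, bottom, left, right) pixel coordinates;
    feather = blend weight of the original region (1.0 = full replace).
    """
    t, b, l, r = eye_box
    out = magnified.astype(np.float64).copy()
    # weighted fusion of the original eye patch into the magnified frame
    out[t:b, l:r] = (feather * original[t:b, l:r]
                     + (1.0 - feather) * magnified[t:b, l:r])
    return out

# Toy 8x8 grayscale frames: the magnified frame has exaggerated values,
# and the eye box is restored from the original frame.
original = np.zeros((8, 8))
magnified = np.full((8, 8), 100.0)
fused = replace_eye_region(magnified, original, eye_box=(2, 4, 1, 5))
```

With `feather=1.0` the eye box is fully restored to the original pixels while the rest of the frame keeps the amplified motion; a value below 1.0 would soften the seam between the two regions.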
