Temporal Segment Networks Based on Feature Propagation for Action Recognition
Abstract
To efficiently extract human action categories and motion information from video while reducing computational complexity, an algorithm combining feature propagation with temporal segment networks is proposed for action recognition. First, the video is divided into three short segments, and key frames are extracted from each segment to model the long-term structure of the video. Second, a propagation of temporal segment networks (P-TSN) is designed that captures appearance information and motion information through feature propagation and FlowNet respectively, taking RGB key frames, RGB non-key frames, and optical flow images as input. Finally, the BN-Inception descriptors of the improved temporal segment networks are averaged with equal weights and fed to a Softmax layer for action recognition. Experiments on the UCF101 and HMDB51 datasets achieve recognition accuracies of 94.6% and 69.4% respectively, indicating that the proposed algorithm improves video action recognition accuracy by fully exploiting both spatial information and temporal motion information.
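The final fusion step described above (average-weighting the per-segment descriptors and passing the result through a Softmax layer) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function name `segment_consensus` and the example scores are hypothetical, and the segment-level inputs are assumed to be class-score vectors (logits) from the BN-Inception backbone.

```python
import numpy as np

def segment_consensus(segment_scores: np.ndarray) -> np.ndarray:
    """Average class scores across video segments, then apply Softmax.

    segment_scores: shape (num_segments, num_classes), one score vector
    per segment (e.g. three segments as in the proposed method).
    Returns a probability distribution over action classes.
    """
    avg = segment_scores.mean(axis=0)          # equal-weight average over segments
    exp = np.exp(avg - avg.max())              # numerically stable Softmax
    return exp / exp.sum()

# Example: three segments, four action classes (illustrative values only)
scores = np.array([[2.0, 0.5, 0.1, 0.1],
                   [1.5, 0.7, 0.2, 0.1],
                   [2.5, 0.3, 0.1, 0.2]])
probs = segment_consensus(scores)
```

In practice, the same consensus would be computed separately for the appearance stream and the motion stream before combining their predictions.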