Abstract:
To address the poor recognition performance caused by the limited information in two-dimensional images, a human action recognition method that fuses multiple kinds of depth information is proposed. First, depth images are used to capture behavior cues and to extract gradient and related directional features. Then, mutual information is used to extract key frames from the skeleton sequences. Based on these key frames, a static posture model, a current motion model, and a dynamic offset model are built to characterize the underlying features of human action. Finally, a weighted voting mechanism assigns weights to the different feature types, realizing a weighted fusion of multiple features. Experiments on the MSR_Action3D depth action dataset show that the accuracy of the proposed method is 1.5% higher than that of state-of-the-art action recognition methods.
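
To illustrate the weighted voting fusion step described above, the following is a minimal Python sketch. The function name, the per-feature weights, and the example scores are hypothetical placeholders under assumed conventions, not the paper's actual implementation.

import numpy as np

def weighted_vote_fusion(class_scores, weights):
    """Fuse per-feature classifier scores with a weighted vote.

    class_scores: list of (n_classes,) score arrays, one per feature type
                  (e.g. gradient/direction features, static posture model,
                  current motion model, dynamic offset model).
    weights:      list of non-negative weights, one per feature type.
    Returns the index of the predicted action class.
    """
    fused = np.zeros_like(np.asarray(class_scores[0], dtype=float))
    for s, w in zip(class_scores, weights):
        s = np.asarray(s, dtype=float)
        # Normalize each feature's scores so the weights are comparable.
        total = s.sum()
        if total > 0:
            s = s / total
        fused += w * s
    return int(np.argmax(fused))

# Hypothetical usage: three feature channels voting over four action classes.
per_feature_scores = [
    [0.1, 0.6, 0.2, 0.1],   # e.g. static posture model scores
    [0.3, 0.3, 0.3, 0.1],   # e.g. current motion model scores
    [0.0, 0.7, 0.2, 0.1],   # e.g. dynamic offset model scores
]
feature_weights = [0.4, 0.3, 0.3]  # assumed weights, not from the paper
print(weighted_vote_fusion(per_feature_scores, feature_weights))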