Abstract:
To address the insufficient feature extraction, poor generalization of fall-discrimination criteria, and poor real-time performance of traditional algorithms, a fall detection algorithm based on a convolutional neural network and multiple discriminant features is proposed. First, to extract richer feature information while maintaining real-time performance, the lightweight MobileNetV3 network is used to extract person features from the input image accurately and quickly. Second, a stack of three small convolution kernels combined with a residual network reduces the number of model parameters while preserving the same receptive field, ensuring real-time detection of human key points in the image. To improve the accuracy of fall-state discrimination, the angles between the human torso, the limbs, and the ground, together with the change in the height-to-width ratio of the person's bounding box, are used as fall-discrimination features. Finally, an Internet of Things system based on a cloud server is designed to alleviate the poor real-time performance caused by the limited computing power of user terminals. Extensive experiments on the URFD dataset and a self-built dataset show that the proposed algorithm achieves accuracies of 99.0% and 98.5%, respectively, offering higher accuracy and better generality than traditional fall detection algorithms.
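The two discriminant features named above (torso-ground angle and bounding-box aspect ratio) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the keypoint names, coordinate convention (image y-axis pointing down), and the thresholds are assumptions chosen for demonstration.

```python
import math

def torso_ground_angle(keypoints):
    """Angle in degrees between the torso line (mid-shoulder to mid-hip)
    and the horizontal ground line. `keypoints` maps joint names to
    (x, y) pixel coordinates; the joint names are illustrative."""
    sx = (keypoints["l_shoulder"][0] + keypoints["r_shoulder"][0]) / 2
    sy = (keypoints["l_shoulder"][1] + keypoints["r_shoulder"][1]) / 2
    hx = (keypoints["l_hip"][0] + keypoints["r_hip"][0]) / 2
    hy = (keypoints["l_hip"][1] + keypoints["r_hip"][1]) / 2
    # atan2 of vertical extent over horizontal extent of the torso line;
    # a standing person gives ~90 degrees, a lying person ~0 degrees.
    return math.degrees(math.atan2(abs(sy - hy), abs(sx - hx) or 1e-9))

def aspect_ratio(box):
    """Height-to-width ratio of the person bounding box (x1, y1, x2, y2)."""
    x1, y1, x2, y2 = box
    return (y2 - y1) / (x2 - x1)

def is_fall(keypoints, box, angle_thresh=45.0, ratio_thresh=1.0):
    """Flag a fall when the torso is near horizontal AND the bounding
    box is wider than it is tall. Threshold values are placeholders."""
    return (torso_ground_angle(keypoints) < angle_thresh
            and aspect_ratio(box) < ratio_thresh)
```

In practice such per-frame features would be combined over time (e.g. the *change* in aspect ratio across frames, as the abstract notes) rather than thresholded on a single frame.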