Abstract:
To address catastrophic forgetting in incremental learning of neural network models, a regularization-based class-incremental learning method built on self-supervision with hidden-layer distillation constraints is proposed, combining pseudo-label prediction, knowledge distillation, and parameter regularization. First, a regularization constraint based on Bayesian inference and information theory is introduced to evaluate the importance of model parameters. The representational ability of the model is then enhanced through self-supervised pseudo-label prediction, and Gaussian noise is added to the hidden-layer features to improve their generalization. The hidden-layer and output-layer features of previous tasks are preserved through a distillation constraint trained jointly with a cross-entropy classification loss. Experiments on the CIFAR-10 and CIFAR-100 datasets show improved performance; on CIFAR-100 the average accuracy reaches 64.16% and the average forgetting rate 15.95%. The proposed method effectively mitigates catastrophic forgetting.
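To illustrate how the loss terms named in the abstract could fit together, the following is a minimal PyTorch-style sketch, not the authors' implementation. It assumes a model that returns hidden features and logits, and hypothetical names (old_model, new_model, lambda_hid, lambda_out, noise_std, T); adding the Gaussian noise to the new model's hidden features before matching them to the old model's is one possible reading of the noise-perturbation step.

    # Minimal sketch (assumptions labeled above): cross-entropy classification,
    # hidden-layer distillation with Gaussian-noise perturbation, and
    # output-layer distillation against a frozen copy of the previous model.
    import torch
    import torch.nn.functional as F

    def incremental_loss(new_model, old_model, x, y,
                         lambda_hid=1.0, lambda_out=1.0, noise_std=0.1, T=2.0):
        # Forward pass of the current model; assumed to return (hidden, logits).
        h_new, logits_new = new_model(x)

        # Frozen model from the previous task provides distillation targets.
        with torch.no_grad():
            h_old, logits_old = old_model(x)

        # Cross-entropy classification loss on the current task labels.
        loss_ce = F.cross_entropy(logits_new, y)

        # Hidden-layer distillation: perturb the new hidden features with
        # Gaussian noise before matching the old features.
        h_noisy = h_new + noise_std * torch.randn_like(h_new)
        loss_hid = F.mse_loss(h_noisy, h_old)

        # Output-layer distillation with temperature-softened probabilities.
        loss_out = F.kl_div(F.log_softmax(logits_new / T, dim=1),
                            F.softmax(logits_old / T, dim=1),
                            reduction="batchmean") * (T * T)

        return loss_ce + lambda_hid * loss_hid + lambda_out * loss_out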