Li Jiajun, Xu Haobo, Wang Yujie, Xiao Hang, Wang Ying, Han Yinhe, Li Xiaowei. Design and Training of Binarized Neural Networks for Highly Efficient Accelerators[J]. Journal of Computer-Aided Design & Computer Graphics, 2023, 35(6): 961-969. DOI: 10.3724/SP.J.1089.2023.19461

Design and Training of Binarized Neural Networks for Highly Efficient Accelerators

  • To address computation overflow and multiplier dependence in binarized neural network (BNN) accelerators, a set of BNN design and training methods is proposed. First, an accurate simulator is designed to ensure that the BNN loses no accuracy after deployment. Second, the convolutional layers and activation functions of the BNN are optimized to reduce the total amount of overflow. Third, an operator named shift-based batch normalization is proposed to free the BNN from its dependence on multiplication and to reduce memory accesses. Finally, a collaborative training framework based on overflow heuristics is proposed for the improved BNN to ensure that model training converges. Experimental results show that, compared with 10 keyword-spotting methods, the accelerator reduces on-chip computation by more than 49.1% and increases speed by at least 21.0% without significant loss of accuracy.
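The abstract does not give the exact definition of the proposed shift-based batch normalization, but the general idea behind such operators is to fold inference-time batch normalization into a per-channel affine transform and round the scale factor to the nearest power of two, so that hardware can replace the multiplication with an arithmetic shift. A minimal sketch of this idea, with all function and parameter names chosen here for illustration:

```python
import numpy as np

def shift_based_batchnorm(x, gamma, beta, mean, var, eps=1e-5):
    """Inference-time batch normalization with a power-of-two scale.

    Standard BN computes y = gamma * (x - mean) / sqrt(var + eps) + beta.
    Here the per-channel scale is rounded to the nearest power of two,
    so an accelerator can implement the multiply as a bit shift.
    (Illustrative sketch only; the paper's exact operator may differ.)
    """
    scale = gamma / np.sqrt(var + eps)           # exact BN scale factor
    shift = np.round(np.log2(np.abs(scale)))     # nearest power-of-two exponent
    ap2 = np.sign(scale) * (2.0 ** shift)        # approximate power-of-two scale
    return ap2 * (x - mean) + beta               # in hardware: shift + add, no multiply
```

Because the integer exponent `shift` fully describes the scale, the accelerator only needs to store one small integer per channel instead of a full-precision multiplier, which is consistent with the abstract's claim of reduced memory access.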
