Xiao Hang, Xu Haobo, Wang Ying, Li Jiajun, Wang Yujie, Han Yinhe. Energy-Efficient Bit-Sparse Accelerator Design for Convolutional Neural Network[J]. Journal of Computer-Aided Design & Computer Graphics, 2023, 35(7): 1122-1131. DOI: 10.3724/SP.J.1089.2023.19478

Energy-Efficient Bit-Sparse Accelerator Design for Convolutional Neural Network

  • A high-energy-efficiency bit-sparse accelerator design is proposed to address the performance bottleneck of current bit-sparse architectures. First, an encoding method and its corresponding circuit are proposed to enhance the bit sparsity of convolutional neural networks, and a bit-serial circuit is employed to eliminate computations on zero bits on the fly, thereby accelerating neural network inference. Second, a column-sharing scheme is proposed to resolve the synchronization issue of bit-sparse architectures, providing further acceleration with small area and power overhead. Finally, the energy efficiency of different bit-sparse architectures is evaluated in SMIC 40 nm technology at 1 GHz. The experimental results show that the energy efficiency of the proposed accelerator is 544% and 179% higher than that of a dense accelerator (VAA) and a bit-sparse accelerator (LS-PRA), respectively.
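The zero-bit-skipping principle behind bit-serial acceleration can be illustrated with a minimal software sketch. This is an analogy under stated assumptions, not the paper's encoding method or circuit: the product is built by shift-and-add over the weight's non-zero bits only, so the serial cycle count tracks the weight's bit sparsity (popcount) rather than its full bit width. All names here are illustrative.

```python
def bit_serial_mac(weight: int, activation: int) -> tuple[int, int]:
    """Return (product, cycles): shift-and-add over non-zero bits only.

    Models a bit-serial multiplier that skips zero bits on the fly:
    each set bit of the weight costs one cycle; zero bits cost nothing.
    """
    acc = 0
    cycles = 0
    bit_pos = 0
    w = weight
    while w:
        if w & 1:                      # process only set (non-zero) bits
            acc += activation << bit_pos
            cycles += 1                # one serial cycle per non-zero bit
        w >>= 1
        bit_pos += 1
    return acc, cycles


# A sparse 7-bit weight (0b1000001) finishes in 2 cycles instead of 7.
product, cycles = bit_serial_mac(0b1000001, 5)
assert product == 0b1000001 * 5
assert cycles == 2
```

In this model, increasing the bit sparsity of the weights (as the proposed encoding aims to do) directly reduces the number of serial cycles per multiply-accumulate, which is the source of the speedup and energy savings claimed above.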
