PIMSYN: Synthesizing Processing-in-memory CNN Accelerators (2402.18114v1)
Abstract: Processing-in-memory (PIM) architectures have been regarded as a promising solution for CNN acceleration. Existing PIM accelerator designs rely heavily on expert experience and incur significant manual design overhead, and manual design cannot effectively explore and optimize architecture implementations. In this work, we develop PIMSYN, an automatic framework for synthesizing PIM-based CNN accelerators, which greatly facilitates architecture design and helps generate energy-efficient accelerators. PIMSYN automatically transforms CNN applications into execution workflows and hardware constructions of PIM accelerators. To systematically optimize the architecture, we embed an architectural exploration flow into the synthesis framework, providing a more comprehensive design space. Experiments demonstrate that PIMSYN improves power efficiency by several times compared with existing works. PIMSYN is available at https://github.com/lixixi-jook/PIMSYN-NN.
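The architectural exploration flow the abstract mentions can be pictured as a search over candidate PIM hardware configurations ranked by a power-efficiency estimate. The sketch below is purely illustrative: the function names, the toy cost model, and the parameter grid are assumptions for exposition, not PIMSYN's actual API or models.

```python
# Hypothetical sketch of the kind of architectural design-space
# exploration (DSE) loop a PIM synthesis framework might embed.
# All names, cost-model terms, and parameter ranges are illustrative
# assumptions, not taken from PIMSYN.
from itertools import product

def estimate_power_efficiency(num_macros, crossbar_size, layer_macs):
    """Toy cost model: MACs per cycle per watt for one candidate config."""
    # Assume each crossbar macro computes crossbar_size**2 MACs per cycle.
    macs_per_cycle = num_macros * crossbar_size ** 2
    # Toy power model: fixed cost per macro plus a size-dependent term.
    power = num_macros * (0.5 + 0.001 * crossbar_size ** 2)
    # Utilization drops when the hardware exceeds what the workload fills.
    total_macs = sum(layer_macs)
    utilization = min(1.0, total_macs / (macs_per_cycle * 100))
    return macs_per_cycle * utilization / power

def explore(layer_macs):
    """Exhaustively search a small grid of PIM configurations."""
    best = None
    for num_macros, xbar in product([64, 128, 256], [128, 256, 512]):
        eff = estimate_power_efficiency(num_macros, xbar, layer_macs)
        if best is None or eff > best[0]:
            best = (eff, num_macros, xbar)
    return best

# Per-layer MAC counts for a small CNN (illustrative numbers).
layer_macs = [1_000_000, 4_000_000, 2_000_000]
eff, macros, xbar = explore(layer_macs)
print(macros, xbar)
```

A real synthesis flow would replace the toy model with calibrated area/power/latency models and couple the search with workflow generation, but the overall select-evaluate-keep-best structure is the same.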
Authors:
- Wanqian Li
- Xiaotian Sun
- Xinyu Wang
- Lei Wang
- Yinhe Han
- Xiaoming Chen