CAMEL: Co-Designing AI Models and Embedded DRAMs for Efficient On-Device Learning (2305.03148v3)
Abstract: On-device learning allows AI models to adapt to user data, thereby enhancing service quality on edge platforms. However, training AI models on resource-limited devices poses significant challenges due to the demanding computing workload and the substantial memory consumption and data access required by deep neural networks (DNNs). To address these issues, we propose using embedded dynamic random-access memory (eDRAM) as the primary storage medium for transient training data. Compared with static random-access memory (SRAM), eDRAM offers higher storage density and lower leakage power, which reduces both data-access cost and standby power. However, eDRAM requires periodic, power-hungry refresh operations to preserve stored data, and these refreshes can degrade system performance. To minimize expensive eDRAM refreshes, it is beneficial to shorten the lifetime of data stored during training. To this end, we adopt an algorithm-hardware co-design approach, introducing a family of reversible DNN architectures that effectively decrease data lifetime and storage costs throughout training. We also present a highly efficient on-device training engine named \textit{CAMEL}, which leverages eDRAM as the primary on-chip memory. This engine enables efficient on-device training with significantly reduced memory usage and off-chip DRAM traffic while maintaining superior training accuracy. We evaluate the CAMEL system on multiple DNNs and datasets, demonstrating a $2.5\times$ training speedup and $2.8\times$ training energy savings over baseline hardware platforms.
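To make the data-lifetime argument concrete, the sketch below (not taken from the paper) illustrates the additive-coupling structure used by reversible residual networks (RevNets), the general family of architectures the abstract builds on: because a block's inputs can be recomputed exactly from its outputs, intermediate activations need not be kept alive until the backward pass, which is what shortens the lifetime of transient training data and reduces eDRAM refresh pressure. The sub-functions F and G and the toy weights are illustrative placeholders, not the paper's layers.

```python
# Minimal sketch of a reversible (additive-coupling) block, assuming a
# RevNet-style design. Shows that block inputs are exactly recoverable from
# block outputs, so activations need not be stored for backpropagation.
import numpy as np

rng = np.random.default_rng(0)
W_f, W_g = rng.standard_normal((4, 4)), rng.standard_normal((4, 4))  # toy weights

def F(x):
    # Arbitrary illustrative sub-function (any differentiable map works here).
    return np.tanh(x @ W_f)

def G(x):
    # Second illustrative sub-function of the coupling block.
    return np.tanh(x @ W_g)

def forward(x1, x2):
    """Additive coupling: y1 = x1 + F(x2), y2 = x2 + G(y1)."""
    y1 = x1 + F(x2)
    y2 = x2 + G(y1)
    return y1, y2

def inverse(y1, y2):
    """Recompute the inputs from the outputs during the backward pass,
    so (x1, x2) never need to sit in memory between forward and backward."""
    x2 = y2 - G(y1)
    x1 = y1 - F(x2)
    return x1, x2

# Round-trip check: reconstruction is exact (up to floating-point error).
x1, x2 = rng.standard_normal((2, 4)), rng.standard_normal((2, 4))
y1, y2 = forward(x1, x2)
r1, r2 = inverse(y1, y2)
assert np.allclose(x1, r1) and np.allclose(x2, r2)
```

In a non-reversible network, x1 and x2 would have to be retained (and refreshed in eDRAM) until the backward pass consumes them; with the coupling above they can be discarded immediately after the forward step and recomputed on demand.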