
Interconnect Parasitics and Partitioning in Fully-Analog In-Memory Computing Architectures (2201.12480v1)

Published 29 Jan 2022 in cs.AR, cs.ET, and cs.LG

Abstract: Fully-analog in-memory computing (IMC) architectures that implement both matrix-vector multiplication and non-linear vector operations within the same memory array have shown promising performance benefits over conventional IMC systems due to the removal of energy-hungry signal conversion units. However, keeping the computation in the analog domain across the entire deep neural network (DNN) introduces potential sensitivity to interconnect parasitics. In this paper, we therefore investigate the effect of wire parasitic resistance and capacitance on the accuracy of DNN models deployed on fully-analog IMC architectures. Moreover, we propose a partitioning mechanism that alleviates the impact of parasitics while keeping the computation in the analog domain, by dividing large arrays into multiple smaller partitions. SPICE circuit simulation results for a 400×120×84×10 DNN model deployed on a fully-analog IMC circuit show that 94.84% accuracy can be achieved for MNIST classification with 16, 8, and 8 horizontal partitions and 8, 8, and 1 vertical partitions for the first, second, and third layers of the DNN, respectively, which is comparable to the ~97% accuracy realized by a digital implementation on a CPU. We show that these accuracy benefits come at the cost of higher power consumption, due to the extra circuitry required to handle partitioning.
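The partitioning idea in the abstract can be illustrated numerically: a large crossbar computing y = Wx is split into a grid of smaller sub-arrays whose partial products are accumulated, so each sub-array has shorter word/bit lines and therefore smaller parasitic drops. The sketch below is a hypothetical functional model only (it checks that partitioned accumulation reproduces the full product; it does not simulate the SPICE-level parasitics or the analog accumulation circuitry from the paper); the function name and partition counts are illustrative, with the 8×16 split and the 400→120 layer shape borrowed from the paper's first layer.

```python
import numpy as np

def partitioned_mvm(W, x, row_parts, col_parts):
    """Functionally model a crossbar MVM split into a grid of sub-arrays.

    Each sub-array multiplies a slice of x by a tile of W; partial
    results are summed, mirroring analog partial-current accumulation.
    """
    m, n = W.shape
    y = np.zeros(m)
    row_edges = np.linspace(0, m, row_parts + 1, dtype=int)
    col_edges = np.linspace(0, n, col_parts + 1, dtype=int)
    for r0, r1 in zip(row_edges[:-1], row_edges[1:]):
        for c0, c1 in zip(col_edges[:-1], col_edges[1:]):
            # Each tile sees only a slice of inputs/outputs, so its
            # wires (and their parasitic R/C) are correspondingly shorter.
            y[r0:r1] += W[r0:r1, c0:c1] @ x[c0:c1]
    return y

rng = np.random.default_rng(0)
W = rng.standard_normal((120, 400))   # first-layer shape from the paper
x = rng.standard_normal(400)
# Ideally (no parasitics), the partitioned result equals the full MVM.
assert np.allclose(partitioned_mvm(W, x, 8, 16), W @ x)
```

In the ideal (parasitic-free) case the partitioning is mathematically exact; the paper's contribution lies in showing that, with realistic wire parasitics, the partitioned version stays close to this ideal while the monolithic array does not.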

Authors (3)
  1. Md Hasibul Amin (12 papers)
  2. Mohammed Elbtity (6 papers)
  3. Ramtin Zand (38 papers)
Citations (9)
