Integration of Physics-Derived Memristor Models with Machine Learning Frameworks (2403.06746v1)
Abstract: Simulation frameworks such as MemTorch, DNN+NeuroSim, and aihwkit are commonly used to facilitate the end-to-end co-design of memristive ML accelerators. These simulators can take device nonidealities into account and are integrated with modern ML frameworks. However, memristors in these simulators are modeled with either lookup tables or simple analytic models with basic nonlinearities. These simple models are unable to capture certain performance-critical aspects of device nonidealities. For example, they ignore the physical cause of switching, which leads to errors in predicted switching times and thus incorrect estimates of conductance states. This work aims to bring physical dynamics into the modeling of nonidealities while remaining compatible with GPU accelerators. We focus on Valence Change Memory (VCM) cells, where the switching nonlinearity and SET/RESET asymmetry are tightly coupled to the thermal resistance, ion mobility, Schottky barrier height, parasitic resistance, and other effects. The resulting dynamics require solving an ordinary differential equation (ODE) that captures changes in the oxygen vacancy concentration. We modified a physics-derived SPICE-level VCM model, integrated it with the aihwkit simulator, and tested the performance with the MNIST dataset. Results show that noise disrupting the SET/RESET matching degrades network performance the most. This work serves as a tool for evaluating how physical dynamics in memristive devices affect neural network accuracy and can be used to guide the development of future integrated devices.
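To illustrate the kind of model the abstract describes, the sketch below integrates an ODE for a normalized oxygen vacancy state with a forward-Euler step, vectorized in PyTorch so it runs on CPU or GPU. This is not the authors' SPICE-level VCM model or the aihwkit integration; the function `vcm_step` and all parameters (`N_min`, `N_max`, `tau`, `E_a`, `gamma`, temperature `T`) are hypothetical placeholders chosen for illustration, and the sinh-type field acceleration with an Arrhenius factor is only one common form such dynamics can take.

```python
# Minimal sketch (assumed form, not the paper's model): one forward-Euler step
# of a normalized oxygen-vacancy state N for a batch of VCM cells, vectorized
# with PyTorch so the same code runs on CPU or GPU.
import math
import torch

def vcm_step(N, V, dt, *, N_min=0.01, N_max=1.0, tau=1e-9,
             E_a=0.6, k_B=8.617e-5, T=300.0, gamma=2.0):
    """Advance the normalized vacancy concentration N by one Euler step.

    dN/dt is modeled as a field-accelerated, thermally activated drift term:
    SET (V > 0) increases N, RESET (V < 0) decreases it, and the
    (N_max - N) / (N - N_min) window factors impose saturation at the bounds.
    All parameter values are illustrative, not fitted device parameters.
    """
    arrh = math.exp(-E_a / (k_B * T))          # Arrhenius temperature factor
    rate = torch.where(
        V >= 0,
        (N_max - N) * torch.sinh(gamma * V) * arrh / tau,   # SET direction
        (N - N_min) * torch.sinh(gamma * V) * arrh / tau,   # RESET direction
    )
    return torch.clamp(N + dt * rate, N_min, N_max)

# Example: apply a 1 V, 10 ns SET pulse (100 sub-steps of 0.1 ns) to a
# 4x4 array of cells starting near the high-resistance state.
if __name__ == "__main__":
    device = "cuda" if torch.cuda.is_available() else "cpu"
    N = torch.full((4, 4), 0.1, device=device)   # initial vacancy state
    V = torch.full((4, 4), 1.0, device=device)   # applied voltage per cell
    for _ in range(100):
        N = vcm_step(N, V, dt=1e-10)
    print(N)
```

In a crossbar simulator, a state update of this kind would be called once per programming pulse for every cell in a weight tile, with the cell conductance then derived from the updated vacancy state; how that mapping and the aihwkit integration are done in the paper is not reproduced here.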
- C. Lammie and M. R. Azghadi, “MemTorch: A simulation framework for deep memristive cross-bar architectures,” in 2020 IEEE International Symposium on Circuits and Systems (ISCAS). IEEE, 2020, pp. 1–5.
- C. Lammie, W. Xiang, B. Linares-Barranco, and M. R. Azghadi, “MemTorch: An open-source simulation framework for memristive deep learning systems,” Neurocomputing, vol. 485, pp. 124–133, 2022.
- X. Peng, S. Huang, H. Jiang, A. Lu, and S. Yu, “DNN+NeuroSim v2.0: An end-to-end benchmarking framework for compute-in-memory accelerators for on-chip training,” IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, vol. 40, no. 11, pp. 2306–2319, 2020.
- S. Yu, H. Jiang, S. Huang, X. Peng, and A. Lu, “Compute-in-memory chips for deep learning: Recent trends and prospects,” IEEE Circuits and Systems Magazine, vol. 21, no. 3, pp. 31–56, 2021.
- M. J. Rasch, D. Moreda, T. Gokmen, M. Le Gallo, F. Carta, C. Goldberg, K. El Maghraoui, A. Sebastian, and V. Narayanan, “A flexible and fast PyTorch toolkit for simulating training and inference on analog crossbar arrays,” in 2021 IEEE 3rd International Conference on Artificial Intelligence Circuits and Systems (AICAS). IEEE, 2021, pp. 1–4.
- F. Cüppers, S. Menzel, C. Bengel, A. Hardtdegen, M. von Witzleben, U. Böttger, R. Waser, and S. Hoffmann-Eifert, “Exploiting the switching dynamics of HfO2-based ReRAM devices for reliable analog memristive behavior,” APL Materials, vol. 7, no. 9, p. 091105, 2019.
- C. Bengel, A. Siemon, F. Cüppers, S. Hoffmann-Eifert, A. Hardtdegen, M. von Witzleben, L. Hellmich, R. Waser, and S. Menzel, “Variability-aware modeling of filamentary oxide-based bipolar resistive switching cells using SPICE level compact models,” IEEE Transactions on Circuits and Systems I: Regular Papers, vol. 67, no. 12, pp. 4618–4630, 2020.
- V. Ntinas, A. Ascoli, I. Messaris, Y. Wang, V. Rana, S. Menzel, and R. Tetzlaff, “Towards simplified physics-based memristor modeling of valence change mechanism devices,” IEEE Transactions on Circuits and Systems II: Express Briefs, 2022.
- D. Ielmini and H.-S. P. Wong, “In-memory computing with resistive switching devices,” Nature Electronics, vol. 1, no. 6, pp. 333–343, 2018.
- C. Lee, K. Noh, W. Ji, T. Gokmen, and S. Kim, “Impact of asymmetric weight update on neural network training with Tiki-Taka algorithm,” Frontiers in Neuroscience, p. 1554, 2022.
- C. Lammie, W. Xiang, and M. R. Azghadi, “Modeling and simulating in-memory memristive deep learning systems: An overview of current efforts,” Array, p. 100116, 2021.
- X. Fu, Q. Li, W. Wang, H. Xu, Y. Wang, W. Wang, H. Yu, and Z. Li, “High speed memristor-based ripple carry adders in 1T1R array structure,” IEEE Transactions on Circuits and Systems II: Express Briefs, 2022.
- M. Mayahinia, A. Singh, C. Bengel, S. Wiefels, M. A. Lebdeh, S. Menzel, D. J. Wouters, A. Gebregiorgis, R. Bishnoi, R. Joshi et al., “A voltage-controlled, oscillation-based ADC design for computation-in-memory architectures using emerging ReRAMs,” ACM Journal on Emerging Technologies in Computing Systems (JETC), vol. 18, no. 2, pp. 1–25, 2022.
- A. Hardtdegen, C. La Torre, F. Cüppers, S. Menzel, R. Waser, and S. Hoffmann-Eifert, “Improved switching stability and the effect of an internal series resistor in HfO2/TiOx bilayer ReRAM cells,” IEEE Transactions on Electron Devices, vol. 65, no. 8, pp. 3229–3236, 2018.
- Y. LeCun and C. Cortes, “The MNIST database of handwritten digits,” 2005.
- T. Gokmen and W. Haensch, “Algorithm for training neural networks on resistive device arrays,” Frontiers in Neuroscience, vol. 14, p. 103, 2020.