The Ouroboros of Memristors: Neural Networks Facilitating Memristor Programming (2403.06712v1)
Abstract: Memristive devices hold promise to improve the scale and efficiency of machine learning and neuromorphic hardware, thanks to their compact size, low power consumption, and the ability to perform matrix multiplications in constant time. However, on-chip training with memristor arrays still faces challenges, including device-to-device and cycle-to-cycle variations, switching non-linearity, and especially SET and RESET asymmetry. To combat device non-linearity and asymmetry, we propose to program memristors by harnessing neural networks that map desired conductance updates to the required pulse times. With our method, approximately 95% of devices can be programmed to within a relative percentage difference of ±50% of the target conductance after just one attempt. Our approach substantially reduces memristor programming delays compared to traditional write-and-verify methods, making it well suited to on-chip training. Furthermore, once deployed, the proposed neural network can itself be accelerated by memristor arrays, providing programming assistance with lower hardware overhead than previous approaches. This work advances the practical application of memristors, particularly by reducing programming delays, and points toward the future development of memristor-based machine learning accelerators.
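The core idea, a network that maps the present conductance and a desired conductance update to a programming pulse time, can be illustrated with a minimal sketch. Everything below is an assumption made for illustration: the saturating `surrogate_device` response, the conductance and pulse-time ranges, and the network size are invented stand-ins, not the authors' device model or training setup. Only the input/output interface (present conductance and desired update in, pulse time out) and the one-shot ±50% relative-percentage-difference check follow the abstract.

```python
# Hedged sketch, not the paper's code: train a small MLP to invert a
# (made-up) memristor SET response, mapping (g_now, delta_g) -> pulse time.
import torch
import torch.nn as nn

torch.manual_seed(0)

def surrogate_device(g_now, pulse_t):
    # Hypothetical saturating SET response: longer pulses yield diminishing
    # conductance gains. A real setup would use measured device data instead.
    g_max = 1.0
    return g_now + (g_max - g_now) * (1.0 - torch.exp(-pulse_t))

# Synthetic training pairs: sample states and pulses, record the update they cause.
g_now = torch.rand(10_000, 1) * 0.8      # assumed conductance range (a.u.)
pulse_t = torch.rand(10_000, 1) * 3.0    # assumed pulse-time range (a.u.)
delta_g = surrogate_device(g_now, pulse_t) - g_now

# Small MLP: inputs are (present conductance, desired update), output is pulse time.
model = nn.Sequential(nn.Linear(2, 32), nn.ReLU(),
                      nn.Linear(32, 32), nn.ReLU(),
                      nn.Linear(32, 1))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for _ in range(2000):
    pred_t = model(torch.cat([g_now, delta_g], dim=1))
    loss = nn.functional.mse_loss(pred_t, pulse_t)
    opt.zero_grad()
    loss.backward()
    opt.step()

# One-shot programming check: ask the network for a pulse time, apply it to the
# surrogate device, and compute the relative percentage difference (RPD).
with torch.no_grad():
    g0 = torch.full((1000, 1), 0.2)                # starting conductance
    g_target = torch.rand(1000, 1) * 0.5 + 0.3     # random targets
    t = model(torch.cat([g0, g_target - g0], dim=1)).clamp(min=0.0)
    g1 = surrogate_device(g0, t)
    rpd = 200.0 * (g1 - g_target) / (g1 + g_target)
    frac = (rpd.abs() < 50).float().mean().item()
    print(f"fraction within ±50% RPD after one pulse: {frac:.2%}")
```

In this sketch the network is trained offline on sampled device responses and then queried once per target, mirroring the abstract's contrast with iterative write-and-verify loops; the reported fraction depends entirely on the invented surrogate and is not the paper's 95% result.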