Deep Quantum Graph Dreaming: Deciphering Neural Network Insights into Quantum Experiments (2309.07056v2)
Abstract: Despite their promise to facilitate new scientific discoveries, neural networks remain opaque, which makes it challenging to interpret the logic behind their findings. Here, we use an explainable-AI (XAI) technique called "inception" or "deep dreaming", which was invented in machine learning for computer vision. We use this technique to explore what neural networks learn about quantum optics experiments. Our story begins by training deep neural networks on the properties of quantum systems. Once trained, we "invert" the neural network, effectively asking how it imagines a quantum system with a specific property, and how it would continuously modify the quantum system to change that property. We find that the network can shift the initial distribution of properties of the quantum system, and that the learned strategies of the neural network can be conceptualized. Interestingly, we find that in the first layers the neural network identifies simple properties, while in the deeper ones it can identify complex quantum structures and even quantum entanglement. This is reminiscent of long-understood behavior in computer vision, which we now identify in a complex natural-science task. Our approach could help develop new, more interpretable AI-based scientific discovery techniques in quantum physics.
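The "inversion" step described in the abstract is, in essence, gradient ascent on the network's input rather than its weights: the trained weights are frozen, and the encoding of the quantum experiment is optimized until the network's predicted property reaches a target value. Below is a minimal sketch of that procedure in PyTorch; the architecture, tensor sizes, learning rate, and target value are illustrative assumptions for this sketch, not the authors' actual setup.

```python
import torch

# Hypothetical stand-in for the trained property-prediction network: it maps
# a real-valued encoding of a quantum experiment (e.g. the edge weights of a
# graph) to a scalar property such as an entanglement measure.
model = torch.nn.Sequential(
    torch.nn.Linear(16, 64),
    torch.nn.ReLU(),
    torch.nn.Linear(64, 1),
)
model.eval()
for p in model.parameters():
    p.requires_grad_(False)  # freeze the trained weights

# The input encoding becomes the trainable object instead.
x = torch.randn(16, requires_grad=True)
optimizer = torch.optim.Adam([x], lr=1e-2)

target = torch.tensor([0.9])  # desired property value (assumed for the sketch)
for step in range(500):
    optimizer.zero_grad()
    loss = (model(x) - target).pow(2).sum()  # push the prediction to target
    loss.backward()   # gradients flow into the input x, not the weights
    optimizer.step()

# x is now the network's "dream": an input it predicts to have the property.
```

Because only `x` is handed to the optimizer, each step continuously deforms the imagined experiment while the network's learned knowledge stays fixed, which is what allows the optimization trajectory to be read as a strategy the network has internalized.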