TANGO: Time-Reversal Latent GraphODE for Multi-Agent Dynamical Systems (2310.06427v1)

Published 10 Oct 2023 in cs.LG and cs.AI

Abstract: Learning complex multi-agent system dynamics from data is crucial across many domains, such as physical simulation and material modeling. Extending purely data-driven approaches, existing physics-informed methods such as Hamiltonian Neural Networks strictly follow the energy conservation law as an inductive bias, making their learning more sample-efficient. However, many real-world systems do not strictly conserve energy, such as spring systems with friction. Recognizing this, we turn our attention to a broader physical principle: Time-Reversal Symmetry, which states that the dynamics of a system remain invariant when time is reversed. This principle still preserves energy for conservative systems and, at the same time, serves as a strong inductive bias for non-conservative, reversible systems. To inject this inductive bias, we propose a simple yet effective self-supervised regularization term as a soft constraint that aligns the forward and backward trajectories predicted by a continuous graph neural network-based ordinary differential equation (GraphODE). It effectively imposes time-reversal symmetry, enabling more accurate model predictions across a wider range of dynamical systems under classical mechanics. In addition, we provide theoretical analysis showing that our regularization essentially minimizes higher-order Taylor expansion terms during the ODE integration steps, which makes our model more noise-tolerant and even applicable to irreversible systems. Experimental results on a variety of physical systems demonstrate the effectiveness of our proposed method. In particular, it achieves an 11.5% MSE improvement on a challenging chaotic triple-pendulum system.
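The core idea of the regularization — penalizing the mismatch between a forward trajectory and its time-reversed counterpart — can be illustrated with a toy sketch. The snippet below is not the paper's implementation: it substitutes a hand-coded harmonic oscillator and plain Euler integration (both assumptions made here for self-containment) for the learned GraphODE, and applies the classical-mechanics reversing operation R: (q, p) -> (q, -p) to compare the two trajectories.

```python
import numpy as np

def f(z):
    # Toy dynamics standing in for the learned ODE function:
    # harmonic oscillator with state z = (q, p); dq/dt = p, dp/dt = -q.
    q, p = z
    return np.array([p, -q])

def integrate(z0, dt, n_steps):
    # Explicit Euler integration; returns the trajectory including z0.
    traj = [z0]
    z = z0
    for _ in range(n_steps):
        z = z + dt * f(z)
        traj.append(z)
    return np.array(traj)

def time_reversal_loss(z0, dt=0.01, n_steps=100):
    # Forward trajectory from the initial state.
    fwd = integrate(z0, dt, n_steps)
    # Reverse operation R: (q, p) -> (q, -p), applied to the final state,
    # then integrate forward again (which traverses the dynamics backward).
    zT_rev = fwd[-1] * np.array([1.0, -1.0])
    bwd = integrate(zT_rev, dt, n_steps)
    # Apply R again and flip the time axis so both trajectories align.
    bwd_aligned = (bwd * np.array([1.0, -1.0]))[::-1]
    # Soft constraint: mean squared mismatch between the two trajectories.
    return np.mean((fwd - bwd_aligned) ** 2)

loss = time_reversal_loss(np.array([1.0, 0.0]))
print(loss)
```

In TANGO this penalty would be added to the prediction loss so that training drives the learned ODE toward time-reversal symmetry; here the loss is small but nonzero because Euler integration itself is not time-reversal symmetric.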

Authors (9)
  1. Zijie Huang
  2. Wanjia Zhao
  3. Jingdong Gao
  4. Ziniu Hu
  5. Xiao Luo
  6. Yadi Cao
  7. Yuanzhou Chen
  8. Yizhou Sun
  9. Wei Wang
Citations (3)
