Spectral Invariant Learning for Dynamic Graphs under Distribution Shifts (2403.05026v1)

Published 8 Mar 2024 in cs.LG and cs.AI

Abstract: Dynamic graph neural networks (DyGNNs) currently struggle to handle the distribution shifts that are inherent in dynamic graphs. Existing work on DyGNNs in out-of-distribution settings focuses only on the time domain and fails to handle cases involving distribution shifts in the spectral domain. In this paper, we discover that there exist cases with distribution shifts that are unobservable in the time domain yet observable in the spectral domain, and propose to study distribution shifts on dynamic graphs in the spectral domain for the first time. However, this investigation poses two key challenges: i) it is non-trivial to capture the different graph patterns driven by the various frequency components entangled in the spectral domain; and ii) it remains unclear how to handle distribution shifts with the discovered spectral patterns. To address these challenges, we propose Spectral Invariant Learning for Dynamic Graphs under Distribution Shifts (SILD), which handles distribution shifts on dynamic graphs by capturing and utilizing invariant and variant spectral patterns. Specifically, we first design a DyGNN with a Fourier transform to obtain ego-graph trajectory spectra, so that the mixed dynamic graph patterns are transformed into separate frequency components. We then develop a disentangled spectrum mask that filters graph dynamics from the various frequency components and discovers the invariant and variant spectral patterns. Finally, we propose invariant spectral filtering, which encourages the model to rely on the invariant patterns for generalization under distribution shifts. Experimental results on synthetic and real-world dynamic graph datasets demonstrate the superiority of our method for both node classification and link prediction tasks under distribution shifts.
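The pipeline the abstract describes (Fourier-transforming ego-graph trajectories, then splitting invariant from variant frequency components with a learnable mask) can be illustrated with a short sketch. This is not the authors' released code: the module name `SpectrumMaskSketch`, the tensor shapes, and the soft sigmoid mask are illustrative assumptions; only the FFT / mask / inverse-FFT structure follows the steps named in the abstract.

```python
# Minimal sketch (assumed, not SILD's official implementation) of:
# 1) taking per-node trajectories of ego-graph representations over time,
# 2) Fourier-transforming each trajectory into frequency components,
# 3) applying a learnable spectrum mask to split invariant vs. variant parts.

import torch
import torch.nn as nn

class SpectrumMaskSketch(nn.Module):
    def __init__(self, hidden_dim: int, num_freqs: int):
        super().__init__()
        # One learnable logit per (frequency bin, feature dim); a sigmoid
        # gives a soft mask routing each component to the "invariant" or
        # "variant" branch. Hard/discrete masking is a possible variant.
        self.mask_logits = nn.Parameter(torch.zeros(num_freqs, hidden_dim))

    def forward(self, traj: torch.Tensor):
        # traj: [num_nodes, T, hidden_dim] ego-graph trajectory
        # representations, assumed to come from any discrete-time
        # DyGNN encoder applied to the graph snapshots.
        spec = torch.fft.rfft(traj, dim=1)       # [N, T//2+1, D], complex
        mask = torch.sigmoid(self.mask_logits)   # soft 0/1 per frequency
        inv_spec = spec * mask                   # invariant spectral part
        var_spec = spec * (1.0 - mask)           # variant spectral part
        # Back to the time domain for downstream prediction heads.
        z_inv = torch.fft.irfft(inv_spec, n=traj.size(1), dim=1)
        z_var = torch.fft.irfft(var_spec, n=traj.size(1), dim=1)
        return z_inv, z_var

# Toy usage: 5 nodes, 16 snapshots, 8-dimensional representations.
traj = torch.randn(5, 16, 8)
masker = SpectrumMaskSketch(hidden_dim=8, num_freqs=16 // 2 + 1)
z_inv, z_var = masker(traj)
print(z_inv.shape, z_var.shape)  # torch.Size([5, 16, 8]) for both
```

In the paper's terms, training would then attach the task loss to predictions from `z_inv` and add an invariance-style penalty (e.g., penalizing risk variation under interventions on the variant components) so the model relies on invariant spectral patterns; that objective is SILD's invariant spectral filtering, which this sketch deliberately does not reproduce.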

Authors (8)
  1. Zeyang Zhang (28 papers)
  2. Xin Wang (1307 papers)
  3. Ziwei Zhang (40 papers)
  4. Zhou Qin (6 papers)
  5. Weigao Wen (4 papers)
  6. Hui Xue (109 papers)
  7. Haoyang Li (95 papers)
  8. Wenwu Zhu (104 papers)