What Are Good Positional Encodings for Directed Graphs? (2407.20912v2)

Published 30 Jul 2024 in cs.LG

Abstract: Positional encodings (PEs) are essential for building powerful and expressive graph neural networks and graph transformers, as they effectively capture the relative spatial relationships between nodes. Although extensive research has been devoted to PEs in undirected graphs, PEs for directed graphs remain relatively unexplored. This work seeks to address this gap. We first introduce the notion of Walk Profile, a generalization of walk-counting sequences for directed graphs. A walk profile encompasses numerous structural features crucial for applications involving directed graphs, such as program analysis and circuit performance prediction. We identify the limitations of existing PE methods in representing walk profiles and propose a novel Multi-q Magnetic Laplacian PE, which extends the Magnetic Laplacian eigenvector-based PE by incorporating multiple potential factors. The new PE can provably express walk profiles. Furthermore, we generalize prior basis-invariant neural networks to enable the stable use of the new PE in the complex domain. Our numerical experiments validate the expressiveness of the proposed PEs and demonstrate their effectiveness on sorting-network satisfiability and general circuit benchmarks. Our code is available at https://github.com/Graph-COM/Multi-q-Maglap.
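For concreteness, below is a minimal sketch of the two constructions the abstract names, assuming the standard Magnetic Laplacian definition L^(q) = D − A_sym ⊙ exp(i 2πq (A − Aᵀ)). The walk-profile function reflects one natural reading of the abstract's description (counting walks between two nodes by how many forward and backward edges they use); the function names, the symmetrization choice, and the number of eigenvectors kept are illustrative assumptions, not the paper's reference implementation (see the linked repository for that).

```python
import numpy as np

def walk_profile(A, max_len):
    """Count walks by (forward, backward) edge usage.

    P[(f, b)][u, v] = number of walks from u to v using exactly f forward
    and b backward edges, in any interleaved order. Recurrence: a walk
    ending with a forward step extends P[(f-1, b)] by A; one ending with
    a backward step extends P[(f, b-1)] by A.T.
    """
    n = A.shape[0]
    P = {(0, 0): np.eye(n, dtype=np.int64)}
    for total in range(1, max_len + 1):
        for f in range(total + 1):
            b = total - f
            cnt = np.zeros((n, n), dtype=np.int64)
            if f > 0:
                cnt += P[(f - 1, b)] @ A
            if b > 0:
                cnt += P[(f, b - 1)] @ A.T
            P[(f, b)] = cnt
    return P

def multi_q_maglap_pe(A, qs=(0.0, 0.1, 0.25), k=4):
    """Eigenvector PE from Magnetic Laplacians at several potentials q."""
    A_sym = ((A + A.T) > 0).astype(float)   # symmetrized connectivity
    D = np.diag(A_sym.sum(axis=1))
    parts = []
    for q in qs:
        # Edge direction becomes a complex phase; q = 0 recovers the
        # ordinary undirected Laplacian eigenvector PE.
        phase = np.exp(1j * 2 * np.pi * q * (A - A.T))
        L = D - A_sym * phase                # Hermitian by construction
        _, vecs = np.linalg.eigh(L)          # eigenvalues in ascending order
        parts.append(vecs[:, :k])            # k lowest-frequency eigenvectors
    return np.concatenate(parts, axis=1)     # shape (n, len(qs) * k)

# Tiny example: directed path 0 -> 1 -> 2.
A = np.array([[0, 1, 0],
              [0, 0, 1],
              [0, 0, 0]])
P = walk_profile(A, max_len=2)
print(P[(1, 1)][0, 0])   # 1: the walk 0 -> 1 -> 0 (one forward, one backward)
print(multi_q_maglap_pe(A, qs=(0.0, 0.25), k=2).shape)  # (3, 4)
```

Note that complex eigenvectors are only determined up to a unit-modulus phase (and up to rotations within repeated eigenspaces), which is why the paper generalizes basis-invariant networks to the complex domain; the raw eigenvectors above are not directly comparable across graphs without such invariant processing.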

Authors (3)
  1. Yinan Huang (10 papers)
  2. Haoyu Wang (309 papers)
  3. Pan Li (164 papers)