
Hypergraph Node Representation Learning with One-Stage Message Passing (2312.00336v1)

Published 1 Dec 2023 in cs.LG and cs.IR

Abstract: Hypergraphs, as an expressive and general structure, have attracted considerable attention from various research domains. Most existing hypergraph node representation learning techniques are based on graph neural networks and thus adopt the two-stage message passing paradigm (i.e., node -> hyperedge -> node). This paradigm focuses only on local information propagation and does not effectively take global information into account, resulting in suboptimal representations. Our theoretical analysis of representative two-stage message passing methods shows that, mathematically, they model different ways of local message passing through hyperedges and can be unified into one-stage message passing (i.e., node -> node). However, they still model only local information. Motivated by this analysis, we propose a novel one-stage message passing paradigm that models both global and local information propagation for hypergraphs. We integrate this paradigm into HGraphormer, a Transformer-based framework for hypergraph node representation learning. HGraphormer injects hypergraph structure information (local information) into Transformers (global information) by combining the attention matrix with the hypergraph Laplacian. Extensive experiments demonstrate that HGraphormer outperforms recent hypergraph learning methods on the semi-supervised hypernode classification task across five representative benchmark datasets, setting new state-of-the-art performance with accuracy improvements between 2.52% and 6.70%. Our code and datasets are available.
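To make the core idea concrete, here is a minimal sketch (not the authors' released code) of a one-stage layer that blends global self-attention with local hypergraph structure. The single-head attention, the unit hyperedge weights, and the mixing coefficient `beta` are illustrative assumptions; the paper's exact combination of the attention matrix and the hypergraph Laplacian may differ.

```python
# Hedged sketch of one-stage (node -> node) message passing that mixes
# global self-attention with a normalized hypergraph propagation matrix.
# `beta`, single-head attention, and unit hyperedge weights are assumptions,
# not the authors' exact formulation.
import torch
import torch.nn.functional as F

def hypergraph_propagation(H: torch.Tensor) -> torch.Tensor:
    """Normalized propagation matrix D_v^{-1/2} H D_e^{-1} H^T D_v^{-1/2}
    built from the (n x m) node-hyperedge incidence matrix H
    (hyperedge weights assumed to be 1)."""
    Dv = H.sum(dim=1)  # node degrees, shape (n,)
    De = H.sum(dim=0)  # hyperedge degrees, shape (m,)
    Dv_inv_sqrt = torch.diag(Dv.clamp(min=1).pow(-0.5))
    De_inv = torch.diag(1.0 / De.clamp(min=1))
    return Dv_inv_sqrt @ H @ De_inv @ H.T @ Dv_inv_sqrt  # (n, n)

def one_stage_layer(X, H, Wq, Wk, Wv, beta=0.5):
    """One layer: global attention blended with local hypergraph structure."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    attn = F.softmax(Q @ K.T / K.shape[-1] ** 0.5, dim=-1)  # (n, n), global
    P = hypergraph_propagation(H)                           # (n, n), local
    mixed = beta * attn + (1.0 - beta) * P                  # node -> node
    return mixed @ V

# Toy usage: 4 nodes, 2 hyperedges, 8-dimensional features.
n, m, d = 4, 2, 8
H = torch.tensor([[1., 0.], [1., 1.], [0., 1.], [1., 1.]])  # incidence matrix
X = torch.randn(n, d)
Wq, Wk, Wv = (torch.randn(d, d) for _ in range(3))
out = one_stage_layer(X, H, Wq, Wk, Wv)
print(out.shape)  # torch.Size([4, 8])
```

Because both the attention matrix and the propagation matrix are (n x n) node-to-node operators, a single weighted sum performs global and local message passing in one stage, rather than routing messages through hyperedges first.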

Authors (5)
  1. Shilin Qu (8 papers)
  2. Weiqing Wang (54 papers)
  3. Yuan-Fang Li (90 papers)
  4. Xin Zhou (319 papers)
  5. Fajie Yuan (33 papers)
Citations (1)
