
Deep Structural Knowledge Exploitation and Synergy for Estimating Node Importance Value on Heterogeneous Information Networks (2402.12411v1)

Published 19 Feb 2024 in cs.SI, cs.AI, and cs.LG

Abstract: The node importance estimation problem has conventionally been studied through homogeneous network topology analysis. To handle network heterogeneity, a few recent methods employ graph neural models to automatically learn from diverse sources of information. However, the major concern is that their fully adaptive learning process may lead to insufficient information exploration, reducing the problem to isolated node value prediction with underperformance and limited interpretability. In this work, we propose a novel learning framework: SKES. Unlike previous automatic learning designs, SKES exploits heterogeneous structural knowledge to enrich the informativeness of node representations. Based on a sufficiently uninformative reference, SKES estimates the importance value of any input node by quantifying its disparity against the reference, which establishes an interpretable node importance computation paradigm. Furthermore, SKES builds on the observation that "nodes with similar characteristics are prone to have similar importance values," while guaranteeing that the informativeness disparity between any two nodes is orderly reflected by the embedding distance of their associated latent features. Extensive experiments on three widely evaluated benchmarks demonstrate the performance superiority of SKES over several recent competing methods.
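The core computation paradigm the abstract describes — scoring a node by its disparity against a sufficiently uninformative reference, so that embedding distance orderly reflects importance differences — can be sketched in a few lines. This is a minimal illustration, not the paper's actual method: the function name, the Euclidean disparity measure, and the zero-vector reference are all assumptions made here for clarity (SKES itself is not specified at this level in the abstract).

```python
import numpy as np

def estimate_importance(node_embs: np.ndarray, ref_emb: np.ndarray) -> np.ndarray:
    # Hypothetical sketch: importance of each node is its disparity
    # (here, Euclidean distance) from an uninformative reference embedding.
    # Nodes with similar latent features thus receive similar scores.
    return np.linalg.norm(node_embs - ref_emb, axis=1)

rng = np.random.default_rng(0)
embs = rng.normal(size=(5, 8))      # latent features for 5 nodes (assumed 8-dim)
embs[0] = 0.0                       # a node identical to the reference
ref = np.zeros(8)                   # assumed "uninformative" reference: zero vector
scores = estimate_importance(embs, ref)
```

Under this sketch, a node coinciding with the reference scores zero, and scores grow monotonically with how far a node's representation departs from the reference, which is what makes the paradigm interpretable.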

Citations (5)
