
Efficient Heterogeneous Graph Learning via Random Projection (2310.14481v2)

Published 23 Oct 2023 in cs.LG and cs.SI

Abstract: Heterogeneous Graph Neural Networks (HGNNs) are powerful tools for deep learning on heterogeneous graphs. Typical HGNNs require repetitive message passing during training, limiting efficiency for large-scale real-world graphs. Recent pre-computation-based HGNNs use one-time message passing to transform a heterogeneous graph into regular-shaped tensors, enabling efficient mini-batch training. Existing pre-computation-based HGNNs can be mainly categorized into two styles, which differ in how much information loss is allowed in exchange for efficiency. We propose a hybrid pre-computation-based HGNN, named Random Projection Heterogeneous Graph Neural Network (RpHGNN), which combines the benefits of one style's efficiency with the low information loss of the other style. To achieve efficiency, the main framework of RpHGNN consists of propagate-then-update iterations, where we introduce a Random Projection Squashing step to ensure that complexity increases only linearly. To achieve low information loss, we introduce a Relation-wise Neighbor Collection component with an Even-odd Propagation Scheme, which aims to collect information from neighbors in a finer-grained way. Experimental results indicate that our approach achieves state-of-the-art results on seven small and large benchmark datasets while also being 230% faster than the most effective baseline. Surprisingly, our approach not only surpasses pre-processing-based baselines but also outperforms end-to-end methods.

Efficient Heterogeneous Graph Learning via Random Projection

The paper "Efficient Heterogeneous Graph Learning via Random Projection" presents a novel approach to improve the efficiency and performance of Heterogeneous Graph Neural Networks (HGNNs) through a method called Random Projection Heterogeneous Graph Neural Network (RpHGNN). HGNNs are widely used in processing heterogeneous graphs, which consist of multiple types of nodes and edges. These networks typically rely on message passing to aggregate information from neighboring nodes, a process that can become inefficient and resource-intensive on large-scale graphs.

A key innovation introduced by this paper is the Random Projection Squashing technique, which controls the complexity of vertex representation updates. The core of RpHGNN is organized around propagate-then-update iterations, and the squashing step ensures that overall complexity grows only linearly with the number of iterations: random projection keeps vertex representations at a constant dimensionality across updates, bounding both memory use and computational overhead.
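
As a rough illustration of this idea, the sketch below (not the authors' implementation; the function name, scaling, and usage are illustrative assumptions) shows how a Johnson-Lindenstrauss-style random projection can squash a grown feature tensor back to a fixed width, so the state carried between iterations never grows:

```python
import torch

def random_projection_squash(collected: torch.Tensor, dim: int) -> torch.Tensor:
    """Project a grown feature tensor back to a fixed width.

    `collected` has shape (num_nodes, grown_dim), e.g. a concatenation of
    per-relation neighbor aggregations; the output has shape (num_nodes, dim),
    so the state carried across propagate-then-update iterations stays
    constant-sized.
    """
    grown_dim = collected.shape[1]
    # Gaussian random matrix scaled so inner products are preserved in
    # expectation (Johnson-Lindenstrauss-style projection).
    proj = torch.randn(grown_dim, dim, device=collected.device) / dim ** 0.5
    return collected @ proj

# Hypothetical usage inside one iteration: squash the concatenation of
# per-relation messages back to a 128-dimensional state.
# state = random_projection_squash(torch.cat(relation_messages, dim=1), 128)
```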

To minimize information loss, a common issue in pre-computation models, RpHGNN integrates a Relation-wise Neighbor Collection component with an Even-odd Propagation Scheme. The former preserves fine granularity by collecting information for each relation separately, while the latter allows fewer propagate-then-update iterations without losing the ability to capture multi-hop relations. This combination aggregates neighbor information more comprehensively than existing pre-computation-based approaches such as SeHGNN and NARS, which struggle with efficiency and information loss, respectively, due to the simplifications they make during pre-computation.
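
To make the relation-wise idea concrete, here is a minimal sketch of per-relation neighbor aggregation (it omits the even-odd scheme; the data layout and function name are hypothetical, assuming one row-normalized sparse adjacency matrix per relation):

```python
import torch

def relation_wise_collect(features, adjacencies):
    """Collect neighbor information separately for each relation.

    `features`: dict mapping node type -> (num_nodes, dim) feature tensor.
    `adjacencies`: dict mapping a relation triple (src_type, rel_name, dst_type)
    -> row-normalized sparse (num_dst, num_src) adjacency matrix.

    Keeping one aggregated tensor per relation, rather than averaging all
    relations into a single vector, is what keeps the collection fine-grained.
    """
    collected = {}
    for (src_type, rel_name, dst_type), adj in adjacencies.items():
        # Mean-aggregate source-type neighbors into destination-type nodes.
        collected[(src_type, rel_name, dst_type)] = torch.sparse.mm(
            adj, features[src_type]
        )
    return collected
```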

Experiments across seven benchmark datasets indicate that RpHGNN achieves state-of-the-art results. Notably, it is also 230% faster than the most effective baseline, confirming the advantage of blending efficiency with low information loss through a hybrid strategy. RpHGNN performs consistently better than both end-to-end and other pre-computation-based HGNN baselines, on small and large datasets alike.

The implications of this research are twofold. Practically, RpHGNN enables scalable graph learning for large-scale real-world scenarios, reducing the computational burden typical of HGNNs. Theoretically, the hybrid design combining the relation-wise and representation-wise styles advances how heterogeneous graph data is processed by neural networks, opening avenues for further research into optimizing HGNN architectures.

Future developments in AI could expand on these findings by exploring other hybrid models or enhancing the integration of random projection methods in neural network training. Additional research might also focus on tailoring these methods to other types of graph-based data structures or domains, broadening the utility and impact of heterogeneous graph neural networks.

Authors (3)
  1. Jun Hu (239 papers)
  2. Bryan Hooi (158 papers)
  3. Bingsheng He (105 papers)
Citations (4)