Cascade-BGNN: Toward Efficient Self-supervised Representation Learning on Large-scale Bipartite Graphs (1906.11994v3)

Published 27 Jun 2019 in cs.SI, cs.AI, cs.LG, and stat.ML

Abstract: Bipartite graphs have been used to represent data relationships in many data-mining applications, such as E-commerce recommendation systems. Since learning in graph space is more complicated than in Euclidean space, recent studies have extensively utilized neural nets to effectively and efficiently embed a graph's nodes into a multidimensional space. However, this embedding method has not yet been applied to large-scale bipartite graphs. Existing techniques either cannot be scaled to large-scale bipartite graphs that have limited labels or cannot exploit the unique structure of bipartite graphs, which have distinct node features in two domains. Thus, we propose Cascade Bipartite Graph Neural Networks, Cascade-BGNN, a novel node representation learning framework for bipartite graphs that is domain-consistent, self-supervised, and efficient. To efficiently aggregate information both across and within the two partitions of a bipartite graph, BGNN utilizes customized Inter-domain Message Passing (IDMP) and Intra-domain Alignment (IDA), our adaptation of adversarial learning, for message aggregation across and within partitions, respectively. BGNN is trained in a self-supervised manner. Moreover, we formulate a multi-layer BGNN trained in a cascaded manner to enable multi-hop relationship modeling while improving training efficiency. Extensive experiments on several datasets of varying scales verify the effectiveness and efficiency of BGNN over baselines. Our design is further affirmed through theoretical analysis of domain alignment. The scalability of BGNN is additionally verified through its rapid training speed and low memory cost on a large-scale real-world bipartite graph.
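The two mechanisms named in the abstract lend themselves to a compact illustration. The PyTorch sketch below shows one plausible reading of a single BGNN layer: Inter-domain Message Passing (IDMP) aggregates features across the two partitions through learned projections, and Intra-domain Alignment (IDA) uses an adversarial discriminator to pull the aggregated messages toward the target domain's feature distribution. Every name, dimension, and architectural choice here (BGNNLayer, disc_u, the dense biadjacency matrix, the BCE adversarial loss) is an assumption made for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn


class BGNNLayer(nn.Module):
    """Hypothetical sketch of one Cascade-BGNN layer: IDMP + IDA.

    Not the authors' code; the partition feature dimensions and the
    discriminator architecture are assumptions made for illustration.
    """

    def __init__(self, dim_u: int, dim_v: int):
        super().__init__()
        # IDMP: learned projections for messages crossing the partitions.
        self.u_from_v = nn.Linear(dim_v, dim_u)  # V -> U messages
        self.v_from_u = nn.Linear(dim_u, dim_v)  # U -> V messages
        # IDA: discriminator on the U domain; the encoder is trained to
        # make cross-domain messages indistinguishable from native U
        # features (an adversarial alignment, as the abstract describes).
        self.disc_u = nn.Sequential(
            nn.Linear(dim_u, dim_u), nn.ReLU(), nn.Linear(dim_u, 1)
        )

    def forward(self, x_u, x_v, adj):
        # adj: dense |U| x |V| biadjacency matrix, row-normalized so each
        # node averages over its cross-partition neighbors.
        h_u = torch.relu(self.u_from_v(adj @ x_v))      # aggregate V into U
        h_v = torch.relu(self.v_from_u(adj.t() @ x_u))  # aggregate U into V
        return h_u, h_v


# Toy usage: 4 U-nodes with 8-dim features, 3 V-nodes with 6-dim features.
x_u, x_v = torch.randn(4, 8), torch.randn(3, 6)
adj = torch.rand(4, 3)
adj = adj / adj.sum(dim=1, keepdim=True)  # row-normalize the biadjacency

layer = BGNNLayer(dim_u=8, dim_v=6)
h_u, h_v = layer(x_u, x_v, adj)

# Self-supervised adversarial alignment on the U side: the discriminator
# separates native features (label 1) from aggregated messages (label 0);
# the encoder would then be updated with the labels on h_u flipped, so it
# learns to fool the discriminator.
bce = nn.BCEWithLogitsLoss()
d_real = layer.disc_u(x_u)
d_fake = layer.disc_u(h_u.detach())
loss_disc = bce(d_real, torch.ones_like(d_real)) + \
            bce(d_fake, torch.zeros_like(d_fake))
```

Under this reading, the cascaded multi-layer training the abstract describes would train one such layer to convergence, then feed its outputs (h_u, h_v) as the input features of the next layer, giving multi-hop relationship modeling without backpropagating through the full stack.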

Authors (7)
  1. Chaoyang He (46 papers)
  2. Tian Xie (77 papers)
  3. Yu Rong (146 papers)
  4. Wenbing Huang (95 papers)
  5. Junzhou Huang (137 papers)
  6. Xiang Ren (194 papers)
  7. Cyrus Shahabi (55 papers)
Citations (5)
