
An Adaptive Graph Pre-training Framework for Localized Collaborative Filtering (2112.07191v1)

Published 14 Dec 2021 in cs.IR and cs.AI

Abstract: Graph neural networks (GNNs) have been widely applied to recommendation tasks and have achieved very appealing performance. However, most GNN-based recommendation methods suffer from data sparsity in practice. Meanwhile, pre-training techniques have achieved great success in mitigating data sparsity in domains such as NLP and computer vision (CV). Thus, graph pre-training has great potential to alleviate data sparsity in GNN-based recommendations. However, pre-training GNNs for recommendation faces unique challenges. For example, user-item interaction graphs in different recommendation tasks have distinct sets of users and items, and they often exhibit different properties. Therefore, the mechanisms commonly used in NLP and CV to transfer knowledge from pre-training tasks to downstream tasks, such as sharing learned embeddings or feature extractors, are not directly applicable to existing GNN-based recommendation models. To tackle these challenges, we carefully design an adaptive graph pre-training framework for localized collaborative filtering (ADAPT). It does not require transferring user/item embeddings, and it captures both the common knowledge shared across different graphs and the uniqueness of each graph. Extensive experimental results demonstrate the effectiveness and superiority of ADAPT.
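The abstract's central idea is that knowledge can be transferred across interaction graphs with disjoint user/item sets by working on localized structure rather than on shared embeddings. The following is a minimal, hypothetical sketch of that idea, not the authors' implementation: it scores a user-item pair from its k-hop enclosing subgraph using only structural features (degrees and hop distances), combined by shared weights (common knowledge) plus a per-graph adaptation term (graph uniqueness). All function names, the feature set, and the linear scorer are illustrative assumptions.

```python
from collections import deque

def k_hop_subgraph(adj, seeds, k):
    """Collect all nodes within k hops of any seed in an interaction graph
    given as an adjacency dict (node -> list of neighbors)."""
    seen = set(seeds)
    frontier = set(seeds)
    for _ in range(k):
        nxt = set()
        for u in frontier:
            nxt |= set(adj.get(u, ())) - seen
        seen |= nxt
        frontier = nxt
    return seen

def structural_features(adj, nodes, user, item):
    """Embedding-free per-node features: degree inside the subgraph and
    hop distances to the target user and item (a node-labeling trick)."""
    def dists(src):
        d = {src: 0}
        q = deque([src])
        while q:
            v = q.popleft()
            for w in adj.get(v, ()):
                if w in nodes and w not in d:
                    d[w] = d[v] + 1
                    q.append(w)
        return d
    du, di = dists(user), dists(item)
    feats = {}
    for v in nodes:
        deg = sum(1 for w in adj.get(v, ()) if w in nodes)
        feats[v] = (deg, du.get(v, 99), di.get(v, 99))
    return feats

def score(adj, user, item, w_shared=(0.5, -0.1, -0.1), w_local=(0.0, 0.0, 0.0)):
    """Score a candidate interaction: shared weights carry the common
    knowledge across graphs; w_local is a per-graph adaptation offset."""
    nodes = k_hop_subgraph(adj, {user, item}, k=2)
    feats = structural_features(adj, nodes, user, item)
    w = [a + b for a, b in zip(w_shared, w_local)]
    return sum(sum(wi * fi for wi, fi in zip(w, f))
               for f in feats.values()) / len(feats)

# Usage on a tiny bipartite interaction graph (users u*, items i*):
adj = {"u1": ["i1", "i2"], "u2": ["i2"], "i1": ["u1"], "i2": ["u1", "u2"]}
s = score(adj, "u1", "i2")  # no pre-trained user/item embeddings needed
```

Because the scorer sees only subgraph structure, the same shared weights can in principle be applied to a new graph with entirely different users and items, which is the transfer setting the abstract describes.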

Authors (8)
  1. Yiqi Wang (39 papers)
  2. Chaozhuo Li (54 papers)
  3. Zheng Liu (312 papers)
  4. Mingzheng Li (9 papers)
  5. Jiliang Tang (204 papers)
  6. Xing Xie (220 papers)
  7. Lei Chen (485 papers)
  8. Philip S. Yu (592 papers)
Citations (20)