Linear-Time Graph Neural Networks for Scalable Recommendations (2402.13973v1)
Abstract: In an era of information explosion, recommender systems are vital tools for delivering personalized recommendations to users. The core task of a recommender system is to forecast users' future behaviors based on their previous user-item interactions. Owing to the strong expressive power of Graph Neural Networks (GNNs) in capturing high-order connectivities in user-item interaction data, recent years have witnessed rising interest in leveraging GNNs to boost the prediction performance of recommender systems. Nonetheless, classic Matrix Factorization (MF) and Deep Neural Network (DNN) approaches still play an important role in real-world large-scale recommender systems due to their scalability advantages. Despite the existence of GNN-acceleration solutions, it remains an open question whether GNN-based recommender systems can scale as efficiently as classic MF and DNN methods. In this paper, we propose a Linear-Time Graph Neural Network (LTGNN) that scales GNN-based recommender systems to achieve scalability comparable to classic MF approaches while retaining GNNs' powerful expressiveness for superior prediction accuracy. Extensive experiments and ablation studies are presented to validate the effectiveness and scalability of the proposed algorithm. Our PyTorch-based implementation is available.
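To make the MF-vs-GNN contrast in the abstract concrete, here is a minimal illustrative sketch (not the paper's LTGNN algorithm): a matrix-factorization score is a single dot product per user-item pair, while a LightGCN-style propagation layer first aggregates neighbor embeddings over the interaction graph, which is what lets GNNs capture high-order connectivity. The toy interaction matrix and all sizes below are made up for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, d = 4, 5, 8

# User/item embeddings, as in classic MF.
U = rng.normal(size=(n_users, d))
V = rng.normal(size=(n_items, d))

def mf_score(u, i):
    # MF prediction: one dot product, O(d) per user-item pair.
    return U[u] @ V[i]

# Binary user-item interaction matrix R (rows: users, cols: items).
R = np.array([
    [1, 0, 1, 0, 0],
    [0, 1, 1, 0, 1],
    [1, 1, 0, 0, 0],
    [0, 0, 0, 1, 1],
], dtype=float)

# Symmetric degree normalization D_u^{-1/2} R D_i^{-1/2},
# the normalization used by LightGCN-style propagation.
du = R.sum(axis=1, keepdims=True)   # user degrees
di = R.sum(axis=0, keepdims=True)   # item degrees
R_norm = R / np.sqrt(du) / np.sqrt(di)

# One propagation layer: users aggregate their items' embeddings and
# vice versa; stacking L such layers reaches L-hop neighbors, i.e. the
# "high-order connectivity" that GNN recommenders exploit.
U1 = R_norm @ V
V1 = R_norm.T @ U

def gnn_score(u, i):
    # Prediction on the propagated embeddings.
    return U1[u] @ V1[i]
```

The extra cost of the propagation step (a sparse matrix product per layer, per training iteration) is exactly the overhead that scalable designs such as LTGNN aim to reduce to linear time in the number of interactions.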
Authors: Jiahao Zhang, Rui Xue, Wenqi Fan, Xin Xu, Qing Li, Jian Pei, Xiaorui Liu