Large Language Model with Graph Convolution for Recommendation (2402.08859v1)
Abstract: In recent years, efforts have been made to use text information for better user profiling and item characterization in recommendation. However, text information can be of low quality, hindering its effectiveness in real-world applications. Given the knowledge and reasoning capabilities encapsulated in large language models (LLMs), leveraging LLMs emerges as a promising way to improve such descriptions. However, existing approaches that prompt LLMs with raw text ignore the structured knowledge in user-item interactions, which can lead to hallucination problems such as inconsistent description generation. To this end, we propose a Graph-aware Convolutional LLM method that elicits LLMs to capture high-order relations in the user-item graph. To adapt text-based LLMs to structured graphs, we use the LLM as an aggregator in graph processing, allowing it to understand graph-based information step by step. Specifically, the LLM enhances descriptions by exploring multi-hop neighbors layer by layer, progressively propagating information through the graph. To enable LLMs to capture large-scale graph information, we break the description task down into smaller parts, which drastically reduces the token context length at each step. Extensive experiments on three real-world datasets show that our method consistently outperforms state-of-the-art methods.
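The layer-by-layer aggregation described in the abstract can be illustrated with a short sketch. Below is a minimal, hypothetical Python sketch of the idea: the LLM acts as the neighborhood aggregator, rewriting each node's description from its neighbors' descriptions one hop at a time, with neighbor truncation keeping each prompt short. The `llm_complete` helper and all data structures are assumptions for illustration, not the paper's actual prompts or implementation.

```python
from typing import Dict, List

def llm_complete(prompt: str) -> str:
    """Placeholder for a chat/completion API call (assumption for illustration)."""
    raise NotImplementedError

def propagate_descriptions(
    descriptions: Dict[str, str],     # node id -> current text description
    neighbors: Dict[str, List[str]],  # node id -> one-hop neighbor ids
    num_layers: int = 2,              # hops to propagate (graph-convolution depth)
    max_neighbors: int = 5,           # cap neighbors per prompt to bound context length
) -> Dict[str, str]:
    """Enhance node descriptions layer by layer, with the LLM as aggregator."""
    current = dict(descriptions)
    for _ in range(num_layers):
        updated = {}
        for node, desc in current.items():
            # Truncate the neighbor list so each prompt stays short; this is the
            # "break the task into smaller parts" idea from the abstract.
            neigh = [current[n] for n in neighbors.get(node, [])[:max_neighbors]]
            neigh_block = "\n- ".join(neigh) if neigh else "(no neighbors)"
            prompt = (
                f"Current description:\n{desc}\n\n"
                f"Descriptions of interacted neighbors:\n- {neigh_block}\n\n"
                "Rewrite the current description so it is consistent with the "
                "neighbor evidence above. Do not invent new facts."
            )
            updated[node] = llm_complete(prompt)
        # After k rounds, each description reflects information from k-hop neighbors.
        current = updated
    return current
```

Grounding the rewrite in neighbor descriptions (rather than free-form generation) is what ties the enhancement to the interaction graph and discourages inconsistent, hallucinated content.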
Authors: Yingpeng Du, Ziyan Wang, Zhu Sun, Haoyan Chua, Hongzhi Liu, Zhonghai Wu, Yining Ma, Jie Zhang, Youchen Sun