
LLM-Enhanced User-Item Interactions: Leveraging Edge Information for Optimized Recommendations (2402.09617v1)

Published 14 Feb 2024 in cs.AI and cs.IR

Abstract: The extraordinary performance of LLMs has not only reshaped the research landscape in NLP but has also demonstrated their exceptional potential for application in various domains. However, the potential of these models for mining relationships from graph data remains under-explored. Graph neural networks (GNNs), a popular research area in recent years, have produced numerous studies on relationship mining. Yet current cutting-edge GNN research has not been effectively integrated with LLMs, leaving graph relationship mining tasks limited in efficiency and capability. A primary challenge is the inability of LLMs to deeply exploit the edge information in graphs, which is critical for understanding complex node relationships. This gap limits the ability of LLMs to extract meaningful insights from graph structures and restricts their applicability to more complex graph-based analyses. We focus on how to use existing LLMs to mine and understand relationships in graph data, applying these techniques to recommendation tasks. We propose an innovative framework that combines the strong contextual representation capabilities of LLMs with the relationship extraction and analysis functions of GNNs. Specifically, we design a new prompt construction framework that integrates the relational information of graph data into natural-language expressions, helping LLMs grasp the connectivity information within graph data more intuitively. Additionally, we introduce graph relationship understanding and analysis functions into LLMs to enhance their focus on connectivity information. Our evaluation on real-world datasets demonstrates the framework's ability to understand connectivity information in graph data.
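The abstract's core idea is to verbalize edge information, here user-item interactions, into natural-language prompts so an LLM can reason over graph connectivity. The following is a minimal sketch of that idea, not the paper's implementation: the (user, item, rating) edge format, the function name, and the prompt template are all illustrative assumptions.

```python
# Minimal sketch (not the authors' code): serializing user-item
# interaction edges into a natural-language prompt, in the spirit of
# the prompt construction framework described in the abstract.
# The edge schema and prompt wording below are assumptions.

from typing import List, Tuple


def build_interaction_prompt(
    user_id: str,
    edges: List[Tuple[str, str, float]],  # hypothetical (user, item, rating) edges
    candidate_item: str,
) -> str:
    """Render a user's interaction edges as a prompt for an LLM."""
    # Keep only edges incident to this user, i.e. the node's neighborhood.
    history = [(item, rating) for u, item, rating in edges if u == user_id]

    lines = [f"User {user_id} has interacted with the following items:"]
    for item, rating in history:
        lines.append(f"- {item} (rating: {rating})")
    lines.append(
        f"Based on these interactions, would user {user_id} "
        f"be interested in {candidate_item}? Answer yes or no."
    )
    return "\n".join(lines)


if __name__ == "__main__":
    toy_edges = [
        ("u1", "The Matrix", 5.0),
        ("u1", "Inception", 4.0),
        ("u2", "Titanic", 3.0),
    ]
    print(build_interaction_prompt("u1", toy_edges, "Interstellar"))
```

The design choice this sketch highlights is that edge attributes (the ratings) are carried into the prompt alongside the adjacency itself, rather than flattening the graph to a bare item list, which is the gap the paper identifies in prior LLM-based approaches.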

Authors (5)
  1. Xinyuan Wang
  2. Liang Wu
  3. Liangjie Hong
  4. Hao Liu
  5. Yanjie Fu
Citations (11)