Enhancing ID-based Recommendation with Large Language Models (2411.02041v1)
Abstract: Large language models (LLMs) have recently attracted significant attention across many domains, including recommendation systems. Recent research leverages LLMs to improve the performance and user-modeling aspects of recommender systems, focusing primarily on interpreting textual data in recommendation tasks. In ID-based recommendation, however, textual data is absent and only ID data is available, and the potential of LLMs for ID data within this paradigm remains largely unexplored. To this end, we introduce "LLM for ID-based Recommendation" (LLM4IDRec), an approach that integrates the capabilities of LLMs while relying exclusively on ID data, diverging from the previous reliance on textual data. The core idea of LLM4IDRec is to employ an LLM to augment the ID data: if the augmented ID data improves recommendation performance, this demonstrates that the LLM can interpret ID data effectively and opens a new way to integrate LLMs into ID-based recommendation. We evaluate LLM4IDRec on three widely used datasets. Our results show a notable improvement in recommendation performance, with our approach consistently outperforming existing ID-based recommendation methods by solely augmenting the input data.
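The augmentation loop the abstract describes (serialize a user's ID-only history into a prompt, ask an LLM for additional plausible item IDs, and merge the parsed output back into the training data for an unchanged ID-based recommender) can be sketched as follows. This is a minimal illustration under assumed conventions: the prompt wording, the `query_llm` callable, and the ID-parsing rules are hypothetical and do not reproduce the authors' actual prompting or fine-tuning setup.

```python
# Sketch of an LLM-driven ID-data augmentation loop in the spirit of LLM4IDRec.
# The prompt format and query_llm interface are illustrative assumptions only.

def build_prompt(user_id, item_ids):
    """Serialize a user's ID-only interaction history into a text prompt."""
    history = ", ".join(str(i) for i in item_ids)
    return (f"User {user_id} interacted with items: {history}. "
            f"List additional item IDs this user is likely to interact with.")

def parse_item_ids(llm_output, known_items):
    """Keep only tokens that parse as valid item IDs present in the catalog."""
    ids = []
    for token in llm_output.replace(",", " ").split():
        if token.isdigit() and int(token) in known_items:
            ids.append(int(token))
    return ids

def augment_dataset(interactions, query_llm, known_items):
    """Return the original plus LLM-generated (user, item) pairs, deduplicated.

    interactions: dict mapping user_id -> list of item_ids
    query_llm:    callable taking a prompt string, returning the LLM's text
    known_items:  set of valid item IDs (hallucinated IDs are filtered out)
    """
    augmented = {u: set(items) for u, items in interactions.items()}
    for user_id, item_ids in interactions.items():
        prompt = build_prompt(user_id, item_ids)
        new_ids = parse_item_ids(query_llm(prompt), known_items)
        augmented[user_id].update(new_ids)
    return augmented
```

A downstream ID-based recommender (e.g. a matrix-factorization or LightGCN-style model) would then be trained on `augmented` exactly as on the original data; in this sketch a stub such as `lambda prompt: "13, 99"` can stand in for the LLM, with the out-of-catalog ID 99 filtered out by `parse_item_ids`.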