
Towards Next-Generation LLM-based Recommender Systems: A Survey and Beyond (2410.19744v1)

Published 10 Oct 2024 in cs.IR and cs.AI

Abstract: LLMs have not only revolutionized the field of NLP but also have the potential to bring a paradigm shift in many other fields due to their remarkable abilities of language understanding, as well as impressive generalization capabilities and reasoning skills. As a result, recent studies have actively attempted to harness the power of LLMs to improve recommender systems, and it is imperative to thoroughly review the recent advances and challenges of LLM-based recommender systems. Unlike existing work, this survey does not merely analyze the classifications of LLM-based recommendation systems according to the technical framework of LLMs. Instead, it investigates how LLMs can better serve recommendation tasks from the perspective of the recommender system community, thus enhancing the integration of LLMs into the research of recommender system and its practical application. In addition, the long-standing gap between academic research and industrial applications related to recommender systems has not been well discussed, especially in the era of LLMs. In this review, we introduce a novel taxonomy that originates from the intrinsic essence of recommendation, delving into the application of LLM-based recommendation systems and their industrial implementation. Specifically, we propose a three-tier structure that more accurately reflects the developmental progression of recommendation systems from research to practical implementation, including representing and understanding, scheming and utilizing, and industrial deployment. Furthermore, we discuss critical challenges and opportunities in this emerging field. A more up-to-date version of the papers is maintained at: https://github.com/jindongli-Ai/Next-Generation-LLM-based-Recommender-Systems-Survey.

This paper provides a comprehensive examination of the evolving landscape of recommender systems augmented by LLMs. It highlights how these models, which have transformed numerous applications across natural language processing domains, are positioned to significantly advance recommender systems by enhancing their contextual understanding and adaptability.

Main Contributions

The authors diverge from traditional surveys that focus strictly on the technical frameworks derived from NLP. Instead, they offer a novel taxonomy inspired by insights from the recommender systems community. This taxonomy is key to aligning LLM capabilities with the intrinsic essence of recommendation tasks. It comprises a three-tier structure:

  1. Representing and Understanding: The paper identifies an imperative need to capture more nuanced representations of users and items beyond the typical ID-based methods. LLMs can enrich these representations with semantic depth and reasoning capabilities, leveraging vast pre-trained language data that many conventional recommender systems lack.
  2. Scheming and Utilizing: The survey suggests that LLMs can introduce a new dynamic in recommendation pipelines. Traditional data processing often overlooks the embedded semantic knowledge these models offer. Therefore, incorporating LLM-enhanced approaches can revitalize challenging areas, such as capturing implicit user preferences or anticipating user needs without requiring extensive historical interaction data.
  3. Industrial Deployment: Bridging the gap between academic insights and practical implementations is crucial, especially in an industrial setting where scalability, efficiency, and adaptability matter. The survey notes that LLMs, despite being resource-intensive, have shown potential for integration into large-scale systems due to their ability to generalize from diverse datasets.
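To make the first tier concrete, the toy sketch below contrasts ID-based item representations with text-derived semantic ones. This is an illustrative assumption, not an implementation from the survey: a bag-of-words encoder stands in for a pretrained LLM encoder, and names like `embed_text` are invented for the example.

```python
import numpy as np

# Illustrative toy, not from the survey: items represented by text-derived
# vectors relate through shared semantics, which randomly initialized ID
# embeddings cannot do for new items. `embed_text` is a bag-of-words
# stand-in for a pretrained LLM encoder.

items = {
    "i1": "wireless noise-cancelling headphones",
    "i2": "bluetooth over-ear headphones",
    "i3": "stainless steel kitchen knife set",
}

# Shared vocabulary over all item descriptions.
vocab = sorted({tok for desc in items.values() for tok in desc.split()})

def embed_text(desc: str) -> np.ndarray:
    """Unit-normalized bag-of-words vector over the shared vocabulary."""
    counts = np.array([desc.split().count(tok) for tok in vocab], dtype=float)
    return counts / np.linalg.norm(counts)

emb = {iid: embed_text(desc) for iid, desc in items.items()}

def similarity(a: str, b: str) -> float:
    return float(emb[a] @ emb[b])

# The two headphone items overlap semantically; the knife set does not,
# so content-based similarity already separates them with no interactions.
assert similarity("i1", "i2") > similarity("i1", "i3")
```

An ID-only model would treat all three items as equally unrelated until interaction data accumulates; the text-derived vectors carry useful structure from the start, which is the gap the survey argues LLMs can close at scale.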

Challenges and Opportunities

Several challenges and opportunities are identified. Key challenges include the computational demands of deploying LLMs at scale and ensuring that model outputs respect user privacy and ethical standards. The survey also highlights cold-start problems and the integration of temporal dynamics to better capture evolving user preferences.
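On the cold-start point, a recurring pattern in the surveyed work is to prompt an LLM with a user's declared preferences when no interaction history exists. The sketch below shows only the prompt construction; `call_llm` in the trailing comment is a hypothetical client, not an API named by the paper.

```python
# Hedged sketch of zero-shot cold-start recommendation via prompting:
# the LLM ranks candidates from stated interests alone, with no
# historical interactions. Only prompt assembly is shown concretely.

def build_cold_start_prompt(profile: dict, candidates: list[str], k: int = 3) -> str:
    prefs = ", ".join(profile.get("stated_interests", []))
    lines = [
        f"A new user states these interests: {prefs}.",
        "They have no interaction history yet.",
        f"Rank the {len(candidates)} candidate items below and return the top {k}.",
    ]
    lines += [f"{i + 1}. {c}" for i, c in enumerate(candidates)]
    return "\n".join(lines)

prompt = build_cold_start_prompt(
    {"stated_interests": ["sci-fi novels", "space documentaries"]},
    ["The Martian", "A cookbook", "Interstellar"],
)
# response = call_llm(prompt)  # assumed chat-completion client, not shown
```

The trade-off the survey flags applies directly here: each such call is far more expensive than an embedding lookup, so industrial deployments typically reserve LLM prompting for the cold-start slice rather than the full traffic.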

Future opportunities lie in refining how LLMs are used within recommender systems. One promising direction is multimodal integration, where LLMs process not only text but also images and other media types for user preference modeling. Such advances could markedly improve a system's ability to generate recommendations in media-rich environments such as social media platforms and e-commerce sites.
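As a minimal sketch of that multimodal direction, one option is to project each modality into a shared space and score users against the fused item vectors. Everything below is an illustrative assumption: random projections stand in for real pretrained text/image encoders, and the dimensions are arbitrary.

```python
import numpy as np

# Illustrative sketch only: fuse text and image embeddings into a single
# item vector for preference scoring. The "encoders" here are random
# projections standing in for real pretrained models; all names and
# dimensions are assumptions made for this example.

rng = np.random.default_rng(0)
TEXT_DIM, IMG_DIM, JOINT_DIM = 32, 48, 16
W_text = rng.normal(size=(TEXT_DIM, JOINT_DIM))  # text -> shared space
W_img = rng.normal(size=(IMG_DIM, JOINT_DIM))    # image -> shared space

def fuse(text_emb: np.ndarray, img_emb: np.ndarray) -> np.ndarray:
    """Project both modalities into the shared space, sum, and normalize."""
    joint = text_emb @ W_text + img_emb @ W_img
    return joint / np.linalg.norm(joint)

def score(user_vec: np.ndarray, item_vec: np.ndarray) -> float:
    """Dot-product preference score in the shared space."""
    return float(user_vec @ item_vec)

item = fuse(rng.normal(size=TEXT_DIM), rng.normal(size=IMG_DIM))
user = item + 0.01 * rng.normal(size=JOINT_DIM)  # a user near this item
print(round(score(user, item), 3))  # close to 1.0 for this nearby user
```

The design choice worth noting is the shared space itself: once text and images land in one vector space, the rest of the pipeline (retrieval, ranking, user modeling) can stay modality-agnostic.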

Implications and Speculations

Theoretically, this survey lays a foundation for future work on the intricate balance of accuracy, efficiency, and user satisfaction in recommender systems. Practically, positioning LLMs in this role suggests a future where recommendation tasks evolve beyond heuristic rules and pre-defined algorithms toward more fluid, AI-driven systems that operate with a nuanced understanding akin to human reasoning.

The paper speculates on the potential for advancements in LLM capacity and architecture to further blur the lines between explicit and implicit recommendation mechanisms. As LLMs continue to evolve, there lies potential for them to not only recommend items but also to generate and curate content on behalf of users, fundamentally changing the landscape of digital consumption and interaction.

In conclusion, this paper underscores the transformative potential of LLMs for recommender systems, advocating for a paradigm shift toward models that leverage rich semantic contexts and improved interaction mechanisms. Its exploration of both challenges and opportunities presents a roadmap for integrating LLMs into practical recommender frameworks, setting the stage for next-generation systems that are more intuitive, adaptive, and user-centered.

Authors (10)
  1. Qi Wang
  2. Jindong Li
  3. Shiqi Wang
  4. Qianli Xing
  5. Runliang Niu
  6. He Kong
  7. Rui Li
  8. Guodong Long
  9. Yi Chang
  10. Chengqi Zhang