Exploring the Impact of Large Language Models on Recommender Systems: An Extensive Review (2402.18590v3)

Published 11 Feb 2024 in cs.IR and cs.AI

Abstract: The paper underscores the significance of LLMs in reshaping recommender systems, attributing their value to unique reasoning abilities absent in traditional recommenders. Even without the direct user interaction data that conventional systems depend on, LLMs exhibit exceptional proficiency in recommending items, showcasing their adeptness at comprehending the intricacies of language. This marks a fundamental paradigm shift in the realm of recommendations. Amid a dynamic research landscape, researchers are actively harnessing the language comprehension and generation capabilities of LLMs to redefine the foundations of recommendation tasks. The investigation thoroughly explores the inherent strengths of LLMs within recommendation frameworks, encompassing nuanced contextual comprehension, seamless transitions across diverse domains, adoption of unified approaches, holistic learning strategies leveraging shared data reservoirs, transparent decision-making, and iterative improvements. Despite their transformative potential, challenges persist, including sensitivity to input prompts, occasional misinterpretations, and unforeseen recommendations, necessitating continuous refinement and evolution in LLM-driven recommender systems.
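To make the zero-shot recommendation idea concrete, the sketch below shows one common pattern in this line of work: serializing a user's history and a candidate pool into a natural-language prompt and asking an LLM to rank the candidates. This is a minimal illustration, not the paper's specific method; `call_llm`, the prompt wording, and the parsing logic are assumptions, and a real system would also need to guard against the prompt sensitivity and occasional misinterpretations noted above.

```python
# Minimal sketch of zero-shot, prompt-based recommendation with an LLM.
# `call_llm` is a placeholder for any chat-completion API; the prompt format
# and the parsing heuristic are illustrative assumptions, not a fixed recipe.

def call_llm(prompt: str) -> str:
    # Placeholder: swap in a real LLM client call here.
    # The returned string mimics a plausible ranked answer for demonstration.
    return "1. The Matrix\n2. Blade Runner\n3. Titanic"

def build_prompt(liked_items: list[str], candidates: list[str]) -> str:
    """Turn a user's history and a candidate pool into a ranking instruction."""
    return (
        "A user enjoyed the following movies:\n"
        + "\n".join(f"- {item}" for item in liked_items)
        + "\n\nRank these candidate movies from most to least relevant "
        "for this user, one per line, numbered:\n"
        + "\n".join(f"- {item}" for item in candidates)
    )

def parse_ranking(response: str, candidates: list[str]) -> list[str]:
    """Keep only lines naming a known candidate, preserving the LLM's order."""
    ranked = []
    for line in response.splitlines():
        for item in candidates:
            if item.lower() in line.lower() and item not in ranked:
                ranked.append(item)
    # Fall back to the original order for anything the LLM omitted.
    ranked += [c for c in candidates if c not in ranked]
    return ranked

if __name__ == "__main__":
    history = ["Inception", "Interstellar"]
    pool = ["Titanic", "The Matrix", "Blade Runner"]
    answer = call_llm(build_prompt(history, pool))
    print(parse_ranking(answer, pool))  # LLM-ordered candidate list
```

Constraining the model's output to items drawn from the candidate pool, as the parsing step does here, is one simple way to limit the "unforeseen recommendations" the abstract mentions.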

Authors (4)
  1. Arpita Vats (12 papers)
  2. Vinija Jain (42 papers)
  3. Rahul Raja (6 papers)
  4. Aman Chadha (109 papers)
Citations (10)