Understanding Language Modeling Paradigm Adaptations in Recommender Systems: Lessons Learned and Open Challenges (2404.03788v2)

Published 4 Apr 2024 in cs.IR

Abstract: LLMs have achieved tremendous success in Natural Language Processing, owing to diverse training paradigms that enable them to capture intricate linguistic patterns and semantic representations. In particular, the recent "pre-train, prompt and predict" paradigm has attracted significant attention as an approach to learning generalizable models from limited labeled data. In line with this advancement, these training paradigms have recently been adapted to the recommendation domain and are seen as a promising direction in both academia and industry. This half-day tutorial aims to provide a thorough understanding of how to extract and transfer knowledge from pre-trained models, learned under different training paradigms, in order to improve recommender systems along dimensions such as generality, sparsity, effectiveness, and trustworthiness. In this tutorial, we first introduce the basic concepts and a generic architecture of the language modeling paradigm for recommendation. We then focus on recent advances in adapting LLM training strategies and optimization objectives to different recommendation tasks. After that, we systematically introduce ethical issues in LLM-based recommender systems and discuss possible approaches to assessing and mitigating them. We also summarize the relevant datasets and evaluation metrics, and present an empirical study of the recommendation performance of these training paradigms. Finally, we conclude the tutorial with a discussion of open challenges and future directions.
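To make the "pre-train, prompt and predict" idea concrete, the following is a minimal, hypothetical sketch of prompt-based recommendation: a frozen pretrained language model ranks candidate items by the likelihood it assigns to each item when appended to a prompt built from the user's interaction history. The model choice (gpt2 via Hugging Face transformers), the prompt template, and the item titles are illustrative assumptions, not details taken from the tutorial itself.

```python
# Illustrative sketch of "pre-train, prompt and predict" for recommendation:
# a frozen pretrained LM scores each candidate item by the log-likelihood it
# assigns to the item when appended to a prompt built from the user's history.
# The model, prompt template, and item titles below are assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def score_candidate(history: list[str], candidate: str) -> float:
    """Average log-likelihood of the candidate's tokens given the prompt."""
    prompt = "A user watched: " + ", ".join(history) + ". Next they will watch:"
    prompt_len = tokenizer(prompt, return_tensors="pt").input_ids.shape[1]
    full_ids = tokenizer(prompt + " " + candidate, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(full_ids).logits  # shape: [1, seq_len, vocab]
    # Position i predicts token i + 1, so the candidate tokens at positions
    # prompt_len .. seq_len - 1 are predicted by logits at prompt_len - 1 .. seq_len - 2.
    log_probs = torch.log_softmax(logits[0, prompt_len - 1 : -1], dim=-1)
    cand_tokens = full_ids[0, prompt_len:]
    token_lp = log_probs[torch.arange(cand_tokens.shape[0]), cand_tokens]
    return token_lp.mean().item()

history = ["The Matrix", "Blade Runner"]
candidates = ["Ghost in the Shell", "Love Actually"]
ranked = sorted(candidates, key=lambda c: score_candidate(history, c), reverse=True)
print(ranked)  # candidates ranked by how plausible the LM finds them
```

This zero-shot scoring is only the "predict" step; the adapted training strategies the tutorial surveys would additionally fine-tune or prompt-tune the same backbone on recommendation data.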

Authors (5)
  1. Lemei Zhang
  2. Peng Liu
  3. Yashar Deldjoo
  4. Yong Zheng
  5. Jon Atle Gulla