
The 2nd Workshop on Recommendation with Generative Models (2403.04399v1)

Published 7 Mar 2024 in cs.IR

Abstract: The rise of generative models has driven significant advancements in recommender systems, creating unique opportunities for enhancing users' personalized recommendations. This workshop serves as a platform for researchers to explore and exchange innovative concepts related to the integration of generative models into recommender systems. It primarily focuses on five key perspectives: (i) improving recommender algorithms, (ii) generating personalized content, (iii) evolving the user-system interaction paradigm, (iv) enhancing trustworthiness checks, and (v) refining evaluation methodologies for generative recommendations. With generative models advancing rapidly, an increasing body of research is emerging in these domains, underscoring the timeliness and critical importance of this workshop. The related research will introduce innovative technologies to recommender systems and pose fresh challenges to both academia and industry. In the long term, this research direction has the potential to revolutionize traditional recommender paradigms and foster the development of next-generation recommender systems.

Authors (10)
  1. Wenjie Wang (150 papers)
  2. Yang Zhang (1129 papers)
  3. Xinyu Lin (24 papers)
  4. Fuli Feng (143 papers)
  5. Weiwen Liu (59 papers)
  6. Yong Liu (721 papers)
  7. Xiangyu Zhao (192 papers)
  8. Wayne Xin Zhao (196 papers)
  9. Yang Song (299 papers)
  10. Xiangnan He (200 papers)