Federated Recommendation via Hybrid Retrieval Augmented Generation (2403.04256v1)

Published 7 Mar 2024 in cs.IR and cs.AI

Abstract: Federated Recommendation (FR) emerges as a novel paradigm that enables privacy-preserving recommendations. However, traditional FR systems usually represent users and items with discrete identities (IDs), and suffer performance degradation due to data sparsity and heterogeneity in FR. On the other hand, LLMs as recommenders have proven effective across various recommendation scenarios. Yet, LLM-based recommenders face challenges such as low inference efficiency and potential hallucination, compromising their performance in real-world scenarios. To this end, we propose GPT-FedRec, a federated recommendation framework leveraging ChatGPT and a novel hybrid Retrieval Augmented Generation (RAG) mechanism. GPT-FedRec is a two-stage solution: the first stage is a hybrid retrieval process that mines ID-based user patterns and text-based item features; next, the retrieved results are converted into text prompts and fed into GPT for re-ranking. The proposed hybrid retrieval mechanism and LLM-based re-ranking aim to extract generalized features from the data and exploit the pretrained knowledge within the LLM, overcoming data sparsity and heterogeneity in FR. In addition, the RAG approach prevents LLM hallucination, improving recommendation performance for real-world users. Experimental results on diverse benchmark datasets demonstrate the superior performance of GPT-FedRec against state-of-the-art baseline methods.
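
To make the two-stage design concrete, below is a minimal Python sketch of the pipeline the abstract describes: a hybrid retriever that blends an ID-based score (standing in for a locally trained sequential model) with text-embedding similarity, followed by serialization of the retrieved candidates into a re-ranking prompt for an LLM. All names here (`hybrid_retrieve`, `build_rerank_prompt`, the bag-of-words `embed` stand-in, the `alpha` fusion weight, the toy catalog) are illustrative assumptions, not GPT-FedRec's actual implementation; the paper's retriever, encoder, and prompt format may differ.

```python
import numpy as np

# Toy catalog: item IDs with text metadata (e.g. titles).
ITEM_TEXTS = {
    0: "wireless noise-cancelling headphones",
    1: "mechanical keyboard with RGB lighting",
    2: "stainless steel water bottle",
    3: "bluetooth over-ear headphones",
    4: "ergonomic office chair",
}

def embed(text: str, dim: int = 32) -> np.ndarray:
    """Stand-in text encoder: hash tokens into a normalized bag-of-words
    vector. A real system would use a pretrained text encoder."""
    v = np.zeros(dim)
    for tok in text.lower().split():
        v[hash(tok) % dim] += 1.0
    n = np.linalg.norm(v)
    return v / n if n > 0 else v

def hybrid_retrieve(history, id_scores, top_k=3, alpha=0.5):
    """Stage 1: fuse an ID-based score (e.g. from a locally trained
    sequential model) with text similarity to the user's history."""
    profile = np.mean([embed(ITEM_TEXTS[i]) for i in history], axis=0)
    fused = {}
    for item, id_score in id_scores.items():
        if item in history:
            continue  # never re-recommend consumed items
        text_score = float(profile @ embed(ITEM_TEXTS[item]))
        fused[item] = alpha * id_score + (1 - alpha) * text_score
    return sorted(fused, key=fused.get, reverse=True)[:top_k]

def build_rerank_prompt(history, candidates):
    """Stage 2: serialize the history and retrieved candidates into a
    text prompt; the LLM then only ranks items that exist in the catalog."""
    hist = "; ".join(ITEM_TEXTS[i] for i in history)
    cands = "\n".join(f"{chr(65 + j)}. {ITEM_TEXTS[c]}"
                      for j, c in enumerate(candidates))
    return (f"The user previously interacted with: {hist}.\n"
            f"Rank these candidate items by relevance:\n{cands}\n"
            f"Answer with the letters in order.")

if __name__ == "__main__":
    history = [0]                                   # user bought headphones
    id_scores = {1: 0.2, 2: 0.1, 3: 0.7, 4: 0.3}    # from ID-based retriever
    candidates = hybrid_retrieve(history, id_scores)
    print(build_rerank_prompt(history, candidates))
    # The prompt would then be sent to the LLM (e.g. GPT) for re-ranking.
```

Constraining the LLM to rank only retrieved catalog items is the sense in which a RAG design limits hallucination: the model cannot output a recommendation for an item that does not exist.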
