Towards Efficient and Effective Unlearning of Large Language Models for Recommendation (2403.03536v2)

Published 6 Mar 2024 in cs.IR and cs.AI

Abstract: The significant advancements in LLMs give rise to a promising research direction: leveraging LLMs as recommenders (LLMRec). The efficacy of LLMRec arises from the open-world knowledge and reasoning capabilities inherent in LLMs. LLMRec acquires its recommendation capabilities through instruction tuning on user interaction data. However, to protect user privacy and optimize utility, it is also crucial for LLMRec to be able to intentionally forget specific user data, a requirement generally referred to as recommendation unlearning. In the era of LLMs, recommendation unlearning poses new challenges for LLMRec in terms of inefficiency and ineffectiveness: existing unlearning methods require updating billions of parameters, which is costly and time-consuming, and they also degrade model utility during the unlearning process. To this end, we propose E2URec, the first Efficient and Effective Unlearning method for LLMRec. E2URec improves unlearning efficiency by updating only a small number of additional LoRA parameters, and improves unlearning effectiveness through a teacher-student framework in which multiple teacher networks guide the unlearning process. Extensive experiments on two real-world datasets show that E2URec outperforms state-of-the-art baselines; in particular, it efficiently forgets specific data without harming recommendation performance. The source code is available at https://github.com/justarter/E2URec.
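The abstract's two key ideas, updating only added LoRA parameters and steering them with teacher networks, can be illustrated with a short sketch. The code below is not the authors' implementation (see their repository for that); the LoRALinear wrapper, the forgetting/remembering teacher logits, and the unlearning_loss combination are illustrative assumptions about how such an objective could be wired up in PyTorch.

```python
# Minimal sketch, not the authors' code: LoRA-only updates guided by two
# teacher distributions (one to induce forgetting, one to preserve utility).
# Names, shapes, and the exact loss combination are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class LoRALinear(nn.Module):
    """A frozen base projection plus a trainable low-rank update; only A and B learn."""
    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # the backbone LLM weights stay frozen
        self.A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scale = alpha / rank

    def forward(self, x):
        # base output plus scaled low-rank correction (x -> A -> B)
        return self.base(x) + self.scale * F.linear(F.linear(x, self.A), self.B)

def unlearning_loss(student_forget, teacher_forget, student_retain, teacher_retain):
    """KL-based teacher-student objective (one plausible reading of the abstract):
    on forgotten samples, match a teacher unaware of that data; on retained
    samples, match the original recommendation model."""
    kl = lambda s, t: F.kl_div(F.log_softmax(s, dim=-1), F.softmax(t, dim=-1),
                               reduction="batchmean")
    return kl(student_forget, teacher_forget) + kl(student_retain, teacher_retain)

# Toy usage: wrap one projection layer and take a single unlearning step.
layer = LoRALinear(nn.Linear(64, 64))
opt = torch.optim.AdamW((p for p in layer.parameters() if p.requires_grad), lr=1e-4)
x_forget, x_retain = torch.randn(4, 64), torch.randn(4, 64)
loss = unlearning_loss(layer(x_forget), torch.randn(4, 64),   # stand-in teacher logits
                       layer(x_retain), torch.randn(4, 64))
loss.backward()
opt.step()
```

Because only the low-rank A and B matrices receive gradients, the optimizer state and update cost scale with the LoRA rank rather than with the billions of backbone parameters, which is the efficiency argument made in the abstract.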

Authors (7)
  1. Hangyu Wang (6 papers)
  2. Jianghao Lin (47 papers)
  3. Bo Chen (309 papers)
  4. Yang Yang (884 papers)
  5. Ruiming Tang (171 papers)
  6. Weinan Zhang (322 papers)
  7. Yong Yu (219 papers)
Citations (9)