When Large Language Models Meet Personalization: Perspectives of Challenges and Opportunities (2307.16376v1)

Published 31 Jul 2023 in cs.IR, cs.AI, and cs.CL

Abstract: The advent of LLMs marks a revolutionary breakthrough in artificial intelligence. With the unprecedented scale of training data and model parameters, the capability of LLMs has been dramatically improved, leading to human-like performance in understanding, language synthesis, and common-sense reasoning. Such a major leap forward in general AI capacity will change how personalization is conducted. For one thing, it will reform the way humans interact with personalization systems. Instead of being a passive medium of information filtering, LLMs provide the foundation for active user engagement. On top of such a new foundation, user requests can be proactively explored, and users' required information can be delivered in a natural and explainable way. For another thing, it will also considerably expand the scope of personalization, growing it from the sole function of collecting personalized information to the compound function of providing personalized services. By leveraging LLMs as a general-purpose interface, personalization systems may compile user requests into plans, call the functions of external tools to execute those plans, and integrate the tools' outputs to complete end-to-end personalization tasks. Today, LLMs are still being developed, whereas their application in personalization remains largely unexplored. Therefore, we consider it the right time to review the challenges in personalization and the opportunities to address them with LLMs. In particular, we dedicate this perspective paper to the discussion of the following aspects: the development of and challenges for existing personalization systems, the newly emerged capabilities of LLMs, and the potential ways of making use of LLMs for personalization.
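
The plan-execute-integrate pattern described in the abstract can be illustrated with a minimal sketch. The code below is not from the paper; the function names (`llm_complete`, `personalize`), the `TOOLS` registry, and the JSON plan format are illustrative assumptions about how an LLM could serve as a general-purpose interface over external personalization tools.

```python
# Minimal sketch (not the authors' implementation) of the pattern the abstract
# describes: compile a user request into a plan, call external tools to
# execute each step, then integrate the tool outputs into a response.
# All names here are hypothetical placeholders.

import json
from typing import Callable, Dict, List

# Hypothetical registry of external personalization tools.
TOOLS: Dict[str, Callable[[str], str]] = {
    "search_catalog": lambda q: f"top items for '{q}'",
    "fetch_user_profile": lambda uid: f"profile summary for user {uid}",
}


def llm_complete(prompt: str) -> str:
    """Placeholder for a call to whichever LLM completion API is available."""
    raise NotImplementedError("plug in an actual LLM client here")


def personalize(user_request: str, user_id: str) -> str:
    # 1. Compile the request into a plan: a JSON list of tool calls.
    plan_prompt = (
        f"User {user_id} asks: {user_request}\n"
        f"Available tools: {list(TOOLS)}\n"
        'Return a JSON list like [{"tool": ..., "arg": ...}].'
    )
    plan: List[dict] = json.loads(llm_complete(plan_prompt))

    # 2. Execute the plan by calling the external tools.
    observations = [TOOLS[step["tool"]](step["arg"]) for step in plan]

    # 3. Integrate the tools' outputs into a natural, explainable answer.
    answer_prompt = (
        f"Request: {user_request}\nTool outputs: {observations}\n"
        "Write a personalized, explainable response."
    )
    return llm_complete(answer_prompt)
```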

Authors (12)
  1. Jin Chen (98 papers)
  2. Zheng Liu (312 papers)
  3. Xu Huang (56 papers)
  4. Chenwang Wu (13 papers)
  5. Qi Liu (485 papers)
  6. Gangwei Jiang (17 papers)
  7. Yuanhao Pu (4 papers)
  8. Yuxuan Lei (12 papers)
  9. Xiaolong Chen (86 papers)
  10. Xingmei Wang (7 papers)
  11. Defu Lian (142 papers)
  12. Enhong Chen (242 papers)
Citations (50)