Reinforcing User Retention in a Billion Scale Short Video Recommender System (2302.01724v3)

Published 3 Feb 2023 in cs.LG and cs.IR

Abstract: Short video platforms have recently achieved rapid user growth by recommending engaging content to users. The goal of recommendation is to optimize user retention, thereby driving the growth of DAU (Daily Active Users). Retention is a long-term feedback signal accumulated over many interactions between a user and the system, and the retention reward is hard to decompose into individual items or lists of items, so traditional point-wise and list-wise models cannot optimize retention. In this paper, we therefore adopt reinforcement learning methods, as they are designed to maximize long-term performance. We formulate the problem as an infinite-horizon, request-based Markov Decision Process whose objective is to minimize the accumulated time interval between sessions, which is equivalent to increasing app-open frequency and user retention. However, existing reinforcement learning algorithms cannot be directly applied in this setting because of the uncertainty, bias, and long delay inherent in user-retention signals. We propose a novel method, dubbed RLUR, to address these challenges. Both offline and live experiments show that RLUR significantly improves user retention. RLUR has been fully deployed in the Kuaishou app for a long time and achieves consistent improvements in user retention and DAU.
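The objective in the abstract can be illustrated with a minimal sketch: if the agent acts at each user request and the reward penalizes the time gap until the user's next session, then maximizing the discounted return is equivalent to shortening the accumulated inter-session interval, i.e., raising app-open frequency. The class and function names below are hypothetical illustrations, not the paper's actual RLUR implementation.

```python
# Hypothetical sketch of the retention objective: reward = negative time gap
# until the user's next session, so maximizing return minimizes the
# accumulated inter-session interval. Not the paper's actual algorithm.
from dataclasses import dataclass

@dataclass
class RequestTransition:
    state: object            # user/context features at a request (placeholder)
    action: object           # recommended item list for that request (placeholder)
    next_session_gap: float  # hours until the user returns (delayed signal)

def discounted_retention_return(transitions, gamma=0.99):
    """Discounted sum of negative inter-session time intervals.

    Minimizing the summed gap is the same as maximizing this return.
    """
    ret = 0.0
    for t, tr in enumerate(transitions):
        ret += (gamma ** t) * (-tr.next_session_gap)
    return ret

# A user who returns quickly earns a higher return than one who stays away.
frequent = [RequestTransition(None, None, g) for g in (1.0, 2.0, 1.5)]
infrequent = [RequestTransition(None, None, g) for g in (8.0, 12.0, 24.0)]
assert discounted_retention_return(frequent) > discounted_retention_return(infrequent)
```

This toy framing only captures the reward shape; the paper's contribution (RLUR) lies in handling the uncertainty, bias, and long delay of this signal, which the sketch does not model.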

Authors (9)
  1. Qingpeng Cai (43 papers)
  2. Shuchang Liu (39 papers)
  3. Xueliang Wang (16 papers)
  4. Tianyou Zuo (3 papers)
  5. Wentao Xie (7 papers)
  6. Bin Yang (320 papers)
  7. Dong Zheng (30 papers)
  8. Peng Jiang (274 papers)
  9. Kun Gai (125 papers)
Citations (30)