
Provably Efficient Reinforcement Learning for Online Adaptive Influence Maximization (2206.14846v1)

Published 29 Jun 2022 in cs.LG, cs.SI, and stat.ML

Abstract: Online influence maximization aims to maximize the influence spread of content in a social network with an unknown network model by selecting a few seed nodes. Recent studies follow a non-adaptive setting, where the seed nodes are selected before the diffusion process starts and the network parameters are updated only after the diffusion stops. We consider an adaptive version of the content-dependent online influence maximization problem, where seed nodes are sequentially activated based on real-time feedback. In this paper, we formulate the problem as an infinite-horizon discounted MDP under a linear diffusion process and present a model-based reinforcement learning solution. Our algorithm maintains an estimate of the network model and selects seed users adaptively, exploring the social network while optimistically improving its policy. We establish an $\widetilde O(\sqrt{T})$ regret bound for our algorithm. Empirical evaluations on synthetic networks demonstrate the efficiency of our algorithm.
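The adaptive loop the abstract describes (maintain a model estimate, pick seeds optimistically, update from real-time activation feedback) can be illustrated with a toy sketch. This is not the paper's actual algorithm: it collapses the discounted MDP to a one-step, bandit-style simplification with a single seed per round, and all names (`simulate_adaptive_im`, the UCB-style bonus) are hypothetical illustrations of the general idea.

```python
import math
import random

def simulate_adaptive_im(true_probs, horizon=100, seed=0):
    """Toy sketch of adaptive online influence maximization.

    true_probs: dict mapping a directed edge (u, v) to its activation
    probability, unknown to the learner. Each round the learner picks one
    seed node, observes which out-neighbors it activates, and updates
    optimistic (UCB-style) per-edge estimates.
    """
    rng = random.Random(seed)
    nodes = sorted({u for u, _ in true_probs} | {v for _, v in true_probs})
    trials = {e: 0 for e in true_probs}  # times each edge was triggered
    wins = {e: 0 for e in true_probs}    # times each edge fired

    def ucb(e, t):
        # Optimistic estimate: empirical mean plus an exploration bonus,
        # clipped to 1.0 since it estimates a probability.
        if trials[e] == 0:
            return 1.0
        bonus = math.sqrt(2.0 * math.log(t + 1) / trials[e])
        return min(1.0, wins[e] / trials[e] + bonus)

    total_spread = 0
    for t in range(horizon):
        # Select the seed whose outgoing edges look most promising
        # under the optimistic model estimate.
        seed_node = max(
            nodes,
            key=lambda u: sum(ucb((a, v), t) for (a, v) in true_probs if a == u),
        )
        # One-step diffusion: each out-edge of the seed fires independently.
        activated = 1  # count the seed itself
        for (u, v), p in true_probs.items():
            if u == seed_node:
                fired = rng.random() < p
                trials[(u, v)] += 1
                wins[(u, v)] += fired
                activated += fired
        total_spread += activated
    return total_spread, trials
```

On a small graph where node 0 has two high-probability out-edges, the optimistic rule concentrates its seed selections on node 0 after a few exploratory rounds, which is the exploration-vs-exploitation behavior the regret bound formalizes.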

Authors (7)
  1. Kaixuan Huang (70 papers)
  2. Yu Wu (196 papers)
  3. Xuezhou Zhang (36 papers)
  4. Shenyinying Tu (10 papers)
  5. Qingyun Wu (47 papers)
  6. Mengdi Wang (199 papers)
  7. Huazheng Wang (44 papers)
Citations (1)
