
Online Non-Monotone DR-submodular Maximization (1909.11426v2)

Published 25 Sep 2019 in cs.LG, cs.DS, and stat.ML

Abstract: In this paper, we study the fundamental problem of maximizing DR-submodular continuous functions, which has real-world applications in machine learning, economics, operations research, and communication systems. DR-submodular maximization captures a subclass of non-convex optimization that admits both theoretical and practical guarantees. Here, we focus on minimizing regret for online arriving non-monotone DR-submodular functions over different types of convex sets: the hypercube, down-closed convex sets, and general convex sets. First, we present an online algorithm that achieves a $1/e$-approximation ratio with regret $O(T^{2/3})$ for maximizing DR-submodular functions over any down-closed convex set. Note that the approximation ratio of $1/e$ matches the best-known guarantee for the offline version of the problem. Moreover, when the convex set is the hypercube, we propose a tight $1/2$-approximation algorithm with a regret bound of $O(\sqrt{T})$. Next, we give an online algorithm that achieves an approximation guarantee (depending on the search space) for the problem of maximizing non-monotone continuous DR-submodular functions over a \emph{general} convex set (not necessarily down-closed). To the best of our knowledge, no prior algorithm with an approximation guarantee was known for non-monotone DR-submodular maximization in the online setting. Finally, we run experiments to verify the performance of our algorithms on problems arising in the machine learning domain with real-world datasets.
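The abstract states the guarantees but not the mechanics, so below is a minimal, hedged sketch of the kind of Frank-Wolfe-style online scheme such results typically build on: a Meta-Frank-Wolfe loop (in the spirit of Chen et al., 2018) combined with the per-coordinate $(1-x)$ shrinkage used for non-monotone objectives over down-closed sets (Bian et al.). This is not the paper's algorithm; the toy quadratic objective, the number `K` of inner online-linear-optimization instances, and the step size `eta` are all illustrative assumptions.

```python
# Hedged sketch: a Meta-Frank-Wolfe-style online loop for non-monotone
# DR-submodular maximization over the hypercube [0,1]^n (a down-closed set).
# NOT the paper's exact algorithm -- objective, K, and eta are assumptions.
import numpy as np

rng = np.random.default_rng(0)
n, T, K = 5, 200, 20   # dimension, online rounds, inner FW steps (assumed)
eta = 0.1              # step size for the inner OLO instances (assumed)

def make_ft():
    """Random non-monotone DR-submodular quadratic f(x) = h.x + 0.5 x'Hx.
    H elementwise non-positive makes all second partials non-positive,
    a standard continuous DR-submodular example."""
    H = -np.abs(rng.normal(size=(n, n)))
    H = (H + H.T) / 2
    h = rng.normal(size=n) + 1.0
    return (lambda x: h @ x + 0.5 * x @ H @ x), (lambda x: h + H @ x)

# One online-linear-optimization (OLO) instance per Frank-Wolfe step;
# here each is simple projected online gradient ascent over [0,1]^n.
v = [np.zeros(n) for _ in range(K)]

total = 0.0
for t in range(T):
    f, grad = make_ft()          # adversary reveals f_t this round
    # Frank-Wolfe pass: shrink each update by (1 - x), the standard trick
    # for NON-monotone objectives over down-closed sets (1/e offline).
    xs = [np.zeros(n)]
    for k in range(K):
        xs.append(xs[-1] + (1.0 / K) * v[k] * (1.0 - xs[-1]))
    x_t = xs[-1]                 # the point played at round t
    total += f(x_t)
    # Full-information feedback: instance k receives the gradient at the
    # point it was queried from, then takes a projected ascent step.
    for k in range(K):
        g = grad(xs[k])
        v[k] = np.clip(v[k] + eta * g, 0.0, 1.0)

print(f"average value over {T} rounds: {total / T:.3f}")
```

The shrinkage `v * (1 - x)` keeps iterates away from the upper boundary of the hypercube, which is what the offline non-monotone analysis exploits to obtain the $1/e$ ratio; the online versions replace the exact linear-maximization step with no-regret subroutines, which is where the $O(T^{2/3})$ and $O(\sqrt{T})$ regret rates come from.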

Authors (2)
  1. Nguyen Kim Thang (23 papers)
  2. Abhinav Srivastav (5 papers)
Citations (11)
