A Unified Approach for Maximizing Continuous DR-submodular Functions (2305.16671v3)

Published 26 May 2023 in cs.LG, cs.AI, and cs.CC

Abstract: This paper presents a unified approach for maximizing continuous DR-submodular functions that encompasses a range of settings and oracle access types. Our approach includes a Frank-Wolfe type offline algorithm for both monotone and non-monotone functions, under different restrictions on the general convex set. We consider settings where the oracle provides access to either the gradient of the function or only the function value, and where the oracle access is either deterministic or stochastic. We determine the number of required oracle accesses in all cases. Our approach gives new or improved results in nine of the sixteen considered cases and avoids computationally expensive projections in two cases, while matching the performance of state-of-the-art approaches in the remaining five. Notably, our approach for the stochastic function-value-based oracle enables the first regret bounds with bandit feedback for stochastic DR-submodular functions.
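
For intuition, the sketch below shows the classic continuous-greedy (Frank-Wolfe) update that the abstract's algorithm family builds on. It is not the paper's algorithm: it covers only the simplest of the sixteen settings (monotone objective, exact gradient oracle, down-closed polytope), and the function and variable names (`frank_wolfe_dr_submodular`, `grad_oracle`) are hypothetical.

```python
import numpy as np
from scipy.optimize import linprog


def frank_wolfe_dr_submodular(grad_oracle, A_ub, b_ub, dim, num_steps=100):
    """Continuous-greedy (Frank-Wolfe) sketch for monotone DR-submodular
    maximization over the down-closed polytope K = {x >= 0 : A_ub @ x <= b_ub}.

    Illustrative only: the paper's unified framework also handles
    non-monotone objectives, value-only and stochastic oracles, and more
    general convex sets than this sketch does.
    """
    x = np.zeros(dim)  # assumes the origin is feasible
    for _ in range(num_steps):
        g = grad_oracle(x)  # exact gradient (or an unbiased estimate)
        # Linear maximization oracle: v in argmax_{v in K} <g, v>.
        # linprog minimizes, so negate the objective; no projection needed.
        res = linprog(-g, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * dim)
        x = x + res.x / num_steps  # step size 1/T moves x from 0 into K
    return x


# Toy usage: f(x) = <a, x> - 0.5 * x^T H x is DR-submodular (its Hessian -H
# is entrywise non-positive) and monotone on the simplex {x >= 0 : x1 + x2 <= 1}.
H = np.array([[1.0, 0.5], [0.5, 1.0]])
a = np.array([2.0, 1.5])
x_hat = frank_wolfe_dr_submodular(lambda x: a - H @ x,
                                  A_ub=np.ones((1, 2)), b_ub=np.array([1.0]),
                                  dim=2)
```

The projection-free character highlighted in the abstract comes from the linear-maximization step: each iteration solves a linear program over the feasible set rather than projecting onto it.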
