From Linear to Linearizable Optimization: A Novel Framework with Applications to Stationary and Non-stationary DR-submodular Optimization (2405.00065v3)

Published 27 Apr 2024 in math.OC, cs.CC, cs.LG, and stat.ML

Abstract: This paper introduces the notion of upper-linearizable/quadratizable functions, a class that extends concavity and DR-submodularity in various settings, including monotone and non-monotone cases over different convex sets. A general meta-algorithm is devised to convert algorithms for linear/quadratic maximization into ones that optimize upper-linearizable/quadratizable functions, offering a unified approach to tackling concave and DR-submodular optimization problems. The paper extends these results to multiple feedback settings, facilitating conversions between semi-bandit/first-order feedback and bandit/zeroth-order feedback, as well as between first/zeroth-order feedback and semi-bandit/bandit feedback. Leveraging this framework, new algorithms are derived using existing results as base algorithms for convex optimization, improving upon state-of-the-art results in various cases. Dynamic and adaptive regret guarantees are obtained for DR-submodular maximization, marking the first algorithms to achieve such guarantees in these settings. Notably, the paper achieves these advancements with fewer assumptions compared to existing state-of-the-art results, underscoring its broad applicability and theoretical contributions to non-convex optimization.
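
The abstract describes the conversion framework only at a high level. As a minimal sketch, the code below instantiates one plausible, simplified reading of upper-linearizability: a function f over a convex set K is treated as upper-linearizable if there exist a map g (a "surrogate gradient") and a constant alpha in (0, 1] such that alpha * f(x_star) - f(x) <= <g(x), x_star - x> for all x, x_star in K. Under that assumption, feeding the surrogate rewards <g(x_t), .> to any online linear-maximization algorithm converts its linear regret bound into an alpha-regret bound for f. All names, the projection set, and the base algorithm here are illustrative assumptions, not taken from the paper.

    import numpy as np

    def project_to_ball(x, radius=1.0):
        # Euclidean projection onto an origin-centered ball, standing in
        # for projection onto a general convex constraint set K.
        norm = np.linalg.norm(x)
        return x if norm <= radius else x * (radius / norm)

    class OnlineGradientAscent:
        # Base algorithm: online linear maximization via projected
        # gradient steps (illustrative choice; any no-regret linear
        # optimizer could be plugged in).
        def __init__(self, dim, step_size=0.1):
            self.x = np.zeros(dim)
            self.step_size = step_size

        def play(self):
            return self.x

        def feed(self, linear_reward_grad):
            # Ascent step on the linear surrogate reward <grad, .>.
            self.x = project_to_ball(self.x + self.step_size * linear_reward_grad)

    def meta_algorithm(base, surrogate_grad, horizon):
        # Conversion: query the base linear optimizer, evaluate the
        # surrogate gradient g(x_t) of the (possibly DR-submodular)
        # objective, and feed it back as a linear reward. Under the
        # assumed inequality, the base algorithm's linear regret
        # upper-bounds the alpha-regret on the original objective.
        iterates = []
        for t in range(horizon):
            x_t = base.play()
            iterates.append(x_t)
            base.feed(surrogate_grad(x_t))
        return iterates

    if __name__ == "__main__":
        # Toy concave objective f(x) = -||x - c||^2, where g is the true
        # gradient and alpha = 1, so the wrapper reduces to projected
        # online gradient ascent.
        c = np.array([0.5, -0.3])
        iterates = meta_algorithm(
            OnlineGradientAscent(dim=2),
            surrogate_grad=lambda x: -2.0 * (x - c),
            horizon=200,
        )
        print("final iterate:", iterates[-1])  # approaches the maximizer c

The wrapper pattern is what would make such a conversion modular: swapping in a base algorithm with dynamic or adaptive regret guarantees would immediately transfer the corresponding guarantee to the linearizable objective, which appears to be the mechanism the abstract alludes to for the dynamic and adaptive regret results.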

