From Linear to Linearizable Optimization: A Novel Framework with Applications to Stationary and Non-stationary DR-submodular Optimization (2405.00065v3)
Abstract: This paper introduces the notion of upper-linearizable/quadratizable functions, a class that extends concavity and DR-submodularity in various settings, including monotone and non-monotone cases over different convex sets. A general meta-algorithm is devised to convert algorithms for linear/quadratic maximization into ones that optimize upper-linearizable/quadratizable functions, offering a unified approach to tackling concave and DR-submodular optimization problems. The paper extends these results to multiple feedback settings, facilitating conversions between semi-bandit/first-order feedback and bandit/zeroth-order feedback, as well as between first/zeroth-order feedback and semi-bandit/bandit feedback. Leveraging this framework, new algorithms are derived using existing convex-optimization results as base algorithms, improving upon state-of-the-art results in several cases. Dynamic and adaptive regret guarantees are obtained for DR-submodular maximization, yielding the first algorithms to achieve such guarantees in these settings. Notably, the paper achieves these advancements under fewer assumptions than existing state-of-the-art results require, underscoring its broad applicability and theoretical contributions to non-convex optimization.
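To give a concrete feel for the conversion the abstract describes, the sketch below illustrates the linearization idea in its simplest instance: a concave objective (the most basic "upper-linearizable" case) is optimized by repeatedly handing its gradient, as a linear objective, to a base algorithm that only knows how to handle linear rewards. All names, the one-dimensional domain, and the specific base algorithm are illustrative assumptions, not the paper's actual construction or notation.

```python
# Minimal sketch, assuming first-order feedback and the box domain [0, 1].
# The base algorithm maximizes linear rewards r_t(x) = c_t * x; the meta
# wrapper feeds it gradients of a concave f, so the base algorithm ends up
# maximizing f without ever seeing f directly.

def project(x, lo=0.0, hi=1.0):
    """Projection onto the (hypothetical) feasible interval [lo, hi]."""
    return max(lo, min(hi, x))

class OnlineGradientAscentLinear:
    """Base algorithm: online gradient ascent for linear rewards on [0, 1]."""
    def __init__(self, x0=0.0, eta=0.1):
        self.x = x0
        self.eta = eta

    def predict(self):
        return self.x

    def update(self, c):
        # Gradient of the linear reward c * x is just c.
        self.x = project(self.x + self.eta * c)

def meta_maximize(grad, base, rounds):
    """Meta-conversion: each round, pass grad(x_t) to the base linear
    maximizer as that round's linear objective (first-order feedback)."""
    iterates = []
    for _ in range(rounds):
        x = base.predict()
        iterates.append(x)
        base.update(grad(x))
    return iterates

# Example: concave f(x) = -(x - 0.7)^2, maximized at x* = 0.7.
f = lambda x: -(x - 0.7) ** 2
grad = lambda x: -2.0 * (x - 0.7)
xs = meta_maximize(grad, OnlineGradientAscentLinear(), rounds=200)
# xs[-1] approaches the maximizer 0.7.
```

The same wrapper pattern is what the paper's meta-algorithm generalizes: swapping in quadratic surrogates, non-monotone DR-submodular objectives, or bandit feedback changes what is fed to the base algorithm, not the reduction's overall shape.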