
Adaptive Combinatorial Maximization: Beyond Approximate Greedy Policies (2404.01930v1)

Published 2 Apr 2024 in cs.LG, cs.DM, and stat.ML

Abstract: We study adaptive combinatorial maximization, a core challenge in machine learning with applications in active learning and many other domains. We consider the Bayesian setting, with the objectives of maximization under a cardinality constraint and minimum cost coverage. We provide new comprehensive approximation guarantees that subsume previous results and considerably strengthen them. Our approximation guarantees simultaneously support the maximal gain ratio as well as near-submodular utility functions, and include both maximization under a cardinality constraint and a minimum cost coverage guarantee. In addition, we provide an approximation guarantee for a modified prior, which is crucial for obtaining active learning guarantees that do not depend on the smallest probability in the prior. Moreover, we discover a new parameter of adaptive selection policies, which we term the "maximal gain ratio". We show that this parameter is strictly less restrictive than the greedy approximation parameter used in previous approximation guarantees, and that it can be used to provide stronger approximation guarantees than previous results. In particular, we show that the maximal gain ratio is never larger than the greedy approximation factor of a policy, and that it can be considerably smaller. This provides new insight into the properties that make a policy useful for adaptive combinatorial maximization.
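To make the setting concrete, here is a minimal illustrative sketch (not taken from the paper) of an adaptive greedy policy for Bayesian maximization under a cardinality constraint: each selected item reveals a random state, utility is a function of the observed (item, state) pairs, and the policy repeatedly picks the item with the largest expected marginal gain given the observations so far. All names (`adaptive_greedy`, `coverage_utility`, the toy coverage instance) are hypothetical, and states are assumed independent across items for simplicity.

```python
import random

def adaptive_greedy(items, state_dist, utility, k, rng):
    """Adaptive greedy policy sketch.

    items: list of item ids.
    state_dist: dict item -> list of (state, prob) pairs (states assumed
        independent across items).
    utility: function mapping a frozenset of (item, state) pairs -> float.
    k: cardinality budget.
    rng: random.Random used to sample realized states.
    Returns (chosen items in order, observed (item, state) pairs).
    """
    observed = frozenset()
    chosen = []
    for _ in range(min(k, len(items))):
        best_item, best_gain = None, float("-inf")
        for item in items:
            if item in chosen:
                continue
            # Expected marginal gain of this item under its state distribution,
            # conditioned on everything observed so far.
            gain = sum(p * (utility(observed | {(item, s)}) - utility(observed))
                       for s, p in state_dist[item])
            if gain > best_gain:
                best_item, best_gain = item, gain
        # Select the item and "observe" its realized state before the next round.
        states, probs = zip(*state_dist[best_item])
        realized = rng.choices(states, weights=probs)[0]
        observed = observed | {(best_item, realized)}
        chosen.append(best_item)
    return chosen, observed

# Toy stochastic coverage instance: each (item, state) pair covers a set of
# elements, and utility is the number of elements covered so far.
COVERAGE = {
    ("a", 1): {1, 2}, ("a", 0): set(),
    ("b", 1): {2, 3}, ("b", 0): {3},
    ("c", 1): {4},
}

def coverage_utility(observed):
    covered = set()
    for item, state in observed:
        covered |= COVERAGE[(item, state)]
    return len(covered)

DIST = {"a": [(1, 0.5), (0, 0.5)],
        "b": [(1, 0.5), (0, 0.5)],
        "c": [(1, 1.0)]}
```

In the first round the policy computes expected gains of 1.0 for "a", 1.5 for "b", and 1.0 for "c", so it selects "b" first; later choices adapt to the state "b" happens to reveal, which is exactly the adaptivity that distinguishes this setting from non-adaptive submodular maximization.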

