Frugal Algorithm Selection (2405.11059v1)

Published 17 May 2024 in cs.LG

Abstract: When solving decision and optimisation problems, many competing algorithms (model and solver choices) have complementary strengths. Typically, there is no single algorithm that works well for all instances of a problem. Automated algorithm selection has been shown to work very well for choosing a suitable algorithm for a given instance. However, the cost of training can be prohibitively large due to running candidate algorithms on a representative set of training instances. In this work, we explore reducing this cost by choosing a subset of the training instances on which to train. We approach this problem in three ways: using active learning to decide based on prediction uncertainty, augmenting the algorithm predictors with a timeout predictor, and collecting training data using a progressively increasing timeout. We evaluate combinations of these approaches on six datasets from ASLib and present the reduction in labelling cost achieved by each option.
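The abstract names three cost-reduction devices: active learning driven by prediction uncertainty, a timeout predictor alongside the runtime predictors, and data collection under a progressively increasing timeout. As a rough illustration of how the first and third might combine, here is a minimal sketch; it is not the authors' implementation: the data is synthetic, and using per-tree spread in a random forest as the uncertainty measure and a doubling timeout schedule are assumptions for illustration.

```python
# Minimal sketch of uncertainty-driven "frugal" training-data collection
# for algorithm selection, combined with a progressively doubled timeout.
# NOT the paper's code: data is synthetic, and the forest-spread
# uncertainty measure and doubling schedule are illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n_instances, n_features, n_algorithms = 200, 10, 3

X = rng.random((n_instances, n_features))                          # instance features
true_runtime = rng.exponential(50.0, (n_instances, n_algorithms))  # hidden runtimes

labelled = list(rng.choice(n_instances, size=10, replace=False))   # seed set
pool = [i for i in range(n_instances) if i not in labelled]
timeout = 10.0  # initial cap on how long algorithms run when labelling

def fit_models(timeout):
    """One runtime predictor per algorithm, trained on timeout-censored labels."""
    models = []
    for a in range(n_algorithms):
        y = np.minimum(true_runtime[labelled, a], timeout)  # censored label
        models.append(RandomForestRegressor(n_estimators=50, random_state=a)
                      .fit(X[labelled], y))
    return models

def most_uncertain(pool, models):
    """Pool instance whose runtime prediction has the largest per-tree spread."""
    Xp = X[pool]
    stds = [np.stack([t.predict(Xp) for t in m.estimators_]).std(axis=0)
            for m in models]
    return pool[int(np.argmax(np.max(np.stack(stds), axis=0)))]

for _ in range(15):                 # active-learning loop (budgeted queries)
    models = fit_models(timeout)
    q = most_uncertain(pool, models)
    pool.remove(q)
    labelled.append(q)              # "run" the algorithms on q, up to the timeout
    timeout *= 2                    # progressively increasing timeout

# Final selector: lowest predicted runtime wins.
models = fit_models(timeout)
print("algorithm for instance 0:",
      int(np.argmin([m.predict(X[:1])[0] for m in models])))
```

The second device described in the abstract, a predictor of whether a candidate algorithm will time out on an instance, is omitted here for brevity; it would be a companion classifier queried before trusting the censored runtime labels.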
