Learning Interpretable Heuristics for WalkSAT (2307.04608v1)

Published 10 Jul 2023 in cs.AI

Abstract: Local search algorithms are well-known methods for solving large, hard instances of the satisfiability problem (SAT). The performance of these algorithms crucially depends on heuristics for setting noise parameters and scoring variables, and the optimal setting for these heuristics varies across instance distributions. In this paper, we present an approach for learning effective variable scoring functions and noise parameters using reinforcement learning. We consider satisfiability problems from different instance distributions and learn specialized heuristics for each of them. Our experimental results show improvements over both a WalkSAT baseline and another learned local search heuristic.
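
To make the setting concrete, the sketch below shows a plain WalkSAT-style loop in which the variable scoring function and the noise parameter are passed in as components; these are the two pieces the abstract says are learned with reinforcement learning. This is illustrative only and not the authors' implementation: the score and noise arguments, the break-count feature, and the toy formula at the bottom are assumptions made for the example.

import random

def break_count(v, clauses, assignment):
    # break(v): clauses whose only true literal is on v, so flipping v would break them.
    count = 0
    for clause in clauses:
        true_lits = [lit for lit in clause if (lit > 0) == assignment[abs(lit)]]
        if len(true_lits) == 1 and abs(true_lits[0]) == v:
            count += 1
    return count

def walksat(clauses, n_vars, score, noise, max_flips=100_000):
    # clauses: list of clauses, each a list of non-zero ints (negative = negated literal).
    # score:   callable mapping a variable's break count to a selection score (the learned part).
    # noise:   probability of a random-walk step instead of a greedy step (the learned part).
    assignment = [False] + [random.random() < 0.5 for _ in range(n_vars)]  # 1-indexed

    def satisfied(clause):
        return any((lit > 0) == assignment[abs(lit)] for lit in clause)

    for _ in range(max_flips):
        unsat = [c for c in clauses if not satisfied(c)]
        if not unsat:
            return assignment[1:]           # satisfying assignment found
        clause = random.choice(unsat)
        variables = [abs(lit) for lit in clause]
        if random.random() < noise:
            var = random.choice(variables)  # noise (random walk) step
        else:
            var = max(variables,            # greedy step under the scoring function
                      key=lambda v: score(break_count(v, clauses, assignment)))
        assignment[var] = not assignment[var]
    return None                             # flip budget exhausted

if __name__ == "__main__":
    # Toy satisfiable formula; a hand-written score that prefers low break counts
    # stands in for a learned scoring function.
    cnf = [[1, 2], [-1, 3], [-2, -3], [1, -3]]
    print(walksat(cnf, n_vars=3, score=lambda brk: -brk, noise=0.5))

Using the break count as the score's input follows the classic WalkSAT family, where greedy variable selection is driven by break counts; the paper's learned heuristics replace this hand-crafted choice with distribution-specific functions.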

