
Learning to Stop Cut Generation for Efficient Mixed-Integer Linear Programming (2401.17527v2)

Published 31 Jan 2024 in cs.AI

Abstract: Cutting planes (cuts) play an important role in solving mixed-integer linear programs (MILPs), as they significantly tighten the dual bounds and improve solving performance. A key question is when to stop cut generation, which strongly affects the efficiency of solving MILPs. However, many modern MILP solvers employ hard-coded heuristics for this decision, which tend to neglect underlying patterns among MILPs from certain applications. To address this challenge, we formulate the cut generation stopping problem as a reinforcement learning problem and propose a novel hybrid graph representation model (HYGRO) to learn effective stopping strategies. An appealing feature of HYGRO is that it effectively captures both the dynamic and static features of MILPs, enabling dynamic decision-making for the stopping strategies. To the best of our knowledge, HYGRO is the first data-driven method for the cut generation stopping problem. By integrating our approach with modern solvers, experiments demonstrate that HYGRO significantly improves the efficiency of solving MILPs compared to competitive baselines, achieving up to a 31% improvement.
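
The abstract describes the approach only at a high level. As a rough illustration of the kind of hybrid policy it sketches, the following PyTorch snippet combines a static bipartite constraint-variable encoding of a MILP with dynamic per-round solver statistics to produce a stop/continue decision at each separation round. The module name, feature choices, aggregation scheme, and dimensions are assumptions made for illustration, not the paper's actual HYGRO architecture.

```python
import torch
import torch.nn as nn


class StopCutsPolicy(nn.Module):
    """Hypothetical stop/continue policy for cut generation (illustrative only)."""

    def __init__(self, var_dim=4, cons_dim=4, dyn_dim=3, hidden=64):
        super().__init__()
        # Static side: embeddings of variable and constraint nodes of the
        # MILP's bipartite constraint-variable graph.
        self.var_enc = nn.Linear(var_dim, hidden)
        self.cons_enc = nn.Linear(cons_dim, hidden)
        # Dynamic side: per-round solver statistics (assumed here to be
        # dual-bound improvement, number of cuts added, rounds elapsed).
        self.dyn_enc = nn.Sequential(nn.Linear(dyn_dim, hidden), nn.ReLU())
        # Fused head produces logits over {continue, stop}.
        self.head = nn.Sequential(nn.Linear(3 * hidden, hidden), nn.ReLU(),
                                  nn.Linear(hidden, 2))

    def forward(self, var_feats, cons_feats, edge_index, dyn_feats):
        # edge_index: (2, E) long tensor; row 0 holds constraint indices and
        # row 1 variable indices of the nonzeros in the constraint matrix.
        v = torch.relu(self.var_enc(var_feats))    # (n_vars, hidden)
        c = torch.relu(self.cons_enc(cons_feats))  # (n_cons, hidden)
        cons_idx, var_idx = edge_index
        # One round of mean aggregation from variables into constraints.
        agg = torch.zeros_like(c).index_add_(0, cons_idx, v[var_idx])
        deg = torch.zeros(c.size(0), 1).index_add_(
            0, cons_idx, torch.ones(cons_idx.size(0), 1))
        c = c + agg / deg.clamp(min=1.0)
        # Pool the static graph and fuse it with the dynamic round state.
        static_emb = torch.cat([v.mean(dim=0), c.mean(dim=0)])  # (2*hidden,)
        dyn_emb = self.dyn_enc(dyn_feats)                        # (hidden,)
        return self.head(torch.cat([static_emb, dyn_emb]))       # (2,) logits


# Toy usage: 3 variables, 2 constraints, 4 nonzeros, one separation round.
policy = StopCutsPolicy()
var_feats = torch.randn(3, 4)
cons_feats = torch.randn(2, 4)
edge_index = torch.tensor([[0, 0, 1, 1],   # constraint indices
                           [0, 1, 1, 2]])  # variable indices
dyn_feats = torch.tensor([0.12, 5.0, 2.0])  # assumed dynamic statistics
logits = policy(var_feats, cons_feats, edge_index, dyn_feats)
action = torch.distributions.Categorical(logits=logits).sample()
print("stop cut generation" if action.item() == 1 else "continue separating")
```

In a full pipeline, such a policy would be trained with a reinforcement learning objective, in line with the abstract's formulation, and queried by the solver at the end of each cutting round to decide whether to keep separating.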
