
Tackling Prevalent Conditions in Unsupervised Combinatorial Optimization: Cardinality, Minimum, Covering, and More (2405.08424v2)

Published 14 May 2024 in cs.LG and math.OC

Abstract: Combinatorial optimization (CO) is naturally discrete, making machine learning based on differentiable optimization inapplicable. Karalias & Loukas (2020) adapted the probabilistic method to incorporate CO into differentiable optimization. Their work ignited research on unsupervised learning for CO, which consists of two main components: probabilistic objectives and derandomization. However, each component confronts unique challenges. First, deriving objectives under various conditions (e.g., cardinality constraints and minimum) is nontrivial. Second, the derandomization process is underexplored, and the existing derandomization methods are either random sampling or naive rounding. In this work, we aim to tackle prevalent (i.e., commonly involved) conditions in unsupervised CO. First, we concretize the targets for objective construction and derandomization with theoretical justification. Then, for various conditions commonly involved in different CO problems, we derive nontrivial objectives and derandomization schemes that meet these targets. Finally, we apply the derivations to various CO problems. Via extensive experiments on synthetic and real-world graphs, we validate the correctness of our derivations and show our empirical superiority w.r.t. both optimization quality and speed.
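To make the abstract's two components concrete, here is a minimal, self-contained sketch in the spirit of Karalias & Loukas (2020), not the paper's implementation: the toy clique-style objective, the penalty weight, and the rounding order are illustrative assumptions. It shows (1) a differentiable probabilistic objective over independent Bernoulli inclusion probabilities and (2) derandomization by greedy conditional-expectation rounding.

import numpy as np

def probabilistic_loss(p, missing, penalty=2.0):
    # Expected toy loss under independent Bernoulli(p) node inclusion:
    # reward the expected selected-set size, penalize the expected number
    # of selected pairs that are NOT connected ("missing" edges).
    reward = p.sum()
    violation = penalty * (np.outer(p, p) * missing).sum() / 2.0
    return -reward + violation

def derandomize(p, missing):
    # Method of conditional expectation: fix one coordinate at a time to
    # whichever value (0 or 1) gives the smaller conditional loss, keeping
    # the remaining coordinates at their current probabilities.
    x = p.astype(float).copy()
    for i in np.argsort(-np.abs(p - 0.5)):  # most confident entries first
        losses = []
        for v in (0.0, 1.0):
            x[i] = v
            losses.append(probabilistic_loss(x, missing))
        x[i] = float(np.argmin(losses))
    return x.astype(int)

# Usage: in an unsupervised-CO pipeline, p would come from a trained GNN;
# here, random probabilities on a random 6-node graph stand in for it.
rng = np.random.default_rng(0)
n = 6
upper = np.triu((rng.random((n, n)) < 0.5).astype(float), 1)
adj = upper + upper.T
missing = 1.0 - adj - np.eye(n)  # 1 where an (off-diagonal) edge is absent
p = rng.random(n)
print(derandomize(p, missing))

Because each coordinate is fixed to the better of its two conditional values, the discrete output of this sketch never has a higher loss than the initial expected loss; the paper's contribution is deriving such objectives and derandomization steps for the harder conditions named in the title (cardinality constraints, minimum, covering, and more).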

References (91)
  1. Learning what to defer for maximum independent sets. In ICML, 2020.
  2. Coverage and mobile sensor placement for vehicles on predetermined routes: A greedy heuristic approach. In WINSYS, 2017.
  3. The probabilistic method. John Wiley & Sons, 2016.
  4. Physics-inspired optimization for quadratic unconstrained problems using a digital annealer. Frontiers in Physics, 7:48, 2019.
  5. Bach, F. et al. Learning with submodular functions: A convex optimization perspective. Foundations and Trends® in machine learning, 6(2-3):145–373, 2013.
  6. Neural combinatorial optimization with reinforcement learning. In ICLR, 2016.
  7. Machine learning for combinatorial optimization: a methodological tour d’horizon. European Journal of Operational Research, 290(2):405–421, 2021.
  8. Learning with differentiable perturbed optimizers. In NeurIPS, 2020.
  9. RL4CO: an extensive reinforcement learning for combinatorial optimization benchmark. arXiv:2306.17100, 2023.
  10. The SCIP Optimization Suite 8.0. arXiv:2112.08872, 2021.
  11. Billionnet, A. Different formulations for solving the heaviest k-subgraph problem. INFOR: Information Systems and Operational Research, 43(3):171–186, 2005.
  12. The maximum clique problem. Handbook of Combinatorial Optimization: Supplement Volume A, pp.  1–74, 1999.
  13. Tackling prevalent conditions in unsupervised combinatorial optimization: Code and datasets. https://github.com/bokveizen/unsupervised-CO-ucom2, 2024.
  14. Submodular maximization with cardinality constraints. In SODA, 2014.
  15. Combinatorial optimization and reasoning with graph neural networks. J. Mach. Learn. Res., 24(130):1–61, 2023.
  16. Algorithms for the set covering problem. Annals of Operations Research, 98(1-4):353–371, 2000.
  17. Clustering uncertain graphs. In PVLDB, 2017.
  18. A Lagrangian-based heuristic for large-scale set covering problems. Mathematical Programming, 81:215–228, 1998.
  19. Combinatorial optimization with policy adaptation using latent space search. In NeurIPS, 2023.
  20. Embedding uncertain knowledge graphs. In AAAI, 2019.
  21. Simulation-guided beam search for neural combinatorial optimization. In NeurIPS, 2022.
  22. Reinforcement learning with combinatorial actions: An application to vehicle routing. In NeurIPS, 2020.
  23. Diaby, M. The traveling salesman problem: a linear programming formulation. arXiv preprint cs/0609005, 2006.
  24. Bq-nco: Bisimulation quotienting for generalizable neural combinatorial optimization. In NeurIPS, 2023.
  25. Facility location: applications and theory. Springer Science & Business Media, 2004.
  26. Probabilistic methods in combinatorics. Akadémiai Kiadó, 1974.
  27. The dense k-subgraph problem. Algorithmica, 29:410–421, 2001.
  28. Surco: Learning linear surrogates for combinatorial nonlinear optimization problems. In ICML, 2023.
  29. Unsupervised training for neural TSP solver. In LION, 2022.
  30. On the history of the minimum spanning tree problem. Annals of the History of Computing, 7(1):43–57, 1985.
  31. Data reduction and exact algorithms for clique cover. Journal of Experimental Algorithmics (JEA), 13:2–2, 2009.
  32. Winner takes it all: Training performant RL populations for combinatorial optimization. In NeurIPS, 2023.
  33. Approximation algorithms for connected dominating sets. Algorithmica, 20:374–387, 1998.
  34. Gurobi Optimization, LLC. Gurobi Optimizer Reference Manual, 2023. URL https://www.gurobi.com.
  35. Hong, Y. On computing the distribution function for the poisson binomial distribution. Computational Statistics & Data Analysis, 59:41–51, 2013.
  36. On embedding uncertain graphs. In CIKM, 2017.
  37. Graph coloring problems. John Wiley & Sons, 2011.
  38. Empowering graph representation learning with test-time graph transformation. In ICLR, 2023.
  39. Robust graph clustering via meta weighting for noisy graphs. In CIKM, 2023.
  40. Erdős goes neural: an unsupervised learning framework for combinatorial optimization on graphs. In NeurIPS, 2020.
  41. Neural set function extensions: Learning with discrete functions in high dimensions. In NeurIPS, 2022.
  42. Learning combinatorial optimization algorithms over graphs. In NeurIPS, 2017.
  43. The budgeted maximum coverage problem. Information processing letters, 70(1):39–45, 1999.
  44. Learning collaborative policies to solve NP-hard routing problems. In NeurIPS, 2021.
  45. Expected probabilistic hierarchies, 2024. URL https://openreview.net/forum?id=Q3Foe1fDjh.
  46. Attention, learn to solve routing problems! In ICLR, 2019.
  47. Classical coloring of graphs. Contemporary Mathematics, 352:1–20, 2004.
  48. Gradient-based neural dag learning. In ICLR, 2020.
  49. HardSATGEN: Understanding the difficulty of hard SAT formula generation and a strong structure-hardness-aware baseline. In KDD, 2023a.
  50. T2T: From distribution learning in training to gradient search in testing for combinatorial optimization. In NeurIPS, 2023b.
  51. Robust graph coloring for uncertain supply chain management. In HICSS, 2005.
  52. Neural combinatorial optimization with heavy decoder: Toward large scale generalization. In NeurIPS, 2023.
  53. An evolutionary algorithm for large scale set covering problems with application to airline crew scheduling. In Workshops on Real-World Applications of Evolutionary Computation, 2000.
  54. Reinforcement learning for combinatorial optimization: A survey. Computers & Operations Research, 134:105400, 2021.
  55. Facility location and covering problems. In Proc. of the 7th International Multiconference Information Society, volume 500, 2004.
  56. Can hybrid geometric scattering networks help solve the maximum clique problem? In NeurIPS, 2022.
  57. Unsupervised learning for solving the travelling salesman problem. In NeurIPS, 2023.
  58. A simple reward-free approach to constrained reinforcement learning. In ICML, 2022.
  59. Challenges and opportunities in deep reinforcement learning with graph neural networks: A comprehensive review of algorithms and applications. IEEE Transactions on Neural Networks and Learning Systems, 2023.
  60. Reinforcement learning for solving the vehicle routing problem. In NeurIPS, 2018.
  61. DropGNN: Random dropouts increase the expressiveness of graph neural networks. In NeurIPS, 2021.
  62. CombOptNet: Fit the right NP-hard problem by learning integer programming constraints. In ICML, 2021.
  63. OR-Tools v9.7, 2023. URL https://developers.google.com/optimization/.
  64. An optimal minimum spanning tree algorithm. Journal of the ACM (JACM), 49(1):16–34, 2002.
  65. Differentiation of blackbox combinatorial solvers. In ICLR, 2019.
  66. DIMES: A differentiable meta solver for combinatorial optimization problems. In NeurIPS, 2022.
  67. Multi-scale attributed node embedding. Journal of Complex Networks, 9(2):cnab014, 2021.
  68. On maximum coverage in the streaming model & application to multi-topic blog-watch. In SDM, 2009.
  69. Variational annealing on graphs for combinatorial optimization. In NeurIPS, 2023.
  70. Combinatorial optimization with physics-inspired graph neural networks. Nature Machine Intelligence, 4(4):367–377, 2022a.
  71. Graph coloring with physics-inspired graph neural networks. Physical Review Research, 4(4):043131, 2022b.
  72. Understanding dropout for graph neural networks. In TheWebConf (WWW), 2022.
  73. Concerning nonnegative matrices and doubly stochastic matrices. Pacific Journal of Mathematics, 21(2):343–348, 1967.
  74. Meta-sage: Scale meta-learning scheduled adaptation with guided exploration for mitigating scale shift on combinatorial optimization. In ICML, 2023.
  75. Straka, M. Poisson binomial distribution for python (github repository). https://github.com/tsakim/poibin, 2017.
  76. Revisiting sampling for combinatorial optimization. In ICML, 2023.
  77. DIFUSCO: Graph-based diffusion solvers for combinatorial optimization. In NeurIPS, 2023.
  78. Policy-based optimization: Single-step policy gradient method seen as an evolution strategy. Neural Computing and Applications, 35(1):449–467, 2023.
  79. Unsupervised learning for combinatorial optimization needs meta-learning. In ICLR, 2023.
  80. Unsupervised learning for combinatorial optimization with principled objective relaxation. In NeurIPS, 2022.
  81. Towards one-shot neural combinatorial solvers: Theoretical and empirical notes on the cardinality-constrained case. In ICLR, 2023.
  82. Wang, Y. H. On the number of successes in independent trials. Statistica Sinica, pp.  295–312, 1993.
  83. Distilling autoregressive models to obtain high-performance non-autoregressive solvers for vehicle routing problems with faster inference speed. In AAAI, 2024.
  84. Random constraint satisfaction: Easy generation of hard (satisfiable) instances. Artificial intelligence, 171(8-9):514–534, 2007.
  85. The robust coloring problem. European Journal of Operational Research, 148(3):546–558, 2003.
  86. Yannakakis, M. Expressing combinatorial optimization problems by linear programs. In Proceedings of the twentieth annual ACM symposium on Theory of computing, pp.  223–228, 1988.
  87. DeepACO: Neural-enhanced ant systems for combinatorial optimization. In NeurIPS, 2023a.
  88. Glop: Learning global partition and local construction for solving large-scale routing problems in real-time. In AAAI, 2024.
  89. Towards quantum machine learning for constrained combinatorial optimization: a quantum qap solver. In ICML, 2023b.
  90. Let the flows tell: Solving graph combinatorial optimization problems with gflownets. In NeurIPS, 2023.
  91. A fast minimum spanning tree algorithm based on k-means. Information Sciences, 295:1–17, 2015.
