Lifelong Learning for Neural powered Mixed Integer Programming (2208.12226v3)

Published 24 Aug 2022 in math.OC and cs.LG

Abstract: Mixed Integer Programs (MIPs) are typically solved with the Branch-and-Bound algorithm. Recently, learning fast approximations that imitate the expert strong branching heuristic has gained attention due to its success in reducing the running time for solving MIPs. However, existing learning-to-branch methods assume that the entire training data is available in a single training session. This assumption is often untrue: when training data arrives in a continual fashion over time, existing techniques suffer from catastrophic forgetting. In this work, we study the hitherto unexplored paradigm of lifelong learning to branch on Mixed Integer Programs. To mitigate catastrophic forgetting, we propose LIMIP, which models an MIP instance as a bipartite graph and maps it to an embedding space using a bipartite Graph Attention Network. This rich embedding space avoids catastrophic forgetting through knowledge distillation and elastic weight consolidation, whereby we identify the parameters key to retaining efficacy and protect them from significant drift. We evaluate LIMIP on a series of NP-hard problems and establish that, compared to existing baselines, LIMIP is up to 50% better when confronted with lifelong learning.
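To make the two anti-forgetting mechanisms named in the abstract concrete, below is a minimal PyTorch sketch of one lifelong-learning training step combining an elastic weight consolidation (EWC) penalty with a knowledge-distillation term. This is an illustration of the general techniques, not LIMIP's actual implementation; the names (`model`, `old_model`, `fisher`, the loss weights) and the assumption that `batch` yields bipartite-graph features with expert strong-branching labels are all hypothetical placeholders.

```python
# Sketch only: standard EWC + knowledge distillation, as named in the abstract.
# Not the authors' code; model/data interfaces are simplified stand-ins.
import torch
import torch.nn.functional as F

def ewc_penalty(model, old_params, fisher):
    """Quadratic penalty anchoring parameters that were important (high Fisher
    information) on earlier tasks, so they drift less on new MIP distributions."""
    loss = 0.0
    for name, p in model.named_parameters():
        loss = loss + (fisher[name] * (p - old_params[name]).pow(2)).sum()
    return loss

def lifelong_step(model, old_model, batch, old_params, fisher,
                  lam_ewc=1.0, lam_kd=1.0, temperature=2.0):
    """One training step on a new task's branching data.

    `batch` is assumed to yield (features, expert_labels); in LIMIP the features
    would come from a bipartite graph attention network over the MIP's
    variable/constraint bipartite graph.
    """
    x, y = batch
    logits = model(x)                       # branching scores for candidate variables
    task_loss = F.cross_entropy(logits, y)  # imitate the strong branching expert

    # Knowledge distillation: keep the new model's outputs close to those of a
    # frozen copy trained on earlier tasks.
    with torch.no_grad():
        old_logits = old_model(x)
    kd_loss = F.kl_div(
        F.log_softmax(logits / temperature, dim=-1),
        F.softmax(old_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature ** 2

    return task_loss + lam_kd * kd_loss + lam_ewc * ewc_penalty(model, old_params, fisher)
```

In this formulation the Fisher-information estimate acts as a per-parameter stiffness: weights that mattered for earlier tasks are anchored strongly while unimportant ones remain free to adapt, which corresponds to the protection from significant drift described in the abstract.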

