Looking Ahead to Avoid Being Late: Solving Hard-Constrained Traveling Salesman Problem (2403.05318v1)

Published 8 Mar 2024 in cs.AI and cs.LG

Abstract: Many real-world problems can be formulated as a constrained Traveling Salesman Problem (TSP). However, the constraints are often complex and numerous, making such TSPs challenging to solve. As the number of complicated constraints grows, traditional heuristic algorithms become time-consuming at avoiding illegitimate outcomes. Learning-based methods provide an alternative that handles constraints in a soft manner and supports GPU acceleration to generate solutions quickly. Nevertheless, the soft manner inevitably makes hard-constrained problems difficult for learning algorithms, and the conflict between legality and optimality may substantially degrade solution quality. To overcome this problem and provide an effective solution for hard constraints, we propose a novel learning-based method that uses looking-ahead information as a feature to improve the legality of TSP with Time Windows (TSPTW) solutions. In addition, we construct TSPTW datasets with hard constraints to accurately evaluate and benchmark the statistical performance of various approaches, which can serve the community for future research. In comprehensive experiments on diverse datasets, MUSLA outperforms existing baselines and shows potential for generalization.
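To make the core idea concrete: in TSPTW, a greedy policy that only checks the current node's time window can paint itself into a corner, leaving some unvisited node unreachable within its window. The sketch below illustrates a one-step look-ahead feasibility mask in that spirit. It is a minimal, hypothetical illustration and not the paper's MUSLA architecture; all function names, the Euclidean travel-time assumption, and the omission of a return-to-depot constraint are simplifications chosen here for clarity.

```python
import math

def dist(a, b):
    """Euclidean travel time between two 2-D points (an assumption here)."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def feasible_after(current, t, node, unvisited, coords, windows):
    """One-step look-ahead: after traveling current -> node, can every
    remaining node still (optimistically) be reached within its window?
    windows[i] = (earliest, latest); waiting until `earliest` is allowed."""
    arrive = max(t + dist(coords[current], coords[node]), windows[node][0])
    if arrive > windows[node][1]:
        return False  # node itself would violate its window
    for other in unvisited:
        if other == node:
            continue
        # optimistic check: travel straight from `node` to `other`
        if max(arrive + dist(coords[node], coords[other]),
               windows[other][0]) > windows[other][1]:
            return False  # choosing `node` strands `other`
    return True

def greedy_lookahead_tour(coords, windows):
    """Nearest-neighbor construction restricted to look-ahead-feasible moves.
    Returns a node order starting at depot 0, or None if the mask empties.
    Return-to-depot is ignored for simplicity."""
    n = len(coords)
    tour, t, cur = [0], 0.0, 0
    unvisited = set(range(1, n))
    while unvisited:
        cands = [v for v in unvisited
                 if feasible_after(cur, t, v, unvisited, coords, windows)]
        if not cands:
            return None  # no legal continuation under the look-ahead mask
        nxt = min(cands, key=lambda v: dist(coords[cur], coords[v]))
        t = max(t + dist(coords[cur], coords[nxt]), windows[nxt][0])
        tour.append(nxt)
        cur = nxt
        unvisited.remove(nxt)
    return tour
```

On a small instance where plain nearest-neighbor would visit the nearest node first and thereby miss a tight window elsewhere, the look-ahead mask rejects that move and produces a legal tour instead. A learned policy can use the same kind of look-ahead signal as an input feature rather than a hard mask, which is closer to the soft, trainable setting the abstract describes.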
