Multi-Decoder Attention Model with Embedding Glimpse for Solving Vehicle Routing Problems (2012.10638v1)

Published 19 Dec 2020 in cs.LG and cs.AI

Abstract: We present a novel deep reinforcement learning method to learn construction heuristics for vehicle routing problems. Specifically, we propose a Multi-Decoder Attention Model (MDAM) to train multiple diverse policies, which effectively increases the chance of finding good solutions compared with existing methods that train only one policy. A customized beam search strategy is designed to fully exploit the diversity of MDAM. In addition, we propose an Embedding Glimpse layer in MDAM based on the recursive nature of construction, which can improve the quality of each policy by providing more informative embeddings. Extensive experiments on six different routing problems show that our method significantly outperforms the state-of-the-art deep learning-based models.
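Below is a minimal, illustrative sketch (in PyTorch) of the multi-decoder construction idea described in the abstract; it is not the authors' implementation. A shared encoder produces node embeddings, several independent attention decoders each construct a tour, and the shortest tour across decoders is kept. The toy TSP setting, layer sizes, greedy decoding, and all names are assumptions; the paper additionally trains the decoders to be diverse and uses a customized beam search and an Embedding Glimpse layer, all of which are omitted here.

```python
# Sketch only: shared encoder + multiple decoder heads, each building its own tour.
import torch
import torch.nn as nn

class AttentionDecoder(nn.Module):
    """One decoder head: scores unvisited nodes given the current context."""
    def __init__(self, dim):
        super().__init__()
        self.q = nn.Linear(2 * dim, dim)   # context = [graph embedding, last-node embedding]
        self.k = nn.Linear(dim, dim)

    def forward(self, node_emb, last_idx, visited):
        graph_emb = node_emb.mean(dim=0)
        context = torch.cat([graph_emb, node_emb[last_idx]])
        scores = self.k(node_emb) @ self.q(context)          # (n,) compatibility scores
        scores = scores.masked_fill(visited, float("-inf"))  # mask already-visited nodes
        return scores.softmax(dim=0)                         # policy over the next node

class MultiDecoderModel(nn.Module):
    """Shared encoder plus several decoders (the 'multi-decoder' idea)."""
    def __init__(self, dim=64, n_decoders=3):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(2, dim), nn.ReLU(), nn.Linear(dim, dim))
        self.decoders = nn.ModuleList([AttentionDecoder(dim) for _ in range(n_decoders)])

    def construct_tours(self, coords):
        node_emb = self.encoder(coords)                 # (n, dim) shared node embeddings
        tours = []
        for dec in self.decoders:                       # each decoder constructs its own tour
            visited = torch.zeros(len(coords), dtype=torch.bool)
            tour, cur = [0], 0
            visited[0] = True
            while not visited.all():
                probs = dec(node_emb, cur, visited)
                cur = int(probs.argmax())               # greedy step (the paper uses beam search)
                visited[cur] = True
                tour.append(cur)
            tours.append(tour)
        return tours

def tour_length(coords, tour):
    pts = coords[tour + tour[:1]]                       # close the loop back to the start
    return (pts[1:] - pts[:-1]).norm(dim=1).sum().item()

if __name__ == "__main__":
    coords = torch.rand(10, 2)                          # toy 10-node TSP instance
    model = MultiDecoderModel()
    tours = model.construct_tours(coords)
    best = min(tours, key=lambda t: tour_length(coords, t))
    print("best tour:", best, "length:", round(tour_length(coords, best), 3))
```

Keeping the best of several decoder outputs is what gives the diversity benefit the abstract refers to: each head can specialize in a different construction pattern, so the minimum over heads tends to beat any single policy.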

Authors (4)
  1. Liang Xin (3 papers)
  2. Wen Song (24 papers)
  3. Zhiguang Cao (48 papers)
  4. Jie Zhang (847 papers)
Citations (131)
