
Accelerating Diffusion-based Combinatorial Optimization Solvers by Progressive Distillation (2308.06644v2)

Published 12 Aug 2023 in cs.LG and cs.AI

Abstract: Graph-based diffusion models have shown promising results in generating high-quality solutions to NP-complete (NPC) combinatorial optimization (CO) problems. However, these models are often inefficient at inference due to the iterative nature of the denoising diffusion process. This paper proposes using progressive distillation to speed up inference by taking fewer steps (e.g., forecasting two steps ahead within a single step) during the denoising process. Our experimental results show that the progressively distilled model can perform inference 16 times faster with only 0.019% degradation in performance on the TSP-50 dataset.

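The speedup in the abstract comes from the progressive-distillation recipe: a student network is trained so that one of its denoising steps reproduces two consecutive teacher steps, and each such round halves the inference step count (four rounds give the reported 16x speedup). Below is a minimal, self-contained PyTorch sketch of one distillation round. The `Denoiser` network, the simplified deterministic update, and all hyperparameters are illustrative assumptions for exposition, not the paper's actual graph-based architecture or noise schedule.

```python
import torch
import torch.nn as nn

class Denoiser(nn.Module):
    """Toy stand-in for the paper's graph-based denoising network."""
    def __init__(self, dim=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim + 1, 64), nn.ReLU(), nn.Linear(64, dim)
        )

    def forward(self, x, t):
        # Condition on the scalar timestep by appending it as a feature.
        t_feat = torch.full((x.shape[0], 1), float(t))
        return self.net(torch.cat([x, t_feat], dim=-1))


def denoise_step(model, x, t, t_next):
    """One deterministic denoising step (simplified, DDIM-style update)."""
    eps = model(x, t)
    # Toy linear update; the real solver follows its diffusion schedule.
    return x - (t - t_next) * eps


def distill_round(teacher, student, steps=8, dim=16, iters=1000, lr=1e-3):
    """Train `student` so one of its steps matches two `teacher` steps."""
    opt = torch.optim.Adam(student.parameters(), lr=lr)
    ts = torch.linspace(1.0, 0.0, steps + 1)  # teacher's time grid
    for _ in range(iters):
        i = 2 * torch.randint(0, steps // 2, (1,)).item()
        t, t_mid, t_next = ts[i], ts[i + 1], ts[i + 2]
        x = torch.randn(32, dim)  # stand-in for a batch of noisy samples
        with torch.no_grad():
            # Two consecutive teacher steps define the one-step target.
            x_mid = denoise_step(teacher, x, t, t_mid)
            target = denoise_step(teacher, x_mid, t_mid, t_next)
        pred = denoise_step(student, x, t, t_next)  # single student step
        loss = nn.functional.mse_loss(pred, target)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return student


teacher = Denoiser()
student = Denoiser()
student.load_state_dict(teacher.state_dict())  # warm-start from the teacher
student = distill_round(teacher, student)      # student now needs steps // 2
```

In the full procedure, this round is repeated with the distilled student serving as the next round's teacher, halving the step count each time until the inference budget is reached.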
Authors (3)
  1. Junwei Huang (12 papers)
  2. Zhiqing Sun (35 papers)
  3. Yiming Yang (151 papers)