
SpecInF: Exploiting Idle GPU Resources in Distributed DL Training via Speculative Inference Filling (2503.02550v3)

Published 4 Mar 2025 in cs.DC

Abstract: Deep Learning (DL), especially with LLMs, brings benefits to various areas. However, DL training systems often leave GPU resources idle due to factors such as resource allocation and collective communication. To improve GPU utilization, we present SpecInF, which adopts a Speculative Inference Filling method to exploit idle GPU resources. It collocates each primary training instance with additional inference instances on the same GPU, detects training bubbles, and adaptively fills them with online or offline inference workloads. Our results show that SpecInF effectively enhances GPU utilization under mainstream parallel training modes, delivering up to 14$\times$ higher offline inference throughput than TGS and a 67\% reduction in online inference p95 latency compared to MPS, while guaranteeing collocated training throughput.
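
The abstract only sketches the mechanism, so here is a minimal, illustrative Python sketch of the bubble-filling idea: when a training step exposes an idle window (e.g., during collective communication), queued inference work is scheduled into it, with online requests drained first to protect their latency. All names (BubbleFiller, fill, the 30 ms bubble) are assumptions made for illustration and are not SpecInF's actual implementation.

```python
# Illustrative sketch only; names and timings are hypothetical, not from SpecInF.
import time
from collections import deque


class BubbleFiller:
    """Spends estimated training bubbles on queued inference work."""

    def __init__(self, online_queue, offline_queue, safety_margin_ms=2.0):
        self.online_queue = online_queue      # latency-sensitive requests
        self.offline_queue = offline_queue    # throughput-oriented batches
        self.safety_margin_ms = safety_margin_ms

    def fill(self, bubble_ms, run_inference):
        """Fill an estimated idle window of bubble_ms with inference.

        Online requests are served first to protect p95 latency;
        remaining budget goes to offline batches.
        """
        budget = bubble_ms - self.safety_margin_ms
        while budget > 0:
            if self.online_queue:
                req = self.online_queue.popleft()
            elif self.offline_queue:
                req = self.offline_queue.popleft()
            else:
                break
            start = time.perf_counter()
            run_inference(req)                # e.g., one small decode step
            budget -= (time.perf_counter() - start) * 1e3


if __name__ == "__main__":
    # Toy usage: pretend each training step exposes a 30 ms communication bubble.
    online = deque(f"online-{i}" for i in range(3))
    offline = deque(f"offline-{i}" for i in range(10))
    filler = BubbleFiller(online, offline)

    def fake_inference(req):
        time.sleep(0.005)                     # stand-in for one inference kernel
        print("served", req)

    for step in range(3):
        # ... training forward/backward would run here ...
        filler.fill(bubble_ms=30.0, run_inference=fake_inference)
```

In a real collocation setting the idle window would be detected from the training runtime (e.g., around communication phases) rather than hard-coded, and inference would run as separate GPU work; the sketch only shows the scheduling policy of prioritizing online over offline requests within a bounded time budget.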
