INF^2: High-Throughput Generative Inference of Large Language Models using Near-Storage Processing

Published 14 Feb 2025 in cs.AR (arXiv:2502.09921v1)

Abstract: The growing memory and computational demands of LLMs for generative inference present significant challenges for practical deployment. One promising solution to address these challenges is offloading-based batched inference, which leverages host memory and disk as an extended memory hierarchy for GPUs. While this approach cost-effectively enables LLM inference, its performance is limited by substantial I/O overhead, primarily due to the large key-value (KV) cache sizes, which grow with batch size and LLM context window length. In this paper, we introduce INFerence-INFinity (INF2), a framework that boosts generative inference throughput using computational storage devices (CSDs). The core of INF2 is attention-near storage, which offloads memory-intensive self-attention operations to near-storage accelerators, significantly reducing traffic through the system interconnect. We also propose delayed KV cache writeback to hide storage write latency by delaying newly generated KV cache writes until the cache reaches sufficient size in system memory. Additionally, we introduce cooperative X-cache, a technique designed to further trade off the remaining memory capacity for storage bandwidth. Our methods effectively minimize idle time for computation, improving overall throughput. To demonstrate the effectiveness of our approach, INF2 has been implemented on PyTorch and evaluated on a real system. Our experiments show that INF2 achieves up to 3.46× throughput improvement compared to state-of-the-art baselines. We will open-source INF2 to facilitate broader adoption.
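
As a rough illustration of the delayed KV cache writeback idea described in the abstract, the following PyTorch-style sketch stages newly generated KV tensors in host memory and flushes them to storage only once a size threshold is reached, so many small writes are replaced by fewer large ones that can overlap with ongoing decode steps. The class name, threshold parameter, and file-based flush are hypothetical simplifications for illustration, not the authors' implementation.

import torch

class DelayedKVWriteback:
    """Stage new KV entries in host memory; write back to storage in bulk.

    Minimal sketch under assumed interfaces; real CSD offloading and
    asynchronous I/O are omitted for brevity.
    """

    def __init__(self, flush_threshold_bytes: int, storage_path: str):
        self.flush_threshold = flush_threshold_bytes
        self.storage_path = storage_path
        self.staged = []          # KV tensors held in host (CPU) memory
        self.staged_bytes = 0
        self.flush_id = 0

    def append(self, key: torch.Tensor, value: torch.Tensor) -> None:
        # Instead of writing each new KV pair to storage immediately,
        # move it to host memory and accumulate it in the staging buffer.
        k = key.to("cpu", non_blocking=True)
        v = value.to("cpu", non_blocking=True)
        self.staged.append((k, v))
        self.staged_bytes += k.numel() * k.element_size()
        self.staged_bytes += v.numel() * v.element_size()
        if self.staged_bytes >= self.flush_threshold:
            self.flush()

    def flush(self) -> None:
        # One large sequential write per flush rather than many small ones.
        if not self.staged:
            return
        torch.save(self.staged, f"{self.storage_path}.part{self.flush_id}")
        self.flush_id += 1
        self.staged = []
        self.staged_bytes = 0

In the paper's setting, this kind of batching is what allows storage write latency to be hidden behind subsequent computation instead of stalling the decode loop.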
