vAttention: Dynamic Memory Management for Serving LLMs without PagedAttention (2405.04437v2)

Published 7 May 2024 in cs.LG and cs.OS

Abstract: Efficient management of GPU memory is essential for high throughput LLM inference. Prior systems reserved KV-cache memory ahead of time, which resulted in wasted capacity due to internal fragmentation. Inspired by demand paging, vLLM proposed PagedAttention to enable dynamic memory allocation for KV-cache. This approach eliminates fragmentation and improves serving throughput. However, to be able to allocate physical memory dynamically, PagedAttention changes the layout of KV-cache from contiguous virtual memory to non-contiguous virtual memory. As a consequence, one needs to rewrite the attention kernels to support paging, and implement a memory manager in the serving framework. This results in both performance and programming overheads, as well as portability challenges in adopting state-of-the-art attention kernels. In this paper, we propose vAttention, a new approach for dynamic KV-cache memory management. In contrast to PagedAttention, vAttention stores KV-cache in contiguous virtual memory and leverages OS support for on-demand allocation of physical memory. vAttention thus enables one to use state-of-the-art attention kernels out of the box by adding support for dynamic allocation of physical memory without having to rewrite their code. We implement vAttention in the vLLM serving stack to show that it improves decode throughput by up to 1.99x over vLLM, and end-to-end serving throughput by up to 1.22x and 1.29x compared to the state-of-the-art PagedAttention-based kernels of FlashAttention and FlashInfer.

Exploring vAttention: Efficient Memory Management for LLMs

Overview of vAttention

Recent advancements in AI and machine learning have underscored the critical role of efficient memory management in serving LLMs. The system described in this paper, vAttention, addresses inefficiencies in earlier LLM memory management schemes, most notably PagedAttention. The paper presents vAttention as a technique that manages KV-cache memory dynamically while keeping its virtual memory allocation contiguous, reducing system complexity and improving execution speed.

The Drawbacks of PagedAttention

PagedAttention has been a popular approach for dynamically allocating memory in LLM inference. It divides the KV-cache into blocks that are allocated only as needed. Despite its clear benefits in reducing memory waste, the approach has several pitfalls, which the paper highlights:

  • Software complexity: PagedAttention requires changes both to attention kernels and to the memory manager in the serving framework, adding layers of complexity.
  • Rewriting of attention kernels: The non-contiguous virtual memory layout requires significant modifications to kernels originally written for contiguous memory.
  • Performance overhead: The paged layout forces the attention kernel to translate each logical token position through a block table, adding extra work on the hot path and potentially slowing down the whole process (illustrated in the sketch after this list).
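
To make the extra indirection concrete, here is a minimal, illustrative comparison (not vLLM's actual kernel code) of computing the address of a key vector under a contiguous layout versus a paged layout with a block table. The function names, the [token][head][head_dim] layout, and the block_table parameter are assumptions chosen for the example.

```cpp
#include <cstddef>

// Contiguous virtual memory: the address of token `t` is plain pointer math.
inline const float* key_contiguous(const float* k_cache, int t,
                                   int num_heads, int head_dim) {
    return k_cache + (size_t)t * num_heads * head_dim;
}

// Paged layout: the kernel must first consult a per-request block table that
// maps the logical block index to a physical block, then offset within it.
inline const float* key_paged(const float* k_pool, const int* block_table,
                              int t, int block_size,
                              int num_heads, int head_dim) {
    int physical_block = block_table[t / block_size];  // extra lookup per token
    int offset_in_block = t % block_size;
    return k_pool + ((size_t)physical_block * block_size + offset_in_block)
                      * num_heads * head_dim;
}
```

The paged variant performs an additional memory load (the block-table entry) plus integer division per token, which is the kind of overhead the paper attributes to PagedAttention-style kernels.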

By maintaining the concept of virtual contiguity, vAttention attempts to streamline these operations, thus avoiding the complexity and performance hits associated with PagedAttention.

How vAttention Works

vAttention reserves contiguous virtual memory for the KV-cache up front but allocates physical memory only on demand, leveraging existing OS and CUDA support rather than re-implementing paging inside the serving framework. Here’s how vAttention operates:

  • Dynamic Physical Allocation: Virtual memory is reserved up front for the largest possible KV-cache (maximum batch size and context length), while physical memory is attached to it dynamically as tokens are generated, avoiding upfront physical memory reservation.
  • Low-level System Utilization: vAttention uses the CUDA driver's low-level virtual memory operations to decouple virtual-address reservation from physical memory allocation, which preserves contiguous memory access patterns and eliminates the need for extensive changes to attention kernels (a minimal sketch of this mechanism follows the list).
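
As a rough illustration of this mechanism, the sketch below reserves one contiguous virtual-address range and then maps physical pages into it on demand using the CUDA driver's virtual memory management API (cuMemAddressReserve, cuMemCreate, cuMemMap, cuMemSetAccess). It is a minimal sketch under assumed data structures (KVCacheRegion, the per-request sizing), not the authors' implementation; error handling and page reclamation are omitted.

```cpp
#include <cuda.h>
#include <vector>
#include <cassert>

struct KVCacheRegion {
    CUdeviceptr base = 0;                               // contiguous virtual range
    size_t reserved = 0;                                // bytes of VA reserved up front
    size_t mapped = 0;                                  // bytes currently backed physically
    size_t page = 0;                                    // physical allocation granularity
    std::vector<CUmemGenericAllocationHandle> handles;  // one handle per mapped page
};

// Reserve virtual memory for the maximum KV-cache size, but back nothing yet.
void reserve(KVCacheRegion& r, size_t max_bytes, int device) {
    CUmemAllocationProp prop = {};
    prop.type = CU_MEM_ALLOCATION_TYPE_PINNED;
    prop.location.type = CU_MEM_LOCATION_TYPE_DEVICE;
    prop.location.id = device;
    cuMemGetAllocationGranularity(&r.page, &prop, CU_MEM_ALLOC_GRANULARITY_MINIMUM);
    r.reserved = ((max_bytes + r.page - 1) / r.page) * r.page;
    cuMemAddressReserve(&r.base, r.reserved, 0 /*alignment*/, 0 /*addr hint*/, 0);
}

// Grow the physically backed prefix of the region as the KV-cache fills up.
void ensure_backed(KVCacheRegion& r, size_t needed_bytes, int device) {
    CUmemAllocationProp prop = {};
    prop.type = CU_MEM_ALLOCATION_TYPE_PINNED;
    prop.location.type = CU_MEM_LOCATION_TYPE_DEVICE;
    prop.location.id = device;
    CUmemAccessDesc access = {};
    access.location = prop.location;
    access.flags = CU_MEM_ACCESS_FLAGS_PROT_READWRITE;

    while (r.mapped < needed_bytes && r.mapped < r.reserved) {
        CUmemGenericAllocationHandle h;
        cuMemCreate(&h, r.page, &prop, 0);                      // allocate one physical page
        cuMemMap(r.base + r.mapped, r.page, 0, h, 0);           // map it at the next offset
        cuMemSetAccess(r.base + r.mapped, r.page, &access, 1);  // make it readable/writable
        r.handles.push_back(h);
        r.mapped += r.page;
    }
    assert(needed_bytes <= r.mapped && "context exceeded the reserved virtual range");
}
```

Because the attention kernel only ever sees r.base as an ordinary contiguous pointer, existing kernels such as FlashAttention can run over this region unmodified, which is the portability benefit the paper emphasizes.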

Practical Implications and Performance

The shift to vAttention has tangible benefits:

  • Simpler integration and maintenance: Developers can use existing GPU kernels without modification, reducing the need for specialized knowledge and maintenance resources.
  • Reduced latency and higher throughput: Benchmarks in the paper show up to 1.99x higher decode throughput than vLLM, and up to 1.22x and 1.29x higher end-to-end serving throughput than the PagedAttention-based kernels of FlashAttention and FlashInfer, respectively.

The results reflect substantial potential for both improving LLM inference performance and simplifying the underlying software architecture.

Future Directions

While vAttention provides a robust framework for managing LLM memory efficiently, integrating it with even lower-level system operations or adapting it to diverse hardware architectures could yield further improvements. Additionally, the community might explore automatic tuning of the page size based on model requirements and workload characteristics to optimize performance further.

Conclusion

vAttention redefines dynamic memory management in LLM deployment, addressing the critical limitations of previous systems like PagedAttention. By effectively leveraging built-in system capabilities to manage memory demand dynamically, it significantly simplifies the LLM serving pipeline and boosts operational efficiency. This innovation not only enhances current LLM applications but also sets a foundational approach that can influence future developments in machine learning infrastructure.

Authors (5)
  1. Ramya Prabhu (2 papers)
  2. Ajay Nayak (2 papers)
  3. Jayashree Mohan (17 papers)
  4. Ramachandran Ramjee (20 papers)
  5. Ashish Panwar (8 papers)
Citations (12)