Architectural and System Implications of CXL-enabled Tiered Memory (2503.17864v2)

Published 22 Mar 2025 in cs.AR and cs.ET

Abstract: Memory disaggregation is an emerging technology that decouples memory from traditional memory buses, enabling independent scaling of compute and memory. Compute Express Link (CXL), an open-standard interconnect technology, facilitates memory disaggregation by allowing processors to access remote memory through the PCIe bus while preserving the shared-memory programming model. This innovation creates a tiered memory architecture combining local DDR and remote CXL memory with distinct performance characteristics. In this paper, we investigate the architectural implications of CXL memory, focusing on its increased latency and performance heterogeneity, which can undermine the efficiency of existing processor designs optimized for (relatively) uniform memory latency. Using carefully designed micro-benchmarks, we identify bottlenecks such as limited hardware-level parallelism in CXL memory and unfair queuing in memory request handling, and we quantify their impact on DDR memory performance and inter-core synchronization. Our findings reveal that the disparity in memory tier parallelism can reduce DDR memory bandwidth by up to 81% under heavy loads. To address these challenges, we propose a Dynamic Memory Request Control mechanism, MIKU, that prioritizes DDR memory requests while serving CXL memory requests on a best-effort basis. By dynamically adjusting CXL request rates based on service time estimates, MIKU achieves near-peak DDR throughput while maintaining high performance for CXL memory. Our evaluation with micro-benchmarks and representative workloads demonstrates the potential of MIKU to enhance tiered memory system efficiency.
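
As a rough illustration of the kind of micro-benchmark the abstract alludes to, the C sketch below measures average load-to-use latency on a local DDR NUMA node versus a CXL-backed (typically CPU-less) NUMA node via dependent pointer chasing. It is not code from the paper: the node IDs (0 for DDR, 2 for the CXL expander), the 256 MiB working set, and the chase length are assumptions, and a rigorous measurement would additionally control prefetching, frequency scaling, and page placement.

/*
 * Minimal pointer-chasing latency sketch comparing a local DDR NUMA node
 * with a CXL-backed NUMA node. Node IDs, buffer size, and iteration count
 * are illustrative assumptions, not values from the paper.
 *
 * Build: gcc -O2 chase.c -lnuma -o chase
 */
#include <numa.h>
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>
#include <time.h>

#define BUF_BYTES   (256UL << 20)   /* 256 MiB working set (assumed) */
#define STRIDE      64              /* one cache line per hop */
#define ITERATIONS  (1UL << 24)     /* number of dependent loads timed */

/* Build a random cyclic permutation of cache-line-sized slots so the
 * hardware prefetcher cannot predict the next access. */
static void build_chain(uint64_t *buf, size_t slots) {
    size_t *order = malloc(slots * sizeof(size_t));
    for (size_t i = 0; i < slots; i++) order[i] = i;
    for (size_t i = slots - 1; i > 0; i--) {          /* Fisher-Yates shuffle */
        size_t j = (size_t)rand() % (i + 1);
        size_t t = order[i]; order[i] = order[j]; order[j] = t;
    }
    for (size_t i = 0; i < slots; i++)                /* link slot i -> i+1 */
        buf[order[i] * (STRIDE / sizeof(uint64_t))] =
            order[(i + 1) % slots] * (STRIDE / sizeof(uint64_t));
    free(order);
}

/* Chase the chain on the given NUMA node and return ns per dependent load. */
static double measure(int node) {
    size_t slots = BUF_BYTES / STRIDE;
    uint64_t *buf = numa_alloc_onnode(BUF_BYTES, node);
    if (!buf) { fprintf(stderr, "allocation on node %d failed\n", node); exit(1); }
    build_chain(buf, slots);

    struct timespec t0, t1;
    volatile uint64_t idx = 0;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (uint64_t i = 0; i < ITERATIONS; i++)
        idx = buf[idx];                               /* serialized, dependent loads */
    clock_gettime(CLOCK_MONOTONIC, &t1);

    numa_free(buf, BUF_BYTES);
    double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec);
    return ns / ITERATIONS;
}

int main(void) {
    if (numa_available() < 0) { fprintf(stderr, "no NUMA support\n"); return 1; }
    /* Node 0 = local DDR, node 2 = CXL expander: assumed topology, check numactl -H. */
    printf("DDR (node 0): %.1f ns/load\n", measure(0));
    printf("CXL (node 2): %.1f ns/load\n", measure(2));
    return 0;
}

Running several such chasers concurrently against each tier is one way to expose the limited hardware-level parallelism of CXL memory and its interference with DDR bandwidth that the paper reports; the paper's MIKU mechanism then throttles CXL request rates based on service-time estimates to protect DDR throughput.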

Authors (10)
  1. Yujie Yang
  2. Lingfeng Xiang
  3. Peiran Du
  4. Zhen Lin
  5. Weishu Deng
  6. Ren Wang
  7. Andrey Kudryavtsev
  8. Louis Ko
  9. Hui Lu
  10. Jia Rao
