Basic Performance Measurements of the Intel Optane DC Persistent Memory Module (1903.05714v3)

Published 13 Mar 2019 in cs.DC and cs.PF

Abstract: Scalable nonvolatile memory DIMMs will finally be commercially available with the release of the Intel Optane DC Persistent Memory Module (or just "Optane DC PMM"). This new nonvolatile DIMM supports byte-granularity accesses with access times on the order of DRAM, while also providing data storage that survives power outages. This work comprises the first in-depth, scholarly, performance review of Intel's Optane DC PMM, exploring its capabilities as a main memory device, and as persistent, byte-addressable memory exposed to user-space applications. This report details the technology's performance under a number of modes and scenarios, and across a wide variety of macro-scale benchmarks. Optane DC PMMs can be used as large memory devices with a DRAM cache to hide their lower bandwidth and higher latency. When used in this Memory (or cached) mode, Optane DC memory has little impact on applications with small memory footprints. Applications with larger memory footprints may experience some slow-down relative to DRAM, but are now able to keep much more data in memory. When used under a file system, Optane DC PMMs can result in significant performance gains, especially when the file system is optimized to use the load/store interface of the Optane DC PMM and the application uses many small, persistent writes. For instance, using the NOVA-relaxed NVMM file system, we can improve the performance of Kyoto Cabinet by almost 2x. Optane DC PMMs can also enable user-space persistence where the application explicitly controls its writes into persistent Optane DC media. In our experiments, modified applications that used user-space Optane DC persistence generally outperformed their file system counterparts. For instance, the persistent version of RocksDB performed almost 2x faster than the equivalent program utilizing an NVMM-aware file system.

Authors (12)
  1. Joseph Izraelevitz (6 papers)
  2. Jian Yang (505 papers)
  3. Lu Zhang (373 papers)
  4. Juno Kim (19 papers)
  5. Xiao Liu (402 papers)
  6. Amirsaman Memaripour (1 paper)
  7. Yun Joon Soh (4 papers)
  8. Zixuan Wang (83 papers)
  9. Yi Xu (304 papers)
  10. Subramanya R. Dulloor (5 papers)
  11. Jishen Zhao (24 papers)
  12. Steven Swanson (14 papers)
Citations (458)

Summary

  • The paper reveals that Intel Optane DC exhibits distinct latency and bandwidth profiles, with random read latency averaging 305 ns and significant read/write asymmetry.
  • The paper demonstrates that using Optane DC as a main memory extension with DRAM caching sustains performance for small memory footprints while larger applications face slowdowns.
  • The paper indicates that integrating Optane DC in persistent storage and App Direct modes, via optimized file systems and libraries, generates notable performance gains by reducing kernel and file system overhead.

Performance Evaluation of Intel Optane DC Persistent Memory Modules

The paper provides a comprehensive analysis of Intel's Optane DC Persistent Memory Modules (PMMs), characterizing their performance and potential impact on system architecture. It examines the behavior of Optane DC in three roles: as main memory behind a DRAM cache (Memory mode), as storage beneath a conventional or NVMM-aware file system, and as byte-addressable persistent memory managed directly by applications (App Direct mode).

Key Findings

  1. Latency and Bandwidth:
    • Optane DC exhibits an average random read latency of 305 ns, roughly three times higher than DRAM; sequential accesses show lower latency, suggesting internal buffering or caching. (A sketch of a typical pointer-chasing latency microbenchmark follows this list.)
    • Maximum read and write bandwidths stand at 39.4 GB/s and 13.9 GB/s, respectively, when fully utilizing six interleaved PMMs, revealing asymmetrical performance between reads and writes.
  2. Potential as Main Memory:
    • When used as a main memory extension with DRAM caching, Optane DC maintains performance for applications with small memory footprints. However, applications with large memory footprints experience slowdowns due to Optane DC’s intrinsic latency and bandwidth limitations.
  3. Use as Persistent Storage:
    • The integration of Optane DC into storage systems demonstrates performance gains over traditional SSDs. File systems optimized for Optane’s characteristics, such as NOVA-relaxed, show significant improvements in application-level performance, particularly with workloads involving small, persistent writes.
  4. Performance in App Direct Mode:
    • Enabling direct user-space persistence through libraries like PMDK allows applications like RocksDB to achieve substantial performance gains by circumventing traditional kernel and file system overheads.
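
Device latencies of this kind are typically measured with a pointer-chasing microbenchmark, in which each load depends on the value returned by the previous one so that prefetching and memory-level parallelism cannot hide the access time. The C sketch below illustrates only the general technique: the working-set size, permutation scheme, and iteration count are illustrative assumptions rather than the paper's actual measurement harness, and on a real PMM the buffer would be mapped from the device (for example, via a file on a DAX file system) instead of allocated from DRAM.

```c
/* Pointer-chasing latency sketch (illustrative only).
 * Working-set size, permutation scheme, and iteration count are assumptions,
 * not the paper's actual measurement harness. */
#define _POSIX_C_SOURCE 199309L
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N (1ull << 26)            /* 2^26 entries * 8 B = 512 MiB working set */

int main(void) {
    uint64_t *buf = malloc(N * sizeof *buf);
    if (!buf) return 1;

    /* Sattolo's algorithm builds a random single-cycle permutation, so the
     * chase visits every entry and each load depends on the previous one. */
    for (uint64_t i = 0; i < N; i++) buf[i] = i;
    for (uint64_t i = N - 1; i > 0; i--) {
        uint64_t j = (uint64_t)rand() % i;     /* j in [0, i-1] */
        uint64_t t = buf[i]; buf[i] = buf[j]; buf[j] = t;
    }

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);

    uint64_t idx = 0;
    const uint64_t iters = 1ull << 27;         /* number of dependent loads */
    for (uint64_t i = 0; i < iters; i++)
        idx = buf[idx];                        /* serialized, cache-missing loads */

    clock_gettime(CLOCK_MONOTONIC, &t1);
    double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec);
    printf("avg dependent-load latency: %.1f ns (sink=%llu)\n",
           ns / (double)iters, (unsigned long long)idx);
    free(buf);
    return 0;
}
```

Dividing the elapsed time by the number of dependent loads gives an average per-load latency, directly comparable to the 305 ns random-read figure reported for Optane DC.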

Practical and Theoretical Implications

The analysis indicates that Optane DC can serve as a versatile memory tier, providing persistence while delivering performance between that of DRAM and SSDs. This positions it as a valuable component in scenarios where large memory allocations are essential, enabling machines to keep far more persistent data in memory than is feasible with DRAM alone.

However, the paper also highlights limitations and challenges. The anticipated performance benefits require careful consideration of access patterns due to Optane DC’s unique latency and bandwidth profiles. Ensuring software compatibility and leveraging emerging persistent memory file systems will be crucial to fully exploit Optane DC's capabilities.
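
To make the App Direct path discussed above concrete, the sketch below uses PMDK's libpmem to map a file from a DAX-mounted NVMM file system and make a store durable entirely from user space, with no kernel involvement on the data path. The mount point and region size are placeholder assumptions, and the code is a minimal illustration of the programming model rather than the instrumented applications (such as the persistent RocksDB variant) evaluated in the paper.

```c
/* Minimal user-space persistence sketch using PMDK's libpmem.
 * The path is a placeholder assumption; error handling is minimal.
 * Build with: cc example.c -lpmem */
#include <libpmem.h>
#include <stdio.h>
#include <string.h>

int main(void) {
    const char *path = "/mnt/pmem/example";   /* file on a DAX-mounted fs */
    size_t mapped_len;
    int is_pmem;

    /* Create (if needed) and map a 4 KiB persistent region. */
    char *pmem = pmem_map_file(path, 4096, PMEM_FILE_CREATE, 0666,
                               &mapped_len, &is_pmem);
    if (pmem == NULL) {
        perror("pmem_map_file");
        return 1;
    }

    /* Ordinary store into the mapped region ... */
    const char *msg = "hello, persistent world";
    strcpy(pmem, msg);

    /* ... then make it durable from user space. pmem_persist flushes the
     * relevant cache lines; if the mapping is not real persistent memory,
     * fall back to an msync-based flush. */
    if (is_pmem)
        pmem_persist(pmem, strlen(msg) + 1);
    else
        pmem_msync(pmem, strlen(msg) + 1);

    pmem_unmap(pmem, mapped_len);
    return 0;
}
```

The contrast with the file-system path is that durability here costs only user-level cache-line flushes and fences rather than a system call per write, which is the overhead reduction that lets App Direct versions of applications outperform their file-system counterparts.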

Speculative Future Directions

Research into nonvolatile memory systems is poised to expand, focusing on optimizing existing data structures and algorithms specific to Optane DC's properties. As real-world integration progresses, addressing these challenges will likely lead to innovations in memory management and data persistence techniques. Further investigations into cache strategies and fine-grained memory management could yield solutions to mitigate bandwidth limitations, especially in write-heavy applications.

The current paper serves as a foundational contribution, inviting continued exploration into the complexities of integrating Optane DC into diverse computing environments, thus enhancing the understanding and utility of persistent memory technologies.
