Proceedings of the 2015 International Workshop on the Lustre Ecosystem: Challenges and Opportunities (1506.05323v1)

Published 17 Jun 2015 in cs.DC

Abstract: The Lustre parallel file system has been widely adopted by high-performance computing (HPC) centers as an effective system for managing large-scale storage resources. Lustre achieves unprecedented aggregate performance by parallelizing I/O over file system clients and storage targets at extreme scales. Today, 7 of the 10 fastest supercomputers in the world use Lustre for high-performance storage. To date, Lustre development has focused on improving the performance and scalability of large-scale scientific workloads. In particular, large-scale checkpoint storage and retrieval, which is characterized by bursty I/O from coordinated parallel clients, has been the primary driver of Lustre development over the last decade. With the advent of extreme-scale computing and Big Data computing, many HPC centers are seeing increased user interest in running diverse workloads that place new demands on Lustre. In March 2015, the International Workshop on the Lustre Ecosystem: Challenges and Opportunities was held in Annapolis, Maryland, at the Historic Inns of Annapolis Governor Calvert House. This workshop series is intended to help explore improvements in the performance and flexibility of Lustre for supporting diverse application workloads. The 2015 workshop was the inaugural edition, and its goal was to initiate a discussion on the open challenges associated with enhancing Lustre for diverse applications, the technological advances necessary, and the associated impacts on the Lustre ecosystem. The workshop program featured a day of tutorials and a day of technical paper presentations.