Towards Experiment Execution in Support of Community Benchmark Workflows for HPC (2507.22294v1)
Abstract: A key hurdle is demonstrating the capability of compute resources when only a limited set of benchmarks is available. We propose workflow templates as a solution, offering adaptable designs for specific scientific applications. Our paper identifies common usage patterns for these templates, drawn from decades of HPC experience, including recent work with the MLCommons Science working group. We found that focusing on simple experiment management tools within the broader computational workflow improves adaptability, especially in education. This concept, which we term benchmark carpentry, is validated by two independent tools: Cloudmesh's Experiment Executor and Hewlett Packard Enterprise's SmartSim. Both frameworks, which have significant functional overlap, have been tested across various scientific applications, including cloud masking, earthquake prediction, simulation-AI/ML interactions, and the development of computational fluid dynamics surrogates.
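To make the experiment-management pattern concrete, here is a minimal self-contained Python sketch of what the abstract calls benchmark carpentry: expanding a grid of experiment parameters into one runnable script per permutation, in the spirit of how Cloudmesh's Experiment Executor generates batch scripts from a YAML config and a template. The template contents, parameter names, and values below are illustrative assumptions, not the actual API of Experiment Executor or SmartSim.

```python
# Sketch of a parameter-sweep experiment generator: take the Cartesian
# product of a parameter grid and render one batch script per point.
# All names (SCRIPT, grid, expand) are hypothetical, for illustration.
import itertools
from pathlib import Path
from string import Template

# Template for a single experiment run; $learning_rate and $epochs are
# placeholders filled in per permutation.
SCRIPT = Template(
    "#!/bin/bash\n"
    "python train.py --learning-rate $learning_rate --epochs $epochs\n"
)

# Hypothetical parameter grid to sweep over.
grid = {
    "learning_rate": [0.001, 0.01],
    "epochs": [10, 100],
}

def expand(grid):
    """Yield one dict per point in the Cartesian product of the grid."""
    keys = list(grid)
    for values in itertools.product(*(grid[k] for k in keys)):
        yield dict(zip(keys, values))

out = Path("experiments")
out.mkdir(exist_ok=True)
for i, params in enumerate(expand(grid)):
    # One generated script per permutation; a real tool would also
    # submit these to a scheduler such as SLURM and track their status.
    script = out / f"run_{i}.sh"
    script.write_text(SCRIPT.substitute(params))
    print(f"generated {script} with {params}")
```

Keeping the sweep logic this small is the point the paper makes about adaptability: the experiment manager stays simple and inspectable, while application-specific detail lives in the template it renders.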