
WfBench: Automated Generation of Scientific Workflow Benchmarks (2210.03170v1)

Published 6 Oct 2022 in cs.DC

Abstract: The prevalence of scientific workflows with high computational demands calls for their execution on various distributed computing platforms, including large-scale leadership-class high-performance computing (HPC) clusters. To handle the deployment, monitoring, and optimization of workflow executions, many workflow systems have been developed over the past decade. There is a need for workflow benchmarks that can be used to evaluate the performance of workflow systems on current and future software stacks and hardware platforms. We present a generator of realistic workflow benchmark specifications that can be translated into benchmark code to be executed with current workflow systems. Our approach generates workflow tasks with arbitrary performance characteristics (CPU, memory, and I/O usage) and with realistic task dependency structures based on those seen in production workflows. We present experimental results that show that our approach generates benchmarks that are representative of production workflows, and conduct a case study to demonstrate the use and usefulness of our generated benchmarks to evaluate the performance of workflow systems under different configuration scenarios.
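The abstract describes generating benchmark tasks with configurable CPU, memory, and I/O demands arranged in realistic dependency structures. As a rough illustration of what such a benchmark specification might look like, here is a minimal sketch that builds a layered task DAG with synthetic resource demands; the field names, units, and layered-DAG construction are illustrative assumptions, not the paper's actual WfBench format.

```python
import json
import random

def generate_benchmark_spec(num_tasks=10, max_width=4, seed=42):
    """Toy workflow benchmark spec: a layered DAG whose tasks carry
    synthetic CPU, memory, and I/O demands (illustrative fields only)."""
    rng = random.Random(seed)
    tasks, layers = [], []
    tid = 0
    while tid < num_tasks:
        # Pick how many tasks sit in this layer of the DAG.
        width = min(rng.randint(1, max_width), num_tasks - tid)
        layer = []
        for _ in range(width):
            # Each task depends on a random subset of the previous layer.
            if layers:
                k = rng.randint(1, len(layers[-1]))
                parents = [p["id"] for p in rng.sample(layers[-1], k)]
            else:
                parents = []
            layer.append({
                "id": f"task_{tid:04d}",
                "cpu_work_gflop": round(rng.uniform(1, 100), 2),
                "memory_mb": rng.choice([256, 512, 1024, 2048]),
                "io_read_mb": round(rng.uniform(0, 50), 2),
                "io_write_mb": round(rng.uniform(0, 50), 2),
                "parents": parents,
            })
            tid += 1
        layers.append(layer)
        tasks.extend(layer)
    return {"name": "synthetic-benchmark", "tasks": tasks}

spec = generate_benchmark_spec()
print(json.dumps(spec, indent=2))
```

Such a specification could then be translated into executable tasks (e.g., CPU/memory/I/O stress kernels) for a target workflow system, which is the role the paper's generated benchmark code plays.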

Authors (9)
  1. Tainã Coleman (18 papers)
  2. Henri Casanova (22 papers)
  3. Ketan Maheshwari (14 papers)
  4. Loïc Pottier (11 papers)
  5. Sean R. Wilkinson (11 papers)
  6. Justin Wozniak (4 papers)
  7. Frédéric Suter (32 papers)
  8. Mallikarjun Shankar (13 papers)
  9. Rafael Ferreira da Silva (31 papers)
Citations (12)