Large Language Models to the Rescue: Reducing the Complexity in Scientific Workflow Development Using ChatGPT (2311.01825v2)

Published 3 Nov 2023 in cs.DC, cs.CL, and cs.HC

Abstract: Scientific workflow systems are increasingly popular for expressing and executing complex data analysis pipelines over large datasets, as they offer reproducibility, dependability, and scalability of analyses by automatic parallelization on large compute clusters. However, implementing workflows is difficult due to the involvement of many black-box tools and the deep infrastructure stack necessary for their execution. Simultaneously, user-supporting tools are rare, and the number of available examples is much lower than in classical programming languages. To address these challenges, we investigate the efficiency of LLMs, specifically ChatGPT, to support users when dealing with scientific workflows. We performed three user studies in two scientific domains to evaluate ChatGPT for comprehending, adapting, and extending workflows. Our results indicate that LLMs efficiently interpret workflows but achieve lower performance for exchanging components or purposeful workflow extensions. We characterize their limitations in these challenging scenarios and suggest future research directions.

Authors (7)
  1. Mario Sänger (6 papers)
  2. Ninon De Mecquenem (5 papers)
  3. Katarzyna Ewa Lewińska (2 papers)
  4. Vasilis Bountris (4 papers)
  5. Fabian Lehmann (21 papers)
  6. Ulf Leser (42 papers)
  7. Thomas Kosch (24 papers)
Citations (3)
