
KramaBench: A Benchmark for AI Systems on Data-to-Insight Pipelines over Data Lakes (2506.06541v1)

Published 6 Jun 2025 in cs.DB, cs.AI, and cs.MA

Abstract: Constructing real-world data-to-insight pipelines often involves data extraction from data lakes, data integration across heterogeneous data sources, and diverse operations from data cleaning to analysis. The design and implementation of data science pipelines require domain knowledge, technical expertise, and even project-specific insights. AI systems have shown remarkable reasoning, coding, and understanding capabilities. However, it remains unclear to what extent these capabilities translate into successful design and execution of such complex pipelines. We introduce KRAMABENCH: a benchmark composed of 104 manually-curated real-world data science pipelines spanning 1700 data files from 24 data sources in 6 different domains. We show that these pipelines test the end-to-end capabilities of AI systems on data processing, requiring data discovery, wrangling and cleaning, efficient processing, statistical reasoning, and orchestrating data processing steps given a high-level task. Our evaluation tests 5 general models and 3 code generation models using our reference framework, DS-GURU, which instructs the AI model to decompose a question into a sequence of subtasks, reason through each step, and synthesize Python code that implements the proposed design. Our results on KRAMABENCH show that, although the models are sufficiently capable of solving well-specified data science code generation tasks, when extensive data processing and domain knowledge are required to construct real-world data science pipelines, existing out-of-the-box models fall short. Progress on KramaBench represents crucial steps towards developing autonomous data science agents for real-world applications. Our code, reference framework, and data are available at https://github.com/mitdbg/KramaBench.
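The abstract describes DS-GURU as a three-stage loop: decompose the question into subtasks, reason through each step, and synthesize Python code implementing the design. A minimal sketch of that control flow, with stubbed stages in place of the LLM calls the real framework would make (all function names here are illustrative, not from the paper's codebase):

```python
def decompose(task):
    """Stage 1: split a high-level question into ordered subtasks.
    The real framework prompts an LLM; this stub returns a fixed plan."""
    return ["discover relevant files", "clean and join data",
            "compute the requested statistic"]

def reason(subtask):
    """Stage 2: produce a short design note for one subtask (stubbed)."""
    return f"# Step: {subtask}"

def synthesize(notes):
    """Stage 3: emit Python source implementing the design (stubbed)."""
    return "\n".join(notes) + "\nresult = 'pipeline output'"

def run_ds_guru(task):
    """Run the decompose -> reason -> synthesize -> execute loop."""
    subtasks = decompose(task)
    notes = [reason(s) for s in subtasks]
    code = synthesize(notes)
    env = {}
    exec(code, env)  # execute the generated pipeline code
    return env["result"]
```

Calling `run_ds_guru("What is the mean rainfall?")` returns the stubbed `'pipeline output'`; in the actual framework, each stage would be an LLM invocation and the executed code would operate on files in the data lake.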

Authors (19)
  1. Eugenie Lai (2 papers)
  2. Gerardo Vitagliano (5 papers)
  3. Ziyu Zhang (35 papers)
  4. Sivaprasad Sudhir (2 papers)
  5. Om Chabra (3 papers)
  6. Anna Zeng (4 papers)
  7. Anton A. Zabreyko (3 papers)
  8. Chenning Li (8 papers)
  9. Ferdi Kossmann (4 papers)
  10. Jialin Ding (15 papers)
  11. Jun Chen (376 papers)
  12. Markos Markakis (4 papers)
  13. Matthew Russo (12 papers)
  14. Weiyang Wang (36 papers)
  15. Ziniu Wu (20 papers)
  16. Michael J. Cafarella (2 papers)
  17. Lei Cao (60 papers)
  18. Samuel Madden (56 papers)
  19. Tim Kraska (78 papers)
