DSCodeBench: A Realistic Benchmark for Data Science Code Generation (2505.15621v2)

Published 21 May 2025 in cs.SE

Abstract: We introduce DSCodeBench, a new benchmark designed to evaluate LLMs on complicated and realistic data science code generation tasks. DSCodeBench consists of 1,000 carefully constructed problems sourced from realistic GitHub problems across ten widely used Python data science libraries. Compared to the current state-of-the-art benchmark DS-1000, DSCodeBench offers a more challenging and representative testbed, longer code solutions, more comprehensive data science libraries, clearer and better structured problem descriptions, and stronger test suites. To construct DSCodeBench, we develop a robust pipeline that combines task scope selection, code construction, test case generation, and problem description synthesis. The process is paired with rigorous manual editing to ensure alignment and enhance evaluation reliability. Experimental results show that DSCodeBench exhibits robust scaling behavior, where larger models systematically outperform smaller ones, validating its ability to distinguish model capabilities. The best LLM we test, GPT-4o, has a pass@1 of 0.202, indicating that LLMs still have substantial room for improvement on realistic data science code generation tasks. We believe DSCodeBench will serve as a rigorous and trustworthy foundation for advancing LLM-based data science programming.

Summary

  • The paper introduces DSCodeBench, a realistic benchmark that uses complex GitHub-sourced problems and robust tests to evaluate large language models on data science code generation.
  • DSCodeBench pairs realistic tasks from GitHub with strong test suites (averaging roughly 200 tests per problem); even the best model tested, GPT-4o, reaches only a 0.202 pass@1 score, leaving substantial room for improvement.
  • The results highlight significant limitations of current LLMs on practical data science coding tasks, and the authors suggest future extensions such as code repair and more intricate real-world scenarios.

DSCodeBench: Evaluating LLMs on Realistic Data Science Code Generation

The paper introduces DSCodeBench, a benchmark designed to evaluate LLMs on data science code generation tasks that are both complex and realistic. It addresses limitations of existing benchmarks such as DS-1000, improving how model capabilities are represented and evaluated in more authentic data science settings.

Key Enhancements over Existing Benchmarks

  1. Realistic Task Representation: DSCodeBench is built on carefully curated problems sourced from GitHub, spanning ten widely used Python data science libraries such as NumPy, Pandas, and SciPy. It offers complex tasks rather than the simplified problems typically drawn from forums like Stack Overflow, so the required code reflects real-world programming complexity: DSCodeBench solutions average 22.5 lines of code, compared to 3.6 lines in DS-1000.
  2. Robust Evaluation Framework: DSCodeBench emphasizes evaluation rigor, providing test suites with approximately 200 tests per problem, far surpassing DS-1000's average of 2.1 tests (see the sketch after this list). The larger suites cover the behavior of generated code more thoroughly and better expose failures on corner cases, helping to distinguish model capabilities.
  3. Clear and Structured Problem Descriptions: Problem descriptions in DSCodeBench are written with precision and clarity, averaging 276 words and providing detailed context that prevents ambiguity. This gives LLMs the information needed to synthesize code that is aligned with the intended real-world task.
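
As a rough illustration of what such a test suite exercises (a sketch only, not the paper's actual harness; the reference_solution, candidate_solution, and tolerance below are hypothetical), a generated solution can be checked differentially against the reference implementation on many randomized inputs:

```python
# Illustrative sketch of differential testing for one DSCodeBench-style problem.
# reference_solution, candidate_solution, and the tolerance are hypothetical;
# the paper's actual test generators and harness may differ.
import numpy as np

def reference_solution(x: np.ndarray) -> np.ndarray:
    """Ground-truth implementation (here: column-wise z-score normalization)."""
    return (x - x.mean(axis=0)) / x.std(axis=0)

def candidate_solution(x: np.ndarray) -> np.ndarray:
    """Stand-in for the LLM-generated code under evaluation."""
    mu, sigma = x.mean(axis=0), x.std(axis=0)
    return (x - mu) / sigma

def run_test_suite(n_tests: int = 200, seed: int = 0) -> float:
    """Feed randomized inputs to both implementations and return the pass rate."""
    rng = np.random.default_rng(seed)
    passed = 0
    for _ in range(n_tests):
        x = rng.normal(size=(rng.integers(2, 50), rng.integers(1, 10)))
        try:
            if np.allclose(candidate_solution(x), reference_solution(x), atol=1e-8):
                passed += 1
        except Exception:
            pass  # crashes or shape errors count as failures
    return passed / n_tests

print(f"pass rate: {run_test_suite():.2f}")
```

Differential checks of this kind scale easily to hundreds of tests per problem, which is what allows DSCodeBench to surface corner-case failures that a handful of hand-written assertions would miss.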

Experimental Insights

The benchmark was used to evaluate ten state-of-the-art LLMs, spanning both open-source and commercial models. GPT-4o was the top performer with a pass@1 score of 0.202, underscoring the benchmark's difficulty. Larger models systematically outperformed smaller ones, demonstrating that DSCodeBench exhibits consistent scaling behavior and can reliably distinguish model capabilities.

Even the strongest commercial model, GPT-4o, struggled, reflecting the benchmark's complexity. Success rates are markedly lower than on simpler benchmarks such as DS-1000, underscoring how much improvement is still needed for realistic data science code generation.
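
For reference, pass@k is conventionally reported with the unbiased estimator of Chen et al. (2021); the sketch below shows that computation. Whether DSCodeBench samples multiple completions per problem or scores a single greedy completion for pass@1 is not specified here, so the sampling setup is an assumption.

```python
# Unbiased pass@k estimator (Chen et al., 2021). Whether DSCodeBench samples
# several completions per problem or a single greedy one is an assumption here.
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Probability that at least one of k sampled solutions passes,
    given n generated samples of which c passed the full test suite."""
    if n - c < k:
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

def benchmark_pass_at_k(results: list[tuple[int, int]], k: int = 1) -> float:
    """Average pass@k over problems; results holds (n_samples, n_correct) pairs."""
    return sum(pass_at_k(n, c, k) for n, c in results) / len(results)

# Hypothetical example: three problems, 10 samples each.
print(benchmark_pass_at_k([(10, 2), (10, 0), (10, 5)], k=1))  # ~0.233
```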

Implications and Future Outlook

DSCodeBench sets a rigorous foundation for assessing LLMs on data science programming tasks. Its challenging tasks and robust evaluation framework align closely with real-world scenarios, pushing models beyond basic capabilities. The results imply substantial room for improvement before LLMs reach functional reliability and practical applicability in complex coding environments.

The paper also points to extending DSCodeBench with code repair tasks and more intricate real-world scenarios, which would widen its applicability. Such extensions could help shape future research in AI-driven data science, steering the field toward tools that genuinely augment real-world programming work.

In conclusion, DSCodeBench shifts the evaluation of LLMs toward the nuanced complexities of real data science practice. Its results are likely to stimulate further research and development toward more reliable AI systems for programming.