
ML Research Benchmark (2410.22553v1)

Published 29 Oct 2024 in cs.AI

Abstract: Artificial intelligence agents are increasingly capable of performing complex tasks across various domains. As these agents advance, there is a growing need to accurately measure and benchmark their capabilities, particularly in accelerating AI research and development. Current benchmarks focus on general machine learning tasks, but lack comprehensive evaluation methods for assessing AI agents' abilities in tackling research-level problems and competition-level challenges in the field of AI. We present the ML Research Benchmark (MLRB), comprising 7 competition-level tasks derived from recent machine learning conference tracks. These tasks span activities typically undertaken by AI researchers, including model training efficiency, pretraining on limited data, domain-specific fine-tuning, and model compression. This paper introduces a novel benchmark and evaluates it using agent scaffolds powered by frontier models, including Claude-3 and GPT-4o. The results indicate that the Claude-3.5 Sonnet agent performs best across our benchmark, excelling in planning and developing machine learning models. However, both tested agents struggled to perform non-trivial research iterations. We observed significant performance variations across tasks, highlighting the complexity of AI development and the challenges in creating versatile agent scaffolds. While current AI agents can successfully navigate complex instructions and produce baseline results, they fall short of the capabilities required for advanced AI research. The ML Research Benchmark provides a valuable framework for assessing and comparing AI agents on tasks mirroring real-world AI research challenges.
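
The abstract describes running agent scaffolds against competition-style research tasks and comparing their scores. The sketch below is a minimal, hypothetical illustration of what such an evaluation harness could look like; it is not the authors' code, and every name in it (MLRBTask, run_agent_on_task, evaluate, the toy agent and scorer) is an assumption introduced purely for illustration.

```python
# Hypothetical harness sketch: score several agent scaffolds on benchmark tasks.
# None of these names come from the MLRB paper; they are illustrative only.
from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class MLRBTask:
    """One competition-style task: instructions plus a scoring function."""
    name: str
    instructions: str
    score: Callable[[str], float]  # maps the agent's final output to a score


def run_agent_on_task(agent: Callable[[str], str], task: MLRBTask) -> float:
    """Run one agent scaffold on one task and score whatever it produces."""
    output = agent(task.instructions)  # the scaffold would plan, write code, train, and report
    return task.score(output)


def evaluate(agents: Dict[str, Callable[[str], str]],
             tasks: List[MLRBTask]) -> Dict[str, Dict[str, float]]:
    """Score every agent on every task; returns {agent_name: {task_name: score}}."""
    return {
        agent_name: {task.name: run_agent_on_task(agent, task) for task in tasks}
        for agent_name, agent in agents.items()
    }


if __name__ == "__main__":
    # Toy stand-ins: a "scaffold" that echoes the prompt and a scorer that only
    # checks that some output was produced.
    tasks = [
        MLRBTask("model-compression",
                 "Compress a small language model under a memory budget.",
                 score=lambda out: float(len(out) > 0)),
    ]
    agents = {"toy-agent": lambda prompt: f"baseline submission for: {prompt}"}
    print(evaluate(agents, tasks))
```

In a real setup of this kind, the per-task scoring function would encode the task's competition metric (e.g. perplexity under a data budget, or accuracy at a given model size), and the agent callable would wrap a full scaffold around a frontier model rather than a lambda.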

Authors (1)
  1. Matthew Kenney (2 papers)