LAB-Bench: Measuring Capabilities of Language Models for Biology Research (2407.10362v3)

Published 14 Jul 2024 in cs.AI

Abstract: There is widespread optimism that frontier LLMs and LLM-augmented systems have the potential to rapidly accelerate scientific discovery across disciplines. Today, many benchmarks exist to measure LLM knowledge and reasoning on textbook-style science questions, but few if any benchmarks are designed to evaluate LLM performance on practical tasks required for scientific research, such as literature search, protocol planning, and data analysis. As a step toward building such benchmarks, we introduce the Language Agent Biology Benchmark (LAB-Bench), a broad dataset of over 2,400 multiple choice questions for evaluating AI systems on a range of practical biology research capabilities, including recall and reasoning over literature, interpretation of figures, access and navigation of databases, and comprehension and manipulation of DNA and protein sequences. Importantly, in contrast to previous scientific benchmarks, we expect that an AI system that can achieve consistently high scores on the more difficult LAB-Bench tasks would serve as a useful assistant for researchers in areas such as literature search and molecular cloning. As an initial assessment of the emergent scientific task capabilities of frontier LLMs, we measure performance of several against our benchmark and report results compared to human expert biology researchers. We will continue to update and expand LAB-Bench over time, and expect it to serve as a useful tool in the development of automated research systems going forward. A public subset of LAB-Bench is available for use at the following URL: https://huggingface.co/datasets/futurehouse/lab-bench
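
For readers who want to try the benchmark directly, the public subset linked above can be loaded with the Hugging Face `datasets` library. The sketch below is a minimal example under stated assumptions: the abstract confirms the dataset path (futurehouse/lab-bench) but not its internal layout, so the configuration name "LitQA2" and the "train" split are assumptions used for illustration, not confirmed details.

```python
# Minimal sketch: loading the public LAB-Bench subset from Hugging Face.
# Requires the `datasets` library (pip install datasets).
from datasets import load_dataset

# Assumption: each LAB-Bench task category is exposed as a named
# configuration of the dataset; "LitQA2" is used here as an example.
litqa = load_dataset("futurehouse/lab-bench", "LitQA2")

# Assumption: the examples live in a "train" split. Print the field
# names of one multiple-choice question rather than asserting a schema.
example = litqa["train"][0]
print(example.keys())
```

Inspecting the keys first, as shown, avoids hard-coding field names that may differ between task subsets.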

Authors (9)
  1. Jon M. Laurent (3 papers)
  2. Joseph D. Janizek (5 papers)
  3. Michael Ruzo (1 paper)
  4. Michaela M. Hinks (2 papers)
  5. Michael J. Hammerling (2 papers)
  6. Siddharth Narayanan (4 papers)
  7. Manvitha Ponnapati (4 papers)
  8. Andrew D. White (29 papers)
  9. Samuel G. Rodriques (10 papers)
Citations (13)