
LiveMCPBench: Scalable LLM Benchmark

Updated 5 November 2025
  • LiveMCPBench is a benchmark suite that evaluates LLM agents on real-world, multi-step tasks using 527 tools over 70 MCP servers.
  • It employs a novel two-stage task formulation and automated LLM-as-a-Judge evaluation to ensure reproducibility and high human alignment.
  • The benchmark drives research on dynamic tool use and multi-tool orchestration; its automated evaluation reaches an 81% agreement rate with expert human reviewers.

LiveMCPBench is a large-scale benchmark suite and ecosystem for evaluating LLM-based agents on complex, real-world tasks involving tool use in the Model Context Protocol (MCP) environment. Its principal goal is to enable reproducible, scalable, and ecologically valid agent benchmarking at scale, targeting multi-step, cross-domain problem solving with thousands of real MCP tools and live, dynamic tasks (Mo et al., 3 Aug 2025).

1. Scope and Design Motivation

LiveMCPBench addresses fundamental limitations of existing MCP benchmarks, which typically evaluate single-server or small-toolset scenarios with limited realism and scale. With the MCP ecosystem having grown beyond 10,000 servers, effective agent evaluation requires exposure to highly diverse, dynamic, and compositional tool landscapes. LiveMCPBench therefore provides a framework that reflects the operational complexity and dynamism faced by general-purpose agents deployed on production MCP infrastructure.

Key design objectives include:

  • Scale: Evaluation against 95 real-world tasks sourced from 70 live MCP servers and 527 tools.
  • Diversity: Task domains range across office automation, lifestyle, leisure, finance, travel, and shopping.
  • Realism: Tasks are designed for authentic utility, often requiring up-to-date information and cross-tool workflows.
  • Plug-and-play deployment: Eliminates external key dependencies, supporting easy and reproducible experiments for research communities.

2. Dataset and Tool Collection Construction

LiveMCPBench’s dataset construction features a two-stage process. Proposers—practitioners and CS students—formulate tasks inspired by daily user needs or industrial workflows, sometimes leveraging LLM ideation but strictly verifying execution viability on the actual MCP infrastructure. Validators review each task, ensuring that tool-chains are feasible and rejecting duplicates or lower-quality proposals.

Characteristics:

  • 95 challenging tasks, each annotated with “key points” (explicit sub-goal requirements for ground truth).
  • Tasks are highly compositional, long-horizon, and time-sensitive, often requiring orchestrated usage of several tools over multiple turns.
  • Task pool spans diverse real-world domains, as summarized:
Domain      Servers   Sample Tasks
Office      15        Spreadsheet, Word automation, email
Lifestyle   9         News, trend analysis, summarization
Leisure     8         Games, entertainment, hobby planning
Finance     13        Stock, crypto info extraction
Travel      10        Hotel/flight search, itinerary
Shopping    15        Product comparison, deal finding

LiveMCPTool, the toolset, is curated from public MCP registries. Starting with 5,588 candidates, all servers requiring private keys are excluded, yielding 70 servers and 527 executable tools. Tools are manually vetted and categorized, ensuring out-of-the-box operability and redundancy across task domains.
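
To make this concrete, a task entry can be pictured as a small record that pairs the natural-language instruction with its annotated key points and domain. The field names and the example task below are illustrative assumptions, not the benchmark's released schema:

```python
from dataclasses import dataclass, field

@dataclass
class LiveMCPTask:
    """Illustrative task record: an instruction plus its annotated key points."""
    task_id: str
    domain: str                  # e.g. "Office", "Finance", "Travel"
    instruction: str             # natural-language, often time-sensitive
    key_points: list[str]        # explicit sub-goals that serve as ground truth
    candidate_servers: list[str] = field(default_factory=list)

# Hypothetical example in the Travel domain (not an actual benchmark task).
example = LiveMCPTask(
    task_id="travel-017",
    domain="Travel",
    instruction="Find a round-trip flight and a 4-star hotel for next weekend, "
                "then write the itinerary to itinerary.xlsx.",
    key_points=[
        "A round-trip flight matching the requested dates is retrieved via a flight tool",
        "A 4-star (or better) hotel is retrieved via a hotel tool",
        "The file itinerary.xlsx is actually created and contains both results",
    ],
)
```

Validators then confirm that such a record can actually be completed with the curated LiveMCPTool servers before it enters the task pool.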

3. Automated Evaluation: LiveMCPEval

Benchmarking agents in time-varying, real-world tool settings necessitates robust, adaptive evaluation. LiveMCPEval is an LLM-as-a-Judge framework that automates the assessment of agent execution “trajectories” (step-by-step tool interactions and outputs) for every task in LiveMCPBench.

Features:

  • Binary success/failure metric per task: An agent's trajectory is successful if and only if all annotated key points are satisfied based on actions and outputs.
  • Adaptive to dynamic outputs (e.g., current news, dynamic results), emphasizing logical completeness and adherence to explicit requirements, not static answers.
  • Prompts enforce strict conformance (e.g., agents must actually create required files, and cannot use internal knowledge when tools are mandated).
  • Agreement rate with expert human reviewers for final outcome scoring is reported at 81%, validating evaluation reliability.

Core formula:

$\mathcal{O} = \text{Evaluator}(T, P, A, D)$

where $T$ = task description, $P$ = set of annotated key points, $A$ = agent trajectory, $D$ = tool descriptions, and $\mathcal{O}$ = binary task outcome.

Key point extraction may be manual or LLM-generated, enhancing scalability for novel tasks.
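
As a rough sketch, this evaluator can be realized as a single judge call over the task, key points, trajectory, and tool descriptions. The prompt wording and the judge_llm callable below are assumptions for illustration, not the released LiveMCPEval prompts:

```python
import json

JUDGE_PROMPT = """You are judging an agent's tool-use trajectory.
Task: {task}
Key points (all must be satisfied): {key_points}
Tool descriptions: {tools}
Trajectory (actions and outputs): {trajectory}
Tool outputs may be time-varying, so judge logical completeness against the
key points rather than comparing to a fixed static answer.
Reply with a JSON object: {{"success": true, "reasoning": "..."}} or
{{"success": false, "reasoning": "..."}}."""

def live_mcp_eval(task: str, key_points: list[str], trajectory: list[dict],
                  tool_descriptions: dict, judge_llm) -> bool:
    """O = Evaluator(T, P, A, D): success iff every annotated key point is met."""
    prompt = JUDGE_PROMPT.format(
        task=task,
        key_points=json.dumps(key_points),
        tools=json.dumps(tool_descriptions),
        trajectory=json.dumps(trajectory),
    )
    # judge_llm is any callable that returns the judge model's JSON reply as a string.
    verdict = json.loads(judge_llm(prompt))
    return bool(verdict["success"])
```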

4. Baseline Agent and Methodology

The MCP Copilot Agent—a ReACT-based, multi-step agent—serves as the canonical baseline in LiveMCPBench. The agent’s reasoning loop is modeled as a Partially Observable Markov Decision Process (POMDP), with states, observations, actions (tool exploration, retrieval, and execution), transitions, and reward structures corresponding to task completion and trajectory quality.

Agent workflow:

  • “Routing” stage: Selects candidate tools using semantic similarity based on server/tool metadata.
  • “Execution” stage: Invokes tools with dynamically determined parameters, interprets feedback, adapts plans, retries on failure, and synthesizes responses.
  • Adheres strictly to MCP parameter and invocation schemas, handling explicit parameterization and tool chaining.
  • Prompts and execution protocols are standardized and public, ensuring reproducibility.

Empirical observations indicate that top-performing agents (e.g., Claude models) dynamically explore broader sets of tools per task and execute more complex plans compared to baseline LLMs, which tend to under-explore.
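
A minimal sketch of the routing stage described above, assuming a generic embed function that maps text to a vector (the function name, metadata format, and top-k cutoff are assumptions, not the released agent code):

```python
import numpy as np

def route_tools(task: str, tool_metadata: dict[str, str], embed, top_k: int = 5) -> list[str]:
    """Rank candidate tools by cosine similarity between the task description
    and each tool's server/tool metadata, as in the agent's routing stage."""
    task_vec = np.asarray(embed(task), dtype=float)
    scores = {}
    for name, description in tool_metadata.items():
        tool_vec = np.asarray(embed(description), dtype=float)
        cosine = np.dot(task_vec, tool_vec) / (
            np.linalg.norm(task_vec) * np.linalg.norm(tool_vec) + 1e-9
        )
        scores[name] = float(cosine)
    # The execution stage would then try these candidates, adapting on failure.
    return sorted(scores, key=scores.get, reverse=True)[:top_k]
```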

5. Benchmarking Results: Performance and Error Analysis

Ten leading models, including Claude-Sonnet-4, OpenAI GPT-4.1, Gemini, DeepSeek, and Qwen, are benchmarked on LiveMCPBench, each limited to 30 interactive rounds per task.
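
The round cap can be pictured as a simple driver loop; the agent.step interface and action fields below are illustrative assumptions, not the MCP Copilot Agent's actual API:

```python
MAX_ROUNDS = 30  # each model is limited to 30 interactive rounds per task

def run_task(agent, instruction: str, max_rounds: int = MAX_ROUNDS) -> list[dict]:
    """Drive one agent-environment episode and collect the trajectory
    that LiveMCPEval later judges against the task's key points."""
    trajectory = []
    observation = instruction
    for _ in range(max_rounds):
        action = agent.step(observation)        # hypothetical: returns a tool call or final answer
        record = {"action": getattr(action, "name", str(action)), "output": None}
        trajectory.append(record)
        if getattr(action, "is_final", False):  # stop once the agent produces its answer
            break
        observation = action.execute()          # hypothetical: invoke the MCP tool, get feedback
        record["output"] = observation
    return trajectory
```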

Selected outcomes:

Model             Success Rate (%)   Notable Behaviors
Claude-Sonnet-4   78.95              Frequent multi-tool/route exploration, robust adaptation
Claude-Opus-4     70.53              Similar, but slightly less dynamic
Most other LLMs   30–50              Limited meta-tool learning, over-reliance on one tool

Further findings:

  • Expanding the tool pool raises the robustness requirement: strong agents remain resilient, whereas weaker models lose performance as the larger tool set introduces selection noise.
  • Agents’ Pareto profiles (cost vs. performance) reveal a near-linear tradeoff, informing model selection for deployment scenarios.
  • Error analysis delineates failure categories (Query, Retrieve, Tool, Other), providing actionable diagnostic insights for failure cases (see the aggregation sketch below).
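
A small sketch of how per-task outcomes and error labels can be rolled up into the reported metrics; the run record format is an assumption, not the benchmark's log schema:

```python
from collections import Counter

def summarize_runs(runs: list[dict]) -> dict:
    """Aggregate per-task results into a success rate and an error breakdown.

    Each run is assumed to look like:
      {"task_id": "...", "success": bool,
       "error_category": "Query" | "Retrieve" | "Tool" | "Other" | None}
    """
    total = len(runs)
    successes = sum(1 for r in runs if r["success"])
    failures = Counter(r["error_category"] for r in runs if not r["success"])
    return {
        "success_rate_pct": 100.0 * successes / total if total else 0.0,
        "error_breakdown": dict(failures),
    }
```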

6. Technical Innovations and Comparison

LiveMCPBench exhibits several distinguishing features within the MCP benchmarking landscape:

  • Comprehensive scale: 70 servers and 527 executable tools, versus at most 42 real tools (or 77 synthetic tools) in prior MCP testbeds.
  • Real, persistent tool interfacing: Avoids simulated/fake tools, eliminating “tool rot” and reproducibility issues from prior work.
  • Time-varying, multi-path, and compositional task structure: Solutions may involve a range of tool sequences, without penalizing creative routes that achieve all goals.
  • Automated, scalable LLM-based trajectory evaluation with validated human alignment.

Summary comparison with prior benchmarks:

Benchmark      Servers   Tools   Plug & Play   Task Type   Time-Varying   Eval
MCPBench       10        10      No            Real        No             Rule
MCP-RADAR      9         42      Yes           Real        No             Rule
MCPEval        12        77      No            Synthetic   Yes            LLM
LiveMCPBench   70        527     Yes           Real        Yes            LLM

This positioning demonstrates LiveMCPBench’s distinctiveness in scale, realism, and infrastructure compatibility relative to the state of the art.

7. Implications for Research and Deployment

LiveMCPBench provides a reproducible, scalable foundation for diagnosing, comparing, and improving LLM agent architectures targeting generalizable, real-world tool use. Its granular error codes, comprehensive logs, and automated assessment enable robust ablation studies, meta-tool learning research, and development of cost-efficient deployment strategies.

A plausible implication is that future agent research will increasingly require performance validation on plug-and-play, time-varying MCP benchmarks like LiveMCPBench to establish claims of dynamic, robust tool use. The combination of dynamic data, compositional workflow demands, and automated, high-agreement evaluation creates a unified proving ground for broad-spectrum model-agent advancement.

The code and data are slated for public release at https://icip-cas.github.io/LiveMCPBench (Mo et al., 3 Aug 2025).
