
Test-Time Tool Evolution (TTE)

Updated 17 January 2026
  • TTE's core contribution is to synthesize, verify, and evolve tools dynamically during inference, improving accuracy and efficiency on scientific benchmarks.
  • Test-Time Tool Evolution is a dynamic paradigm that creates and refines problem-specific computational tools on-the-fly, addressing the limitations of static tool libraries in scientific reasoning.
  • The approach employs both atomic and multi-agent frameworks to optimize tool reuse and scalability, achieving state-of-the-art performance across diverse cross-domain applications.

Test-Time Tool Evolution (TTE) is a paradigm that enables LLM-based agents to synthesize, verify, and evolve executable tools dynamically during inference, fundamentally addressing the limitations of static, pre-defined tool libraries in scientific reasoning. TTE regards computational tools not as fixed resources but as problem-driven artifacts, enabling agents to overcome rigidity, sparsity, heterogeneity, and long-tail incompleteness: challenges intrinsic to scientific domains, where tasks routinely require bespoke primitives. TTE has proved effective in both single-agent (atomic tool evolution) and multi-agent (tool-use mixture) frameworks, such as SciEvo (Lu et al., 12 Jan 2026) and TUMIX (Chen et al., 30 Sep 2025), achieving state-of-the-art accuracy and tool efficiency, with substantial implications for cross-domain adaptation and scalable scientific automation.

1. Problem Setting and Motivation

Modern LLM agents exhibit flexible reasoning capabilities yet lack the executable rigor necessary for scientific work. Static, manually curated tool libraries suffer from sparsity (scientific primitives scattered across non-standardized, domain-specific implementations) and long-tail incompleteness (novel problems lacking suitable computational solutions). This rigidity confines agents to passive selection rather than active discovery and creation of tools. TTE directly addresses these limitations by enabling test-time, on-the-fly synthesis, refinement, and validation of computational methods, mirroring the iterative methodology of scientific practice (Lu et al., 12 Jan 2026).

2. Formal Definition and Objective

Given a stream of scientific problems $\mathcal{P} = \{P_1, \ldots, P_T\}$ and an evolving atomic tool library $L_t$ available just before solving problem $P_t$, the central optimization is:

$$\max_{\{L_t\}_{t=1}^{T}} \sum_{t=1}^{T} \Bigl( \mathbb{I}\bigl(\text{Solved}(P_t, L_t)\bigr) - \lambda\,|L_t| \Bigr)$$

where $\mathbb{I}(\text{Solved}(P_t, L_t))$ is the correctness indicator, $\lambda > 0$ penalizes library size, and $|L_t|$ is the number of atomic tools (Lu et al., 12 Jan 2026). Optimizing this objective globally is intractable; TTE instead uses a greedy evolution strategy with no parameter fine-tuning, instantiated in two variants (a sketch of the greedy loop follows the list):

  • TTE-Zero: Starts from an empty library ($L_0 = \varnothing$); tools are created ab initio.
  • TTE-Adapt: Starts from a pre-defined source library ($L_0 = L_{\text{pre}}$) for cross-domain adaptation.
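A minimal sketch of this greedy loop, assuming hypothetical `solve` and `synthesize` callables that stand in for the paper's pipeline components:

```python
from typing import Callable, List

def greedy_tte(
    problems: List[str],
    initial_library: List[str],  # [] for TTE-Zero, L_pre for TTE-Adapt
    solve: Callable[[str, List[str]], bool],            # hypothetical solver
    synthesize: Callable[[str, List[str]], List[str]],  # hypothetical tool synthesis
    lam: float = 0.01,  # illustrative value for the size penalty lambda
) -> float:
    """Greedy approximation of max over {L_t} of
    sum_t [ I(Solved(P_t, L_t)) - lambda * |L_t| ]."""
    library = list(initial_library)
    score = 0.0
    for problem in problems:
        # Greedy step: evolve the library locally for the current problem
        # instead of optimizing over the whole problem stream.
        library += synthesize(problem, library)
        solved = solve(problem, library)
        score += float(solved) - lam * len(library)
    return score
```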

In multi-agent TTE, as implemented in TUMIX, each agent in the pool $\mathcal{S} = \{s_1, \ldots, s_K\}$ adopts a distinct tool-use configuration, and the agents collaboratively refine answers through rounds of message-passing and agent auto-optimization (Chen et al., 30 Sep 2025).

3. Algorithmic Pipeline and Components

A typical atomic TTE workflow consists of five tightly integrated stages (Lu et al., 12 Jan 2026); a sketch of the stage-2 retrieval decision follows the list:

  1. Structured Task Decomposition: Problem $P$ is decomposed into sub-operations $\{O_i\}$ by a Problem Analyzer.
  2. Dynamic Tool Retrieval: For each $O_i$, compute the cosine similarity between embeddings of its description $d_{O_i}$ and each tool's documentation $d_{T_j}$; retrieve if the similarity exceeds the threshold $\tau_{\mathrm{ret}}$, otherwise trigger synthesis.
  3. Generative Tool Synthesis: When no suitable tool exists, a candidate $T_{\mathrm{new}}$ is sampled using chain-of-thought reasoning over code components (signature, body, docstring, test).
  4. Verification and Refinement: Candidate tools undergo syntactic, runtime, and domain validation. Tools are decomposed into atomic units to maximize reuse, deduplicated via code similarity (threshold $\tau_{\mathrm{dup}}$), and seldom-used tools are pruned to meet the capacity constraint $C$.
  5. Runtime Execution: The evolved tool chain is executed to obtain the final solution.
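The stage-2 decision can be sketched as follows; the helper names and the threshold value are illustrative assumptions (the paper uses bge-m3 embeddings; its exact $\tau_{\mathrm{ret}}$ is not restated here):

```python
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    # Cosine similarity between two embedding vectors.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def retrieve_or_synthesize(op_emb, tool_embs, tools, tau_ret=0.7):
    """For a sub-operation O_i (embedding op_emb), reuse the best-matching
    tool if its documentation similarity clears tau_ret; otherwise fall
    through to generative tool synthesis. tau_ret=0.7 is illustrative."""
    if tools:
        sims = [cosine(op_emb, e) for e in tool_embs]
        best = int(np.argmax(sims))
        if sims[best] >= tau_ret:
            return tools[best], "retrieved"
    return None, "synthesize"
```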

In TUMIX, multi-agent TTE orchestrates $K$ heterogeneous tool-augmented agents in parallel, sharing intermediate answers and auto-optimizing agent configurations (pre-designed and LLM-proposed) for coverage and average accuracy, as defined by the combined score metric (Chen et al., 30 Sep 2025). Confidence-driven halting and majority-vote aggregation reduce unnecessary cost while preserving accuracy.
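A minimal sketch of round-wise refinement with confidence-driven halting and majority-vote aggregation; `llm_judge_confident` is a hypothetical stand-in for TUMIX's LLM-based judge:

```python
from collections import Counter
from typing import Callable, List

def tumix_rounds(
    agents: List[Callable[[List[str]], str]],          # each maps shared answers -> refined answer
    llm_judge_confident: Callable[[List[str]], bool],  # hypothetical halting judge
    max_rounds: int = 3,
) -> str:
    shared: List[str] = []
    for _ in range(max_rounds):
        # Each heterogeneous agent refines its answer given the shared pool.
        shared = [agent(shared) for agent in agents]
        # Confidence-driven halting: stop early once the judge deems the
        # answer pool sufficiently consistent, saving inference cost.
        if llm_judge_confident(shared):
            break
    # Majority-vote aggregation over the final answer pool.
    return Counter(shared).most_common(1)[0][0]
```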

4. Implementation Strategies and Verification

Empirical instantiations of TTE favor high-precision backbone models (GPT-4o, Qwen2.5-7B-Instruct, and GPT-3.5-turbo), with sampling temperature set to 0.3 and strict output formats (JSON/XML) enforced. Tool retrieval leverages dense code/documentation embeddings (bge-m3; re-ranking with bge-reranker-v2-m3), with deduplication via CodeBERT at similarity threshold $\tau_{\mathrm{dup}} = 0.8$. Execution proceeds in a Python sandbox, answer correctness is judged by GPT-4.1-nano to a $10^{-5}$ absolute tolerance, and library capacity is capped at $C = 500$ atomic tools (Lu et al., 12 Jan 2026). In multi-agent TTE (TUMIX), each agent is parameterized by its tool-use configuration $\theta_i$, tool-interaction budget ($R_{\mathrm{tool}} = 5$), and code time-limit ($\tau_{\mathrm{code}} = 60$ s), with round-wise message passing and adaptive halting by an LLM-based judge (Chen et al., 30 Sep 2025).
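The deduplication and capacity rules can be sketched as below; `code_sim` and `usage_count` are hypothetical stand-ins for the CodeBERT-based similarity and the tool-usage statistics, while $\tau_{\mathrm{dup}} = 0.8$ and $C = 500$ follow the reported settings:

```python
def maintain_library(library, candidate, code_sim, usage_count,
                     tau_dup=0.8, capacity=500):
    """Add `candidate` unless it duplicates an existing tool (code
    similarity >= tau_dup); if the library then exceeds `capacity`,
    prune the least-used tool. `code_sim(a, b)` and `usage_count(t)`
    are hypothetical stand-ins for the paper's components."""
    if any(code_sim(candidate, tool) >= tau_dup for tool in library):
        return library  # duplicate: keep the existing atomic tool
    library = library + [candidate]
    if len(library) > capacity:
        # Prune a seldom-used tool to respect the capacity cap C.
        library.remove(min(library, key=usage_count))
    return library
```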

5. Benchmarking and Evaluation Metrics

The SciEvo benchmark comprises 1,590 scientific reasoning tasks supported by 925 automatically evolved atomic tools, covering 25 sub-disciplines in Physics, Chemistry, Mathematics, and Materials (Lu et al., 12 Jan 2026). Stratified sampling, embedding-based clustering, and chain-of-thought synthesis provide coverage and diversity. Evaluation uses:

  • Accuracy (Acc): Fraction of correctly solved tasks.
  • Tool Reuse Rate (TRR@k): Proportion of tools used at least $k$ times (a computation sketch follows this list):

$$\mathrm{TRR}@k = \frac{|\{t \in \mathcal{T} : h(t) \ge k\}|}{|\mathcal{T}|}$$

where $h(t)$ is the usage count of tool $t$ over the evaluation run.

  • Cross-Domain Metrics: $\mathrm{TRR}_{\mathrm{evol}}@k$ for newly evolved tools and $\mathrm{TRR}_{\mathrm{trans}}@k$ for transferred tools.
  • In TUMIX: success rate, majority-vote accuracy, and inference cost (number of LLM calls, total tokens), measured on the HLE, GPQA Diamond, and AIME benchmarks (Chen et al., 30 Sep 2025).
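A minimal computation of TRR@k from per-tool usage counts $h(t)$:

```python
from typing import Dict

def trr_at_k(usage: Dict[str, int], k: int) -> float:
    """TRR@k = |{t in T : h(t) >= k}| / |T|, where `usage` maps each
    tool t to its usage count h(t) over the benchmark run."""
    if not usage:
        return 0.0
    return sum(1 for h in usage.values() if h >= k) / len(usage)

# Illustrative tool names and counts; TRR@2 = 3/4 = 0.75 here.
print(trr_at_k({"fft": 5, "ode_solve": 2, "unit_convert": 9, "rare_tool": 1}, k=2))
```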

6. Empirical Results and Comparative Performance

Atomic TTE achieves state-of-the-art performance: on SciBench, TTE-Zero attains 0.45 accuracy vs. 0.37 for the best baseline, KTCE; on SciEval, 0.30 vs. 0.24; on SciEvo, 0.62 vs. 0.56 (Lu et al., 12 Jan 2026). Tool reuse is maximized (TRR@1 ≈ 0.99, TRR@2 ≈ 0.94, TRR@5 ≈ 0.66, TRR@10 ≈ 0.41), while direct baselines exhibit heavy redundancy (KTCE TRR@1 = 0.31). Sub-goal-driven tool retrieval (“S+Tools”) consistently outperforms direct question querying (“Q+Tools”). In cross-domain adaptation, TTE-Adapt increases accuracy over the source library (Chemistry: 0.595 vs. 0.561; Physics: 0.618 vs. 0.585), mitigates negative transfer ($\mathrm{TRR}_{\mathrm{trans}}@1$ from 0.26 to 0.23), and consolidates new knowledge ($\mathrm{TRR}_{\mathrm{evol}}@1$ up to 0.32).

In multi-agent TTE (TUMIX), accuracy under equal cost improves by +3.55 points over the best baseline on Gemini-2.5-Pro (normalized average: 72.3% vs. 70.3%). Adaptive halting reduces per-round inference cost to approximately 49%, with no accuracy loss and sometimes a minor gain (+0.2 pts) (Chen et al., 30 Sep 2025). LLM auto-optimization of agent diversity further increases accuracy (+1.2%).

Method/Benchmark   HLE      GPQA     AIME     Norm. Avg.
Best Baseline      29.5%    86.9%    95.0%    70.3%
TUMIX              32.3%    87.9%    96.7%    72.3%
TUMIX+             34.1%    88.3%    96.7%    73.0%

7. Limitations and Comparative Context

Dynamic synthesis and verification introduce latency and computational overhead compared to static retrieval. Agent coding quality is contingent on backbone LLM capability; models below 7B parameters underperform in tool generation. Executing arbitrary generated code raises safety concerns, so robust sandboxing and formal verification are necessary for reliable deployment. TTE's distinct advance over prior test-time scaling methods and tool-augmented LLMs lies in dynamic tool synthesis, multi-modal agent diversity, iterative message passing, and agent auto-optimization for coverage and accuracy.

Future research avenues include hierarchical library lifecycle management, enhanced semantic and uncertainty-aware verification, expansion to vision-based and graph-analysis modalities, and meta-modeling for preemptive prediction of trivial tools (Lu et al., 12 Jan 2026). Tool diversity, as opposed to backbone-model diversity, is established as the primary driver of coverage and accuracy in multi-agent scaling (Chen et al., 30 Sep 2025). A plausible implication is an emerging paradigm for open-ended, cross-domain scientific automation and reasoning at scale.

