Test-Time Scaling in MT

Updated 22 December 2025
  • Test-Time Scaling (TTS) is a framework that adjusts the number of internal reasoning tokens during inference to enhance translation quality.
  • It employs distinct workflows such as direct translation, forced-reasoning extrapolation, and post-editing to manage a preset token budget for optimal performance.
  • Empirical results indicate that fine-tuning on in-domain data and careful budget selection significantly improve metrics like COMET and GEA.

Test-Time Scaling (TTS) Workflow

Test-Time Scaling (TTS) enables reasoning models for machine translation (MT) to dynamically allocate inference-time computation by controlling the number of internal reasoning steps, called “thinking” tokens, during decoding. This paradigm aims to leverage extra computation at inference—without modifying model parameters—to improve translation quality, particularly in tasks demanding non-trivial error correction and post-editing. The following sections provide a comprehensive technical account of TTS in MT, including its operational definition, core algorithms, formal reasoning depth model, necessary fine-tuning regimes, empirical findings, and operational recommendations, with all claims substantiated by (Li et al., 7 Oct 2025).

1. Definition and Operationalization

Test-Time Scaling (TTS) in the context of MT with Reasoning Models (RMs) is the allocation of additional inference-time "reasoning tokens" within a designated <think> ... </think> span during generation. The only hyperparameter is the thinking-token budget B, which limits the maximum number of tokens the model can emit within this reasoning span. During decoding:

  • At each step inside <think>, a logit processor counts tokens toward B.
  • When approximately 0.95·B tokens have been generated, the logit processor softly upweights the likelihood of emitting </think> or a newline to encourage termination.
  • At exactly B tokens, the processor forcibly inserts a newline and </think>, then switches to translation-mode decoding.

This framework enables systematic investigation of how increased internal computation correlates with translation quality and identifies the optimal inference depth for a given task, in direct analogy to reasoning budget control in models with a reasoning_effort parameter (e.g., Grok-3-Mini).
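
As a concrete illustration, this budget controller can be written as a Hugging Face-style LogitsProcessor. The sketch below is not the authors' implementation: the token IDs, the bias magnitude, and the assumption that the reasoning span begins immediately after the prompt are all illustrative.

import torch
from transformers import LogitsProcessor

class ThinkingBudgetProcessor(LogitsProcessor):
    # Sketch of the budget controller described above. Assumptions: the
    # reasoning span starts immediately after the prompt, </think> and the
    # newline are single tokens, and the bias value is illustrative.
    def __init__(self, budget, think_close_id, newline_id, prompt_len, bias=5.0):
        self.budget = budget
        self.think_close_id = think_close_id
        self.newline_id = newline_id
        self.prompt_len = prompt_len
        self.bias = bias

    def __call__(self, input_ids, scores):
        emitted = input_ids.shape[1] - self.prompt_len  # thinking tokens so far
        if emitted >= self.budget:
            # Hard stop at B: permit only </think> (the paper inserts a
            # newline first; collapsed here for brevity).
            forced = torch.full_like(scores, float("-inf"))
            forced[:, self.think_close_id] = 0.0
            return forced
        if emitted >= 0.95 * self.budget:
            # Soft stop near 0.95*B: upweight </think> and the newline.
            scores[:, self.think_close_id] += self.bias
            scores[:, self.newline_id] += self.bias
        return scores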

2. Scenario-Specific Algorithmic Workflows

TTS is implemented in three distinct experimental paradigms, each with dedicated inference logic.

2.1 Direct Translation

In this scenario, the model is prompted with the source text inside a <think> context, and decoding proceeds under budget B. The reasoning span ends when </think> is emitted, either naturally or forcibly at token B, after which the output switches to translation mode.

def translate_with_TTS(source_text, model, B):
    # Decode with a thinking-token budget B inside the <think> span.
    # `model.forward`, `select_token`, `upweight`, `is_end_of_translation`,
    # and `detokenize` are placeholders for the concrete decoding API.
    prompt = f"<think>\n{source_text}"
    tokens = []
    thinking_count = 0
    in_think = True
    while True:
        logits = model.forward(tokens, prompt)
        if in_think and thinking_count >= B:
            # Hard stop: forcibly close the reasoning span at exactly B tokens.
            tokens.extend(["\n", "</think>"])
            in_think = False
            continue
        if in_think and thinking_count >= 0.95 * B:
            # Soft stop: upweight </think> and newline to encourage termination.
            logits = upweight(logits, ["</think>", "\n"])
        next_token = select_token(logits)
        if in_think and next_token == "</think>":
            in_think = False
        elif in_think:
            thinking_count += 1
        tokens.append(next_token)
        if not in_think and is_end_of_translation(next_token):
            break
    return detokenize(tokens)

2.2 Forced-Reasoning Extrapolation

Here, even when the model attempts to terminate early, it is forced to continue reasoning by inserting additional tokens (e.g., "wait") after </think>, up to the capped budget, before emitting the translation.
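
A minimal sketch of this loop follows. It reuses the placeholder helpers of translate_with_TTS above and follows the common budget-forcing recipe of overriding an early </think> with a continuation token; the exact insertion mechanics are an assumption.

def translate_with_forced_reasoning(source_text, model, B):
    # Sketch: force reasoning to continue until the budget B is spent by
    # overriding any early </think> with a continuation token ("wait").
    # Helper functions are the same placeholders as in translate_with_TTS.
    prompt = f"<think>\n{source_text}"
    tokens, thinking_count, in_think = [], 0, True
    while True:
        logits = model.forward(tokens, prompt)
        if in_think and thinking_count >= B:
            tokens.extend(["\n", "</think>"])  # budget spent: close the span
            in_think = False
            continue
        next_token = select_token(logits)
        if in_think:
            if next_token == "</think>":
                next_token = "wait"  # suppress early termination, keep reasoning
            thinking_count += 1
        tokens.append(next_token)
        if not in_think and is_end_of_translation(next_token):
            break
    return detokenize(tokens)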

2.3 Post-Editing (Self-Correction)

TTS post-editing entails a two-stage pipeline: first, an initial translation draft is generated without TTS; second, the draft, optionally accompanied by a quality score, is supplied in the prompt for TTS refinement.

def post_edit_with_TTS(source_text, model, B, prompt_type):
    # Stage 1: draft translation without TTS (no reasoning span).
    draft = model.generate(source_text)
    # Stage 2: refine the draft with thinking budget B. The prompt body is
    # passed to translate_with_TTS, which prepends the <think> tag itself.
    if prompt_type == "QS":
        # Optionally include an external quality score in the prompt.
        quality = score_with_GEMINI(source_text, draft)
        edit_prompt = f"Source: {source_text}\nDraft: {draft}\nScore: {quality}"
    else:
        edit_prompt = f"Source: {source_text}\nDraft: {draft}"
    return translate_with_TTS(edit_prompt, model, B)

3. Mathematical Model of Reasoning Depth

Let S(d) denote the translation metric (e.g., COMET, GEA) achieved after d reasoning steps. The TTS optimization seeks the depth d* that maximizes S(d); equivalently, it halts when the marginal gain ΔS(d) = S(d) − S(d−1) drops below a threshold ε, or when the model self-terminates:

d^{*} = \arg\max_{d \in \{0, \ldots, B_{\max}\}} S(d)

In practice, S(d) is empirically monitored for B ∈ {0, 100, 200, …, 2000} and the reasoning plateau is detected.
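
A minimal budget-sweep helper consistent with this protocol (evaluate is a hypothetical function returning S(B) on validation data):

def find_budget_plateau(scores, eps=0.01):
    # Pick the smallest budget after which the marginal gain ΔS drops below
    # eps. `scores` maps each budget B to a validation metric S(B); the
    # default eps mirrors the 0.01-COMET threshold suggested in Section 6.
    budgets = sorted(scores)
    best = budgets[0]
    for prev, cur in zip(budgets, budgets[1:]):
        if scores[cur] - scores[prev] < eps:
            break  # plateau reached: stop increasing the budget
        best = cur
    return best

# Example sweep, mirroring B ∈ {0, 100, ..., 2000}:
# scores = {B: evaluate(model, dev_set, B) for B in range(0, 2001, 100)}
# B_star = find_budget_plateau(scores)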

4. Domain-Specific Fine-Tuning

General-purpose RMs do not realize significant TTS gains by default. To induce TTS responsiveness, models are fine-tuned on in-domain corpora with explicit chain-of-thought (CoT) annotations. For example, training on the MetaphorTrans English–Chinese dataset involves minimizing cross-entropy over concatenated sequences (a <think> ... </think> reasoning span followed by the translation) with teacher forcing on all tokens, a learning rate of 1×10⁻⁵, three epochs, and batch size 16. This protocol aligns a model's internal reasoning policy with the structure and token budget optimal for the target domain (i.e., d*).
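
A sketch of this recipe using the Hugging Face Trainer follows; the base model name, dataset fields, and sequence template are assumptions, while the hyperparameters follow the text.

from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

# Placeholder base RM; any CoT-capable causal LM would be substituted here.
model_name = "Qwen/Qwen3-1.7B"
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

def to_features(example):
    # Assumed dataset fields: concatenate the <think> reasoning span and the
    # translation; teacher-forcing cross-entropy is applied over all tokens.
    text = f"<think>\n{example['reasoning']}\n</think>\n{example['translation']}"
    return tok(text, truncation=True, max_length=2048)

# train_ds = metaphortrans_dataset.map(to_features)  # hypothetical loader

args = TrainingArguments(
    output_dir="tts-ft",
    learning_rate=1e-5,           # 1×10⁻⁵, as in the text
    num_train_epochs=3,
    per_device_train_batch_size=16,
)
trainer = Trainer(
    model=model,
    args=args,
    # train_dataset=train_ds,
    data_collator=DataCollatorForLanguageModeling(tok, mlm=False),
)
# trainer.train()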

5. Empirical Findings

5.1 General Models, Direct Translation

General models such as Qwen-3 and Cogito (0.6B–32B) show only minor quality improvements (ΔCOMET ≈ 0.01–0.02) as B increases from 0 to 100. Performance then plateaus; in Grok-3-Mini, high "reasoning_effort" yields a negligible or negative differential in GRB/GRF metrics.

5.2 Fine-Tuned Models

On MetaphorTrans, increasing B from 100 to 500 raises GEA from roughly 76 to 82; beyond 500, both reasoning length and translation quality saturate. On out-of-domain data, additional thinking tokens do not consistently yield quality gains.

5.3 Forced Extrapolation

Forcing reasoning depth beyond the model’s natural stopping point (e.g., inserting a "wait" after </think>) increases reasoning-chain length but universally degrades translation quality across all metric-model-budget configurations tested.

5.4 Post-Editing

Post-editing (B=0 initial draft, B=500–1000 refinement) is where TTS is most effective for general models. For Qwen-3 (1.7–14B), gains in GRB range from +0.6 to +1.2 when using B=500 in the refinement stage, especially when combined with quality scoring in the prompt.

6. Practical Guidelines for MT Pipelines

  • Model Selection: TTS in single-pass translation is not cost-effective for general RMs; domain-specific fine-tuning is required to realize TTS gains.
  • Budget Selection: Start with B ≈ 100–200 and sweep up to B ≈ 500–1000; set the budget at the plateau point where S(B) stops improving.
  • Stopping Criterion: Prefer the model’s own </think> emission, or halt when the marginal gain ΔS(d) falls below ε (e.g., 0.01 COMET).
  • Post-Editing: Use a two-stage process: draft (B = 0), then refine (B = 500–1000), optionally with a prompt-level quality score.
  • Monitoring: Plot S(B) against B on validation data and track actual reasoning-token use to confirm that the model is not forced far beyond d*; a minimal plotting sketch follows this list.
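
For the monitoring step, a minimal sketch, assuming scores and tokens_used map each budget to the validation metric and the reasoning tokens actually emitted, respectively:

import matplotlib.pyplot as plt

def plot_budget_curve(scores, tokens_used):
    # Plot the validation metric S(B) against the budget B, overlaying the
    # reasoning tokens the model actually emitted at each budget to spot
    # forcing beyond d*.
    budgets = sorted(scores)
    fig, ax_metric = plt.subplots()
    ax_metric.plot(budgets, [scores[b] for b in budgets], marker="o")
    ax_metric.set_xlabel("thinking-token budget B")
    ax_metric.set_ylabel("validation metric S(B)")
    ax_tokens = ax_metric.twinx()
    ax_tokens.plot(budgets, [tokens_used[b] for b in budgets],
                   linestyle="--", color="gray")
    ax_tokens.set_ylabel("actual reasoning tokens")
    plt.show()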

By following these TTS operational steps—domain-aligned fine-tuning, judicious thinking budget, and post-editing refinement—translation quality can be improved reliably without retraining or upscaling model parameters (Li et al., 7 Oct 2025).

References

  • Li et al., 7 Oct 2025.
