Papers
Topics
Authors
Recent
Gemini 2.5 Flash
Gemini 2.5 Flash
51 tokens/sec
GPT-4o
11 tokens/sec
Gemini 2.5 Pro Pro
52 tokens/sec
o3 Pro
5 tokens/sec
GPT-4.1 Pro
10 tokens/sec
DeepSeek R1 via Azure Pro
33 tokens/sec
2000 character limit reached

One Token to Fool LLM-as-a-Judge (2507.08794v1)

Published 11 Jul 2025 in cs.LG and cs.CL

Abstract: Generative reward models (also known as LLMs-as-judges), which use LLMs to evaluate answer quality, are increasingly adopted in reinforcement learning with verifiable rewards (RLVR). They are often preferred over rigid rule-based metrics, especially for complex reasoning tasks involving free-form outputs. In this paradigm, an LLM is typically prompted to compare a candidate answer against a ground-truth reference and assign a binary reward indicating correctness. Despite the seeming simplicity of this comparison task, we find that generative reward models exhibit surprising vulnerabilities to superficial manipulations: non-word symbols (e.g., ":" or ".") or reasoning openers like "Thought process:" and "Let's solve this problem step by step." can often lead to false positive rewards. We demonstrate that this weakness is widespread across LLMs, datasets, and prompt formats, posing a serious threat for core algorithmic paradigms that rely on generative reward models, such as rejection sampling, preference optimization, and RLVR. To mitigate this issue, we introduce a simple yet effective data augmentation strategy and train a new generative reward model with substantially improved robustness. Our findings highlight the urgent need for more reliable LLM-based evaluation methods. We release our robust, general-domain reward model and its synthetic training data at https://huggingface.co/sarosavo/Master-RM and https://huggingface.co/datasets/sarosavo/Master-RM.

Summary

  • The paper identifies vulnerabilities in LLM-as-a-judge, demonstrating that simple tokens can induce up to 90% false positive rates in evaluation pipelines.
  • It provides a comprehensive empirical evaluation across multiple models and benchmarks, underlining the systemic risk of superficial triggers in reinforcement learning.
  • The study introduces a data augmentation strategy that fine-tunes reward models to achieve near-zero false positives, significantly enhancing model robustness.

Analysis of "One Token to Fool LLM-as-a-Judge" (2507.08794)

"One Token to Fool LLM-as-a-Judge" presents a systematic paper of vulnerabilities in LLMs when used as generative reward models (LLM-as-a-judge) in reinforcement learning with verifiable rewards (RLVR). The authors demonstrate that these models are highly susceptible to superficial manipulations—such as non-word symbols or generic reasoning openers—which can trigger false positive reward signals, thereby undermining the reliability of LLM-based evaluation pipelines.

Core Contributions

The paper makes several key contributions:

  • Identification of Systemic Vulnerabilities: The authors show that trivial responses (e.g., ":", ".", "Thought process:", "Solution") can elicit positive rewards from a wide range of LLM judges, including state-of-the-art commercial and open-source models. This phenomenon is robust across datasets, prompt formats, and languages, and is not mitigated by inference-time strategies such as chain-of-thought prompting or majority voting.
  • Comprehensive Empirical Evaluation: The paper evaluates both general-purpose LLMs (e.g., GPT-4o, Claude-4, LLaMA3, Qwen2.5) and specialized reward models (e.g., Omni-Judge, General-Verifier, Multi-sub RM) on five diverse reasoning benchmarks. False positive rates (FPRs) for "master key" attacks reach as high as 80–90% in some models and settings.
  • Simple and Effective Mitigation: The authors propose a data augmentation strategy: generating adversarial negative samples by truncating model-generated solutions to their first sentence (typically a reasoning opener) and labeling them as incorrect. Fine-tuning with this augmented dataset yields a new reward model, Master-RM, which achieves near-zero FPRs on all tested attacks and benchmarks.
  • Open Resources: The robust reward model (Master-RM) and its synthetic training data are released for the community.

Empirical Findings

The paper's experimental results are particularly notable for their breadth and clarity:

  • Prevalence of Vulnerability: Table 1 shows that general-purpose LLMs, including GPT-4o and Claude-4, are highly vulnerable to "master key" attacks, with FPRs up to 90% for certain triggers. Even specialized reward models exhibit non-negligible FPRs (e.g., General-Verifier at 66.8% on MATH for a blank space).
  • Effectiveness of Data Augmentation: Master-RM, trained with adversarial augmentation, consistently achieves FPRs near 0% across all datasets and attack types, without sacrificing agreement with GPT-4o on standard evaluation (96% consistency).
  • Scaling Behavior: The vulnerability does not monotonically decrease with model size. Smaller models may be less vulnerable due to literal matching, while larger models sometimes "self-solve" and compare their own answer to the reference, leading to increased FPRs at scale.
  • Generality of Attacks: The attack generalizes across languages and can be extended by retrieving semantically similar sentences via embedding search, indicating that the vulnerability is not limited to a fixed set of triggers.

Practical Implications

The findings have significant implications for the deployment of LLM-based reward models in RL and evaluation pipelines:

  • Reliability of RLVR Pipelines: RLVR systems that rely on LLM-as-a-judge are at risk of reward hacking, where the policy learns to exploit superficial patterns rather than solving the underlying task. This can lead to collapsed training and meaningless outputs, as empirically observed in the paper.
  • Evaluation Practices: The widespread use of LLMs as evaluation baselines (e.g., GPT-4o) is called into question, as these models can be trivially manipulated. Agreement with such models is not a sufficient indicator of robustness.
  • Mitigation Strategies: The proposed data augmentation method is simple, computationally efficient, and effective. It can be readily incorporated into existing reward model training pipelines. The approach is model-agnostic and does not require architectural changes or additional inference-time computation.
  • Open-Source Resources: The release of Master-RM and its training data provides a practical starting point for robust reward modeling in both academic and industrial settings.

Theoretical Implications and Future Directions

The paper highlights a fundamental challenge in aligning LLM-based evaluators with human intent: models can be easily misled by surface-level cues, especially when the evaluation task is less constrained than generation. This suggests that:

  • Reward Model Robustness should be a primary consideration in RLHF and RLVR research, with systematic adversarial testing as a standard evaluation protocol.
  • Generalization of Robustness: While the current augmentation targets reasoning openers, future work should explore broader classes of adversarial patterns, including those embedded within or at the end of reasoning chains, and more sophisticated attacks.
  • Automated Adversarial Data Generation: Embedding-based retrieval of new "master keys" demonstrates a scalable path for adversarial data mining, which could be integrated into continual reward model training.
  • Beyond Supervised Fine-Tuning: While SFT with adversarial augmentation is effective, further research into adversarial training, contrastive learning, or explicit uncertainty modeling may yield even more robust evaluators.

Conclusion

"One Token to Fool LLM-as-a-Judge" provides a rigorous and actionable analysis of a critical vulnerability in LLM-based reward modeling. The work demonstrates that simple, targeted data augmentation can dramatically improve robustness, and sets a new standard for evaluating and training LLM-as-a-judge systems. The implications extend to any application where LLMs are used for automated evaluation, ranking, or reward assignment, and the open-source release of Master-RM will facilitate further research and deployment of robust reward models.

Youtube Logo Streamline Icon: https://streamlinehq.com

HackerNews

  1. One Token to Fool LLM-as-a-Judge (2 points, 0 comments)