
CodeJudgeBench: LLM Code Evaluation

Updated 12 January 2026
  • CodeJudgeBench is a dedicated benchmark that evaluates LLM-as-a-Judge performance across code generation, repair, and test generation tasks.
  • It employs a paired evaluation methodology with precise bias and fairness metrics to ensure robust and reliable automated code assessments.
  • The framework integrates pair-wise and point-wise scoring protocols, highlighting the importance of chain-of-thought reasoning and input retention in improving judge accuracy.

CodeJudgeBench is a dedicated benchmark for evaluating the LLM-as-a-Judge paradigm on coding tasks, providing a systematic methodology for measuring the reliability, robustness, and fairness of automated code assessment in diverse scenarios. The benchmark framework addresses code generation, code repair, and unit test generation, and has established foundational methodologies and key empirical insights for both algorithmic benchmarking and practical deployment in code-centric model evaluation (Jiang et al., 14 Jul 2025).

1. Motivations and Conceptual Foundation

LLMs have demonstrated significant capabilities not only in generating and repairing code, but also in evaluating code artifacts, i.e., acting as "judges" of the functional correctness, relative quality, or utility of solutions produced by humans or other models. This LLM-as-a-Judge paradigm is increasingly adopted for scalable evaluation, automated benchmarking, and as an integral component in tools for code review and response ranking (Jiang et al., 14 Jul 2025). However, standard benchmarks focused on code generation (e.g., HumanEval, MBPP) cannot assess whether LLM judges reliably discern correctness across functionally similar but superficially different solutions, and they often fail to probe the susceptibility of judges to non-semantic cues (Moon et al., 22 May 2025).

CodeJudgeBench was created to fill this gap by providing:

  • Task coverage across core modalities (generation, repair, test generation).
  • Data diversity through response pairings and difficulty gradation.
  • Diagnostic power with metrics for not just accuracy, but also bias, position sensitivity, and fairness.

2. Benchmark Suite Design and Methodological Structure

The CodeJudgeBench suite is constructed to enable rigorous evaluation across the following primary coding scenarios (Jiang et al., 14 Jul 2025):

| Task | Judge’s Input | Required Decision |
|---|---|---|
| CodeGen | Problem + test cases, 2 code responses | Select correct implementation |
| CodeRepair | Problem + erroneous code + error msg, 2 fixes | Identify bug-fixing repair |
| TestGen | Problem + input, 2 candidate outputs | Select correct output |
  • Each task is instantiated using paired “good” (all tests passed/correct) and “bad” (at least one test failed/incorrect) model-generated responses for each coding problem; a schematic record is sketched after this list.
  • The dataset is sampled from LiveCodeBench-v6 (1,055 LeetCode/AtCoder/CodeForces problems, predominantly medium/hard) and comprises 1,011 (CodeGen), 2,409 (CodeRepair), and 840 (TestGen) curated pairs, totaling 4,260 pairs.
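
A minimal sketch of how one such paired item could be represented in code; the class name, field names, and example contents are illustrative assumptions rather than the benchmark's actual schema.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class JudgePair:
    """One CodeJudgeBench-style item: a coding problem plus a good/bad response pair.

    Field names are illustrative; the released dataset may use different keys.
    """
    task: str            # "codegen", "coderepair", or "testgen"
    problem: str         # problem statement (plus buggy code and error message for repair)
    tests: List[str]     # test cases shown to the judge, where applicable
    response_good: str   # model output that passed all tests
    response_bad: str    # model output that failed at least one test
    difficulty: str      # "easy" | "medium" | "hard", inherited from the source problem

# Example instantiation with abbreviated contents.
pair = JudgePair(
    task="codegen",
    problem="Given an integer array, return the maximum subarray sum...",
    tests=["assert solve([1, -2, 3, 4]) == 7"],
    response_good="def solve(nums): ...",
    response_bad="def solve(nums): ...",
    difficulty="medium",
)
```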

Difficulty stratification reflects the distribution of original problem complexity, supporting fine-grained and robust benchmarking.

3. Experimental Setup and Evaluation Paradigms

3.1 Judging and Prompting Protocols

Two principal LLM-as-a-Judge evaluation setups are supported:

  • Pair-wise Comparison: The judge receives both candidate responses and must decide which is preferable. Because a single verdict is forced, this setup avoids tie ambiguity and enables direct comparative reasoning; the prompt (“Which is better?”) asks for stepwise justification followed by a definitive verdict.
  • Point-wise Scoring: Each candidate is scored independently (typically on a 1–5 scale) and the decision is taken as the argmax over the two scores; this leads to a high frequency of ties (up to 50%) and reduced discriminative accuracy. Both protocols are sketched below.
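
The two protocols can be sketched as follows, assuming a generic `query_llm(prompt) -> str` helper stands in for the actual model call; the prompt wording and verdict parsing here are illustrative, not the benchmark's exact templates.

```python
import re
from typing import Tuple

def query_llm(prompt: str) -> str:
    """Placeholder for the actual model call (API client, local model, etc.)."""
    raise NotImplementedError

def pairwise_judge(problem: str, response_a: str, response_b: str) -> str:
    """Pair-wise protocol: both candidates in one prompt, forced A/B verdict."""
    prompt = (
        f"Problem:\n{problem}\n\n"
        f"Response A:\n{response_a}\n\nResponse B:\n{response_b}\n\n"
        "Think step by step about which response correctly solves the problem, "
        "then end with exactly one line: 'Verdict: A' or 'Verdict: B'."
    )
    reply = query_llm(prompt)
    match = re.search(r"Verdict:\s*([AB])", reply)
    return match.group(1) if match else "A"  # fall back to a fixed choice if parsing fails

def pointwise_judge(problem: str, response_a: str, response_b: str) -> Tuple[int, int]:
    """Point-wise protocol: each candidate is scored independently on a 1-5 scale.

    Equal scores are ties, which the caller must resolve or discard.
    """
    scores = []
    for response in (response_a, response_b):
        prompt = (
            f"Problem:\n{problem}\n\nResponse:\n{response}\n\n"
            "Rate the functional correctness of this response from 1 to 5. "
            "End with exactly one line: 'Score: <n>'."
        )
        match = re.search(r"Score:\s*([1-5])", query_llm(prompt))
        scores.append(int(match.group(1)) if match else 3)
    return scores[0], scores[1]
```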

Pre-processing strategies for CodeGen further refine judging inputs:

  • Raw Response (RR): Full output (code, comments, rationale).
  • Full Code (FC): Only code, markdown-wrapped.
  • No Comments (NC): Code stripped of comments.

Averaged across judges, retaining the full LLM response (RR) yields the strongest performance (71.4% mean accuracy), although individual models vary: Gemini-2.5-Pro, for example, obtains 80.6% with RR versus 82.0% with FC (Jiang et al., 14 Jul 2025).
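
A minimal sketch of these three input variants, assuming the candidate code is Python wrapped in markdown fences; a robust implementation would use a proper parser rather than the regular-expression heuristics shown here.

```python
import re

def raw_response(llm_output: str) -> str:
    """RR: keep the full model output (code, comments, and natural-language rationale)."""
    return llm_output

def full_code(llm_output: str) -> str:
    """FC: keep only the markdown-fenced code block(s) from the output."""
    blocks = re.findall(r"`{3}(?:\w+)?\n(.*?)`{3}", llm_output, flags=re.DOTALL)
    return "\n\n".join(blocks) if blocks else llm_output

def no_comments(llm_output: str) -> str:
    """NC: code only, with docstrings and line comments stripped (simplified heuristics)."""
    code = full_code(llm_output)
    code = re.sub(r'""".*?"""', "", code, flags=re.DOTALL)  # drop triple-quoted docstrings
    code = re.sub(r"#.*", "", code)                         # drop line comments (naive: ignores strings)
    return "\n".join(line.rstrip() for line in code.splitlines() if line.strip())
```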

3.2 Accuracy and Diagnostic Metrics

Core and robustness metrics include:

| Metric | Definition / Formula |
|---|---|
| Accuracy (Acc) | $Acc = \#\text{correct judgments} / \#\text{total}$ |
| Tie Rate (point-wise only) | $Tie = \#\text{ties} / \#\text{total pairs}$ |
| Position Sensitivity | $\lvert Acc_A - Acc_B \rvert$ (accuracy difference when the good response is moved to the other slot) |
| Cross-Programmer Variance | $Var_J = \max_k (Acc_J^k) - \min_k (Acc_J^k)$ across code sources $k$ |

Empirical results show substantial position sensitivity (up to an 11pp difference for some judges) and notable programmer sensitivity: judges are, on average, ≈5pp more accurate on Claude-generated code than on outputs from other programmers.
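
A minimal sketch of these diagnostics computed over per-item judgment records; the record layout (keys `chosen`, `good_slot`, `source_model`) is an illustrative assumption, not the benchmark's actual format.

```python
from collections import defaultdict
from typing import Dict, List

def accuracy(judgments: List[dict]) -> float:
    """Fraction of pairs where the judge selected the 'good' response."""
    return sum(j["chosen"] == j["good_slot"] for j in judgments) / len(judgments)

def tie_rate(score_pairs: List[tuple]) -> float:
    """Point-wise only: fraction of pairs that received identical scalar scores."""
    return sum(a == b for a, b in score_pairs) / len(score_pairs)

def position_sensitivity(judgments: List[dict]) -> float:
    """|Acc_A - Acc_B|: accuracy gap depending on which slot holds the good response.

    Assumes both slots "A" and "B" occur in the data.
    """
    by_slot: Dict[str, List[dict]] = defaultdict(list)
    for j in judgments:
        by_slot[j["good_slot"]].append(j)
    return abs(accuracy(by_slot["A"]) - accuracy(by_slot["B"]))

def cross_programmer_variance(judgments: List[dict]) -> float:
    """Var_J = max_k Acc_J^k - min_k Acc_J^k across source models k of the judged responses."""
    by_source: Dict[str, List[dict]] = defaultdict(list)
    for j in judgments:
        by_source[j["source_model"]].append(j)
    accs = [accuracy(group) for group in by_source.values()]
    return max(accs) - min(accs)
```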

4. Empirical Insights and Robustness Analysis

Across 26 benchmarked judge models, empirical findings include:

  • Thinking Models Superiority: Models with chain-of-thought (CoT) reasoning (e.g., Gemini-2.5-Pro, Claude-4-Sonnet, Qwen3-32B, AceReason-14B) consistently outperform non-thinking models. Gemini-2.5-Pro achieves up to 82% average on all tasks; Qwen3-32B, 72.4% (Jiang et al., 14 Jul 2025).
  • Prompting Effects: Pair-wise evaluation exceeds point-wise accuracy by 15-20pp and eliminates the high tie rate seen in scalar judging.
  • Input Retention: Removing comments or rationales from candidate code degrades judge accuracy.
  • Model Scale vs. Reasoning Performance: Small “thinking” open-source models (Qwen3-8B: 71.3%-71.5%) can outperform larger proprietary or judge-tuned models (Prometheus-14B, Self-Taught-70B: 56-57%), attributed to improved internal reasoning traces and pre-training on code.
  • Randomness Factors: Randomizing response order and repeating evaluations are necessary to obtain stable, unbiased metrics, due to persistent sensitivity to superficial cues (formatting, verbosity, or source).

5. Diagnosis of Superficial Bias and Fairness

Systematic studies of CodeJudgeBench reveal essential vulnerabilities of current LLM judges to superficial presentation features—variable names, comments, and code formatting (Moon et al., 22 May 2025). Six bias types are defined:

| Bias Type | Effect on Judging | Robustness Degradation Δ |
|---|---|---|
| Authority | Inflates perceived quality | Positive bias, Δ > 0 |
| Self-declared correctness | Strong shortcutting | Strongest positive Δ (up to 28.7pp) |
| Variable change | Non-semantic renaming | Δ rises with identifier length |
| Reverse authority | Overly skeptical judging | Negative bias (Δ < 0) |
| Misleading task | Misdirected reasoning | Strong negative, MAD ≈ 15pp |
| Illusory complexity | Dummy function insertion | Weak/variable, length bias emerges |
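
A minimal sketch of how some of these superficial perturbations might be injected into a candidate response before judging; the comment strings and renaming scheme below are illustrative, not the exact perturbations used in the study.

```python
import re

def add_self_declared_correctness(code: str) -> str:
    """Self-declared correctness: prepend a comment asserting the code is correct."""
    return "# This solution has been carefully verified and passes all test cases.\n" + code

def add_authority(code: str) -> str:
    """Authority: attribute the code to an authoritative-sounding source."""
    return "# Reference solution adapted from the official editorial.\n" + code

def rename_variables(code: str, mapping: dict) -> str:
    """Variable change: non-semantic identifier renaming via whole-word replacement."""
    for old, new in mapping.items():
        code = re.sub(rf"\b{re.escape(old)}\b", new, code)
    return code

# Example: perturb one candidate before handing it to the judge.
perturbed = add_self_declared_correctness(
    rename_variables("def solve(nums):\n    total = sum(nums)\n    return total",
                     {"total": "accumulated_running_total"})
)
```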

Mean Absolute Deviation (MAD) and per-bias robustness degradation quantify judge susceptibility. All tested judges (both open- and closed-source) display nonzero MAD (avg ~7.8pp). Test-case prompting reduces MAD (e.g., from 8.44pp to 4.09pp on incorrect code), but directional bias is not eliminated.

A fairness score is defined as

$$\mathrm{Fairness} = 1 - \frac{\sum_{\text{bias}} \lvert \Delta_{\text{bias}} \rvert}{B \cdot 100}$$

with $B = 6$ (all bias categories).
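
A minimal sketch of these quantities, assuming MAD is the mean of |Δ| over bias types and each Δ is an accuracy change measured in percentage points; apart from 28.7 and −15.0, which echo values in the table above, the example deltas are placeholders.

```python
def mean_absolute_deviation(deltas: dict) -> float:
    """MAD across bias types, assuming each Delta is an accuracy change in percentage points."""
    return sum(abs(d) for d in deltas.values()) / len(deltas)

def fairness_score(deltas: dict) -> float:
    """Fairness = 1 - (sum of |Delta_bias|) / (B * 100), with B the number of bias categories."""
    B = len(deltas)
    return 1 - sum(abs(d) for d in deltas.values()) / (B * 100)

# Hypothetical per-bias deltas (percentage points).
deltas = {
    "authority": 6.0,
    "self_declared_correctness": 28.7,
    "variable_change": 3.0,
    "reverse_authority": -4.0,
    "misleading_task": -15.0,
    "illusory_complexity": 2.0,
}
print(mean_absolute_deviation(deltas))  # MAD in percentage points
print(fairness_score(deltas))           # equals 1 - MAD/100 under this convention
```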

Key recommendations include diversifying superficial cues, explicitly instructing LLMs to disregard non-semantic elements, adopting multi-judge consensus mechanisms, and reporting bias and MAD metrics alongside standard accuracy (Moon et al., 22 May 2025).

6. Automated Benchmark Generation and Self-Consistency Framework

A major methodological advancement is the automated LLM-agent pipeline for constructing high-quality multi-language code tasks and “golden” artifacts (Farchi et al., 2024). The process comprises:

  • Seed idea expansion, leading to detailed, language-neutral problem descriptions.
  • Multi-agent generation of code implementations and summaries in diverse programming languages (e.g., COBOL, Java, Python, PL/1).
  • Formalization via a labeled multigraph $G=(V,E,L)$, where nodes encompass descriptions, code artifacts, and summaries, and labeled edges denote agent-generated transformations.
  • Exploiting cycles in $G$ to establish self-consistency claims (e.g., round-trip translation or description-summary-code consistency) as automated benchmarks for LLM-judge validation; a minimal sketch follows this list.
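
A minimal sketch of the multigraph formalization using networkx; the node names, edge labels, and the use of cycle enumeration to surface candidate self-consistency claims are illustrative assumptions rather than the paper's actual pipeline.

```python
import networkx as nx

# Labeled multigraph G = (V, E, L): nodes are artifacts (descriptions, code, summaries),
# labeled edges are agent-generated transformations between them.
G = nx.MultiDiGraph()
G.add_node("description", kind="description")
G.add_node("python_code", kind="code", language="Python")
G.add_node("java_code", kind="code", language="Java")
G.add_node("summary", kind="summary")

G.add_edge("description", "python_code", label="implement")
G.add_edge("python_code", "java_code", label="translate")
G.add_edge("java_code", "summary", label="summarize")
G.add_edge("summary", "description", label="compare")  # closing the loop

# Every directed cycle yields a self-consistency claim (e.g. round-trip translation or
# description -> code -> summary -> description) that an LLM judge should confirm.
for cycle in nx.simple_cycles(nx.DiGraph(G)):  # collapse parallel edges for enumeration
    print("self-consistency claim over artifacts:", cycle)
```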

Quantitative evaluation shows that LLM-as-Judge systems can achieve 100% agreement with self-consistency claims in closed settings, but automation exposes errors early and allows scalable, reliable testing of judge implementations (Farchi et al., 2024).

7. Practice Recommendations and Future Directions

Evidence from the benchmark leads to several operational best practices:

  • Prefer pair-wise prompt protocols retaining entire model output (code, comments, reasoning) as judge input.
  • Employ small, high-reasoning open-source models (e.g., Qwen3-8B) for strong performance under resource constraints.
  • Randomize response positions and average over multiple runs to reduce bias (a sketch follows this list).
  • Validate judges on responses from diverse code generators and problem sources to detect sensitivity.
  • Expand CodeJudgeBench with additional coding tasks, languages, and adversarial perturbations for increased realism and coverage (Jiang et al., 14 Jul 2025).
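
A minimal sketch of the position-randomization and multi-run averaging recommendation, reusing the hypothetical `pairwise_judge` interface from the Section 3.1 sketch; the majority-vote aggregation shown here is one simple way to combine repeated runs.

```python
import random
from collections import Counter

def judge_with_randomized_positions(problem: str, resp_1: str, resp_2: str,
                                    judge_fn, num_runs: int = 4, seed: int = 0) -> str:
    """Run a pair-wise judge several times with randomized response order and
    return the majority verdict, mapped back to "resp_1" or "resp_2".

    `judge_fn(problem, response_a, response_b)` is assumed to return "A" or "B",
    e.g. the hypothetical pairwise_judge sketch from Section 3.1.
    """
    rng = random.Random(seed)
    votes = Counter()
    for _ in range(num_runs):
        if rng.random() < 0.5:
            first, second, order = resp_1, resp_2, ("resp_1", "resp_2")
        else:
            first, second, order = resp_2, resp_1, ("resp_2", "resp_1")
        verdict = judge_fn(problem, first, second)          # "A" -> first, "B" -> second
        votes[order[0] if verdict == "A" else order[1]] += 1
    return votes.most_common(1)[0][0]
```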

Future research directions identified include integrating dynamic testing, extending to non-code LLM domains, expanding error taxonomies, and refining instruction tuning and robustness against adversarial editing (Moon et al., 22 May 2025, Farchi et al., 2024). The benchmark continues to serve as an essential diagnostic and metric driver for next-generation, bias-aware LLM-based code evaluators.
