
UniREditBench: Unified Model Editing Benchmark

Updated 9 November 2025
  • UniREditBench is a unified, large-scale benchmark that measures model editing across language and image tasks with diverse, domain-spanning datasets.
  • Its methodology employs multi-hop chain sampling and dual-reference evaluations to assess key metrics such as reliability, generality, and locality.
  • Empirical results highlight strengths in compositional reasoning and reveal challenges like unintended ripple effects and scalability in edit propagation.

UniREditBench refers to a family of unified, large-scale benchmarks designed to evaluate model editing in both language and image modalities, with recent works addressing LLMs (Chen et al., 18 May 2025) and multimodal reasoning-based image editing (Han et al., 3 Nov 2025). These benchmarks address the core challenge of measuring and improving a model’s ability to accommodate edits—whether the insertion, removal, or correction of knowledge and behaviors—while preserving global consistency, compositionality, and minimal unintended side effects. In the context of LLM editing, the term has at times been used interchangeably with "DUnE" (Akyürek et al., 2023); most recently it denotes the benchmark for systematic evaluation of reasoning-based image editing models.

1. Motivation and Scope

UniREditBench arises from two deficiencies in prior evaluation paradigms for model editing:

  1. Narrow Task Framing: Previous benchmarks typically fixate on simple factual triple updates (subject, relation, object) in LLMs or single-object stylistic edits in images, lacking breadth in both domain coverage and the types of editing required (e.g., reasoning, debiasing, multi-hop knowledge propagation, puzzle-solving).
  2. Inadequate Evaluation Modalities: Text-only references fall short when assessing edits that involve spatial, causal, or logical structure, a problem that is particularly acute in complex multimodal settings.

The unified aim is to create systematically controlled, diverse, and large-scale datasets with explicit tests of generality (edit propagation), locality (containment of unrelated knowledge), and ripple effects, applicable to both knowledge- and reasoning-intensive edits.

2. Dataset Construction and Scale

LLM Editing

  • Source: Construction leverages open-domain knowledge graphs (Wikidata; ≈113.7M entities, 12.3k properties) filtered for editorial value, focusing on seven data types (wikibase-item, string, quantity, time, math, coordinate, monolingual-text).
  • Domain Balance: Entities stratified into 25 domains (5 major sectors), using GIGA-scale keyword retrieval and stratified weighted sampling to maximize diversity and penalize redundancy.
  • Triplet Sampling: For each domain, 30,000 head entities are sequentially sampled, yielding over 311k edit triples; the full benchmark comprises 317k edit entries, 933,426 cloze tests, and hundreds of thousands of unique entities and relations. A simplified sampling sketch follows this list.
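
To make the construction pipeline concrete, here is a minimal sketch of stratified weighted sampling followed by triple expansion. It assumes a precomputed mapping from the 25 domains to candidate entity IDs and a `get_triples` helper for fetching (subject, relation, object) facts; both names are hypothetical and not part of the released benchmark code.

```python
import random
from collections import defaultdict

PER_DOMAIN = 30_000  # head entities sampled per domain, as described above

def sample_edit_triples(domain_to_entities, get_triples, weights=None, seed=0):
    """Stratified weighted sampling of head entities, expanded into edit triples.

    domain_to_entities : dict mapping each domain to candidate Wikidata entity IDs
    get_triples        : callable returning (subject, relation, object) facts for a head entity
    weights            : optional per-domain weights that down-weight redundant domains
    """
    rng = random.Random(seed)
    edits = defaultdict(list)
    for domain, entities in domain_to_entities.items():
        w = (weights or {}).get(domain, 1.0)
        n = min(int(PER_DOMAIN * w), len(entities))
        for head in rng.sample(entities, n):
            for subject, relation, obj in get_triples(head):
                # an edit entry asks the model to adopt a new object for (subject, relation)
                edits[domain].append({"subject": subject, "relation": relation, "old_object": obj})
    return edits
```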

Image Editing

  • Curated Benchmark (2,700 samples): Split equally between real-world photography (single-object, viewpoint, temporal, material, interaction, spatial) and game-world reasoning tasks (long-horizon planning, Sokoban, spatial intelligence, logical puzzle-solving).
  • Synthetic Data (UniREdit-Data-100K): Contains >100k samples with original and edited images, instructions, chain-of-thought (CoT) reasoning, and reference outputs. Real-world data are expanded and filtered with vision-LLMs and GPT-4o; game-world data are programmatically generated via code-based simulation and then translated into natural-language instructions.
  • Multimodal Dual-Reference: For every instance, both a textual edit description and a ground-truth (GT) image are provided, enabling VLM-based evaluation of instruction following and fidelity. An illustrative record layout follows this list.
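
To make the dual-reference setup concrete, the snippet below shows what a single UniREdit-Data-100K-style record might contain. All field names and values are illustrative assumptions rather than the released schema.

```python
# Illustrative structure of one dual-reference sample (field names are assumptions).
sample = {
    "source_image": "scenes/kitchen_0042.png",        # original image
    "instruction": "Move the red mug onto the upper shelf.",
    "chain_of_thought": (
        "The mug is on the counter; the upper shelf is empty, so the mug "
        "should appear on the shelf and no longer on the counter."
    ),
    "gt_image": "scenes/kitchen_0042_edited.png",      # ground-truth edited image
    "textual_reference": "Red mug now rests on the upper shelf; counter is clear.",
    "split": "real_world/multi_object/spatial",
}
```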

3. Sampling and Evaluation Methodology

Sampling Algorithms

  • Neighborhood Multi-hop Chain Sampling (NMCS): For LLMs, NMCS systematically produces subgraphs around an edit triple, constructing both "generality" chains (to test edit propagation via up to 4-hop reasoning) and "locality" chains (to probe for ripple effects/unintended side effects); a simplified sketch follows this list.
  • Programmatic/Simulator Pipelines: For image data, programmatic generation allows synthesizing complex puzzles, multi-object interactions, and compositional reasoning traces, with rigorous filtering for correctness and diversity.
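
The sketch below illustrates one plausible reading of the NMCS idea in simplified form, assuming the knowledge graph is given as a dictionary `graph[entity] -> list of (relation, neighbor)` pairs; it is not the benchmark's reference implementation.

```python
import random

def sample_chains(graph, edit_subject, edit_object, max_hops=4, n_chains=5, seed=0):
    """Sample multi-hop chains around an edit triple.

    Chains rooted at the edited object test generality (the edit should
    propagate along them); chains rooted at the original subject that avoid
    the edited object probe locality (they should remain unaffected).
    """
    rng = random.Random(seed)

    def random_walk(start, avoid=None):
        chain, node = [], start
        for _ in range(rng.randint(1, max_hops)):
            neighbors = [(r, e) for r, e in graph.get(node, []) if e != avoid]
            if not neighbors:
                break
            rel, node = rng.choice(neighbors)
            chain.append((rel, node))
        return chain

    generality = [random_walk(edit_object) for _ in range(n_chains)]
    locality = [random_walk(edit_subject, avoid=edit_object) for _ in range(n_chains)]
    return generality, locality
```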

Evaluation Protocols

  • Language:
    • Reliability: exact-match accuracy on the edited queries themselves, $\mathrm{Reliability} = \frac{1}{|\mathcal{E}|} \sum_{(q_{\varepsilon},\, o_{\varepsilon}) \in \mathcal{E}} \mathbb{I}\left[f_{\mathrm{LLM}'}(q_{\varepsilon}) = o_{\varepsilon}\right]$, where $\mathcal{E}$ is the edit set and $f_{\mathrm{LLM}'}$ the edited model.
    • Generality: Score on NMCS-generated related queries, quantifying edit propagation.
    • Locality: Score on disjoint NMCS queries, quantifying the preservation of unrelated knowledge.
    • Ripple Effects: Locality decay as a function of hop depth.
  • Image:
    • Instruction Following ($S_{IF}$): VLM-assigned score on a 1–5 scale for how well the output image realizes the requested edit, judged against the input image, the instruction, the GT reference image, and the textual reference.
    • Visual Consistency ($S_{VC}$): assessment of the preservation of unedited regions.
    • Visual Quality ($S_{VQ}$): realism, lack of artifacts, and physical plausibility.
    • Overall Score: $S_{\mathrm{Overall}} = 0.5\,S_{IF} + 0.3\,S_{VC} + 0.2\,S_{VQ}$; a worked scoring sketch follows this list.
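
As a worked illustration of the formulas above, the sketch below computes reliability over an edit set and aggregates the three VLM scores with the stated 0.5/0.3/0.2 weights; `edited_model` and the score values are placeholders.

```python
def reliability(edited_model, edit_set):
    """Fraction of edited queries answered with the target object (exact match)."""
    hits = sum(edited_model(q) == o for q, o in edit_set)
    return hits / len(edit_set)

def overall_image_score(s_if, s_vc, s_vq):
    """Weighted aggregate of instruction following, consistency, and quality (1-5 scales)."""
    return 0.5 * s_if + 0.3 * s_vc + 0.2 * s_vq

# Example: a VLM judge returning 4.2 / 4.5 / 3.8 yields an overall score of 4.21.
print(overall_image_score(4.2, 4.5, 3.8))
```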

A table of dimensions for image editing in UniREditBench:

Primary Dimension           | Sub-Dimensions                                   | Example Edit Type
----------------------------|--------------------------------------------------|--------------------------
Real-World: Single-Object   | Viewpoint, Pose, Temporal, Material              | Top-view, bend limb
Real-World: Multi-Object    | Integrity, Motion, Mechanical, Medium, Spatial   | Break glass, roll ball
Game-World Scenarios        | Planning, Puzzle, Strategy, Spatial Intelligence | Sokoban, maze, 3D blocks

4. Editing Algorithms and Backbones

LLM Editing Approaches

  • Fine-Tuning: Model weights updated over augmented data; achieves near-perfect direct edit reliability but tends to overfit, sacrificing generality.
  • Locate-then-Edit (ROME, AlphaEdit): Localizes relevant weights and performs minimal parameter changes; scores poorly on generality, especially for multi-hop or composite edits (a simplified rank-one update sketch follows this list).
  • Edit-Training/In-Context Correction (SERAC, IKE): Trains a counterfactual or context-aware module, achieving substantially higher generality; in-context correction and chain-of-thought strategies show promise.
  • Sparse Token Retrieval (GRACE): Retrieves minimal relevant spans to maximize locality.
  • External Module Approaches (T-Patcher, GRACE): Attachments to base model, supporting greater edit modularity.
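
To ground the locate-then-edit family, here is a simplified ROME-style rank-one update in NumPy. It assumes the edited layer is treated as a linear associative memory W with key covariance C, and it omits the tracing step that locates the layer and computes the key and value vectors; it is a sketch of the underlying algebra, not the authors' implementation.

```python
import numpy as np

def rank_one_edit(W, C, k_star, v_star):
    """Rank-one update of a linear layer treated as an associative memory (values = W @ keys).

    W      : (d_v, d_k) weight matrix of the edited layer
    C      : (d_k, d_k) uncentered covariance of previously stored keys
    k_star : (d_k,) key vector for the edited prompt (e.g. subject representation)
    v_star : (d_v,) target value vector encoding the new fact

    Returns W' such that W' @ k_star == v_star while minimally disturbing
    other stored associations.
    """
    u = np.linalg.solve(C, k_star)          # C^{-1} k*
    residual = v_star - W @ k_star          # what the layer currently gets wrong
    delta = np.outer(residual, u) / (u @ k_star)
    return W + delta
```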

Image Model Baselines

  • Bagel/Bagel-Think: Multimodal transformer with an explicit CoT mode; enhanced with UniREdit-Data-100K (the fine-tuned variant appears as UniREdit-Bagel in the results below).
  • Qwen-Image-Edit, Step1X-Edit: Multimodal open-source baselines.
  • Closed-Source: GPT-4o, Nano Banana, Wan 2.5, etc.

5. Empirical Results and Observations

  • Edit Reliability: All editors (FT, ROME, AlphaEdit, IKE, SERAC) achieve ≈100% on direct edits; FT achieves this via overfitting.
  • Generality: Locate-then-edit plateaus at ~35–45%; edit-training and in-context methods reach 76–82%.
  • Locality: Most methods preserve unrelated facts at >85–95%; GRACE approaches 100% due to sparsity and targeted edits; some locality is lost as edit complexity increases (notably in multi-hop).
  • Domain-Specific Trends: Generality higher in Natural Sciences and Humanities; lower in Social and Applied Sciences, indicating corpus bias.
  • Ripple Effects: Locality decays with hop depth, exposing vulnerability to unintended propagation.
Model                       | Overall Score | Real-World | Game-World
----------------------------|---------------|------------|-----------
UniREdit-Bagel (fine-tuned) | ~78.9         | ~76.5      | ~82.3
GPT-4o                      | ~71.6         | –          | –
Nano Banana                 | ~68.4         | –          | –
Bagel-Think (vanilla)       | ~51.3         | –          | –
  • UniREdit-Bagel outperforms closed-source models in both real and especially game-world scenarios by a significant margin (~17 points in game-world).
  • Out-of-Distribution (OOD): Superior generalization of UniREdit-Bagel on rule-driven and compositional tasks, although factual/conceptual performance lags behind proprietary GPT-4o.
  • Qualitative Patterns: Most models succeed on direct attribute changes; structured puzzles, logical inference, and spatial reasoning remain challenging.

6. Open Challenges and Future Directions

  • Reversibility: Designing editors that allow precise “undo” of prior modifications.
  • Atomicity and Composition: Robust handling of composite or conflicting edits, and precise tracking of edit provenance.
  • Personalization and Safety: Edits expressing personal preferences, exclusion rules, or safety constraints require nuanced, often interactive, evaluation paradigms.
  • Scalability: Retrieval-augmented editing’s efficiency declines as the set of edits grows; efficient memory management and query-time filtering are necessary (see the sketch after this list).
  • Robustness to Paraphrase: Edits must generalize across diverse query phrasings and contexts, motivating further advances in data diversity and edit propagation analysis.
  • Quantitative Visual Metrics: Future benchmarks may incorporate measures such as LPIPS or IoU in addition to VLM-based scoring.
  • 3D and Video Editing: Extension to temporally and spatially variant edits in richer modalities.
  • Explicit Reasoning Components: Architectures integrating symbolic or algorithmic planning to address logical/puzzle-style edits.
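
As one illustration of query-time filtering, a retrieval-augmented editor can be reduced to a nearest-neighbor lookup over stored edits with a similarity threshold. The `embed` function and the class below are hypothetical and only sketch the general SERAC/GRACE-style idea, not any specific editor's implementation.

```python
import numpy as np

class EditMemory:
    """Minimal retrieval-augmented edit store: apply a stored edit only when the
    incoming query is close enough to the query it was registered under."""

    def __init__(self, embed, threshold=0.8):
        self.embed = embed              # text -> unit-norm numpy vector (assumed provided)
        self.threshold = threshold      # query-time filter that limits ripple effects
        self.keys, self.answers = [], []

    def add_edit(self, edit_query, new_answer):
        self.keys.append(self.embed(edit_query))
        self.answers.append(new_answer)

    def lookup(self, query):
        """Return the edited answer for a matching query, or None to defer to the base model."""
        if not self.keys:
            return None
        sims = np.stack(self.keys) @ self.embed(query)   # cosine similarity for unit vectors
        best = int(np.argmax(sims))
        return self.answers[best] if sims[best] >= self.threshold else None
```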

7. Significance and Impact

UniREditBench establishes a rigorous, extensible foundation for evaluating—and advancing—the field of model editing. By providing strongly controlled, domain-diverse, and large-scale datasets that test reliability, generality, and locality, it reveals clear limitations of both classical and state-of-the-art model updating techniques. Chain-of-thought and programmatic reasoning augmentations, dual-reference multimodal evaluation, and exhaustive domain coverage collectively set a new empirical standard for future research in both LLM and multimodal editing. The benchmark’s open-source ecosystem (for language: https://github.com/feyzaakyurek/dune; for image: synthetic and curated sets prepared for Bagel and related frameworks) supports reproducibility and iterative methodological progress.

A plausible implication is that the generalization gap between direct-edit success and actual compositional knowledge transfer will remain an active area of inquiry, especially as model architectures broaden and application domains become increasingly complex. UniREditBench will likely serve as a central resource for the systematic comparison of approaches and for diagnosing subtle weaknesses in model editing algorithms as both practical demands and theoretical frameworks evolve further.
