
MiLDEEval: Multi-Layer Editing Evaluation

Updated 15 January 2026
  • The paper introduces MiLDEEval, a bespoke protocol that provides detailed, multi-dimensional assessment for reasoning-intensive, multi-layer document editing systems.
  • It systematically decomposes performance into instruction following, layout consistency, aesthetics, and text rendering, using tailored metrics for each dimension.
  • The aggregated MiLDEScore, reinforced by gating and synergy, strongly aligns with human judgments and offers a robust benchmark for future multimodal editing research.

MiLDEEval is a bespoke evaluation protocol designed for the assessment of reasoning-intensive, multi-layer document editing systems. Developed alongside the MiLDEBench benchmark, MiLDEEval introduces a multidimensional, perceptually driven framework to benchmark and diagnose model performance in tasks where natural-language instructions guide fine-grained edits across complex, multi-layer design documents consisting of text, images, and decorative elements. Unlike previous approaches that evaluate only flat canvas image edits, MiLDEEval systematically decomposes editing success into four dimensions—Instruction Following, Layout Consistency, Aesthetics, and Text Rendering—and recombines these into a single MiLDEScore that exhibits strong alignment with human judgments (Lin et al., 8 Jan 2026).

1. Role and Motivation

MiLDEEval serves two primary functions within the MiLDEBench suite: diagnostic analysis and unified system ranking. The protocol provides per-dimension scores to reveal failure modes, such as models that preserve spatial arrangement but ignore instructions. In addition, these dimensions are aggregated into a summary MiLDEScore, enabling straightforward comparison and ranking of end-to-end editing agents. This dual capability allows researchers to distinguish between models that superficially satisfy visual or structural criteria and those that perform authentic, user-intended, layer-aware edits (Lin et al., 8 Jan 2026).

2. The Four Evaluation Dimensions

Each dimension is precisely defined with tailored metrics. This decomposition enables granular scrutiny and robust aggregation.

2.1 Instruction Following

This dimension quantifies whether the model accomplished the layer-specific content changes requested. For each layer flagged for editing in the gold annotation, InternVL3-38B generates a binary question ("Has the main image been changed to a museum scene?") and auto-judges the answer. The raw score is computed as:

$$\mathrm{IF} = 100 \cdot \frac{1}{N_{\text{edit}}} \sum_{i=1}^{N_{\text{edit}}} \mathbb{1}[A_i = \text{"yes"}]$$

where $N_{\text{edit}}$ is the number of layers requiring edits (Lin et al., 8 Jan 2026).
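As a minimal sketch, the Instruction Following score is just the percentage of per-layer "yes" verdicts; here the VLM judge's binary answers are assumed to be collected already as strings:

```python
# Sketch of the Instruction Following (IF) score, assuming the binary
# judgments from the VLM judge are already collected as "yes"/"no".
def instruction_following(answers: list[str]) -> float:
    """IF = 100 * (fraction of edited layers judged 'yes')."""
    if not answers:
        return 0.0
    return 100.0 * sum(a == "yes" for a in answers) / len(answers)
```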

2.2 Layout Consistency

This dimension measures the fidelity of the output's spatial arrangement relative to the original. Masks are extracted via Adopd Doc2Mask for both the original document $D$ and the edited document $D'$:

  • Compute the IoU similarity matrix $S_{ij}$.
  • Solve the maximum-weight assignment with the Hungarian algorithm, retaining matches with $\text{IoU} \geq \tau_{\text{IoU}} = 0.1$.
  • For each matched mask pair $M_i, M'_j$, calculate:
    • Position: $c_{\text{pos}} = 1 - \frac{\|\text{centroid}(M_i) - \text{centroid}(M'_j)\|_2}{\sqrt{H^2 + W^2}}$
    • Shape: $c_{\text{shape}} = \mathrm{IoU}(M_i, M'_j)$
    • Area: $c_{\text{area}} = \min(\text{area}(M_i), \text{area}(M'_j)) / \max(\text{area}(M_i), \text{area}(M'_j))$

Unmatched-layer penalties are computed for disappeared layers ($p_{\text{dis}}$) and newly created layers ($p_{\text{new}}$):

$$\mathrm{LayoutConsistency} = \max\left\{0,\; \omega_{\text{match}} \cdot r_{\text{match}} + \omega_{\text{pos}} \cdot \overline{c_{\text{pos}}} + \omega_{\text{shape}} \cdot \overline{c_{\text{shape}}} + \omega_{\text{area}} \cdot \overline{c_{\text{area}}} - \omega_{\text{penalty}} \cdot (p_{\text{dis}} + p_{\text{new}})\right\} \cdot 100$$

Weights are empirically assigned: $\omega_{\text{match}} = 0.25$, $\omega_{\text{pos}} = 0.20$, $\omega_{\text{shape}} = 0.20$, $\omega_{\text{area}} = 0.20$, $\omega_{\text{penalty}} = 0.15$ (Lin et al., 8 Jan 2026).
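The matching-plus-weighting step can be sketched as follows. Mask extraction via Doc2Mask is outside the scope of this sketch, so the pairwise IoU matrix and per-pair position/area scores are assumed to be precomputed; the match-rate definition (`r_match` as matched pairs over the larger layer count) is likewise an assumption not spelled out in the text:

```python
# Sketch of the Layout Consistency aggregation, assuming the IoU matrix
# between original and edited layer masks is precomputed (Doc2Mask is
# not reproduced here). r_match's denominator is an assumption.
import numpy as np
from scipy.optimize import linear_sum_assignment

TAU_IOU = 0.1
W = dict(match=0.25, pos=0.20, shape=0.20, area=0.20, penalty=0.15)

def layout_consistency(iou, c_pos, c_area, p_dis, p_new):
    """iou: (n_orig, n_edit) IoU matrix; c_pos/c_area: per-pair score
    matrices aligned with iou; p_dis/p_new: unmatched-layer penalties."""
    rows, cols = linear_sum_assignment(-iou)   # maximum-weight assignment
    keep = iou[rows, cols] >= TAU_IOU          # retain only strong matches
    rows, cols = rows[keep], cols[keep]
    if len(rows) == 0:
        return 0.0
    r_match = len(rows) / max(iou.shape)       # assumed match-rate definition
    score = (W["match"] * r_match
             + W["pos"] * c_pos[rows, cols].mean()
             + W["shape"] * iou[rows, cols].mean()   # c_shape = IoU
             + W["area"] * c_area[rows, cols].mean()
             - W["penalty"] * (p_dis + p_new))
    return max(0.0, score) * 100.0
```

Note that `linear_sum_assignment` minimizes cost, so the IoU matrix is negated to obtain the maximum-weight matching.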

2.3 Aesthetics

The edited document is fed to a frozen Aesthetic Predictor V2.5. The raw score $A_{\text{raw}} \in [1, 10]$ is linearly rescaled for normalization: $A_h = \frac{A_{\text{raw}}}{10}$. This dimension judges whether the edit maintains or improves the document's visual appeal.

2.4 Text Rendering

Text rendering ensures all required textual edits are correctly executed. The protocol applies Adopd Doc2BBox for detection and InternVL3-38B for OCR and edit judgment. Each text region is scored:

  • 0 = incorrect
  • 0.5 = partially correct
  • 1 = correct

Final score:

$$\mathrm{TR} = 100 \cdot \frac{1}{T} \sum_{k=1}^{T} \text{score}_k$$

with $T$ the total number of text regions needing edits (Lin et al., 8 Jan 2026).
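Given the per-region verdicts (0 / 0.5 / 1) produced by the OCR-and-judge step, the Text Rendering score is a simple rescaled mean; a minimal sketch:

```python
# Sketch of the Text Rendering (TR) score, assuming each edited text
# region has already been judged 0 (incorrect), 0.5 (partially
# correct), or 1 (correct) by the OCR-based judge.
def text_rendering(region_scores: list[float]) -> float:
    """TR = 100 * mean per-region score over T regions needing edits."""
    if not region_scores:
        return 0.0
    return 100.0 * sum(region_scores) / len(region_scores)
```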

3. Aggregation: The MiLDEScore

To synthesize the four dimensions into an overall measure, MiLDEEval employs gated, synergistic aggregation. Raw scores are normalized to $[0, 1]$. A sigmoid gate on instruction following ($g(IF_h)$, with $IF_h = IF/100$, threshold $\tau = 0.3$, steepness $k = 10$) suppresses the layout and aesthetics contributions when instructions are ignored. The final MiLDEScore is:

$$\begin{aligned} \mathrm{MiLDEScore} = \; & w_{\text{if}} \cdot IF_h + w_{\text{tr}} \cdot TR_h \\ & + g(IF_h) \cdot (w_{\text{lc}} \cdot LC_h + w_a \cdot A_h) \\ & + w_{\text{sy}} \cdot g(IF_h) \cdot IF_h \cdot LC_h \end{aligned}$$

with $w_{\text{if}} = 0.30$, $w_{\text{tr}} = 0.30$, $w_{\text{lc}} = 0.30$, $w_a = 0.10$, $w_{\text{sy}} = 0.15$. This configuration yields maximal Spearman correlation ($\rho \approx 0.88$) with human ratings, outperforming alternatives such as weighted-sum, geometric-mean, or harmonic-mean aggregation (Lin et al., 8 Jan 2026).
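The gated aggregation above can be sketched directly from the stated thresholds and weights, with a standard sigmoid assumed as the gate's functional form; all inputs are the dimension scores normalized to $[0, 1]$:

```python
# Sketch of the gated, synergistic MiLDEScore aggregation, using the
# threshold, steepness, and weights reported in the text; a standard
# logistic sigmoid is assumed for the gate.
import math

TAU, K = 0.3, 10.0                         # gate threshold and steepness
W_IF, W_TR, W_LC, W_A, W_SY = 0.30, 0.30, 0.30, 0.10, 0.15

def gate(if_h: float) -> float:
    """Sigmoid gate on the normalized instruction-following score."""
    return 1.0 / (1.0 + math.exp(-K * (if_h - TAU)))

def milde_score(if_h: float, tr_h: float, lc_h: float, a_h: float) -> float:
    g = gate(if_h)
    return (W_IF * if_h + W_TR * tr_h
            + g * (W_LC * lc_h + W_A * a_h)   # gated layout + aesthetics
            + W_SY * g * if_h * lc_h)         # synergy term
```

With these values, a model that ignores instructions (`if_h = 0`) receives almost no credit for layout or aesthetics, since the gate is nearly closed below $\tau = 0.3$.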

4. Data and Annotation Protocol

The MiLDEBench dataset comprises 19.6 K multi-layer documents (avg. 4.45 layers/document), with 17.7 K for training and 1.9 K for testing. 50 K editing instructions are generated via persona-based and document-based pipelines with InternVL3-38B and further refined by human validation. Layer-wise decomposition and alignment to instructions are verified by multimodal model matching and expert annotation.

For MiLDEEval, all 1.9 K test samples are assessed. Human annotation for MiLDEScore validation uses 100 sampled test cases with two independent annotators per system (PhD/master’s students in multimodal research or professional designers), rating each dimension (0–3 scale) and overall outcome. Inter-annotator agreement statistics: Instruction Following $\kappa = 0.75$, Layout Consistency $\kappa = 0.71$, Aesthetics $\kappa = 0.61$, Text Rendering $\kappa = 0.72$, Overall $\kappa = 0.69$ (Lin et al., 8 Jan 2026).
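The agreement statistic reported here is Cohen's kappa, which corrects raw agreement for chance. A minimal pure-Python computation, using toy rating pairs rather than the actual annotations:

```python
# Illustrative Cohen's kappa between two annotators' rating lists.
# The rating data used in the tests below is toy data, not the paper's.
from collections import Counter

def cohens_kappa(a: list, b: list) -> float:
    """kappa = (p_o - p_e) / (1 - p_e)."""
    n = len(a)
    p_o = sum(x == y for x, y in zip(a, b)) / n          # observed agreement
    ca, cb = Counter(a), Counter(b)
    p_e = sum(ca[l] * cb[l] for l in set(a) | set(b)) / (n * n)  # chance
    return (p_o - p_e) / (1 - p_e)
```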

5. Experimental Protocols and Exemplars

MiLDEEval discriminates sharply among model behaviors. Illustrative cases include:

  • A diffusion model returning the input unedited scores $\mathrm{LC} \approx 100\%$ but $\mathrm{IF} = 0$; the MiLDEScore gate suppresses the layout contribution, yielding a near-zero overall score.
  • Partial text edits (e.g., “piano”→“harpsichord” but not “concert”) result in $\mathrm{TR} = 0.5$ and $\mathrm{IF} \approx 0.5$, lowering the MiLDEScore accordingly.
  • Closed-source models may follow instructions ($\mathrm{IF} \approx 0.9$) with slight layout drift ($LC_h \approx 0.6$), combining for a high MiLDEScore via synergy and gating; open-source models may follow instructions ($\mathrm{IF} \approx 0.8$) but poorly preserve layout ($LC_h \approx 0.2$), resulting in moderate overall scores.

No hypothesis testing or p-values are computed; metric validation is limited to correlation and inter-annotator statistics (Lin et al., 8 Jan 2026).

6. Conceptual Distinctions and Implications

MiLDEEval provides a uniquely fine-grained, human-aligned yardstick for multi-layer document editing, going beyond flat image metrics or superficial “looks good” criteria and instead enforcing rigorous assessment of intent execution, layer structure, and aesthetic/typographic integrity. The gating and synergy mechanisms in MiLDEScore prevent vacuous success on irrelevant dimensions and improve interpretability. A plausible implication is that MiLDEEval’s holistic rigor makes it well suited as a primary benchmark for future multimodal editing research, especially in settings where reasoning about layered structure is necessary for authentic document modification (Lin et al., 8 Jan 2026).

References (1)
