
PaperBanana Framework

Updated 3 February 2026
  • PaperBanana is an agentic framework for generating publication-ready academic illustrations, eliminating manual bottlenecks in creating methodological diagrams and statistical plots.
  • It integrates five specialized agents—Retriever, Planner, Stylist, Visualizer, and Critic—using state-of-the-art vision-language models for iterative self-critique and plan refinement.
  • Empirical evaluations on PaperBananaBench demonstrate significant improvements in faithfulness, conciseness, readability, and aesthetics over existing baselines.

PaperBanana is an agentic framework for automated generation of publication-ready academic illustrations, designed to eliminate the manual bottleneck in creating methodology diagrams and statistical plots for AI research. It formalizes illustration generation as the mapping of a source context (e.g., a method description) and communicative intent (e.g., figure caption) into a scholarly-quality figure, leveraging reference examples through zero-shot or retrieval-augmented approaches. The system orchestrates five specialized agents, each operating via state-of-the-art vision-LLMs (VLMs) and image generation modules, with iterative self-critique to ensure faithfulness and visual quality. Rigorous empirical evaluation using the PaperBananaBench demonstrates significant enhancements over prior baselines in faithfulness, conciseness, readability, and aesthetics, both for methodological diagrams and statistical plots (Zhu et al., 30 Jan 2026).

1. Task Formalization and System Pipeline

PaperBanana aims to convert a source context S (a textual method description) and communicative intent C (a figure caption) into an academic illustration I, potentially conditioned on a set of reference triplets E = {(S_n, C_n, I_n)}_{n=1}^N. The formalization is I = f(S, C, E), enabling both zero-shot (empty E) and retrieval-augmented generation. The architecture comprises five sequential agents: Retriever, Planner, Stylist, Visualizer, and Critic. The orchestration follows a linear planning phase and a three-round iterative self-critique loop.

| Agent | Inputs | Outputs |
|---|---|---|
| Retriever | S, C, reference pool {(S_i, C_i, I_i)} ⊂ R | E (top-matched reference triplets) |
| Planner | S, C, E | P (textual diagram plan) |
| Stylist | P, G (style guide) | P* (styled plan) |
| Visualizer | P_t | I_t = ImageGen(P_t) |
| Critic | I_t, S, C, P_t | P_{t+1} (refined plan) |

The pipeline first retrieves similar prior diagrams based on visual structure, plans and styles a diagram, then iteratively visualizes and critiques, aiming for a final, publication-ready output.
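The linear planning phase followed by the fixed critique loop can be sketched as follows. This is a hypothetical skeleton, not the paper's implementation: every agent function below is a stand-in for a VLM-backed component.

```python
# Hypothetical sketch of the PaperBanana orchestration: a linear planning
# phase (retrieve -> plan -> stylize) followed by a fixed three-round
# visualize/critique loop. All agent functions are placeholders for the
# paper's VLM-backed agents, not a real API.

T = 3  # number of self-critique rounds used by the framework

def retrieve(S, C, pool):
    # Placeholder: the real Retriever scores visual-structural similarity.
    return pool[:3]

def plan(S, C, E):
    return f"plan for '{C}' grounded in {len(E)} references"

def stylize(P, guide):
    return f"{P} | styled with {guide}"

def visualize(P):
    # Placeholder for ImageGen(P_t).
    return f"image({P})"

def critique(I, S, C, P):
    # Placeholder: the real Critic returns a refined plan P_{t+1}.
    return f"{P} [refined]"

def paperbanana(S, C, pool, guide="NeurIPS look"):
    E = retrieve(S, C, pool)
    P = stylize(plan(S, C, E), guide)
    I = None
    for _ in range(T):
        I = visualize(P)
        P = critique(I, S, C, P)
    return I

print(paperbanana("method text", "Figure 1: overview", [("s", "c", "i")]))
```

Note that the final image is rendered from the plan refined in the first T−1 rounds; the last critique's output would seed a further round if T were increased.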

2. Agent Specialization and Workflow

Retriever

The Retriever agent employs a generative retrieval strategy with a dedicated vision-LLM, VLM_Ret. It computes a learned matching score for each candidate reference (S_i, C_i), emphasizing visual structural similarity over topical correlation, and selects a top-ranked set E.
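The retrieval step reduces to score-and-take-top-k. In the sketch below, `vlm_score` is a hypothetical stand-in for VLM_Ret's learned matching score; a real implementation would prompt the VLM rather than compare word overlap.

```python
# Hypothetical sketch of top-k generative retrieval. vlm_score stands in
# for VLM_Ret's matching score (visual-structural similarity, not topic).

def vlm_score(S, C, S_i, C_i):
    # Placeholder heuristic: word overlap between method descriptions.
    # The real agent would query the VLM for a structural-similarity score.
    return len(set(S.split()) & set(S_i.split()))

def retrieve_references(S, C, pool, k=3):
    # pool: list of (S_i, C_i, I_i) reference triplets from R
    scored = sorted(pool, key=lambda t: vlm_score(S, C, t[0], t[1]),
                    reverse=True)
    return scored[:k]  # E: the top-matched reference triplets
```

Because the score conditions on the query, this is "generative" retrieval in the sense that the matcher is an LLM judgment rather than a fixed embedding distance.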

Planner

The Planner agent uses a large VLM to synthesize a detailed textual plan P of the prospective diagram. It performs few-shot prompting with selected examples:

```
function PLAN_DIAGRAM(S, C, E):
    prompt ← few-shot examples from E plus (S, C)
    P ← VLM_generate(prompt)
    return P
```

This stage enumerates the diagram's elements and their interconnections for the downstream agents.

Stylist

The Stylist agent constructs an auto-summarized style guide G from the reference set, incorporating conventions for color palettes, typography, shapes, and layout (the "NeurIPS look"). It stylizes the plan: P* = VLM_style(P, G).
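Conceptually, the Stylist folds the style guide's constraints into the textual plan before rendering. The sketch below is illustrative only: the guide's fields and values are invented, and the real agent performs this step via a VLM call rather than string concatenation.

```python
# Illustrative sketch of plan styling. STYLE_GUIDE's contents are invented
# examples of the conventions an auto-summarized guide G might capture.

STYLE_GUIDE = {
    "palette": ["#4C72B0", "#DD8452", "#55A868"],
    "font": "Helvetica",
    "layout": "left-to-right flow, rounded boxes",
}

def stylize_plan(plan: str, guide: dict) -> str:
    # In the framework this is P* = VLM_style(P, G); here we simply
    # append the style constraints to the textual plan.
    rules = "; ".join(f"{k}: {v}" for k, v in guide.items())
    return f"{plan}\nStyle constraints: {rules}"
```

Keeping the style as an explicit, inspectable artifact is what makes the later ablation (removing the Stylist) meaningful.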

Visualizer

The Visualizer translates styled plans to pixel-based illustrations using either Nano-Banana-Pro (v2025) or GPT-Image-1.5, both bespoke image generation models with high diagrammatic fidelity. For statistical plots, Visualizer can emit executable Matplotlib code.

Critic

The Critic agent evaluates the generated image I_t against S, C, and the current plan P_t, producing a refined plan P_{t+1} via VLM-based feedback. A fixed number of iterative rounds (T = 3) balances faithfulness with aesthetics.

3. Model Selection, Prompting, and Evaluation Metrics

The framework’s backbone is Gemini-3-Pro (VLM) for retrieval, planning, styling, and judgment; Nano-Banana-Pro and GPT-Image-1.5 serve as the Visualizer. System prompts are engineered for each agent—no fine-tuning is performed, relying solely on in-context learning.

Evaluation employs the PaperBananaBench: 292 test cases of methodology diagrams from NeurIPS 2025, stratified by domain and style. Key metrics (range: [0, 100]) include:

  • Faithfulness F(I; S, C): correspondence of I to S and C
  • Conciseness C(I; S, C): signal-to-noise ratio
  • Readability R(I): visual clarity and non-overlapping elements
  • Aesthetics A(I): compliance with the domain style guide

The hierarchical overall score Ω prioritizes faithfulness and readability, implementing tie-breaking based on conciseness and aesthetic adherence.
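One way to realize such a hierarchical ordering is lexicographic comparison: rank primarily on faithfulness plus readability, and break ties with conciseness plus aesthetics. The exact aggregation behind Ω is not reproduced here, so the sketch below is an illustrative ordering only.

```python
# Illustrative lexicographic ranking consistent with the description of
# the overall score: faithfulness and readability dominate; conciseness
# and aesthetics break ties. The paper's actual aggregation may differ.

def sort_key(scores):
    # scores: dict with keys "F", "R", "C", "A", each in [0, 100]
    primary = scores["F"] + scores["R"]    # faithfulness + readability
    tiebreak = scores["C"] + scores["A"]   # conciseness + aesthetics
    return (primary, tiebreak)

def rank(candidates):
    # Best candidate first; tuple comparison gives the hierarchy for free.
    return sorted(candidates, key=sort_key, reverse=True)
```

Tuple comparison in Python is itself lexicographic, so the tie-break falls out of the key structure with no extra logic.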

4. Empirical Performance and Ablation Analysis

On PaperBananaBench, PaperBanana achieves improvements over existing baselines:

| Method | F↑ | C↑ | R↑ | A↑ | Overall |
|---|---|---|---|---|---|
| GPT-Image-1.5 | 4.5 | 37.5 | 30.0 | 37.0 | 11.5 |
| Nano-Banana-Pro | 43.0 | 43.5 | 38.5 | 65.5 | 43.2 |
| Few-shot Nano-Banana | 41.6 | 49.6 | 37.6 | 60.5 | 41.8 |
| Paper2Any (Nano-Banana) | 6.5 | 44.0 | 20.5 | 40.0 | 8.5 |
| PaperBanana (Ours) | 45.8 | 80.7 | 51.4 | 72.1 | 60.2 |
| Human Reference | 50.0 | 50.0 | 50.0 | 50.0 | 50.0 |

All improvements are statistically significant (p < 0.01). Removal of the Stylist component reduces conciseness by 17.5 points; omission of the Critic component decreases faithfulness by 15.1 points. Qualitative assessment indicates increased logical structure preservation and domain-appropriate palettes in PaperBanana outputs (Zhu et al., 30 Jan 2026).

5. Extension to Statistical Plots

PaperBanana directly extends to statistical plot synthesis by generating Matplotlib code in the Visualizer phase. The system applies a dedicated plot-style guide and retains the Retriever and Planner stages. On ChartMimic subset benchmarks (240 test cases), PaperBanana demonstrates an overall 4.1-point improvement over vanilla Gemini-3-Pro, with some instances surpassing human references in clarity and visual discipline.
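For this path the Visualizer emits executable Matplotlib code rather than pixels. A minimal example of the kind of script it might produce, using the Overall column from the benchmark table above for illustration (the exact styling choices here are invented):

```python
import matplotlib
matplotlib.use("Agg")  # headless backend, suitable for generated scripts
import matplotlib.pyplot as plt

# Overall benchmark scores from the table above (selected rows)
methods = ["Nano-Banana-Pro", "Few-shot\nNano-Banana", "PaperBanana"]
overall = [43.2, 41.8, 60.2]

fig, ax = plt.subplots(figsize=(4, 3))
ax.bar(methods, overall, color=["#B0B0B0", "#8FA8C8", "#4C72B0"])
ax.set_ylabel("Overall score")
ax.set_title("Overall benchmark score by method")
# Declutter in the spirit of the readability/conciseness metrics
ax.spines["top"].set_visible(False)
ax.spines["right"].set_visible(False)
fig.tight_layout()
fig.savefig("overall_scores.png", dpi=200)
```

Emitting code rather than raster output is what lets the plot path sidestep the vector-versus-bitmap limitation discussed below.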

6. Limitations and Prospects

Identified limitations include:

  • Raster versus vector output: PaperBanana’s bitmap illustrations are less amenable to post-generation editing. Potential advances involve deploying autonomous agents for vector graphic tools.
  • Style rigidity versus creative diversity: The reliance on a fixed, auto-inferred style guide can diminish variety. Parameterizable or user-driven style selection is a target for further work.
  • Fine-grained faithfulness: Certain failures, e.g., details such as precise arrow endpoints, remain challenging. Enhanced VLM perception or graph-based validation is proposed.
  • Evaluation paradigms: "VLM-as-Judge" is susceptible to subjective bias. Structure-based metrics and reward learning are suggested alternatives.
  • Test-time preference adaptation: Present design generates a single illustration; future iterations may incorporate candidate generation with preference-ranking.
  • Applicability beyond academic illustrations: Potential applications extend to patent schematics, UI/UX mockups, and industrial diagrams.

PaperBanana offers a unified, modular, and empirically validated approach for automating academic illustration, setting a foundation for agentic scientific workflows (Zhu et al., 30 Jan 2026).
