
PRBench: Automated Research Promo Benchmark

Updated 15 October 2025
  • PRBench is a multimodal evaluation benchmark that pairs peer-reviewed papers with promotional social media posts and scores generated posts along three axes: fidelity, audience engagement, and platform alignment.
  • The companion PRAgent framework employs a structured three-stage process (content extraction, collaborative synthesis, and platform-specific adaptation) to automate scholarly promotion.
  • Empirical results show that agent-based systems like PRAgent substantially boost engagement, including up to a 604% increase in watch time and a 438% rise in likes.

PRBench is a multimodal evaluation benchmark designed to rigorously assess automatic research promotion systems that transform peer-reviewed academic papers into engaging and contextually optimized social media posts. PRBench links each paper to its corresponding promotional content and evaluates generated posts along three axes: Fidelity (accuracy and tone), Engagement (audience interest and response), and Alignment (adaptation to platform norms and timing). It serves both as a standardized dataset for benchmarking and as an experimental bed for developing agent-based systems, such as PRAgent, that seek to automate scholarly communication workflows.

1. Benchmark Structure and Data Generation

PRBench comprises a dataset of 512 peer-reviewed articles paired with high-quality, human-authored promotional posts. Each task instance specifies a research document $\mathcal{D}$ and dissemination parameters $(\mathbb{T}_p, \mathbb{T}_a)$, with the system's goal to generate an optimal post $\hat{P}$ for the target platform and audience:

$$\hat{P} = \arg\max_{P} \Pr(P \mid \mathcal{D}, \mathbb{T}_p, \mathbb{T}_a)$$

The dataset is constructed via three steps:

  • Data Collection: Sourcing documents from scholarly repositories and collecting corresponding promotional posts from official or reputable social channels.
  • Pairing and Curation: Manual and semi-automated verification ensure accurate mapping between documents and their authentic promotional counterparts, considering multimodal content (textual, visual, and supplementary media).
  • Quality Control: Human annotation, including fact-checking, fidelity estimation, and textual-visual coherence assessment, filters out incomplete or noisy pairings.

PRBench supports experimentation in AutoPR, the task of automatic academic promotion, by providing the data foundation for both baseline and novel agent-based approaches.
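
As an illustration, a task instance can be represented as a small record holding the document, dissemination parameters, and reference post. The Python sketch below uses hypothetical field names and values; the released dataset's exact schema may differ.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class PRBenchInstance:
    """One benchmark task: a paper plus dissemination parameters and a reference post."""
    paper_id: str                         # identifier of the peer-reviewed article
    document_text: str                    # parsed full text of the research document D
    figures: List[str] = field(default_factory=list)  # paths to extracted figure images
    platform: str = "twitter"             # target platform T_p (illustrative value)
    audience: str = "expert"              # target audience T_a (e.g., "expert", "layperson")
    reference_post: Optional[str] = None  # human-authored promotional post used for evaluation

def load_instances(records: List[dict]) -> List[PRBenchInstance]:
    """Map curated paper-post records onto task instances."""
    return [PRBenchInstance(**r) for r in records]
```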

2. Evaluation Protocol and Objective Function

The evaluation is framed as a multi-objective optimization problem, with a scoring function comprising three principal axes:

$$\max_P \left\{ \alpha_1 \cdot \mathcal{S}_{\mathrm{Fidelity}}(P \mid \mathcal{D}) + \alpha_2 \cdot \mathcal{S}_{\mathrm{Align}}(P \mid \mathbb{T}_p) + \alpha_3 \cdot \mathcal{S}_{\mathrm{Engage}}(P \mid \mathbb{T}_a) \right\}$$

Where:

  • Fidelity ($\mathcal{S}_{\mathrm{Fidelity}}$): Measures factual correctness, author/title accuracy, and completeness using a weighted factual checklist:

$$\mathcal{S}_{\mathrm{Checklist}}(P \mid \mathcal{D}) = \frac{\sum_i w_i \cdot v(P \mid c_i, \mathcal{D})}{\sum_i w_i}$$

Each $v(P \mid c_i, \mathcal{D})$ is a normalized fact coverage score, and $w_i$ is the importance weight of fact $c_i$.

  • Engagement ($\mathcal{S}_{\mathrm{Engage}}$): Quantifies attention-capturing strength (hook score), narrative flow (logical attractiveness), integrated visuals, and effectiveness of call-to-action elements. Metrics include both expert and layperson audience preferences, measured via pairwise comparison and behavioral analytics (e.g., watch time, likes).
  • Alignment ($\mathcal{S}_{\mathrm{Align}}$): Evaluates conformance to dissemination platform norms, including tone, technical style, visual-text integration, proper hashtag/mention strategy, and timing optimization.

Each axis is scored by expert raters with well-defined rubrics; ablation studies validate sub-metric contributions.
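
To make the scoring concrete, the following Python sketch implements the weighted checklist score and the composite objective above. The specific facts, weights, and alpha values are illustrative assumptions, not the benchmark's official configuration.

```python
from typing import Dict

def checklist_fidelity(coverage: Dict[str, float], weights: Dict[str, float]) -> float:
    """Weighted factual checklist: sum_i w_i * v(P | c_i, D) / sum_i w_i.

    coverage[c] is the normalized coverage score v in [0, 1] for checklist fact c;
    weights[c] is that fact's importance weight w_i.
    """
    total_weight = sum(weights.values())
    if total_weight == 0:
        return 0.0
    return sum(weights[c] * coverage.get(c, 0.0) for c in weights) / total_weight

def composite_score(fidelity: float, alignment: float, engagement: float,
                    alphas=(1/3, 1/3, 1/3)) -> float:
    """Weighted combination of the three PRBench axes (alpha weights are illustrative)."""
    a1, a2, a3 = alphas
    return a1 * fidelity + a2 * alignment + a3 * engagement

# Example: three checklist facts with different importance weights.
coverage = {"title": 1.0, "main_claim": 0.8, "dataset_size": 0.0}
weights  = {"title": 1.0, "main_claim": 2.0, "dataset_size": 1.0}
fid = checklist_fidelity(coverage, weights)   # (1.0 + 1.6 + 0.0) / 4.0 = 0.65
print(composite_score(fid, alignment=0.7, engagement=0.6))
```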

3. Agent-Based Content Generation: The PRAgent Framework

PRAgent is a multi-agent framework orchestrated over PRBench for automated, scalable content generation and adaptation:

  • Stage 1: Content Extraction and Structuring
    • Converts raw PDFs into machine-readable formats for text and visuals.
    • Applies hierarchical parsing and summarization ($\mathcal{D}^{\mathrm{sum}}_T = \text{Summarize}(\text{Parse}(\mathcal{D}^{\mathrm{raw}}_T))$) to aggregate principal document claims; visuals are extracted using PDF2Img and layout segmentation.
  • Stage 2: Collaborative Content Synthesis
    • Logical Draft Agent produces a structured research summary.
    • Visual Analysis Agent interprets paired images and captions using multimodal LLMs.
    • Textual Enriching Agent transforms the draft into a platform-appropriate post, incorporating hooks and calls-to-action.
    • Visual-Text-Interleaved Combination Agent merges narrative and imagery, generating placeholders for real image integration.
  • Stage 3: Platform-Specific Adaptation
    • Orchestration Agent tailors outputs—format, emoji/hashtag usage, mentions—to audience and dissemination channel.
    • Final post assembled in Markdown-ready format, with visuals inserted per channel conventions.

Prompt engineering is formalized, and comprehensive instructions for agent specialization are provided in the appendix. Each agent’s outputs are quantitatively evaluated per the PRBench protocol.
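
A minimal sketch of this three-stage orchestration is shown below. The function names and the plain text-in/text-out LLM interface are illustrative assumptions rather than the released PRAgent implementation.

```python
from dataclasses import dataclass
from typing import Callable, List

# A hypothetical LLM interface: any chat-completion client wrapped as text-in/text-out.
LLM = Callable[[str], str]

@dataclass
class PaperAssets:
    summary: str            # hierarchical summary D^sum_T produced in Stage 1
    figure_paths: List[str]  # figure images pulled from the PDF

def stage1_extract(pdf_path: str, parse, summarize, extract_figures) -> PaperAssets:
    """Stage 1: parse the PDF, summarize the text, and extract figures."""
    raw = parse(pdf_path)
    return PaperAssets(summary=summarize(raw), figure_paths=extract_figures(pdf_path))

def stage2_synthesize(assets: PaperAssets, llm: LLM) -> str:
    """Stage 2: draft a summary, enrich it, and interleave figure placeholders."""
    draft = llm(f"Write a structured research summary:\n{assets.summary}")
    enriched = llm(f"Rewrite as a social post with a hook and call-to-action:\n{draft}")
    placeholders = "\n".join(f"[FIGURE {i}]" for i, _ in enumerate(assets.figure_paths, 1))
    return enriched + "\n" + placeholders

def stage3_adapt(post: str, platform: str, audience: str, llm: LLM) -> str:
    """Stage 3: adapt tone, hashtags, and formatting to the target channel."""
    return llm(f"Adapt this post for {platform} targeting a {audience} audience, "
               f"following that platform's hashtag and emoji conventions:\n{post}")
```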

4. Formal Metric Definitions and Experimental Results

Performance on PRBench is evaluated both in controlled offline settings and in real-world social dissemination experiments. Notable empirical findings include:

  • PRAgent achieves a 604% increase in total watch time, a 438% rise in likes, and at least 2.9x higher overall engagement relative to direct LLM pipeline baselines.
  • Fidelity, Engagement, and Alignment scores consistently favor the multi-agent, staged approach, as outlined in main and ablation results tables.
  • The platform-specific adaptation stage is especially impactful: ablating it drops Alignment scores from 79.38 to 71.36.

These improvements are confirmed across diverse platforms and audience demographics.

5. Ablation and Component Studies

Ablation studies distinguish contributions of each pipeline stage:

  • Eliminating Stage 1 (Content Extraction) reduces Fidelity scores by more than four points, confirming the necessity of hierarchical summarization and multi-modal extraction for accurate fact coverage.
  • Omitting Stage 2 (Content Synthesis) negatively affects all axes, with Alignment suffering most, highlighting the benefit of collaborative agent-driven draft refinement.
  • Removing Stage 3 (Platform-Specific Adaptation) results in the strongest decrease in outreach-related metrics, establishing the importance of social channel modeling for successful engagement.

Component-wise evaluation quantifies the marginal utility of hooks, calls-to-action, imagery integration, and audience targeting.

6. Implications and Directions for Automated Scholarly Communication

PRBench positions AutoPR as a tractable, measurable research task for multimodal adaptation of academic work to social dissemination channels. The robust benchmark, backed by reproducible scoring and agent-based generation, enables:

  • Comparative studies across architectures, agent designs, and platform-targeting strategies.
  • Reduced reliance on manual promotion, facilitating scalable outreach for the scholarly community.
  • A roadmap for integrating visual analytics, personalization, and broader dissemination contexts such as policy and journalism.

It is anticipated that PRBench will seed new advances in adaptive language modeling, multimodal content synthesis, and automated communication, fostering cross-disciplinary impact in academia and beyond.


PRBench offers a rigorous, empirical framework for evaluating automated scholarly communication, enabling comparison and optimization for multi-objective tasks central to research visibility and outreach (Chen et al., 10 Oct 2025).
