ComPABench: Multi-Domain ML Benchmarks
- ComPABench is a suite of benchmarks evaluating compositional and comparative reasoning in vision-language models, multimodal LLMs, and scientific software.
- It employs methodologies such as reinforcement learning, supervised fine-tuning, and constraint programming to assess complex, multi-task challenges.
- Key findings reveal significant gaps in out-of-distribution generalization and underline the need for advanced multi-stage reasoning protocols.
ComPABench refers to several distinct, high-visibility benchmarks and frameworks in machine learning and AI, each targeting compositional, comparative, performance, or constraint-programming abilities. The term appears as a shorthand or in-title variant across disparate domains: compositional reasoning in vision–LLMs, comparative reasoning in multimodal LLMs, complex instruction-guided image editing, continuous performance benchmarking for scientific codebases, and LLM-driven constraint modeling. This article provides a detailed synthesis of the ComPABench landscape, focusing on the principal usages, data specifications, methodologies, evaluation metrics, and empirical findings as established in leading arXiv research.
1. ComPABench for Compositional Reasoning in Vision–LLMs
Originally introduced in "Unveiling the Compositional Ability Gap in Vision-Language Reasoning Model" (Li et al., 26 May 2025), ComPABench is a diagnostic benchmark for probing compositional generalization in vision–LLMs (VLMs). It systematically separates (i) cross-modal composition, (ii) cross-task composition, and (iii) out-of-distribution (OOD) compositional generalization.
The task suite is constructed around two unimodal skills—Shape Area (A) and Grid Position (B)—and their composition B∘A. Datasets are partitioned along modality (PT- for pure-text, MM- for multimodal), composition, and OOD variants. Each unimodal split contains 4,000 training and 500 evaluation samples, while each compositional split provides 500 evaluation samples.
Evaluation metrics include:
- Accuracy: exact-match correctness averaged over an evaluation split $\mathcal{D}$, defined as $\mathrm{Acc} = \frac{1}{|\mathcal{D}|}\sum_{(x_i, y_i)\in\mathcal{D}} \mathbb{1}[\hat{y}_i = y_i]$.
- Compositional generalization gap: difference in accuracy between in-distribution (ID) and OOD composition.
- Reinforcement learning reward: the sum of a terminal reward for a correct final answer and progress rewards for verifiable subgoals (sketched below).
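A minimal Python sketch of these three quantities follows; the function names, the 0.2 progress-reward weight, and the subgoal encoding are illustrative assumptions rather than the benchmark's reference implementation.

```python
from statistics import mean

def accuracy(predictions, labels):
    """Exact-match accuracy over one evaluation split."""
    return mean(1.0 if p == y else 0.0 for p, y in zip(predictions, labels))

def compositional_gap(id_acc, ood_acc):
    """Gap between in-distribution and OOD compositional accuracy."""
    return id_acc - ood_acc

def shaped_reward(answer_correct, subgoals_met, terminal=1.0, progress=0.2):
    """Terminal reward for a correct final answer plus a progress reward
    for each verifiable subgoal solved (weights here are hypothetical)."""
    return (terminal if answer_correct else 0.0) + progress * sum(subgoals_met)

# A rollout that solved both intermediate subgoals but missed the final answer.
print(shaped_reward(False, [True, True]))  # 0.4
```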
ComPABench enables controlled, verifiable analysis of how training protocols (supervised fine-tuning vs. reinforcement learning, visual-to-text grounding, progressive reward shaping) affect skill integration and transfer in VLMs.
2. Design and Evaluation Protocols
The ComPABench compositional suite defines precise transformations over synthetic visual–textual tasks:
- Shape Area (A): reasoning over geometric area given text or image.
- Grid Position (B): localization tasks (nearest/farthest neighbor) in grid-based coordinates.
- Composition (B∘A): requiring models to sequentially perform B then A (e.g., sum areas of a target and its nearest neighbor); a toy instance is sketched below.
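For concreteness, here is a toy instance of the composed task under an assumed shape encoding (the field names and values are illustrative, not the benchmark's data format): task B localizes the target's nearest neighbor on the grid, and task A then sums the two areas.

```python
import math

shapes = {
    "square_1": {"pos": (0, 0), "kind": "square", "side": 3},
    "rect_1":   {"pos": (1, 4), "kind": "rectangle", "w": 2, "h": 5},
    "square_2": {"pos": (5, 1), "kind": "square", "side": 4},
}

def area(s):
    """Task A: compute the area of a single shape."""
    return s["side"] ** 2 if s["kind"] == "square" else s["w"] * s["h"]

def nearest_neighbor(target, shapes):
    """Task B: find the shape closest to the target on the grid."""
    others = (name for name in shapes if name != target)
    return min(others, key=lambda n: math.dist(shapes[target]["pos"], shapes[n]["pos"]))

# Composition B∘A: localize the nearest neighbor, then sum the two areas.
target = "square_1"
nn = nearest_neighbor(target, shapes)
print(nn, area(shapes[target]) + area(shapes[nn]))  # rect_1, 9 + 10 = 19
```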
Formal compositional splits are established:

| Task Type | Train | Test |
|---|---|---|
| Cross-Modal | PT-GR, PT-SR | PT-GR, PT-SR, MM-GR, MM-SR |
| Cross-Task Pure-Text | PT-GR, PT-SR | PT-GR, PT-SR, PT-Comp |
| Cross-Task Multimodal | MM-GR, MM-SR | MM-GR, MM-SR, MM-Comp |
| OOD Composition | MM-GR, MM-SR | MM-GR-OOD, MM-SR-OOD, MM-Comp-OOD |
Training strategies include supervised fine-tuning (SFT), vanilla RL, group relative policy optimization (GRPO), and RL with explicit grounding ("caption-before-thinking").
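The group relative policy optimization step can be illustrated with the standard group-normalized advantage computation below; this is a generic GRPO-style sketch, not the paper's training code, and the example rewards are made up.

```python
import statistics

def group_relative_advantages(rewards, eps=1e-6):
    """GRPO-style advantages: normalize each sampled response's reward by the
    mean and standard deviation of rewards within its prompt group."""
    mu = statistics.mean(rewards)
    sigma = statistics.pstdev(rewards)
    return [(r - mu) / (sigma + eps) for r in rewards]

# Rewards for four responses sampled for the same prompt (terminal + progress).
print(group_relative_advantages([1.2, 0.2, 0.4, 1.0]))
```

Responses rewarded above the group mean receive positive advantages and are reinforced; the rest are suppressed.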
Key findings:
- RL-trained VLMs outperform SFT counterparts on compositionality, but large modality gaps remain (e.g., 16.2% for SFT vs. 28% for RL on cross-modal transfer with the 7B model).
- Caption-before-thinking and progressive rewards yield the largest compositional performance gains, up to +21.6 pts.
- OOD compositional generalization remains a significant bottleneck, only partially overcome by RL+grounding protocols.
3. Related ComPABench Benchmarks: Comparative Reasoning in Multimodal LLMs
A distinct benchmark, "MLLM-CompBench" (Kil et al., 23 Jul 2024), also sometimes referenced as ComPABench, focuses on visually grounded comparative reasoning in multimodal LLMs. It comprises 39,800 annotated image pairs (triplets: (I₁, I₂, Q, A*)) spanning eight dimensions: attribute, existence, state, emotion, temporality, spatiality, quantity, and quality. The construction pipeline mines image pairs from 14 datasets, applies CLIP similarity-based filtering (), and combines expert annotation (Cohen’s κ ≥ 0.75 required) for ground-truth answers.
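A rough sketch of the CLIP similarity filtering step is given below using the Hugging Face transformers CLIP API; the checkpoint name and similarity band are assumptions for illustration, since the benchmark's exact threshold is not reproduced here.

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def keep_pair(path_a, path_b, lo=0.6, hi=0.95):
    """Keep an image pair if its CLIP embedding similarity falls in a band:
    similar enough to be comparable, distinct enough to be non-duplicate.
    The (lo, hi) thresholds are illustrative assumptions."""
    images = [Image.open(path_a), Image.open(path_b)]
    inputs = processor(images=images, return_tensors="pt")
    with torch.no_grad():
        feats = model.get_image_features(**inputs)
    sim = torch.nn.functional.cosine_similarity(feats[0:1], feats[1:2]).item()
    return lo <= sim <= hi
```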
The evaluation protocol emphasizes accuracy, micro-F1 for multi-class tasks, and a transitive consistency metric over logical triads: if a model judges A over B and B over C, it must also judge A over C. A generic check of this property is sketched below.
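The following implementation of such a transitivity check is a sketch under an assumed data layout; the benchmark's own scoring scripts may differ.

```python
from itertools import permutations

def transitive_consistency(judgments):
    """Fraction of directed triads on which pairwise preferences are transitive:
    whenever the model prefers a over b and b over c, it should prefer a over c.
    `judgments[(x, y)] == True` means x was judged over y."""
    items = sorted({x for pair in judgments for x in pair})
    consistent = total = 0
    for a, b, c in permutations(items, 3):
        if judgments.get((a, b)) and judgments.get((b, c)):
            total += 1
            consistent += bool(judgments.get((a, c)))
    return consistent / total if total else 1.0

# Three images compared pairwise on "which scene contains more umbrellas?"
prefs = {("img1", "img2"): True, ("img2", "img3"): True, ("img1", "img3"): True}
print(transitive_consistency(prefs))  # 1.0
```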
This taxonomy uniquely addresses paired, cross-image reasoning (e.g., "Which image contains more umbrellas?") and exposes MLLM failures in cross-instance spatial, counting, and subtle change detection.
4. Continuous Performance Benchmarking (ComPABench in Scientific Software)
A separate usage, "Continuous Performance Benchmarking Framework for ROOT" (Shadura et al., 2018), presents ComPABench as a performance benchmarking infrastructure for ROOT, the HEP scientific data analysis platform. Here, ComPABench instantiates:
- Fine-grained, Google Benchmark–style micro-benchmarks for hot routines (e.g., GenVector operations).
- Data collection via Jenkins-integrated builds, running on reserved hardware, emitting results to InfluxDB time series, and dashboarded in Grafana.
- Metrics include real/cpu time, memory usage, vectorization speedup, and confidence intervals; performance regressions are tracked by Δ% = (T_current – T_baseline) / T_baseline × 100%.
This ComPABench variant is thus a software engineering tool, not a multimodal or reasoning challenge.
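A minimal sketch of the regression rule above follows, assuming a simple name-to-timing layout for current results and stored baselines; the tolerance and the benchmark name are illustrative.

```python
def regression_pct(t_current, t_baseline):
    """Relative slowdown versus the stored baseline:
    Δ% = (T_current − T_baseline) / T_baseline × 100."""
    return (t_current - t_baseline) / t_baseline * 100.0

def flag_regressions(current, baselines, tolerance_pct=5.0):
    """Return the benchmarks whose timing regressed beyond the tolerance."""
    return {
        name: regression_pct(t, baselines[name])
        for name, t in current.items()
        if regression_pct(t, baselines[name]) > tolerance_pct
    }

# Example: a GenVector-style micro-benchmark timing in nanoseconds.
print(flag_regressions({"DeltaPhi": 130.0}, {"DeltaPhi": 118.0}))  # ≈ +10.2%
```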
5. Additional Notable Usages: Constraint Programming and Image Editing
Constraint Modelling (CP-Bench/ComPABench)
"CP-Bench: Evaluating LLMs for Constraint Modelling" (Michailidis et al., 6 Jun 2025) sometimes references its dataset as ComPABench. It comprises 101 combinatorial problem instances, covering domains such as CSPLib, CPMpy, and OR-Tools with varying abstraction levels. Evaluation is on solution-level accuracy (i.e., an LLM-generated model’s feasibility and optimality match the ground truth), and key findings are:
- Python-based libraries (CPMpy: 63% accuracy for GPT-4.1-mini) yield higher LLM accuracy than domain-specific languages (MiniZinc: 50% for GPT-4.1-mini).
- Documentation-rich prompts and sampling/self-verification at inference time raise the best LLM accuracy to 70%.
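To make the NL→CP task concrete, below is a toy CPMpy model of the kind an LLM is expected to synthesize from a natural-language description; the puzzle is invented for illustration and is not one of the 101 benchmark instances.

```python
from cpmpy import *

# "Pick four distinct digits from 1-9, in strictly increasing order,
# that sum to 20." Solution-level evaluation would solve the generated
# model and check the result against the ground-truth answer.
digits = intvar(1, 9, shape=4, name="digits")
model = Model(
    AllDifferent(digits),
    digits[0] < digits[1],
    digits[1] < digits[2],
    digits[2] < digits[3],
    digits.sum() == 20,
)

if model.solve():
    print(digits.value())  # one feasible assignment, e.g. [1 2 8 9]
```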
Complex Instruction-Guided Image Editing (CompBench)
"CompBench: Benchmarking Complex Instruction-guided Image Editing" (Jia et al., 18 May 2025) is a large-scale, MLLM–human collaborative benchmark specifically for complex, fine-grained image editing. It delineates nine sub-tasks spanning local editing, multi-object/turn, action/dynamics, spatial, viewpoint, and implicit reasoning categories. Instruction decoupling is formalized as
with tailored metrics for foreground edit accuracy (LC-I), instruction faithfulness (LC-T), and background consistency (PSNR, SSIM, LPIPS). No single model leads on all tasks; Step1X-Edit is the most consistent across 17/29 metrics, but substantial gaps remain on implicit and 3D viewpoint edits.
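For the background-consistency side, a sketch using standard PSNR/SSIM/LPIPS implementations is shown below; the mask handling (zeroing out the edited region) is an assumption for illustration, and the benchmark's official evaluation scripts may compute these differently.

```python
import torch
import lpips
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

lpips_fn = lpips.LPIPS(net="alex")  # perceptual distance network

def background_consistency(src, edited, edit_mask):
    """src/edited: HxWx3 uint8 arrays; edit_mask: HxW bool, True inside the edit.
    Compares the two images only outside the edited region."""
    bg = ~edit_mask[..., None]
    a, b = src * bg, edited * bg  # zero out the edited region
    psnr = peak_signal_noise_ratio(a, b, data_range=255)
    ssim = structural_similarity(a, b, channel_axis=-1, data_range=255)
    to_t = lambda x: torch.from_numpy(x).permute(2, 0, 1)[None].float() / 127.5 - 1.0
    lpips_score = lpips_fn(to_t(a), to_t(b)).item()
    return psnr, ssim, lpips_score
```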
6. Comparative Table of Key ComPABench-Inspired Benchmarks
| Context | ComPABench Usage | Primary Goal |
|---|---|---|
| Vision–Language Compositionality (Li et al., 26 May 2025) | Diagnostic benchmark for VLMs | Evaluate & close compositional ability gaps |
| Multimodal Comparative Reasoning (Kil et al., 23 Jul 2024) | Large-scale paired reasoning | Quantify relative reasoning over image pairs |
| ROOT Perf. Benchmarking (Shadura et al., 2018) | Continuous perf. benchmarking | Track hot-spot function speed/memory/correctness |
| Constraint Modelling (Michailidis et al., 6 Jun 2025) | CP model synthesis evaluation | Assess LLMs for NL→CP code generation |
| Instructional Image Editing (Jia et al., 18 May 2025) | Real-world edit challenge | Stress-test fine-grained, multi-factor edit fidelity |
7. Limitations, Challenges, and Research Directions
The various ComPABench incarnations identify recurrent limitations:
- Vision–Language compositional generalization remains partially solved: OOD transfer is suboptimal even for RL-grounded VLMs, with catastrophic forgetting common under SFT (Li et al., 26 May 2025).
- Comparative and paired reasoning (attribute, existence, spatiality) reveal persistent error clusters in even state-of-the-art MLLMs, particularly for spatial, counting, and nuanced object changes (Kil et al., 23 Jul 2024).
- Performance benchmarking frameworks depend on stable, reserved hardware and do not scale well to macro-benchmarks or every architectural target (Shadura et al., 2018).
- LLM-driven NL-to-CP modeling, while rapidly improving, requires dedicated code-level prompting, ensembling, and verification for robust outputs, and human review still cannot be fully eliminated (Michailidis et al., 6 Jun 2025).
- Instruction-guided image editing struggles with dynamic actions, 3D viewpoint manipulations, and spatial precision—underscoring the need for next-generation architectures with grounded visual reasoning (Jia et al., 18 May 2025).
Open research directions include explicit 3D scene grounding, development of multi-stage reasoning protocols for both vision and code tasks, and architectural innovations that facilitate robust cross-modal and cross-instance skill transfer.
For further technical specification, benchmarking code, and data availability, each cited arXiv source provides full details and reproducibility checkpoints for ComPABench and its domain-specific implementations.