
OC-CLIP: Object-Centric Binding

Updated 23 January 2026
  • The paper introduces OC-CLIP, a novel framework that overcomes CLIP’s binding limitations by aligning parsed scene-graph nodes with competitive visual slots.
  • It employs slot-structured visual representations and structured similarity scoring to accurately bind multi-object attributes and relational cues.
  • Empirical evaluations show OC-CLIP delivers up to 35-point improvements on compositional tasks and significant gains in zero-shot settings.

Object-Centric Binding (OC-CLIP) is a neural architectural framework designed to address compositional limitations in contrastive vision–language models, primarily CLIP. The central aim is to enable robust multi-object, attribute, and relational binding—going beyond “bag-of-words” representations—by explicitly aligning scene-graph structures from text with slot-structured visual representations. OC-CLIP departs from the existing reliance on hard-negative mining and instead integrates inductive architectural biases and structured similarity objectives.

1. Motivations and Foundational Binding Shortcomings

Standard contrastive models such as CLIP learn vision–language alignment by maximizing similarity between paired image–caption embeddings while minimizing similarity to other (non-matching) pairs. However, these models consistently fail to bind attributes and relational structures to the correct objects in multi-entity scenes. Quantitative assessments on synthetic, compositional benchmarks delineate the breakdown: CLIP achieves ≈92% on single-object novel (attribute, noun) pairs but falls to chance (≈31%) for two-object attribute binding and to 0% for order-sensitive relational composition (Lewis et al., 2022).

This failure arises because CLIP representations tend towards flat, commutative “bags of concepts.” Attempts to fix this via additional negative sampling ("hard negatives") or basic architectural tweaks have been unsuccessful (Gurung et al., 10 Jul 2025). The binding deficit is further compounded by natural data properties such as low attribute density, incomplete captions, and saliency bias. These tendencies cause CLIP to collapse to label-centric shortcuts and ignore role-sensitive structure in the image and caption.

2. Slot-Structured Visual Representation and Scene-Graph Text Parsing

OC-CLIP fundamentally restructures the image embedding pipeline by imposing slot competition for multi-entity segmentation. Instead of a single global context (CLS) token, the backbone produces $K$ “visual slots” via competitive (“inverted”) cross-attention:

  • Given an image $x$, extract $N$ patch embeddings $X = [x_1,\dots,x_N] = E_{vis}(x) \in \mathbb{R}^{N\times d}$.
  • Scene-graph nodes parsed from the caption $t$ produce $M$ noun-phrase queries. An additional $N_d$ default queries (“null” slots) are concatenated.
  • Slot queries $Q'$ attend over keys/values projected from the patches, yielding $M$ slot embeddings $S \in \mathbb{R}^{M\times d}$, each meant to capture one noun phrase’s content.

Scene-graph extraction relies on either LLM-based or supervised parsers, producing a directed graph $G$ with $M$ nodes and $P$ typed relations (edges) between noun phrases. Each node and relation is encoded using shared text transformers ($N = E_{text}^{nodes}$, $R = E_{text}^{rels}$).
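The competitive (“inverted”) cross-attention above can be sketched as follows. This is a minimal NumPy sketch under stated assumptions—single head, no learned key/value projections or iterative slot refinement—and all function names are illustrative, not from the paper’s code:

```python
import numpy as np

def inverted_cross_attention(queries, patches):
    """Competitive slot extraction: the softmax is taken over the SLOT axis,
    so slots compete for each patch, rather than each slot independently
    attending over all patches."""
    M, d = queries.shape                               # M slot queries, dim d
    logits = (queries @ patches.T) / np.sqrt(d)        # (M, N) slot-patch scores
    # Normalize over slots (axis 0): each patch's attention mass is shared.
    attn = np.exp(logits - logits.max(axis=0, keepdims=True))
    attn = attn / attn.sum(axis=0, keepdims=True)
    # Renormalize over patches so each slot is a weighted mean of its patches.
    weights = attn / (attn.sum(axis=1, keepdims=True) + 1e-8)
    slots = weights @ patches                          # (M, d) slot embeddings
    return slots, attn

# Toy usage: 4 noun-phrase/default queries over 10 patch embeddings.
rng = np.random.default_rng(0)
slots, attn = inverted_cross_attention(rng.normal(size=(4, 16)),
                                       rng.normal(size=(10, 16)))
```

The slot-axis softmax is what makes the attention “competitive”: because each patch’s attention mass sums to one across slots, no two slots can both claim the same region strongly, which encourages the segmentation behavior described above.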

3. Binding Architecture and Structured Similarity Scoring

The binding module operationalizes slot–node pairing and relation constraint enforcement. Structured similarity is defined as:

$$S(x,G) = \alpha\sum_{i=1}^M \cos(N_i, S_i) + \beta\sum_{j=1}^P f_o(r_j, S_{s_j}, S_{o_j})$$

where $N_i$ and $S_i$ are the $i$-th node and slot embeddings, $r_j$ encodes the $j$-th relation, and $S_{s_j}, S_{o_j}$ are the slot embeddings of its subject and object. The relation function is:

$$f_o(r, S_s, S_o) = \cos\big(r,\; f_s([r; S_s]) + f_o([r; S_o])\big)$$

with $f_s$ and $f_o$ being MLPs mapping the concatenated input to the slot dimension. This binding design explicitly links image regions to nodes in the scene graph and enforces role-sensitive pairwise relationships. Each edge in the scene graph exerts a constraint, improving segregation of attributes and spatial relationships.
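A minimal NumPy sketch of the structured similarity score follows. For brevity the two-layer MLPs $f_s$, $f_o$ are reduced to single linear maps, and all names are illustrative assumptions rather than the paper’s implementation:

```python
import numpy as np

def cos(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

def relation_score(r, s_subj, s_obj, W_s, W_o):
    # f_s([r; S_s]) + f_o([r; S_o]), with each two-layer MLP simplified
    # to a single linear map of shape (d, 2d).
    z = W_s @ np.concatenate([r, s_subj]) + W_o @ np.concatenate([r, s_obj])
    return cos(r, z)

def structured_similarity(nodes, slots, relations, W_s, W_o,
                          alpha=1.0, beta=1.0):
    # nodes[i], slots[i]: paired node/slot embeddings; relations: list of
    # (relation embedding, subject slot index, object slot index).
    node_term = sum(cos(n, s) for n, s in zip(nodes, slots))
    rel_term = sum(relation_score(r, slots[sj], slots[oj], W_s, W_o)
                   for r, sj, oj in relations)
    return alpha * node_term + beta * rel_term
```

Note how the relation term is inherently order-sensitive: swapping subject and object indices routes the slot embeddings through different maps, so “A left of B” and “B left of A” receive different scores.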

4. Training Objectives: Global and Local Graph Contrastive Loss

OC-CLIP is optimized using two complementary objectives:

  • Global contrastive loss ($L_{itc}$): for a batch of pairs $(x_i, G_i)$, maximize structured similarity for matched pairs while penalizing mismatches against all other $G_j$ in the batch.
  • Local directionality loss ($L_{rel}$): for each $(x_i, G_i)$, two hard negatives are constructed by swapping or shuffling edge roles. The model is trained to prefer the correct scene graph over both perturbations.

Formally,

$$L_{itc} = -\frac{1}{B}\sum_{i=1}^B\left[\log\frac{\exp S(x_i,G_i)}{\sum_{j=1}^B \exp S(x_i,G_j)} + \log\frac{\exp S(x_i,G_i)}{\sum_{j=1}^B \exp S(x_j,G_i)}\right]$$

$$L_{rel} = -\frac{1}{B}\sum_{i=1}^B \log \frac{\exp S(x_i,G_i)}{\exp S(x_i,G_i) + \exp S(x_i,G_i^{swap}) + \exp S(x_i,G_i^{shuffle})}$$

The total loss is $L = L_{itc} + L_{rel}$ (Assouel et al., 19 Feb 2025).
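Both losses follow directly from the formulas. A minimal NumPy sketch operating on precomputed similarity scores (function names are illustrative, not from the paper’s code):

```python
import numpy as np

def logsumexp(v):
    m = v.max()
    return m + np.log(np.exp(v - m).sum())

def global_contrastive_loss(S):
    # S[i, j] = structured similarity S(x_i, G_j) over a batch, shape (B, B).
    # Symmetric InfoNCE: image-to-graph plus graph-to-image terms.
    B = S.shape[0]
    diag = np.diag(S)
    img_to_graph = diag - np.array([logsumexp(S[i]) for i in range(B)])
    graph_to_img = diag - np.array([logsumexp(S[:, j]) for j in range(B)])
    return float(-(img_to_graph + graph_to_img).mean())

def directionality_loss(s_pos, s_swap, s_shuffle):
    # Scores of the correct graph vs. its role-swapped / edge-shuffled
    # perturbations, each of shape (B,): a 3-way softmax over the true graph.
    rows = np.stack([s_pos, s_swap, s_shuffle], axis=1)
    return float(np.mean([logsumexp(row) for row in rows]) - s_pos.mean())
```

When matched pairs score far above all negatives and perturbations, both losses approach zero, which is the regime the optimizer pushes toward.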

5. Implementation Parameters and Computational Tradeoffs

OC-CLIP dynamically sets the slot count per caption ($K = M$, typically 3–6), employs $N_d \approx 3$ default slots, and uses slot dimensions $D = 256$–$512$ (main experiments use 256). The patch-embedding dimension from the ViT backbone is standard ($d = 768$). The cross-attention uses four heads and softmax temperature $1/\sqrt{D}$. The relation MLPs $f_s, f_o$ are two layers deep with hidden size 128 and GELU activations.
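These hyperparameters can be gathered into one place; the dataclass below is a hypothetical configuration sketch assembled from the values above, not the paper’s actual code:

```python
import math
from dataclasses import dataclass

@dataclass
class OCCLIPConfig:
    # Slot structure (K = M is set per caption, typically 3-6 noun phrases).
    num_slots: int = 6
    num_default_slots: int = 3      # N_d "null" slots
    slot_dim: int = 256             # D; main experiments use 256 (up to 512)
    patch_dim: int = 768            # d, standard ViT patch-embedding size
    num_attn_heads: int = 4
    # Relation MLPs f_s, f_o: two layers, hidden size 128, GELU activations.
    rel_mlp_layers: int = 2
    rel_mlp_hidden: int = 128

    @property
    def softmax_temperature(self) -> float:
        # 1/sqrt(D) scaling for the slot cross-attention logits.
        return 1.0 / math.sqrt(self.slot_dim)
```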

OC-CLIP shares its text-encoder backbone with OpenCLIP, but uses reduced layers and dimensions for from-scratch experiments. Training leverages AdamW with $\beta_1 = 0.9$, $\beta_2 = 0.95$, weight decay 0.2, differentiated learning rates, and large distributed batch sizes (128 per GPU, 8192 total). Training throughput is within $1.3\times$–$2\times$ of CLIP’s, owing to slot-dimension reductions and aggressive batching (Assouel et al., 19 Feb 2025).

6. Empirical Performance Across Synthetic and Real-World Benchmarks

OC-CLIP’s effectiveness is demonstrated against both synthetic and real-world compositional datasets:

  • Synthetic PUG dataset: OC-CLIP attains 97% attribute-binding accuracy without hard negatives; vanilla OpenCLIP plateaus at 81% even with heavy negative mining.
  • Generalization: OC-CLIP maintains $\gg 90\%$ accuracy on unseen attribute–object pairs (OpenCLIP: $\ll 70\%$).
  • SugarCrepe, COCO-spatial, GQA-spatial, ARO: OC-CLIP outperforms baseline CLIP/OpenCLIP by 15–35 points in swap-attribute, swap-object, spatial, and relational tasks.
  • Zero-shot transfer: when trained on CC3M/CC12M, OC-CLIP beats CLIP by 12.7% on ImageNet zero-shot and achieves major gains on compositional swaps.
  • Ablation studies: Removing competitive cross-attention or structured losses significantly degrades binding and relation accuracy. Increasing cross-attention layers provides further relation performance benefits.

A plausible implication is that inductive slot and graph biases provide binding capabilities that cannot be achieved by architectural scaling or additional hard negatives alone (Assouel et al., 19 Feb 2025).

7. Insights, Data Dependence, and Limitations

OC-CLIP’s improvements are attributed primarily to architectural structure (slot attention and scene-graph binding) rather than data scaling or hard-negative construction. The module forces feature segregation and role-specific alignment, overcoming bag-of-words collapse. However, dependence on external parsers (often LLM-based) introduces possible error and bias, and the vision–language space remains local to training vocabulary; out-of-domain generalization is not fully resolved.

Research into the data properties responsible for binding failure (low attribute density, incomplete captions, saliency bias) reveals that neither architectural variation nor batch size scaling—nor even extensive hard-negative mining—can ensure binding under natural data regimes. Only by augmenting data to increase multi-object captions, attribute density, and randomness in object saliency can off-the-shelf contrastive models approach reliable binding (Gurung et al., 10 Jul 2025). The pairing of slot-based architecture with judicious data curation thus emerges as a central recommendation for future OC-CLIP work.

This suggests that robust vision–language binding is an entangled property of both architecture and corpus structure, and that further improvements may require joint learning of parsing, adaptive slot construction, and new contrastive losses sensitive to binding errors and relation swaps.
