
Test-Time Optimization for OVSS

Updated 25 February 2026
  • The paper introduces test-time optimization techniques (Seg-TTO and MLMP) that adapt prompt embeddings and normalization layers, substantially improving OVSS accuracy.
  • It employs spatial view selection combined with entropy-based self-supervised objectives to align visual and text features, enhancing segmentation robustness under domain shift.
  • Empirical results show mIoU gains up to 7.35% across multiple specialized-domain datasets, enabling plug-and-play adaptation without supervised finetuning.

Open-vocabulary semantic segmentation (OVSS) requires models to assign dense per-pixel semantic labels from a vocabulary that is open-ended and specified only at inference. Standard zero-shot OVSS models generalize to novel categories by aligning visual features to text embeddings produced by a frozen vision-language text encoder (e.g., CLIP's). However, in domain-specialized scenarios (e.g., biomedical or industrial imaging) and under domain shift, zero-shot OVSS exhibits a substantial gap to supervised segmentation. Test-time optimization (TTO) for OVSS addresses this by adapting model components on the fly, using only unlabeled target-domain data, to improve robustness and accuracy without retraining or supervised finetuning. Recent research has established segmentation-specific TTO methodologies that deliver state-of-the-art domain adaptation for OVSS through plug-and-play procedures and self-supervised objectives (Silva et al., 8 Jan 2025, Noori et al., 28 May 2025).

1. Core Methodologies in Test-Time Optimization for OVSS

Test-time optimization in OVSS fundamentally diverges from prior test-time adaptation (TTA) work in classification by addressing unique challenges: spatial label structure, multiple concepts per image, and the open-vocabulary label space. The Seg-TTO framework (Silva et al., 8 Jan 2025) exemplifies this approach with the following pipeline:

  • Visual Augmentation & Selection: $m$ random crops or augmentations $\{\tilde x_i\}_{i=1}^m$ of input $x$ are processed to generate per-crop pixel features $a_i = V(\tilde x_i)$. A self-supervised uncertainty score, combining entropy and pseudo-label cross-entropy, is computed for each view, and the $m'$ lowest-uncertainty (highest-confidence) crops are selected for loss aggregation, preserving spatial locality.
  • Textual Prompt and Attribute Augmentation: For every category $j$, the method maintains $p$ learnable prompts $\{g_k\}_{k=1}^p$ and one category seed $c_j$, producing prompt-conditioned text features $b^j_k = T(\mathrm{Prompt}_k \oplus y_j)$. Additional category attributes $\alpha_j$ are generated via LLM augmentation.
  • Parameter Tuning via Self-Supervised Objective: Prompts and category seeds are jointly optimized by minimizing a self-supervised segmentation loss on selected views, with PCGrad used for gradient de-confliction among multi-task objectives. Spatial feature aggregation yields an adapted visual representation, while tuned textual embeddings integrate attribute information.
  • Plug-and-Play Integration: The resulting Seg-TTO module can wrap any state-of-the-art OVSS model (e.g., CAT-Seg, CLIP-DINOiser) at inference, improving predictions on out-of-distribution or specialized-domain data.
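The view-selection step above can be sketched in a few lines. The scoring below uses mean per-pixel prediction entropy as the uncertainty proxy (the full method also folds in pseudo-label cross-entropy); all shapes and names are illustrative rather than taken from the Seg-TTO code:

```python
import numpy as np

def select_views(view_probs: np.ndarray, m_keep: int) -> np.ndarray:
    """Rank augmented views by mean per-pixel prediction entropy and keep
    the m_keep most confident (lowest-entropy) ones.
    view_probs: [m_views, n_pixels, n_classes], rows summing to 1."""
    eps = 1e-12
    ent = -(view_probs * np.log(view_probs + eps)).sum(-1).mean(-1)
    return np.argsort(ent)[:m_keep]

# A near-uniform (uncertain) view vs. a peaked (confident) view.
uncertain = np.full((1, 4, 3), 1 / 3)
confident = np.tile([0.98, 0.01, 0.01], (4, 1))[None]
views = np.concatenate([uncertain, confident])  # [2 views, 4 pixels, 3 classes]
print(select_views(views, 1))  # → [1], the confident view
```

Losses from only the selected views enter the aggregated objective, which keeps the adaptation signal anchored to confident spatial regions.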

An alternative stream, MLMP (multi-level, multi-prompt), updates only LayerNorm parameters in the vision encoder, fusing visual features from the last $L'$ layers with entropy-based weighting and applying entropy minimization at both the global ([CLS]) and spatial (patch-wise) levels, across multiple text prompt templates (Noori et al., 28 May 2025).
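The multi-level fusion idea can be illustrated with a minimal sketch in which each layer's prediction is weighted by an entropy-derived confidence; the $\exp(-H)$ weighting here is an assumed concrete choice, and MLMP's exact formula may differ:

```python
import numpy as np

def entropy_weighted_fusion(layer_probs: np.ndarray) -> np.ndarray:
    """Fuse per-layer class distributions [L', n_classes] into one prediction.
    Each layer is weighted by exp(-entropy), so confident layers dominate
    (assumed weighting; the paper's exact scheme may differ)."""
    eps = 1e-12
    ent = -(layer_probs * np.log(layer_probs + eps)).sum(-1)  # [L']
    w = np.exp(-ent)
    w /= w.sum()
    return w @ layer_probs  # [n_classes]

layers = np.array([[0.90, 0.05, 0.05],       # confident layer
                   [1 / 3, 1 / 3, 1 / 3]])   # uninformative layer
fused = entropy_weighted_fusion(layers)
print(fused.argmax())  # → 0: the confident layer dominates the fusion
```

Compared with a plain average over layers, the entropy weighting suppresses uninformative intermediate layers instead of letting them dilute the prediction.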

2. Mathematical Formulation and Optimization Process

Test-time optimization objectives for OVSS employ self-supervised losses that operate over both the visual and textual modalities and retain spatial and multi-concept structure. For Seg-TTO:

  • Cross-modal Probabilities: For each pixel embedding $v = a_i(q)$ and each prompt-conditioned text embedding $b^j_k$,

$$P(b^j_k \mid v) = \frac{\exp(\operatorname{sim}(b^j_k, v) / \tau)}{\sum_{j',k'} \exp(\operatorname{sim}(b^{j'}_{k'}, v) / \tau)}$$

where $\operatorname{sim}(u, v)$ is cosine similarity and $\tau$ a temperature.
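A direct NumPy rendering of this probability, with the common CLIP temperature $\tau = 0.07$ assumed for illustration:

```python
import numpy as np

def crossmodal_probs(v: np.ndarray, b: np.ndarray, tau: float = 0.07) -> np.ndarray:
    """P(b_k^j | v): softmax over cosine similarities between one pixel
    embedding v [d] and all prompt-conditioned text embeddings b [n*p, d].
    tau = 0.07 is the common CLIP temperature, assumed here."""
    v = v / np.linalg.norm(v)
    b = b / np.linalg.norm(b, axis=1, keepdims=True)
    logits = b @ v / tau
    logits -= logits.max()          # numerical stability before exp
    e = np.exp(logits)
    return e / e.sum()

v = np.array([0.9, 0.1, 0.0])
b = np.eye(3)                       # three text embeddings along the axes
p = crossmodal_probs(v, b)
print(p.argmax(), round(p.sum(), 6))  # → 0 1.0
```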

  • Entropy and Pseudo-label Cross-Entropy Losses:

$$L_{\mathrm{ent}} = -\sum_{j=1}^n \sum_{k=1}^p P(b^j_k \mid v) \log P(b^j_k \mid v)$$

$$L_{\mathrm{ce}} = -\sum_{j=1}^n \sum_{k=1}^p \hat y[j] \log P(b^j_k \mid v)$$

with $\hat y$ a pseudo-label distribution.
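Both losses are straightforward to compute for a single pixel once the cross-modal distribution is in hand. This sketch treats the distribution and the pseudo-label as flat vectors over all $n \cdot p$ prompt-category pairs (an illustrative simplification of the double sum above):

```python
import numpy as np

def ssl_losses(probs: np.ndarray, pseudo: np.ndarray):
    """Entropy and pseudo-label cross-entropy for one pixel, with both the
    cross-modal distribution and the pseudo-label flattened over all
    prompt-category pairs (illustrative shapes)."""
    eps = 1e-12
    l_ent = -(probs * np.log(probs + eps)).sum()
    l_ce = -(pseudo * np.log(probs + eps)).sum()
    return l_ent, l_ce

uniform = np.full(6, 1 / 6)             # n * p = 6 prompt-category pairs
one_hot = np.eye(6)[0]
l_ent, l_ce = ssl_losses(uniform, one_hot)
print(round(l_ent, 4), round(l_ce, 4))  # both equal log(6) ≈ 1.7918
```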

  • PCGrad Gradient De-confliction: The loss per pixel is $L_{\mathrm{SSL}}^{q,i} = \phi(L_{\mathrm{ent}}^{q,i}, L_{\mathrm{ce}}^{q,i})$, where $\phi$ denotes the PCGrad combination.
  • View Selection and Aggregation: Losses are spatially averaged per view, lowest-uncertainty views selected, then aggregated across views for the final objective:

$$L_{\mathrm{total}} = \frac{1}{|\mathcal S|} \sum_{i \in \mathcal S} \bar L^{i}_{\mathrm{SSL}}$$

With learning rate $\eta$, prompt and category seed updates are gradient-based:

$$c_j \leftarrow c_j - \eta \frac{\partial L_{\mathrm{total}}}{\partial c_j}, \quad g_k \leftarrow g_k - \eta \frac{\partial L_{\mathrm{total}}}{\partial g_k}$$
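PCGrad's de-confliction, followed by a gradient step of the kind above, can be sketched for the two-loss case: whenever the two task gradients conflict (negative dot product), project each onto the normal plane of the other before summing. All values here are toy illustrations:

```python
import numpy as np

def pcgrad(g1: np.ndarray, g2: np.ndarray) -> np.ndarray:
    """Two-task PCGrad: when the task gradients conflict (negative dot
    product), project each onto the normal plane of the other, then sum."""
    if g1 @ g2 < 0:
        g1_proj = g1 - (g1 @ g2) / (g2 @ g2) * g2
        g2_proj = g2 - (g2 @ g1) / (g1 @ g1) * g1
        return g1_proj + g2_proj
    return g1 + g2

# Conflicting entropy / cross-entropy gradients on a toy 2-D parameter.
g_ent, g_ce = np.array([1.0, 0.0]), np.array([-1.0, 1.0])
g = pcgrad(g_ent, g_ce)
print(g)  # → [0.5 1.5]

# One gradient step on a category seed, matching the update rule above.
eta, c_j = 0.01, np.zeros(2)
c_j -= eta * g
```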

  • Final Embedding Synthesis: $\beta$-controlled mixing of optimized prompts and LLM attributes,

$$t'_j = \beta \, \frac{1}{p} \sum_{k=1}^p b^j_k + (1 - \beta)\, \alpha_j$$

MLMP, by contrast, adapts only LayerNorm parameters by minimizing patch-level (UAML) and global ([CLS], ILE) entropy, using a fusion of final ViT layers weighted by their entropy-derived confidence (Noori et al., 28 May 2025).

3. Empirical Results and Ablation Studies

Extensive empirical evaluation of test-time optimization for OVSS demonstrates substantial performance improvements:

  • Seg-TTO on the MESS Benchmark (22 specialized-domain datasets):
    • CAT-Seg-B: mIoU improves from 33.74% → 35.54% (+1.80%)
    • CAT-Seg-L: 38.14% → 40.17% (+2.03%)
    • Gains are observed on 19/22 tasks (e.g., Corrosion CS: +4.83%, CryoNuSeg: +7.35%). Mask-free OVSS (CLIP-DINOiser) obtains +1.09% mean mIoU.
  • Ablations (on Dark Zurich, DRAM, ZeroWaste-F):
    • Removing textual tuning (TFT): −2.0% mIoU
    • Skipping visual feature aggregation (VFA): −1.5%
    • Excluding attributes (CAA): −1.3%
    • PCGrad vs. single-loss fusion: PCGrad adds +0.2%
    • Mean spatial aggregation outperforms max/median by ~0.2%

MLMP on the OVSS-TTA benchmark (7 datasets, 15 corruptions = 82 scenarios) achieves improvements of 3–9 mIoU points over the strongest classification-TTA baselines, with robust gains under domain shift and synthetic corruptions.

| Method       | mIoU (Cityscapes Orig) | mIoU (V20 Orig) |
|--------------|------------------------|-----------------|
| NoAdapt (ZS) | 29.5                   | 75.9            |
| TENT         | 30.5                   | 77.0            |
| MLMP (Ours)  | 33.4                   | 83.8            |

Both Seg-TTO and MLMP provide state-of-the-art plug-and-play OVSS adaptation without requiring supervised test-domain data.

4. Implementation and Computational Properties

Seg-TTO requires no supervised labels or retraining and operates per-image at inference. The main computational steps are:

  • 1–2 gradient steps over $p \times n$ prompts and $m$ visual crops per image, incurring 20–40 ms overhead on modern GPUs.
  • Memory is dominated by storage and backpropagation through $m$ visual views and $p \cdot n$ text embeddings.
  • No gradients propagate into the pre-trained visual or text encoder; only prompt and category seed parameters are adapted.

MLMP adapts only LayerNorm parameters ($<0.1\%$ of model weights), requiring 10 adaptation steps per test sample or batch. Prompts are fixed and the text encoder is frozen, with all text templates precomputed.
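A back-of-the-envelope count for a ViT-B-sized encoder confirms that the LayerNorm affine parameters are a tiny fraction of the total (standard ViT-B dimensions assumed; bias terms included, patch embedding and head omitted):

```python
# Rough parameter count for a ViT-B-style vision encoder (d = 768, 12 blocks).
d, blocks = 768, 12
ln = 2 * (2 * d)                             # two LayerNorms per block, gamma + beta
attn = (d * 3 * d + 3 * d) + (d * d + d)     # qkv projection + output projection
mlp = (d * 4 * d + 4 * d) + (4 * d * d + d)  # two MLP linear layers
frac = (blocks * ln) / (blocks * (ln + attn + mlp))
print(f"LayerNorm fraction: {frac:.4%}")     # well under 0.1% of encoder weights
```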

5. Differences Between Test-Time Optimization Strategies

Test-time optimization in OVSS has two primary paradigms:

  • Seg-TTO: Learns prompts and category seeds through self-supervised segmentation losses, integrating spatial view selection and aggregation, LLM-augmented attributes, and per-pixel PCGrad de-confliction.
  • MLMP: Applies multi-prompt, multi-level entropy minimization, fusing deep vision features, explicitly targeting model confidence at both global and local levels, and adapting only statistical normalization layers.

A key distinction is that Seg-TTO optimizes textual prompt embeddings and category seeds, while MLMP adapts normalization statistics in the vision encoder.

6. Limitations and Future Directions

The primary limitations of current test-time optimization for OVSS include:

  • Inference Overhead: Despite fast adaptation, scaling to large open vocabularies and large $m$ can incur nontrivial memory and latency costs.
  • Attribute Quality Sensitivity: LLM-generated attributes for category augmentation may mislead model adaptation if poorly constructed.
  • Static View and Prompt Selection: Both frameworks currently fix the set of views, prompt templates, and adaptation steps ahead of time rather than choosing them adaptively per instance.
  • No Visual Encoder Finetuning: Only peripheral parameters (prompts, LayerNorm) are adapted; deeper adaptation of the vision encoder remains unexplored.

Future research directions include dynamic selection of critical views and prompt templates, spatially-aware regularization (e.g., boundary continuity), joint adaptation of encoder submodules for highly divergent domains, and reinforcement-based LLM attribute selection (Silva et al., 8 Jan 2025).

7. Practical Guidelines and Deployment Recommendations

  • Choice of Adaptation Target: For Seg-TTO, optimize prompts and category seeds; for MLMP, limit updates to normalization parameters.
  • Prompt/Attribute Selection: Employ 5–10 diverse prompt templates and high-quality LLM attributes.
  • Layer Selection: Fuse the last 50%–75% of ViT layers for MLMP; always include pixel-level and [CLS]-level entropy objectives.
  • Batch Size: Both frameworks support batch sizes as low as 1 sample with robust adaptation efficacy.
  • Evaluation Protocol: Reset adaptively-tuned parameters between test samples or batches to prevent accumulation of corrupt statistics.
  • Integration: Both Seg-TTO and MLMP are plug-and-play, requiring no model retraining or supervised data in the target domain and applicable with current state-of-the-art OVSS backbones (Silva et al., 8 Jan 2025, Noori et al., 28 May 2025).
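The reset protocol in the evaluation guideline above can be implemented by snapshotting the adaptable parameters once and restoring them after every sample; the dict-based store here is a stand-in for a real model's parameter state:

```python
import copy

class ResettableAdapter:
    """Snapshot the adaptable parameters once and restore them after every
    test sample, so corrupted statistics never accumulate across samples.
    A plain dict stands in for a model's prompt / LayerNorm state."""
    def __init__(self, params: dict):
        self.params = params
        self._initial = copy.deepcopy(params)

    def reset(self) -> None:
        # Overwrite in place so external references to `params` stay valid.
        for k, v in copy.deepcopy(self._initial).items():
            self.params[k] = v

store = {"prompt_0": [0.1, 0.2], "ln.gamma": [1.0, 1.0]}
adapter = ResettableAdapter(store)
store["prompt_0"][0] = 5.0        # ... per-sample adaptation mutates state ...
adapter.reset()
print(store["prompt_0"])  # → [0.1, 0.2]
```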

Test-time optimization provides a highly effective, label-free, and computationally efficient adaptation toolkit for robust OVSS in domain specialization and under distribution shift, significantly reducing the performance gap to supervised segmentation.
