Prompt robustness in training-free OVCD

Investigate and characterize the robustness of training-free open-vocabulary change detection (OVCD) frameworks such as CoRegOVCD to variations in user-specified text prompts, determining how prediction quality depends on semantically related prompt choices and how to ensure stable performance across lexical variants.

Background

The paper evaluates lexical variants of user queries and observes that performance can be class-dependent, indicating sensitivity to wording. For example, alternative terms for a class (e.g., building vs. roof or rooftop) can yield noticeably different results, and a set of prompts may outperform any single prompt for visually diverse categories. This motivates the need to systematically understand and improve prompt robustness in training-free OVCD systems like CoRegOVCD.
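Two practical probes follow directly from this observation: ensembling text embeddings over a prompt set (the standard CLIP-style technique of averaging L2-normalized embeddings of lexical variants), and measuring how much a region's text-image similarity varies across those variants. The sketch below illustrates both with NumPy on placeholder embeddings; the function names and the use of cosine similarity over pre-computed embeddings are illustrative assumptions, not CoRegOVCD's actual interface.

```python
import numpy as np


def l2_normalize(v: np.ndarray) -> np.ndarray:
    """L2-normalize along the last axis."""
    return v / np.linalg.norm(v, axis=-1, keepdims=True)


def ensemble_text_embedding(variant_embeddings: np.ndarray) -> np.ndarray:
    """CLIP-style prompt ensembling: normalize each variant embedding,
    average them, then renormalize the mean to unit length."""
    avg = l2_normalize(np.asarray(variant_embeddings)).mean(axis=0)
    return l2_normalize(avg)


def prompt_sensitivity(region_embedding: np.ndarray,
                       variant_embeddings: np.ndarray) -> float:
    """Simple robustness probe (hypothetical metric): the spread between the
    highest and lowest cosine similarity a region achieves across lexical
    variants. A large spread flags a wording-sensitive class."""
    sims = l2_normalize(np.asarray(variant_embeddings)) @ l2_normalize(region_embedding)
    return float(sims.max() - sims.min())


if __name__ == "__main__":
    # Placeholder embeddings standing in for a text/image encoder's output,
    # e.g. variants like "building", "roof", "rooftop" for one class.
    rng = np.random.default_rng(0)
    variants = rng.normal(size=(3, 512))
    region = rng.normal(size=512)

    ensembled = ensemble_text_embedding(variants)
    print("ensemble norm:", np.linalg.norm(ensembled))
    print("sensitivity:", prompt_sensitivity(region, variants))
```

Under this framing, "stable performance across lexical variants" can be operationalized as a low sensitivity spread, with the ensembled embedding serving as the default query vector for visually diverse categories.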

References

Prompt robustness, threshold transfer, and efficient multi-concept inference remain open, but the central conclusion is clear: posterior differencing, once properly regularized, provides a stronger foundation for training-free OVCD.

CoRegOVCD: Consistency-Regularized Open-Vocabulary Change Detection (2604.02160 - Tang et al., 2 Apr 2026), Conclusion (final paragraph)