Prompt robustness in training-free OVCD
Investigate and characterize how robust training-free open-vocabulary change detection (OVCD) frameworks such as CoRegOVCD are to variations in user-specified text prompts: determine how prediction quality depends on the choice among semantically related prompts, and how stable performance can be ensured across lexical variants. A minimal evaluation sketch follows below.
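As an illustration only, the following Python sketch shows one way such a robustness probe could be set up: run the same image pair through a training-free OVCD pipeline under several semantically related prompts and measure agreement between the resulting binary change masks. The function detect_change is a hypothetical placeholder for whatever OVCD call is used (e.g. a CoRegOVCD-style posterior-differencing pipeline); it is not an API defined by the paper.

    # Sketch of a prompt-robustness probe for training-free OVCD.
    # Assumes a user-supplied detect_change(img_t0, img_t1, prompt) -> (H, W)
    # binary change mask; this callable is a hypothetical stand-in, not the
    # paper's interface.
    from itertools import combinations
    import numpy as np

    def mask_iou(a: np.ndarray, b: np.ndarray) -> float:
        """Intersection-over-union of two binary change masks."""
        inter = np.logical_and(a, b).sum()
        union = np.logical_or(a, b).sum()
        return float(inter) / float(union) if union > 0 else 1.0

    def prompt_robustness(detect_change, img_t0, img_t1, prompts):
        """Mean pairwise IoU of predicted masks across lexical prompt variants.

        prompts: semantically related phrasings of the same target concept,
                 e.g. ["building", "buildings", "constructed structure"].
        A value near 1.0 indicates predictions are stable under rewording.
        """
        masks = [detect_change(img_t0, img_t1, p).astype(bool) for p in prompts]
        ious = [mask_iou(m1, m2) for m1, m2 in combinations(masks, 2)]
        return float(np.mean(ious)) if ious else 1.0

In practice this agreement score could be reported alongside per-prompt accuracy against ground truth, so that both stability across variants and absolute quality are characterized.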
References
Prompt robustness, threshold transfer, and efficient multi-concept inference remain open, but the central conclusion is clear: posterior differencing, once properly regularized, provides a stronger foundation for training-free OVCD.
— CoRegOVCD: Consistency-Regularized Open-Vocabulary Change Detection
(2604.02160 - Tang et al., 2 Apr 2026) in Conclusion (final paragraph)