RESMatch: Referring Expression Segmentation in a Semi-Supervised Manner (2402.05589v2)
Abstract: Referring expression segmentation (RES), a task that involves localizing specific instance-level objects based on free-form linguistic descriptions, has emerged as a crucial frontier in human-AI interaction. It demands an intricate understanding of both visual and textual contexts and often requires extensive training data. This paper introduces RESMatch, the first semi-supervised learning (SSL) approach for RES, aimed at reducing reliance on exhaustive data annotation. Extensive validation on multiple RES datasets demonstrates that RESMatch significantly outperforms baseline approaches, establishing a new state of the art. Although existing SSL techniques are effective in image segmentation, we find that they fall short in RES. To address challenges including the comprehension of free-form linguistic descriptions and the variability in object attributes, RESMatch introduces a trifecta of adaptations: revised strong perturbation, text augmentation, and adjustments for pseudo-label quality and strong-weak supervision. This pioneering work lays the groundwork for future research in semi-supervised learning for referring expression segmentation.
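The recipe the abstract names follows the general weak-to-strong SSL pattern: a teacher produces pseudo-labels on weakly perturbed inputs, a student learns from strongly perturbed images and augmented text, and low-confidence pseudo-labels are filtered out. Below is a minimal PyTorch sketch of one unlabeled-batch step under that pattern; the `unsupervised_step` interface, the noise-based perturbations, the word-dropping text augmentation, the 0.9 confidence threshold, and the `ToyRES` stand-in model are all illustrative assumptions, not RESMatch's actual components.

```python
# Minimal sketch of one semi-supervised (FixMatch-style) step for RES:
# a teacher pseudo-labels a weakly perturbed view, a student is trained on a
# strongly perturbed view with an augmented expression, and only confident
# pseudo-labeled pixels contribute to the loss. All specifics here (noise as
# perturbation, word dropping, the 0.9 threshold) are assumptions, not the
# paper's exact design.
import random

import torch
import torch.nn.functional as F

CONF_THRESHOLD = 0.9  # assumed cutoff for trusting a pseudo-labeled pixel


def drop_words(expression: str, p: float = 0.1) -> str:
    """Toy text augmentation: randomly drop words from the referring expression."""
    kept = [w for w in expression.split() if random.random() > p]
    return " ".join(kept) if kept else expression


def unsupervised_step(student, teacher, image, expression):
    """One unlabeled-batch step. Models map (image, expression) -> mask logits."""
    weak = image + 0.01 * torch.randn_like(image)    # weak perturbation
    strong = image + 0.10 * torch.randn_like(image)  # stand-in strong perturbation

    with torch.no_grad():
        probs = torch.sigmoid(teacher(weak, expression))  # (B, 1, H, W)
        pseudo_mask = (probs > 0.5).float()
        # A pixel is "confident" if the teacher is far from the 0.5 boundary.
        confident = ((probs > CONF_THRESHOLD) | (probs < 1 - CONF_THRESHOLD)).float()

    student_logits = student(strong, drop_words(expression))
    loss = F.binary_cross_entropy_with_logits(
        student_logits, pseudo_mask, reduction="none"
    )
    # Average the loss over confident pixels only (pseudo-label quality control).
    return (loss * confident).sum() / confident.sum().clamp(min=1.0)


if __name__ == "__main__":
    class ToyRES(torch.nn.Module):
        """Stand-in RES model: ignores the text and maps the image to mask logits."""

        def __init__(self):
            super().__init__()
            self.conv = torch.nn.Conv2d(3, 1, kernel_size=3, padding=1)

        def forward(self, image, expression):
            return self.conv(image)

    student, teacher = ToyRES(), ToyRES()
    img = torch.randn(2, 3, 64, 64)
    print(float(unsupervised_step(student, teacher, img, "the red cup on the left")))
```

In a full method of this kind the teacher would typically be an EMA copy of the student and strong perturbation would use color jitter or CutMix-style transforms rather than Gaussian noise; the sketch keeps only the structure of the step.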
- Ying Zang
- Chenglong Fu
- Runlong Cao
- Didi Zhu
- Min Zhang
- Wenjun Hu
- Lanyun Zhu
- Tianrun Chen