UniVSE: Robust Visual Semantic Embeddings via Structured Semantic Representations (1904.05521v2)
Published 11 Apr 2019 in cs.CV, cs.CL, and cs.LG
Abstract: We propose Unified Visual-Semantic Embeddings (UniVSE) for learning a joint space of visual and textual concepts. The space unifies concepts at different levels, including objects, attributes, relations, and full scenes. A contrastive learning approach is proposed for fine-grained alignment using only image-caption pairs. Moreover, we present an effective approach for enforcing the coverage of the semantic components that appear in a sentence. We demonstrate the robustness of Unified VSE in defending against text-domain adversarial attacks on cross-modal retrieval tasks. Such robustness also empowers the use of visual cues to resolve word dependencies in novel sentences.
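The sketch below illustrates the kind of contrastive image-caption alignment signal the abstract refers to. It is a generic margin-based ranking loss over a shared embedding space, not the paper's exact objective (which additionally aligns object-, attribute-, and relation-level components); names such as `embed_dim` and the `margin` value are illustrative assumptions.

```python
# Minimal sketch of a margin-based contrastive loss for aligning image and
# caption embeddings in a shared space, assuming matched pairs share a batch
# index and other in-batch items serve as negatives. Not the paper's exact loss.
import torch
import torch.nn.functional as F

def contrastive_alignment_loss(img_emb, cap_emb, margin=0.2):
    """Hinge-based ranking loss over a batch of (image, caption) pairs.

    img_emb, cap_emb: (batch, embed_dim) tensors; row i of each is a matched pair.
    """
    img_emb = F.normalize(img_emb, dim=1)
    cap_emb = F.normalize(cap_emb, dim=1)
    scores = img_emb @ cap_emb.t()          # cosine similarities, (batch, batch)
    pos = scores.diag().view(-1, 1)         # matched-pair scores

    # Penalize negatives that come within `margin` of the positive score,
    # in both retrieval directions (image->caption and caption->image).
    cost_cap = (margin + scores - pos).clamp(min=0)      # negative captions per image
    cost_img = (margin + scores - pos.t()).clamp(min=0)  # negative images per caption

    # Zero out the diagonal so positives are not penalized against themselves.
    mask = torch.eye(scores.size(0), dtype=torch.bool, device=scores.device)
    cost_cap = cost_cap.masked_fill(mask, 0)
    cost_img = cost_img.masked_fill(mask, 0)
    return cost_cap.sum() + cost_img.sum()
```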
- Hao Wu
- Jiayuan Mao
- Yufeng Zhang
- Yuning Jiang
- Lei Li
- Weiwei Sun
- Wei-Ying Ma