RelationVLM: Making Large Vision-Language Models Understand Visual Relations (2403.12801v1)
Abstract: The development of Large Vision-Language Models (LVLMs) is striving to catch up with the success of LLMs, yet it faces more challenges to resolve. Very recent works enable LVLMs to localize object-level visual content and ground text to it. Nonetheless, current LVLMs still struggle to precisely understand visual relations due to the lack of relevant data. In this work, we present RelationVLM, a large vision-language model capable of comprehending various levels and types of relations, whether across multiple images or within a video. Specifically, we devise a multi-stage relation-aware training scheme and a series of corresponding data configuration strategies to endow RelationVLM with the capabilities of understanding semantic relations, temporal associations, and geometric transforms. Extensive case studies and quantitative evaluations show that RelationVLM has a strong capability to understand such relations and exhibits an impressive emergent in-context capability to reason from few-shot examples by comparison. This work fosters the advancement of LVLMs by enabling them to support a wider range of downstream applications toward artificial general intelligence.
- Zhipeng Huang (34 papers)
- Zhizheng Zhang (60 papers)
- Zheng-Jun Zha (143 papers)
- Yan Lu (179 papers)
- Baining Guo (53 papers)