Semi-supervised Visual Feature Integration for Pre-trained Language Models (1912.00336v2)
Abstract: Integrating visual features has proven useful for natural language understanding tasks. Nevertheless, in most existing multimodal language models, aligning visual and textual data is expensive. In this paper, we propose a novel semi-supervised visual integration framework for pre-trained language models, in which visual features are obtained through a visualization and fusion mechanism. Its uniqueness is twofold: 1) the integration is conducted via a semi-supervised approach, which does not require an aligned image for every sentence; 2) the visual features are integrated as an external component and can be used directly by pre-trained language models. To verify the efficacy of the proposed framework, we conduct experiments on both natural language inference and reading comprehension tasks. The results demonstrate that our mechanism brings improvements to two strong baseline models. Since our framework requires only an image database and no further alignment, it provides an efficient and feasible way for multimodal language learning.
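The abstract describes the visual features as an external component fused into a pre-trained language model without per-sentence image alignment. Below is a minimal sketch of one way such an external fusion module could look; the `VisualFusion` class, the cross-attention fusion, the gating, and the dimensions are illustrative assumptions, not the paper's actual architecture.

```python
# Illustrative sketch only: the paper's "visualization and fusion mechanism"
# may differ. Assumes images are retrieved from an image database offline and
# encoded by a frozen vision encoder.
import torch
import torch.nn as nn


class VisualFusion(nn.Module):
    """Fuses retrieved image features into token representations via
    cross-attention, acting as an external component so the pre-trained
    language model's own weights stay untouched."""

    def __init__(self, text_dim: int = 768, image_dim: int = 2048, n_heads: int = 8):
        super().__init__()
        # Project image features (e.g., from a frozen CNN) into the text space.
        self.img_proj = nn.Linear(image_dim, text_dim)
        # Text tokens attend over the retrieved images.
        self.cross_attn = nn.MultiheadAttention(text_dim, n_heads, batch_first=True)
        # Gate controls how much visual signal is mixed into each token.
        self.gate = nn.Sequential(nn.Linear(2 * text_dim, text_dim), nn.Sigmoid())

    def forward(self, token_states: torch.Tensor, image_feats: torch.Tensor) -> torch.Tensor:
        # token_states: (batch, seq_len, text_dim) hidden states from the LM
        # image_feats:  (batch, n_images, image_dim) retrieved from the database
        img = self.img_proj(image_feats)
        visual, _ = self.cross_attn(token_states, img, img)
        g = self.gate(torch.cat([token_states, visual], dim=-1))
        return token_states + g * visual


# Usage with random stand-ins for LM hidden states and retrieved image features:
fusion = VisualFusion()
tokens = torch.randn(2, 16, 768)   # hidden states from a frozen pre-trained LM
images = torch.randn(2, 5, 2048)   # features of 5 retrieved images per sentence
fused = fusion(tokens, images)     # (2, 16, 768), passed to the downstream head
print(fused.shape)
```

Because the module sits outside the language model, only the fusion parameters need training, which is consistent with the abstract's claim that the visual features "can be used directly by pre-trained language models."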