Modality-invariant and Specific Prompting for Multimodal Human Perception Understanding (2311.10791v1)

Published 17 Nov 2023 in cs.MM and cs.HC

Abstract: Understanding human perceptions presents a formidable multimodal challenge for computers, encompassing aspects such as sentiment tendencies and sense of humor. While various methods have recently been introduced to extract modality-invariant and specific information from diverse modalities, with the goal of enhancing the efficacy of multimodal learning, few works emphasize this aspect in LLMs. In this paper, we introduce a novel multimodal prompt strategy tailored for tuning LLMs. Our method assesses the correlation among different modalities and isolates the modality-invariant and specific components, which are then utilized for prompt tuning. This approach enables LLMs to efficiently and effectively assimilate information from various modalities. Furthermore, our strategy is designed with scalability in mind, allowing the integration of features from any modality into pretrained LLMs. Experimental results on public datasets demonstrate that our proposed method significantly improves performance compared to previous methods.
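The abstract describes decomposing each modality's features into a modality-invariant component (shared across modalities) and modality-specific residuals, which are then injected as prompts for LLM tuning. Below is a minimal numpy sketch of one way such a decomposition could look; the mean/residual split, the dimensions, and all variable names are illustrative assumptions, not the paper's actual method.

```python
import numpy as np

# Illustrative setup: three modality feature vectors projected to a
# common dimension d (assumed; the paper's dims are not stated here).
rng = np.random.default_rng(0)
d = 16
text_feat = rng.normal(size=d)
audio_feat = rng.normal(size=d)
video_feat = rng.normal(size=d)
feats = np.stack([text_feat, audio_feat, video_feat])   # shape (3, d)

# Modality-invariant component: the part shared across modalities,
# approximated here simply by the per-dimension mean.
invariant = feats.mean(axis=0)                          # shape (d,)

# Modality-specific components: residuals after removing the shared part.
specific = feats - invariant                            # shape (3, d)

# Prompt matrix: one invariant prompt plus one specific prompt per
# modality, which would be prepended to the LLM's input embeddings
# during prompt tuning.
prompts = np.vstack([invariant[None, :], specific])     # shape (4, d)
```

Because the specific components are residuals around the shared mean, they sum to zero across modalities, which is one simple way to keep the invariant and specific parts disentangled.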
