RPLKG: Robust Prompt Learning with Knowledge Graph (2304.10805v2)
Abstract: Large-scale pre-trained models excel in transferability and robust generalization across diverse datasets. The emergence of multimodal pre-trained models such as CLIP has significantly boosted performance on a wide range of tasks. However, generalizing to new datasets or domains remains challenging, especially when labeled data is scarce. Moreover, existing methods often lack interpretability and impose high computational costs. To address these issues, we propose Robust Prompt Learning with Knowledge Graph (RPLKG), which leverages a knowledge graph to automatically curate diverse, interpretable prompt sets. Our method autonomously selects the optimal interpretable prompt for each dataset, improving over zero-shot learning and achieving competitive performance against various prompt learning methods. RPLKG also reuses prompt embeddings cached from a single encoder pass and optimizes prompt selection via Gumbel-Softmax, enabling fast, low-memory training. Overall, RPLKG advances few-shot learning effectiveness while enhancing interpretability and efficiency in model adaptation.
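To make the caching-plus-selection idea concrete, here is a minimal PyTorch sketch of selecting one cached prompt embedding per class via straight-through Gumbel-Softmax. The class name, tensor shapes, temperature, and the CLIP-style cosine-similarity head are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GumbelPromptSelector(nn.Module):
    """Sketch: pick one cached prompt embedding per class with Gumbel-Softmax.

    Assumes prompt embeddings were pre-computed in a single pass of a frozen
    text encoder (e.g., CLIP's) and cached, so only selection logits train.
    """

    def __init__(self, cached_prompt_embeds: torch.Tensor, tau: float = 1.0):
        # cached_prompt_embeds: (num_classes, num_prompts, embed_dim), frozen.
        super().__init__()
        self.register_buffer("prompts", cached_prompt_embeds)
        num_classes, num_prompts, _ = cached_prompt_embeds.shape
        # Learnable selection logits: one score per candidate prompt per class.
        self.logits = nn.Parameter(torch.zeros(num_classes, num_prompts))
        self.tau = tau

    def forward(self) -> torch.Tensor:
        # hard=True yields a one-hot sample in the forward pass while keeping
        # soft gradients (straight-through estimator) in the backward pass.
        weights = F.gumbel_softmax(self.logits, tau=self.tau, hard=True)
        # Weighted sum selects exactly one cached prompt per class: (C, D).
        return torch.einsum("cp,cpd->cd", weights, self.prompts)

def clip_style_logits(image_feats: torch.Tensor,
                      class_embeds: torch.Tensor,
                      scale: float = 100.0) -> torch.Tensor:
    # Cosine-similarity classification head, as in CLIP zero-shot inference.
    image_feats = F.normalize(image_feats, dim=-1)
    class_embeds = F.normalize(class_embeds, dim=-1)
    return scale * image_feats @ class_embeds.t()

# Usage sketch with random stand-ins for cached embeddings and image features.
cached = torch.randn(10, 5, 512)          # 10 classes, 5 candidate prompts
selector = GumbelPromptSelector(cached)
images = torch.randn(8, 512)              # a batch of 8 image features
logits = clip_style_logits(images, selector())  # (8, 10) class scores
```

Because the encoder is frozen and the prompt embeddings are cached, only the small logits matrix is updated during training, which is consistent with the low-memory, fast-training claim in the abstract.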