LG-CAV: Train Any Concept Activation Vector with Language Guidance (2410.10308v1)

Published 14 Oct 2024 in cs.CV

Abstract: Concept activation vector (CAV) has attracted broad research interest in explainable AI by elegantly attributing model predictions to specific concepts. However, training a CAV often requires a large number of high-quality images, which are expensive to curate and thus limited to a predefined set of concepts. To address this issue, we propose Language-Guided CAV (LG-CAV) to harness the abundant concept knowledge within certain pre-trained vision-language models (e.g., CLIP). This method allows training any CAV without labeled data, by utilizing the corresponding concept descriptions as guidance. To bridge the gap between the vision-language model and the target model, we calculate the activation values of concept descriptions on a common pool of images (probe images) with the vision-language model and utilize them as language guidance to train the LG-CAV. Furthermore, after training high-quality LG-CAVs for all the predicted classes in the target model, we propose activation sample reweighting (ASR), serving as a model correction technique, to improve the performance of the target model in return. Experiments on four datasets across nine architectures demonstrate that LG-CAV achieves significantly superior quality to previous CAV methods given any concept, and our model correction method achieves state-of-the-art performance compared to existing concept-based methods. Our code is available at https://github.com/hqhQAQ/LG-CAV.
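
The core mechanism described in the abstract can be illustrated with a short sketch: use a CLIP-style model to score a concept description against a pool of probe images, then fit a linear direction (the CAV) in the target model's feature space so that projections onto it match those scores. This is a minimal, hedged illustration, not the authors' released implementation; `clip_model`, `tokenizer`, `target_features`, and the hyperparameters are assumed placeholders. See the linked repository for the actual code.

```python
# Illustrative sketch of language-guided CAV training (assumed interfaces, not the paper's code).
# `clip_model` is a CLIP-style model (e.g., from open_clip) with encode_text / encode_image,
# `target_features(x)` returns the target model's intermediate features for a batch of images,
# and `probe_images` is a preprocessed tensor of probe images shared by both models.

import torch
import torch.nn.functional as F

def clip_concept_scores(clip_model, tokenizer, concept_text, probe_images):
    """Activation of a concept description on probe images under CLIP (cosine similarity)."""
    with torch.no_grad():
        txt = F.normalize(clip_model.encode_text(tokenizer([concept_text])), dim=-1)
        img = F.normalize(clip_model.encode_image(probe_images), dim=-1)
    return (img @ txt.T).squeeze(-1)                # shape: (num_probe_images,)

def train_lg_cav(target_features, probe_images, concept_scores, feat_dim,
                 steps=200, lr=1e-2):
    """Fit a linear direction in the target model's feature space whose projections
    match the CLIP-derived concept scores on the probe images (language guidance)."""
    cav = torch.randn(feat_dim, requires_grad=True)
    bias = torch.zeros(1, requires_grad=True)
    opt = torch.optim.Adam([cav, bias], lr=lr)
    with torch.no_grad():
        feats = target_features(probe_images)       # (num_probe_images, feat_dim)
    for _ in range(steps):
        pred = feats @ cav + bias                   # projected activation per probe image
        loss = F.mse_loss(pred, concept_scores)     # match CLIP's concept activations
        opt.zero_grad()
        loss.backward()
        opt.step()
    return cav.detach() / cav.detach().norm()       # unit-norm concept activation vector
```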

Authors (9)
  1. Qihan Huang (10 papers)
  2. Jie Song (217 papers)
  3. Mengqi Xue (18 papers)
  4. Haofei Zhang (20 papers)
  5. Bingde Hu (3 papers)
  6. Huiqiong Wang (11 papers)
  7. Hao Jiang (230 papers)
  8. Xingen Wang (11 papers)
  9. Mingli Song (163 papers)

