Improving Concept Alignment in Vision-Language Concept Bottleneck Models (2405.01825v2)

Published 3 May 2024 in cs.CV

Abstract: Concept Bottleneck Models (CBM) map images to human-interpretable concepts before making class predictions. Recent approaches automate CBM construction by prompting LLMs to generate text concepts and employing Vision-Language Models (VLMs) to score these concepts for CBM training. However, it is desirable to build CBMs with concepts defined by human experts rather than LLM-generated ones, to make them more trustworthy. In this work, we closely examine the faithfulness of VLM concept scores for such expert-defined concepts in domains like fine-grained bird species and animal classification. Our investigations reveal that VLMs like CLIP often struggle to correctly associate a concept with the corresponding visual input, despite achieving high classification performance. This misalignment renders the resulting models difficult to interpret and less reliable. To address this issue, we propose a novel Contrastive Semi-Supervised (CSS) learning method that leverages a few labeled concept samples to activate truthful visual concepts and improve concept alignment in the CLIP model. Extensive experiments on three benchmark datasets demonstrate that our method significantly enhances both concept (+29.95) and classification (+3.84) accuracies while requiring only a fraction of human-annotated concept labels. To further improve classification performance, we introduce a class-level intervention procedure for fine-grained classification problems that identifies confounding classes and intervenes in their concept space to reduce errors.
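For readers unfamiliar with the CBM pipeline the abstract describes, the sketch below shows how CLIP similarity scores for a concept vocabulary can serve as the bottleneck feeding a linear class predictor. It is a minimal illustration only, assuming the OpenAI clip package; the concept list, class count, and linear head are hypothetical placeholders, not the paper's CSS method, intervention procedure, or datasets.

import torch
import clip  # OpenAI CLIP: pip install git+https://github.com/openai/CLIP.git

# Hypothetical expert-defined concepts and class count (placeholders for illustration).
concepts = ["has red wing patch", "has hooked beak", "has webbed feet"]
num_classes = 10

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

# Pre-compute normalized text embeddings for the concept vocabulary.
with torch.no_grad():
    text_tokens = clip.tokenize(concepts).to(device)
    concept_embeds = model.encode_text(text_tokens)
    concept_embeds = concept_embeds / concept_embeds.norm(dim=-1, keepdim=True)

def concept_scores(image_batch):
    """Map a preprocessed image batch to one CLIP similarity score per concept (the bottleneck)."""
    with torch.no_grad():
        img_embeds = model.encode_image(image_batch)
        img_embeds = img_embeds / img_embeds.norm(dim=-1, keepdim=True)
    return (img_embeds @ concept_embeds.T).float()  # shape: (batch, num_concepts)

# CBM head: a linear classifier trained on concept scores only,
# so every class prediction is explained through the concept layer.
cbm_head = torch.nn.Linear(len(concepts), num_classes).to(device)

The paper's finding is that these raw CLIP scores are often misaligned with the visual evidence even when the downstream classifier is accurate, which is what the proposed contrastive semi-supervised training of the concept layer aims to correct.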

Authors (4)
  1. Nithish Muthuchamy Selvaraj (5 papers)
  2. Xiaobao Guo (10 papers)
  3. Adams Wai-Kin Kong (19 papers)
  4. Alex Kot (31 papers)