Learning to Receive Help: Intervention-Aware Concept Embedding Models (2309.16928v3)

Published 29 Sep 2023 in cs.LG and cs.AI

Abstract: Concept Bottleneck Models (CBMs) tackle the opacity of neural architectures by constructing and explaining their predictions using a set of high-level concepts. A special property of these models is that they permit concept interventions, wherein users can correct mispredicted concepts and thus improve the model's performance. Recent work, however, has shown that intervention efficacy can be highly dependent on the order in which concepts are intervened on and on the model's architecture and training hyperparameters. We argue that this is rooted in a CBM's lack of train-time incentives for the model to be appropriately receptive to concept interventions. To address this, we propose Intervention-aware Concept Embedding models (IntCEMs), a novel CBM-based architecture and training paradigm that improves a model's receptiveness to test-time interventions. Our model learns a concept intervention policy in an end-to-end fashion, from which it can sample meaningful intervention trajectories at train-time. This conditions IntCEMs to effectively select and receive concept interventions when deployed at test-time. Our experiments show that IntCEMs significantly outperform state-of-the-art concept-interpretable models when provided with test-time concept interventions, demonstrating the effectiveness of our approach.
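The concept-intervention mechanism the abstract builds on can be illustrated with a small sketch: a concept bottleneck maps inputs to concept predictions, and at test time a user overwrites selected concepts with ground-truth values before the label predictor runs. The snippet below is a minimal, illustrative PyTorch version of a plain CBM with interventions, not the authors' IntCEM implementation; the class, layer sizes, and argument names (`ConceptBottleneckModel`, `intervention_mask`, `true_concepts`) are assumptions made for this example.

```python
# Minimal sketch of a Concept Bottleneck Model with test-time concept
# interventions (illustrative only; not the paper's IntCEM architecture).
import torch
import torch.nn as nn


class ConceptBottleneckModel(nn.Module):
    def __init__(self, input_dim: int, n_concepts: int, n_classes: int):
        super().__init__()
        # Backbone maps raw inputs to concept logits (the "bottleneck").
        self.concept_encoder = nn.Sequential(
            nn.Linear(input_dim, 128),
            nn.ReLU(),
            nn.Linear(128, n_concepts),
        )
        # Label predictor sees only the (possibly intervened) concepts.
        self.label_predictor = nn.Linear(n_concepts, n_classes)

    def forward(self, x, intervention_mask=None, true_concepts=None):
        concept_probs = torch.sigmoid(self.concept_encoder(x))
        if intervention_mask is not None and true_concepts is not None:
            # Concept intervention: wherever the mask is 1, replace the
            # model's predicted concept with the user-provided value.
            concept_probs = torch.where(
                intervention_mask.bool(), true_concepts, concept_probs
            )
        return self.label_predictor(concept_probs), concept_probs


# Usage: correct the first two concepts for a small batch at test time.
model = ConceptBottleneckModel(input_dim=32, n_concepts=5, n_classes=3)
x = torch.randn(4, 32)
true_concepts = torch.randint(0, 2, (4, 5)).float()
mask = torch.zeros(4, 5)
mask[:, :2] = 1.0
logits, concepts = model(x, intervention_mask=mask, true_concepts=true_concepts)
```

IntCEM's contribution, per the abstract, is to make the model receptive to such interventions by learning an intervention policy end-to-end and sampling intervention trajectories during training, rather than only applying interventions at test time as in the plain CBM sketched above.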

Authors (6)
  1. Mateo Espinosa Zarlenga (14 papers)
  2. Katherine M. Collins (32 papers)
  3. Krishnamurthy Dvijotham (58 papers)
  4. Adrian Weller (150 papers)
  5. Zohreh Shams (12 papers)
  6. Mateja Jamnik (57 papers)
Citations (12)
