
Concept Bottleneck Sparse Autoencoders

Updated 18 December 2025
  • CB-SAE is a neural architecture that integrates user-specified concepts into sparse autoencoders to produce interpretable, monosemantic latent representations.
  • It combines pruning, post-hoc concept alignment, and geometric criteria to improve steerability and reconstruction performance over standard SAEs.
  • This approach enables precise causal interventions in vision-language and large language models by allowing controlled edits in concept-specific dimensions.

Concept Bottleneck Sparse Autoencoders (CB-SAE) are neural architectures that explicitly enforce alignment between latent representations and a human-specified set of concepts, while retaining the structural strengths of sparse autoencoders. CB-SAEs have emerged as a principled solution to the limitations of both standard sparse autoencoders (SAEs), which often discover uninterpretable or entangled representations, and supervised concept bottleneck models (CBMs), which lack the unsupervised expressivity and variation capture of SAEs. CB-SAEs combine pruning, post-hoc concept alignment, and geometric criteria to yield models with both interpretable axes and steerable latent representations, enabling precise mechanistic interpretability and model control in applications ranging from large vision-language models (LVLMs) to large language models (LLMs) (Kulkarni et al., 11 Dec 2025; Yang et al., 1 Dec 2025; Rocchi--Henry et al., 8 Dec 2025).

1. Architectural Foundations and Formal Structure

A Concept Bottleneck Sparse Autoencoder augments a conventional sparse autoencoder with a user-aligned concept bottleneck. The steps are as follows (Kulkarni et al., 11 Dec 2025):

  • Base Input: Let $v \in \mathbb{R}^d$ be an activation from a frozen vision encoder $f$ (e.g., CLIP-ViT).
  • Pruned SAE Branch: The overcomplete SAE latent $z \in \mathbb{R}^{\omega}$ is pruned to retain only the $R$ most “useful” neurons (removing the $M = \omega - R$ least useful units), yielding $z' \in \mathbb{R}^{\omega - M}$ and the reconstruction $\hat{v}_\text{sae}$.
  • Concept Bottleneck Branch: An encoder $E_{cb}$ produces logits $c \in \mathbb{R}^{|C|}$ for the set of user-specified concepts $C$, with hard sparsification (e.g., Top-$k$) to obtain $\hat{c}$. A decoder $D_{cb}$ maps $\hat{c}$ back to the original feature space, producing $\hat{v}_{cb}$.
  • Final Reconstruction: $v' = \hat{v}_\text{sae} + \hat{v}_{cb}$, with both branches sharing a bias $b$.

The architecture preserves the high-capacity, non-conceptual “dictionary” of the unsupervised SAE, while guaranteeing the inclusion of concept-aligned directions through the bottleneck. Pruning is guided by the sum of neuron interpretability ($I$) and steerability ($S$), as detailed in Section 3.
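The two-branch structure can be made concrete with a short sketch. The following is a minimal PyTorch illustration, not the reference implementation: the class and attribute names (`ConceptBottleneckSAE`, `enc_sae`, `dec_cb`, etc.), the linear encoders/decoders, the ReLU sparse code, and the hard Top-$k$ on the concept logits are all assumptions chosen to mirror the description above.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConceptBottleneckSAE(nn.Module):
    """Sketch of a CB-SAE: pruned SAE branch + concept bottleneck branch."""

    def __init__(self, d: int, n_sae: int, n_concepts: int, top_k: int = 8):
        super().__init__()
        # Pruned SAE branch (n_sae = number of retained dictionary neurons).
        self.enc_sae = nn.Linear(d, n_sae, bias=False)
        self.dec_sae = nn.Linear(n_sae, d, bias=False)
        # Concept bottleneck branch (one slot per user-specified concept).
        self.enc_cb = nn.Linear(d, n_concepts, bias=False)
        self.dec_cb = nn.Linear(n_concepts, d, bias=False)
        # Bias b shared by both branches.
        self.bias = nn.Parameter(torch.zeros(d))
        self.top_k = top_k

    def forward(self, v: torch.Tensor):
        x = v - self.bias
        # SAE branch: sparse code over the retained dictionary.
        z = F.relu(self.enc_sae(x))
        v_sae = self.dec_sae(z)
        # Concept branch: logits c, hard Top-k sparsification to obtain c_hat.
        c = self.enc_cb(x)
        top = c.topk(self.top_k, dim=-1).indices
        c_hat = c * torch.zeros_like(c).scatter_(-1, top, 1.0)
        v_cb = self.dec_cb(c_hat)
        # Final reconstruction: sum of both branches plus the shared bias.
        v_rec = v_sae + v_cb + self.bias
        return v_rec, z, c, c_hat
```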

In implementations such as AlignSAE, the latent code $z$ is explicitly partitioned into a concept-aligned block $z_\text{concept} \in \mathbb{R}^R$ (one slot per target concept) and a free block $z_\text{rest} \in \mathbb{R}^{K-R}$ for monosemantic factors not covered by concepts. Dedicated value heads may be attached to each concept slot (Yang et al., 1 Dec 2025).

2. Mathematical Objectives and Training Paradigms

CB-SAE employs a composite loss function that combines the following terms (a code sketch of the first two follows the list):

  • Sparse Autoencoder Loss: Standard reconstruction and sparsity penalties,

$$\mathcal{L}_\text{sae} = \mathbb{E}_x\left[ \| v - \hat{v} \|_2^2 + \lambda \| z \|_1 \right].$$

  • Concept Alignment Loss: For the bottleneck neurons, alignment to pseudo–ground-truth concept activations is enforced. In (Kulkarni et al., 11 Dec 2025), a “cosine-cubed” loss is used:

$$\mathcal{L}_\text{int}(c, \hat{y}) = -\sum_{k=1}^{|C|} \frac{ (c_k^3) \cdot (\hat{y}_k^3) }{ \| c_k^3 \|_2 \, \| \hat{y}_k^3 \|_2 },$$

where $\hat{y}$ derives from a zero-shot CLIP classifier.

  • Steerability (Cyclic) Loss: Ensures that edits to concept neurons translate into the intended changes in the output, by cyclically encoding and decoding the latent intervention.
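Under the definitions above, the first two loss terms can be sketched as follows. This is a hedged illustration: the cosine-cubed loss is interpreted here as a per-concept cosine similarity computed over the batch dimension, and the steerability (cyclic) term is omitted.

```python
import torch
import torch.nn.functional as F

def sae_loss(v, v_rec, z, lam=1e-3):
    """L_sae: reconstruction error plus an L1 sparsity penalty on the code z."""
    return F.mse_loss(v_rec, v) + lam * z.abs().sum(dim=-1).mean()

def cosine_cubed_alignment(c, y_hat, eps=1e-8):
    """L_int: negative cosine similarity between cubed concept logits and cubed
    pseudo-labels, summed over the |C| concepts (one cosine per concept column)."""
    c3, y3 = c ** 3, y_hat ** 3                   # shapes: (batch, |C|)
    num = (c3 * y3).sum(dim=0)                    # per-concept dot product over the batch
    den = c3.norm(dim=0) * y3.norm(dim=0) + eps
    return -(num / den).sum()
```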

For hybrid architectures (as in AlignSAE (Yang et al., 1 Dec 2025)), a two-phase training procedure is adopted (see the sketch after this list):

  1. Unsupervised Pre-training learns a reconstructive, high-capacity dictionary with $\mathcal{L}_\text{sae}$ over all latent slots.
  2. Supervised Post-training binds specific concept slots via a cross-entropy loss over $z_\text{concept}$ and ground-truth labels, while preserving free slots for additional variation.
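A minimal sketch of the two phases, assuming an SAE object exposing `encode`/`decode` methods, a latent code whose first $R$ slots form $z_\text{concept}$, and integer concept labels (one concept per example). These interface details are assumptions for illustration, not AlignSAE's actual API.

```python
import torch
import torch.nn.functional as F

def pretrain_step(sae, v, opt, lam=1e-3):
    """Phase 1: unsupervised pre-training of the full dictionary with L_sae."""
    z = sae.encode(v)                              # all K latent slots
    v_rec = sae.decode(z)
    loss = F.mse_loss(v_rec, v) + lam * z.abs().sum(dim=-1).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

def posttrain_step(sae, v, labels, opt, n_concepts, alpha=1.0):
    """Phase 2: supervised post-training that binds the first R = n_concepts
    slots (z_concept) to labels via cross-entropy, while reconstruction keeps
    the remaining free slots (z_rest) absorbing residual variation."""
    z = sae.encode(v)
    v_rec = sae.decode(z)
    z_concept = z[:, :n_concepts]                  # concept-aligned block
    loss = F.mse_loss(v_rec, v) + alpha * F.cross_entropy(z_concept, labels)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```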

Concept-cone geometric objectives, as proposed in (Rocchi--Henry et al., 8 Dec 2025), regularize the SAE dictionary to “cover” the CBM’s concept cone using a LASSO-based distance penalty. This geometric approach unifies CBM and SAE methods, suggesting both select overlapping nonnegative cones in latent space.

3. Concept Alignment, Metrics, and Interpretability

Evaluation of CB-SAEs proceeds along several axes (a coverage-metric code sketch follows the list):

  • Concept Coverage and Containment: The degree to which the latent dictionary spans the same cone as a supervised CBM. Measured via the global coverage score (Cov), the sparse nonnegative least-squares error ($\delta_j$), and geometric axis-correlation metrics ($\rho_\text{geom}$) (Rocchi--Henry et al., 8 Dec 2025).
  • Interpretability: Using CLIP-Dissect, each neuron’s activation pattern is correlated with text-embedded concepts over a large image set, yielding an interpretability score ($I_j$), which is maximized when one concept dominates a neuron’s responses.
  • Steerability: Defined as the ability of isolated neuron interventions to reliably steer downstream generative models or LVLMs toward the corresponding concept, measured by cosine similarity between intervention outputs and the target concept semantics.
  • Monosemanticity (MS): Reflects how concentrated each neuron’s responses are on a single concept.
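A simple way to approximate the coverage metrics is nonnegative least squares: for each CBM concept direction, fit a nonnegative combination of SAE dictionary atoms and record the residual. The sketch below uses plain NNLS rather than the paper's LASSO-based penalty, and the tolerance `tol` is an illustrative choice.

```python
import numpy as np
from scipy.optimize import nnls

def cone_coverage(cbm_dirs, sae_dict, tol=0.1):
    """Approximate delta_j and Cov: per-concept NNLS residual against the
    nonnegative cone spanned by the SAE dictionary, and the fraction of
    concept directions reconstructed within tolerance."""
    # cbm_dirs: (n_concepts, d) concept directions from the supervised CBM.
    # sae_dict: (n_atoms, d) SAE decoder rows (dictionary atoms).
    A = sae_dict.T                                   # (d, n_atoms)
    deltas = []
    for w in cbm_dirs:
        _, resid = nnls(A, w)                        # resid = ||A x - w||_2 with x >= 0
        deltas.append(resid / (np.linalg.norm(w) + 1e-12))
    deltas = np.asarray(deltas)
    cov = float((deltas < tol).mean())
    return cov, deltas
```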

CB-SAE methods deliver substantial improvements over plain SAEs. On benchmarks (LLaVA-1.5-7B, LLaVA-MORE, UnCLIP), interpretability improves by approximately 32.1%, and steerability by 14.5% compared to standard SAEs (Kulkarni et al., 11 Dec 2025).

4. Pruning, Bottleneck Augmentation, and Algorithmic Pipeline

The post-hoc CB-SAE pipeline consists of:

  1. Unsupervised Dictionary Learning: Train a highly overcomplete SAE on large unlabelled data.
  2. Neuron Scoring and Pruning: Score each neuron by $U_j = I_j + S_j$ and prune the $\omega - R$ least useful units.
  3. Concept Coverage Assessment: For the remaining dictionary, assess coverage of a user-specified vocabulary via CLIP-Dissect or geometric containment criteria.
  4. Bottleneck Augmentation: For missing concepts, allocate new bottleneck neurons ($|C|$) with direct mapping to the requested concepts, training with $\mathcal{L}_\text{int}$ and $\mathcal{L}_\text{st}$ (alignment and steerability objectives).
  5. Fine-tuning: Freeze the pruned SAE, jointly optimize the concept bottleneck.

This process produces a latent space in which user-defined concepts are explicitly represented and all axes are either monosemantic (interpretable and/or steerable) or reserved for reconstruction fidelity.
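The scoring-and-pruning step of this pipeline reduces to a simple ranking once per-neuron scores are available. The sketch below assumes the interpretability and steerability scores ($I_j$, $S_j$) have already been computed (e.g., via CLIP-Dissect and steering probes); the function name and array layout are illustrative.

```python
import numpy as np

def prune_dictionary(interp_scores, steer_scores, decoder_weights, R):
    """Score each SAE neuron by U_j = I_j + S_j and keep the R highest-scoring
    dictionary atoms; the remaining omega - R units are pruned."""
    # interp_scores, steer_scores: (n_atoms,) per-neuron I_j and S_j.
    # decoder_weights: (n_atoms, d) dictionary atoms (decoder rows).
    utility = interp_scores + steer_scores           # U_j
    keep = np.argsort(-utility)[:R]                  # indices of the R most useful units
    return decoder_weights[keep], keep
```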

5. Operational Use: Interpretation, Control, and Causal Interventions

The structure of CB-SAE enables several operational mechanisms:

  • Concept Readout: At inference, the concept bottleneck provides a sparse and directly interpretable vector $c$, indicating the presence or absence (and potentially relative strength) of user-specified concepts.
  • Intervention and Model Control: Editing (swapping/amplifying/suppressing) a concept neuron, then decoding through $D_{cb}$, steers the generated output towards the targeted concept without interference from other directions (see the sketch after this list).
  • Causal Edits in LLMs: For language tasks, interventions on $z_\text{concept}$ (e.g., zeroing the current slot, amplifying a new one) allow for controlled “concept swaps” in generated text (Yang et al., 1 Dec 2025). In vision-language models, similar edits in latent space can modulate image generation or captioning.
  • Evaluation of Causal Efficacy: “Swap success” and category-preservation metrics quantify how often interventions produce the intended semantically correct output.
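A concept swap can be sketched in a few lines against the forward pass from the Section 1 sketch. This is an assumed interface (the `model.dec_cb` attribute and the fixed `strength` value are illustrative), not the exact intervention procedure of the cited papers.

```python
import torch

@torch.no_grad()
def concept_swap(model, v, src_idx, tgt_idx, strength=5.0):
    """Causal edit: zero the source concept slot, amplify the target slot, and
    add the resulting change in the concept-branch decoding to the reconstruction."""
    v_rec, z, c, c_hat = model(v)          # CB-SAE forward pass (Section 1 sketch)
    c_edit = c_hat.clone()
    c_edit[:, src_idx] = 0.0               # suppress the current concept
    c_edit[:, tgt_idx] = strength          # amplify the requested concept
    delta = model.dec_cb(c_edit) - model.dec_cb(c_hat)
    return v_rec + delta                   # edited activation, fed back to the host model
```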

6. Geometric Unification and Theoretical Implications

CB-SAE bridges the gap between unsupervised representation learning (SAE) and supervised concept models (CBM) by recasting both within the framework of concept cones—nonnegative spans of dictionary atoms in activation space (Rocchi--Henry et al., 8 Dec 2025). This yields important insights:

  • Both CBMs and SAEs, via different inductive biases (sparsity, expansion, supervision), select a set of directions whose nonnegative combinations define the feasible semantic regions in latent space.
  • Alignment between learned and desired cones can be rigorously quantified, and regularized during training, by geometric containment and activation-level criteria.
  • There exist “sweet spots” in the trade-off between sparsity, expansion ratio, and semantic alignment: e.g., target code sparsity $\approx 0.995$ with expansion $\approx 3\times$ achieves the best balance between concept coverage and parsimony.
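As a rough worked example (assuming a hypothetical backbone width of $d = 1024$): a $3\times$ expansion gives a dictionary of about $3072$ atoms, and a target code sparsity of $0.995$ means only about $0.005 \times 3072 \approx 15$ atoms are active per input.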

This perspective enables CB-SAE construction and assessment to be principled and comparable across unsupervised, supervised, and hybrid models.

7. Limitations, Empirical Results, and Extensibility

Empirical results across LVLMs and generative models confirm consistent advantages for CB-SAE over plain SAEs in interpretability and control. Key findings include:

| Architecture | Interpretability (CD) | Monosemanticity (MS) | Steerability (unit) | Steerability (white) |
|---|---|---|---|---|
| SAE (LLaVA-1.5) | 0.154 | 0.517 | 0.198 | 0.203 |
| CB-SAE (LLaVA-1.5) | 0.244 | 0.556 | 0.261 | 0.250 |
| SAE (UnCLIP) | 0.058 | 0.540 | 0.642 | 0.654 |
| CB-SAE (UnCLIP) | 0.092 | 0.594 | 0.659 | 0.664 |

Interpretability and steerability metrics per (Kulkarni et al., 11 Dec 2025)

Limitations include evaluation on small, flat ontologies (e.g., 6-relation factual queries), absence of compositional or hierarchical concept modeling, and reliance on frozen feature backbones. Open directions include: hierarchical or overlapping bottlenecks, dynamic slot allocation, external memory interfacing, and application to deeper network layers (Yang et al., 1 Dec 2025).

A plausible implication is that further integration of geometric alignment principles, multi-concept modeling, and scalable supervision can push CB-SAE methods toward general-purpose, mechanistically interpretable foundation model control.


References:
  • Kulkarni et al., 11 Dec 2025. "Interpretable and Steerable Concept Bottleneck Sparse Autoencoders."
  • Yang et al., 1 Dec 2025. "AlignSAE: Concept-Aligned Sparse Autoencoders."
  • Rocchi--Henry et al., 8 Dec 2025. "A Geometric Unification of Concept Learning with Concept Cones."
