Steering Out-of-Distribution Generalization with Concept Ablation Fine-Tuning (2507.16795v2)

Published 22 Jul 2025 in cs.LG, cs.AI, and cs.CL

Abstract: Fine-tuning LLMs can lead to unintended out-of-distribution generalization. Standard approaches to this problem rely on modifying training data, for example by adding data that better specify the intended generalization. However, this is not always practical. We introduce Concept Ablation Fine-Tuning (CAFT), a technique that leverages interpretability tools to control how LLMs generalize from fine-tuning, without needing to modify the training data or otherwise use data from the target distribution. Given a set of directions in an LLM's latent space corresponding to undesired concepts, CAFT works by ablating these concepts with linear projections during fine-tuning, steering the model away from unintended generalizations. We successfully apply CAFT to three fine-tuning tasks, including emergent misalignment, a phenomenon where LLMs fine-tuned on a narrow task generalize to give egregiously misaligned responses to general questions. Without any changes to the fine-tuning data, CAFT reduces misaligned responses by 10x without degrading performance on the training distribution. Overall, CAFT represents a novel approach for steering LLM generalization without modifying training data.

Summary

  • The paper demonstrates that CAFT significantly reduces misaligned responses by ablating latent directions associated with unwanted generalizations.
  • It utilizes interpretability tools like PCA and Sparse Autoencoders to isolate and remove spurious correlations without altering the training data.
  • CAFT maintains in-distribution performance while robustly mitigating out-of-distribution errors in tasks such as emergent misalignment and multiple choice settings.

Steering Out-of-Distribution Generalization with Concept Ablation Fine-Tuning

Introduction

The paper "Steering Out-of-Distribution Generalization with Concept Ablation Fine-Tuning" (2507.16795) introduces Concept Ablation Fine-Tuning (CAFT), an innovative technique designed to address unintended out-of-distribution (OOD) generalization in LLMs. Traditional methods to mitigate such generalizations often involve altering the training dataset to better specify desired behaviors. However, this approach is not always feasible. CAFT utilizes interpretability tools to modify the generalization behavior of LLMs without altering the training dataset, thereby offering a novel method for controlling OOD generalization.

The paper applies CAFT to three fine-tuning tasks: emergent misalignment and two multiple choice tasks with inherent spurious correlations. The results demonstrate significant reductions in OOD generalization errors without compromising in-distribution task performance, offering a robust solution for managing unintended generalization (Figure 1).

Figure 1: Models trained on insecure code with standard fine-tuning methods exhibit misaligned behavior. Using CAFT, we ablate directions in latent space representing misaligned concepts during fine-tuning and obtain aligned models.

Methodology

CAFT's methodology rests on identifying and ablating latent space directions that correspond to undesired concepts during the model's fine-tuning process. By applying linear projections that remove these directions, CAFT prevents the model from relying on unwanted generalizations. The method is distinctly advantageous in that it requires no modifications to the training dataset, circumventing the practical limitations of data-driven specification methods.
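To make the core operation concrete, the sketch below shows what concept ablation by linear projection could look like in practice: a forward hook that removes the component of a layer's hidden states lying in the span of the selected concept directions, kept in place throughout fine-tuning. The hook mechanism, layer attribute names, and variable names are illustrative rather than taken from the paper's implementation.

```python
import torch

def make_ablation_hook(directions: torch.Tensor):
    """Build a forward hook that removes the component of a layer's hidden
    states lying in the span of `directions` (shape [k, d_model])."""
    # Orthonormalize the concept directions so the projector is Q @ Q^T.
    # (Move Q to the model's device/dtype before registering the hook.)
    Q, _ = torch.linalg.qr(directions.T)          # [d_model, k], orthonormal columns

    def hook(module, inputs, output):
        hidden = output[0] if isinstance(output, tuple) else output
        coeffs = hidden @ Q                       # [..., k] coordinates in the concept subspace
        ablated = hidden - coeffs @ Q.T           # project onto the orthogonal complement
        if isinstance(output, tuple):
            return (ablated,) + output[1:]
        return ablated

    return hook

# Illustrative usage (layer indexing is hypothetical):
# for layer_idx, dirs in concept_directions.items():
#     model.model.layers[layer_idx].register_forward_hook(make_ablation_hook(dirs))
# ...then run the ordinary fine-tuning loop with the hooks in place.
```

Because the projection sits inside the forward pass, both the activations and the gradients flowing back through that layer's output lose their components along the ablated directions, which is what discourages the model from learning to use those concepts.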

The essence of CAFT lies in its ability to utilize interpretability techniques to discern undesirable concepts directly within the latent space of the model. Two principal methods are employed for this purpose: Principal Component Analysis (PCA) and Sparse Autoencoders (SAEs). These methods help isolate interpretable directions that, when ablated during fine-tuning, reduce the model's reliance on unintended generalizations (Figure 2).

Figure 2: Examples of a PC (left) and an SAE latent (right) from Qwen considered misaligned and ablated while applying CAFT.
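As a rough illustration of the PCA route, one plausible recipe is to collect residual-stream activations at a chosen layer for the same prompts under the base and fine-tuned models and take the top principal components of their difference as candidate concept directions. The specific activations, layer, and selection procedure here are assumptions for illustration, not the paper's exact setup.

```python
import torch

@torch.no_grad()
def candidate_directions_pca(acts_base: torch.Tensor,
                             acts_finetuned: torch.Tensor,
                             k: int = 10) -> torch.Tensor:
    """Return the top-k principal components of the difference between
    activations collected under the fine-tuned and base models
    (each tensor has shape [n_tokens, d_model])."""
    diffs = acts_finetuned - acts_base
    diffs = diffs - diffs.mean(dim=0, keepdim=True)          # center before PCA
    _, _, Vh = torch.linalg.svd(diffs, full_matrices=False)  # rows of Vh are principal directions
    return Vh[:k]                                            # [k, d_model] candidate directions

# Each candidate would then be inspected (e.g. via the prompts or tokens that
# most activate it) and kept only if it reads as an undesired concept; the
# retained rows are what get ablated during fine-tuning.
```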

Tasks and Implementation

Emergent Misalignment: This scenario involves LLMs that, when fine-tuned on tasks such as vulnerable code generation, provide egregiously misaligned responses to general queries. CAFT is applied by identifying and ablating latent features associated with such misalignments, reducing harmful generalizations significantly.

Multiple Choice Tasks with Spurious Correlations: When training data contains spurious correlations, CAFT addresses the challenge by ensuring that the model learns the intended task rather than the unintended shortcut. The paper showcases the efficacy of CAFT on tasks requiring pronoun disambiguation and selection of the appropriate verb in multiple choice settings (Figure 3).

Figure 3: Examples from multiple choice datasets highlighting the challenges addressed by CAFT in cases with spurious correlations.
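One way to quantify the shortcut behavior these datasets probe (a sketch, not the paper's evaluation protocol) is to score the fine-tuned model on held-out items where the intended answer and the spurious cue point to different options; the example format and field names below are hypothetical.

```python
import torch

@torch.no_grad()
def intended_preference_rate(model, tokenizer, examples) -> float:
    """For held-out items where the intended answer and the spurious cue
    disagree, measure how often the model prefers the intended option.
    Each example is a dict with 'prompt', 'intended', and 'shortcut' fields
    (hypothetical format)."""
    intended_wins = 0
    for ex in examples:
        scores = {}
        for key in ("intended", "shortcut"):
            ids = tokenizer(ex["prompt"] + ex[key], return_tensors="pt").input_ids
            loss = model(input_ids=ids, labels=ids).loss     # per-token mean NLL
            scores[key] = -loss.item() * ids.shape[1]        # approximate sequence log-likelihood
        intended_wins += scores["intended"] > scores["shortcut"]
    return intended_wins / len(examples)
```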

Results and Analysis

Applying CAFT yielded a tenfold reduction in misaligned responses in the emergent misalignment setting without degrading in-distribution performance. The method also effectively mitigated incorrect generalizations in the multiple choice settings. PCA and SAEs provided the interpretability needed to identify and ablate unintended concepts without requiring additional labeled data.

Comparative analyses against baselines such as random vector ablation and top latent ablation indicated that CAFT's success hinges on precise interpretation and selection of the subspaces to ablate: CAFT showed consistent advantages over both baselines (Figure 4).

Figure 4: Emergent misalignment results for different SAE methods, indicating CAFT's capacity to reduce misalignment while maintaining performance.
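The random-vector baseline is straightforward to sketch: sample as many random directions as CAFT ablates, orthonormalize them, and feed them through the same ablation machinery, so any difference in outcome is attributable to which subspace is removed rather than how much of it is removed. The seed handling and shapes below are illustrative.

```python
import torch

def random_direction_baseline(num_directions: int, d_model: int, seed: int = 0) -> torch.Tensor:
    """Sample `num_directions` random orthonormal directions in the model's
    residual stream, to be ablated with the same hook used for CAFT."""
    gen = torch.Generator().manual_seed(seed)
    random_dirs = torch.randn(d_model, num_directions, generator=gen)
    Q, _ = torch.linalg.qr(random_dirs)     # orthonormal columns spanning a random subspace
    return Q.T                              # [num_directions, d_model], matches the hook's input
```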

Conclusion

The research presented in this paper establishes CAFT as a potent tool for steering LLMs' OOD generalization without data modification. Its ability to leverage interpretability to manage model generalization represents a considerable advancement in the fine-tuning of LLMs. Future research may focus on further enhancing interpretability techniques used in CAFT and exploring automated interpretation methods to enlarge its applicability to broader contexts.

Overall, CAFT offers an effective approach to control unintended generalization in LLMs, holding promise for the future development of AI systems that are aligned with human values and expectations.
