
Training In-Context and In-Weights Mixtures Via Contrastive Context Sampling

Published 2 Apr 2026 in cs.LG | (2604.01601v1)

Abstract: We investigate training strategies that co-develop in-context learning (ICL) and in-weights learning (IWL), and the ability to switch between them based on context relevance. Although current LLMs exhibit both modes, standard task-specific fine-tuning often erodes ICL, motivating IC-Train - fine-tuning with in-context examples. Prior work has shown that emergence of ICL after IC-Train depends on factors such as task diversity and training duration. In this paper we show that the similarity structure between target inputs and context examples also plays an important role. Random context leads to loss of ICL and IWL dominance, while using only similar examples in context causes ICL to degenerate to copying labels without regard to relevance. To address this, we propose simple Contrastive-Context sampling, which enforces two types of contrasts: (1) a mix of similar and random examples within a context to evolve a correct form of ICL, and (2) varying grades of similarity across contexts to evolve ICL-IWL mixtures. We present insights on the importance of such contrast with theoretical analysis of a minimal model. We validate with extensive empirical evaluation on four LLMs and several tasks. Diagnostic probes confirm that contrasted contexts yield stable ICL-IWL mixtures, avoiding collapse into pure ICL, IWL, or copying.

Summary

  • The paper introduces a contrastive context sampling protocol that dynamically balances in-context learning and in-weights learning.
  • It demonstrates that adaptive mode selection based on target-context similarity minimizes brittle behaviors seen in standard fine-tuning regimes.
  • Extensive experiments across translation, Text-to-SQL, and semantic parsing tasks validate the method's efficacy in maintaining performance over diverse settings.

Training In-Context and In-Weights Mixtures with Contrastive Context Sampling

Introduction and Motivation

LLMs exhibit two principal learning modalities post-pretraining: in-weights learning (IWL), where domain/task information is embedded into model parameters, and in-context learning (ICL), in which models adapt on-the-fly using provided input-output exemplars at inference. Robust continuous adaptation for real-world tasks necessitates leveraging both modalities and the capacity to switch efficiently between them, depending on the similarity between the test input and supplied context examples.

Standard fine-tuning regimes often disrupt this equilibrium, with zero-shot fine-tuning degrading ICL and traditional in-context (IC) fine-tuning exhibiting brittle, regime-dependent behaviors. "Training In-Context and In-Weights Mixtures Via Contrastive Context Sampling" (2604.01601) presents a systematic investigation into how context-target similarity governs the emergence and maintenance of ICL-IWL mixtures and proposes a contrastive context sampling protocol designed to robustly co-train both capabilities while equipping the model to select the appropriate “mode” at test time.

Figure 1: Visual summary of main findings—standard fine-tuning collapses ICL, random-context IC fine-tuning degrades both, similar-context fine-tuning degenerates to copying, and contrastive-context yields a robust, switchable ICL-IWL mixture.

Problem Formalization and Prior Regimes

The paradigm is formalized as fine-tuning on task data D = \{(\mathbf{x}_i, \mathbf{y}_i)\} with the aim that, after fine-tuning, (1) the model benefits from additional labeled examples supplied in context (absorbed via ICL) in high-similarity test scenarios without further updates, and (2) maintains generalization for unseen or dissimilar test points (relying on IWL).

Key baseline regimes are:

  • Random-Context: Context elements for IC training are sampled randomly, irrespective of target similarity—this emphasizes IWL and erodes ICL.
  • Similar-Context: Context elements are close to the target (e.g., top-k by similarity)—this suppresses IWL and induces brittle, blind-copying ICL.
  • Contrastive-Context (proposed): Contexts are composed to deliberately span a spectrum of target-context similarities and ensure contrasts both within a context and across the batch, including synthetic “paraphrases” of the target when natural similar examples are lacking.
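The regimes above differ only in how context examples are drawn for each training target. A minimal sketch of the proposed sampling, under assumed interfaces (`similarity` as any retrieval score such as embedding cosine, `p_similar` as the per-context mixing rate; neither name is from the paper):

```python
import random

def sample_contrastive_context(target, pool, similarity, k=4, p_similar=0.5):
    """Build a k-example context mixing similar and random examples
    (intra-context contrast). Illustrative, not the paper's exact interface."""
    # Rank the candidate pool by similarity to the target input.
    ranked = sorted(pool, key=lambda ex: similarity(target, ex), reverse=True)
    n_similar = sum(random.random() < p_similar for _ in range(k))
    similar = ranked[:n_similar]                   # near-neighbours of the target
    rest = [ex for ex in pool if ex not in similar]
    rand = random.sample(rest, k - n_similar)      # unrelated examples
    context = similar + rand
    random.shuffle(context)
    return context

def sample_batch(targets, pool, similarity, k=4):
    # Vary the similar/random ratio across contexts so the batch spans a
    # spectrum of target-context similarity (inter-context contrast).
    return [sample_contrastive_context(t, pool, similarity, k,
                                       p_similar=random.random())
            for t in targets]
```

Random-Context corresponds to `p_similar=0` everywhere and Similar-Context to `p_similar=1`; the contrastive regime deliberately spans the range, both within a context and across the batch.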

Theoretical Analysis

The paper constructs a minimal two-layer transformer model with distinct architectural components: a learned in-weights learner \hat{f}, and self-attention parameters \theta_1, \theta_2, \theta_3 which implement switching among pure ICL, pure IWL, and degenerate copying. Analytical investigation shows:

  • Training with Random-Context yields a stationary point favoring IWL, ignoring context.
  • Training with Similar-Context leads to context averaging (ICL) or even context-blind copying, suppressing in-weights learning.
  • Only contrastive training (random-similar mixtures with intra-context contrasts) induces optimal parameters that allow the model to perform ICL only when context is highly similar and default to IWL otherwise—i.e., learning to select between modes adaptively.

This is achieved by shaping the attention parameters so the model can dynamically upweight the context or rely on internalized parametric representations as warranted.
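A toy numerical illustration of this switching behavior (not the paper's exact parameterization): attention is computed over the context slots plus one extra "fall back to weights" slot with a fixed bias, so the prediction interpolates between a context-label average (ICL) and the in-weights prediction (IWL). `w_iwl` and `bias` are assumed stand-ins for the learned parameters.

```python
import numpy as np

def switched_prediction(x, context, w_iwl, bias=0.5, temperature=0.05):
    """Toy ICL-IWL switch: attend to context labels when some context input
    is much more similar to x than the bias threshold; otherwise fall back
    to the in-weights predictor w_iwl @ x."""
    xs = np.array([c[0] for c in context])   # context inputs
    ys = np.array([c[1] for c in context])   # context labels
    sims = xs @ x                            # dot-product similarity to target
    f_hat = w_iwl @ x                        # in-weights (IWL) prediction
    logits = np.append(sims, bias) / temperature   # last slot = IWL fallback
    attn = np.exp(logits - logits.max())
    attn /= attn.sum()
    return float(attn[:-1] @ ys + attn[-1] * f_hat)
```

With a highly similar context example the output tracks that example's label (ICL); with only dissimilar examples the bias slot wins and the output reverts to the parametric prediction (IWL), mirroring the adaptive mode selection the analysis attributes to contrastive training.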

Empirical Results

Extensive empirical analysis is conducted on 32 model-task-test settings (across Llama 3.2 1B, Llama 3.1 8B, Qwen 2.5 7B, and Mistral 7B), over four low-resource machine translation tasks, eleven Text-to-SQL tasks, and three multilingual semantic parsing tasks.

Performance is always plotted as a function of target-context similarity, with methods compared under both in-domain (ID) and out-of-domain (OOD) evaluation. The trends are:

  • Zero-shot fine-tuning harms ICL, especially for high-similarity test cases.
  • IC-Train with Random-Context is consistently worst when high target-context similarity is present—incapable of leveraging related examples.
  • Similar-Context fine-tuning performs poorly for low-similarity test points due to lack of IWL retention and a tendency towards blind copying.

Contrastive-Context is among the best or competitive across the full spectrum, uniquely preserving both IWL and ICL.

Figure 2: Task accuracy across target-context similarity, demonstrating that only the contrastive context method achieves high, stable accuracy throughout the similarity spectrum, while other regime-specific methods show severe drops in one or more regions.

Diagnostic Probes and Emergence of Failure Modes

Dedicated probing is employed to trace the emergence and relative strengths of three behaviors: IWL, ICL, and degenerate copying. These include direct measurement of prediction overlap with context labels and blind-copying probes with permuted input-output pairs. During fine-tuning:

  • Random-Context fine-tuning yields rapid ICL loss.
  • Similar-Context fine-tuning produces increased copy scores (highly susceptible to spurious label transfer).
  • Only contrastive contexts retain strong IWL and ICL without collapsing into a pathological regime.

    Figure 3: Probing the learning dynamics—contrastive-context uniquely maintains and balances ICL and IWL, while other regimes induce collapse into brittle failure modes.
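A blind-copy probe of the kind described can be sketched as follows (a simplified framing; `predict(x, context)` and `context_builder(x)` are assumed interfaces, not the paper's code):

```python
import random

def copy_score(predict, eval_set, context_builder, seed=0):
    """Permute the input-output pairing inside each context: a model that
    still reproduces the (now wrong) context labels is copying blindly
    rather than doing relevance-aware ICL."""
    rng = random.Random(seed)
    copied = 0
    for x, _y in eval_set:
        context = context_builder(x)
        labels = [y for _, y in context]
        rng.shuffle(labels)                           # break input-output pairing
        permuted = [(cx, yl) for (cx, _), yl in zip(context, labels)]
        pred = predict(x, permuted)
        copied += pred in labels                      # echoed a context label?
    return copied / len(eval_set)
```

A pure copier scores near 1.0 on this probe, while a model relying on in-weights knowledge (or relevance-gated ICL) scores near 0.0 when the permuted labels are unrelated to the correct answer.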

Ablation Studies

Further ablations address:

  • Robustness to context sampling hyperparameters (p, \epsilon) and paraphrase quality.
  • Necessity of explicit intra-context and inter-context contrasts (simple mixtures are insufficient).
  • Comparing contrast-aware paraphrasing to naïve data augmentation (only the former yields robust mixtures).

Implications and Future Directions

This work sharpens operational and theoretical understanding of fast adaptation in LLMs. Practically, contrastive context sampling offers a simple, effective prescription for fine-tuning LLMs so they can flexibly absorb feedback during deployment (continuous adaptation) and reliably adapt in both low and high-similarity settings—a critical capability for both production and few-shot learning uses. Theoretically, the findings reinforce that proper mixture modeling (balancing ICL and IWL dynamically) requires not only diverse batch/task sampling but deliberate intra- and inter-context contrasts to avoid mode collapse and pathological overfitting.

Future research directions suggested by this work include:

  • Extension to extremely low-resource and cross-lingual tasks, where appropriate context similarity is even more crucial.
  • Deeper mechanistic analysis of how contrastive sampling interacts with specific transformer architectural motifs.
  • Development of metrics or diagnostics for adaptive mode selection in-the-wild and during continual learning.

Conclusion

The paper provides a rigorous dissection of fine-tuning strategies for balancing and mixing in-context and in-weights learning in LLMs. It demonstrates, both theoretically and empirically, that context-target similarity structure is the primary determinant of effective adaptive behavior post-fine-tuning. The contrastive context protocol proposed induces robust, switchable ICL-IWL mixtures and mitigates common pathologies observed in naïve fine-tuning. This establishes a strong methodological foundation for real-world adaptation tasks and prompts broader reconsideration of how context is incorporated in LLM training and deployment (2604.01601).
