Dynamic In-Context Learning (DynaICL)

Updated 1 February 2026
  • Dynamic In-Context Learning (DynaICL) is a method that dynamically updates a context buffer with new observations to enable adaptive, state-dependent decision-making.
  • It employs algorithms such as dynamic context expansion, relevance-based pruning, and demonstration selection to optimize performance in sequential and multimodal tasks.
  • Architectural implementations like the DynaMIC framework demonstrate DynaICL’s effectiveness in embodied robotics, retrieval-augmented generation, and adaptive planning.

Dynamic In-Context Learning (DynaICL) refers to a set of mechanisms, architectures, and algorithms enabling models—particularly LLMs and multimodal systems—to adapt their predictions, plans, or behaviors at inference time by building, updating, and exploiting context or demonstration sets in a dynamic, input- and state-dependent fashion. Unlike static in-context learning, where context is fixed and predefined before inference, DynaICL emphasizes ongoing, context-sensitive modification of the model input—incorporating new evidence, perceptual data, or demonstration subsets as the computation proceeds. This approach has proven central to modern LLM-augmented agentic systems, embodied robotics, retrieval-augmented generation, and adaptive decision-making agents.

1. Formalism and Mechanisms

At its core, Dynamic In-Context Learning maintains a context buffer or memory $C_t$ at time or step $t$, which encodes the history of perceptual inputs, symbolic trajectories, natural language instructions, and other task-relevant state. This context is updated as new perceptual or interaction events occur, with the principal update rule (as in DynaMIC) given by

$$C_t = C_{t-1} \cup \{ z_t \},$$

where $z_t$ is a structured natural language (NL) transcription of a new sensor observation, perception, or action at step $t$ (Yan et al., 29 Sep 2025). The context may include:

  • $q \in \mathcal{Q}$: Current human instruction.
  • $K_u$: Human prior knowledge of the scene.
  • $K_m$: Robot's base affordances, sensors, and actions.
  • $K_p^t = \bigcup_{i=1}^{t} P_i$: All multimodal perceptual data up to $t$.
  • $J^t = \langle j^0, \ldots, j^t \rangle$: Executed semantic actions/perceptions.
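The buffer and its update rule can be rendered as a minimal Python sketch. All names here (`ContextBuffer`, `as_prompt`, the field names) are illustrative, not part of the DynaMIC implementation:

```python
from dataclasses import dataclass, field

@dataclass
class ContextBuffer:
    """Illustrative DynaICL context buffer: holds the instruction q,
    prior knowledge K_u / K_m, and the growing sets K_p^t and J^t."""
    q: str                                    # current human instruction
    K_u: list = field(default_factory=list)   # human prior knowledge of the scene
    K_m: list = field(default_factory=list)   # robot affordances, sensors, actions
    K_p: list = field(default_factory=list)   # perceptual transcriptions up to t
    J: list = field(default_factory=list)     # executed semantic actions/perceptions

    def update(self, z_t: str) -> None:
        """C_t = C_{t-1} ∪ {z_t}: append a new NL transcription z_t."""
        self.K_p.append(z_t)

    def as_prompt(self) -> str:
        """Serialize the buffer into the model input for step t."""
        return "\n".join([f"Instruction: {self.q}",
                          *self.K_u, *self.K_m, *self.K_p, *self.J])

C = ContextBuffer(q="put the red mug on the shelf")
C.update("perceived: mug is blue, not red")   # z_1 enters the context
```

Because the buffer is serialized anew at each step, every update changes the "input program" the model sees on the next forward pass.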

Dynamic context is central to agentic decision-making and error detection, particularly for handling cases such as Directive Counterfactuals (DCFs) in robotics—where the "in-context" memory enables robots to compare perceived knowledge with prior assumptions and halt execution upon epistemic disagreement.

In general, DynaICL updates are governed by workflow- or observation-triggered expansions of $C_t$, with optional pruning or prioritization:

$$C_t \leftarrow \mathrm{top\text{-}K}_{\mathrm{Score}}(C_t),$$

where

$$\mathrm{Score}(z_i; q, K_v) = \alpha \cdot \mathrm{sim_{text}}(q, z_i) + (1-\alpha) \cdot \mathrm{sim_{vis}}(K_v, z_i)$$

governs relevance-based retention if context window size is a constraint.
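The top-$K$ retention rule is straightforward to sketch. The similarity functions below are toy stand-ins (the real $\mathrm{sim_{text}}$ and $\mathrm{sim_{vis}}$ would be embedding-based); `prune_context` and its parameters are illustrative names:

```python
import heapq

def score(z, sim_text, sim_vis, alpha=0.7):
    # Score(z_i) = α·sim_text(q, z_i) + (1 − α)·sim_vis(K_v, z_i)
    # (q and K_v are assumed to be captured inside the similarity callables)
    return alpha * sim_text(z) + (1 - alpha) * sim_vis(z)

def prune_context(C, K, sim_text, sim_vis, alpha=0.7):
    """C_t ← top-K_Score(C_t): keep the K highest-scoring entries."""
    return heapq.nlargest(K, C, key=lambda z: score(z, sim_text, sim_vis, alpha))

# Toy similarities: longer entries score higher on "text";
# entries mentioning "vis" score higher on "visual".
C = ["short", "a much longer entry", "vis cue here"]
kept = prune_context(C, K=2,
                     sim_text=lambda z: len(z) / 20,
                     sim_vis=lambda z: 1.0 if "vis" in z else 0.0)
```

`heapq.nlargest` keeps the pruning at $O(n \log K)$, which matters when the buffer grows every step but the window budget $K$ is fixed.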

2. Architectural Realizations

Dynamic In-Context Learning is instantiated in various LLM-centric and multimodal architectures designed for sequential, embodied, or agentic reasoning. The DynaMIC framework exemplifies a leading multimodal design (Yan et al., 29 Sep 2025):

  • Visual Encoder: A large-scale Transformer-based model (CogVLM) processes high-dimensional inputs (e.g., RGB images), producing prompt-conditioned, natural-language attribute summaries via independent, relative, and interactivity heads ($p_{v_i}$, $p_{v_r}$, $p_{v_c}$), which are concatenated into $K_v$.
  • Textual Planner: An LLM (e.g., GPT-4 Turbo) is prompted with ($q$, $K_v$, $K_b$, optional $C_{t-1}$), performing a two-pass process: (1) intention alignment with knowledge grounding and DCF detection, and (2) trajectory generation and refinement. The separation of modal encodings is preserved up to the point of prompt concatenation, leveraging pretrained alignments without additional cross-modal attention.
  • Dynamic Loop: The forward pass at each step potentially yields new perception transcriptions, dynamically grown context, and replanning, all mediated by explicit high-level symbolic updates and NL feedback.

This pattern recurs throughout state-of-the-art DynaICL frameworks: dynamic construction of the in-context input to the LLM or agent, recalibration of demonstrations, and closed-loop interaction across modalities and time.

3. Algorithmic Procedures and Updating Strategies

Central to DynaICL are explicit, stage-wise procedures for expanding, pruning, and utilizing in-context knowledge. For instance, in DynaMIC (Yan et al., 29 Sep 2025):

Algorithm A (DCF-aware Trajectory Pre-generation):

  1. Loop:
    • Generate intention $\xi$ from $q$ and try to align it with $K_v$ via generation $G$; if misaligned, halt and request revision.
  2. Given intention $\xi$, generate initial plan $J_0$.
  3. Insert perception steps after actions in $J_0$ as needed to increase DCF observability.
  4. Return refined plan $J^*$.
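Under illustrative assumptions, Algorithm A can be sketched in Python. The `MockLLM` interface (`generate_intention`, `aligned`, `request_revision`, `generate_plan`) is hypothetical scaffolding, not the DynaMIC API:

```python
class MockLLM:
    """Stand-in for the textual planner (illustrative interface only)."""
    def generate_intention(self, q):       return "intent:" + q
    def aligned(self, xi, K_v):            return any(k in xi for k in K_v)
    def request_revision(self, q, xi, K_v): return q + " (revised)"
    def generate_plan(self, xi):           return ["act:grasp", "act:place"]

def pregenerate_trajectory(q, K_v, llm, max_rounds=3):
    """Sketch of Algorithm A: align intention with visual knowledge K_v,
    then plan and insert perception steps to make DCFs observable."""
    for _ in range(max_rounds):
        xi = llm.generate_intention(q)          # intention ξ from q
        if llm.aligned(xi, K_v):                # grounding check via G
            break
        q = llm.request_revision(q, xi, K_v)    # misaligned: halt and revise
    else:
        raise RuntimeError("could not align intention with K_v")
    J0 = llm.generate_plan(xi)                  # initial plan J_0
    J_star = []
    for step in J0:                             # insert perception after actions
        J_star.append(step)
        if step.startswith("act:"):
            J_star.append("perceive:" + step[len("act:"):])
    return J_star                               # refined plan J*
```

Interleaving a perception step after every action is the simplest insertion policy; the point is that $J^*$ carries explicit observation points at which DCFs can surface.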

Algorithm B (Execution, In-Context Update, Replanning):

  1. For each step $j \in J^*$:
    • If movement: execute.
    • If perception: obtain raw sensor data, transcribe it to $z$, update $K_p$, and expand $C_t$.
  2. Re-prompt the LLM for DCF detection; on a positive detection, replan or fail out.
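The execution loop of Algorithm B can be sketched as follows. All callables here (`sense`, `transcribe`, `detect_dcf`, `execute`) are hypothetical hooks standing in for the robot's sensor stack and the re-prompted LLM:

```python
def execute_with_replanning(J_star, C, sense, transcribe, detect_dcf, execute):
    """Sketch of Algorithm B: run the plan J*, grow the context on each
    perception step, and re-check for Directive Counterfactuals (DCFs)
    after every new observation."""
    for step in J_star:
        if step.startswith("act:"):
            execute(step)                    # movement: just execute
        elif step.startswith("perceive:"):
            z = transcribe(sense(step))      # raw sensor data → NL transcription z
            C.append(z)                      # C_t = C_{t-1} ∪ {z_t}
            if detect_dcf(C):                # re-prompt the LLM on C_t
                return "replan"              # caller replans or fails out
    return "done"

# Toy run: the plan expects a red object, but perception reports blue.
C = []
status = execute_with_replanning(
    ["act:grasp", "perceive:grasp"], C,
    sense=lambda s: {"color": "blue"},
    transcribe=lambda raw: f"observed {raw['color']} object",
    detect_dcf=lambda C: "blue" in C[-1],    # expected red → epistemic disagreement
    execute=lambda s: None)
```

The key structural point is that DCF detection is re-run on the *updated* context after every perception, so execution halts at the first observation that contradicts the instruction.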

Other notable DynaICL algorithms include:

  • Dynamic Demonstrations Controller: Performs offline pilot runs to estimate the optimal number of in-context demonstrations $k^*$ per dataset/model, avoiding the assumption that "more is always better," and using KL-based intra/inter-class scoring for demonstration selection (Zhao et al., 2023).
  • Dynamic retrieval and self-learning: RAG systems maintain large pools of prior queries/templates, leveraging contextual embedding-based retrieval, robust clustering, and on-the-fly labeling/self-learning to assemble and update few-shot prompts for each query (Spaeh et al., 13 Jan 2026).
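The retrieval side of these methods reduces to ranking a demonstration pool against the current query embedding. The sketch below shows only this generic nearest-neighbor step (it omits the KL-based $k^*$ estimation and clustering the cited works add); `select_demonstrations` and the pool layout are illustrative:

```python
import math

def cosine(u, v):
    """Plain cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def select_demonstrations(query_emb, pool, k):
    """Assemble a k-shot prompt by retrieving the demonstrations whose
    embeddings are closest to the query (a simplification of the dynamic
    retrieval/self-learning pipelines described above)."""
    ranked = sorted(pool, key=lambda d: cosine(query_emb, d["emb"]), reverse=True)
    return [d["text"] for d in ranked[:k]]

pool = [{"text": "demo A", "emb": [1.0, 0.0]},
        {"text": "demo B", "emb": [0.0, 1.0]},
        {"text": "demo C", "emb": [0.9, 0.1]}]
demos = select_demonstrations([1.0, 0.05], pool, k=2)  # nearest two demos
```

In a full system the pool itself is dynamic: newly labeled queries are appended back into it, which is what makes the prompt assembly self-learning rather than static few-shot.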

4. Applications and Case Studies

Dynamic In-Context Learning mechanisms are deployed in a broad range of practical domains:

  • Embodied Robotics: DynaMIC enables robots to avoid failure modes induced by misleading or erroneous human instructions, employing dynamic context memory, counterfactual detection, and iterated feedback/plan refinement. Success rates for DCF detection and safe plan execution reach 88% in tested tabletop scenarios (Yan et al., 29 Sep 2025).
  • Retrieval-Augmented Generation (RAG): DynaICL underpins robust query suggestion by dynamically retrieving workflow-similar positive/negative exemplars, substantially increasing answerable suggestion rates in multi-agent RAG pipelines (Spaeh et al., 13 Jan 2026).
  • Agentic Sequential Decision-Making: DynaICL with dynamic demonstration set selection and real-time snippet retrieval elevates the performance and reliability of LLM agents for compositional, multi-step environments (Gupta et al., 16 Jun 2025).
  • Dynamic Data Extraction: Scientific information extraction pipelines update the prompt's in-context examples with newly verified extractions, substantially increasing precision/recall as well as data trust (Ekuma, 2024).

5. Comparative Results and Empirical Impact

Empirical studies confirm that dynamic context strategies outperform static few-shot selection in both accuracy and auxiliary criteria (e.g., safety, fairness, efficiency):

| Method | Task | Primary Eval Metric | DynaICL Gain | Reference |
|---|---|---|---|---|
| DynaMIC (dynamic multimodal context) | Robot DCF detection | Success rate (safe) | 88% (vs. lower for ablated variants) | (Yan et al., 29 Sep 2025) |
| D²Controller (dynamic $k$ pilot) | LLM text classification | Avg. accuracy | +5.4% rel. improvement | (Zhao et al., 2023) |
| Robust dynamic retrieval (dynamic pool) | RAG query suggestion | Answerability rate | +5–28% over static | (Spaeh et al., 13 Jan 2026) |
| Dynamic demonstration/snippet selection | Agentic LLMs | Task goal completion | Up to +14 pts (mini model) | (Gupta et al., 16 Jun 2025) |

Further, ablation experiments validate that dynamic context expansion (vs. static or naively pruned context) is critical for robust in-context adaptation, especially under context window constraints or in environments where heterogeneity in task difficulty or environmental configuration is significant.

6. Limitations, Open Challenges, and Theoretical Insights

Key limitations and open questions for DynaICL include:

  • Context-Length Constraints: Although many applications rarely overflow the context window, anticipatory pruning (by relevance or recency) may be needed; designing such pruning strategies introduces nontrivial trade-offs.
  • Choice of Demonstrations: Dynamic expansion/picking is shown to outperform static blocks, but optimal retrieval, ranking, and clustering strategies remain an open, context- and task-dependent research frontier.
  • Implicit Dynamics: Theoretical work indicates that transformers may realize DynaICL via implicit low-rank weight updates, suggesting that dynamic adaptation is a first-class phenomenon even in the forward pass (Dherin et al., 21 Jul 2025).
  • Modality and Architecture Generalization: Most results are in textual or relatively low-dimensional multimodal spaces; scaling DynaICL practices to high-dimensional continuous or richly structured domains is an active area.
  • Robustness to Adversarial Contexts: Dynamic in-context mechanisms can in principle detect or recover from misalignment or partial knowledge, but their performance in highly adversarial or distribution-shifting scenarios requires further study.
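The implicit-dynamics point can be stated schematically. The exact construction is architecture-dependent (see Dherin et al., 21 Jul 2025); the rendering below only conveys the shape of the claim, with $T_W$ a transformer block with weights $W$, $C$ the in-context prefix, and $x$ the query:

```latex
% Processing context C alongside input x acts like a low-rank weight update:
T_W(C, x) \;\approx\; T_{W + \Delta W(C)}(x),
\qquad \operatorname{rank}\big(\Delta W(C)\big) \ll \min(\text{dims of } W),
```

so expanding $C_t$ in DynaICL can be read as applying a sequence of implicit, gradient-free weight updates $\Delta W(C_t)$ during the forward pass.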

7. Synthesis and Theoretical Foundation

Dynamic In-Context Learning establishes a paradigm where task-relevant context is adaptively constructed, revised, and exploited on the fly, enabling large models and embodied agents to achieve gradient-free, data-efficient, and robust adaptation across a wide spectrum of tasks. The architecture-agnostic principle—updating the in-context buffer and consequently the "input program" for a model at each interaction—unifies disparate approaches in robotics, retrieval-augmented NLP, vision, and agentic planning, and is firmly grounded in both algorithmic innovation (such as dynamic $k$-shot estimation, pilot-based selection, robust retrieval) and theoretical insight (low-rank weight adaptation, context-dependent error minimization).

By separating the context memory from static, frozen inputs, DynaICL positions in-context learning as a model- and domain-agnostic substrate for closed-loop intelligent systems, capable of self-improving performance, error detection, and human-in-the-loop feedback realization (Yan et al., 29 Sep 2025, Zhao et al., 2023, Spaeh et al., 13 Jan 2026, Dherin et al., 21 Jul 2025).
