Determine the consolidation mechanism for transferring episodic agentic memory into model weights

Determine which mechanism should implement the consolidation channel that transfers agent experience from external episodic stores into model weights within the proposed dual-system agent architecture. Candidates include periodic fine-tuning, knowledge editing (e.g., targeted weight edits), test-time training layers, and nested-learning architectures; the choice must weigh latency, fidelity, and cost trade-offs.

Background

The paper argues that current agentic memory systems provide exemplar-based lookup and lack weight-based consolidation, leading to a generalisation gap and a “frozen novice” dynamic. To close this gap, the authors propose a dual-system architecture where a consolidation channel periodically converts episodic traces into parametric knowledge in the model weights.

Within this architecture, multiple plausible mechanisms could instantiate the consolidation channel, such as periodic fine-tuning, knowledge editing approaches, test-time training layers, and nested-learning architectures. The authors explicitly state that selecting the specific mechanism is an open engineering question due to differing latency, fidelity, and cost trade-offs.
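The dual-system split described above can be sketched schematically. The code below is a toy illustration, not the paper's method: `EpisodicTrace`, `DualSystemAgent`, the dict-of-weights representation, and the batch threshold are all invented for illustration, and the overwrite inside `consolidate()` merely stands in for whichever real mechanism (periodic fine-tuning, a targeted weight edit, a test-time training step, etc.) is ultimately chosen for the consolidation channel.

```python
from dataclasses import dataclass, field

@dataclass
class EpisodicTrace:
    """One agent episode: the task context and its observed outcome."""
    context: str
    outcome: str

@dataclass
class DualSystemAgent:
    """Schematic dual-system agent: a fast episodic store plus slow 'weights'.

    The 'weights' dict is a stand-in for model parameters; consolidate()
    is a placeholder for the actual consolidation mechanism, which is
    the open engineering question this task addresses.
    """
    episodic_store: list = field(default_factory=list)
    weights: dict = field(default_factory=dict)  # parametric knowledge
    consolidation_threshold: int = 4  # traces buffered before consolidating

    def record(self, trace: EpisodicTrace) -> None:
        """Fast path: append to the external episodic store (the 'memo')."""
        self.episodic_store.append(trace)
        if len(self.episodic_store) >= self.consolidation_threshold:
            self.consolidate()

    def consolidate(self) -> None:
        """Slow path: fold buffered episodic traces into parametric form.

        Here consolidation is a trivial key-value write; a real channel
        would instead run fine-tuning, a knowledge edit, or an inner
        learning step, trading off latency, fidelity, and cost.
        """
        for trace in self.episodic_store:
            self.weights[trace.context] = trace.outcome
        self.episodic_store.clear()  # traces now live 'in the weights'
```

In this sketch the trade-offs surface as design knobs: a smaller `consolidation_threshold` means fresher parametric knowledge (lower staleness) at higher consolidation cost, mirroring the latency/cost tension the paper leaves open.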

References

The specific mechanism is an open engineering question: periodic fine-tuning, knowledge editing, test-time training, and nested-learning architectures are all candidate instantiations, each with different latency, fidelity, and cost trade-offs.

Contextual Agentic Memory is a Memo, Not True Memory (2604.27707 - Xu et al., 30 Apr 2026), Section 7, Proposed Architecture: Co-existence of Memo and Memory (Design principles)