
Adaptive Task-specific Memory Module

Updated 29 December 2025
  • ATM Modules are structured memory systems modeled as trees or DAGs that support explicit task decomposition and substep dependency management.
  • They employ techniques like node-level insertion, revision, and similarity-based search to achieve efficient context synthesis and token-cost pruning.
  • Empirical evaluations show that ATM reduces token usage by about 19% while maintaining 100% task accuracy and enhancing robustness against hallucination.

The Adaptive Task-specific Memory (ATM) module is a principled memory architecture designed to endow machine-learning agents with persistent, structured, and context-sensitive recall for complex, multi-step, or continual tasks. ATM modules operationalize explicit task decomposition, substep dependency management, revision tracking, and efficient context synthesis through carefully engineered data structures and memory-control routines, enabling robust autonomy and mitigating the brittleness that arises from naive, linear prompt accumulation.

1. Hierarchical and Graph-Aware Structured Memory

ATM modules instantiate a structured memory architecture modeled as a rooted tree or, in general, a directed acyclic graph (DAG), generalizing the Task Memory Tree (TMT) formalism. At the core is a node-centric schema:

  • Let $T = (V, E)$ be a rooted tree with node set $V$ and parent–child edges $E \subseteq V \times V$, extended to a DAG $G = (V, E)$ with $E = E_T \cup E_D$ to support substep sharing and multiple dependency types.
  • Each node $n \in V$ stores an 8-tuple:

$$n = (\text{id}, \text{action}, \text{input}, \text{output}, \text{status}, \text{children}, \text{deps}, \text{metadata})$$

  • $\text{id}$: unique node identifier.
  • $\text{action}$: concise, textual action/step label.
  • $\text{input}$ / $\text{output}$: JSON-serializable data blobs.
  • $\text{status} \in \{\text{waiting}, \text{active}, \text{done}, \text{failed}\}$.
  • $\text{children}$: list of subtasks (tree edges $E_T$).
  • $\text{deps}$: cross-task or resource dependencies ($E_D$).
  • $\text{metadata}$: timestamps, retry counts, user references, etc.
  • Adjacency may be encoded via a sparse matrix $A \in \{0,1\}^{|V| \times |V|}$ or adjacency lists $\text{Adj}[v] = \{w : (v \to w) \in E\}$.

This explicit structure supports modular, interpretable tracking of task lineage, compositional access, and context pruning (Ye, 11 Apr 2025, Ye, 26 May 2025).
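
A minimal Python sketch of this node schema (the dataclass layout, defaults, and the uuid4-based id are illustrative assumptions, not from the source):

    from dataclasses import dataclass, field
    from typing import Any, Dict, List
    from uuid import uuid4

    @dataclass
    class Node:
        action: str                                   # concise action/step label
        input: Any = None                             # JSON-serializable blob
        output: Any = None
        status: str = 'waiting'                       # waiting | active | done | failed
        children: List['Node'] = field(default_factory=list)   # tree edges E_T
        deps: List['Node'] = field(default_factory=list)        # cross-links E_D
        metadata: Dict[str, Any] = field(default_factory=dict)  # timestamps, retries, ...
        id: str = field(default_factory=lambda: str(uuid4()))   # unique identifier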

2. Memory Update, Retrieval, and Dependency Management

ATM modules support insertion, revision, dependency recording, and retrieval at node granularity:

  • Insertion (WriteStep) creates a new child node under a parent:

    from uuid import uuid4

    def WriteStep(parent, action, input_data):
        # New nodes start in 'waiting'; output is filled in on completion.
        n = Node(id=str(uuid4()), action=action, input=input_data,
                 output=None, status='waiting')
        parent.children.append(n)   # tree edge in E_T
        V.add(n)                    # register in the global node set
        return n
  • Revision links a new node to its predecessor via a “revision_of” edge; deprecated nodes are marked and dependency rewiring may follow, as sketched below.
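
    A minimal sketch of this revision flow (the ReviseStep helper and its rewiring policy are illustrative assumptions, built on WriteStep above):

    def ReviseStep(parent, old, action, input_data):
        # Insert the replacement node, then record the "revision_of" link.
        n = WriteStep(parent, action, input_data)
        n.metadata['revision_of'] = old.id
        old.status = 'failed'                 # mark the predecessor deprecated
        for d in V:                           # rewire dependents onto the new node
            if old in d.deps:
                d.deps[d.deps.index(old)] = n
        return n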
  • Substep Reuse uses similarity search over action embeddings:

    def FindReusable(S, θ_reuse):
        # Return the first existing node whose action matches step S.
        q = embed(S)                  # embed the candidate step once
        for n in V:
            if cosine(q, embed(n.action)) > θ_reuse:
                return n
        return None

    This enables DAG consolidation for overlapping roles or converging workflows.
  • Dependency Recording supplements tree structure with cross-links for “depends_on” relationships, supporting multi-parent dependencies and shared subproblems.
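
    A sketch of dependency recording (the creates_cycle guard is an assumed helper; the source only notes that DAG updates require cycle detection):

    def AddDependency(node, prerequisite):
        # Record a "depends_on" cross-link (edge in E_D), refusing cycles.
        if creates_cycle(node, prerequisite):
            raise ValueError('dependency would introduce a cycle')
        node.deps.append(prerequisite)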
  • Retrieval (Read) selects relevant subgraphs:

    def RetrieveRelevant(user_query, k=5):
        # Rank nodes by cosine similarity over precomputed action embeddings.
        q_emb = embed(user_query)
        candidates = top_k_by_cosine(q_emb, V, key=lambda n: n.action_emb, k=k)
        # Keep unfinished work, plus finished nodes on the active path.
        return [n for n in candidates if n.status != 'done' or n in active_path]

    Nodes are indexed by semantic embeddings, facilitating rapid and robust context reconstruction.

3. Prompt Synthesis and Token-Efficient Context Construction

Prompting leverages hierarchical memory to construct condensed, context-consistent queries:

  • Active Path Extraction follows from root to active leaf within the tree or traverses user-relevant branches in a DAG.
  • Dynamic Prompt Assembly composes a prompt via:

$$\text{Prompt} = \bigoplus_{i=0}^{k} \big(\text{format}(v_i.\text{action}) \,\|\, \text{format}(v_i.\text{input}) \,\|\, \text{format}(v_i.\text{output})\big) \;\big\|\; \text{CurrentUserQuery}$$

  • Token-Cost Pruning enforces constraints:

$$P^{*} = \arg\min_{p \subseteq P} \sum_{v \in p} \mathrm{cost}(v) \quad \text{s.t.} \quad \mathrm{coherence}(p) \geq \tau$$

Nodes are selected to maintain contextual coverage (by coherence heuristics) under a token budget. In practice, ATM solutions retain the last $N$ active steps and any nodes referenced by dependencies (Ye, 11 Apr 2025).

  • This mechanism yields an approximately 19% token reduction relative to full-history concatenation on a 6-step task while preserving 100% task completion accuracy (Ye, 11 Apr 2025); a sketch of the assembly routine follows below.
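
A minimal sketch of budget-constrained prompt assembly (the count_tokens helper, separators, and default N are assumptions; the greedy rule mirrors the recency-plus-dependency heuristic above):

    def AssemblePrompt(active_path, user_query, budget, N=3):
        # Start from the last N active steps, then pull in referenced dependencies.
        keep = list(active_path[-N:])
        for n in list(keep):
            keep.extend(d for d in n.deps if d not in keep)

        def render(nodes):
            parts = [f"{n.action}\nIN: {n.input}\nOUT: {n.output}" for n in nodes]
            return "\n---\n".join(parts + [user_query])

        prompt = render(keep)
        while len(keep) > 1 and count_tokens(prompt) > budget:
            keep.pop(0)                       # prune oldest steps first
            prompt = render(keep)
        return prompt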

4. Modular Separation: Relationship Inference and Integration

ATM architectures maintain a strict modular contract:

  • Memory Data Structures (TMT/DAG): Store all execution traces and dependencies.
  • Task Relationship Inference Module (TRIM): Models intent, infers new subtask/revision/lookup operations, and classifies node–query relations via embedding similarity and lightweight classifiers; all relationship logic is kept external to the LLM itself (see the sketch after this list).
  • Prompt Synthesis Module: Transforms active memory slices to formatted prompts with minimal redundancy, governed by context-window constraints.
  • This modularization, requiring well under 100 lines of code for the node schema and vector storage, keeps implementation cost low. Most complexity centers on the relationship and rule-based reasoning logic within TRIM (Ye, 11 Apr 2025, Ye, 26 May 2025).
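
As an illustration of the TRIM contract, a thresholded-similarity rule for selecting the memory operation (the thresholds and labels are illustrative assumptions, not values from the source):

    def ClassifyOperation(best_node, best_sim):
        # Map the best node-query similarity to a memory operation.
        if best_sim > 0.9:
            return 'lookup', best_node        # query re-references an existing step
        if best_sim > 0.7:
            return 'revision', best_node      # query amends an existing step
        return 'new_subtask', None            # otherwise decompose into a new node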

5. Empirical Evaluation: Efficiency, Coherence, and Robustness

Direct comparative results and evaluation protocol have been reported as follows (Ye, 11 Apr 2025):

| Metric | Baseline (Linear) | ATM/TME (Structured) | Relative Change |
|---|---|---|---|
| Total tokens (6 steps) | 899 | 725 | –19.4% |
| Task accuracy | 100% | 100% | unchanged |
| Hallucination frequency | 0 | 0 | unchanged (more robust on longer tasks) |
| Human-incoherence flags | not specified | fewer | n/a |

Findings:

  • ATM structures reduce drift and hallucination in >10-step scenarios.
  • Structured context enables more interpretable LLM behavior compared to concatenation pipelines.

6. Design Principles, Tradeoffs, and Implementation Considerations

Architectural and practical lessons consolidate as follows (Ye, 11 Apr 2025, Ye, 26 May 2025):

| Principle | Tree-Based ATM | DAG-Extended ATM |
|---|---|---|
| Traversal | deterministic, serial | complex: branches and joins |
| Update overhead | local insertions | requires cycle detection |
| Substep reuse | limited (no sharing) | enabled (dependency edges) |
| Rollback handling | immediate backtrack | cascading; more edge cases |

Key properties:

  • Structured (hierarchical/graph) memory reduces context fragmentation and token waste.
  • Modularity ensures that memory, relationship inference, and prompt engineering can evolve separately.
  • Similarity indexing with precomputed embeddings accelerates both node reuse and dependency analysis (see the sketch after this list).
  • Pruning heuristics (recency, dependency, completion status) support scale-out to longer sequences.
  • Implementation remains lightweight: with a JSON-serializable schema and an external vector index, core memory routines remain compact.
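
A minimal sketch of such a precomputed-embedding index (numpy-based; the class and its API are illustrative assumptions):

    import numpy as np

    class EmbeddingIndex:
        """Caches unit-normalized action embeddings for cosine top-k queries."""
        def __init__(self):
            self.nodes, self.vecs = [], []

        def add(self, node, vec):
            # Normalize once at insertion so queries reduce to dot products.
            self.nodes.append(node)
            self.vecs.append(np.asarray(vec, dtype=float) / np.linalg.norm(vec))

        def top_k(self, query_vec, k=5):
            q = np.asarray(query_vec, dtype=float) / np.linalg.norm(query_vec)
            sims = np.stack(self.vecs) @ q        # cosine similarities
            return [self.nodes[i] for i in np.argsort(sims)[::-1][:k]]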

7. Applicability and Outlook

ATM modules, as operationalized in TME, support a broad spectrum of applications: multi-step LLM agent tasks, form-filling, workflow agents, and interactive assistants requiring persistent, revision-tracked, transparent state (Ye, 11 Apr 2025, Ye, 26 May 2025). Their plug-and-play nature, token efficiency, and robust error handling make them well suited to production automation in domains requiring dynamic subgoal management and high-context task completion.

A plausible implication is that ATM-style structured memory—encompassing graph, relational, and prompt modules—will serve as a robust primitive for the next generation of autonomous, dialog-based, and continual-learning systems requiring transparent, revision-aware, and compositional memory scaffolding.
