
Regenerative Logic-Core Protocol

Updated 21 January 2026
  • RLCP is a dual-stream framework that decouples LLMs' internal logic from stored factual associations to achieve a low-entropy reasoning core.
  • The protocol uses adversarial unlearning with gradient reversal layers and KL-divergence anchoring to purge factual data while preserving computational logic.
  • Empirical results reveal a sharp phase transition in factual retention and emergent chain-of-thought scaffolding, advancing modular neural architectures.

The Regenerative Logic-Core Protocol (RLCP) is a dual-stream adversarial unlearning framework for LLMs designed to decouple internal representations of general reasoning (logic) from those encoding specific factual knowledge (facts). RLCP implements a hypothesis termed "digital metabolism," positing that active forgetting of stored facts is necessary for distilling a low-entropy core for abstract logic, yielding models that retain algorithmic reasoning but discard entangled factual associations. Empirically, RLCP exhibits a sharp phase transition in factual retention and prompts emergent structural and behavioral phenomena, including crystallization of latent manifolds and spontaneous adoption of chain-of-thought reasoning. RLCP provides a dynamic, weight-level alternative to modular architectures that physically separate memory and computation, such as DeepSeek Engram, and constitutes a concrete step towards a "Neural CPU + Symbolic RAM" paradigm (Peng et al., 15 Jan 2026).

1. Digital Metabolism: Theoretical Motivation and Formulation

LLMs typically encode both reasoning operations and factual associations within shared weights, leading to "parameter entanglement." This superposition produces the "memory wall," where additional parameters serve mainly to store low-entropy facts rather than enhance computational capacity. When retrieval of a memorized fact fails at inference time, the model hallucinates rather than signaling the retrieval failure. RLCP introduces a thermodynamic analogy—digital metabolism—treating factual associations as high-energy states to be actively purged, leaving a core of abstract logical operators.

The underlying information-theoretic objective refines the standard bottleneck formulation:

\min_{Z}\; I(Z; F) - \beta\, I(Z; L)

where Z denotes the latent representation, F the factual component, and L the logical component. This objective penalizes factual retention in Z while explicitly preserving or boosting logical information.

A key empirical observation is that the gradients of the factual-recall loss \mathcal{L}_{\text{fact}} are nearly orthogonal to those of the logical-reasoning loss \mathcal{L}_{\text{logic}}:

\bigl|\cos(\nabla_\theta \mathcal{L}_{\text{fact}}, \nabla_\theta \mathcal{L}_{\text{logic}})\bigr| \leq \delta \ll 1

Given this, purging facts via gradient steps of size \eta has at most an O(\eta\delta) first-order effect on logical capabilities.
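The orthogonality condition above can be checked numerically. The following is a minimal numpy sketch (function names are hypothetical, not from the paper): it measures the cosine between two flattened gradient vectors and evaluates the resulting first-order bound \eta\delta\|\nabla\mathcal{L}_{\text{fact}}\|\|\nabla\mathcal{L}_{\text{logic}}\| on the logic-loss drift caused by one fact-purging step.

```python
import numpy as np

def grad_cosine(g_fact: np.ndarray, g_logic: np.ndarray) -> float:
    """Cosine similarity between two flattened gradient vectors."""
    denom = np.linalg.norm(g_fact) * np.linalg.norm(g_logic) + 1e-12
    return float(np.dot(g_fact, g_logic) / denom)

def logic_drift_bound(g_fact: np.ndarray, g_logic: np.ndarray, eta: float) -> float:
    """First-order O(eta * delta) bound on the logic-loss change caused by
    one fact-purging gradient step of size eta (delta = |cos| of the gradients)."""
    delta = abs(grad_cosine(g_fact, g_logic))
    return eta * delta * float(np.linalg.norm(g_fact)) * float(np.linalg.norm(g_logic))
```

With exactly orthogonal gradients the bound vanishes; any residual overlap scales the drift linearly in the step size.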

2. RLCP Framework: Architecture and Dual-Stream Training

RLCP employs a dual-stream training architecture comprising three interlocked objectives:

  1. Metabolic Stream: Adversarial unlearning of facts via a Gradient Reversal Layer (GRL) attached at a designated deep layer (l^*, e.g., layer 20), with a linear probe \mathcal{P}_\phi trained to recover entity identity from hidden states. The GRL inverts gradients for the probe's loss so that the backbone parameters \theta are optimized to destroy linearly decodable factual signals.
  2. Survival Stream: A retrieval-augmented generation (RAG)-style objective ensures the model can answer factual questions when relevant context is presented, preserving "contextual reasoning" capacity.
  3. Homeostatic Repair: A KL-divergence anchoring term stabilizes the unlearning process by maintaining the overall output distribution close to a reference, preventing catastrophic collapse.

The composite batch loss is:

\mathcal{L}_{\mathrm{total}} = \mathcal{L}_{\mathrm{RAG}} + \lambda_{\mathrm{adv}}\mathcal{L}_P + \mathcal{L}_L + \lambda_{\mathrm{KL}}\, D_{\mathrm{KL}}(P_{\mathrm{ref}} \Vert P_\theta)

where \mathcal{L}_P is the probe cross-entropy (maximized with respect to the probe, minimized with respect to the model), \mathcal{L}_{\mathrm{RAG}} addresses context-based QA, \mathcal{L}_L is an unlikelihood loss penalizing correct fact recall in the absence of context, and D_{\mathrm{KL}} anchors the model to the reference distribution.
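The composite loss can be sketched as a plain function over scalar per-term losses, with the KL anchor computed from next-token distributions. This is an illustrative assembly, not the paper's implementation; the weight values lam_adv and lam_kl are placeholders.

```python
import numpy as np

def kl_divergence(p_ref, p_theta, eps: float = 1e-12) -> float:
    """D_KL(P_ref || P_theta) between two next-token probability vectors."""
    p_ref = np.asarray(p_ref, dtype=float)
    p_theta = np.asarray(p_theta, dtype=float)
    return float(np.sum(p_ref * (np.log(p_ref + eps) - np.log(p_theta + eps))))

def rlcp_total_loss(l_rag: float, l_probe: float, l_unlikelihood: float,
                    p_ref, p_theta, lam_adv: float = 1.0, lam_kl: float = 0.1) -> float:
    """Composite RLCP batch loss: RAG + weighted probe + unlikelihood + KL anchor.
    (Weight values here are illustrative, not the paper's hyperparameters.)"""
    return (l_rag + lam_adv * l_probe + l_unlikelihood
            + lam_kl * kl_divergence(p_ref, p_theta))
```

When the model distribution matches the reference, the KL anchor contributes nothing and the loss reduces to the three task terms.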

RLCP Algorithm (Key Steps)

Step  Operation  Details
1     Probe      GRL inverts the probe gradient at l^*
2     Losses     Composite of RAG, probe, unlikelihood, and KL terms
3     Update     \theta updated via \nabla_\theta \mathcal{L}_{\mathrm{total}}

The adversarial weight \alpha is set dynamically as a function of training progress. On each batch, the probe is updated to maximize factual decoding, while the backbone is updated to minimize it and satisfy the other objectives.
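The paper's exact schedule for \alpha is not reproduced here; a common choice for adversarial unlearning setups is the DANN-style sigmoid ramp, sketched below as one plausible instantiation (the gamma parameter is an assumption).

```python
import math

def adversarial_weight(progress: float, gamma: float = 10.0) -> float:
    """Ramp the GRL weight alpha from 0 toward 1 as training progress
    goes from 0 to 1 (DANN-style schedule, used here as an illustrative
    stand-in for RLCP's unspecified dynamic schedule)."""
    return 2.0 / (1.0 + math.exp(-gamma * progress)) - 1.0
```

Starting near zero lets the probe first learn to decode facts before the backbone is pushed to erase them; the ramp then strengthens the adversarial pressure monotonically.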

3. Mathematical Properties and Metrics

The gradient reversal layer operates as follows: during backpropagation, the hidden state h is passed forward unchanged, but probe gradients are multiplied by -\alpha during the backward pass.
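This mechanism can be made concrete with a small sketch. In practice a GRL is implemented as a custom autograd function (e.g., a torch.autograd.Function overriding backward); the numpy class below only mimics the forward/backward arithmetic for clarity.

```python
import numpy as np

class GradientReversal:
    """Conceptual gradient-reversal layer: identity on the forward pass,
    incoming gradient scaled by -alpha on the backward pass.
    (A real implementation would be a custom autograd function; this
    numpy sketch only reproduces the math.)"""

    def __init__(self, alpha: float = 1.0):
        self.alpha = alpha

    def forward(self, h: np.ndarray) -> np.ndarray:
        return h  # hidden state at layer l* passes through unchanged

    def backward(self, grad_probe: np.ndarray) -> np.ndarray:
        # Probe gradients are sign-flipped and scaled before reaching the
        # backbone, so the backbone ascends the probe loss and thereby
        # destroys linearly decodable factual signal.
        return -self.alpha * grad_probe
```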

Parameter updates decompose:

\Delta\theta_{\mathrm{RLCP}} = -\eta\left[\lambda_{\mathrm{adv}}\nabla_\theta\mathcal{L}_P + \nabla_\theta\bigl(\mathcal{L}_{\mathrm{RAG}} + \mathcal{L}_L + \lambda_{\mathrm{KL}}\mathcal{L}_{\mathrm{KL}}\bigr)\right]

The impact on logic loss is bounded (Corollary 2.4):

|\mathcal{L}_{\mathrm{logic}}(\theta+\Delta\theta) - \mathcal{L}_{\mathrm{logic}}(\theta)| \leq \sum_i |\alpha_i|\,\delta_i\,\|\nabla\mathcal{L}_i\|\,\|\nabla\mathcal{L}_{\mathrm{logic}}\| + O(\|\Delta\theta\|^2)

Empirical metrics defined for assessment include zero-shot recall (fraction of facts correctly answered without context), probe accuracy (fraction of correct entity classifications by the linear probe), and RAG accuracy (QA with context).
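These metrics reduce to simple exact-match fractions; the sketch below spells them out (function names are hypothetical). Note that the probe's chance level is 1/n_entities, which is the reference point for declaring factual erasure.

```python
def zero_shot_recall(predictions, gold):
    """Fraction of facts answered correctly without any retrieved context."""
    return sum(p == g for p, g in zip(predictions, gold)) / len(gold)

def probe_accuracy(probe_preds, entity_ids, n_entities):
    """Linear-probe classification accuracy over entity identities,
    returned together with the chance level 1 / n_entities."""
    acc = sum(p == e for p, e in zip(probe_preds, entity_ids)) / len(entity_ids)
    return acc, 1.0 / n_entities
```

RAG accuracy is the same exact-match fraction computed on answers produced with the supporting context present.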

4. Empirical Phenomena: Phase Transition, Structural Crystallization, and Attention Dynamics

Upon applying RLCP to Qwen2.5-0.5B with 15 city–country facts, a sharp recall phase transition is observed:

Model              Zero-Shot Recall  Probe Accuracy (Layer 20)  RAG Performance
Original           100%              93.3%                      100%
Just-RAG Baseline  95%               88.5%                      100%
RLCP               0%                6.7% (chance)              100%

The reduction in probe accuracy to chance level (1/15 ≈ 6.7%) marks a transition to a factual "tabula rasa" state, with factual associations rendered linearly undecodable. t-SNE analysis reveals "structural crystallization," where semantic subspaces (e.g., cities, fruits) collapse into type centroids, erasing linear separability of individual facts but preserving categorical manifolds.

Attention entropy per head, defined as H = -\sum_i p_i \log p_i (with p_i the attention weights), cools from H \approx 1.59 (diffuse) in the baseline to H \approx 0.90 (sharp, focused) under RLCP. Attention allocation also shifts: RLCP models assign ≈70% of attention to the supporting context token (cf. <10% in the baseline), indicating a reallocation of attention capacity from internal memory to external context.
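The entropy diagnostic is straightforward to compute from one head's attention row, as in this short sketch. For reference, a uniform distribution over 5 tokens gives H = ln 5 ≈ 1.61 (diffuse, near the reported baseline value), while concentrating mass on a single token drives H toward 0.

```python
import numpy as np

def attention_entropy(attn_weights) -> float:
    """Shannon entropy H = -sum_i p_i log p_i of one attention head's
    weight row over the input tokens."""
    p = np.asarray(attn_weights, dtype=float)
    p = p / p.sum()   # normalize in case the weights are unnormalized
    p = p[p > 0]      # convention: 0 * log 0 = 0
    return float(-(p * np.log(p)).sum())
```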

5. Emergent Chain-of-Thought Scaffolding

On GSM8K mathematical reasoning tasks, RLCP models spontaneously adopt explicit chain-of-thought (CoT) scaffolding—stepwise problem-solving—in contrast to the shortcutting or direct associative recall (O(1)) observed in standard models. The "metabolized" RLCP model executes explicit O(N) algorithmic decompositions, such as:

  • Step 1: divide,
  • Step 2: add,
  • Step 3: sum...

This suggests that RLCP, by erasing direct factual associations, compels the model to reconstruct answers algorithmically. The behavioral change is interpreted as the emergence of a pure logic core free from fact-memorization, but the causal mechanism (e.g., whether due purely to resource reallocation or regularization effects) warrants further investigation.

6. Modular Architectures and Open Research Questions

RLCP achieves soft decoupling of memory and computation by dynamically unlearning facts without architectural modification of the Transformer. This complements hard decoupling approaches, such as DeepSeek Engram, which instantiate distinct modules for symbolic RAM and neural compute. RLCP findings suggest that logical computation and factual storage can be separated at the level of weights through targeted adversarial training.

Open questions include:

  • Scalability: efficacy of RLCP for unlearning thousands of facts in large models (≥10B parameters) without compromising reasoning.
  • Task generalization: effects of in-domain unlearning (e.g., removing arithmetic facts) on task-specific performance.
  • Probing: potential for nonlinear or deeper analyses to confirm total factual erasure.
  • Mechanism: disambiguating the cause of emergent chain-of-thought.
  • Objective design: whether alternative objectives can approximate a layerwise metabolism loss more efficiently.

Overall, RLCP demonstrates that targeted adversarial unlearning can carve out a neural logic core, evidencing a phase transition in factual retention, structural crystallization, focused attention, and the emergence of stepwise reasoning, advancing the modular separation of memory and computation in deep learning systems (Peng et al., 15 Jan 2026).

References (1)
