
Forget Less by Learning from Parents

Updated 12 January 2026
  • FLLP is a hierarchical learning framework preventing catastrophic forgetting in diffusion models by using parent-child relationships in hyperbolic spaces.
  • Utilizes Lorentzian geometry to embed concept hierarchies, facilitating structured transfer and maintaining previously learned knowledge.
  • Demonstrates consistent improvements in knowledge retention and generalization across synthetic and real-world datasets.

Forget Less by Learning from Parents (FLLP) is a hierarchical continual learning framework designed to address catastrophic forgetting in Custom Diffusion Models (CDMs) by leveraging parent–child relationships among learned concepts within a hyperbolic embedding space. FLLP mitigates destructive interference that arises when new concepts are learned sequentially by modeling positive inter-concept transfer, defining explicit entailment cones in Lorentzian geometry to regulate how novel “child” concept representations align with those of previously learned “parents.” The approach demonstrates consistent improvements in both knowledge retention and generalization across synthetic and real-world datasets (Kaushik et al., 5 Jan 2026).

1. Catastrophic Forgetting in Sequential Concept Learning

Catastrophic forgetting occurs in CDMs when the sequential introduction of concepts $T_1, T_2, \dots, T_C$, each with limited reference data, causes gradients from new concepts $T_c$ to overwrite parameterizations for previously acquired $T_1, \dots, T_{c-1}$. Representations for text-to-image diffusion reside across the U-Net backbone, cross-attention mechanisms, and learned token embeddings, making these systems particularly vulnerable. Conventional continual learning (CL) strategies—Elastic Weight Consolidation (EWC), knowledge distillation, and related regularization—focus exclusively on suppressing interference, handling concepts as independent. Such methods fail to capitalize on meaningful conceptual relationships, particularly the compositional and hierarchical structure inherent in natural categories or human-generated labels. FLLP reframes the problem: previously learned concepts provide constructive supervision for adapting to new concepts, effectively serving as inductive biases that can be formally modeled.

2. Hyperbolic Embeddings via the Lorentz Model

FLLP utilizes a negatively curved (hyperbolic) space to encode concept hierarchies, specifically embedding image attention maps into the Lorentz (hyperboloid) model of hyperbolic geometry. This framework naturally accommodates tree-like data structures, as hyperbolic spaces can isometrically embed exponentially expanding graphs. In $n$-dimensional Lorentzian space of curvature $k>0$, the ambient representation is

$$\langle x, y\rangle_{\mathcal L} = x_{\rm space}^\top y_{\rm space} - x_{\rm time}\, y_{\rm time}$$

with the hyperboloid defined by

$$\mathcal L^n_k = \{\, x \in \mathbb R^{n+1} : \langle x, x\rangle_{\mathcal L} = -\tfrac{1}{k},\ x_{\rm time} > 0 \,\}.$$

Distance between embeddings $x$ and $y$ is given by the Lorentzian geodesic

$$d_{\mathcal L}(x, y) = \frac{1}{\sqrt{k}}\, \cosh^{-1}\!\bigl(-k\, \langle x, y\rangle_{\mathcal L}\bigr).$$

The exponential map at the origin $O = [\mathbf{0}, \sqrt{1/k}]$ projects Euclidean vectors onto the hyperbolic manifold.
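The Lorentzian operations above can be sketched in a few lines of NumPy. This is a minimal illustration under one assumption not fixed by the text: the time coordinate is stored as the last entry of each vector (any consistent convention works).

```python
import numpy as np

def lorentz_inner(x, y):
    # <x, y>_L = x_space . y_space - x_time * y_time
    # Convention (chosen here): the last coordinate is the time component.
    return x[:-1] @ y[:-1] - x[-1] * y[-1]

def lorentz_distance(x, y, k=1.0):
    # d_L(x, y) = (1 / sqrt(k)) * arccosh(-k <x, y>_L)
    # Clip to 1 to guard against floating-point values slightly below arccosh's domain.
    return np.arccosh(np.clip(-k * lorentz_inner(x, y), 1.0, None)) / np.sqrt(k)

def exp_map_origin(v, k=1.0):
    # Lift a Euclidean (tangent) vector v in R^n onto the hyperboloid L^n_k
    # via the exponential map at the origin O = [0, sqrt(1/k)].
    norm = np.linalg.norm(v)
    if norm == 0.0:
        return np.append(np.zeros_like(v), np.sqrt(1.0 / k))
    sqrt_k = np.sqrt(k)
    space = np.sinh(sqrt_k * norm) * v / (sqrt_k * norm)
    time = np.cosh(sqrt_k * norm) / sqrt_k
    return np.append(space, time)
```

A quick sanity check of the construction: any lifted point satisfies the hyperboloid constraint $\langle x, x\rangle_{\mathcal L} = -1/k$, and (for $k=1$) its geodesic distance from the origin equals the Euclidean norm of the tangent vector.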

The choice of Lorentzian geometry is thus well suited to encoding entailment and to efficiently modeling the exponentially branching concept taxonomies inherent in continual concept learning.

3. Hierarchical Parent–Child Guidance in Concept Embedding

Within the hyperbolic space, each learned concept embedding $y$ defines an “entailment cone,” parameterized by its half-aperture

$$\mathrm{Aper}(y) = \sin^{-1}\!\bigl(2K\,\sqrt{k}\,\|y_{\rm space}\|\bigr), \quad K \approx 0.1.$$

Given a child concept embedding $x$, the exterior angle from the parent's cone axis is computed as

$$\mathrm{Ext}(y, x) = \cos^{-1}\!\left( \frac{y_{\rm time} + x_{\rm time}\,k\,\langle x, y\rangle_{\mathcal L}}{\|y_{\rm space}\|\, \sqrt{(k\,\langle x, y\rangle_{\mathcal L})^2 - 1}} \right).$$

FLLP enforces that the child embedding $x$ lies within its parent’s cone, up to a slack $\beta$:

$$\mathcal{L}_{\mathrm{entail}} = \max\!\bigl(0,\ \mathrm{Ext}(y, x) - \beta\,\mathrm{Aper}(y)\bigr).$$

This constraint formalizes the notion that a newly acquired concept should generalize from, but not drift excessively beyond, the scope defined by relevant parent concepts in the learned hierarchy.
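The cone test can be written directly from the formulas above. The sketch below transcribes them as stated (same Lorentzian convention as before: time coordinate last); clipping of trigonometric arguments is a numerical safeguard added here, not part of the formulation.

```python
import numpy as np

def half_aperture(y, k=1.0, K=0.1):
    # Aper(y) = arcsin(2K * sqrt(k) * ||y_space||), as written in the text.
    arg = 2.0 * K * np.sqrt(k) * np.linalg.norm(y[:-1])
    return np.arcsin(np.clip(arg, -1.0, 1.0))

def exterior_angle(y, x, k=1.0):
    # Ext(y, x): angle between the parent cone axis at y and the geodesic to x.
    inner = x[:-1] @ y[:-1] - x[-1] * y[-1]              # Lorentzian inner product
    num = y[-1] + x[-1] * k * inner
    den = np.linalg.norm(y[:-1]) * np.sqrt(max((k * inner) ** 2 - 1.0, 1e-12))
    return np.arccos(np.clip(num / den, -1.0, 1.0))

def entailment_loss(y, x, k=1.0, K=0.1, beta=1.0):
    # L_entail = max(0, Ext(y, x) - beta * Aper(y)): hinge penalty on cone violation.
    return max(0.0, exterior_angle(y, x, k) - beta * half_aperture(y, k, K))
```

The hinge form means gradients flow only when the child actually leaves the (slack-scaled) parent cone; embeddings already inside incur no penalty.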

4. Loss Formulation and Training Dynamics

The overall FLLP objective integrates three terms: the standard diffusion reconstruction loss, a parent entailment penalty over image-attention maps, and a consolidation loss on LoRA adapter parameters (as in CIDM):

$$\mathcal{L}_{\mathrm{diff}} = \mathbb{E}_{z\sim\Phi(x),\,c,\,\epsilon,\,t}\, \bigl\| \epsilon - \epsilon_{\theta'}(z_t \mid c, t) \bigr\|_2^2$$

$$\mathcal{L}_1 = \sum_{i=1}^{g} \sum_{l=1}^{L} \bigl\| \Delta W_i^l - H_i^l\, W_*^l \bigr\|_F^2$$

$$\mathcal{L}_{\mathrm{entailParent}} = \sum_{y\in\mathcal{P}_{\rm new}} \max\bigl(0,\ \mathrm{Ext}(y, x_{\rm new}) - \beta\,\mathrm{Aper}(y)\bigr)$$

$$\mathcal{L}_{\mathrm{total}} = \mathcal{L}_{\mathrm{diff}} + \gamma_1\,\mathcal{L}_{\mathrm{entailParent}} + \gamma_2\,\mathcal{L}_1, \quad \gamma_1 = \gamma_2 = 0.1$$
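The aggregation of the three terms is mechanically simple; a minimal sketch, assuming the diffusion and consolidation losses arrive as precomputed scalars and the entailment term is summed over (exterior angle, aperture) pairs along the parent chain:

```python
def fllp_total_loss(l_diff, ext_angles, apertures, l_consolidation,
                    beta=1.0, gamma1=0.1, gamma2=0.1):
    """Combine the three FLLP objective terms (sketch, not the reference code).

    l_diff          -- scalar diffusion reconstruction loss L_diff
    ext_angles      -- Ext(y, x_new) for each parent y on the chain
    apertures       -- Aper(y) for each parent y on the chain
    l_consolidation -- scalar LoRA consolidation loss L_1 (as in CIDM)
    """
    # L_entailParent: hinge penalty summed over the parent chain.
    l_entail = sum(max(0.0, ext - beta * ap)
                   for ext, ap in zip(ext_angles, apertures))
    # L_total = L_diff + gamma1 * L_entailParent + gamma2 * L_1
    return l_diff + gamma1 * l_entail + gamma2 * l_consolidation
```

With the paper's default $\gamma_1 = \gamma_2 = 0.1$, the diffusion loss dominates and the entailment and consolidation terms act as soft regularizers.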

Training consists of projecting reference attention maps for each concept into the Lorentzian manifold, computing a parent chain for every novel concept (by iterative nearest-neighbor search in hyperbolic distance, discounting pathological self-loops), and then aggregating the entailment error along this chain. Gradients are jointly back-propagated through the U-Net’s LoRA-adapted layers, cross-attention mechanisms, and hyperbolic projections.
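The parent-chain step described above can be sketched as a greedy nearest-neighbor search. The function signature, `depth` cap, and concept names below are illustrative assumptions; FLLP would supply the hyperbolic distance $d_{\mathcal L}$ as `dist_fn`.

```python
def parent_chain(new_emb, prior_embs, dist_fn, depth=3):
    """Greedily build a parent chain for a new concept embedding (sketch).

    new_emb    -- embedding of the newly introduced concept
    prior_embs -- dict name -> embedding of previously learned concepts
    dist_fn    -- distance on the manifold (hyperbolic distance in FLLP)
    depth      -- maximum chain length (illustrative cap)
    """
    chain, current, remaining = [], new_emb, dict(prior_embs)
    for _ in range(min(depth, len(remaining))):
        # Nearest previously learned concept; popping each pick from
        # `remaining` rules out self-loops / revisits along the chain.
        name = min(remaining, key=lambda n: dist_fn(current, remaining[n]))
        chain.append(name)
        current = remaining.pop(name)
    return chain
```

The entailment error is then accumulated over every parent on the returned chain, as in $\mathcal{L}_{\mathrm{entailParent}}$.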

5. Architectural Design and Adaptations

FLLP extends a pretrained Stable Diffusion v1.5 U-Net backbone with LoRA adapters in each transformer layer for efficient personalization. Timestep-weighted cross-attention maps $I_\alpha^{(t)}$ are extracted and summarized per concept as $\bar I_\alpha$. These attention summaries are lifted into hyperbolic space for entailment-based regularization. Notably, no novel network architectural components are introduced beyond the LoRA adapters, so the core U-Net structure remains invariant apart from injected personalization weights.

The methodology avoids significant storage and computation overhead, as only the low-rank adapter parameters and attention summaries are maintained. Hyperparameters are selected as follows: learning rate $1\times10^{-3}$ for tokens and $1\times10^{-4}$ for the U-Net; curvature $k$ is learnable and initialized at 1; $K = 0.1$; and $\beta$ is tuned per concept.
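For reference, the hyperparameters listed above can be collected in a single configuration; the key names here are hypothetical, chosen for this sketch rather than taken from the paper.

```python
# Hypothetical configuration dict for FLLP training (key names illustrative).
FLLP_CONFIG = {
    "lr_tokens": 1e-3,    # learning rate for token embeddings
    "lr_unet": 1e-4,      # learning rate for the U-Net (LoRA) parameters
    "curvature_k": 1.0,   # learnable curvature k, initialized at 1
    "cone_K": 0.1,        # aperture constant K in Aper(y)
    "gamma1": 0.1,        # weight on L_entailParent
    "gamma2": 0.1,        # weight on the consolidation loss L_1
    "beta": None,         # cone slack beta, tuned per concept
}
```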

6. Experimental Protocols and Comparative Results

FLLP is benchmarked on three datasets—CIFC (synthetic concepts), CelebA (face identities), and an ImageNet subset—each featuring low-shot, sequential concept addition. Peer methods include direct fine-tuning, EWC, LwF, C-LoRA, L2DM, Textual Inversion (TI), and CIDM. Evaluation metrics are CLIP Image Alignment (IA) and Text Alignment (TA), aggregating statistics over 20 prompts and 50 generations per concept.

Key performance improvements over CIDM are observed:

Dataset     Δ IA    Δ TA
CIFC        +2.0    +1.3
CelebA      +4.4    +2.0
ImageNet    +1.1    +0.5

Across 10 concepts:

  • CIFC: IA 78.0 → 80.0, TA 74.8 → 76.1
  • CelebA: IA 73.3 → 77.7, TA 58.8 → 60.8
  • ImageNet: IA 81.2 → 82.3, TA 78.5 → 79.0

Ablation analyses indicate that constraining image-attention maps (rather than directly regularizing LoRA weights) achieves superior retention/generalization trade-offs. FLLP remains effective when scaling to 35 concepts (as in CustomConcept101, +2.1 IA, +1.0 TA). Parameter drift, measured via LoRA Frobenius norm change, is reduced by 22% compared to CIDM.

7. Qualitative Observations and Knowledge Transfer

Qualitative experiments demonstrate that FLLP preserves previously learned identities and concept-specific features. For example, learning “Dog2” after “Cat1” and “Duck” produces generations where the new dog concept is structurally and texturally anchored by a hyperbolic parent chain $\{\text{Cat1}, \text{Duck}, \dots\}$, avoiding distortions and artifact propagation that afflict TI and CIDM generations. Attention maps remain both focused and interpretable, with minimal erasure or deformation of prior concepts.

The formalization of parent–child hyperbolic guidance turns catastrophic forgetting from a destructive phenomenon into an opportunity for structured, compositional positive transfer. The result is enhanced state-of-the-art performance in retention and adaptation for continual concept learning in diffusion models (Kaushik et al., 5 Jan 2026).
