Whether knowledge editing in LLMs constitutes meta-learning

Determine whether knowledge-editing techniques for Large Language Models constitute meta-learning processes driven by changes in training data distributions.

Background

The paper argues that the processes underlying next-token prediction are opaque, and that the relationship between knowledge editing and meta-learning is therefore uncertain.

Clarifying this relationship would have implications for the design and evaluation of editing methods that aim to produce deliberate and durable knowledge updates.

References

"Thus, it is unclear whether current KE [knowledge editing] methods constitute meta-learning stemming from variable probability distributions of training data."

Towards Incremental Learning in Large Language Models: A Critical Review (2404.18311 - Jovanovic et al., 28 Apr 2024) in Section 3.3 (LLMs and Metacognition)