Validation of ConPET under diverse continual learning scenarios and expert splitting strategies

Demonstrate the effectiveness of Continual Parameter-Efficient Tuning (ConPET) across more diverse continual learning scenarios and develop improved strategies for splitting tasks among expert modules.

Background

ConPET combines parameter-efficient tuning (PET) with continual learning, using static and dynamic expert modules to reduce tuning costs while aiming to preserve performance.

The review notes that broader validation and refined task-to-expert partitioning strategies are still needed.
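To make the task-to-expert partitioning question concrete, the sketch below routes each new task to an existing PET expert when its features are similar enough to that expert's tasks, and spawns a fresh expert otherwise. The class name, the cosine-similarity threshold, and the prototype-averaging rule are illustrative assumptions, not ConPET's actual split strategy:

```python
import math

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

class ExpertRouter:
    """Illustrative task-to-expert split: similar tasks share a PET
    expert; dissimilar tasks get a new one. Hypothetical sketch, not
    ConPET's implementation."""

    def __init__(self, threshold=0.8):
        self.threshold = threshold
        self.prototypes = []   # one mean feature vector per expert
        self.assignment = {}   # task_id -> expert index

    def add_task(self, task_id, feature):
        # Find the most similar existing expert prototype.
        best, best_sim = None, -1.0
        for i, proto in enumerate(self.prototypes):
            sim = cosine(feature, proto)
            if sim > best_sim:
                best, best_sim = i, sim
        if best is not None and best_sim >= self.threshold:
            # Merge into the existing expert: average its prototype
            # with the new task's features.
            proto = self.prototypes[best]
            self.prototypes[best] = [(p + f) / 2
                                     for p, f in zip(proto, feature)]
            self.assignment[task_id] = best
        else:
            # Spawn a fresh PET expert for a dissimilar task.
            self.prototypes.append(list(feature))
            self.assignment[task_id] = len(self.prototypes) - 1
        return self.assignment[task_id]
```

For example, two near-duplicate tasks land on expert 0, while an orthogonal task triggers a second expert; lowering the threshold trades expert count against per-expert task diversity, which is exactly the kind of knob a refined split strategy would have to tune.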

References

However, this remains to be verified in more diverse continual learning scenarios, together with further improvements to task-split strategies among experts.

Towards Incremental Learning in Large Language Models: A Critical Review (arXiv:2404.18311, Jovanovic et al., 28 Apr 2024), Section 2.3 (Parameter-Efficient Learning) – ConPET