Cross-scale transfer of T2L adapters within the same architecture class
Determine whether Text-to-LoRA (T2L) hypernetworks trained on smaller base models can transfer effectively to larger models within the same architecture class, and characterize the conditions under which such transfer achieves robust downstream performance without retraining the hypernetwork from scratch.
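The core obstacle is dimensional: a T2L hypernetwork emits LoRA factors whose shapes are tied to the smaller model's hidden size, so they cannot be applied to a larger model as-is. The sketch below illustrates the mismatch and one naive transfer heuristic (linear interpolation of each factor along its hidden-size axis). The dimensions, the `resize_factor` helper, and the interpolation scheme are all illustrative assumptions, not the paper's method.

```python
import numpy as np

# Illustrative dimensions (assumptions, not from the paper): a smaller
# base model with hidden size 2048 and a larger one with hidden size 4096.
d_small, d_large, rank = 2048, 4096, 8

rng = np.random.default_rng(0)

# A T2L-style hypernetwork emits LoRA factors A (r x d) and B (d x r)
# for the smaller model; here we just sample stand-ins.
A_small = rng.standard_normal((rank, d_small))
B_small = rng.standard_normal((d_small, rank))

# The emitted adapters cannot be applied to the larger model directly:
# the low-rank update B @ A has shape (d_small, d_small), not
# (d_large, d_large).
delta_small = B_small @ A_small
assert delta_small.shape == (d_small, d_small)

def resize_factor(M, new_dim, axis):
    """Naive cross-scale transfer: linearly interpolate a LoRA factor
    along its hidden-size axis. A heuristic for illustration only."""
    old_dim = M.shape[axis]
    old_pos = np.linspace(0.0, 1.0, old_dim)
    new_pos = np.linspace(0.0, 1.0, new_dim)
    return np.apply_along_axis(
        lambda v: np.interp(new_pos, old_pos, v), axis, M)

# Resize each factor so the update becomes shape-compatible with the
# larger model; whether such transfer preserves downstream performance
# is exactly the open question.
A_large = resize_factor(A_small, d_large, axis=1)
B_large = resize_factor(B_small, d_large, axis=0)
delta_large = B_large @ A_large
assert delta_large.shape == (d_large, d_large)
```

Note that interpolation keeps the update low-rank (rank at most `rank`), but says nothing about whether the transferred adapter aligns with the larger model's representation space; characterizing that gap is the substance of the question above.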
References
Finally, the potential for T2L trained on a smaller base model to transfer effectively to larger models within the same architecture class remains an open area for exploration.
— Text-to-LoRA: Instant Transformer Adaption
(arXiv 2506.06105, Charakorn et al., 6 Jun 2025), Discussion and Limitations