Data-Efficient Cross-Lingual Transfer with Language-Specific Subnetworks (2211.00106v1)
Published 31 Oct 2022 in cs.CL
Abstract: Large multilingual language models typically share their parameters across all languages, which enables cross-lingual task transfer, but learning can also be hindered when training updates from different languages are in conflict. In this paper, we propose novel methods for using language-specific subnetworks, which control cross-lingual parameter sharing, to reduce conflicts and increase positive transfer during fine-tuning. We introduce dynamic subnetworks, which are jointly updated with the model, and we combine our methods with meta-learning, an established, but complementary, technique for improving cross-lingual transfer. Finally, we provide extensive analyses of how each of our methods affects the models.
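To illustrate the general idea of controlling cross-lingual parameter sharing with language-specific subnetworks, here is a minimal PyTorch sketch of one masked fine-tuning step. It is not the paper's exact procedure: the function name `masked_finetune_step` and the `masks` structure are hypothetical, and how the per-language binary masks are derived (e.g., via pruning) is assumed rather than shown.

```python
import torch
import torch.nn as nn

def masked_finetune_step(model: nn.Module,
                         masks: dict,   # hypothetical: {lang: {param_name: 0/1 tensor}}
                         lang: str,
                         batch,
                         loss_fn,
                         optimizer) -> float:
    """One fine-tuning step in which only the subnetwork for `lang` is updated."""
    optimizer.zero_grad()
    inputs, targets = batch
    loss = loss_fn(model(inputs), targets)
    loss.backward()
    # Zero out gradients for parameters outside this language's subnetwork,
    # so batches from other languages cannot produce conflicting updates
    # to weights reserved for `lang`.
    with torch.no_grad():
        for name, param in model.named_parameters():
            if param.grad is not None and name in masks[lang]:
                param.grad.mul_(masks[lang][name])
    optimizer.step()
    return loss.item()
```

Under this sketch, parameters whose mask entries overlap across languages remain shared (enabling transfer), while language-exclusive entries limit interference; a dynamic variant would additionally update the masks themselves during training.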
- Rochelle Choenni (17 papers)
- Dan Garrette (21 papers)
- Ekaterina Shutova (52 papers)