A Morphology-Based Investigation of Positional Encodings (2404.04530v2)
Abstract: Contemporary deep learning models handle languages with diverse morphology effectively, despite morphology not being directly integrated into them. Morphology and word order are closely linked, and the latter is incorporated into transformer-based models through positional encodings. This prompts a fundamental question: is there a correlation between the morphological complexity of a language and the utilization of positional encoding in pre-trained LLMs? In pursuit of an answer, we present the first study addressing this question, encompassing 22 languages and 5 downstream tasks. Our findings reveal that the importance of positional encoding diminishes as the morphological complexity of a language increases. Our study motivates the need for a deeper understanding of positional encodings and for augmenting them to better reflect the languages under consideration.
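For context, positional encodings are how transformers, which are otherwise order-agnostic, receive word-order information. Below is a minimal sketch of the standard sinusoidal scheme from Vaswani et al. (2017); it illustrates the general mechanism the paper investigates, not the authors' specific experimental setup or the encoding used by any particular pre-trained model.

```python
import numpy as np

def sinusoidal_positional_encoding(seq_len: int, d_model: int) -> np.ndarray:
    """Standard sinusoidal positional encodings (Vaswani et al., 2017).

    PE[pos, 2i]   = sin(pos / 10000^(2i / d_model))
    PE[pos, 2i+1] = cos(pos / 10000^(2i / d_model))
    """
    positions = np.arange(seq_len)[:, None]       # shape (seq_len, 1)
    dims = np.arange(0, d_model, 2)[None, :]      # shape (1, d_model // 2)
    angles = positions / np.power(10000.0, dims / d_model)

    pe = np.zeros((seq_len, d_model))
    pe[:, 0::2] = np.sin(angles)                  # even dimensions get sine
    pe[:, 1::2] = np.cos(angles)                  # odd dimensions get cosine
    return pe

# Example: encodings for a 10-token sequence in a 512-dim model;
# these are added to the token embeddings before the first layer.
pe = sinusoidal_positional_encoding(seq_len=10, d_model=512)
print(pe.shape)  # (10, 512)
```

Because this additive signal is the model's only explicit source of word order, probing how much performance depends on it (e.g., by ablating or shuffling it) is a natural way to measure how heavily a language relies on positional information versus morphology.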
- Poulami Ghosh
- Shikhar Vashishth
- Raj Dabre
- Pushpak Bhattacharyya