Combining Static and Contextualised Multilingual Embeddings (2203.09326v1)

Published 17 Mar 2022 in cs.CL

Abstract: Static and contextual multilingual embeddings have complementary strengths. Static embeddings, while less expressive than contextual language models, can be more straightforwardly aligned across multiple languages. We combine the strengths of static and contextual models to improve multilingual representations. We extract static embeddings for 40 languages from XLM-R, validate those embeddings with cross-lingual word retrieval, and then align them using VecMap. This results in high-quality, highly multilingual static embeddings. Then we apply a novel continued pre-training approach to XLM-R, leveraging the high-quality alignment of our static embeddings to better align the representation space of XLM-R. We show positive results for multiple complex semantic tasks. We release the static embeddings and the continued pre-training code. Unlike most previous work, our continued pre-training approach does not require parallel text.
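The abstract's pipeline has two concrete steps worth sketching. First, static embeddings are distilled from XLM-R. The exact extraction recipe is not given here, so the following is a minimal sketch under the common assumption that a word's static vector is the average of its contextual subword representations across example sentences; the `static_embedding` helper, the pooling choices, and the sample contexts are all illustrative, not the paper's procedure.

```python
# Sketch: derive a static embedding for a word by averaging XLM-R's
# contextual subword representations over example contexts.
# Assumption: mean-pooling over subwords and occurrences; the paper's
# actual extraction procedure may differ.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
model = AutoModel.from_pretrained("xlm-roberta-base")
model.eval()

def static_embedding(word: str, contexts: list[str]) -> torch.Tensor:
    """Mean of the hidden states of `word`'s subword span across contexts."""
    word_ids = tokenizer(word, add_special_tokens=False)["input_ids"]
    vecs = []
    for sent in contexts:
        enc = tokenizer(sent, return_tensors="pt")
        with torch.no_grad():
            hidden = model(**enc).last_hidden_state[0]  # (seq_len, dim)
        ids = enc["input_ids"][0].tolist()
        # Locate the word's subword span (first occurrence) and mean-pool it.
        for i in range(len(ids) - len(word_ids) + 1):
            if ids[i : i + len(word_ids)] == word_ids:
                vecs.append(hidden[i : i + len(word_ids)].mean(dim=0))
                break
    return torch.stack(vecs).mean(dim=0)

vec = static_embedding("Haus", ["Das Haus ist groß.", "Dort steht ein altes Haus."])
print(vec.shape)  # torch.Size([768])
```

Second, the per-language static spaces are aligned with VecMap. VecMap's supervised mode reduces to an orthogonal Procrustes problem; as a stand-in for the actual tool, here is that core computation in NumPy (the seed-dictionary matrices `X` and `Y` are assumed inputs, and this omits VecMap's normalization and iterative refinement).

```python
import numpy as np

def procrustes_align(X: np.ndarray, Y: np.ndarray) -> np.ndarray:
    """Orthogonal map W minimizing ||XW - Y||_F, the core of supervised
    VecMap-style alignment. X, Y: (n_pairs, dim) embeddings of a seed
    dictionary of translation pairs."""
    U, _, Vt = np.linalg.svd(X.T @ Y)
    return U @ Vt

# Apply W to the full source-language embedding matrix to map it into
# the target space: src_mapped = src_all @ W
```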

Authors (3)
  1. Katharina Hämmerl (7 papers)
  2. Jindřich Libovický (36 papers)
  3. Alexander Fraser (50 papers)
Citations (9)