
EuroBERT: Scaling Multilingual Encoders for European Languages (2503.05500v2)

Published 7 Mar 2025 in cs.CL and cs.AI

Abstract: General-purpose multilingual vector representations, used in retrieval, regression and classification, are traditionally obtained from bidirectional encoder models. Despite their wide applicability, encoders have been recently overshadowed by advances in generative decoder-only models. However, many innovations driving this progress are not inherently tied to decoders. In this paper, we revisit the development of multilingual encoders through the lens of these advances, and introduce EuroBERT, a family of multilingual encoders covering European and widely spoken global languages. Our models outperform existing alternatives across a diverse range of tasks, spanning multilingual capabilities, mathematics, and coding, and natively supporting sequences of up to 8,192 tokens. We also examine the design decisions behind EuroBERT, offering insights into our dataset composition and training pipeline. We publicly release the EuroBERT models, including intermediate training checkpoints, together with our training framework.


Summary

  • The paper introduces EuroBERT, a set of multilingual encoders that leverage recent advances in generative modeling to enhance performance on various tasks.
  • The models achieve superior performance across multilingual tasks, mathematics, and coding while supporting sequences up to 8,192 tokens.
  • The study provides detailed insights into dataset composition and the training pipeline, and publicly releases the models, including intermediate checkpoints, for the research community.

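As with other bidirectional encoders, EuroBERT's per-token outputs are typically reduced to a single sentence vector for retrieval, regression, or classification. A common reduction is masked mean pooling, which averages token embeddings while excluding padding positions. The sketch below illustrates that pooling step on toy arrays; the shapes mirror a standard encoder's output, but the numbers are illustrative only, not EuroBERT outputs.

```python
import numpy as np

def mean_pool(token_embeddings: np.ndarray, attention_mask: np.ndarray) -> np.ndarray:
    """Average token embeddings over the sequence, ignoring padding positions.

    token_embeddings: (batch, seq_len, dim) encoder outputs.
    attention_mask:   (batch, seq_len) with 1 for real tokens, 0 for padding.
    """
    mask = attention_mask[:, :, None].astype(token_embeddings.dtype)  # (batch, seq_len, 1)
    summed = (token_embeddings * mask).sum(axis=1)                    # (batch, dim)
    counts = np.clip(mask.sum(axis=1), 1e-9, None)                    # avoid division by zero
    return summed / counts

# Toy batch: one sequence of 3 tokens where the last token is padding, dim 2.
emb = np.array([[[1.0, 2.0], [3.0, 4.0], [100.0, 100.0]]])
mask = np.array([[1, 1, 0]])
print(mean_pool(emb, mask))  # padding row excluded: [[2. 3.]]
```

The padded position contributes nothing to the average, so variable-length batches produce consistent sentence vectors regardless of padding.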
