
MTEB-French: Resources for French Sentence Embedding Evaluation and Analysis (2405.20468v2)

Published 30 May 2024 in cs.CL, cs.IR, and cs.LG

Abstract: Recently, numerous embedding models have been made available and widely used for various NLP tasks. The Massive Text Embedding Benchmark (MTEB) has greatly simplified the process of choosing a model that performs well across several tasks in English, but extensions to other languages remain challenging. This is why we expand MTEB to propose the first massive benchmark of sentence embeddings for French. We gather 15 existing datasets in an easy-to-use interface and create three new French datasets for a global evaluation of 8 task categories. We compare 51 carefully selected embedding models on a large scale, conduct comprehensive statistical tests, and analyze the correlation between model performance and many of their characteristics. We find that, although no model is best on all tasks, large multilingual models pre-trained on sentence similarity perform exceptionally well. Our work comes with open-source code, new datasets and a public leaderboard.

Authors (4)
  1. Mathieu Ciancone (3 papers)
  2. Imene Kerboua (4 papers)
  3. Marion Schaeffer (3 papers)
  4. Wissam Siblini (8 papers)
Citations (1)