
mGTE: Generalized Long-Context Text Representation and Reranking Models for Multilingual Text Retrieval (2407.19669v2)

Published 29 Jul 2024 in cs.CL and cs.IR

Abstract: We present systematic efforts in building a long-context multilingual text representation model (TRM) and reranker from scratch for text retrieval. We first introduce a text encoder (base size) enhanced with RoPE and unpadding, pre-trained with a native 8192-token context (far longer than the 512-token limit of previous multilingual encoders). We then construct a hybrid TRM and a cross-encoder reranker via contrastive learning. Evaluations show that our text encoder outperforms the same-sized previous state-of-the-art XLM-R. Meanwhile, our TRM and reranker match the performance of the large-sized state-of-the-art BGE-M3 models and achieve better results on long-context retrieval benchmarks. Further analysis demonstrates that our proposed models exhibit higher efficiency during both training and inference. We believe their efficiency and effectiveness could benefit various research and industrial applications.
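
The abstract states that the TRM and reranker are built from the encoder via contrastive learning. As a rough illustration of that family of objectives (not the paper's exact recipe), the sketch below implements a standard in-batch InfoNCE loss over normalized query and passage embeddings; the temperature value and the reliance on in-batch negatives are assumptions made for illustration only.

```python
# Minimal sketch of an InfoNCE-style contrastive objective for training a
# dense text representation model (TRM) on top of a pre-trained encoder.
# Temperature and in-batch-negative setup are illustrative assumptions,
# not the configuration reported in the paper.
import torch
import torch.nn.functional as F

def contrastive_loss(query_emb: torch.Tensor,
                     doc_emb: torch.Tensor,
                     temperature: float = 0.05) -> torch.Tensor:
    """query_emb, doc_emb: (batch, dim); row i of doc_emb is the positive
    passage for query i, and all other rows act as in-batch negatives."""
    q = F.normalize(query_emb, dim=-1)
    d = F.normalize(doc_emb, dim=-1)
    logits = q @ d.T / temperature             # (batch, batch) similarity matrix
    labels = torch.arange(q.size(0), device=q.device)
    return F.cross_entropy(logits, labels)     # positives lie on the diagonal

# Example with random embeddings standing in for encoder outputs
loss = contrastive_loss(torch.randn(8, 768), torch.randn(8, 768))
```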

Authors (13)
  1. Xin Zhang (904 papers)
  2. Yanzhao Zhang (18 papers)
  3. Dingkun Long (23 papers)
  4. Wen Xie (7 papers)
  5. Ziqi Dai (3 papers)
  6. Jialong Tang (17 papers)
  7. Huan Lin (55 papers)
  8. Baosong Yang (57 papers)
  9. Pengjun Xie (85 papers)
  10. Fei Huang (409 papers)
  11. Meishan Zhang (70 papers)
  12. Wenjie Li (183 papers)
  13. Min Zhang (630 papers)
Citations (24)
