Learning Multilingual Sentence Representations with Cross-lingual Consistency Regularization (2306.06919v1)

Published 12 Jun 2023 in cs.CL and cs.AI

Abstract: Multilingual sentence representations are the foundation for similarity-based bitext mining, which is crucial for scaling multilingual neural machine translation (NMT) systems to more languages. In this paper, we introduce MuSR: a one-for-all Multilingual Sentence Representation model that supports more than 220 languages. Leveraging billions of English-centric parallel sentence pairs, we train a multilingual Transformer encoder, coupled with an auxiliary Transformer decoder, by adopting a multilingual NMT framework with CrossConST, a cross-lingual consistency regularization technique proposed in Gao et al. (2023). Experimental results on multilingual similarity search and bitext mining tasks show the effectiveness of our approach. Specifically, MuSR achieves superior performance over LASER3 (Heffernan et al., 2022), which consists of 148 independent multilingual sentence encoders.
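To make the training objective concrete: CrossConST-style regularization augments the usual translation cross-entropy with a KL term that pulls the decoder's output distribution for the source-to-target direction toward the distribution obtained when the decoder is fed the target-language sentence itself. The numpy sketch below is a minimal, hedged illustration of that idea; the function names, the `alpha` weight, and the toy logit shapes are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def softmax(logits, axis=-1):
    # numerically stable softmax over the vocabulary axis
    z = logits - logits.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def kl_divergence(p, q, eps=1e-12):
    # token-level KL(p || q), summed over vocab, averaged over positions
    return float(np.mean(np.sum(p * (np.log(p + eps) - np.log(q + eps)), axis=-1)))

def cross_entropy(p, target_ids, eps=1e-12):
    # mean negative log-likelihood of the reference target tokens
    rows = np.arange(len(target_ids))
    return float(-np.mean(np.log(p[rows, target_ids] + eps)))

def consistency_regularized_loss(logits_src2tgt, logits_tgt2tgt, target_ids, alpha=1.0):
    """Sketch of a CrossConST-style objective: translation cross-entropy on
    the x -> y direction plus an alpha-weighted KL term encouraging the
    x -> y and y -> y output distributions to agree (assumed formulation)."""
    p = softmax(logits_src2tgt)   # decoder distribution given the source sentence
    q = softmax(logits_tgt2tgt)   # decoder distribution given the target sentence
    return cross_entropy(p, target_ids) + alpha * kl_divergence(p, q)
```

When the two directions produce identical distributions, the KL term vanishes and the objective reduces to plain cross-entropy; the regularizer therefore only penalizes cross-lingual inconsistency, which is what encourages language-agnostic encoder representations.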

Authors (5)
  1. Pengzhi Gao (14 papers)
  2. Liwen Zhang (34 papers)
  3. Zhongjun He (19 papers)
  4. Hua Wu (191 papers)
  5. Haifeng Wang (194 papers)
Citations (3)
