Bitext Mining for Low-Resource Languages via Contrastive Learning
Abstract: Mining high-quality bitexts for low-resource languages is challenging. This paper shows that sentence representations from LLMs fine-tuned with multiple negatives ranking loss, a contrastive objective, help retrieve clean bitexts. Experiments show that parallel data mined with our approach substantially outperforms the previous state-of-the-art method on the low-resource languages Khmer and Pashto.
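The contrastive objective named in the abstract, multiple negatives ranking loss, treats each source sentence's aligned translation in the batch as the positive and every other target in the batch as an in-batch negative. The following is a minimal NumPy sketch of that loss, not the paper's implementation; the `scale` temperature value of 20 is an assumption.

```python
import numpy as np

def mnr_loss(src_emb, tgt_emb, scale=20.0):
    """Multiple negatives ranking loss over a batch of embedding pairs.

    src_emb, tgt_emb: (batch, dim) arrays where row i of tgt_emb is the
    true translation of row i of src_emb; all other rows act as negatives.
    """
    # Cosine-similarity score matrix between every source/target pair
    src = src_emb / np.linalg.norm(src_emb, axis=1, keepdims=True)
    tgt = tgt_emb / np.linalg.norm(tgt_emb, axis=1, keepdims=True)
    scores = scale * src @ tgt.T                      # (batch, batch)

    # Softmax cross-entropy with the diagonal (aligned pairs) as labels,
    # computed with the usual max-shift for numerical stability
    m = scores.max(axis=1, keepdims=True)
    log_probs = scores - m - np.log(np.exp(scores - m).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))
```

Perfectly aligned embeddings drive the loss toward zero, while misaligned pairs are penalized, which is what makes the fine-tuned representations useful for ranking bitext candidates.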