Cross-language Sentence Selection via Data Augmentation and Rationale Training (2106.02293v1)

Published 4 Jun 2021 in cs.CL and cs.IR

Abstract: This paper proposes an approach to cross-language sentence selection in a low-resource setting. It uses data augmentation and negative sampling techniques on noisy parallel sentence data to directly learn a cross-lingual embedding-based query relevance model. Results show that this approach performs as well as or better than multiple state-of-the-art machine translation + monolingual retrieval systems trained on the same parallel data. Moreover, when a rationale training secondary objective is applied to encourage the model to match word alignment hints from a phrase-based statistical machine translation model, consistent improvements are seen across three language pairs (English-Somali, English-Swahili and English-Tagalog) over a variety of state-of-the-art baselines.
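
The training recipe the abstract describes, ranking relevant sentences above sampled negatives while a secondary rationale objective pulls the model's attention toward word-alignment hints from a statistical MT system, can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: the toy embedding model, the margin ranking loss, the `lam` weight, and the `align_hints` tensor (a normalized query-to-sentence alignment distribution) are all hypothetical choices for exposition.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class CrossLingualRelevanceModel(nn.Module):
    """Toy cross-lingual relevance model: scores an English query against a
    foreign-language sentence by matching token embeddings in a shared space."""

    def __init__(self, vocab_size: int, dim: int = 128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)

    def forward(self, query_ids: torch.Tensor, sent_ids: torch.Tensor):
        q = self.embed(query_ids)                 # (batch, q_len, dim)
        s = self.embed(sent_ids)                  # (batch, s_len, dim)
        scores = q @ s.transpose(1, 2)            # (batch, q_len, s_len)
        # Attention of each query token over the sentence tokens; this is
        # what the rationale objective supervises with alignment hints.
        attn = torch.softmax(scores, dim=-1)
        # Relevance: best match per query token, averaged over the query.
        sim = scores.max(dim=-1).values.mean(dim=-1)   # (batch,)
        return sim, attn


def training_step(model, query_ids, pos_ids, neg_ids, align_hints, lam=0.5):
    """One step combining a ranking loss over positive/negative sentence
    pairs (negative sampling) with a rationale loss that encourages the
    attention to match word-alignment hints from a phrase-based SMT model."""
    pos_score, pos_attn = model(query_ids, pos_ids)
    neg_score, _ = model(query_ids, neg_ids)
    # Margin ranking loss: the relevant sentence should outscore the negative.
    rank_loss = F.relu(1.0 - pos_score + neg_score).mean()
    # Rationale loss: cross-entropy between the model's attention and the
    # alignment hints, assumed shaped (batch, q_len, s_len) and row-normalized.
    rationale_loss = -(align_hints * torch.log(pos_attn + 1e-9)).sum(-1).mean()
    return rank_loss + lam * rationale_loss
```

In this sketch the rationale term acts purely as an auxiliary training signal, mirroring the paper's idea of a secondary objective: at inference time only the relevance score is used, while `lam` trades off ranking accuracy against agreement with the alignment hints.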

Authors (7)
  1. Yanda Chen (13 papers)
  2. Chris Kedzie (14 papers)
  3. Suraj Nair (39 papers)
  4. Petra Galuščáková (6 papers)
  5. Rui Zhang (1138 papers)
  6. Douglas W. Oard (18 papers)
  7. Kathleen McKeown (85 papers)
Citations (10)