Cross-language Sentence Selection via Data Augmentation and Rationale Training (2106.02293v1)
Abstract: This paper proposes an approach to cross-language sentence selection in a low-resource setting. It uses data augmentation and negative sampling techniques on noisy parallel sentence data to directly learn a cross-lingual embedding-based query relevance model. Results show that this approach performs as well as or better than multiple state-of-the-art machine translation + monolingual retrieval systems trained on the same parallel data. Moreover, when a secondary rationale training objective is applied to encourage the model to match word alignment hints from a phrase-based statistical machine translation model, consistent improvements are seen across three language pairs (English-Somali, English-Swahili, and English-Tagalog) over a variety of state-of-the-art baselines.
- Yanda Chen
- Chris Kedzie
- Suraj Nair
- Petra Galuščáková
- Rui Zhang
- Douglas W. Oard
- Kathleen McKeown
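To make the training setup described in the abstract concrete, here is a minimal PyTorch sketch of a cross-lingual relevance model trained with negative sampling plus a secondary rationale loss that nudges attention toward word-alignment hints. This is not the authors' code: the architecture, dimensions, loss forms, and all names (`RelevanceModel`, `training_step`, `align_hint`, `lam`) are illustrative assumptions.

```python
# Illustrative sketch (not the paper's implementation): a cross-lingual
# relevance model trained with negative sampling, plus a secondary
# "rationale" loss that pushes attention toward SMT word-alignment hints.
import torch
import torch.nn as nn
import torch.nn.functional as F

class RelevanceModel(nn.Module):
    def __init__(self, en_vocab, fo_vocab, dim=128):
        super().__init__()
        self.en_emb = nn.Embedding(en_vocab, dim)   # English query embeddings
        self.fo_emb = nn.Embedding(fo_vocab, dim)   # foreign-sentence embeddings
        self.scorer = nn.Linear(dim, 1)

    def forward(self, query, sent):
        q = self.en_emb(query)                               # (B, Lq, d)
        s = self.fo_emb(sent)                                # (B, Ls, d)
        attn = torch.softmax(q @ s.transpose(1, 2), dim=-1)  # (B, Lq, Ls)
        ctx = attn @ s                                       # attended foreign context per query term
        score = self.scorer(ctx).mean(dim=(1, 2))            # pooled relevance score, shape (B,)
        return score, attn

def training_step(model, query, pos_sent, neg_sent, align_hint, lam=0.5):
    """One step: margin loss against a sampled negative, plus a rationale loss
    matching attention to (hypothetical) SMT word-alignment hints."""
    pos_score, pos_attn = model(query, pos_sent)
    neg_score, _ = model(query, neg_sent)
    # Relevance objective: the aligned (relevant) sentence should outscore the negative.
    rel_loss = F.relu(1.0 - pos_score + neg_score).mean()
    # Rationale objective: cross-entropy between attention and a normalized alignment mask.
    hint = align_hint / align_hint.sum(dim=-1, keepdim=True).clamp(min=1e-8)
    rat_loss = -(hint * torch.log(pos_attn.clamp(min=1e-8))).sum(dim=-1).mean()
    return rel_loss + lam * rat_loss

# Toy usage with random token ids and a random stand-in alignment matrix.
model = RelevanceModel(en_vocab=1000, fo_vocab=1000)
query = torch.randint(0, 1000, (2, 4))   # batch of 2 queries, 4 tokens each
pos = torch.randint(0, 1000, (2, 7))     # parallel ("relevant") foreign sentences
neg = torch.randint(0, 1000, (2, 7))     # randomly sampled negative sentences
hint = torch.rand(2, 4, 7)               # placeholder for SMT word-alignment hints
loss = training_step(model, query, pos, neg, hint)
loss.backward()
```

The key design point mirrored from the abstract is the two-part objective: a ranking-style relevance loss built from noisy parallel positives and sampled negatives, and a weighted rationale term that supervises where the model attends using alignment hints from a phrase-based SMT model.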