
Soft Contextual Data Augmentation for Neural Machine Translation (1905.10523v1)

Published 25 May 2019 in cs.CL

Abstract: While data augmentation is an important trick to boost the accuracy of deep learning methods in computer vision tasks, its study in natural language tasks is still very limited. In this paper, we present a novel data augmentation method for neural machine translation. Different from previous augmentation methods that randomly drop, swap or replace words with other words in a sentence, we softly augment a randomly chosen word in a sentence by its contextual mixture of multiple related words. More accurately, we replace the one-hot representation of a word by a distribution (provided by a language model) over the vocabulary, i.e., replacing the embedding of this word by a weighted combination of the embeddings of multiple semantically similar words. Since the weights of those words depend on the contextual information of the word to be replaced, the newly generated sentences capture much richer information than previous augmentation methods. Experimental results on both small scale and large scale machine translation datasets demonstrate the superiority of our method over strong baselines.
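The core operation described in the abstract is easy to sketch: instead of a one-hot embedding lookup, a randomly chosen position receives the expectation of the embedding matrix under a language model's contextual distribution over the vocabulary. Below is a minimal PyTorch sketch of that idea; the function name, the `lm_logits` input, and the `replace_prob` rate are illustrative assumptions, not the authors' released code.

```python
import torch


def soft_augment_embeddings(token_ids, embedding, lm_logits, replace_prob=0.15):
    """Softly augment a batch of sentences for NMT training (sketch).

    token_ids:    (batch, seq_len) int64 token indices.
    embedding:    torch.nn.Embedding holding the (vocab, dim) matrix.
    lm_logits:    (batch, seq_len, vocab) logits from a language model
                  evaluated on the same sentences (assumed precomputed).
    replace_prob: chance that each position is softly replaced.
    """
    # Standard hard embeddings: one-hot lookup.
    hard = embedding(token_ids)                          # (B, T, D)

    # Soft embeddings: expectation of the embedding matrix under the
    # LM's contextual distribution P(w | context), i.e. a weighted
    # combination of the embeddings of semantically related words.
    probs = torch.softmax(lm_logits, dim=-1)             # (B, T, V)
    soft = probs @ embedding.weight                      # (B, T, D)

    # Randomly pick which positions get the soft replacement.
    mask = torch.rand(token_ids.shape, device=token_ids.device) < replace_prob
    return torch.where(mask.unsqueeze(-1), soft, hard)
```

Because the mixture weights come from the language model's prediction at each position, the same word receives different soft replacements in different contexts, which is what distinguishes this from context-free word swapping.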

Authors (8)
  1. Jinhua Zhu (28 papers)
  2. Fei Gao (458 papers)
  3. Lijun Wu (113 papers)
  4. Yingce Xia (53 papers)
  5. Tao Qin (201 papers)
  6. Wengang Zhou (153 papers)
  7. Xueqi Cheng (274 papers)
  8. Tie-Yan Liu (242 papers)
Citations (120)