
Multi-agent Learning for Neural Machine Translation (1909.01101v1)

Published 3 Sep 2019 in cs.CL

Abstract: Conventional Neural Machine Translation (NMT) models benefit from training with an additional agent, e.g., dual learning, and from bidirectional decoding with one agent decoding from left to right and the other decoding in the opposite direction. In this paper, we extend the training framework to the multi-agent scenario by introducing diverse agents in an interactive updating process. At training time, each agent learns advanced knowledge from the others, and they work together to improve translation quality. Experimental results on NIST Chinese-English, IWSLT 2014 German-English, WMT 2014 English-German and large-scale Chinese-English translation tasks indicate that our approach achieves absolute improvements over strong baseline systems and shows competitive performance on all tasks.
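The abstract describes an interactive updating process in which each agent learns from the others during training. The paper's exact update rule is not given here, so the following is only a minimal sketch of one plausible realization (mutual-learning style): toy softmax "agents" whose training targets mix the gold labels with the averaged predictions of their peers. All names, the toy data, and the mixing weight `alpha` are illustrative assumptions, not details from the paper.

```python
import numpy as np

# Hypothetical sketch of multi-agent interactive training: each "agent"
# is a toy linear softmax classifier; at every step an agent is pulled
# toward a mix of the gold labels and the averaged predictions of the
# other agents. Purely illustrative, not the paper's actual method.

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

# Toy 3-class problem with linear structure (stands in for translation data).
X = rng.normal(size=(200, 5))
true_W = rng.normal(size=(5, 3))
y = softmax(X @ true_W).argmax(axis=1)
Y = np.eye(3)[y]  # one-hot gold targets

n_agents, lr, alpha = 3, 0.5, 0.5  # alpha weights the peer-imitation term
agents = [rng.normal(scale=0.1, size=(5, 3)) for _ in range(n_agents)]

for step in range(200):
    probs = [softmax(X @ W) for W in agents]
    for i, W in enumerate(agents):
        # Average the other agents' predictions as a soft teaching signal.
        peers = np.mean([probs[j] for j in range(n_agents) if j != i], axis=0)
        target = (1 - alpha) * Y + alpha * peers
        # Cross-entropy gradient toward the mixed target.
        grad = X.T @ (probs[i] - target) / len(X)
        agents[i] = W - lr * grad

acc = [(softmax(X @ W).argmax(axis=1) == y).mean() for W in agents]
print([round(a, 2) for a in acc])
```

In this sketch all agents share one architecture; the abstract's "diverse agents" would instead differ in architecture or decoding direction, with the same interactive-updating idea applied across them.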

Authors (5)
  1. Tianchi Bi (4 papers)
  2. Hao Xiong (41 papers)
  3. Zhongjun He (19 papers)
  4. Hua Wu (191 papers)
  5. Haifeng Wang (194 papers)
Citations (12)
