DUAL-REFLECT: Enhancing Large Language Models for Reflective Translation through Dual Learning Feedback Mechanisms (2406.07232v2)

Published 11 Jun 2024 in cs.CL and cs.AI

Abstract: Recently, LLMs enhanced by self-reflection have achieved promising performance on machine translation. The key idea is guiding LLMs to generate translations with human-like feedback. However, existing self-reflection methods lack effective feedback information, limiting translation performance. To address this, we introduce the DUAL-REFLECT framework, which leverages the dual learning of translation tasks to provide effective feedback, thereby enhancing the models' self-reflective abilities and improving translation performance. Applying this method across various translation tasks demonstrates its effectiveness in improving translation accuracy and resolving ambiguities, especially in translation tasks with low-resource language pairs.
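
The core loop the abstract describes, translating forward, back-translating via the dual task, and using the discrepancy as reflection feedback, can be sketched as follows. This is a minimal illustration assuming a generic `llm` completion helper; the prompts, stopping rule, and function names are hypothetical stand-ins, not the authors' actual implementation.

```python
# Minimal sketch of a dual-learning reflective translation loop, following
# the abstract's description of DUAL-REFLECT. The `llm` helper and all
# prompt wording are hypothetical, not taken from the paper.

def llm(prompt: str) -> str:
    """Placeholder for any chat-completion call (e.g. an API or local model)."""
    raise NotImplementedError("wire this to your LLM of choice")

def dual_reflect_translate(source: str, src_lang: str, tgt_lang: str,
                           max_rounds: int = 3) -> str:
    # 1. Draft translation (forward task).
    translation = llm(f"Translate this {src_lang} text into {tgt_lang}:\n{source}")

    for _ in range(max_rounds):
        # 2. Back-translation (dual task): re-translate the draft into the source language.
        back = llm(f"Translate this {tgt_lang} text into {src_lang}:\n{translation}")

        # 3. Dual feedback: differences between the source and its back-translation
        #    signal where the draft drifted in meaning.
        feedback = llm(
            "Compare the original text with its back-translation. "
            "List any meaning differences, or reply 'NONE' if they are equivalent.\n"
            f"Original: {source}\nBack-translation: {back}"
        )
        if feedback.strip() == "NONE":
            break  # the dual task detects no semantic drift; stop reflecting

        # 4. Reflection: revise the translation using the dual feedback.
        translation = llm(
            f"Revise this {tgt_lang} translation to address the issues listed.\n"
            f"Source ({src_lang}): {source}\nDraft: {translation}\nIssues: {feedback}"
        )
    return translation
```

The design choice worth noting is that the feedback signal comes from the dual (back-translation) task rather than from the model critiquing its own output directly, which is what the abstract identifies as the gap in prior self-reflection methods.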

Authors (8)
  1. Andong Chen (6 papers)
  2. Lianzhang Lou (2 papers)
  3. Kehai Chen (59 papers)
  4. Xuefeng Bai (34 papers)
  5. Yang Xiang (187 papers)
  6. Muyun Yang (21 papers)
  7. Tiejun Zhao (70 papers)
  8. Min Zhang (630 papers)
Citations (5)