
Improving Cross-Domain Low-Resource Text Generation through LLM Post-Editing: A Programmer-Interpreter Approach (2402.04609v1)

Published 7 Feb 2024 in cs.CL

Abstract: Post-editing has proven effective in improving the quality of text generated by LLMs such as GPT-3.5 or GPT-4, particularly when direct updating of their parameters to enhance text quality is infeasible or expensive. However, relying solely on smaller language models for post-editing can limit the LLMs' ability to generalize across domains. Moreover, the editing strategies in these methods are not optimally designed for text-generation tasks. To address these limitations, we propose a neural programmer-interpreter approach that preserves the domain generalization ability of LLMs when editing their output. The editing actions in this framework are specifically devised for text generation. Extensive experiments demonstrate that the programmer-interpreter significantly enhances GPT-3.5's performance in logical form-to-text conversion and low-resource machine translation, surpassing other state-of-the-art (SOTA) LLM post-editing methods in cross-domain settings.
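To make the programmer-interpreter idea concrete, here is a minimal sketch of the interpreter side: a small edit program is applied to a draft token sequence produced by an LLM. The abstract does not specify the paper's actual action set or program format, so the action names (KEEP, DELETE, INSERT) and this interpreter are hypothetical illustrations of the general pattern, not the authors' implementation.

```python
# Hypothetical sketch: applying an edit program to an LLM draft.
# The action set (KEEP/DELETE/INSERT) is assumed for illustration;
# the paper's actual editing actions may differ.

from dataclasses import dataclass
from typing import List, Optional


@dataclass
class EditAction:
    op: str                       # "KEEP", "DELETE", or "INSERT" (assumed set)
    token: Optional[str] = None   # payload, used only by INSERT


def interpret(draft: List[str], program: List[EditAction]) -> List[str]:
    """Apply an edit program to a draft token sequence.

    KEEP and DELETE each consume one draft token; INSERT emits a new
    token without consuming any input. Leftover draft tokens pass through.
    """
    output, i = [], 0
    for action in program:
        if action.op == "KEEP":
            output.append(draft[i])
            i += 1
        elif action.op == "DELETE":
            i += 1
        elif action.op == "INSERT":
            output.append(action.token)
    output.extend(draft[i:])  # keep any remaining draft tokens unchanged
    return output


# Example: a programmer model would emit this program to fix one article.
draft = "the cat sat on a mat".split()
program = (
    [EditAction("KEEP")] * 4
    + [EditAction("DELETE"), EditAction("INSERT", "the"), EditAction("KEEP")]
)
print(" ".join(interpret(draft, program)))  # -> "the cat sat on the mat"
```

In this framing, the "programmer" (a trained model) predicts the edit program conditioned on the LLM draft, while the fixed interpreter executes it, so the LLM's parameters are never updated.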

Authors (5)
  1. Zhuang Li (69 papers)
  2. Levon Haroutunian (3 papers)
  3. Raj Tumuluri (2 papers)
  4. Philip Cohen (5 papers)
  5. Gholamreza Haffari (141 papers)
