
Cross-Refine: Improving Natural Language Explanation Generation by Learning in Tandem (2409.07123v2)

Published 11 Sep 2024 in cs.CL and cs.LG

Abstract: Natural language explanations (NLEs) are vital for elucidating the reasoning behind LLM decisions. Many techniques have been developed to generate NLEs using LLMs. However, like humans, LLMs might not always produce optimal NLEs on the first attempt. Inspired by human learning processes, we introduce Cross-Refine, which employs role modeling by deploying two LLMs as generator and critic, respectively. The generator outputs an initial NLE and then refines this explanation using feedback and suggestions provided by the critic. Cross-Refine does not require any supervised training data or additional training. We validate Cross-Refine across three NLP tasks using three state-of-the-art open-source LLMs through automatic and human evaluation. We select Self-Refine (Madaan et al., 2023) as the baseline, which only utilizes self-feedback to refine the explanations. Our findings from automatic evaluation and a user study indicate that Cross-Refine outperforms Self-Refine. Meanwhile, Cross-Refine can perform effectively with less powerful LLMs, whereas Self-Refine only yields strong results with ChatGPT. Additionally, we conduct an ablation study to assess the importance of feedback and suggestions; both play an important role in refining explanations. We further evaluate Cross-Refine on a bilingual dataset in English and German.
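The abstract describes a single generator-critic round: the generator drafts an explanation, the critic returns feedback and suggestions, and the generator refines its draft accordingly. The Python sketch below illustrates that flow under assumed interfaces; `generator` and `critic` are placeholder prompt-to-text callables, and the prompts are illustrative, not the authors' actual implementation.

```python
# Minimal sketch of a Cross-Refine-style generator/critic round.
# Assumptions: `generator` and `critic` are callables that take a prompt
# string and return generated text (e.g., wrappers around two different LLMs).

def cross_refine(generator, critic, task_input: str, label: str) -> str:
    # Step 1: the generator produces an initial natural language explanation (NLE).
    initial_nle = generator(
        f"Input: {task_input}\nPredicted label: {label}\n"
        "Explain the reasoning behind this prediction."
    )

    # Step 2: the critic reviews the explanation and returns feedback
    # together with concrete suggestions for improvement.
    feedback = critic(
        f"Input: {task_input}\nLabel: {label}\nExplanation: {initial_nle}\n"
        "Give feedback on this explanation and suggest how to improve it."
    )

    # Step 3: the generator refines its explanation using the critic's
    # feedback and suggestions; no supervised data or extra training is used.
    refined_nle = generator(
        f"Input: {task_input}\nLabel: {label}\n"
        f"Initial explanation: {initial_nle}\nCritic feedback: {feedback}\n"
        "Rewrite the explanation, addressing the feedback."
    )
    return refined_nle
```

In this sketch the two roles are filled by different models, which is what distinguishes the approach from self-feedback baselines such as Self-Refine, where the same model both generates and critiques.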

Authors (6)
  1. Qianli Wang (11 papers)
  2. Tatiana Anikina (9 papers)
  3. Nils Feldhus (18 papers)
  4. Simon Ostermann (26 papers)
  5. Sebastian Möller (77 papers)
  6. Vera Schmitt (8 papers)
