Tailoring Self-Rationalizers with Multi-Reward Distillation (2311.02805v2)

Published 6 Nov 2023 in cs.CL

Abstract: Large language models (LMs) are capable of generating free-text rationales to aid question answering. However, prior work 1) suggests that useful self-rationalization is emergent only at significant scales (e.g., the 175B-parameter GPT-3); and 2) focuses largely on downstream performance, ignoring the semantics of the rationales themselves, e.g., are they faithful, true, and helpful for humans? In this work, we enable small-scale LMs (approx. 200x smaller than GPT-3) to generate rationales that not only improve downstream task performance, but are also more plausible, consistent, and diverse, as assessed by both automatic and human evaluation. Our method, MaRio (Multi-rewArd RatIOnalization), is a multi-reward conditioned self-rationalization algorithm that optimizes multiple distinct properties such as plausibility, diversity, and consistency. Results on five difficult question-answering datasets (StrategyQA, QuaRel, OpenBookQA, NumerSense, and QASC) show that MaRio not only improves task accuracy, but also improves the self-rationalization quality of small LMs along the aforementioned axes better than a supervised fine-tuning (SFT) baseline. Extensive human evaluations confirm that MaRio rationales are preferred over SFT rationales and show qualitative improvements in plausibility and consistency.
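The abstract describes MaRio as a "multi-reward conditioned" self-rationalization algorithm. As a rough illustration only, the sketch below shows one common way such reward conditioning is implemented (in the style of control-token methods like Quark): each reward is quantized into discrete bins and encoded as a control token prefixed to the training input, so that decoding with the highest-reward tokens steers generation. The reward names, bin counts, prompt format, and helper functions here are assumptions for illustration, not the paper's actual implementation.

```python
# Minimal sketch of multi-reward conditioned training data construction,
# assuming control-token-style conditioning. Not the paper's exact method.
from dataclasses import dataclass
from typing import Callable, List, Tuple

@dataclass
class RewardSpec:
    name: str                            # e.g. "plausibility", "consistency"
    score: Callable[[str, str], float]   # (question, rationale) -> reward in [0, 1] (assumed)
    num_bins: int = 5                    # quantization granularity (assumed)

def reward_tokens(specs: List[RewardSpec], question: str, rationale: str) -> str:
    """Map each reward to a discrete control token, e.g. <plausibility_4>."""
    tokens = []
    for spec in specs:
        r = spec.score(question, rationale)
        b = max(0, min(int(r * spec.num_bins), spec.num_bins - 1))
        tokens.append(f"<{spec.name}_{b}>")
    return " ".join(tokens)

def build_training_example(
    specs: List[RewardSpec], question: str, rationale: str, answer: str
) -> Tuple[str, str]:
    """Prefix a sampled rationale with its reward-bin tokens. Fine-tuning on
    such pairs teaches the LM to condition generation on the bins, so decoding
    with the top bins at inference favors high-reward rationales."""
    prefix = reward_tokens(specs, question, rationale)
    return f"{prefix} {question}", f"{rationale} So the answer is {answer}."
```

At inference time, under these assumptions, one would prepend the highest bin token for every property (e.g. `<plausibility_4> <diversity_4> <consistency_4>`) to steer the small LM toward rationales that score well on all axes simultaneously.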

Authors (9)
  1. Sahana Ramnath (10 papers)
  2. Brihi Joshi (13 papers)
  3. Skyler Hallinan (11 papers)
  4. Ximing Lu (52 papers)
  5. Liunian Harold Li (19 papers)
  6. Aaron Chan (44 papers)
  7. Jack Hessel (50 papers)
  8. Yejin Choi (287 papers)
  9. Xiang Ren (194 papers)
Citations (9)