
UNIREX: A Unified Learning Framework for Language Model Rationale Extraction (2112.08802v3)

Published 16 Dec 2021 in cs.CL, cs.AI, and cs.LG

Abstract: An extractive rationale explains a language model's (LM's) prediction on a given task instance by highlighting the text inputs that most influenced the prediction. Ideally, rationale extraction should be faithful (reflective of the LM's actual behavior) and plausible (convincing to humans), without compromising the LM's (i.e., task model's) task performance. Although attribution algorithms and select-predict pipelines are commonly used in rationale extraction, they both rely on certain heuristics that hinder them from satisfying all three desiderata. In light of this, we propose UNIREX, a flexible learning framework that generalizes rationale extractor optimization as follows: (1) specify the architecture for a learned rationale extractor; (2) select explainability objectives (i.e., faithfulness and plausibility criteria); and (3) jointly train the task model and rationale extractor on the task using the selected objectives. UNIREX enables replacing prior works' heuristic design choices with a generic learned rationale extractor in (1) and optimizing it for all three desiderata in (2)-(3). To facilitate comparison between methods with respect to multiple desiderata, we introduce the Normalized Relative Gain (NRG) metric. Across five text classification datasets, our best UNIREX configuration outperforms baselines by an average of 32.9% NRG. Plus, we find that UNIREX-trained rationale extractors can even generalize to unseen datasets and tasks.
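The abstract introduces NRG as a way to aggregate several desiderata (faithfulness, plausibility, task performance) into a single comparable score. As a rough illustration only, the sketch below assumes one plausible reading: each criterion's raw score is rescaled to [0, 1] relative to the worst and best scores across the compared methods, then averaged. The function name and exact normalization are assumptions, not the paper's definition.

```python
def normalized_relative_gain(scores_by_method):
    """Hypothetical NRG sketch: scores_by_method maps a method name to a list
    of per-desideratum scores (higher is better). Each criterion is min-max
    normalized across methods, then the normalized gains are averaged."""
    num_criteria = len(next(iter(scores_by_method.values())))
    # Per-criterion worst/best across all compared methods.
    mins = [min(s[i] for s in scores_by_method.values()) for i in range(num_criteria)]
    maxs = [max(s[i] for s in scores_by_method.values()) for i in range(num_criteria)]
    nrg = {}
    for method, s in scores_by_method.items():
        gains = [
            (s[i] - mins[i]) / (maxs[i] - mins[i]) if maxs[i] > mins[i] else 0.5
            for i in range(num_criteria)
        ]
        nrg[method] = sum(gains) / num_criteria
    return nrg
```

For example, a method that is best on every criterion gets NRG 1.0, while one that is best on only one of two criteria and worst on the other gets 0.5.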

Authors (8)
  1. Aaron Chan (44 papers)
  2. Maziar Sanjabi (44 papers)
  3. Lambert Mathias (19 papers)
  4. Liang Tan (22 papers)
  5. Shaoliang Nie (17 papers)
  6. Xiaochang Peng (6 papers)
  7. Xiang Ren (194 papers)
  8. Hamed Firooz (27 papers)
Citations (37)
