FRAME: Evaluating Rationale-Label Consistency Metrics for Free-Text Rationales (2207.00779v2)

Published 2 Jul 2022 in cs.CL, cs.AI, and cs.LG

Abstract: Following how humans communicate, free-text rationales aim to use natural language to explain neural language model (LM) behavior. However, free-text rationales' unconstrained nature makes them prone to hallucination, so it is important to have metrics for free-text rationale quality. Existing free-text rationale metrics measure how consistent the rationale is with the LM's predicted label, but there is no protocol for assessing such metrics' reliability. Thus, we propose FRAME, a framework for evaluating rationale-label consistency (RLC) metrics for free-text rationales. FRAME is based on three axioms: (1) good metrics should yield highest scores for reference rationales, which maximize RLC by construction; (2) good metrics should be appropriately sensitive to semantic perturbation of rationales; and (3) good metrics should be robust to variation in the LM's task performance. Across three text classification datasets, we show that existing RLC metrics cannot satisfy all three FRAME axioms, since they are implemented via model pretraining which muddles the metric's signal. Then, we introduce a non-pretraining RLC metric that greatly outperforms baselines on (1) and (3), while performing competitively on (2). Finally, we discuss the limitations of using RLC to evaluate free-text rationales.
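The first two axioms can be illustrated with a minimal sketch. Here `rlc_metric` is a toy word-overlap stand-in, not the paper's actual metric, and all rationale strings are hypothetical examples; the sketch only shows the shape of the axiom checks (reference rationales should score highest; semantic perturbation should lower the score).

```python
# Hypothetical sketch of FRAME-style axiom checks for a rationale-label
# consistency (RLC) metric. `rlc_metric` is a toy stand-in, not the
# paper's implementation; axiom (3), robustness to variation in the
# LM's task performance, is omitted because it requires multiple LMs.

def rlc_metric(rationale: str, label: str) -> float:
    # Toy scorer: fraction of label words that appear in the rationale.
    r_words = set(rationale.lower().split())
    l_words = set(label.lower().split())
    return len(r_words & l_words) / max(len(l_words), 1)

def check_axioms(reference: str, generated: str, perturbed: str, label: str):
    ref_score = rlc_metric(reference, label)
    gen_score = rlc_metric(generated, label)
    pert_score = rlc_metric(perturbed, label)
    return {
        # Axiom 1: the reference rationale should score highest.
        "axiom1": ref_score >= gen_score,
        # Axiom 2: semantic perturbation should reduce the score.
        "axiom2": pert_score < ref_score,
    }

result = check_axioms(
    reference="the movie was wonderful so sentiment is positive",
    generated="the movie was good",
    perturbed="the weather was cold",
    label="positive sentiment",
)
print(result)  # both axiom checks pass for this toy scorer
```

A real RLC metric would score rationales with a model rather than word overlap; the paper's contribution is showing which metric designs satisfy these checks and which do not.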

Authors (7)
  1. Aaron Chan (44 papers)
  2. Shaoliang Nie (17 papers)
  3. Liang Tan (22 papers)
  4. Xiaochang Peng (6 papers)
  5. Hamed Firooz (27 papers)
  6. Maziar Sanjabi (44 papers)
  7. Xiang Ren (194 papers)
Citations (8)