Why do you think that? Exploring Faithful Sentence-Level Rationales Without Supervision (2010.03384v1)

Published 7 Oct 2020 in cs.CL

Abstract: Evaluating the trustworthiness of a model's prediction is essential for differentiating between 'right for the right reasons' and 'right for the wrong reasons'. Identifying textual spans that determine the target label, known as faithful rationales, usually relies on pipeline approaches or reinforcement learning. However, such methods either require supervision and thus costly annotation of the rationales or employ non-differentiable models. We propose a differentiable training framework to create models which output faithful rationales on a sentence level, by solely applying supervision on the target task. To achieve this, our model solves the task based on each rationale individually and learns to assign high scores to those which solved the task best. Our evaluation on three different datasets shows competitive results compared to a standard BERT blackbox while exceeding a pipeline counterpart's performance in two cases. We further exploit the transparent decision-making process of these models to prefer selecting the correct rationales by applying direct supervision, thereby boosting the performance on the rationale level.
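
The abstract describes the training framework only at a high level, so the following is a minimal PyTorch sketch of the idea, not the authors' implementation: the encoder is a mean-pooled bag of embeddings standing in for the BERT models used in the paper, and the names (`SentenceRationaleSketch`, `scorer`) are hypothetical. Each candidate sentence is classified on its own, a learned score weights the per-sentence predictions into the document-level prediction, and supervision on the target label alone therefore rewards high scores for sentences that solve the task.

```python
import torch
import torch.nn as nn

class SentenceRationaleSketch(nn.Module):
    """Toy illustration: classify the input from each sentence individually,
    then combine the per-sentence predictions via learned rationale scores."""

    def __init__(self, vocab_size: int, hidden_dim: int, num_labels: int):
        super().__init__()
        # Mean-pooled bag of embeddings; a stand-in for the paper's BERT
        # encoder, chosen only to keep this sketch self-contained.
        self.embed = nn.EmbeddingBag(vocab_size, hidden_dim, mode="mean")
        self.classifier = nn.Linear(hidden_dim, num_labels)  # per-sentence label logits
        self.scorer = nn.Linear(hidden_dim, 1)                # per-sentence rationale score

    def forward(self, sentences: list[torch.Tensor]):
        # Encode every candidate sentence independently.
        h = torch.stack([self.embed(s.unsqueeze(0)).squeeze(0) for s in sentences])
        label_logits = self.classifier(h)                           # (S, num_labels)
        weights = torch.softmax(self.scorer(h).squeeze(-1), dim=0)  # (S,)
        # Document prediction = score-weighted mixture of per-sentence
        # predictions; everything stays differentiable, so cross-entropy on
        # the target label alone pushes the scorer toward sentences whose
        # individual predictions solve the task.
        doc_logits = (weights.unsqueeze(-1) * label_logits).sum(dim=0)
        return doc_logits, weights

# Toy usage with random token ids for three candidate sentences.
model = SentenceRationaleSketch(vocab_size=10_000, hidden_dim=64, num_labels=2)
sentences = [torch.randint(0, 10_000, (n,)) for n in (7, 12, 5)]
doc_logits, weights = model(sentences)  # argmax(weights) = selected rationale
```

At inference time the highest-weighted sentence serves as the extracted rationale, which is what makes the decision process transparent enough to add the direct rationale supervision the abstract mentions.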

Authors (3)
  1. Max Glockner (9 papers)
  2. Ivan Habernal (30 papers)
  3. Iryna Gurevych (264 papers)
Citations (24)
