
Few Shot Rationale Generation using Self-Training with Dual Teachers (2306.03315v1)

Published 5 Jun 2023 in cs.CL and cs.AI

Abstract: Self-rationalizing models that also generate a free-text explanation for their predicted labels are an important tool to build trustworthy AI applications. Since generating explanations for annotated labels is a laborious and costly process, recent models rely on large pretrained language models (PLMs) as their backbone and few-shot learning. In this work we explore a self-training approach leveraging both labeled and unlabeled data to further improve few-shot models, under the assumption that neither human-written rationales nor annotated task labels are available at scale. We introduce a novel dual-teacher learning framework, which learns two specialized teacher models for task prediction and rationalization using self-training and distills their knowledge into a multi-tasking student model that can jointly generate the task label and rationale. Furthermore, we formulate a new loss function, Masked Label Regularization (MLR), which promotes explanations to be strongly conditioned on predicted labels. Evaluation on three public datasets demonstrates that the proposed methods are effective in modeling task labels and generating faithful rationales.
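The abstract outlines a three-stage pipeline: fit two specialized teachers on the small labeled set, let them pseudo-label the unlabeled pool (self-training), and distill both into a single multi-task student. Below is a minimal, hypothetical sketch of that pipeline structure. The toy tensors, model sizes, loss weights, and the use of dense vectors in place of free-text rationales (and of a pretrained seq2seq PLM) are all illustrative assumptions, not the authors' implementation; the MLR term is omitted because the abstract does not specify its exact form.

```python
# Hypothetical sketch of the dual-teacher self-training pipeline described in
# the abstract. All architectural details here are assumptions for illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F

DIM, NUM_LABELS, RAT_DIM = 32, 3, 16  # toy sizes (assumptions)

class Teacher(nn.Module):
    """Stand-in for a PLM fine-tuned for one task (label prediction or rationalization)."""
    def __init__(self, out_dim: int):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(DIM, 64), nn.ReLU(), nn.Linear(64, out_dim))

    def forward(self, x):
        return self.net(x)

class Student(nn.Module):
    """Multi-task student: shared encoder with a label head and a rationale head."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(DIM, 64), nn.ReLU())
        self.label_head = nn.Linear(64, NUM_LABELS)
        self.rationale_head = nn.Linear(64, RAT_DIM)

    def forward(self, x):
        h = self.encoder(x)
        return self.label_head(h), self.rationale_head(h)

def fit(model, x, target, loss_fn, steps=200):
    """Few-shot fine-tuning of a teacher on the small labeled set."""
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for _ in range(steps):
        opt.zero_grad()
        loss_fn(model(x), target).backward()
        opt.step()

# Toy data standing in for encoded text: a small labeled set with rationales
# and a larger unlabeled pool (the few-shot assumption from the abstract).
x_lab = torch.randn(16, DIM)
y_lab = torch.randint(0, NUM_LABELS, (16,))
r_lab = torch.randn(16, RAT_DIM)          # stand-in for gold free-text rationales
x_unlab = torch.randn(256, DIM)

# Step 1: train each specialized teacher on the small labeled set.
label_teacher, rationale_teacher = Teacher(NUM_LABELS), Teacher(RAT_DIM)
fit(label_teacher, x_lab, y_lab, F.cross_entropy)
fit(rationale_teacher, x_lab, r_lab, F.mse_loss)

# Step 2: self-training -- the teachers pseudo-label the unlabeled pool.
with torch.no_grad():
    label_targets = label_teacher(x_unlab)
    rationale_targets = rationale_teacher(x_unlab)

# Step 3: distill both teachers into one multi-task student.
student = Student()
opt = torch.optim.Adam(student.parameters(), lr=1e-3)
for _ in range(200):
    opt.zero_grad()
    label_logits, rationale_out = student(x_unlab)
    kd_label = F.kl_div(F.log_softmax(label_logits, dim=-1),
                        F.softmax(label_targets, dim=-1), reduction="batchmean")
    kd_rationale = F.mse_loss(rationale_out, rationale_targets)
    # The paper additionally adds the MLR loss here; its form is not given in the abstract.
    (kd_label + kd_rationale).backward()
    opt.step()
```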

Authors (5)
  1. Aditya Srikanth Veerubhotla (4 papers)
  2. Lahari Poddar (10 papers)
  3. Jun Yin (108 papers)
  4. György Szarvas (7 papers)
  5. Sharanya Eswaran (2 papers)
