
Self-training with Few-shot Rationalization: Teacher Explanations Aid Student in Few-shot NLU (2109.08259v1)

Published 17 Sep 2021 in cs.CL

Abstract: While pre-trained LLMs have obtained state-of-the-art performance for several natural language understanding tasks, they are quite opaque in terms of their decision-making process. While some recent works focus on rationalizing neural predictions by highlighting salient concepts in the text as justifications or rationales, they rely on thousands of labeled training examples for both task labels and annotated rationales for every instance. Such extensive large-scale annotations are infeasible to obtain for many tasks. To this end, we develop a multi-task teacher-student framework based on self-training LLMs with limited task-specific labels and rationales, and judicious sample selection to learn from informative pseudo-labeled examples. We study several characteristics of what constitutes a good rationale and demonstrate that the neural model performance can be significantly improved by making it aware of its rationalized predictions, particularly in low-resource settings. Extensive experiments on several benchmark datasets demonstrate the effectiveness of our approach.
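
The abstract describes a teacher-student self-training loop: a teacher fine-tuned on the few labeled examples (with rationales) pseudo-labels unlabeled data, informative pseudo-labeled examples are selected, and the student trains on the gold data plus that selection. The sketch below illustrates this flow only; the class and function names (Example, predict_with_rationale, the confidence-based selection rule, the round count) are assumptions for illustration, not the paper's actual implementation or selection criterion.

```python
import random
from dataclasses import dataclass
from typing import List, Optional, Tuple


@dataclass
class Example:
    text: str
    label: Optional[int] = None            # task label (None when unlabeled)
    rationale: Optional[List[int]] = None  # indices of tokens highlighted as justification


def train(model, examples: List[Example]) -> None:
    # Placeholder for multi-task fine-tuning on task labels and rationales.
    pass


def predict_with_rationale(model, ex: Example) -> Tuple[int, List[int], float]:
    # Placeholder: a real model would return a pseudo-label, the tokens it
    # highlights as a rationale, and a confidence score used for selection.
    return 0, [], random.random()


def self_train(teacher, student,
               labeled: List[Example], unlabeled: List[Example],
               rounds: int = 5, keep_ratio: float = 0.2):
    """Teacher-student self-training with rationale-aware sample selection (illustrative)."""
    train(teacher, labeled)  # fit the teacher on the few labeled examples
    for _ in range(rounds):
        # Teacher pseudo-labels the unlabeled pool and rationalizes its predictions.
        scored = []
        for ex in unlabeled:
            label, rationale, conf = predict_with_rationale(teacher, ex)
            scored.append((conf, Example(ex.text, label, rationale)))
        # Judicious sample selection: keep only the most confident pseudo-labeled examples.
        scored.sort(key=lambda pair: pair[0], reverse=True)
        selected = [ex for _, ex in scored[: int(keep_ratio * len(scored))]]
        # Student learns from gold data plus the selected rationalized pseudo-labels.
        train(student, labeled + selected)
        teacher = student  # the student becomes the next round's teacher
    return student
```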

Authors (3)
  1. Meghana Moorthy Bhat (3 papers)
  2. Alessandro Sordoni (53 papers)
  3. Subhabrata Mukherjee (59 papers)
Citations (21)
