Parallel Sentence-Level Explanation Generation for Real-World Low-Resource Scenarios (2302.10707v1)

Published 21 Feb 2023 in cs.CL

Abstract: To reveal the rationale behind model predictions, many works have explored providing explanations in various forms. Recently, to further guarantee readability, a growing number of works turn to generating sentence-level, human-language explanations. However, current approaches to sentence-level explanation rely heavily on annotated training data, which limits the development of interpretability to only a few tasks. To the best of our knowledge, this paper is the first to explore this problem across the spectrum from weakly supervised to fully unsupervised learning. We also note the high latency of autoregressive sentence-level explanation generation, which makes interpretability available only asynchronously, after the prediction. We therefore propose a non-autoregressive interpretable model that enables parallel explanation generation and simultaneous prediction. Through extensive experiments on the Natural Language Inference and Spouse Prediction tasks, we find that users can train classifiers with comparable performance $10-15\times$ faster with parallel explanation generation, using only a few or no annotated training examples.
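
To make the core idea concrete, below is a minimal sketch of a classifier that emits an explanation non-autoregressively: learned per-slot queries decode all explanation tokens in one parallel pass over the encoder states, so the explanation is produced alongside the label rather than token-by-token afterward. This is an illustrative toy, not the authors' architecture; all module names, sizes, and the mean-pooled classification head are assumptions.

```python
# Toy sketch of parallel (non-autoregressive) explanation generation with
# simultaneous classification. Hypothetical architecture for illustration;
# the paper's exact model may differ.
import torch
import torch.nn as nn


class ParallelExplainerClassifier(nn.Module):
    def __init__(self, vocab_size=1000, d_model=128, n_labels=3, max_expl_len=32):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        enc_layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers=2)
        # Classification head: mean-pooled encoder states -> label logits.
        self.classifier = nn.Linear(d_model, n_labels)
        # One learned query per explanation slot; all slots decode in parallel.
        self.slot_queries = nn.Parameter(torch.randn(max_expl_len, d_model))
        dec_layer = nn.TransformerDecoderLayer(d_model, nhead=4, batch_first=True)
        self.decoder = nn.TransformerDecoder(dec_layer, num_layers=2)
        self.expl_head = nn.Linear(d_model, vocab_size)

    def forward(self, input_ids):
        h = self.encoder(self.embed(input_ids))        # (B, S, d)
        label_logits = self.classifier(h.mean(dim=1))  # (B, n_labels)
        # No causal mask and no dependence on previously generated tokens:
        # every explanation position attends to the input at once, so
        # decoding is a single forward pass instead of T sequential steps.
        q = self.slot_queries.unsqueeze(0).expand(h.size(0), -1, -1)
        expl_logits = self.expl_head(self.decoder(q, h))  # (B, T, vocab)
        return label_logits, expl_logits


model = ParallelExplainerClassifier()
tokens = torch.randint(0, 1000, (2, 16))      # toy batch of token ids
label_logits, expl_logits = model(tokens)
explanation = expl_logits.argmax(dim=-1)      # all explanation tokens at once
print(label_logits.shape, explanation.shape)  # torch.Size([2, 3]) torch.Size([2, 32])
```

The latency contrast is the point: an autoregressive decoder needs one forward pass per generated token, while this style of model produces the full explanation in a single pass, which is what allows the explanation to arrive together with the prediction.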

Authors (3)
  1. Yan Liu (420 papers)
  2. Xiaokang Chen (39 papers)
  3. Qi Dai (58 papers)
Citations (4)
