Self-AMPLIFY: Improving Small Language Models with Self Post Hoc Explanations (2402.12038v3)

Published 19 Feb 2024 in cs.LG and cs.CL

Abstract: Incorporating natural language rationales in the prompt together with In-Context Learning (ICL) has led to a significant improvement in LLM performance. However, generating high-quality rationales requires human annotation or the use of auxiliary proxy models. In this work, we propose Self-AMPLIFY, which automatically generates rationales from post hoc explanation methods applied to Small LLMs (SLMs) to improve their own performance. Self-AMPLIFY is a 3-step method that targets samples, generates rationales, and builds a final prompt to leverage ICL. Self-AMPLIFY is evaluated on four SLMs and five datasets requiring strong reasoning abilities. It achieves good results against competitors, leading to strong accuracy improvements. Self-AMPLIFY is the first method to apply post hoc explanation methods to autoregressive LLMs to generate rationales that improve their own performance in a fully automated manner.
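The 3-step pipeline described in the abstract can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the keyword-based scorer stands in for an SLM's label confidence, leave-one-out word importance stands in for the post hoc explanation method, and the prompt template is an assumption.

```python
def toy_score(text: str, label: str) -> float:
    # Stand-in for an SLM's confidence in `label` given `text` (assumption).
    keywords = {"positive": {"great", "love"}, "negative": {"bad", "hate"}}
    words = set(text.lower().split())
    return len(words & keywords[label]) / max(len(words), 1)

def select_samples(dataset, k=2):
    # Step 1: target samples, here simply those the model scores highest on.
    return sorted(dataset, key=lambda ex: -toy_score(ex[0], ex[1]))[:k]

def generate_rationale(text: str, label: str, top_n=2) -> str:
    # Step 2: post hoc explanation via leave-one-out word importance:
    # a word's importance is the confidence drop when it is removed.
    base = toy_score(text, label)
    words = text.split()
    importance = []
    for i, w in enumerate(words):
        ablated = " ".join(words[:i] + words[i + 1:])
        importance.append((base - toy_score(ablated, label), w))
    top = [w for _, w in sorted(importance, reverse=True)[:top_n]]
    return "Key words: " + ", ".join(top)

def build_prompt(examples, query: str) -> str:
    # Step 3: assemble the final ICL prompt, pairing each demonstration
    # with its automatically generated rationale.
    parts = [
        f"Input: {text}\nRationale: {generate_rationale(text, label)}\nLabel: {label}"
        for text, label in examples
    ]
    parts.append(f"Input: {query}\nRationale:")
    return "\n\n".join(parts)

dataset = [("great movie, love it", "positive"), ("bad plot, hate it", "negative")]
prompt = build_prompt(select_samples(dataset), "love this great film")
print(prompt)
```

In the paper the scorer and explainer would be the SLM itself plus an attribution method applied to it; the point of the sketch is only the shape of the pipeline, in which the model's own explanations become the rationales in its prompt.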

Authors (4)
  1. Milan Bhan (6 papers)
  2. Nicolas Chesneau (10 papers)
  3. Marie-Jeanne Lesot (22 papers)
  4. Jean-Noel Vittaut (5 papers)