Learning by Self-Explanation, with Application to Neural Architecture Search (2012.12899v2)

Published 23 Dec 2020 in cs.LG, cs.AI, and cs.CV

Abstract: Learning by self-explanation is an effective technique in human learning, in which students explain a learned topic to themselves to deepen their understanding of it. It is interesting to investigate whether this explanation-driven learning methodology, broadly used by humans, can also improve machine learning. Inspired by this, we propose a novel machine learning method called learning by self-explanation (LeaSE). In our approach, an explainer model improves its learning ability by trying to clearly explain to an audience model how a prediction outcome is made. LeaSE is formulated as a four-level optimization problem involving a sequence of four learning stages conducted end-to-end in a unified framework: 1) the explainer learns; 2) the explainer explains; 3) the audience learns; 4) the explainer re-learns based on the performance of the audience. We develop an efficient algorithm to solve the LeaSE problem. We apply LeaSE to neural architecture search on CIFAR-100, CIFAR-10, and ImageNet. Experimental results strongly demonstrate the effectiveness of our method.
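
A minimal sketch of the four-level structure as a nested optimization. The notation below (explainer weights W_E and architecture A, audience weights W_A, explanations e, training and validation losses) is our own reading inferred from the abstract, not necessarily the paper's exact formulation:

\begin{align*}
W_E^{*}(A) &= \operatorname*{arg\,min}_{W_E} \; \mathcal{L}_{\mathrm{tr}}\!\left(W_E, A\right) && \text{(stage 1: explainer learns)} \\
e^{*}(A) &= \mathrm{Explain}\!\left(W_E^{*}(A), A\right) && \text{(stage 2: explainer explains)} \\
W_A^{*}(A) &= \operatorname*{arg\,min}_{W_A} \; \mathcal{L}_{\mathrm{tr}}\!\left(W_A, e^{*}(A)\right) && \text{(stage 3: audience learns)} \\
A^{*} &= \operatorname*{arg\,min}_{A} \; \mathcal{L}_{\mathrm{val}}\!\left(W_A^{*}(A)\right) && \text{(stage 4: explainer re-learns)}
\end{align*}

Under this reading, the top level (stage 4) updates the explainer's architecture A based on the audience's validation performance, which depends on A only through the lower-level solutions; in the NAS application, A would be the searched architecture.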

Authors (2)
  1. Ramtin Hosseini (8 papers)
  2. Pengtao Xie (86 papers)
Citations (5)
