
Learning from Mistakes -- A Framework for Neural Architecture Search (2111.06353v2)

Published 11 Nov 2021 in cs.LG and cs.AI

Abstract: Learning from one's mistakes is an effective human learning technique: learners focus more on the topics where mistakes were made, so as to deepen their understanding. In this paper, we investigate whether this human learning strategy can be applied to machine learning. We propose a novel machine learning method called Learning From Mistakes (LFM), wherein the learner improves its ability to learn by focusing more on its mistakes during revision. We formulate LFM as a three-stage optimization problem: 1) the learner learns; 2) the learner re-learns, focusing on its mistakes; and 3) the learner validates its learning. We develop an efficient algorithm to solve the LFM problem. We apply the LFM framework to neural architecture search on CIFAR-10, CIFAR-100, and ImageNet. Experimental results demonstrate the effectiveness of our model.

Authors (6)
  1. Bhanu Garg (4 papers)
  2. Li Zhang (693 papers)
  3. Pradyumna Sridhara (1 paper)
  4. Ramtin Hosseini (8 papers)
  5. Eric Xing (127 papers)
  6. Pengtao Xie (86 papers)
Citations (7)
