Two-stage architectural fine-tuning with neural architecture search using early-stopping in image classification (2202.08604v3)

Published 17 Feb 2022 in cs.CV and cs.AI

Abstract: In many deep neural network (DNN) applications, the difficulty of gathering high-quality data in industrial settings hinders the practical use of DNNs. Thus, the concept of transfer learning has emerged, which leverages the pretrained knowledge of DNNs trained on large-scale datasets. Building on this, this paper proposes two-stage architectural fine-tuning inspired by neural architecture search (NAS). One of the main ideas is mutation, which reduces search cost by exploiting given architectural information. Moreover, early-stopping is employed to cut NAS costs by terminating the search process in advance. Experimental results verify that the proposed method reduces computational costs by 32.4% and search costs by 22.3%.
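The two ideas named in the abstract, mutating a given (pretrained) architecture rather than searching from scratch, and early-stopping the search after a run of non-improving candidates, can be illustrated with a minimal sketch. Everything below is a hypothetical illustration, not the paper's actual implementation: the search space (`SEED_ARCH`, `WIDTH_CHOICES`), the toy `evaluate` proxy, and the `patience` threshold are all assumptions made for the example.

```python
import copy
import random

random.seed(0)

# Hypothetical search space: per-layer channel widths of a pretrained backbone.
SEED_ARCH = {"block1": 64, "block2": 128, "block3": 256}
WIDTH_CHOICES = [32, 64, 128, 256, 512]

def mutate(arch):
    """Mutation step: perturb one architectural choice of the parent.
    Starting from the given architecture narrows the search space."""
    child = copy.deepcopy(arch)
    layer = random.choice(list(child))
    child[layer] = random.choice(WIDTH_CHOICES)
    return child

def evaluate(arch):
    """Toy stand-in for validation accuracy after fine-tuning; a real run
    would fine-tune the candidate on the target dataset and validate it."""
    target = {"block1": 128, "block2": 256, "block3": 256}
    return -sum(abs(arch[k] - target[k]) for k in arch)

def search(seed_arch, max_iters=100, patience=10):
    """Mutation-based NAS loop with early stopping: quit once `patience`
    consecutive mutations fail to improve on the best score so far."""
    best_arch, best_score = seed_arch, evaluate(seed_arch)
    stale = 0
    for _ in range(max_iters):
        cand = mutate(best_arch)
        score = evaluate(cand)
        if score > best_score:
            best_arch, best_score, stale = cand, score, 0
        else:
            stale += 1
            if stale >= patience:  # early stopping cuts search cost
                break
    return best_arch

print(search(SEED_ARCH))
```

In this toy loop, early stopping plays the same cost-cutting role the abstract describes: the search terminates as soon as further mutations stop paying off, rather than exhausting the full iteration budget.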

Citations (5)
