TinyFormer: Efficient Transformer Design and Deployment on Tiny Devices (2311.01759v1)

Published 3 Nov 2023 in cs.LG and cs.AR

Abstract: Developing deep learning models on tiny devices (e.g., microcontroller units, MCUs) has attracted much attention in various embedded IoT applications. However, it is challenging to efficiently design and deploy recent advanced models (e.g., transformers) on tiny devices due to their severe hardware resource constraints. In this work, we propose TinyFormer, a framework specifically designed to develop and deploy resource-efficient transformers on MCUs. TinyFormer consists of SuperNAS, SparseNAS, and SparseEngine. SuperNAS searches a vast search space for an appropriate supernet; SparseNAS then identifies the best sparse single-path model, including its transformer architecture, within that supernet; finally, SparseEngine efficiently deploys the searched sparse models onto MCUs. To the best of our knowledge, SparseEngine is the first deployment framework capable of performing inference of sparse transformer models on MCUs. Evaluation results on the CIFAR-10 dataset demonstrate that TinyFormer achieves an accuracy of $96.1\%$ while adhering to hardware constraints of $1$MB storage and $320$KB memory. Additionally, TinyFormer achieves speedups of up to $12.2\times$ in sparse inference compared to the CMSIS-NN library. We believe TinyFormer brings powerful transformers into TinyML scenarios and greatly expands the scope of deep learning applications.
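To make the hardware-constrained search concrete, here is a minimal Python sketch of the kind of candidate filtering a TinyFormer-style pipeline implies: sampling single-path models and discarding those that exceed the 1 MB storage and 320 KB memory budgets quoted in the abstract. All names, fields, and cost estimates below are illustrative assumptions, not the paper's actual SuperNAS/SparseNAS implementation.

```python
# Hypothetical sketch: filter sampled single-path candidates by the MCU
# budgets reported in the paper (1 MB storage, 320 KB memory). The cost
# models here are toy estimates for illustration only.

import random
from dataclasses import dataclass

STORAGE_BUDGET_BYTES = 1 * 1024 * 1024   # 1 MB flash for weights (paper's constraint)
MEMORY_BUDGET_BYTES = 320 * 1024         # 320 KB SRAM for activations (paper's constraint)

@dataclass
class Candidate:
    """A single-path model sampled from a supernet (illustrative fields)."""
    num_layers: int
    embed_dim: int
    sparsity: float  # fraction of weights pruned

    def storage_bytes(self) -> int:
        # Toy estimate: int8 weights, reduced by the pruned fraction.
        dense_params = self.num_layers * (4 * self.embed_dim * self.embed_dim)
        return int(dense_params * (1.0 - self.sparsity))

    def peak_memory_bytes(self) -> int:
        # Toy estimate: one layer's int8 activations at sequence length 64.
        return 64 * self.embed_dim * 4

def fits_mcu(c: Candidate) -> bool:
    """Keep only candidates that satisfy both hardware budgets."""
    return (c.storage_bytes() <= STORAGE_BUDGET_BYTES
            and c.peak_memory_bytes() <= MEMORY_BUDGET_BYTES)

def sample_candidates(n: int) -> list[Candidate]:
    rng = random.Random(0)
    return [
        Candidate(
            num_layers=rng.randint(2, 8),
            embed_dim=rng.choice([64, 96, 128, 192]),
            sparsity=rng.choice([0.5, 0.7, 0.9]),
        )
        for _ in range(n)
    ]

if __name__ == "__main__":
    feasible = [c for c in sample_candidates(100) if fits_mcu(c)]
    print(f"{len(feasible)} of 100 sampled candidates fit within 1 MB / 320 KB")
```

In the actual framework, such feasibility checks would be combined with accuracy evaluation of candidates drawn from the supernet; this sketch only shows the budget-filtering idea, not the search or deployment stages.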

Authors (9)
  1. Jianlei Yang (32 papers)
  2. Jiacheng Liao (1 paper)
  3. Fanding Lei (1 paper)
  4. Meichen Liu (7 papers)
  5. Junyi Chen (31 papers)
  6. Lingkun Long (1 paper)
  7. Han Wan (5 papers)
  8. Bei Yu (113 papers)
  9. Weisheng Zhao (143 papers)
Citations (1)