Characterising Across-Stack Optimisations for Deep Convolutional Neural Networks (1809.07196v1)

Published 19 Sep 2018 in stat.ML, cs.CV, cs.LG, and cs.PF

Abstract: Convolutional Neural Networks (CNNs) are extremely computationally demanding, presenting a large barrier to their deployment on resource-constrained devices. Since such systems are where some of their most useful applications lie (e.g. obstacle detection for mobile robots, vision-based medical assistive technology), significant bodies of work from both machine learning and systems communities have attempted to provide optimisations that will make CNNs available to edge devices. In this paper we unify the two viewpoints in a Deep Learning Inference Stack and take an across-stack approach by implementing and evaluating the most common neural network compression techniques (weight pruning, channel pruning, and quantisation) and optimising their parallel execution with a range of programming approaches (OpenMP, OpenCL) and hardware architectures (CPU, GPU). We provide comprehensive Pareto curves to instruct trade-offs under constraints of accuracy, execution time, and memory space.
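Of the compression techniques the paper evaluates, unstructured weight pruning is the simplest to illustrate: the smallest-magnitude weights are zeroed until a target sparsity is reached. The sketch below is not the authors' implementation, just a minimal magnitude-threshold version (function name and NumPy usage are illustrative assumptions):

```python
import numpy as np

def magnitude_prune(weights, sparsity):
    """Zero out the smallest-magnitude entries of `weights` so that
    roughly `sparsity` fraction of entries become zero
    (unstructured, magnitude-based weight pruning)."""
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)      # number of entries to prune
    if k == 0:
        return weights.copy()
    # k-th smallest magnitude becomes the pruning threshold
    threshold = np.partition(flat, k - 1)[k - 1]
    pruned = weights.copy()
    pruned[np.abs(pruned) <= threshold] = 0.0
    return pruned

rng = np.random.default_rng(0)
w = rng.standard_normal((64, 64))
pruned = magnitude_prune(w, 0.9)
print(float(np.mean(pruned == 0.0)))   # close to 0.9
```

Channel pruning removes entire filters instead of individual weights, which (unlike the unstructured variant above) shrinks the dense tensor shapes and so maps more directly onto the OpenMP/OpenCL execution the paper benchmarks.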

Authors (6)
  1. Jack Turner (9 papers)
  2. José Cano (33 papers)
  3. Valentin Radu (10 papers)
  4. Elliot J. Crowley (27 papers)
  5. Michael O'Boyle (15 papers)
  6. Amos Storkey (75 papers)
Citations (40)