Minimizing Energy Consumption of Deep Learning Models by Energy-Aware Training (2307.00368v1)

Published 1 Jul 2023 in cs.LG, cs.AI, and cs.CV

Abstract: Deep learning models have undergone a significant increase in the number of parameters they possess, leading to a larger number of operations executed during inference. This expansion significantly contributes to higher energy consumption and prediction latency. In this work, we propose EAT, a gradient-based algorithm that aims to reduce energy consumption during model training. To this end, we leverage a differentiable approximation of the $\ell_0$ norm and use it as a sparsity penalty over the training loss. Through our experimental analysis conducted on three datasets and two deep neural networks, we demonstrate that our energy-aware training algorithm EAT is able to train networks with a better trade-off between classification performance and energy efficiency.
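
The abstract only sketches the objective, so the following is a minimal PyTorch sketch of how a loss of this shape could be assembled: a standard classification loss plus a smooth surrogate of the $\ell_0$ norm weighted by a regularization coefficient. The specific surrogate $\hat{\ell}_0(z)=\sum_i z_i^2/(z_i^2+\sigma)$, the choice to penalize model parameters rather than activations, and the hyperparameters `sigma` and `lam` are illustrative assumptions, not details taken from the paper.

```python
# Hedged sketch (not the authors' reference implementation): a classification
# loss combined with a differentiable l0-style sparsity penalty, in the spirit
# of the energy-aware objective described in the abstract. The surrogate form
# and all hyperparameter values below are assumptions for illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F


def l0_surrogate(z: torch.Tensor, sigma: float = 1e-4) -> torch.Tensor:
    """Smooth, differentiable approximation of the l0 norm of z."""
    return (z.pow(2) / (z.pow(2) + sigma)).sum()


def energy_aware_loss(model: nn.Module, x: torch.Tensor, y: torch.Tensor,
                      lam: float = 1e-5, sigma: float = 1e-4) -> torch.Tensor:
    """Cross-entropy loss plus a sparsity penalty over the model parameters."""
    logits = model(x)
    ce = F.cross_entropy(logits, y)
    sparsity = sum(l0_surrogate(p, sigma) for p in model.parameters())
    return ce + lam * sparsity


# Usage sketch: one gradient step on a toy model and a random batch.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 128),
                      nn.ReLU(), nn.Linear(128, 10))
opt = torch.optim.SGD(model.parameters(), lr=0.01)
x, y = torch.randn(32, 1, 28, 28), torch.randint(0, 10, (32,))

loss = energy_aware_loss(model, x, y)
opt.zero_grad()
loss.backward()
opt.step()
```

Because the surrogate is differentiable everywhere, the sparsity term can be minimized jointly with the task loss by ordinary gradient descent; the coefficient `lam` then controls the trade-off between classification performance and sparsity (and hence energy) that the paper evaluates.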

References (1)
Authors (7)
  1. Dario Lazzaro (4 papers)
  2. Antonio Emanuele Cinà (18 papers)
  3. Maura Pintor (24 papers)
  4. Ambra Demontis (34 papers)
  5. Battista Biggio (81 papers)
  6. Fabio Roli (77 papers)
  7. Marcello Pelillo (53 papers)
Citations (5)
