
The Impact of Quantization and Pruning on Deep Reinforcement Learning Models (2407.04803v1)

Published 5 Jul 2024 in cs.LG and cs.AI

Abstract: Deep reinforcement learning (DRL) has achieved remarkable success across various domains, such as video games, robotics, and, recently, LLMs. However, the computational costs and memory requirements of DRL models often limit their deployment in resource-constrained environments. This challenge underscores the urgent need to explore neural network compression methods to make DRL models more practical and broadly applicable. Our study investigates the impact of two prominent compression methods, quantization and pruning, on DRL models. We examine how these techniques influence four performance factors: average return, memory, inference time, and battery utilization across various DRL algorithms and environments. We find that, despite reducing model size, these compression techniques generally do not improve the energy efficiency of DRL models. We provide insights into the trade-offs between model compression and DRL performance, offering guidelines for deploying efficient DRL models in resource-constrained settings.
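The two compression methods the abstract names can be sketched in a minimal NumPy example: uniform 8-bit post-training quantization and unstructured magnitude pruning applied to a weight array. This is a generic illustration of the techniques, not the paper's exact experimental setup; the function names and the `sparsity` parameter are illustrative.

```python
import numpy as np

def quantize_int8(weights: np.ndarray):
    """Uniform affine quantization to 8-bit unsigned integers.

    Returns the quantized array and its dequantized (float) approximation,
    which is what the policy network would use at inference time.
    """
    lo, hi = float(weights.min()), float(weights.max())
    scale = max((hi - lo) / 255.0, 1e-12)          # guard against zero range
    zero_point = np.round(-lo / scale)
    q = np.clip(np.round(weights / scale) + zero_point, 0, 255).astype(np.uint8)
    dq = (q.astype(np.float32) - zero_point) * scale
    return q, dq

def prune_by_magnitude(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero out the smallest-magnitude fraction of weights (unstructured pruning)."""
    k = int(sparsity * weights.size)
    if k == 0:
        return weights.copy()
    # k-th smallest absolute value becomes the pruning threshold
    threshold = np.partition(np.abs(weights).ravel(), k - 1)[k - 1]
    return np.where(np.abs(weights) <= threshold, 0.0, weights)
```

Note that, as the abstract's findings suggest, shrinking the stored weights this way reduces memory footprint but does not by itself change the number of floating-point operations executed per inference, which is one reason energy savings are not automatic.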

Authors (3)
  1. Heng Lu (41 papers)
  2. Mehdi Alemi (1 paper)
  3. Reza Rawassizadeh (21 papers)
