
Target Entropy Annealing for Discrete Soft Actor-Critic (2112.02852v1)

Published 6 Dec 2021 in cs.LG and cs.AI

Abstract: Soft Actor-Critic (SAC) is considered the state-of-the-art algorithm in continuous action space settings. It uses the maximum entropy framework for efficiency and stability, and applies a heuristic temperature Lagrange term to tune the temperature $\alpha$, which determines how "soft" the policy should be. Counter-intuitively, empirical evidence shows that SAC does not perform well in discrete domains. In this paper we investigate possible explanations for this phenomenon and propose Target Entropy Scheduled SAC (TES-SAC), an annealing method for the target entropy parameter applied to SAC. The target entropy is a constant in the temperature Lagrange term and represents the target policy entropy in discrete SAC. We compare our method on Atari 2600 games against SAC with different constant target entropies, and analyze how our scheduling affects SAC.
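The temperature Lagrange update the abstract refers to can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the linear annealing schedule, the fractions of the maximum entropy $\log|A|$, and all function names are assumptions chosen for the example. The key mechanics are real SAC: the temperature loss $J(\alpha) = \alpha\,(H(\pi) - \bar{H})$ is minimized in $\log\alpha$, so $\alpha$ rises when policy entropy falls below the (here, annealed) target.

```python
import numpy as np

def policy_entropy(probs):
    # Shannon entropy of a discrete action distribution.
    return -np.sum(probs * np.log(probs + 1e-8))

def anneal_target_entropy(step, total_steps, n_actions,
                          start_frac=0.98, end_frac=0.7):
    # Hypothetical linear schedule (the paper's actual schedule may differ):
    # decay the target from start_frac to end_frac of the maximum
    # achievable entropy log|A| over the course of training.
    max_ent = np.log(n_actions)
    t = min(step / total_steps, 1.0)
    return (start_frac + (end_frac - start_frac) * t) * max_ent

def update_log_alpha(log_alpha, probs, target_entropy, lr=1e-3):
    # One gradient-descent step on J(alpha) = alpha * (H(pi) - H_target)
    # with respect to log_alpha. If entropy is below the target, the
    # gradient is negative, so log_alpha (and hence alpha) increases,
    # pushing the policy toward more exploration.
    grad = np.exp(log_alpha) * (policy_entropy(probs) - target_entropy)
    return log_alpha - lr * grad
```

In full discrete SAC the policy entropy would come from the actor network's action distribution per state; here a raw probability vector stands in for it.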

Authors (6)
  1. Yaosheng Xu (8 papers)
  2. Dailin Hu (4 papers)
  3. Litian Liang (8 papers)
  4. Stephen McAleer (41 papers)
  5. Pieter Abbeel (372 papers)
  6. Roy Fox (39 papers)
Citations (10)