Decoupling Value and Policy for Generalization in Reinforcement Learning (2102.10330v2)

Published 20 Feb 2021 in cs.LG and cs.AI

Abstract: Standard deep reinforcement learning algorithms use a shared representation for the policy and value function, especially when training directly from images. However, we argue that more information is needed to accurately estimate the value function than to learn the optimal policy. Consequently, the use of a shared representation for the policy and value function can lead to overfitting. To alleviate this problem, we propose two approaches which are combined to create IDAAC: Invariant Decoupled Advantage Actor-Critic. First, IDAAC decouples the optimization of the policy and value function, using separate networks to model them. Second, it introduces an auxiliary loss which encourages the representation to be invariant to task-irrelevant properties of the environment. IDAAC shows good generalization to unseen environments, achieving a new state-of-the-art on the Procgen benchmark and outperforming popular methods on DeepMind Control tasks with distractors. Our implementation is available at https://github.com/rraileanu/idaac.

Authors (2)
  1. Roberta Raileanu (41 papers)
  2. Rob Fergus (67 papers)
Citations (82)

Summary

Decoupling Value and Policy for Generalization in Reinforcement Learning

This paper introduces the Invariant Decoupled Advantage Actor-Critic (IDAAC) framework, which addresses generalization challenges in deep reinforcement learning (RL). The conventional approach in deep RL uses a shared representation for both the policy and value function, a practice that can exacerbate overfitting because accurately estimating the value function requires more information than learning the optimal policy. The paper posits that separate representations for the policy and value function can alleviate this issue.

IDAAC comprises two primary contributions. First, it decouples policy optimization from value function estimation, employing separate neural networks for each. Second, the framework incorporates an auxiliary loss designed to enforce invariance against task-irrelevant environmental properties. The empirical evaluation substantiates IDAAC's capacity for robust generalization to unseen environments, achieving superior performance on the Procgen benchmark—a set of procedurally generated RL environments—and outperforming existing methods in DeepMind Control tasks that include distractors.
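The decoupling can be illustrated with a minimal sketch, assuming a PyTorch implementation with simplified MLP encoders (the official repository trains IMPALA-style CNNs on image observations): the policy network carries an auxiliary advantage head, while the value function lives in a fully separate network whose gradients never reach the policy encoder.

```python
# Minimal sketch of IDAAC's decoupled architecture (assumes PyTorch and simple
# MLP encoders; the official repo uses IMPALA-style CNNs on image observations).
import torch
import torch.nn as nn


class PolicyNetwork(nn.Module):
    """Policy network whose encoder also feeds an auxiliary advantage head."""

    def __init__(self, obs_dim: int, num_actions: int, hidden: int = 256):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(obs_dim, hidden), nn.ReLU())
        self.policy_head = nn.Linear(hidden, num_actions)  # action logits
        self.advantage_head = nn.Linear(hidden, 1)          # predicts the GAE advantage

    def forward(self, obs):
        z = self.encoder(obs)
        return self.policy_head(z), self.advantage_head(z), z


class ValueNetwork(nn.Module):
    """Separate value network: value gradients never touch the policy encoder."""

    def __init__(self, obs_dim: int, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.ReLU(), nn.Linear(hidden, 1)
        )

    def forward(self, obs):
        return self.net(obs)
```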

The authors illustrate the limitations of shared representations by highlighting the policy-value asymmetry, where a standard RL agent overfits to idiosyncratic features such as environment-specific backgrounds. This phenomenon is exemplified through the comparison of semantically identical, yet visually distinct, levels from the Procgen game Ninja. An agent trained to predict the value function is more likely to memorize correlations with non-causal features that do not necessarily support generalization.

By training the policy network to predict advantages rather than values, IDAAC enhances generalization. The advantage function, being a relative measure of how much better one action is than the current policy's expectation, proves less susceptible to overfitting than absolute value predictions. In addition, an adversarial loss constrains the policy representation to be invariant to episode-specific cues such as the time step within an episode, promoting generalization across procedurally distinct levels.
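The two auxiliary objectives can be summarized in a hedged sketch, again assuming PyTorch: the advantage head regresses toward GAE estimates, and a small discriminator tries to tell which of two observations from the same episode occurred earlier, while the policy encoder is trained to make the pair indistinguishable. The helper name and coefficients (`idaac_auxiliary_losses`, `adv_coef`, `inv_coef`) are illustrative, not the paper's exact values.

```python
import torch
import torch.nn.functional as F


def idaac_auxiliary_losses(policy_net, discriminator, obs_i, obs_j, gae_adv_i,
                           adv_coef=0.25, inv_coef=0.001):
    """Illustrative combination of the advantage and invariance losses."""
    _, pred_adv_i, z_i = policy_net(obs_i)   # obs_i, obs_j come from the same episode
    _, _, z_j = policy_net(obs_j)

    # (1) Advantage regression: a relative quantity, less instance-specific than
    #     absolute value targets.
    adv_loss = F.mse_loss(pred_adv_i.squeeze(-1), gae_adv_i)

    # (2) Adversarial invariance: the discriminator outputs the logit of
    #     P(obs_i occurred before obs_j); the encoder is rewarded when that
    #     prediction collapses to 50/50, i.e. the representation carries no
    #     episode-progress (step-count) information.
    order_logits = discriminator(torch.cat([z_i, z_j], dim=-1))
    uniform_target = torch.full_like(order_logits, 0.5)
    invariance_loss = F.binary_cross_entropy_with_logits(order_logits, uniform_target)

    return adv_coef * adv_loss + inv_coef * invariance_loss
```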

The experiments extensively contrast IDAAC's performance with prominent RL methods, including PPO, UCB-DrAC, and PPG, across a range of environments. IDAAC consistently demonstrates superior generalization and test performance, with a notably smaller generalization gap than competing models.

The paper also examines the relationship between generalization and value loss, noting that agents trained on larger sets of levels exhibit higher value loss yet better test performance. This observation supports the claim that an accurate value fit is not a prerequisite for a well-generalizing policy, and that decoupling policy and value optimization prevents value-related overfitting from contaminating the policy representation.

The implications of this research are significant, contributing to both the theoretical understanding and the practical application of RL in complex, diverse environments. By emphasizing the need for task-agnostic representations, IDAAC sets a precedent for future work on auxiliary objectives that enforce broader invariance and representational efficiency, paving the way for more adaptable RL systems that can be deployed in real-world scenarios with varying dynamics and objectives.
