Preferred-Action-Optimized Diffusion Policies for Offline Reinforcement Learning (2405.18729v1)

Published 29 May 2024 in cs.LG and cs.AI

Abstract: Offline reinforcement learning (RL) aims to learn optimal policies from previously collected datasets. Recently, due to their powerful representational capabilities, diffusion models have shown significant potential as policy models for offline RL problems. However, previous offline RL algorithms based on diffusion policies generally adopt weighted regression to improve the policy. This approach optimizes the policy using only the collected actions and is sensitive to Q-values, which limits the potential for further performance gains. To this end, we propose a novel preferred-action-optimized diffusion policy for offline RL. In particular, an expressive conditional diffusion model is used to represent the diverse distribution of the behavior policy. Based on this diffusion model, preferred actions within the same behavior distribution are automatically generated through the critic function. Moreover, an anti-noise preference optimization is designed to achieve policy improvement using the preferred actions; it can adapt to noisy preferred actions, enabling stable training. Extensive experiments demonstrate that the proposed method achieves competitive or superior performance compared to previous state-of-the-art offline RL methods, particularly on sparse-reward tasks such as Kitchen and AntMaze. Additionally, we empirically verify the effectiveness of anti-noise preference optimization.
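The core idea in the abstract — sampling candidate actions from a behavior model and using a critic to pick preferred ones for a preference-based policy update — can be illustrated with a minimal sketch. This is not the authors' implementation: the behavior sampler below is a Gaussian stand-in for their conditional diffusion model, and `critic_q` and the logistic preference loss are hypothetical placeholders for the paper's learned critic and anti-noise preference optimization.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_behavior_actions(state, n_candidates=8, action_dim=2):
    # Stand-in for the conditional diffusion model: draw candidate actions
    # near the (truncated) state so the sketch stays self-contained.
    return state[:action_dim] + rng.normal(scale=0.1, size=(n_candidates, action_dim))

def critic_q(state, actions):
    # Hypothetical critic: assigns higher value to actions nearer the origin.
    return -np.sum(actions ** 2, axis=1)

def pick_preference_pair(state, n_candidates=8):
    # Generate candidates within the behavior distribution, then rank them
    # with the critic to form a (preferred, rejected) pair.
    candidates = sample_behavior_actions(state, n_candidates)
    q = critic_q(state, candidates)
    return candidates[np.argmax(q)], candidates[np.argmin(q)]

def preference_loss(state, preferred, rejected):
    # Illustrative logistic preference loss on the Q-value gap; the paper's
    # anti-noise objective differs, this only conveys the general shape.
    gap = critic_q(state, preferred[None])[0] - critic_q(state, rejected[None])[0]
    return -np.log(1.0 / (1.0 + np.exp(-gap)))

state = np.array([0.5, -0.3])
preferred, rejected = pick_preference_pair(state)
loss = preference_loss(state, preferred, rejected)
```

Since the preferred action maximizes the critic over the sampled candidates, the Q-value gap is non-negative and the loss stays bounded by log 2, regardless of how noisy the candidates are.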

Authors (11)
  1. Tianle Zhang (22 papers)
  2. Jiayi Guan (4 papers)
  3. Lin Zhao (228 papers)
  4. Yihang Li (18 papers)
  5. Dongjiang Li (8 papers)
  6. Zecui Zeng (4 papers)
  7. Lei Sun (138 papers)
  8. Yue Chen (236 papers)
  9. Xuelong Wei (3 papers)
  10. Lusong Li (8 papers)
  11. Xiaodong He (162 papers)
