
Policy-based optimization: single-step policy gradient method seen as an evolution strategy (2104.06175v3)

Published 13 Apr 2021 in math.OC

Abstract: This research reports on the recent development of a black-box optimization method based on single-step deep reinforcement learning (DRL), and on its conceptual proximity to evolution strategy (ES) techniques. In the fashion of policy gradient (PG) methods, the policy-based optimization (PBO) algorithm relies on the update of a policy network to describe the density function of its next generation of individuals. The method is described in detail, and its similarities to both ES and PG methods are pointed out. The relevance of the approach is then evaluated on the minimization of standard analytic functions, with comparison to classic ES techniques (ES, CMA-ES). It is then applied to the optimization of parametric control laws designed for the Lorenz attractor. Given the scarce existing literature on the method, this contribution establishes PBO as a valid, versatile black-box optimization technique, and opens the way to multiple future improvements enabled by the flexibility of the neural-network approach.
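The abstract frames PBO as a single-step policy-gradient update of a search distribution, which is what places it conceptually close to evolution strategies. As a rough illustration of that idea only (not the paper's reference implementation), the sketch below trains a Gaussian search distribution with a REINFORCE-style update on a standard analytic test function; the use of PyTorch, the sphere objective, and all names and hyperparameters are assumptions made for the example.

```python
# Minimal sketch of a single-step policy-gradient black-box optimizer
# in the spirit of PBO: a learnable Gaussian search distribution plays
# the role of the policy, a generation of individuals is sampled, the
# black-box objective is evaluated, and a REINFORCE-style update with
# normalized advantages moves the distribution toward better regions.
# Hyperparameters and the test function are illustrative assumptions.

import torch

def sphere(x):
    # Standard analytic test function to minimize.
    return (x ** 2).sum(dim=-1)

dim, pop_size, n_generations = 2, 32, 200

# Policy: learnable mean and log-std of an isotropic Gaussian search
# distribution (a trivial "network"; a deeper MLP would play the same role).
mean = torch.zeros(dim, requires_grad=True)
log_std = torch.zeros(dim, requires_grad=True)
opt = torch.optim.Adam([mean, log_std], lr=0.05)

for gen in range(n_generations):
    dist = torch.distributions.Normal(mean, log_std.exp())
    x = dist.sample((pop_size,))          # one generation of individuals
    fitness = sphere(x)                   # black-box evaluation
    # Lower objective means higher reward; normalize to reduce variance.
    reward = -fitness
    adv = (reward - reward.mean()) / (reward.std() + 1e-8)
    # Single-step policy gradient: each individual is one "episode".
    log_prob = dist.log_prob(x).sum(dim=-1)
    loss = -(adv * log_prob).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

print("final mean:", mean.detach(), "f(mean):", sphere(mean.detach()).item())
```

On the sphere function this sketch drives the distribution mean toward the origin within a few hundred generations; swapping in another analytic benchmark or a parametric control-law cost only requires replacing the objective function.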
