Deep GP Proximal Policy Optimization
- The paper introduces GPPO, extending PPO with deep Gaussian processes to jointly approximate policy and value functions for improved uncertainty estimation.
- It employs a Deep Sigma-Point Process variant to deterministically propagate uncertainty, enabling robust exploration in high-dimensional control tasks.
- Empirical results on benchmarks like Walker2D and Humanoid show competitive performance and enhanced robustness under perturbed dynamics.
Deep Gaussian Process Proximal Policy Optimization (GPPO) is a scalable, model-free actor-critic reinforcement learning algorithm that employs Deep Gaussian Processes (DGPs) to jointly approximate the policy and value function. Unlike conventional deep neural networks, GPPO offers calibrated uncertainty estimates, facilitating safer and more effective exploration in high-dimensional continuous control environments. It incorporates the Deep Sigma-Point Process (DSPP) variant of DGPs, allowing deterministic propagation of uncertainty via learned quadrature (“sigma”) points and variational inference with inducing inputs. Empirical evaluations demonstrate that GPPO retains or improves upon the benchmark performance of Proximal Policy Optimization (PPO) while providing robust uncertainty-aware exploration strategies (Lende et al., 22 Nov 2025).
1. DGP Actor–Critic Architecture
GPPO replaces the standard neural-network-based actor and critic modules with two Deep Gaussian Processes. Each DGP comprises multiple layers, and each layer consists of several independent Gaussian processes whose outputs form the inputs to the next layer.
For efficient approximation, each GP is augmented with a set of inducing inputs and corresponding inducing outputs, equipped with standard GP priors and layerwise variational approximations.
A KL-regularized joint variational objective (an evidence lower bound) is constructed across all layers; a schematic form is sketched below.
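A minimal sketch of the sparse variational DGP construction that this description matches (generic notation, not the paper's symbols): each layer $l$ carries inducing outputs $\mathbf{u}_l$ with a GP prior and a Gaussian variational posterior, and the layers are tied together by a single KL-regularized evidence lower bound,

$$
p(\mathbf{u}_l) = \mathcal{N}\!\big(\mathbf{0},\, K_{Z_l Z_l}\big), \qquad
q(\mathbf{u}_l) = \mathcal{N}\!\big(\mathbf{m}_l,\, \mathbf{S}_l\big),
$$

$$
\mathcal{L}_{\mathrm{ELBO}} = \mathbb{E}_{q}\!\left[\log p\big(\mathbf{y} \mid f_L(\cdots f_1(\mathbf{x})\cdots)\big)\right]
- \sum_{l=1}^{L} \mathrm{KL}\!\big(q(\mathbf{u}_l)\,\Vert\, p(\mathbf{u}_l)\big),
$$

where $Z_l$ are the inducing inputs of layer $l$ and $f_l$ denotes that layer's GP mapping.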
The algorithm employs a squared-exponential (RBF) kernel for each GP, with layer- and index-specific length-scales and output scales.
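For reference, the squared-exponential kernel with layer- and GP-specific hyperparameters takes the usual form

$$
k_{l,d}(\mathbf{x}, \mathbf{x}') = \sigma_{l,d}^{2} \exp\!\left(-\frac{\lVert \mathbf{x} - \mathbf{x}' \rVert^{2}}{2\,\ell_{l,d}^{2}}\right),
$$

where $\ell_{l,d}$ and $\sigma_{l,d}^{2}$ are the length-scale and output scale of GP $d$ in layer $l$ (the subscripting here is illustrative; the paper may use, e.g., per-dimension length-scales).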
The DSPP variant deterministically propagates a fixed set of learned quadrature ("sigma") points through each layer, yielding at the output a finite mixture approximation to the predictive distribution.
Each sigma-point location and its associated mixture weight are learned jointly with the variational parameters and kernel hyperparameters.
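In the DSPP construction, the resulting predictive distribution is a finite mixture over the quadrature sites; schematically, with $Q$ sigma points and learned weights $\omega_i$,

$$
p(y \mid \mathbf{x}) \;\approx\; \sum_{i=1}^{Q} \omega_i\, \mathcal{N}\!\big(y\,;\, \mu_i(\mathbf{x}),\, \sigma_i^{2}(\mathbf{x})\big),
\qquad \omega_i \ge 0, \quad \sum_{i=1}^{Q} \omega_i = 1,
$$

where $\mu_i(\mathbf{x})$ and $\sigma_i^{2}(\mathbf{x})$ are the output mean and variance obtained by propagating the $i$-th sigma point through the layers.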
2. GPPO Objective Formulation
Building on the clipped surrogate objective of PPO, the GPPO loss maximizes the expected advantage while regularizing policy entropy and enforcing Bayesian consistency of the value head; a schematic form is sketched after the list. Its components are:
- the policy likelihood ratio between the current and rollout (old) policies,
- the advantage estimate, computed from sampled DGP value functions,
- an entropy bonus on the policy head, and
- a value-head log-likelihood scored under the DSPP mixture.
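A schematic objective consistent with this description, assuming a standard PPO-style weighting (the coefficients $c_1$, $c_2$, $\beta$ and the exact combination are assumptions, not taken from the paper):

$$
\mathcal{L}_{\mathrm{GPPO}}(\theta) =
\mathbb{E}_t\!\left[\min\!\Big(r_t(\theta)\,\hat{A}_t,\ \operatorname{clip}\big(r_t(\theta),\,1-\epsilon,\,1+\epsilon\big)\,\hat{A}_t\Big)\right]
+ c_1\,\mathbb{E}_t\!\left[\mathcal{H}\big(\pi_\theta(\cdot \mid s_t)\big)\right]
+ c_2\,\mathbb{E}_t\!\left[\log p_\theta\big(\hat{R}_t \mid s_t\big)\right]
- \beta \sum_{l} \mathrm{KL}\!\big(q(\mathbf{u}_l)\,\Vert\, p(\mathbf{u}_l)\big),
$$

with likelihood ratio $r_t(\theta) = \pi_\theta(a_t \mid s_t) / \pi_{\theta_{\mathrm{old}}}(a_t \mid s_t)$ and $\log p_\theta(\hat{R}_t \mid s_t)$ the DSPP mixture log-likelihood of the return target under the value head.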
This formulation preserves the stability and trust-region behavior of PPO, while enforcing a proper Bayesian scoring rule for regression and KL regularization for variational posteriors.
3. Training Procedure and Workflow
GPPO employs an iterative two-stage algorithm for policy improvement and value estimation:
- Initialization:
  - Initialize the DGP parameters: kernel hyperparameters, inducing-point locations, quadrature (sigma) points, and variational parameters.
  - Copy the current parameters into a frozen "old-policy" copy used for rollout sampling.
- Rollout Collection:
  - At each time step, sample an action from the mixture output of the GP policy head.
  - Sample a value estimate from the DGP value head.
  - Record the transition tuple.
- Advantage Computation:
  - Compute advantages via Generalized Advantage Estimation (GAE), leveraging samples from the GP value head (the standard recursion is shown after this list).
- Optimization:
  - For several epochs over minibatches, maximize the GPPO objective with Adam, using stochastic gradient estimates.
- Update Reference:
  - After each update cycle, copy the current parameters into the old-policy reference.
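The GAE recursion referenced above is standard and reproduced here for completeness ($\gamma$ and $\lambda$ are the usual discount and trace parameters; $r_t$ here denotes the reward, not the likelihood ratio):

$$
\delta_t = r_t + \gamma\, \hat{V}(s_{t+1}) - \hat{V}(s_t),
\qquad
\hat{A}_t = \sum_{k=0}^{\infty} (\gamma\lambda)^{k}\, \delta_{t+k},
$$

with $\hat{V}$ drawn as a sample from the DGP value head.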
This architecture supports parallelized mini-batch training and empirical evaluation on continuous control benchmarks.
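A minimal, self-contained sketch of the clipped update step in PyTorch; the function name, default coefficients, and tensor inputs are illustrative placeholders rather than the paper's implementation:

```python
import torch

def gppo_loss(log_probs_new, log_probs_old, advantages,
              entropy, value_log_lik, kl_term,
              clip_eps=0.2, ent_coef=0.01, vf_coef=0.5, kl_coef=1.0):
    """Clipped PPO-style surrogate plus entropy bonus, DSPP-style value
    log-likelihood, and a variational KL penalty (illustrative weighting)."""
    ratio = torch.exp(log_probs_new - log_probs_old)                 # r_t(theta)
    unclipped = ratio * advantages
    clipped = torch.clamp(ratio, 1.0 - clip_eps, 1.0 + clip_eps) * advantages
    policy_term = torch.min(unclipped, clipped).mean()               # clipped surrogate
    objective = (policy_term
                 + ent_coef * entropy.mean()                         # exploration bonus
                 + vf_coef * value_log_lik.mean()                    # Bayesian value scoring
                 - kl_coef * kl_term)                                # variational regularizer
    return -objective                                                # minimized by Adam

# Toy usage with random tensors standing in for a rollout minibatch.
B = 64
loss = gppo_loss(torch.randn(B), torch.randn(B), torch.randn(B),
                 torch.rand(B), torch.randn(B), torch.tensor(0.1))
print(float(loss))
```

In practice the input tensors would come from the stored rollout and the DGP heads, with gradients flowing into kernel hyperparameters, inducing points, sigma points, and variational parameters.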
4. Uncertainty Quantification and Exploration Dynamics
GPPO’s uncertainty estimation originates from the predictive variance of the DSPP heads; for any input, the mixture output yields closed-form predictive moments, as sketched below.
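For a mixture output of this form, the predictive mean and variance follow from the standard law of total variance (generic notation, matching the mixture sketch in Section 1):

$$
\mu(\mathbf{x}) = \sum_{i=1}^{Q} \omega_i\, \mu_i(\mathbf{x}),
\qquad
\sigma^{2}(\mathbf{x}) = \sum_{i=1}^{Q} \omega_i \left(\sigma_i^{2}(\mathbf{x}) + \mu_i(\mathbf{x})^{2}\right) - \mu(\mathbf{x})^{2}.
$$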
The policy head’s predictive variance translates into entropy of the action distribution, incentivizing exploration in regions of high uncertainty. The value head’s sampled predictions randomize the advantage estimates, analogous to Thompson sampling within value estimation.
This approach results in calibrated uncertainty—propagated both in decision-making and value assessment—enabling safer, adaptive exploration especially when environmental dynamics are uncertain or nonstationary.
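As a small illustration of the Thompson-sampling-style randomization described above, the following sketch draws value samples from a Gaussian-mixture value head; the mixture parameters here are synthetic placeholders, not outputs of the paper's model:

```python
import torch

def sample_value(weights, means, stds, n_samples=1):
    """Draw value samples from a Gaussian-mixture predictive distribution,
    mirroring the sampled value-head behaviour described in the text."""
    comp = torch.distributions.Categorical(probs=weights).sample((n_samples,))
    eps = torch.randn(n_samples)
    return means[comp] + stds[comp] * eps

# Synthetic 3-component mixture standing in for a DSPP value-head output.
w = torch.tensor([0.5, 0.3, 0.2])
mu = torch.tensor([1.0, 0.8, 1.4])
sd = torch.tensor([0.1, 0.2, 0.05])
print(sample_value(w, mu, sd, n_samples=5))
```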
5. Computational Scalability and DSPP Efficiency
Classic GP inference scales cubically in the number of data points, i.e., $\mathcal{O}(N^3)$ for $N$ observations. GPPO instead employs variational inference with $M \ll N$ inducing points, reducing the per-update complexity of each GP block to cubic in $M$ rather than in $N$, with total cost growing linearly in the number of layers and GPs per layer.
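Under the standard sparse variational accounting (an assumed form consistent with this description, not quoted from the paper), the per-update cost of a single GP block with minibatch size $B$ and $M$ inducing points is roughly

$$
\mathcal{O}\!\left(B M^{2} + M^{3}\right) \ \text{per GP block,}
\qquad \text{versus} \qquad
\mathcal{O}\!\left(N^{3}\right) \ \text{for exact inference on } N \text{ points,}
$$

summed over the GPs in each layer.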
Typical experimental settings use a modest number of inducing points and sigma points per GP, which suffices for accurate approximation on the benchmark tasks; memory scales with the number of inducing points per GP.
Computationally, GPPO incurs approximately $7\times$ or more overhead relative to PPO per environment step for action inference (versus about $2$ ms for PPO), plus additional overhead per training update, while remaining practical on modern consumer GPUs.
6. Empirical Results and Benchmark Comparisons
GPPO was empirically evaluated on the Gymnasium Walker2D-v5 and Humanoid-v5 benchmarks, using training durations of $10$k (Walker2D) and $15$k (Humanoid) episodes, with $3$ random seeds and interquartile mean (IQM) returns reported with bootstrap confidence intervals.
Walker2D Results:
- Final IQM returns: PPO $742.16$, GPPO $2525.06$ (with bootstrap CIs).
- Evaluation returns (mean ± std over $100$ episodes) are reported for both methods.
Humanoid Results:
- Final IQM returns: PPO $349.64$, GPPO $248.43$ (with bootstrap CIs).
- Evaluation returns (mean ± std over $100$ episodes) are likewise reported.
GPPO demonstrates improved robustness under dynamics perturbations (e.g., modified gravity settings), outperforming PPO in several perturbed scenarios. The increased training time is offset by superior uncertainty quantification and exploration, especially in environments with complex or variable dynamics.
7. Methodological Significance and Application Scope
GPPO systematically extends PPO to fully Bayesian actor-critic learning via scalable DGPs. The method combines tractable approximation of model uncertainty, calibrated exploration, scalability through variational inducing-point inference, and integration with high-performance RL benchmarks. A plausible implication is the suitability of GPPO for safety-critical control domains, where Bayesian exploration mechanisms are integral. Using the DSPP as a deterministic mixture surrogate for Monte Carlo sampling offers reduced variance and tractable learning dynamics in reinforcement learning pipelines.
Overall, GPPO establishes the feasibility of uncertainty-aware actor-critic reinforcement learning at scale, with competitive or superior empirical results compared to conventional deep neural policy architectures (Lende et al., 22 Nov 2025).