On-Policy Optimization of ANFIS Policies Using Proximal Policy Optimization (2507.01039v1)
Abstract: We propose a reinforcement learning (RL) approach for training neuro-fuzzy controllers using Proximal Policy Optimization (PPO). Building on prior work that applied Deep Q-Learning to Adaptive Neuro-Fuzzy Inference Systems (ANFIS), our method replaces the off-policy value-based framework with a stable on-policy actor-critic loop. We evaluate this approach in the CartPole-v1 environment across multiple random seeds and compare its learning performance against ANFIS-Deep Q-Network (DQN) baselines. PPO-trained fuzzy agents achieved a mean return of 500 ± 0 on CartPole-v1 after 20,000 updates, exhibiting lower variance during training and faster convergence than prior DQN-based methods. These findings suggest that PPO offers a promising pathway for training explainable neuro-fuzzy controllers in reinforcement learning tasks.
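The abstract does not spell out the fuzzy architecture or hyperparameters, but the on-policy actor-critic loop it describes can be sketched. Below is a minimal, hypothetical PyTorch/Gymnasium illustration of PPO training a simple fuzzy policy on CartPole-v1. The `ANFISPolicy` class (Gaussian memberships, product t-norm, normalized firing strengths weighting per-rule logits), the MLP critic, and all hyperparameters (16 rules, 2048-step rollouts, learning rate 3e-4, Monte Carlo returns instead of GAE) are illustrative assumptions, not the paper's actual configuration.

```python
import gymnasium as gym
import torch
import torch.nn as nn
from torch.distributions import Categorical

class ANFISPolicy(nn.Module):
    """Hypothetical zeroth-order TSK fuzzy policy: Gaussian memberships per
    input dimension, product t-norm rule firing, and normalized firing
    strengths that weight per-rule action logits."""
    def __init__(self, obs_dim=4, n_rules=16, n_actions=2):
        super().__init__()
        self.centers = nn.Parameter(torch.randn(n_rules, obs_dim))
        self.log_sigmas = nn.Parameter(torch.zeros(n_rules, obs_dim))
        self.rule_logits = nn.Parameter(torch.zeros(n_rules, n_actions))

    def forward(self, obs):                         # obs: (B, obs_dim)
        diff = obs.unsqueeze(1) - self.centers      # (B, rules, obs_dim)
        log_mu = -0.5 * (diff / self.log_sigmas.exp()).pow(2)
        log_w = log_mu.sum(-1)                      # product t-norm in log space
        w = torch.softmax(log_w, dim=-1)            # normalized firing strengths
        return w @ self.rule_logits                 # (B, n_actions) logits

def ppo_update(policy, critic, opt, obs, acts, logp_old, ret, adv,
               clip=0.2, epochs=4):
    """One PPO update: clipped surrogate + value loss + entropy bonus."""
    for _ in range(epochs):
        dist = Categorical(logits=policy(obs))
        ratio = (dist.log_prob(acts) - logp_old).exp()
        surr = torch.min(ratio * adv, ratio.clamp(1 - clip, 1 + clip) * adv)
        loss = (-surr.mean()
                + 0.5 * (critic(obs).squeeze(-1) - ret).pow(2).mean()
                - 0.01 * dist.entropy().mean())
        opt.zero_grad(); loss.backward(); opt.step()

env = gym.make("CartPole-v1")
policy = ANFISPolicy()
critic = nn.Sequential(nn.Linear(4, 64), nn.Tanh(), nn.Linear(64, 1))
opt = torch.optim.Adam([*policy.parameters(), *critic.parameters()], lr=3e-4)

obs, _ = env.reset(seed=0)
for update in range(200):                # toy budget; the paper reports 20,000 updates
    obs_buf, act_buf, logp_buf, rew_buf, done_buf = [], [], [], [], []
    for _ in range(2048):                # rollout horizon (assumed)
        o = torch.as_tensor(obs, dtype=torch.float32).unsqueeze(0)
        with torch.no_grad():
            dist = Categorical(logits=policy(o))
            a = dist.sample()
            logp_buf.append(dist.log_prob(a))
        obs_buf.append(o); act_buf.append(a)
        obs, r, term, trunc, _ = env.step(a.item())
        rew_buf.append(r); done_buf.append(term or trunc)
        if term or trunc:
            obs, _ = env.reset()
    # Discounted Monte Carlo returns (GAE omitted for brevity).
    ret_list, G = [], 0.0
    for r, d in zip(reversed(rew_buf), reversed(done_buf)):
        G = r + 0.99 * G * (1.0 - float(d))
        ret_list.insert(0, G)
    obs_t = torch.cat(obs_buf); acts = torch.cat(act_buf)
    logp_old = torch.cat(logp_buf); ret = torch.tensor(ret_list)
    with torch.no_grad():
        adv = ret - critic(obs_t).squeeze(-1)
        adv = (adv - adv.mean()) / (adv.std() + 1e-8)
    ppo_update(policy, critic, opt, obs_t, acts, logp_old, ret, adv)
```

Because the normalized firing strengths of a product-t-norm Gaussian rule base reduce to a softmax over summed log-memberships, the policy above stays differentiable end to end, so the standard clipped PPO objective applies to the fuzzy parameters (centers, widths, rule consequents) exactly as it would to an MLP policy.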