QF-tuner: Breaking Tradition in Reinforcement Learning
Abstract: In reinforcement learning, hyperparameter tuning refers to choosing the parameter values that maximize overall performance. Manual or random hyperparameter tuning can yield inconsistent results across runs of a reinforcement learning algorithm. In this paper, we propose a new method called QF-tuner for automatic hyperparameter tuning in the Q-learning algorithm using the FOX optimization algorithm (FOX). Furthermore, a new objective function has been employed within FOX that prioritizes reward over learning error and time. QF-tuner starts by running FOX, which at each iteration executes the Q-learning algorithm and attempts to minimize the fitness value derived from the resulting observations. The proposed method has been evaluated on two control tasks from the OpenAI Gym: CartPole and FrozenLake. The empirical results indicate that QF-tuner outperforms other optimization algorithms, such as particle swarm optimization (PSO), the bees algorithm (BA), genetic algorithms (GA), and random search. On the FrozenLake task, QF-tuner increased rewards by 36% and reduced learning time by 26%, while on the CartPole task, it increased rewards by 57% and reduced learning time by 20%. Thus, QF-tuner is an effective method for hyperparameter tuning in Q-learning, enabling better solutions to control task problems.
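The tuning loop described in the abstract — an outer optimizer proposes Q-learning hyperparameters, an inner run of Q-learning produces a fitness value, and the optimizer keeps the best configuration — can be sketched as below. This is a minimal illustration, not the paper's implementation: random search stands in for the FOX optimizer (whose update rules are not given in the abstract), a toy chain environment stands in for the OpenAI Gym tasks, and the fitness is simply the negated reward, reflecting the reward-first objective.

```python
import random

def run_q_learning(alpha, gamma, epsilon, episodes=200, n_states=5, seed=0):
    """Tabular Q-learning on a toy deterministic chain (move right to reach
    the goal state). Returns the mean reward over the last 50 episodes.
    The paper evaluates on Gym's CartPole and FrozenLake instead."""
    rng = random.Random(seed)
    q = [[0.0, 0.0] for _ in range(n_states)]  # actions: 0 = left, 1 = right
    rewards = []
    for _ in range(episodes):
        s, total = 0, 0.0
        for _ in range(20):  # step limit per episode
            # epsilon-greedy action selection
            a = rng.randrange(2) if rng.random() < epsilon \
                else max((0, 1), key=lambda i: q[s][i])
            s2 = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
            r = 1.0 if s2 == n_states - 1 else 0.0
            # standard Q-learning update
            q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
            s, total = s2, total + r
            if s == n_states - 1:
                break
        rewards.append(total)
    return sum(rewards[-50:]) / 50

def tune(n_trials=30, seed=1):
    """Hyperparameter search in the spirit of QF-tuner: each trial runs
    Q-learning and is scored by a fitness that prioritizes reward.
    Random search is a stand-in for the FOX optimizer."""
    rng = random.Random(seed)
    best, best_fitness = None, float("inf")
    for _ in range(n_trials):
        params = {
            "alpha": rng.uniform(0.01, 1.0),    # learning rate
            "gamma": rng.uniform(0.5, 0.999),   # discount factor
            "epsilon": rng.uniform(0.01, 0.3),  # exploration rate
        }
        fitness = -run_q_learning(**params)  # lower fitness = higher reward
        if fitness < best_fitness:
            best, best_fitness = params, fitness
    return best, -best_fitness

best_params, best_reward = tune()
print(best_params, best_reward)
```

In the paper's full objective, the fitness also penalizes learning error and time; here only the reward term is shown for brevity.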