Randomized Q-Learning: Scalable and Efficient
- Randomized Q-Learning is a model-free RL approach that randomizes learning rates and maximization steps to drive efficient exploration in complex environments.
- It unifies ensemble methods, stochastic subset selection, and Bayesian-inspired techniques to achieve provable regret bounds and tractable computations.
- Empirical evaluations show RandQL converges rapidly in high-dimensional action spaces while reducing computational burden compared to classical Q-Learning.
Randomized Q-Learning (RandQL), also referred to as RandomizedQ or Stochastic Q-learning, denotes a class of model-free reinforcement learning (RL) algorithms characterized by the use of randomization as a principal mechanism for both exploration and computational efficiency. Unlike classical Q-learning, which relies on deterministic or scheduled learning rates and maximization across all actions, RandQL leverages randomized learning rates and/or stochastic maximization procedures. This approach enables efficient posterior sampling-based exploration, provable regret minimization, and tractable scaling to environments with large or structured state-action spaces (Wang et al., 30 Jun 2025, Tiapkin et al., 2023, Fourati et al., 2024). RandQL unifies several algorithmic strands, including “Thompson-style” exploration, stochastic subset selection for maximization, and learning-rate randomization, with robust theoretical and empirical properties.
1. Algorithmic Foundations and Variants
RandQL algorithms target episodic MDPs with state space $\mathcal{S}$, action space $\mathcal{A}$, finite horizon $H$, and transition kernels $p_h(\cdot \mid s,a)$. Standard Q-learning proceeds via the update
$$Q_h(s,a) \leftarrow (1-\alpha_n)\, Q_h(s,a) + \alpha_n \Big( r_h(s,a) + \max_{a' \in \mathcal{A}} Q_{h+1}(s',a') \Big),$$
with a deterministic learning-rate schedule $\alpha_n$, without explicit optimism or systematic posterior sampling. RandQL introduces randomization at different algorithmic loci, yielding several families:
- Randomized Learning Rate Q-Learning: An ensemble of Q-functions $\{Q^j\}_{j=1}^{J}$ is maintained. At each visit to $(s,a)$ at step $h$, each head $j$ uses an independent random learning rate $w_j$. A typical update is
$$Q^j_h(s,a) \leftarrow (1 - w_j)\, Q^j_h(s,a) + w_j \big( r_h(s,a) + V_{h+1}(s') \big), \qquad w_j \sim \mathrm{Beta}(\cdot,\cdot),$$
with $w_j$ drawn according to Beta shape parameters that depend on the number of prior visits to $(s,a)$ and the episode horizon $H$, and $V_{h+1}$ the current (ensemble-derived) value estimate at the next state (Wang et al., 30 Jun 2025, Tiapkin et al., 2023); a minimal code sketch follows this list.
- Optimism via Aggregation: The policy $Q$-value is set as the maximum across heads,
$$\overline{Q}_h(s,a) = \max_{1 \le j \le J} Q^j_h(s,a),$$
or as an "optimistic mixture" of two such ensembles, wherein one ensemble is "fast-forgetting" and the other "slow-forgetting" to preserve optimism and anti-concentration for exploration purposes (Wang et al., 30 Jun 2025).
- Stochastic Maximization in Large Action Spaces: For large $\mathcal{A}$, RandQL may replace $\max_{a \in \mathcal{A}} Q(s,a)$ with maximization over a small random action subset $\mathcal{C} \subseteq \mathcal{A}$ of size $O(\log|\mathcal{A}|)$:
$$\operatorname{stochmax}_{a}\, Q(s,a) = \max_{a \in \mathcal{C} \cup \mathcal{M}} Q(s,a).$$
Here, $\mathcal{M}$ is a small memory buffer of top-performing actions (Fourati et al., 2024).
These principles can be combined: e.g., randomizing both learning rates and action maximization within the same update.
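The following minimal tabular sketch illustrates the randomized-learning-rate variant. It is not the pseudocode of (Wang et al., 30 Jun 2025) or (Tiapkin et al., 2023): the class name `RandQLAgent`, the Beta shape parameters, and the constants `n0` and `kappa` are illustrative assumptions, and the optimistic aggregation is reduced to a simple max over heads.

```python
import numpy as np

class RandQLAgent:
    """Tabular sketch: ensemble Q-learning with Beta-randomized learning rates."""

    def __init__(self, n_states, n_actions, horizon, n_heads=10,
                 n0=1.0, kappa=1.0, seed=0):
        self.S, self.A, self.H, self.J = n_states, n_actions, horizon, n_heads
        self.n0, self.kappa = n0, kappa              # pseudo-count / prior inflation
        self.rng = np.random.default_rng(seed)
        # One Q-table per head, optimistically initialized at H (max return);
        # index h = H is a row of zeros acting as the terminal value.
        self.Q = np.full((n_heads, horizon + 1, n_states, n_actions), float(horizon))
        self.Q[:, horizon] = 0.0
        self.visits = np.zeros((horizon, n_states, n_actions), dtype=int)

    def act(self, h, s):
        """Greedy action w.r.t. the max-over-heads value (ties broken at random)."""
        q = self.Q[:, h, s, :].max(axis=0)
        return int(self.rng.choice(np.flatnonzero(q == q.max())))

    def update(self, h, s, a, r, s_next):
        """One transition: every head draws its own Beta learning rate."""
        n = self.visits[h, s, a]
        self.visits[h, s, a] += 1
        # Mean step size ~ (H + kappa) / (H + kappa + n + n0), i.e. aggressive
        # early on and shrinking with the visit count (illustrative choice).
        alpha, beta = self.H + self.kappa, n + self.n0
        target = r + self.Q[:, h + 1, s_next, :].max()   # optimistic bootstrap value
        for j in range(self.J):
            w = self.rng.beta(alpha, beta)
            self.Q[j, h, s, a] = (1.0 - w) * self.Q[j, h, s, a] + w * target
```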
2. Theoretical Properties and Regret Guarantees
RandQL algorithms possess provable regret bounds under various MDP settings:
- Tabular Episodic MDPs: For a state space of size $S$, action space of size $A$, episode horizon $H$, and $K$ episodes (so $T = HK$ steps), the best-known bound is
$$\mathrm{Regret}(K) = \widetilde{O}\big( \sqrt{H^5 S A T} \big),$$
holding with high probability for suitable choices of the ensemble size and prior parameters (Wang et al., 30 Jun 2025, Tiapkin et al., 2023).
- Gap-Dependent Regret: Under a positive minimal sub-optimality gap, i.e.,
$$\Delta_{\min} = \min_{h,s,a:\, \Delta_h(s,a) > 0} \Delta_h(s,a) > 0, \qquad \Delta_h(s,a) = V^\star_h(s) - Q^\star_h(s,a),$$
the expected regret scales only logarithmically in $T$, on the order of $\mathrm{poly}(H)\, S A \log T / \Delta_{\min}$ (Wang et al., 30 Jun 2025).
- Metric/Continuous State-Action Spaces: Under Lipschitz and zooming-dimension assumptions, the regret scales as
$$\widetilde{O}\big( H^{5/2}\, K^{\frac{d_z+1}{d_z+2}} \big),$$
where $d_z$ is the zooming dimension (Tiapkin et al., 2023).
- Stochastic Maximization Convergence: In the discounted setting with discount factor $\gamma$ and large $\mathcal{A}$, RandQL with a random subset of size $O(\log|\mathcal{A}|)$ converges to the fixed point $Q^{\mathrm{rand}}$ of the "Rand-Bellman" operator,
$$(\mathcal{T}_{\mathrm{rand}} Q)(s,a) = r(s,a) + \gamma\, \mathbb{E}_{s'}\Big[ \mathbb{E}_{\mathcal{C}}\big[ \max_{a' \in \mathcal{C}} Q(s',a') \big] \Big],$$
and under standard Robbins–Monro step-size conditions and persistent exploration, $Q_t \to Q^{\mathrm{rand}}$ almost surely (Fourati et al., 2024); a short numerical illustration of the bias induced by the subset maximum follows this list.
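As a quick, self-contained illustration (not an experiment from the cited papers), the snippet below estimates how much the expected maximum over a random size-$k$ subset undershoots the full maximum over all actions; this gap is exactly the bias absorbed into the Rand-Bellman fixed point and revisited in Section 6.

```python
import numpy as np

rng = np.random.default_rng(0)
q = rng.normal(size=1000)                    # Q-values of one state with 1000 actions
k = int(np.ceil(np.log2(q.size)))            # subset size ~ log2(|A|) = 10
subset_maxes = [q[rng.choice(q.size, size=k, replace=False)].max()
                for _ in range(5000)]
print(f"full max: {q.max():.3f}   mean subset max: {np.mean(subset_maxes):.3f}")
# The gap above is the underestimation bias; a small memory buffer of
# previously seen top actions shrinks it substantially in practice.
```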
3. Algorithmic Implementation Details
The computational pipeline of modern RandQL is defined by the following crucial elements:
- Ensemble Q-function Architecture: Each state-action pair $(s,a,h)$ is associated with pairs of "fast-forgetting" and "slow-forgetting" Q-estimates. Their learning rates $w_j$ are independently drawn from Beta distributions whose shape parameters reflect the visit counts, a prior pseudo-count, and a prior-inflation constant (Wang et al., 30 Jun 2025).
- Optimistic Mixture and Policy Derivation: Policy values are computed as a convex or max-mixed combination of the ensemble heads, as detailed in the pseudocode of (Wang et al., 30 Jun 2025, Tiapkin et al., 2023).
- Stochastic Subset Maximization for Large Action Sets (see the sketch after this list):
- At each update, only $O(\log|\mathcal{A}|)$ actions are sampled uniformly from $\mathcal{A}$ for maximization, optionally augmented by a memory buffer of the most recently selected or highest-value actions (Fourati et al., 2024).
- This reduces per-update complexity from $O(|\mathcal{A}|)$ to $O(\log|\mathcal{A}|)$, with practical subset sizes of roughly $\lceil \log_2|\mathcal{A}| \rceil$.
- Parameter Selection: The ensemble size, prior pseudo-count, and inflation constant follow the theoretical prescriptions of the respective papers, with learning rates drawn from the corresponding Beta distributions; the subset size is selected to balance computational cost against underestimation bias.
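A minimal sketch of such a subset-maximization step is given below, assuming a tabular Q-row per state; the function name `stoch_max` and the two-slot memory buffer are illustrative choices, not the exact implementation of (Fourati et al., 2024).

```python
import numpy as np

def stoch_max(q_row, k, memory, rng):
    """Max over k random candidate actions plus a small memory of past top actions."""
    candidates = rng.choice(q_row.size, size=min(k, q_row.size), replace=False)
    candidates = np.unique(np.concatenate([candidates, np.asarray(memory, dtype=int)]))
    best = int(candidates[np.argmax(q_row[candidates])])
    return q_row[best], best

# Usage: ~log2(|A|) candidates instead of scanning all 4096 actions.
rng = np.random.default_rng(0)
q_row = rng.normal(size=4096)
memory = []                                   # hypothetical per-state buffer
k = int(np.ceil(np.log2(q_row.size)))
value, a_star = stoch_max(q_row, k, memory, rng)
memory = ([a_star] + memory)[:2]              # retain the most recent top actions
```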
4. Comparison to Related Methods
RandQL is distinguished from prior approaches along both algorithmic and theoretical lines:
| Method | Exploration | Update Complexity | Regret Bound |
|---|---|---|---|
| UCB-Q / OptQL | Bonus-based | $O(A)$ per step | $\widetilde{O}(\mathrm{poly}(H)\sqrt{SAT})$ |
| PSRL | Posterior sampling | Posterior update + planning per episode | $\widetilde{O}(\mathrm{poly}(H)\sqrt{SAT})$ (Bayesian) |
| RandQL (ensemble, Beta noise) | Rand. weights | $O(JA)$ per step | $\widetilde{O}(\sqrt{H^5 S A T})$ |
| RandQL (stoch. maximization) | Subset sampling | $O(\log\lvert\mathcal{A}\rvert)$ per step | Converges to Rand-Bellman fixed pt. |
RandQL offers the sample efficiency of PSRL and OptQL while avoiding the computational bottleneck of explicit posterior inference or bonus computation. Empirical comparisons on grid-world, chain, and synthetic high-dimensional MDPs illustrate that RandQL achieves regret lower than or comparable to bonus-based and model-based methods, at significantly reduced computational and wall-clock cost (Wang et al., 30 Jun 2025, Tiapkin et al., 2023, Fourati et al., 2024).
5. Empirical Evaluation and Practical Guidelines
RandQL has been systematically evaluated in both tabular and deep RL settings:
- Tabular Grid-world and Chain Benchmarks: RandQL demonstrates lower total regret than UCB-Q and naïve randomized-rate variants, and approaches the sample efficiency of model-based PSRL and RLSVI (Wang et al., 30 Jun 2025, Tiapkin et al., 2023).
- High-Dimensional Action Spaces: In synthetic MDPs with very large action sets, RandQL matches optimal performance in roughly $1/10$th of the time required by standard Q-learning (Fourati et al., 2024).
- Deep RL (e.g., InvertedPendulum-v4, HalfCheetah-v4): RandQL-based variants (RandDQN/RandDDQN) converge more rapidly than standard DQN/Double DQN in discretized large-action regimes, with per-step speedups reported in the 10–60 range. Under the standard metric of average return vs. wall-clock time, RandQL outperforms DQN and approaches model-based methods (see the sketch below).
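The deep-RL variants change only the bootstrapped target. A hedged PyTorch sketch of that single step is shown below, assuming a `target_net` that scores all discretized actions in one forward pass; the names and the with-replacement candidate sampling are illustrative, not a reference implementation.

```python
import torch

def rand_dqn_targets(target_net, next_states, rewards, dones,
                     num_actions, k, gamma=0.99):
    """DQN-style targets where the max over all actions is replaced by a max
    over k randomly sampled candidate actions per transition."""
    with torch.no_grad():
        q_next = target_net(next_states)                      # (batch, num_actions)
        idx = torch.randint(num_actions, (q_next.shape[0], k),
                            device=q_next.device)             # random candidates
        subset_max = q_next.gather(1, idx).max(dim=1).values  # (batch,)
        return rewards + gamma * (1.0 - dones) * subset_max
```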
Practical guidelines for application:
- Subset Size: A size of roughly $\lceil \log_2|\mathcal{A}| \rceil$ balances computation and approximation accuracy; a small memory buffer of top actions reduces underestimation bias.
- Ensemble Size: The number of heads is typically 10–20.
- Memory Usage: Use small per-state action buffers in the tabular setting; in deep RL, buffer a small set of recently selected high-value actions.
- Randomization Distribution: Uniform over actions suffices for most settings; structure-aware sampling is possible when action-space features are available.
- Exploration Schedule: Standard decaying $\varepsilon$-greedy policies ensure persistent, unbiased exploration provided $\varepsilon$ remains strictly positive (a compact illustrative configuration follows this list).
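The guidelines above can be collected into a starting configuration such as the one below; the values are reasonable defaults implied by the list, not prescriptions taken from the cited papers.

```python
# Illustrative starting configuration for a RandQL-style agent.
randql_defaults = {
    "n_heads": 10,                             # ensemble size, typically 10-20
    "subset_size": "ceil(log2(num_actions))",  # stochastic-maximization candidates
    "memory_per_state": 2,                     # tiny buffer of top-performing actions
    "pseudo_count": 1.0,                       # prior pseudo-count for the Beta learning rates
    "prior_inflation": 1.0,                    # optimism / anti-concentration constant
    "epsilon_final": 0.05,                     # keep exploration probability strictly positive
}
```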
6. Limitations and Open Research Directions
RandQL’s main limitations and avenues for future work include:
- Underestimation Bias in Stochastic Maximization: Subset-based maximization introduces a downward bias relative to the true maximum $\max_{a \in \mathcal{A}} Q(s,a)$. While mitigated via memory buffers, in highly peaked Q-landscapes this effect can slow convergence (Fourati et al., 2024).
- Finite-Sample Regret with Stochastic Maximization: Explicit sample-complexity and regret guarantees for the stochastic maximization variant are not fully characterized.
- Function Approximation: Almost sure convergence for the tabular case is established, but rigorous extension to nonlinear function approximation (deep networks) remains an open problem.
- Adaptive Subset Sizing: Dynamically adjusting the subset size based on value uncertainty may yield improved trade-offs between approximation quality and cost.
- Continuous and Structured Action Spaces: Adapting the random subset paradigm to continuous actions via stochastic gradient maximization or to combinatorial/embedded action sets is an active area of exploration.
7. Summary and Significance
Randomized Q-Learning provides a unified framework for model-free RL agents that achieve efficient exploration and strong sample efficiency via randomized learning rates and stochastic maximization. This encompasses theoretical guarantees—provably near-optimal regret in tabular and metric state-action settings—and demonstrably practical gains in environments with large and/or continuous action spaces. The approach matches or outperforms classical optimism-bonus and posterior-sampling methods along both theoretical and empirical dimensions, while keeping per-step time and space requirements modest: they scale only with the ensemble size and the number of actions (no model estimation or planning) in tabular MDPs, and only logarithmically with $|\mathcal{A}|$ for the maximization step in large action spaces (Wang et al., 30 Jun 2025, Tiapkin et al., 2023, Fourati et al., 2024). RandQL thus offers a robust algorithmic recipe for tractable and principled exploration in contemporary RL.