
Actor-Free Q-Learning

Updated 22 October 2025
  • Actor-Free Q-Learning is defined by its direct structuring of the Q-function to enable analytic maximization over actions, eliminating the need for an explicit actor network.
  • Key implementations include normalized advantage functions, control-point methods, value decomposition, and convex optimization techniques that enhance stability, sample efficiency, and risk-sensitive performance.
  • Empirical evaluations indicate that these methods often outperform traditional actor–critic approaches, particularly in high-dimensional, safety-critical, and constrained action space domains.

Actor-free Q-learning refers to a class of reinforcement learning (RL) algorithms that enable pure value-based learning and decision-making, omitting the explicit actor network that characterizes actor–critic methods. These approaches address key challenges in domains with high-dimensional or continuous action spaces by directly structuring the Q-function such that maximization over actions is efficient and stable, and policy extraction is analytic or structurally embedded within the value representation. Actor-free Q-learning architectures span a diverse set of methodologies including normalized advantage functions, Bayesian uncertainty propagation, control-point interpolation, decoupled value decomposition, risk-sensitive convex optimization, and robust Bellman operators, each targeting different efficiency, stability, and expressivity criteria.

1. Theoretical Foundations: Structuring Q-functions for Continuous and Complex Action Spaces

Actor-free Q-learning algorithms fundamentally depend on representing the Q-function so that maximization over actions is well-defined, tractable, and stable. In discrete settings, maximization is trivial, but in continuous or high-dimensional spaces the $\max_a Q(s,a)$ operation is generally intractable for arbitrary neural network parameterizations.

Normalized Advantage Functions (NAF) decompose the Q-function into $Q(x, u) = V(x) + A(x, u)$, where the advantage $A(x, u)$ is a quadratic form $-\frac{1}{2}(u - \mu(x))^T P(x) (u - \mu(x))$, enabling analytic computation of the maximizing action $\mu(x)$ (Gu et al., 2016). This form guarantees that Q-learning can proceed without a distinct actor network and exploits experience replay and target networks for stable learning.
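
As a concrete illustration, below is a minimal sketch of a NAF-style Q head in PyTorch. The class and parameter names (`NAFHead`, `hidden`) are illustrative rather than drawn from the reference implementation; the essential point is that $P(x) = L(x)L(x)^T$ is positive semi-definite by construction, so the greedy action is simply $\mu(x)$.

```python
import torch
import torch.nn as nn

class NAFHead(nn.Module):
    """Illustrative NAF-style head: Q(x, u) = V(x) - 1/2 (u - mu(x))^T P(x) (u - mu(x))."""

    def __init__(self, state_dim: int, action_dim: int, hidden: int = 128):
        super().__init__()
        self.action_dim = action_dim
        self.trunk = nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU())
        self.v = nn.Linear(hidden, 1)                    # state value V(x)
        self.mu = nn.Linear(hidden, action_dim)          # greedy action mu(x)
        self.l = nn.Linear(hidden, action_dim * (action_dim + 1) // 2)  # entries of L(x)

    def forward(self, x: torch.Tensor, u: torch.Tensor):
        h = self.trunk(x)
        v, mu = self.v(h), self.mu(h)
        # Assemble lower-triangular L(x) with a positive diagonal, so P(x) = L L^T is PSD
        # and the advantage is a negative quadratic form maximized exactly at u = mu(x).
        tril = torch.tril_indices(self.action_dim, self.action_dim, device=x.device)
        entries = self.l(h)
        entries = torch.where(tril[0] == tril[1], entries.exp(), entries)
        L = torch.zeros(x.shape[0], self.action_dim, self.action_dim, device=x.device)
        L[:, tril[0], tril[1]] = entries
        P = L @ L.transpose(1, 2)
        d = (u - mu).unsqueeze(-1)                       # (batch, action_dim, 1)
        advantage = -0.5 * (d.transpose(1, 2) @ P @ d).reshape(-1, 1)
        return v + advantage, mu                         # Q(x, u) and argmax_u Q(x, u) = mu(x)
```

Because the maximizer is a network output, acting greedily requires only a forward pass, with no inner optimization loop over actions.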

Alternatively, control-point based methods structure the Q-function via a discrete set of action proposals and corresponding Q-values. For state $s$, $N$ control points $\{\bar{a}_i(s)\}$ with Q-values $\{\hat{Q}_i(s)\}$ are predicted, and $Q(s, a)$ is an inverse-distance weighted interpolation. The maximally valued control point defines $\arg\max_a Q(s, a)$, obviating the need for explicit maximization over the action space (Korkmaz et al., 21 Oct 2025).
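
A minimal sketch of the interpolation and selection steps follows; the inverse-distance kernel and the epsilon smoothing are illustrative assumptions, and the exact weighting used by Q3C may differ.

```python
import torch

def interpolate_q(action, control_points, control_q, eps=1e-6):
    """Inverse-distance weighted Q(s, a) from N control points (illustrative kernel).

    action:         (B, action_dim)      query actions
    control_points: (B, N, action_dim)   predicted proposals a_bar_i(s)
    control_q:      (B, N)               predicted values Q_hat_i(s)
    """
    dist = torch.norm(action.unsqueeze(1) - control_points, dim=-1)   # (B, N)
    weights = 1.0 / (dist + eps)
    weights = weights / weights.sum(dim=1, keepdim=True)
    return (weights * control_q).sum(dim=1)                           # Q(s, a)

def greedy_action(control_points, control_q):
    """argmax_a Q(s, a) is the control point with the largest predicted value."""
    best = control_q.argmax(dim=1)                                    # (B,)
    return control_points[torch.arange(control_points.shape[0]), best]
```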

Value decomposition approaches decouple action dimensions, representing $Q_\theta(s, a) = \frac{1}{M}\sum_{j=1}^M Q^j_\theta(s, a^j)$. The action selection step splits into $M$ independent maximizations, $\arg\max_a Q_\theta(s,a) = \big(\arg\max_{a^1} Q^1_\theta(s, a^1), \ldots, \arg\max_{a^M} Q^M_\theta(s, a^M)\big)$, converting a single-agent continuous control problem into a cooperative multi-agent structure (Seyde et al., 2022).
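
The sketch below illustrates the decoupled structure under the common assumption that each action dimension is discretized into a small number of bins; the network layout is illustrative.

```python
import torch
import torch.nn as nn

class DecoupledQ(nn.Module):
    """Illustrative decoupled critic: Q(s, a) = (1/M) * sum_j Q^j(s, a^j) over M discretized dims."""

    def __init__(self, state_dim: int, action_dims: int, bins: int, hidden: int = 128):
        super().__init__()
        self.bins = bins
        self.trunk = nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU())
        self.heads = nn.Linear(hidden, action_dims * bins)   # one row of bin values per dimension

    def forward(self, state: torch.Tensor):
        h = self.trunk(state)
        return self.heads(h).view(state.shape[0], -1, self.bins)   # (B, M, bins)

    def greedy_action(self, state: torch.Tensor):
        # M independent argmaxes instead of one search over the joint (bins^M) space.
        q = self.forward(state)                                     # (B, M, bins)
        return q.argmax(dim=-1)                                     # bin index per action dimension
```

Greedy selection costs $O(M \cdot \text{bins})$ rather than $O(\text{bins}^M)$, which is what makes the decoupling attractive in high-dimensional action spaces; the bin indices are then mapped back to actuator values (e.g., bang-bang extremes).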

Convex optimization with Input-Convex Neural Networks (ICNNs) enables actor-free policies by guaranteeing that the optimization landscape of $Q(s, a)$ is convex with respect to the action. The globally optimal action is computed directly as $a^* = \arg\min_a \{ -Q_r(s,a) + \kappa \max(0, \Gamma_c(s, a) - d) \}$, where $\Gamma_c$ is a CVaR risk metric and $d$ a risk threshold (Zhang et al., 2023).
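
The inner optimization can be sketched as follows, assuming `critic_r` and `critic_c` are callables whose structure makes the penalized objective convex in the action (e.g., ICNN-based); the penalty weight, step count, and learning rate are illustrative.

```python
import torch

def solve_action(critic_r, critic_c, state, action_dim, kappa=10.0, d=0.1,
                 steps=100, lr=0.05):
    """Gradient-based minimization of -Q_r(s, a) + kappa * max(0, Gamma_c(s, a) - d).

    If -Q_r and Gamma_c are convex in the action (as with ICNN structure),
    the penalized objective is convex, so a first-order method reaches the
    global optimum.
    """
    a = torch.zeros(state.shape[0], action_dim, requires_grad=True)
    opt = torch.optim.Adam([a], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = (-critic_r(state, a)
                + kappa * torch.clamp(critic_c(state, a) - d, min=0.0)).sum()
        loss.backward()
        opt.step()
    return a.detach()
```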

Robust Q-learning in average-reward MDPs employs a Bellman operator $Q(s, a) = r(s, a) - g + \sigma_{P_s^a}(V_Q)$ and establishes strict contraction in a semi-norm with constant functions quotiented out. This guarantees convergence even in non-discounted, robust settings, with directly actor-free updates (Xu et al., 8 Jun 2025).
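
A tabular sketch of one such update is given below. The uncertainty set is simplified to a finite list of candidate transition kernels, $\sigma_{P_s^a}(V_Q)$ is taken as the worst case over that list, and the gain $g$ is approximated by a reference entry of $Q$ (a standard relative-value device); all of these are illustrative simplifications of the operator above.

```python
import numpy as np

def robust_relative_q_update(Q, r, models, s, a, alpha=0.1):
    """One tabular update toward Q(s, a) = r(s, a) - g + sigma_{P_s^a}(V_Q).

    Q:      (S, A) table of action values
    r:      (S, A) reward table
    models: list of (S, A, S) candidate transition kernels (finite uncertainty set)
    """
    V = Q.max(axis=1)                          # V_Q(s') = max_a' Q(s', a')
    worst = min(P[s, a] @ V for P in models)   # pessimistic expected next value
    g = Q[0, 0]                                # reference offset standing in for the gain
    target = r[s, a] - g + worst
    Q[s, a] += alpha * (target - Q[s, a])
    return Q
```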

2. Algorithmic Implementations: Maximization Strategies and Critic Updates

Implementation strategies vary depending on how the Q-function is constructed:

  • Analytic Maximization: Methods such as NAF employ quadratic advantage structures so that the maximizing action is given directly by a neural network output, bypassing gradient ascent or search over actions (Gu et al., 2016).
  • Control-point Maximization: Algorithms like Q3C produce $N$ control points and select the one with maximal Q-value for both learning updates and policy execution (Korkmaz et al., 21 Oct 2025).
  • Decoupled Maximizations: Value decomposition splits the maximization into per-action-dimension independent tasks, allowing for parallelized, linear-time selection and sidestepping the exponential complexity of joint action spaces (Seyde et al., 2022).
  • Convex Optimization: For ICNN-structured critics, the optimization objective over actions is convex, so gradient-based optimizers (e.g., Adam) converge to the globally optimal action with theoretical guarantees (Zhang et al., 2023).
  • Regression-based Max-Tracking: AFU employs regression to fit $V_\varphi(s) \approx \max_a Q(s,a)$ and $A_\xi(s,a) \leq 0$, incorporating conditional gradient scaling to bias upper-bound estimates monotonically toward the true maximum, decoupling critic updates from actor behavior (Perrin-Gilbert, 24 Apr 2024); see the sketch after this list.
  • Soft-Max and Uncertainty Regularization: Bayesian approaches such as Assumed Density Filtering Q-learning update beliefs about Q-values as Gaussian random variables, propagating uncertainty through a soft-max backup and integrating all possible action values in the update (Jeong et al., 2017).
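
Below is a minimal sketch of the regression-based max-tracking idea, in the spirit of AFU rather than a faithful reproduction: `v_net` and `a_net` are assumed networks, the softplus is one way to enforce $A_\xi(s,a) \leq 0$, and the scalar `rho` stands in for the paper's conditional gradient-scaling scheme.

```python
import torch
import torch.nn.functional as F

def afu_style_critic_loss(v_net, a_net, s, a, q_target, rho=0.2):
    """Illustrative regression losses that make V_phi(s) track max_a Q(s, a).

    V_phi(s) + A_xi(s, a) regresses onto the TD target, while V_phi(s) is pushed
    toward the target with a down-weighted gradient whenever it already overshoots,
    biasing it toward an upper bound on the action values.
    """
    v = v_net(s).squeeze(-1)                      # V_phi(s)
    adv = -F.softplus(a_net(s, a)).squeeze(-1)    # enforce A_xi(s, a) <= 0
    q = v + adv
    overshoot = (v > q_target).float().detach()
    v_scale = overshoot * rho + (1.0 - overshoot) # shrink V's pull when V > target
    loss_q = F.mse_loss(q, q_target)
    loss_v = (v_scale * (v - q_target) ** 2).mean()
    return loss_q + loss_v
```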

3. Empirical Evaluation and Practical Considerations

Experimental results across actor-free Q-learning methods consistently demonstrate competitive or superior performance with respect to state-of-the-art actor–critic baselines under a variety of conditions:

  • NAF converges faster and produces smoother, more stable policies relative to DDPG in manipulation and locomotion tasks, with sample efficiency gains when supplemented by short-horizon imagination rollouts from iteratively refitted linear dynamical models (Gu et al., 2016).
  • Control-point maximization schemes (Q3C) outperform actor–critic methods in constrained action space environments (where value functions are non-smooth) due to the robustness of direct selection among learned proposals (Korkmaz et al., 21 Oct 2025).
  • Value decomposition algorithms match or exceed actor–critic algorithms in high-dimensional tasks (e.g., 21–38 joint humanoid/dog control) and validate the effectiveness of decoupling critics via multi-agent RL paradigms (Seyde et al., 2022).
  • Risk-sensitive actor-free policies using ICNNs demonstrate lower variance and more reliable convergence compared to risk-constrained actor–critic methods in safety-critical benchmarks (Zhang et al., 2023).
  • AFU achieves sample efficiency on par with SAC and TD3; its regression-based, actor-free critic updates avoid actor entrapment in deceptive local optima, notably in custom environments designed to trigger SAC failures (Perrin-Gilbert, 24 Apr 2024).
  • For robust RL under uncertainty, actor-free Q-learning achieves $\tilde{O}(\epsilon^{-2})$ sample complexity, outperforming actor–critic variants ($\tilde{O}(\epsilon^{-3})$) in the average-reward regime (Xu et al., 8 Jun 2025).

4. Comparison with Actor–Critic Architectures

Actor-free Q-learning fundamentally differs from actor–critic methods in several aspects:

| Feature | Actor-Free Q-Learning | Actor–Critic Methods |
|---|---|---|
| Policy extraction | Analytic/structural in the Q-function | Explicit actor network |
| Maximization over $a$ | Direct, tractable, or convex | Gradient ascent (may be local) |
| Stability | Decoupled updates, reduced sensitivity | Coupled updates, sensitive |
| Sample efficiency | Higher in some settings (e.g., robust RL) | Variable, depends on actor–environment interaction |
| Constrained action spaces | Outperforms in non-smooth/constrained spaces | May be trapped by local gradients |

The main trade-off is the potential representational limitation of analytic or control-point schemes in expressing complex, multimodal action-value landscapes; empirical findings nonetheless indicate that these methods are sufficiently expressive to recover optimal policies in standard and constrained environments.

5. Special Topics: Sample Efficiency, Robustness, and Risk-Sensitive Control

Actor-free Q-learning algorithms tackle sample complexity and robustness through precise maximization, structural representations, uncertainty propagation, and direct optimization:

  • Learned local linear models for dynamics expedite model-free RL via imagination rollouts, experimentally yielding 2–5x reductions in sample requirements on robotics benchmarks (Gu et al., 2016); a minimal sketch follows this list.
  • Soft-max regularization and uncertainty-aware learning rates in Bayesian actor-free Q-learning algorithms mitigate overoptimism and improve convergence in stochastic, large-action-space domains (Jeong et al., 2017).
  • Robust actor-free Q-learning under model misspecification delivers non-asymptotic and order-optimal sample complexity, enabled by contraction mappings in quotient spaces (Xu et al., 8 Jun 2025).
  • Risk-sensitive actor-free policies using CVaR criteria in convex critics directly optimize tail risks, outperforming actor–critic variants in achieving safe performance with lower variance (Zhang et al., 2023).
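
The imagination-rollout idea referenced in the first bullet can be sketched as a least-squares fit of a locally linear model followed by short synthetic rollouts; the single global fit and fixed horizon below are illustrative simplifications of the iteratively refitted, time-varying models used in the paper.

```python
import numpy as np

def fit_local_linear_model(states, actions, next_states):
    """Least-squares fit of s' ~ A s + B a + c from a batch of real transitions."""
    X = np.hstack([states, actions, np.ones((len(states), 1))])
    W, *_ = np.linalg.lstsq(X, next_states, rcond=None)   # rows of W stack A, B, c
    return W

def imagination_rollouts(W, start_states, policy, horizon=5):
    """Generate short synthetic rollouts under the fitted model to augment the replay buffer."""
    s = start_states
    synthetic = []
    for _ in range(horizon):
        a = policy(s)
        x = np.hstack([s, a, np.ones((len(s), 1))])
        s_next = x @ W
        synthetic.append((s, a, s_next))
        s = s_next
    return synthetic
```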

6. Extension to Offline, Undirected, and Hybrid Settings

Several actor-free Q-learning designs generalize to offline RL, undirected datasets, and even quantum-enhanced routines:

  • Latent Action Q-learning (LAQ) learns value functions from undirected, state-only experience using latent action mining, with theoretical guarantees of optimality when the mined latent actions match the transition support of the true actions. This decoupling enables value learning without explicit policy recovery, facilitating transfer across domains and embodiments (Chang et al., 2022).
  • Hybrid classical–quantum actor-free Q-learning encodes action-selection probability distributions in quantum registers, employing Grover’s iterations and quantum counting for quadratic speed-up when sampling over large discrete actions (Sannia et al., 2022).

7. Open Source Resources and Replicability

Several actor-free Q-learning methods release code for reproducibility and further extension; for example, Q3C, which implements control-point interpolation and structural maximization, is available at https://github.com/USC-Lira/Q3C (Korkmaz et al., 21 Oct 2025).


Actor-free Q-learning approaches provide principled, efficient, and stable alternatives to actor–critic methods in RL. They are best suited for domains where maximization over actions is challenging (continuous, high-dimensional, constrained action spaces), where sample efficiency is paramount (robotics, safety-critical tasks), or where offline or undirected data precludes explicit policy recovery. By embedding optimality criteria structurally into value functions, leveraging uncertainty, convex analyses, and modular maximization, these methods continue to expand the applicable boundaries and theoretical understanding of deep reinforcement learning.
