
Residual Reinforcement Learning

Updated 2 February 2026
  • Residual RL is a technique that learns corrective actions on top of an established controller, reducing the complexity of the learning problem.
  • It has been successfully applied in robotics, power systems, and vision-based tasks to improve safety, sample efficiency, and overall performance.
  • The approach employs additive residual policies with constrained magnitudes to maintain stability and handle uncertainties in high-dimensional, nonlinear systems.

Residual Reinforcement Learning (Residual RL) is a paradigm in reinforcement learning in which a policy is learned not from scratch but as an additive "residual" on top of an existing controller or policy. This approach leverages the strengths of prior controllers—whether they are hand-engineered, imitation-learned, or themselves learned via deep RL—by restricting the learning problem to discovering small corrective actions that compensate for errors, uncertainties, or suboptimality in the base. Residual RL architectures have been prominent in robotics, control, power systems, skill-based RL, and vision-based manipulation, consistently focusing on improving sample efficiency, safety, and robustness while constraining exploration to regimes where a strong prior can prevent catastrophic failures.

1. Core Formulation and Mathematical Framework

Given a base policy or controller $\pi_b(s)$, which may come from classic feedback design, demonstration data, or prior reinforcement learning, the Residual RL agent learns an additive corrective policy $\Delta\pi(s, a_b)$ such that the final action is

$$a = \pi_b(s) + \Delta\pi(s, a_b)$$

where $a_b = \pi_b(s)$ is the base action and $\Delta\pi$ is the learned residual policy, which may depend on the state $s$ and, optionally, on the base action $a_b$ itself.

The policy is optimized to maximize the standard RL objective—that is, expected discounted return—under the transition dynamics of the environment, but with the agent's action restricted to this superposition:

$$J(\theta) = \mathbb{E}_{\tau \sim \pi_b + \Delta\pi_\theta}\left[\sum_{t=0}^{T} \gamma^t r(s_t, a_t)\right]$$

This decomposition enables leveraging existing robust, safe, or informative controllers while constraining the policy search to the (typically) lower-dimensional space where only imperfections, modeling errors, or novel task aspects are corrected.

2. Theoretical Properties, Stability, and Constraints

Residual RL architectures can exploit the stability and robustness properties of the base controller. When the base controller $\pi_b$ yields a stable closed loop (for instance, in robot tracking with classical feedback), the composite residual policy retains stability as long as the residual's amplitude is suitably bounded. Lyapunov-based proofs establish that if the residual policy is constrained, either in absolute terms or, as in Constrained Residual RL (CRRL), relative to the base (e.g., $\|u_R(x)\| \leq \beta_r \|u_b(x)\|$), then the boundedness and region of attraction of the composite system can be maintained (Staessens et al., 2021). For high-dimensional nonlinear systems, residual methods built on incremental subsystem decomposition provide guarantees of uniform ultimate boundedness and weight convergence of subsystems under parallel learning (Li et al., 2021).

In safety-critical systems or with high-dimensional robots, restrictions on the residual ensure that the learning process cannot destabilize the system, enabling safe deployment even during online adaptation (Ceola et al., 2024, Staessens et al., 2021).
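The CRRL-style relative-magnitude constraint can be enforced as a simple projection of the residual before superposition. A minimal numpy sketch, with the function name and the choice of $\beta_r = 0.3$ purely illustrative:

```python
import numpy as np

def constrain_residual(u_base, u_residual, beta_r=0.3):
    """Scale the residual so its norm never exceeds beta_r times the
    base action's norm, then return the composite action."""
    base_norm = np.linalg.norm(u_base)
    res_norm = np.linalg.norm(u_residual)
    limit = beta_r * base_norm
    if res_norm > limit and res_norm > 0.0:
        # Project the residual back onto the allowed ball.
        u_residual = u_residual * (limit / res_norm)
    return u_base + u_residual
```

Because the bound is relative, the residual automatically shrinks where the base controller itself acts gently, which is what preserves the base policy's region of attraction.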

3. Algorithmic Methodologies and Architectures

Residual RL is compatible with a broad range of RL algorithms and architectures.

Representative pseudocode for a residual RL policy:

def composite_policy(state):
    # Query the fixed base controller first.
    base_action = base_policy(state)
    # The learned residual conditions on the state and the base action.
    residual_action = residual_policy(state, base_action)
    # Execute the superposition of base and corrective actions.
    return base_action + residual_action
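A runnable sketch of how such a composite policy plugs into an interaction loop; the proportional base controller, linear residual parameterization, and toy dynamics below are illustrative placeholders, not any paper's implementation:

```python
import numpy as np

def base_policy(state):
    # Illustrative hand-designed proportional controller toward the origin.
    return -0.5 * state

def residual_policy(state, base_action, theta):
    # Illustrative linear residual with learnable parameters theta,
    # squashed by tanh so corrections stay small.
    return 0.1 * np.tanh(theta @ np.concatenate([state, base_action]))

def rollout(theta, state=np.array([1.0, -1.0]), steps=5):
    trajectory = []
    for _ in range(steps):
        a_b = base_policy(state)
        a = a_b + residual_policy(state, a_b, theta)
        state = state + a  # toy linear dynamics
        trajectory.append(state.copy())
    return trajectory
```

With `theta` initialized to zeros the residual vanishes and the rollout reproduces the base controller exactly, which is the standard safe starting point for residual training.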

4. Applications Across Domains

Robotics and Control:

Residual RL has been critical in robotic manipulation, locomotion, and assembly, including block insertion (Johannink et al., 2018), dexterous grasping (Ceola et al., 2024), bimanual handover (Ankile et al., 23 Sep 2025), and high-precision assembly (Ankile et al., 2024). In such settings, the base policy is typically derived from hand-tuned control, demonstration (behavioral cloning), or pretrained deep RL, with the residual policy focusing on compensating for model errors, unmodeled dynamics, or transfer mismatches.

Power and Grid Control:

Residual RL enables efficient voltage and Volt-Var control by learning corrections on top of model-based (approximate or heuristic) controllers. Architectures such as RDRL and boosting-RDRL leverage a model-based optimizer as base policy and learn residuals in a reduced action space, improving convergence and reducing critic approximation errors (Liu et al., 2024, Bouchkati et al., 24 Jun 2025).

Vision and Structured Skill Spaces:

Residual RL generalizes to image-based and skill-based settings, where residual policies correct errors in plans or skill decodings, as in image denoising (Zhang et al., 2021), VAE-encoded skill libraries (Rana et al., 2022), or B-spline trajectory refinement (Huang et al., 2 Aug 2025).

Cross-Embodiment and Sim-to-Real Transfer:

Residual RL is instrumental in rapidly adapting generalist policies to new robot morphologies or sim-to-real shifts, by fine-tuning only the residual layer while holding the large base model fixed—achieving order-of-magnitude sample efficiency gains over training from scratch (Liu et al., 22 Feb 2025, Ankile et al., 23 Sep 2025, Zhang et al., 2024).
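A common recipe in this setting is to hold the base network's weights fixed and zero-initialize the residual head, so the composite policy exactly reproduces the generalist before adaptation begins. A hypothetical numpy sketch (all parameter shapes and names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
W_base = rng.standard_normal((2, 4))  # frozen pretrained weights
W_res = np.zeros((2, 6))              # residual head, zero-initialized;
                                      # the only parameters adapted online

def base_policy(state):
    return np.tanh(W_base @ state)

def composite_policy(state):
    a_b = base_policy(state)
    features = np.concatenate([state, a_b])
    # tanh(0) = 0, so the zero-initialized residual contributes nothing
    # until training updates W_res.
    return a_b + 0.05 * np.tanh(W_res @ features)
```

Only `W_res` is updated during adaptation; the frozen `W_base` preserves the generalist prior while the small residual head absorbs the embodiment or sim-to-real mismatch.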

5. Empirical Evidence and Comparative Performance

Across diverse domains, residual RL consistently yields substantial gains in sample efficiency, rapid convergence, and improved robustness—often requiring only a fraction of the data (5× or greater reductions) needed by RL from scratch or classical fine-tuning (Ceola et al., 2024, Ankile et al., 23 Sep 2025, Zhang et al., 2024, Liu et al., 22 Feb 2025). For example:

  • In dexterous grasping with the iCub, learning a residual on top of a frozen SAC-based base policy reduces wall-clock time for convergence by up to 5×, and matches or outperforms demonstration policies on unseen objects (Ceola et al., 2024).
  • In cross-embodiment navigation, residual RL increases success rates from 5–17% (imitation baseline) to 84–95% in just a few hundred episodes (Liu et al., 22 Feb 2025).
  • In power control, residual RL architectures rapidly match and then surpass model-based droop or optimization controllers; limiting the residual action space significantly enhances stability and convergence (Liu et al., 2024, Bouchkati et al., 24 Jun 2025).
  • In real-world multi-DoF robot hands, residual RL demonstrates the first successful sparse-reward policy improvement for bimanual manipulation and vision-based tasks (Ankile et al., 23 Sep 2025).

Empirical ablations frequently demonstrate the importance of initializing critic weights from base policies for stability (RESPRECT (Ceola et al., 2024)), the value of variance or uncertainty-based gating for targeted residual activation (Carvalho et al., 2022, Dodeja et al., 21 Jun 2025), and the data-efficiency of combining demonstration data with residual RL (Alakuijala et al., 2021).

6. Limitations and Extensions

Locality of Improvement:

Residual RL is fundamentally limited to improving performance in the local neighborhood of the base policy's coverage; if the base is highly suboptimal or omits required behaviors, the residual alone may be insufficient (Ankile et al., 23 Sep 2025, Ankile et al., 2024).

Residual Action Range Tuning:

Choosing the residual action magnitude is critical: too large a residual undermines stability guarantees, while too small a range limits the achievable improvement. Adaptive scaling and staged "boosting" strategies, in which a sequence of residual policies with progressively smaller action ranges is learned, have been proposed (Liu et al., 2024, Staessens et al., 2021).
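The staged "boosting" idea amounts to a schedule of shrinking residual action ranges, each stage training a fresh residual on top of the previous composite. A minimal sketch, with the scale values purely illustrative:

```python
def boosting_schedule(initial_scale=0.5, decay=0.5, stages=4):
    """Yield progressively smaller residual-action ranges; each stage
    trains a new residual policy on top of the composite so far."""
    scale = initial_scale
    for _ in range(stages):
        yield scale
        scale *= decay
```

Early stages with large ranges make coarse corrections; later stages refine the policy within ever-tighter bounds, so stability constraints become easier to satisfy as training progresses.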

Interaction with Feedback Control:

In contact-rich manipulation tasks, naive additive residuals may conflict with internal feedback loops. Variants that directly modify controller feedback (residual feedback learning), or use hybrid policies—adding residuals both in action and feedback signals—can overcome such issues (Ranjbar et al., 2021).
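A hybrid variant along these lines might apply the residual both to the controller's feedback gains and additively to the action. A hypothetical sketch, in which the PD controller, nominal gains, and residual layout are all illustrative:

```python
import numpy as np

def pd_controller(state, target, kp, kd):
    # state = [position (2), velocity (2)]; simple PD tracking law.
    pos_err = target - state[:2]
    vel = state[2:]
    return kp * pos_err - kd * vel

def hybrid_residual_action(state, target, residual):
    """Residual acts on the feedback gains (first two entries) and
    additively on the action (remaining entries)."""
    dkp, dkd, da = residual[0], residual[1], residual[2:]
    u = pd_controller(state, target, kp=1.0 + dkp, kd=0.2 + dkd)
    return u + da
```

Adjusting the gains lets the learner reshape the controller's response to contact forces directly, rather than fighting the feedback loop with a purely additive correction.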

Partial Observability and Generalization:

Residual RL frameworks have been extended to handle POMDPs using Transformer-based actors and critics sharing structure across agents or hardware (Bouchkati et al., 24 Jun 2025, Zhang et al., 2024), and to handle contextual variation via learned context encoders (Nakhaei et al., 2024).

Model-Based and Planning Extensions:

For model-based RL, residual-bootstrapped critics, bidirectional target network architectures, and direct model-residual updates outperform pure model-based planning and improve robustness to distribution shift (Zhang et al., 2019, Sheng et al., 2024).

7. Research Directions

Ongoing and future work continues to extend Residual RL along the axes surveyed above, including adaptive residual scaling, uncertainty-gated residual activation, and rapid adaptation of large pretrained base policies.

In summary, Residual Reinforcement Learning frameworks provide a principled, mathematically well-grounded, and empirically validated approach for leveraging imitation, expert, or physically informed base controllers in conjunction with modern RL for robust, efficient, and safe control across robotics, power systems, and beyond. The consistent motif is the restriction of the learning problem to local, corrective adaptation, yielding sample efficiency, safety, and strong theoretical guarantees unattainable with monolithic RL from scratch.
