
AdaptiveQuadBench: Reproducible Quadrotor Benchmark

Updated 4 February 2026
  • AdaptiveQuadBench is an open-source, modular simulation framework for reproducible benchmarking of adaptive quadcopter controllers exposed to varied disturbances and uncertainties.
  • It employs a physically grounded rigid-body model with modular disturbance injections (e.g., stochastic wind, payload shifts) to enable systematic evaluation of controller performance.
  • The framework supports adaptive methods such as L₁-augmented, INDI, and learning-based controllers, offering actionable insights into robustness, delay margins, and tracking precision.

AdaptiveQuadBench is an open-source, modular simulation framework specifically designed for robust and reproducible benchmarking of adaptive control algorithms for quadcopters subject to diverse external disturbances and model uncertainties. Developed atop RotorPy, AdaptiveQuadBench standardizes evaluation protocols across controller classes and disturbance scenarios, facilitating transparent comparison, code reuse, and systematic advancement in adaptive and robust flight control research (Zhang et al., 3 Oct 2025).

1. Mathematical Foundations and System Modeling

AdaptiveQuadBench employs a physically grounded rigid-body quadrotor model in continuous-time state-space form. The state $x = (p, v, R, \omega)$ includes position $p \in \mathbb{R}^3$, velocity $v \in \mathbb{R}^3$, orientation $R \in SO(3)$, and angular velocity $\omega \in \mathbb{R}^3$. The equations of motion are:

$$\dot{p} = v, \qquad m\dot{v} = T R e_3 + f_{\mathrm{aero}}(v, R, w) + f_{\mathrm{ext}}(t) + m g e_3,$$

$$\dot{R} = R[\omega]_\times, \qquad J\dot{\omega} = -\omega \times J\omega + \tau_{\mathrm{ctrl}}(u) + \tau_{\mathrm{aero}}(v, R, w) + \tau_{\mathrm{ext}}(t),$$

where $T$ is the total thrust, $f_{\mathrm{aero}}$ and $\tau_{\mathrm{aero}}$ represent aerodynamic forces and torques (modeling planar drag and rotor effects), and $f_{\mathrm{ext}}(t)$, $\tau_{\mathrm{ext}}(t)$ denote external time-varying disturbances.
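
The attitude kinematics $\dot{R} = R[\omega]_\times$ are best integrated on the manifold rather than by naive Euler steps on matrix entries. A minimal sketch (not from the paper) of one exponential-map step via the Rodrigues formula:

```python
import numpy as np

def hat(w):
    """Skew-symmetric matrix [w]_x such that hat(w) @ v == np.cross(w, v)."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def so3_step(R, omega, dt):
    """Advance dR/dt = R [omega]_x by one step using the exponential map
    (Rodrigues formula), which keeps R exactly on SO(3)."""
    phi = dt * np.asarray(omega, dtype=float)
    theta = np.linalg.norm(phi)
    if theta < 1e-12:
        return R
    K = hat(phi / theta)
    expm = np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)
    return R @ expm
```

A 90° rotation about the body z-axis, for example, maps the body x-axis onto the y-axis while preserving orthogonality, which an additive Euler update would not.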

The rotor dynamics incorporate static and dynamic models:

  • Each rotor produces thrust $u_i = k_t \omega_i^2$ and torque $k_q \omega_i^2$.
  • The mapping from rotor speeds to total thrust and torques is encapsulated in a matrix $G$.
  • First-order motor lag is modeled as $\dot{\omega}_i = \frac{1}{\tau_m}(\omega_{i,\mathrm{des}} - \omega_i)$.
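
The rotor model above fits in a few lines. All constants below (`K_T`, `K_Q`, `TAU_M`, the arm length, and the X-configuration sign pattern in `G`) are illustrative placeholders, not values from the paper:

```python
import numpy as np

# Hypothetical rotor constants (placeholders, not the paper's values)
K_T, K_Q = 2.3e-6, 7.8e-8   # thrust and drag-torque coefficients
TAU_M = 0.05                 # motor time constant [s]
L = 0.1                      # arm length [m]

# Allocation matrix G mapping squared rotor speeds to [T, tau_x, tau_y, tau_z],
# here for one common X-configuration convention; the framework's G may differ.
G = np.array([
    [K_T,     K_T,     K_T,     K_T],
    [L * K_T, -L * K_T, -L * K_T, L * K_T],
    [-L * K_T, -L * K_T, L * K_T, L * K_T],
    [K_Q,     -K_Q,     K_Q,     -K_Q],
])

def motor_step(omega, omega_des, dt):
    """One explicit-Euler step of the first-order motor lag
    d(omega_i)/dt = (omega_des_i - omega_i) / tau_m."""
    return omega + (dt / TAU_M) * (omega_des - omega)

def wrench(omega):
    """Total thrust and body torques from the current rotor speeds."""
    return G @ omega**2
```

With all four rotors at equal speed, the torque rows cancel by construction and only net thrust remains, which is a quick sanity check on the sign pattern.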

Disturbance models are modularly injected, including:

  • Stochastic wind via the Dryden model,
  • Off-center mass payloads (with temporal variation),
  • Unknown constant external forces/torques,
  • Rotor faults implemented as random scaling of thrust effectiveness,
  • Control latency as a pre-actuator input delay.
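
A pluggable disturbance might look like the following sketch; the class names and `force_torque` interface are hypothetical stand-ins for the framework's actual disturbance API, and the sinusoid is a crude placeholder for the Dryden wind model:

```python
import numpy as np

class Disturbance:
    """Hypothetical base interface: return (force, torque) at time t."""
    def force_torque(self, t, state):
        return np.zeros(3), np.zeros(3)

class ConstantForce(Disturbance):
    """Unknown constant external force (one of the catalog items above)."""
    def __init__(self, f):
        self.f = np.asarray(f, dtype=float)
    def force_torque(self, t, state):
        return self.f, np.zeros(3)

class SinusoidalGust(Disturbance):
    """Crude stand-in for a stochastic gust: sinusoidal x-axis wind force."""
    def __init__(self, amplitude, freq_hz):
        self.a, self.w = amplitude, 2.0 * np.pi * freq_hz
    def force_torque(self, t, state):
        return np.array([self.a * np.sin(self.w * t), 0.0, 0.0]), np.zeros(3)
```

Keeping every disturbance behind one interface is what lets the evaluation manager sweep them interchangeably.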

2. Controller Library: Adaptive and Non-Adaptive Architectures

AdaptiveQuadBench features an extensible suite of both baseline and advanced adaptive controllers, each implemented under a standardized interface and controllable at different system levels (moment/thrust or direct motor commands):

  • Non-adaptive baselines, such as the geometric tracking controller geo, which serves as the reference point in the empirical comparisons.
  • Adaptive controllers:
    • geo-a: Geometric controller augmented with online disturbance estimation; adaptation follows a Lyapunov-stable rule, $\dot{\hat{\Delta}} = -\Gamma Y^T e$, which keeps the tracking error within a provable bound.
    • l1geo: L₁-augmented geometric control; uses a fast adaptation loop with low-pass filtered estimates $\hat{\sigma}_{L1}$ to compensate for uncertainties, providing quantifiable robustness margins.
    • l1mpc: L₁-augmented MPC; combines model predictive planning with rapid uncertainty rejection.
    • indi-a: Adaptive Incremental Nonlinear Dynamic Inversion; locally linearizes the system and adaptively learns the effectiveness of each rotor.
    • xadap: Simulation-trained learning-based residual controller; outer loop uses geometric control, inner loop employs a neural policy for dynamics mismatch compensation.

Each controller addresses different forms of model mismatch and disturbance. L₁ methods guarantee transient and steady-state performance under fast-varying uncertainties, INDI relies on high-rate feedback, and xadap offers data-driven generalization for large uncertainties but requires extensive offline training.
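
For concreteness, the geo-a adaptation law $\dot{\hat{\Delta}} = -\Gamma Y^T e$ can be discretized with one explicit-Euler step; the function below is a sketch under that assumption, not the framework's implementation:

```python
import numpy as np

def adapt_step(delta_hat, Y, e, Gamma, dt):
    """One Euler step of the adaptation law d(delta_hat)/dt = -Gamma @ Y.T @ e.
    Shapes: Y is (n_err, n_par), e is (n_err,), Gamma is (n_par, n_par)
    positive definite; Gamma sets the adaptation rate per parameter."""
    return delta_hat - dt * (Gamma @ Y.T @ e)
```

The estimate $\hat{\Delta}$ is then fed back into the control law to cancel the estimated disturbance, which is what shrinks the tracking error toward its bound.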

3. Simulation Architecture and Workflow

A modular, object-oriented design facilitates extensibility and code reuse:

  • Dynamics module: extensible via RotorPy, encompassing core quadrotor dynamics, drag, motor lag, and disturbance hooks.
  • Disturbance module: abstracted for pluggable wind, payload, rotor faults, and latency models.
  • Controller interface: all controllers derive from a shared base class with standardized input/output signatures.
  • Trajectory generator: supports canonical tasks (hover, waypoint, circle) and user-defined primitives.
  • Evaluation manager: configurable batch execution, stress-testing (“when2fail”), and metric computation.
  • Visualization tools: generate interpretable error plots, time series, robustness diagrams.

The experiment lifecycle is fully reproducible from YAML/Python configuration files specifying controllers, disturbances, reference trajectories, evaluation protocols, and metric selection.
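
A configuration might resemble the following Python dict (the framework ingests YAML; these field names are illustrative, not the actual schema):

```python
# Hypothetical experiment configuration mirroring the ingredients listed above:
# controller, disturbance, reference trajectory, evaluation protocol, metrics.
config = {
    "controller": {"name": "l1geo", "params": {"cutoff_hz": 5.0}},
    "disturbance": {"type": "wind", "intensity": 3.0},
    "trajectory": {"type": "circle", "radius": 1.0, "period": 6.0},
    "evaluation": {"n_trials": 20, "metrics": ["rmse_p", "heading_err"]},
    "seed": 42,  # fixed seed is what makes a batch reproducible
}
```

Pinning the random seed alongside every controller and disturbance parameter in one file is the mechanism that makes a published experiment rerunnable bit-for-bit.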

4. Benchmarking Scenarios and Performance Metrics

AdaptiveQuadBench supports a diverse scenario catalog:

  • Standard trajectories: hover, waypoint, circle, randomized motion primitives, or custom patterns.
  • Disturbance sweeps: systematic variation of wind magnitude, payload offset/mass, rotor effectiveness, and input delays.
  • Automated stress-testing: “when2fail” protocol incrementally increases disturbance until a specified success threshold is violated.
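
The incremental "when2fail" sweep can be sketched as a simple loop; the `run_trial` closure and the threshold semantics are assumptions, since only the high-level protocol is described:

```python
def when2fail(run_trial, levels, success_threshold=0.8, n_trials=20):
    """Sweep disturbance levels in increasing order; return the first level at
    which the batch success rate drops below `success_threshold`, or None if
    the controller survives every level. `run_trial(level) -> bool` runs one
    randomized episode at that disturbance level and reports success."""
    for level in levels:
        successes = sum(bool(run_trial(level)) for _ in range(n_trials))
        if successes / n_trials < success_threshold:
            return level
    return None
```

The returned level is the controller's empirical robustness threshold for that disturbance axis, which is exactly the quantity compared across controllers in Section 6.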

Canonical metrics include:

  • Position RMSE: $\mathrm{RMSE}_p = \sqrt{\frac{1}{N}\sum_{k=1}^{N} \|p_{\mathrm{des}}(t_k) - p(t_k)\|_2^2}$,
  • Heading error: $\varepsilon_\psi = \frac{1}{N}\sum_{k} |\psi_{\mathrm{des}}(t_k) - \psi(t_k)|$,
  • Overshoot and input delay margin,
  • Success rate across batches and a robustness index (e.g., the slope of RMSE with respect to disturbance magnitude).
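
The first two metrics translate directly into NumPy; the yaw-wrapping step is a practical addition not spelled out in the formula above:

```python
import numpy as np

def position_rmse(p_des, p):
    """RMSE_p = sqrt(mean_k ||p_des(t_k) - p(t_k)||^2); inputs shaped (N, 3)."""
    return np.sqrt(np.mean(np.sum((p_des - p) ** 2, axis=1)))

def heading_error(psi_des, psi):
    """Mean absolute yaw error. The difference is wrapped to (-pi, pi] so a
    crossing of the +/-pi boundary is not counted as a ~2*pi error."""
    d = np.angle(np.exp(1j * (psi_des - psi)))  # shortest angular difference
    return np.mean(np.abs(d))
```
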

This comprehensive suite enables profiling of both nominal performance and resilience under adverse conditions.

5. Extensibility and Experimentation

Researchers can rapidly implement and test new controllers or disturbance models. Process for extending AdaptiveQuadBench:

  • Implement custom controllers via class inheritance, registering them for CLI/configuration discovery.
  • Add new disturbance classes and register in the disturbance module.
  • Specify experiments, controller parameters, and metrics in YAML; execute via provided Python scripts.
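
The inheritance-plus-registration pattern might look like this sketch; `BaseController`, `CONTROLLER_REGISTRY`, and the `register` decorator are hypothetical stand-ins for the framework's actual discovery mechanism:

```python
import numpy as np

class BaseController:
    """Stand-in for the framework's shared controller base class."""
    def compute_control(self, t, state, ref):
        raise NotImplementedError

CONTROLLER_REGISTRY = {}

def register(name):
    """Decorator mimicking name-based CLI/configuration discovery."""
    def wrap(cls):
        CONTROLLER_REGISTRY[name] = cls
        return cls
    return wrap

@register("pd_hover")
class PDHover(BaseController):
    """Toy PD position controller, only to illustrate the extension pattern."""
    def __init__(self, kp=4.0, kd=2.5, mass=0.8, g=9.81):
        self.kp, self.kd, self.m, self.g = kp, kd, mass, g
    def compute_control(self, t, state, ref):
        # state/ref are dicts with 'p' (position) and 'v' (velocity) here.
        e_p = ref["p"] - state["p"]
        e_v = ref["v"] - state["v"]
        # Desired force: PD feedback plus gravity compensation.
        return self.m * (self.kp * e_p + self.kd * e_v) + np.array([0.0, 0.0, self.m * self.g])
```

Once registered under a name, a controller becomes selectable from a YAML configuration with no further wiring.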

Installation follows standard Python conventions with requirements on RotorPy, numpy, scipy, and optional PyTorch for learning-based methods. Batch experiments, logging, and visualizations are fully automated.

6. Empirical Results and Comparative Insights

Experimental evaluation demonstrates:

  • All controllers achieve ≈100% nominal success with position RMSE below 0.08 m.
  • L₁-augmented architectures (l1geo, l1mpc) maintain robust tracking with minimal error increase under gusts, payload shifts, model uncertainty, and rotor faults.
  • The non-adaptive geo baseline deviates by more than 2 m under large payload shifts, while adaptive controllers keep errors below one meter.
  • l1mpc and indi-a compensate for actuator faults within 0.1 s, preserving tracking precision.
  • Learning-based xadap generalizes up to ±30% model scaling, whereas geo-a fails beyond ±15%.
  • L₁-based methods increase the input delay margin by ~30% over non-adaptive baselines.

The automated “when2fail” protocol provides rapid identification of each controller's robustness threshold, streamlining comparative studies.

7. Significance and Future Directions

AdaptiveQuadBench delivers a unified platform for reproducible benchmarking of adaptive quadcopter controllers under rigorous, hardware-relevant, and diverse stress scenarios. Its extensible design, standard metrics, and automation features address prior challenges of fragmented evaluation, enabling consistent, credible comparison across algorithmic advances (Zhang et al., 3 Oct 2025). A plausible implication is that widespread adoption of such benchmarking suites will accelerate progress in flight control by shifting focus from isolated, hand-tuned deployments to reproducible, stress-tested algorithm development.

Potential future work includes further integration with hardware-in-the-loop setups, expanded libraries of learning-based control policies, and benchmarking against larger-scale and more heterogeneous hardware platforms.
