
Trainable Complementary Filter

Updated 22 January 2026
  • Trainable complementary filters are sensor fusion methods where parameters are learned from data, enabling adaptive frequency-domain weighting.
  • They integrate classical low- and high-pass filter designs with modern learning techniques to optimize estimation error and reduce noise.
  • Applications include adaptive attitude estimation, time-series prediction, and seismic isolation, showing significant improvements over traditional filters.

A trainable complementary filter is a system identification or sensor fusion methodology in which the filter parameters—rather than being fixed or manually tuned—are learned or synthesized from data, enabling adaptive fusion of multi-source signals to optimize a target metric such as estimation error or residual noise. The concept leverages the classical complementary filter structure, which decomposes the fusion into frequency bands, but augments it with modern learning approaches or optimal control synthesis for automated adaptation.

1. Classical Complementary Filter Structure

The complementary filter is a technique for fusing two or more signals or predictions, typically in sensor fusion or model combination. For two sources, denote the signals by $y_1(t)$ and $y_2(t)$. The filter applies a low-pass transfer function $C_1(s)$ to one source and a high-pass transfer function $C_2(s)$ to the other, such that $C_1(s) + C_2(s) = 1$ for all $s$ (the Laplace variable). The fused output is

$$\hat Y(s) = C_1(s)\,y_1(s) + C_2(s)\,y_2(s).$$

This structure ensures complete coverage across the frequency domain and enables frequency-selective signal combination, drawing the contribution from each source in the band where it is most reliable. In practice, $C_1$ may be chosen as a rational low-pass filter and $C_2$ as its complement (Tsang et al., 2021).
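
As an illustrative sketch (not taken from the cited work), a discrete-time first-order realization of such a pair can be written so that the low- and high-pass paths sum exactly to unity; the function name `complementary_fuse` and the signal names are hypothetical:

```python
import numpy as np

def complementary_fuse(y1, y2, beta):
    """Fuse two signals: low-pass y1, high-pass y2, with C1 + C2 = 1.

    First-order discrete realization: LP(z) = (1-beta)/(1 - beta z^-1)
    and HP(z) = beta (1 - z^-1)/(1 - beta z^-1) sum to 1 exactly.
    beta in (0, 1) sets the crossover; values near 1 give a low cutoff.
    """
    fused = np.empty_like(y1, dtype=float)
    lp = float(y1[0])        # low-pass state, seeded with the first sample
    hp = 0.0                 # high-pass state
    prev_y2 = float(y2[0])
    fused[0] = lp + hp
    for n in range(1, len(y1)):
        lp = beta * lp + (1.0 - beta) * y1[n]      # low-pass on source 1
        hp = beta * (hp + y2[n] - prev_y2)         # high-pass on source 2
        prev_y2 = y2[n]
        fused[n] = lp + hp
    return fused
```

Because the two transfer functions sum to unity, feeding the same signal into both inputs reconstructs it exactly, a quick sanity check on complementarity.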

2. Trainable Parameterizations and Learning Approaches

Traditionally, the filter parameters are selected via manual tuning or empirical analysis. Trainable complementary filters replace this heuristic selection with either (a) data-driven learning of filter coefficients or (b) optimal synthesis according to noise models:

  • In adaptive attitude estimation, axis-dependent gains for the accelerometer update, $K_f = \mathrm{diag}(k_{fx}, k_{fy}, k_{fz})$, are mapped from the instantaneous residual $\mathrm{Res}_f$ through a dedicated multilayer perceptron (MLP) per axis. The input to each MLP is a polynomial expansion of the residual component, and the output, after a smooth thresholding operation, is the axis gain (Vertzberger et al., 2022). This allows localized adaptation in response to the sensor context.
  • In time-series and system identification, the complementary filter's frequency cutoff parameter $\beta \in (0,1)$ is parameterized through a learnable transformation, such as $\beta = \sigma(\gamma)$ (with $\sigma$ a sigmoid), and optimized jointly with the model weights (e.g., RNN parameters) to minimize fusion error (Ensinger et al., 2023).
  • For sensor fusion addressing seismic isolation in gravitational-wave detectors, the design of $C_1(s)$ and $C_2(s)$ is reformulated as an $\mathcal{H}_\infty$ synthesis problem, yielding filters that minimize the worst-case logarithmic difference between fused sensor noise and its theoretical lower bound over frequency (Tsang et al., 2021).
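
The sigmoid parameterization of the cutoff can be sketched in plain NumPy. The data, the finite-difference gradient step (a stand-in for gradient-based training through a model), and all names (`fusion_loss`, `gamma`, the learning rate) are illustrative assumptions, not the procedure of the cited work:

```python
import numpy as np

def sigmoid(g):
    """Map an unconstrained parameter to a cutoff beta in (0, 1)."""
    return 1.0 / (1.0 + np.exp(-g))

def fusion_loss(gamma, y_slow, y_fast, target):
    """MSE of the complementary fusion with beta = sigmoid(gamma)."""
    beta = sigmoid(gamma)
    lp, hp = float(y_slow[0]), 0.0
    err = (lp + hp - target[0]) ** 2
    for n in range(1, len(target)):
        lp = beta * lp + (1.0 - beta) * y_slow[n]     # low-pass path
        hp = beta * (hp + y_fast[n] - y_fast[n - 1])  # high-pass path
        err += (lp + hp - target[n]) ** 2
    return err / len(target)

# Hypothetical data: the "slow" source is clean at low frequency but
# noisy at high frequency; the "fast" source tracks detail but drifts.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 4.0 * np.pi, 400)
target = np.sin(t) + 0.2 * np.sin(7.0 * t)
y_slow = target + 0.3 * rng.standard_normal(t.size)  # high-freq noise
y_fast = target + 0.5 * np.sin(0.2 * t)              # low-freq drift

# Learn gamma by finite-difference gradient descent on the fusion error.
gamma, lr, eps = 0.0, 1.0, 1e-4
for _ in range(100):
    grad = (fusion_loss(gamma + eps, y_slow, y_fast, target)
            - fusion_loss(gamma - eps, y_slow, y_fast, target)) / (2 * eps)
    gamma -= lr * grad
```

The sigmoid keeps $\beta$ strictly inside $(0,1)$ throughout training, so the filter remains stable for any value of the unconstrained parameter $\gamma$.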

3. Hybrid and Fully-Learned Filter Architectures

Trainable complementary filters manifest in multiple architectural forms:

  • Hybrid Model-Based and Data-Driven: In adaptive attitude estimation ("Deep Attitude Estimator"), the backbone filter is model-based, but axis gain selection is delegated to learned neural networks. The filter recursively fuses inertial sensor data, with gain adaptation as a learnable mapping (Vertzberger et al., 2022).
  • Fully-Learned Dynamical Systems Fusion: The complementary filter is used to combine outputs from fast (high-frequency-responsive) and slow (long-horizon-stable) neural models, yielding improved short- and long-term prediction accuracy. Both sub-models and the cutoff are learned jointly (Ensinger et al., 2023).
  • Hybrid Model-Simulator Fusion: The slow model is implemented as a non-trainable physics-based simulator, and only the fast (neural) sub-model and the filter cutoff are learned (Ensinger et al., 2023).
  • Optimal Control Synthesis: For multi-sensor fusion, filter coefficients are synthesized via optimization (e.g., Riccati-based $\mathcal{H}_\infty$ solvers) based on explicit spectral models of sensor noise (Tsang et al., 2021).

4. Mathematical Formulations and Training Procedures

Key mathematical structures underpin trainable complementary filters:

  • Component-wise Adaptive Fusion (Attitude):
    • Gyro step: $\hat R_{g,k} = \hat R_{k-1}\,(I + dt\,\tilde\Omega_{\times,k})$, followed by orthonormalization.
    • Residual: $\mathrm{Res}_f = \tilde f^b - \hat g^b_{g,k}$.
    • Axis gain via MLP: $k_{f_i} = \mathrm{SoftThreshold}(\mathrm{MLP}_i(u_i))$.
    • Adaptive update: $\hat g^b_{a,k} = \hat g^b_{g,k} + K_f\,(\tilde f^b - \hat g^b_{g,k})$.
    • Optimization on ground-truth gravity angle error (Vertzberger et al., 2022).
  • Dynamical System Fusion:
    • Discrete filter recurrences for fast/HP and slow/LP contributions:

    $$\begin{aligned} y_{\rm HP}[n] &= \beta\bigl(y_{\rm HP}[n-1] + y_{\rm fast}[n] - y_{\rm fast}[n-1]\bigr),\\ y_{\rm LP}[n] &= \beta\, y_{\rm LP}[n-1] + (1-\beta)\, y_{\rm slow}[n]. \end{aligned}$$

    • End-to-end loss on the fused prediction output (Ensinger et al., 2023).

  • Sensor Fusion Optimization:

    • Objective function: minimize $J(C_1) = \sup_\omega \left[\,10\log_{10} N_{\rm super}^{\rm ASD}(\omega) - 10\log_{10} N_{\min}^{\rm ASD}(\omega)\,\right]$ subject to $C_1(s) + C_2(s) = 1$.
    • Solution via $\mathcal{H}_\infty$ synthesis (generalized plant formulation, Riccati solution) (Tsang et al., 2021).
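
The attitude recurrences above can be sketched as one filter step, with a fixed diagonal gain standing in for the MLP output; the frame convention (nav-frame gravity direction taken as $[0, 0, 1]$) and all names are simplifying assumptions:

```python
import numpy as np

def skew(w):
    """Skew-symmetric matrix so that skew(w) @ v == np.cross(w, v)."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def attitude_step(R_prev, omega, f_b, K_f, dt):
    """One step of an adaptive complementary attitude filter (sketch).

    R_prev: previous body-to-nav rotation estimate (3x3)
    omega:  gyro rate sample (rad/s); f_b: accelerometer output (body frame)
    K_f:    diagonal per-axis gain matrix, here given as a constant; in the
            trainable filter it would come from a per-axis MLP on the residual.
    """
    # Gyro propagation with first-order integration, then re-orthonormalize
    # via SVD projection back onto the rotation group.
    R_g = R_prev @ (np.eye(3) + dt * skew(omega))
    U, _, Vt = np.linalg.svd(R_g)
    R_g = U @ Vt
    # Predicted gravity direction in the body frame.
    g_pred = R_g.T @ np.array([0.0, 0.0, 1.0])
    # Accelerometer residual and per-axis gated update.
    res = f_b - g_pred
    g_upd = g_pred + K_f @ res
    return R_g, g_upd / np.linalg.norm(g_upd)
```

For a stationary sensor with zero gyro rate and an accelerometer reading aligned with the predicted gravity, the residual vanishes and the estimate is unchanged.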

5. Quantitative Performance and Empirical Results

Empirical evaluation demonstrates that trainable complementary filters provide accuracy and robustness superior to conventional fixed-filter approaches:

  • Adaptive Attitude Estimation: DAE (hybrid trainable filter) achieves a mean roll/pitch RMS error of $0.73^\circ$, compared to $1.18^\circ$–$1.26^\circ$ for traditional filters (Madgwick, Mahony, ES-EKF, AEKF) and $0.81^\circ$ for a prior model-learning filter. Computational load is minimal ($\approx 1450$ MLP parameters evaluated three times per step, $>200$ Hz throughput on CPU) (Vertzberger et al., 2022).
  • Dynamics Learning: On a double-mass spring system, the baseline GRU reaches RMSE $\approx 0.59$ versus $\approx 0.13$ for the split-GRU with trainable filter, an over 80% reduction. On a double-torsion pendulum, long-term RMSE improves 2–3× (Ensinger et al., 2023).
  • Sensor Fusion for Seismic Isolation: $\mathcal{H}_\infty$-synthesized filters keep fused sensor noise within a few hundredths of a decibel of the theoretical lower bound at all frequencies, whereas manual designs are suboptimal and less reproducible. Suppression ratios reach $\sim 63\times$ at the microseismic peak versus $\sim 6\times$ for manual filters (Tsang et al., 2021).

6. Applications, Extensions, and Practical Guidance

Trainable complementary filters are applicable wherever multiple data sources exhibit complementary frequency-domain characteristics and where adaptation to environmental or system changes is desirable. Concrete domains include:

  • Attitude estimation from low-grade inertial sensors under pedestrian motion (Vertzberger et al., 2022).
  • Learning predictive models for unknown dynamical systems combining fast (sequence model) and slow (simulator) predictors (Ensinger et al., 2023).
  • Sensor fusion in gravitational-wave detector seismic isolation, especially under evolving noise profiles (Tsang et al., 2021).

Practical guidance entails:

  • Initializing trainable parameters (e.g., the cutoff $\beta$) via spectral analysis.
  • Enforcing filter constraints ($C_1 + C_2 = 1$ or $\mathcal{H} + \mathcal{L} = 1$) via parameterization or architectural choice.
  • Adopting stable parameterizations (e.g., a sigmoid mapping to $(0,1)$).
  • Using modern optimization solutions (SGD for neural filters; Riccati solvers for rational filters).
  • Embedding real-time adaptation loops where sensor noise profiles may shift due to environmental or equipment changes (Tsang et al., 2021).

Extensions include multi-sensor fusion ($H_1 + H_2 + H_3 = 1$), supervisory control with automated mode switching, and filter synthesis in broader control and estimation systems.
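
One way to obtain a three-filter split with $H_1 + H_2 + H_3 = 1$ by construction is to cascade two complementary pairs. The first-order prototypes and names below are illustrative placeholders, not a design from the cited works:

```python
import numpy as np

def three_way_weights(omega, w1, w2):
    """Frequency responses for a three-sensor complementary split.

    Two cascaded complementary pairs guarantee H1 + H2 + H3 = 1 at every
    frequency: H1 = L1, H2 = (1 - L1) L2, H3 = (1 - L1)(1 - L2), where
    L1, L2 are first-order low-pass prototypes with corner frequencies
    w1 < w2 (rad/s). omega is an array of evaluation frequencies (rad/s).
    """
    s = 1j * omega
    L1 = 1.0 / (1.0 + s / w1)   # low corner: sensor 1 trusted at low freq
    L2 = 1.0 / (1.0 + s / w2)   # high corner: splits the remaining band
    H1 = L1
    H2 = (1.0 - L1) * L2
    H3 = (1.0 - L1) * (1.0 - L2)
    return H1, H2, H3
```

The identity holds algebraically regardless of the prototype shapes, so higher-order or optimized low-pass sections can be substituted without breaking complementarity.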

7. Theoretical and Implementation Considerations

A defining property is that the sum of the complementary filter transfer functions must identically equal unity across all frequencies, ensuring no information loss or frequency gap. Stability is maintained by constraining the trainable parameters (e.g., $\beta \in (0,1)$, taking care that $\beta$ does not reach 1 exactly; a small $\epsilon$ can be added to denominators if necessary).

Automated filter synthesis via optimization enables reproducibility and optimality unattainable by manual design, particularly relevant for precision applications such as gravitational-wave detection, where maximal noise suppression is critical.

A plausible implication is that embedding trainable complementary filters in real-time adaptive systems enables continuous optimization of signal fusion in dynamic environments or under non-stationary noise statistics. This paradigm generalizes beyond sensor fusion to any task where frequency-selective reliability trade-offs must be managed end-to-end.
