
Neural Feedback Scheduling (NFS)

Updated 3 December 2025
  • Neural Feedback Scheduling (NFS) is a control framework that trains neural networks to learn optimal scheduling and event triggering based on system states and resource constraints.
  • It replaces explicit online optimization with neural approximators trained offline, reducing computational overhead while maintaining performance.
  • NFS preserves system stability and adherence to real-time constraints in embedded and cyber-physical systems while matching near-optimal control benchmarks.

Neural Feedback Scheduling (NFS) is a class of control and resource allocation frameworks in which a neural network is trained to perform scheduling or triggering decisions based on observed system states, task execution parameters, and resource availability. NFS schemes have been developed to address the limitations of conventional scheduling and event-triggered control by replacing explicit online optimization or threshold-based mechanisms with learned neural mappings that approximate optimal scheduling policies under varying system conditions. Typical applications include embedded and cyber-physical systems where resource constraints, workload variability, and tight feedback performance requirements render classical approaches computationally prohibitive or communication-inefficient. Recent research demonstrates that NFS can maintain control performance equivalent to theoretically optimal feedback schedulers or event-triggered controllers, while achieving substantial reductions in run-time overhead or communication events (0805.3062, Yang et al., 19 Jul 2025).

1. Formulation of Neural Feedback Scheduling Problems

NFS frameworks are typically formulated for systems in which a set of real-time control tasks, or feedback control laws, must be scheduled on shared computational platforms or communication channels in the presence of limited resources and dynamic external disturbances. For CPU-bound scheduling, consider $N$ independent control tasks $\{\tau_i\}_{i=1}^N$ with known execution times $C_i$ and sampling periods $h_i$. Each task’s control quality is characterized by a cost function $J_i(h_i)$, monotonically increasing in $h_i$, and an overall weighted cost is given by:

$$J(h_1, \dots, h_N) = \sum_{i=1}^N w_i J_i(h_i)$$

The scheduler minimizes this cost subject to a utilization constraint that reserves capacity for higher-priority or disturbing tasks:

$$\min_{h_1, \dots, h_N} J(h_1, \dots, h_N) \quad \text{s.t.} \quad \sum_{i=1}^N \frac{C_i}{h_i} \leq U_R$$

where $U_R$ denotes the maximum allowable utilization. The problem may be equivalently recast in terms of sampling frequencies $f_i = 1/h_i$ as a convex nonlinear optimization (0805.3062).
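
As a concrete illustration, the constrained problem above can be solved numerically for a toy workload. The sketch below assumes quadratic cost curves $J_i(h_i) = b_i h_i^2$ (illustrative, not from the paper) and uses scipy's SLSQP routine as a stand-in for an SQP solver; all task parameters are hypothetical.

```python
# Toy instance of the feedback scheduling problem: minimize the weighted
# control cost subject to the CPU utilization constraint. Cost curves,
# task parameters, and bounds are illustrative assumptions.
import numpy as np
from scipy.optimize import minimize

C = np.array([0.002, 0.003, 0.004])    # execution times C_i (s), assumed
w = np.array([1.0, 1.0, 2.0])          # task weights w_i, assumed
b = np.array([50.0, 80.0, 120.0])      # cost curvatures, assumed
U_R = 0.8                              # allowable utilization U_R

def total_cost(h):                     # J(h) = sum_i w_i J_i(h_i)
    return np.sum(w * b * h**2)

# Utilization constraint sum_i C_i / h_i <= U_R, written as g(h) >= 0
cons = {"type": "ineq", "fun": lambda h: U_R - np.sum(C / h)}

res = minimize(total_cost, x0=np.full(3, 0.05), method="SLSQP",
               constraints=[cons], bounds=[(1e-3, 1.0)] * 3)
print("optimal periods h*:", res.x)
print("utilization at h*:", np.sum(C / res.x))
```

At the optimum the utilization constraint is active, reflecting the intuition that faster sampling improves control quality until the CPU budget is exhausted.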

For event-triggered implementations, the plant dynamics are described by:

$$\dot{x}(t) = f(x(t), u(t)), \qquad x \in \mathbb{R}^d, \; u \in \mathbb{R}^m$$

An event-triggered control law holds $u(t) = u(x(t_k))$ constant between events. The triggering policy is designed to minimize the number of transmissions (or maximize the minimal inter-event time) subject to stability and performance objectives, given precise state and error measurements (Yang et al., 19 Jul 2025).
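
The sample-and-hold mechanism is easy to demonstrate in simulation. The sketch below uses a linear plant, a static gain, and a simple relative-error trigger as illustrative stand-ins (none of these come from the cited paper); the point is only that $u$ stays frozen at its last event value between triggers.

```python
# Sample-and-hold event-triggered control on a toy linear plant.
# Plant matrices, gain K, and the trigger threshold sigma are assumed.
import numpy as np

dt, T, sigma = 1e-3, 5.0, 0.1
A = np.array([[0.0, 1.0], [-2.0, -1.0]])
B = np.array([[0.0], [1.0]])
K = np.array([[1.0, 1.5]])

x = np.array([1.0, 0.0])
x_event = x.copy()                     # state x(t_k) at the last event
events = 0
for _ in range(int(T / dt)):
    # Trigger when the hold error grows too large relative to the state.
    if np.linalg.norm(x_event - x) > sigma * np.linalg.norm(x):
        x_event, events = x.copy(), events + 1
    u = -K @ x_event                   # u(t) = u(x(t_k)), held between events
    x = x + dt * (A @ x + (B @ u).ravel())   # forward-Euler step

print(f"{events} events over {T} s, final |x| = {np.linalg.norm(x):.4f}")
```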

2. Neural Approximators for Optimal Scheduling and Triggering

NFS approaches replace costly online optimization schemes with neural network approximators that learn the mapping from current workload or system state to optimal scheduling or control decisions.

In the CPU scheduling case (0805.3062), a three-layer feedforward backpropagation network is designed, where the input vector $X = [C_1, \dots, C_N, U_R]^T$ encodes current task execution times and residual utilization. The network outputs predicted optimal sampling frequencies $Y = [\hat{f}_1, \dots, \hat{f}_N]^T$, with the internal transformations:

$$A = W_1 X + B_1, \qquad Z = \sigma(A)$$

$$Y = W_2 Z + B_2$$

where $\sigma(a) = 1/(1+e^{-a})$ is the sigmoid activation, and $W_1, B_1, W_2, B_2$ are trained parameters. The neural network is trained offline using optimal solutions computed by sequential quadratic programming over sampled workload scenarios.
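
The forward pass is a single matrix-vector pipeline. A minimal numpy sketch follows, with an assumed hidden width $M$ and randomly initialized (untrained) weights; in practice the parameters come from the offline training described in Section 3.

```python
# Forward pass of the three-layer NFS network: sigmoid hidden layer,
# linear output layer. The hidden width M and the weights are assumed.
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

N, M = 3, 8                                        # tasks, hidden nodes
rng = np.random.default_rng(0)
W1, B1 = rng.normal(size=(M, N + 1)), np.zeros(M)  # hidden layer
W2, B2 = rng.normal(size=(N, M)), np.zeros(N)      # output layer

def nfs_forward(C, U_R):
    X = np.append(C, U_R)          # X = [C_1, ..., C_N, U_R]^T
    Z = sigmoid(W1 @ X + B1)       # A = W1 X + B1,  Z = sigma(A)
    return W2 @ Z + B2             # Y = W2 Z + B2 = predicted f_hat

print(nfs_forward(np.array([0.002, 0.003, 0.004]), 0.8))
```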

For event-triggered control (Yang et al., 19 Jul 2025), the neural system parameterizes both a candidate Lyapunov function $V_\theta(x)$ and the feedback law $u_\theta(x)$, using a shared parameter vector $\theta$. The triggering function is defined as:

$$\phi_\theta(x, e) = \nabla V_\theta(x) \cdot \big[ f(x, u_\theta(x+e)) - f(x, u_\theta(x)) \big] - \gamma V_\theta(x)$$

where $\gamma \in (0,1)$ is a design constant. An event is triggered when $\phi_\theta(x, e) = 0$, and the objective is to minimize a combined performance- and communication-oriented cost over $\theta$.
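
To make the triggering test concrete, the sketch below evaluates $\phi$ for a quadratic $V(x) = x^T P x$ and a linear plant and controller, used purely as stand-ins for the neural parameterizations $V_\theta$ and $u_\theta$; all numerical values are assumed.

```python
# Evaluate the triggering function phi(x, e) for stand-in choices of
# V (quadratic) and u (linear state feedback) on a toy linear plant.
import numpy as np

P = np.eye(2)                           # stand-in Lyapunov matrix
K = np.array([[1.0, 1.7]])              # stand-in feedback gain
gamma = 0.5                             # design constant in (0, 1)

A = np.array([[0.0, 1.0], [0.0, 0.0]])  # double integrator (assumed)
B = np.array([[0.0], [1.0]])

def f(x, u):
    return A @ x + (B @ u).ravel()

def u(x):
    return -K @ x

def phi(x, e):
    grad_V = 2.0 * P @ x                # gradient of V(x) = x^T P x
    V = x @ P @ x
    return grad_V @ (f(x, u(x + e)) - f(x, u(x))) - gamma * V

x, e = np.array([1.0, 0.0]), np.array([0.05, -0.02])
print("phi =", phi(x, e), "-> trigger" if phi(x, e) >= 0 else "-> hold")
```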

3. Learning Strategies and Loss Functions

The neural networks in both scheduling and event-triggered scenarios are trained offline with datasets or cost functions that encode optimality and system constraints.

For scheduling (0805.3062), optimal input-output pairs $[C_1, \dots, C_N, U_R;\, f_1^*, \dots, f_N^*]$ are generated offline using mathematical optimization (e.g., SQP). Input and output data are normalized to $[0,1]$. The network is trained by minimizing mean-squared error:

$$E = \frac{1}{S} \sum_{s=1}^S \big\| Y^{(s)} - \widehat{Y}^{(s)} \big\|^2$$

using the Levenberg–Marquardt backpropagation procedure.
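A minimal sketch of this offline stage follows, using scipy's Levenberg–Marquardt least-squares routine in place of LM backpropagation; the normalized training pairs here are synthetic rather than SQP-generated, so only the mechanics are illustrated.

```python
# Fit the three-layer network to normalized (workload -> frequency)
# pairs by Levenberg-Marquardt least squares. Training data is synthetic.
import numpy as np
from scipy.optimize import least_squares

N, M, S = 3, 8, 200
rng = np.random.default_rng(1)
X = rng.uniform(size=(S, N + 1))       # normalized inputs [C_1..C_N, U_R]
Y = rng.uniform(size=(S, N))           # normalized targets f* (synthetic)

def unpack(p):                         # flat parameter vector -> weights
    i = M * (N + 1)
    W1, B1 = p[:i].reshape(M, N + 1), p[i:i + M]
    W2 = p[i + M:i + M + N * M].reshape(N, M)
    B2 = p[i + M + N * M:]
    return W1, B1, W2, B2

def residuals(p):
    W1, B1, W2, B2 = unpack(p)
    Z = 1.0 / (1.0 + np.exp(-(X @ W1.T + B1)))   # hidden activations
    return ((Z @ W2.T + B2) - Y).ravel()         # prediction errors

n_params = M * (N + 1) + M + N * M + N
fit = least_squares(residuals, rng.normal(scale=0.1, size=n_params),
                    method="lm")
print("training MSE:", np.mean(fit.fun**2))
```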

For event-triggered controllers (Yang et al., 19 Jul 2025), the learning objective combines the expected stage cost, a communication penalty proportional to the number of triggers, and regularization:

$$J(\theta) = \mathbb{E}\left[ \int_0^T \ell\big(x(t), u_\theta(x(t_k))\big)\, dt + \lambda \sum_{k \,:\, t_k \leq T} 1 \right]$$

Differentiable surrogate losses $L_{\text{per-step}}$, $L_{\text{events}}$, and $L_{\text{lip}}$ (stability, communication, and Lipschitz penalties) are used, with separate optimization loops for the path-integral (PI) and Monte Carlo (MC) learning variants, depending on whether differentiable simulation or lower-bound analysis is employed.
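
For the MC flavor, the cost $J(\theta)$ can be estimated by averaging rollouts in which each trigger incurs the penalty $\lambda$. The sketch below does this for the toy sample-and-hold system from Section 1, with a quadratic stage cost and a norm-based trigger standing in for the learned $\phi_\theta$; all parameters are assumed.

```python
# Monte Carlo estimate of J = E[ integral of stage cost + lambda * (number
# of triggers) ] over random initial states, on a toy linear plant.
import numpy as np

A = np.array([[0.0, 1.0], [-2.0, -1.0]])
B = np.array([[0.0], [1.0]])
K = np.array([[1.0, 1.5]])             # stand-in controller parameters

def rollout_cost(x0, sigma=0.1, lam=0.05, dt=1e-3, T=3.0):
    x, x_event, cost, events = x0.copy(), x0.copy(), 0.0, 0
    for _ in range(int(T / dt)):
        if np.linalg.norm(x_event - x) > sigma * np.linalg.norm(x):
            x_event, events = x.copy(), events + 1
        u = -K @ x_event
        cost += dt * (x @ x + float(u @ u))      # stage cost l(x, u)
        x = x + dt * (A @ x + (B @ u).ravel())
    return cost + lam * events                   # performance + comm. penalty

rng = np.random.default_rng(2)
J_hat = np.mean([rollout_cost(rng.normal(size=2)) for _ in range(16)])
print("Monte Carlo estimate of J:", J_hat)
```

In the actual method, gradients of this estimate with respect to $\theta$ drive the learning; only the cost evaluation is sketched here.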

4. Online Scheduling and Triggering Algorithms

NFS execution at runtime involves periodic invocation of the trained neural network to update task schedules or event triggers.

For feedback scheduling in embedded real-time systems (0805.3062), the online algorithm at every feedback interval $T_{\mathrm{FS}}$ proceeds as follows:

  • Measure the current task execution times $C[1..N]$ and the disturbing-task utilization.
  • Compute the residual utilization $U_R = U_{\mathrm{target}} - (c_{\mathrm{disturb}} / h_{\mathrm{disturb}})$.
  • Form the input vector $X = [C_1, \dots, C_N, U_R]^T$ and execute a neural network forward pass.
  • Set the new sampling periods $h_i = 1/\hat{f}_i$ for all loops.
  • Return the updated $h[1..N]$ to the real-time scheduler.

The computational complexity is $O(NM)$ multiplications per run (with $M$ hidden nodes), typically $O(N^2)$ if $M \propto N$.
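
A self-contained sketch of this runtime step follows; `nn_forward` is any trained scheduler network (such as the one sketched in Section 2), and a toy equal-utilization rule stands in for it so the snippet runs on its own. The clipping bounds are illustrative.

```python
# One NFS feedback-scheduling invocation: measure workload, compute the
# residual utilization, run the network, and return new periods.
import numpy as np

def feedback_scheduling_step(nn_forward, C, c_disturb, h_disturb,
                             U_target=0.9, f_min=1.0, f_max=200.0):
    U_R = U_target - c_disturb / h_disturb   # residual utilization
    f_hat = nn_forward(C, U_R)               # single forward pass, O(N*M)
    f_hat = np.clip(f_hat, f_min, f_max)     # keep frequencies physical
    return 1.0 / f_hat                       # new sampling periods h_i

# Toy stand-in network: split U_R equally across tasks (C_i * f_i = U_R/N).
dummy_nn = lambda C, U_R: U_R / (len(C) * C)

h_new = feedback_scheduling_step(dummy_nn, np.array([0.002, 0.003, 0.004]),
                                 c_disturb=0.004, h_disturb=0.04)
print("updated periods:", h_new)
```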

In the event-triggered context (Yang et al., 19 Jul 2025), the neural network evaluates $\phi_\theta(x, e)$ at each state, triggering new control updates only when the event threshold is crossed. A projection operation:

$$\Pi_{U(V)}[u](x) = u(x) - \frac{\max\big(0, \, \nabla V(x) \cdot f(x, u(x)) + V(x)\big)}{\|\nabla V(x)\|^2}\, \nabla V(x)$$

is used to ensure the stability certificate remains valid online.
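
The projection only activates when the certificate would be violated; otherwise it leaves $u$ unchanged. A numpy sketch for a fully actuated toy plant (so that $u$ and $\nabla V$ share a dimension), with a quadratic $V$ standing in for the input-convex network:

```python
# Stability projection: shift u(x) along -grad V(x) just enough that
# grad V . f(x, u(x)) + V(x) <= 0. Plant and V are illustrative.
import numpy as np

P = np.eye(2)
V = lambda x: x @ P @ x
grad_V = lambda x: 2.0 * P @ x

def f(x, u):                 # fully actuated single integrator (assumed)
    return u

def project(u_fn, x):
    g = grad_V(x)
    viol = max(0.0, g @ f(x, u_fn(x)) + V(x))   # certificate violation
    return u_fn(x) - (viol / (g @ g)) * g       # Pi_{U(V)}[u](x)

x = np.array([1.0, -0.5])
u_raw = lambda x: -0.1 * x                      # too weak to certify decay
u_safe = project(u_raw, x)
print("margin after projection:", grad_V(x) @ f(x, u_safe) + V(x))  # <= 0
```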

5. Stability, Schedulability, and Theoretical Guarantees

A key property of NFS schemes is the preservation of system stability and adherence to schedulability constraints, even when using neural approximators.

In (0805.3062), the NFS architecture, trained over the range of anticipated workloads, empirically maintains the overall control performance within 2–3% of the optimal feedback scheduling benchmark (OFS), while adhering to utilization constraints under Rate Monotonic scheduling. The neural network approximates the optimal sampling-frequency mapping sufficiently tightly to replace the sequential quadratic programming solver for real-time scheduling.

In neural event-triggered control (Yang et al., 19 Jul 2025), the Lyapunov function $V_\theta$ is parameterized as an input-convex neural network, ensuring $V_\theta(x) > 0$ for $x \neq 0$. The projection $\Pi_{U(V)}$ guarantees that, between events, the stability condition:

$$\frac{d}{dt} V_\theta(x(t)) \leq -(1-\gamma)\, V_\theta(x(t))$$

holds, yielding exponential convergence to the equilibrium. Furthermore, a closed-form analytical lower bound on the minimal inter-event time $T_{\mathrm{min}}$ is derived, excluding Zeno behavior (the accumulation of infinitely many events in finite time).
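
The exponential rate can be made explicit: integrating the differential inequality above via the comparison (Grönwall) lemma gives

$$V_\theta(x(t)) \leq V_\theta(x(0))\, e^{-(1-\gamma) t},$$

so $V_\theta$ decays at rate at least $1-\gamma$ along trajectories, and, under the usual class-$\mathcal{K}$ bounds satisfied by the positive-definite parameterization, this decay carries over to $\|x(t)\|$.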

6. Empirical Performance and Practical Considerations

Simulation-based studies have quantified the computational and control performance of NFS approaches.

In CPU scheduling for embedded LQG control of inverted pendulums (0805.3062), the neural feedback scheduler achieves a total control cost nearly identical to that of optimal feedback scheduling, while reducing average run-time overhead by a factor of 8.2 ($0.0207$ s/run for NFS vs. $0.1701$ s/run for OFS). The execution time of NFS is also more tightly distributed, aiding real-time predictability. Open-loop scheduling diverges under overload, while NFS and OFS adapt and maintain control stability.

In neural event-triggered control (Yang et al., 19 Jul 2025), both path-integral and Monte Carlo NFS variants reduce the average number of triggers and increase minimal inter-event times relative to conventional neural controllers and LQR-based event-triggered controllers, often by 10×–100×. For example, Neural ETC–PI achieves 20 triggers on the 2-D gene regulatory network benchmark, compared to 1816 for LQR+ETC, with mean squared tracking errors remaining low. Neural ETC–MC further minimizes triggers at a small cost in tracking accuracy.

Implementation guidelines for efficient deployment include maintaining a small number of control loops (e.g., $N \leq 10$), sampling offline parameter ranges densely for neural training, normalizing network inputs to $[0,1]$, and imposing explicit bounds on frequencies and scheduling decisions to avoid nonphysical behavior (0805.3062).

7. Extensions and Current Directions

The neural feedback scheduling paradigm has evolved from embedded CPU scheduling to include high-dimensional, communication-adaptive, and stability-certified control settings. Recent advancements focus on:

  • Simultaneously learning Lyapunov certificates and control policies under neural parameterizations, as in Neural ETC (Yang et al., 19 Jul 2025).
  • Combining projection operators for stability with neural-based scheduling, yielding formal guarantees even in data-driven scenarios.
  • Employing Monte Carlo and path-integral approaches for tractable training of neural event-triggered schemes.
  • Optimizing both resource usage (CPU/communication) and control performance metrics for complex nonlinear and high-dimensional plants.

A plausible implication is that NFS will remain central to adaptive scheduling in resource-aware control systems, particularly as neural approximation techniques become more scalable and certifiable for safety-critical applications. Its adoption hinges on rigorous validation of generalization, stability, and real-time efficiency under practical deployment constraints.

