Event-Driven Receding Horizon Control

Updated 14 February 2026
  • ED-RHC is a distributed optimal control strategy that re-optimizes decisions upon significant events, reducing computation and communication costs.
  • It employs a receding horizon approach where finite-horizon optimization problems are solved only when events, such as threshold crossings or agent arrivals, occur.
  • ED-RHC improves scalability and performance across multi-agent, hybrid, and energy-aware systems, with proven gains in estimation, ride-sharing, and persistent monitoring applications.

Event-Driven Receding Horizon Control (ED-RHC) is a class of distributed optimal control strategies where the control policy is re-optimized at the occurrence of system-relevant events, rather than at uniform time intervals. This approach enables significant reduction in computational and communication overhead while maintaining or improving control performance in networked, uncertain, and resource-constrained systems. ED-RHC has been rigorously developed for domains including distributed persistent monitoring, networked estimation, multi-agent cooperative systems, ride-sharing, event-triggered LQ control, hybrid and power-electronics systems, and energy-aware agent coordination. The central paradigm is the receding (or model predictive) horizon, with event-driven triggering determining when the controller solves its local or global optimization problem.

1. Core Principles of Event-Driven Receding Horizon Control

ED-RHC frameworks are unified by the following characteristics:

  • Receding Horizon Optimization: At each control update, the controller solves a finite-horizon optimal control problem (RHCP) of the form

\min_{u_k,\dots,u_{k+N-1}} \sum_{j=0}^{N-1} \ell(x_{k+j}, u_{k+j}) + V(x_{k+N})

subject to system dynamics and constraints. Only the first action is implemented, then the horizon recedes.

  • Event-Driven Triggering: Unlike periodic MPC, the RHCP is re-solved only at the occurrence of application-specific events, not at every time step. Event triggers include threshold crossings in monitored variables, agent arrivals, target visits, estimation error thresholds, or changes in system state such as network topology or disturbances.
  • Event Definition Examples:
    • In distributed estimation, an event is triggered when the local estimation error covariance trace crosses the threshold \gamma_i, i.e., \mathrm{tr}\,\Omega_i(t) \ge \gamma_i (Welikala et al., 2020).
    • In agent monitoring and networked control, events may include agent arrivals/departures, uncertainty hitting zero, or changes in monitored sets (Welikala et al., 2020, Welikala et al., 2021).
    • In event-triggered LQ control, events are generated by the state norm exceeding a pre-set threshold (Demirel et al., 2017, Nishida et al., 29 Sep 2025).

This event-driven architecture leads to reduced actuation frequency, computational complexity, and communication requirements while retaining reactivity to pertinent system changes.
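The event-driven re-optimization loop described above can be sketched as follows. This is a minimal illustration, not an implementation from the cited works: the double-integrator system, the norm-threshold trigger, and all numeric values are assumptions chosen for the example.

```python
import numpy as np

def solve_rhcp(A, B, Q, R, x0, N):
    """Finite-horizon LQ RHCP via backward Riccati recursion;
    returns the open-loop input sequence u_0, ..., u_{N-1}."""
    P = Q.copy()
    gains = []
    for _ in range(N):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
        gains.append(K)
    gains.reverse()                 # gains[j] is the feedback gain at stage j
    xs, us = x0, []
    for K in gains:
        u = -K @ xs                 # roll the plan forward from x0
        us.append(u)
        xs = A @ xs + B @ u
    return us

# Event-driven loop: re-solve the RHCP only when an event fires (here, the
# state norm crossing a threshold), otherwise keep executing the stored plan.
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
Q, R = np.eye(2), np.array([[1.0]])
x = np.array([1.0, 0.0])
threshold, N = 0.5, 10
plan, events = [], 0
for k in range(50):
    if not plan or np.linalg.norm(x) >= threshold:   # event trigger
        plan = solve_rhcp(A, B, Q, R, x, N)          # re-optimize on event
        events += 1
    u = plan.pop(0)                                  # implement first action only
    x = A @ x + B @ u
```

The key point is that `solve_rhcp` runs only inside the trigger branch; between events the controller replays the remaining segment of its last plan at zero optimization cost.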

2. Mathematical Formulation and Variants

The RHCP in ED-RHC is adapted to the problem class:

  • Distributed Monitoring/Estimation: Agents minimize an integrated or average measure of target uncertainty or estimation error covariance. Local states are defined by monitored neighbors, with control variables comprising dwell times at the current node, choice of next node, and planned dwell at the destination. Objective functions are crafted to optimize local efficiency ratios or normalized average costs, preserving unimodality in the optimization variable for tractable minima (Welikala et al., 2020, Welikala et al., 2020).
  • Sparsity-Promoting, Intermittent Control: The cost function augments standard LQ performance with actuation sparsity regularization, for example:

J^a = \limsup_{N\to\infty} \frac{1}{N}\, \mathbb{E}\Big[\sum_{k=0}^{N-1} x_k^\top Q x_k + u_k^\top R u_k + \theta\,\delta_k \Big]

where \delta_k is a binary actuator trigger (Nishida et al., 29 Sep 2025).

  • Hybrid and Switched Systems: For finite-mode hybrid automata (e.g., power electronic inverters), the controller searches over discrete mode sequences on each event (Chen et al., 2020), with disturbance-adaptive updates.

Variants include parameter-free planning horizons (Welikala et al., 2020), joint energy-awareness (Welikala et al., 2021), and reward-maximization in stochastic and uncertain environments (Khazaeni et al., 2014, Chen et al., 2019).
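A minimal Monte-Carlo sketch of the sparsity-regularized average cost J^a above. The scalar system, the trigger threshold, and the simple proportional law u = -0.8x are illustrative assumptions for the example, not the actual policy of the cited work.

```python
import numpy as np

# Empirically estimate the average cost (1/N) * sum(q*x^2 + r*u^2 + theta*delta)
# for a scalar system that actuates only when a norm-threshold event fires.
rng = np.random.default_rng(0)
a, b, q, r, theta = 0.9, 1.0, 1.0, 0.1, 0.05
threshold, N = 0.3, 10_000
x, total = 0.0, 0.0
for _ in range(N):
    delta = 1 if abs(x) >= threshold else 0   # binary actuator trigger delta_k
    u = -0.8 * x if delta else 0.0            # actuate only on trigger
    total += q * x**2 + r * u**2 + theta * delta
    x = a * x + b * u + 0.1 * rng.standard_normal()
J_a = total / N
```

Raising theta makes each actuation more expensive and pushes the optimal policy toward sparser triggering; the cost above makes that trade-off explicit.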

3. Distributed Algorithms and Implementation Architecture

ED-RHC admits decentralized execution with minimal coordination:

  • Local Problem Decoupling: Each agent maintains only the states of immediate neighbors (or local targets). Upon event detection, the agent solves a low-dimensional RHCP, typically a bi-variate or even univariate optimization. Communication is restricted to coverage or assignment notifications to enforce mutual exclusion or system-level constraints (Welikala et al., 2020, Welikala et al., 2020, Welikala et al., 2021).
  • Algorithmic Skeleton:
  1. Wait for event in local context.
  2. Update neighborhood and determine RHCP type (arrival, departure, etc.).
  3. Solve local RHCP (often analytic or via a low-degree rational program).
  4. Apply the first segment of the optimal plan; wait for new event.
  • Complexity and Scalability: Each event induces only O(|\mathcal N_i|) computations (proportional to the number of neighbors), with closed-form or constant-time solutions for the key subproblems (Welikala et al., 2020, Welikala et al., 2020). No global optimization or iterative negotiation is required.
  • Machine Learning Acceleration: To further reduce per-event computational cost, learning-based approaches use shallow networks to predict the best action (e.g., neighbor selection), solving only a subset of the RHCPs and reverting to full optimization as needed for confidence (Welikala et al., 2020).
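The four-step algorithmic skeleton above can be sketched as an event loop. Everything here is schematic: the event queue, the neighborhood dictionary, and the closed-form dwell-time expression are toy stand-ins, not the cited algorithms.

```python
import heapq

def solve_local_rhcp(event_type, neighborhood):
    """Toy local RHCP: pick the neighbor with the highest uncertainty and
    compute a dwell time from an assumed closed-form expression."""
    best = max(neighborhood, key=lambda n: neighborhood[n])
    dwell = 1.0 / (1.0 + neighborhood[best])   # illustrative closed form
    return best, dwell

# Time-stamped local events; in a real agent these arrive asynchronously.
events = [(0.0, "arrival"), (1.5, "departure"), (2.2, "arrival")]
heapq.heapify(events)
neighborhood = {"target_1": 0.4, "target_2": 0.9}  # uncertainty per neighbor
log = []
while events:
    t, kind = heapq.heappop(events)            # 1. wait for next local event
    # 2.-3. classify the event and solve the corresponding low-dimensional RHCP
    target, dwell = solve_local_rhcp(kind, neighborhood)
    log.append((t, kind, target, dwell))       # 4. apply first plan segment
```

Because each solve touches only the local neighborhood, the per-event cost is independent of the total network size, which is the scalability property claimed above.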

4. Application Domains

ED-RHC has been instantiated in diverse domains:

Domain | Event Example | Key Optimization Target
Persistent Monitoring | Agent arrival, R_i = 0 | Minimize mean node uncertainty
Distributed Estimation | \mathrm{tr}\,\Omega_i \ge \gamma_i | Minimize integrated error covariance
Ride Sharing / Fleet Dispatch | Passenger/vehicle arrivals | Minimize weighted sum of waiting/travel times
Power Electronics | Mode switch event | Track voltage reference, minimize switching effort
Energy-Aware Mobile Agents | Arrival, departure, R_i = 0 | Jointly minimize energy and integrated uncertainty
Event-Triggered LQ Control | x_k \notin \mathcal{C}_0 | Minimize quadratic cost with limited actuation frequency

This modularity demonstrates compatibility with continuous and hybrid dynamics, discrete-event models, and both multi-agent and single-agent architectures (Welikala et al., 2020, Welikala et al., 2020, Chen et al., 2020, Welikala et al., 2021, Chen et al., 2019, Khazaeni et al., 2014, Demirel et al., 2017, Nishida et al., 29 Sep 2025).
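The domain-specific event predicates in the table can each be written as a boolean test on local state. These are hedged illustrations: the symbols follow the table, but the concrete shapes (a covariance matrix, a norm ball for C_0) are assumptions.

```python
import numpy as np

def estimation_event(Omega, gamma):
    """Distributed estimation: trigger when tr(Omega_i) >= gamma_i."""
    return bool(np.trace(Omega) >= gamma)

def lq_event(x, c0_radius):
    """Event-triggered LQ: trigger when x_k leaves C_0 (here assumed
    to be a norm ball of the given radius)."""
    return bool(np.linalg.norm(x) > c0_radius)

def monitoring_event(R_i):
    """Persistent monitoring: trigger when local uncertainty R_i hits zero."""
    return R_i <= 0.0
```

Swapping the predicate (and the matching cost) is all that changes between domains; the receding-horizon machinery around it stays the same.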

5. Theoretical Properties

ED-RHC controller design incorporates and guarantees several theoretically robust properties:

  • Unimodal Objectives: For relevant RHCPs (e.g., in distributed estimation with Riccati dynamics), the local objective as a function of dwell time is unimodal, ensuring existence and uniqueness of minima (Welikala et al., 2020).
  • Closed-form Local Solutions: Many specific RHCP forms (fixed-horizon, variable-horizon, hybrid dwell/transit decisions) admit closed-form or explicit analytic solutions—quadratic-over-linear, rational, or quadratic forms depending on system model (Welikala et al., 2020, Welikala et al., 2021).
  • Stability and Robustness: Practical stability is established in linear systems with event-based control using Lyapunov arguments, with performance robust to moderate parameter noise/disturbance (Demirel et al., 2017, Welikala et al., 2020). Rollout-based variants provide mean-square stability and theoretical regret bounds relative to periodic baselines (Nishida et al., 29 Sep 2025).
  • Parameter-Free Operation: Embedding planning horizon as a variable in the optimization enables truly parameter-free control, obviating manual horizon tuning. Cost performance saturates near the optimum without fine adjustment (Welikala et al., 2020).
  • Scalability: Complexity per agent is independent of network size, depending only on local neighborhood or event incidence, supporting scalability to large multi-agent systems (Welikala et al., 2020, Welikala et al., 2020).
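The unimodality property above has a direct computational payoff: the unique dwell-time minimizer can be located by derivative-free golden-section search. The cost function below is a toy stand-in with the same unimodal shape (decaying benefit of dwelling plus a linear staleness penalty), not an objective from the cited papers.

```python
import math

def golden_section_min(f, lo, hi, tol=1e-8):
    """Locate the minimizer of a unimodal f on [lo, hi] by shrinking the
    bracket with golden-ratio interior points."""
    inv_phi = (math.sqrt(5) - 1) / 2
    a, b = lo, hi
    c, d = b - inv_phi * (b - a), a + inv_phi * (b - a)
    while b - a > tol:
        if f(c) < f(d):
            b, d = d, c                     # minimum lies in [a, d]
            c = b - inv_phi * (b - a)
        else:
            a, c = c, d                     # minimum lies in [c, b]
            d = a + inv_phi * (b - a)
    return (a + b) / 2

# Toy unimodal dwell-time cost: 1/(1+tau) falls with dwell, 0.2*tau grows.
cost = lambda tau: 1.0 / (1.0 + tau) + 0.2 * tau
tau_star = golden_section_min(cost, 0.0, 10.0)
```

For this toy cost the minimizer satisfies (1 + tau)^2 = 5, i.e., tau = sqrt(5) - 1, so the search result can be checked against the analytic value; unimodality guarantees the bracket always contains the unique minimum.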

6. Empirical Performance and Comparative Evaluation

Comprehensive experimental studies demonstrate that ED-RHC outperforms centralized, greedy, and periodic schemes in multiple settings:

  • Estimation and Monitoring: On random networks (7–10 targets, 2–4 agents), finite-horizon estimation error is reduced from 127.8 (centralized periodic) and 131.8 (heuristic) to 101.2 (fully distributed ED-RHC), with a 4–5% gain in sensing efficiency and up to 8% lower tracking error in reference-following tasks. Machine learning acceleration reduces per-RHCP CPU time by up to 86% with negligible loss (<0.1%) in performance (Welikala et al., 2020).
  • Persistent Monitoring: Average J_T improvement of nearly 10% (single-agent) and 50% (multi-agent) compared to IPA-based online threshold control (Welikala et al., 2020).
  • Event-Triggered LQ: Communication/optimization triggers reduced by 68–72% over time-triggered MPC with minimal cost increase (Demirel et al., 2017).
  • Domain-Specific Results: In energy-aware monitoring, the second-order ED-RHC reduces energy by 52% vs. first-order RHC at equal or lower uncertainty; in ride-sharing networks, waiting/travel times reduced by 45–50% over greedy routing (Welikala et al., 2021, Chen et al., 2019).

7. Extensions and Limitations

  • Generalization: ED-RHC applies broadly to stochastic target processes (event counting, estimation), hybrid automata, and resource-constrained scheduling by adapting the event definition and cost function (Chen et al., 2020).
  • Limitations: Performance depends on up-to-date neighbor information and sufficient network connectivity; performance may degrade in high-latency or poorly connected graphs. Extensions to look-ahead and expanded local neighborhoods recover robustness at the cost of higher computation/communication (Welikala et al., 2020).
  • Future Directions: Integration with reinforcement learning accelerators, more expressive event definitions, and adaptive event-trigger thresholds are active research directions. Hybrid systems and actuation-sparsity variants suggest further unification of ED-RHC across domains (Nishida et al., 29 Sep 2025, Chen et al., 2020).

Key references for ED-RHC include the foundational distributed monitoring and estimation work (Welikala et al., 2020, Welikala et al., 2020), event-triggered LQ and sparsity schemes (Demirel et al., 2017, Nishida et al., 29 Sep 2025), energy-aware agent design (Welikala et al., 2021), event-driven hybrid automata (Chen et al., 2020), and cooperative reward-maximization approaches (Khazaeni et al., 2014, Chen et al., 2019).
