ADC: Rolling-Horizon Trust Consensus
- The ADC protocol is a predictive, trust-based consensus mechanism where agents share short-horizon trajectories to anticipate neighbor behavior and enhance coordination.
- It dynamically computes trust and commitment metrics from current and previous predictions, enabling adaptive weight assignments even under adversarial conditions.
- Simulations demonstrate rapid consensus achievement and effective anomaly mitigation, balancing early misbehavior detection with steady convergence under nominal settings.
Rolling-Horizon Predictive Trust-Based Consensus Protocols, formalized as the Anticipatory Distributed Coordination (ADC) protocol, are a class of multi-agent coordination mechanisms in which agents share short-horizon predicted trajectories with their neighbors to enable consensus in dynamic, uncertain, or adversarial environments. Unlike classical consensus schemes that operate solely on instantaneous neighbor states, ADC introduces predictive foresight and dynamic trust estimation. Agents leverage shared rolling-horizon trajectories both to anticipate neighbors’ intentions and to infer trust and commitment, robustifying coordination even under behaviors such as stubbornness, erratic deviations, or missing data (Renganathan et al., 14 Jul 2025).
1. Motivation and Conceptual Distinctions
Classical consensus protocols update agent states based only on the current values of their neighbors, as in $x_i(k{+}1) = x_i(k) + \epsilon \sum_{j \in \mathcal{N}_i} a_{ij}\,(x_j(k) - x_i(k))$. This approach is vulnerable in settings where information may be delayed, manipulated, or otherwise unreliable.
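For concreteness, a minimal sketch of such a classical (non-predictive) update in Python; the path graph, adjacency matrix, and step size below are illustrative choices, not taken from the paper:

```python
import numpy as np

def classical_consensus_step(x, A, eps=0.1):
    """One synchronous classical consensus step:
    x_i <- x_i + eps * sum_j a_ij * (x_j - x_i)."""
    L = np.diag(A.sum(axis=1)) - A   # graph Laplacian
    return x - eps * (L @ x)

# Illustrative 3-agent path graph; states contract toward their average.
A = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
x = np.array([0.0, 0.5, 1.0])
for _ in range(200):
    x = classical_consensus_step(x, A)
# x is now (numerically) the consensus value 0.5 for all agents
```

Note that every step uses only the neighbors' instantaneous states; there is no foresight and no assessment of neighbor reliability, which is exactly the gap ADC targets.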
ADC's central innovation is the use of rolling-horizon predictions: at every step $k$, each agent $i$ computes and shares its own short future trajectory
$\hat{x}_i(k{:}k{+}T \mid k) = \big(\hat{x}_i(k{+}1 \mid k), \ldots, \hat{x}_i(k{+}T \mid k)\big)$
with its neighbors (Eq. (5)). Agents then use not only these current predictions but also the previous-step predictions $\hat{x}_j(\cdot \mid k{-}1)$ to infer the reliability (“trust”) and persistence (“commitment”) of their neighbors’ anticipated behavior. The consensus update is weighted by these dynamically inferred trust parameters rather than by fixed or static adjacency-based weights.
Key differences relative to traditional consensus protocols, as summarized in Table 1:
| Aspect | Classical Consensus | ADC Protocol |
|---|---|---|
| Input | Instantaneous states | Rolling-horizon predictions |
| Neighbor assessment | None or static/non-predictive | Dynamic trust and commitment inference |
| Consensus weights | Exogenous or fixed | Trust- and commitment-adaptive |
2. Rolling-Horizon Predictive Coordination Scheme
At each time $k$, every agent $i$ implements a rolling-horizon communication and update scheme:
- Prediction horizon: The future look-ahead window is a fixed integer $T \geq 1$.
- Data communicated: Each agent $i$ broadcasts its current prediction $\hat{x}_i(k{:}k{+}T \mid k)$ and retains the previous one, $\hat{x}_i(k{:}k{+}T \mid k{-}1)$, for trust analysis. The set of trajectory data received by agent $i$ at step $k$ is $\mathcal{D}_i(k) = \{\, \hat{x}_j(k{:}k{+}T \mid k) : j \in \mathcal{N}_i \,\}$.
- Rolling update sequence: Agents perform the following per time-step:
- Exchange predicted trajectories;
- Retain memory of previous predictions for cross-step comparison;
- Infer trust and commitment traits from the comparison;
- Incorporate the trust-weighted blend into the next rolling-horizon prediction, which is then communicated.
This approach enables multi-step “foresight” regarding the intentions and deviation risks of neighboring agents, supporting resilience to packet drops and adversarial behavior.
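The exchange-retain-update loop above can be sketched as follows; the `Agent` class, the hold-state predictor, and the uniform first-stage blending are illustrative placeholders (ADC replaces the uniform weights with the trust-adaptive ones of Section 3):

```python
import numpy as np

class Agent:
    """Illustrative rolling-horizon agent: predict, share, retain, update."""
    def __init__(self, x0, T=3):
        self.x = float(x0)
        self.T = T
        self.prev_traj = None        # prediction broadcast at the previous step
        self.traj = self.predict()   # current T-step prediction

    def predict(self):
        # Placeholder model: hold the current state over the horizon.
        return np.full(self.T, self.x)

    def step(self, neighbor_trajs):
        # 1) Retain the last prediction for cross-step trust comparison.
        self.prev_traj = self.traj
        # 2) Blend own and neighbors' first-stage predictions. Uniform
        #    weights here; ADC uses trust-adaptive weights instead.
        first_stage = [self.traj[0]] + [t[0] for t in neighbor_trajs]
        self.x = float(np.mean(first_stage))
        # 3) Re-predict; the new trajectory is communicated next round.
        self.traj = self.predict()

# Two agents exchange trajectories and move toward each other.
a, b = Agent(0.0), Agent(1.0)
ta, tb = a.traj.copy(), b.traj.copy()
a.step([tb]); b.step([ta])
```

Retaining `prev_traj` is what later enables each agent to check whether a neighbor's fresh prediction is consistent with what that neighbor promised one step earlier.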
3. Trust and Commitment Metrics
ADC introduces trajectory-based trust and commitment metrics to quantify the credibility and consistency of neighbor predictions:
- Trust radius: For a scalar $\delta > 0$, the trust ball around neighbor $j$'s previous prediction of stage $k{+}\tau$ is
$\mathcal{B}_\delta\big(\hat{x}_j(k{+}\tau \mid k{-}1)\big) = \big\{\, x : \|x - \hat{x}_j(k{+}\tau \mid k{-}1)\| \leq \delta \,\big\}$
- Discounted set-membership trust (Eq. (16)): Trust in neighbor $j$ at stage $\tau$ evolves as
$t_{ij}^{\tau}(k) = \gamma\, t_{ij}^{\tau}(k{-}1) + (1 - \gamma)\, \mathbb{1}\big\{\hat{x}_j(k{+}\tau \mid k) \in \mathcal{B}_\delta\big(\hat{x}_j(k{+}\tau \mid k{-}1)\big)\big\}$
where $\gamma \in (0,1)$ is a temporal discount and $\mathbb{1}\{\cdot\}$ is the indicator function.
- Inferred trust (Eq. (17)): Stage trusts are aggregated over the horizon,
$T_{ij}(k) = \frac{1}{T} \sum_{\tau=1}^{T} t_{ij}^{\tau}(k)$
- Commitment (Eq. (18)): A running average of past trust,
$c_{ij}(k) = \frac{1}{k} \sum_{s=1}^{k} T_{ij}(s)$
- Trust-to-weight mapping: For each trajectory horizon step $\tau$, raw weights
$\tilde{w}_{ij}^{\tau}(k) = t_{ij}^{\tau}(k)\, c_{ij}(k)$
are normalized across all neighbors and the agent itself to yield consensus weights:
$w_{ij}^{\tau}(k) = \tilde{w}_{ij}^{\tau}(k) \Big/ \sum_{l \in \mathcal{N}_i \cup \{i\}} \tilde{w}_{il}^{\tau}(k)$
These metrics enable agents to dynamically downweight or ignore unreliable or inconsistent neighbor information.
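A sketch of these metrics under a discounted recursive form; the parameter names `delta` and `gamma`, the mean aggregation over the horizon, and the product-form trust-commitment weighting are illustrative assumptions, not the paper's exact definitions:

```python
import numpy as np

def stage_trust(t_prev, x_new, x_prev_pred, delta=0.2, gamma=0.9):
    """Discounted set-membership trust for one horizon stage (cf. Eq. (16)):
    reward a neighbor whose fresh prediction lands inside the delta-ball
    around the prediction it made one step earlier."""
    inside = float(abs(x_new - x_prev_pred) <= delta)
    return gamma * t_prev + (1.0 - gamma) * inside

def inferred_trust(stage_trusts):
    """Aggregate per-stage trusts over the horizon (cf. Eq. (17))."""
    return float(np.mean(stage_trusts))

def update_commitment(c_prev, trust, k):
    """Running average of past trust values (cf. Eq. (18))."""
    return ((k - 1) * c_prev + trust) / k

def consensus_weights(raw):
    """Normalize trust-derived raw weights across self and neighbors."""
    raw = np.asarray(raw, dtype=float)
    total = raw.sum()
    return raw / total if total > 0 else np.full(raw.size, 1.0 / raw.size)

# A consistent neighbor gains trust; an erratic one loses it.
t_consistent = stage_trust(0.5, x_new=1.00, x_prev_pred=1.05)  # inside ball
t_erratic    = stage_trust(0.5, x_new=1.00, x_prev_pred=2.00)  # outside ball
```

The discount `gamma` controls memory: values near 1 make trust slow to forgive and slow to punish, while smaller values make it react quickly to single-step deviations.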
4. Consensus Update Mechanism
The core consensus update rule is formulated at the trajectory level (Eq. (21)):
$\hat{x}_i(\cdot \mid k)^{+} = \sum_{j \in \mathcal{N}_i \cup \{i\}} w_{ij}(k) \odot \hat{x}_j(\cdot \mid k)$
where $\odot$ denotes element-wise multiplication. The equivalent scalar stepwise update (Eq. (22)) for stage $\tau$ is
$\hat{x}_i(k{+}\tau \mid k)^{+} = \sum_{j \in \mathcal{N}_i \cup \{i\}} w_{ij}^{\tau}(k)\, \hat{x}_j(k{+}\tau \mid k)$
specializing at $\tau = 1$ to the immediate next step:
$x_i(k{+}1) = \sum_{j \in \mathcal{N}_i \cup \{i\}} w_{ij}^{1}(k)\, \hat{x}_j(k{+}1 \mid k)$
These rules implement a trust-weighted, trajectory-level blending of neighbor states.
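A minimal sketch of this stage-wise trust-weighted blend; the array shapes and the per-stage column-stochastic convention are illustrative choices, not the paper's notation:

```python
import numpy as np

def adc_trajectory_update(trajs, W):
    """Trust-weighted blend at every horizon stage (cf. Eq. (22)).

    trajs: (n_sources, T) -- own trajectory stacked with neighbors'.
    W:     (n_sources, T) -- per-stage weights; each column sums to 1.
    Returns the updated T-step trajectory for this agent.
    """
    trajs = np.asarray(trajs, dtype=float)
    W = np.asarray(W, dtype=float)
    assert np.allclose(W.sum(axis=0), 1.0), "weights must sum to 1 per stage"
    # Element-wise product (the blend of Eq. (21)), then sum over sources.
    return (W * trajs).sum(axis=0)

# Fully trusted neighbor with equal weights -> plain stage-wise average.
trajs = np.array([[0., 0., 0.],
                  [1., 1., 1.]])
avg = adc_trajectory_update(trajs, np.full((2, 3), 0.5))
# Distrusted neighbor -> its trajectory is effectively ignored.
own = adc_trajectory_update(trajs, np.array([[1., 1., 1.],
                                             [0., 0., 0.]]))
```

Because the weights are indexed by stage $\tau$, a neighbor can be trusted on its near-term predictions while being discounted on its far-horizon ones.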
5. Lyapunov-Based Convergence Analysis
ADC's convergence is established using a Lyapunov function constructed on the pairwise trajectory disagreements (Eq. (31)):
$V(k) = \frac{1}{2} \sum_{i} \sum_{j} a_{ij}\, \big\| \hat{x}_i(\cdot \mid k) - \hat{x}_j(\cdot \mid k) \big\|^2$
Expressed in matrix-trace notation (Eq. (32)):
$V(k) = \operatorname{tr}\big( X(k)^{\top} \mathcal{L}\, X(k) \big)$
where $X(k)$ stacks all agent trajectories and $\mathcal{L}$ is the graph Laplacian extended to trajectories.
For row-stochastic, symmetric weight matrices $W(k)$, Lemma 1 asserts
$V(k{+}1) \leq V(k)$
implying a non-increasing $V$ ($\Delta V(k) \leq 0$), and the only equilibrium ($\Delta V(k) = 0$) is consensus among all agents’ trajectories. Theorem 1 guarantees asymptotic agreement under these properties.
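The pairwise-sum and matrix-trace forms of the Lyapunov function can be checked against each other numerically; the two-agent example and the symmetric row-stochastic weight matrix below are illustrative:

```python
import numpy as np

def lyapunov_pairwise(X, A):
    """Pairwise-disagreement Lyapunov function (cf. Eq. (31)):
    V = (1/2) * sum_{i,j} a_ij * ||X_i - X_j||^2 over trajectory rows."""
    n = len(X)
    return 0.5 * sum(A[i, j] * np.sum((X[i] - X[j]) ** 2)
                     for i in range(n) for j in range(n))

def lyapunov_trace(X, A):
    """Equivalent matrix-trace form (cf. Eq. (32)): V = tr(X^T L X)."""
    L = np.diag(A.sum(axis=1)) - A   # graph Laplacian
    return float(np.trace(X.T @ L @ X))

# Two agents with 3-step trajectories on a single edge.
A = np.array([[0., 1.],
              [1., 0.]])
X = np.array([[0., 0., 0.],
              [1., 1., 1.]])
W = np.array([[0.6, 0.4],            # symmetric, row-stochastic weights
              [0.4, 0.6]])
V_before = lyapunov_pairwise(X, A)
V_after = lyapunov_pairwise(W @ X, A)   # one weighted blending step
```

One blending step with a symmetric row-stochastic `W` strictly shrinks the disagreement here, illustrating the monotone decrease that Lemma 1 establishes in general.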
6. Algorithmic Flow
A summary of the per-step algorithm performed at each agent is as follows:
- Inputs: Current and previous neighbor predicted trajectories $\hat{x}_j(\cdot \mid k)$ and $\hat{x}_j(\cdot \mid k{-}1)$; parameters $\delta$, $\gamma$, $T$.
- Trust evaluation: For each neighbor $j$ and each horizon stage $\tau$, construct $\mathcal{B}_\delta(\hat{x}_j(k{+}\tau \mid k{-}1))$, compute $t_{ij}^{\tau}(k)$, aggregate to $T_{ij}(k)$, and update the commitment $c_{ij}(k)$.
- Weight assignment: Form $\tilde{w}_{ij}^{\tau}(k)$, normalize to obtain $w_{ij}^{\tau}(k)$.
- Trajectory update: Apply Eq. (22) for all steps within the horizon.
- Broadcast: Share the updated trajectory $\hat{x}_i(\cdot \mid k{+}1)$.
This sequence ensures that each agent continuously integrates its view of trustworthiness into the next prediction horizon.
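Putting the pieces together, a compact end-to-end step for scalar states; the hold-state predictor, parameter values, self-weight convention, and trust initialization are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def adc_step(x, prev_pred, trust, A, T=3, delta=0.3, gamma=0.9):
    """One illustrative ADC step for all agents (scalar states).

    x:         (n,) current states
    prev_pred: (n, T) trajectories broadcast at the previous step
    trust:     (n, n) pairwise trust coefficients
    Returns the updated (x, predictions, trust).
    """
    n = len(x)
    pred = np.tile(x[:, None], (1, T))   # hold-state predictor (placeholder)
    # Trust: discounted set-membership check of new vs. previous predictions.
    for i in range(n):
        for j in range(n):
            if A[i, j]:
                inside = float(np.all(np.abs(pred[j] - prev_pred[j]) <= delta))
                trust[i, j] = gamma * trust[i, j] + (1.0 - gamma) * inside
    # Weights: unit self-weight plus trust toward each neighbor, normalized.
    x_new = np.empty(n)
    for i in range(n):
        w = np.where(A[i] > 0, trust[i], 0.0)
        w[i] = 1.0
        x_new[i] = (w / w.sum()) @ pred[:, 0]
    return x_new, pred, trust

# 3-agent path graph: honest agents converge while trust stays high.
A = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
x = np.array([0.0, 0.5, 1.0])
prev = np.tile(x[:, None], (1, 3))
trust = A.copy()                      # start by fully trusting neighbors
for _ in range(100):
    x, prev, trust = adc_step(x, prev, trust, A)
```

With all agents behaving honestly, the predictions stay inside the trust balls, the weights remain stable, and the states contract to a common value, matching the nominal behavior described in Section 7.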
7. Empirical Findings and Performance Properties
Simulations on a connected 20-agent spanning-tree graph confirm the effectiveness of ADC under various adversarial regimes (Renganathan et al., 14 Jul 2025). Key features:
- Scenario: A 20-agent network run for 500 steps, with fixed horizon, trust-radius, and discount parameters (values as reported in the paper). 25% of the agents act “stubborn” (state-locked), and another 25% inject random state deviations.
- Convergence: All non-adversarial agents reach consensus within approximately 200 steps, with the consensus error decaying to a small residual.
- Trust dynamics: Trust coefficients $t_{ij}^{\tau}(k)$ decline sharply for neighbors who misbehave, then recover rapidly when normal behavior resumes (Fig. 4).
- Robustness: The dynamic trust–commitment weighting confers resilience to both persistent (stubborn) and sporadic (random) adversarial behaviors.
- Horizon effects: Increasing the horizon $T$ accelerates misbehavior detection but can slightly slow convergence in nominal (non-adversarial) conditions.
These results demonstrate a trade-off between early detection of anomalies and nominal rate of consensus, with trust-weighting providing robust adaptation even as underlying agent intentions fluctuate.
ADC’s rolling-horizon predictive trust-based structure represents a significant advance for resilient coordination in complex networked multi-agent systems (Renganathan et al., 14 Jul 2025).