
ADC: Rolling-Horizon Trust Consensus

Updated 4 May 2026
  • The ADC protocol is a predictive, trust-based consensus mechanism where agents share short-horizon trajectories to anticipate neighbor behavior and enhance coordination.
  • It dynamically computes trust and commitment metrics from current and previous predictions, enabling adaptive weight assignments even under adversarial conditions.
  • Simulations demonstrate rapid consensus achievement and effective anomaly mitigation, balancing early misbehavior detection with steady convergence under nominal settings.

Rolling-Horizon Predictive Trust-Based Consensus Protocols, formalized as the Anticipatory Distributed Coordination (ADC) protocol, are a class of multi-agent coordination mechanisms in which agents share short-horizon predicted trajectories with their neighbors to enable consensus in dynamic, uncertain, or adversarial environments. Unlike classical consensus schemes that operate solely on instantaneous neighbor states, ADC introduces predictive foresight and dynamic trust estimation. Agents leverage shared rolling-horizon trajectories both to anticipate neighbors’ intentions and to infer trust and commitment, robustifying coordination even under behaviors such as stubbornness, erratic deviations, or missing data (Renganathan et al., 14 Jul 2025).

1. Motivation and Conceptual Distinctions

Classical consensus protocols update agent states based only on the current values of their neighbors, as in $x_i(k+1)=\sum_{j\in\mathcal N_i}w_{ij}(k)\,x_j(k)$. This approach is vulnerable in settings where information may be delayed, manipulated, or otherwise unreliable.
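For reference, the classical update can be sketched in a few lines; the 3-agent line graph and its weights below are arbitrary illustrative choices, not from the paper:

```python
import numpy as np

# Classical consensus on a 3-agent line graph (0-1-2): each agent averages
# its neighbors' *current* states with fixed, row-stochastic weights.
W = np.array([[0.50, 0.50, 0.00],
              [0.25, 0.50, 0.25],
              [0.00, 0.50, 0.50]])   # fixed weight matrix w_ij

x = np.array([0.0, 1.0, 2.0])        # initial scalar states x_i(0)
for _ in range(200):
    x = W @ x                        # x_i(k+1) = sum_j w_ij(k) x_j(k)

print(x)  # all three entries agree to numerical precision
```

With fixed weights, a faulty or manipulated neighbor is averaged in at full weight forever; the trust machinery described in the following sections exists precisely to make these weights adaptive.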

ADC's central innovation is the use of rolling-horizon predictions: at every step $k$, each agent computes and shares its own short future trajectory

$$\mathsf{x}_i(\kappa_{k,T}) = \begin{bmatrix} x_i(k\mid k) \\ x_i(k+1\mid k) \\ \vdots \\ x_i(k+T-1\mid k) \end{bmatrix} \in \mathbb R^T$$

with its neighbors (Eq. (5)). Agents then use not only these current predictions but also previous-step predictions to infer the reliability (“trust”) and persistence (“commitment”) of their neighbors’ anticipated behavior. The consensus update is weighted by these dynamically inferred trust parameters rather than fixed or static adjacency-based weights.

Key differences relative to traditional consensus protocols are summarized in Table 1:

Aspect              | Classical Consensus           | ADC Protocol
Input               | Instantaneous states          | Rolling-horizon predictions
Neighbor assessment | None or static/non-predictive | Dynamic trust and commitment inference
Consensus weights   | Exogenous or fixed            | Trust- and commitment-adaptive

2. Rolling-Horizon Predictive Coordination Scheme

At each time $k$, every agent implements a rolling-horizon communication and update scheme:

  • Prediction horizon: The future look-ahead window is a fixed integer $T$.
  • Data communicated: Each agent broadcasts $\mathsf{x}_i(\kappa_{k,T})$ and retains $\mathsf{x}_i(\kappa_{k-1,T})$ for trust analysis. The set of trajectory data received by agent $i$ at step $k$ is

$$\mathcal X^i(k,T) = \bigcup_{j\in\mathcal J_i} \left\{ \mathsf{x}_j(\kappa_{k-1,T}),\ \mathsf{x}_j(\kappa_{k,T}) \right\}.$$

  • Rolling update sequence: Agents perform the following per time-step:
    1. Exchange predicted trajectories;
    2. Retain memory of previous predictions for cross-step comparison;
    3. Learn trust and commitment traits;
    4. Incorporate the predicted future data into the subsequent rolling-horizon predictions and communicate them.

This approach enables multi-step “foresight” regarding the intentions and deviation risks of neighboring agents, supporting resilience to packet drops and adversarial behavior.
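The cross-step comparison in step 2 relies on the fact that consecutive horizons overlap: the trajectory shared at step $k$ re-forecasts time instants already covered by the step $k-1$ trajectory. A minimal sketch of extracting and comparing those overlapping entries (the numeric values are invented for illustration):

```python
import numpy as np

# A neighbor's T-step predictions shared at steps k-1 and k. Entries that
# refer to the same absolute time instant overlap and can be cross-checked.
prev = np.array([1.0, 1.1, 1.2, 1.3])   # x_j(k-1|k-1) ... x_j(k+2|k-1)
curr = np.array([1.1, 1.2, 1.25, 1.3])  # x_j(k|k)     ... x_j(k+3|k)

# curr[:-1] forecasts the same instants k ... k+2 as prev[1:].
deviation = np.abs(curr[:-1] - prev[1:])
print(deviation)  # small deviations indicate consistent, credible predictions
```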

3. Trust and Commitment Metrics

ADC introduces trajectory-based trust and commitment metrics to quantify the credibility and consistency of neighbor predictions:

  • Trust radius: A fixed scalar radius defines, around each previously shared prediction, a ball of values considered consistent; a neighbor's new prediction for the same time instant is deemed trustworthy at that horizon step if it falls inside this ball.

  • Discounted set-membership trust (Eq. (16)): Trust in a neighbor at step $k$ is a temporally discounted aggregate of indicator functions, each testing whether the neighbor's current prediction for a given horizon step lies within the trust ball of its previous prediction.

  • Inferred trust (Eq. (17)): The per-step set-membership scores are combined into a scalar trust coefficient in $[0,1]$ for each neighbor.

  • Commitment (Eq. (18)): A running average of past trust values, capturing how persistently a neighbor has behaved consistently with its own predictions.

  • Trust-to-weight mapping: For each trajectory horizon step, the product of a neighbor's trust and commitment is computed and normalized across all neighbors (together with a self-weight) to yield the consensus weights.

These metrics enable agents to dynamically downweight or ignore unreliable or inconsistent neighbor information.
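A hedged sketch of how such metrics might be computed; the trust radius, discount factor, and running-average commitment rule below are generic stand-ins for Eqs. (16)–(18), not the paper's exact formulas:

```python
import numpy as np

def set_membership_trust(curr, prev, radius=0.1, discount=0.9):
    """Generic stand-in for Eqs. (16)-(17): score each overlapping horizon
    step by whether the new prediction stays within a trust ball around the
    earlier one, with later steps geometrically discounted, then normalize
    to [0, 1]. Radius and discount values are invented for illustration."""
    inside = np.abs(curr[:-1] - prev[1:]) <= radius       # indicator per step
    weights = discount ** np.arange(inside.size)          # temporal discount
    return float(np.sum(weights * inside) / np.sum(weights))

def update_commitment(commitment, trust, k):
    """Eq. (18)-style running average of the trust history."""
    return commitment + (trust - commitment) / (k + 1)

prev = np.array([1.0, 1.1, 1.2, 1.3])
curr = np.array([1.1, 1.2, 1.25, 1.3])    # consistent neighbor
tau_good = set_membership_trust(curr, prev)

erratic = np.array([1.1, 2.0, 0.2, 3.0])  # wildly revised predictions
tau_bad = set_membership_trust(erratic, prev)
commit = update_commitment(1.0, tau_bad, k=1)  # commitment decays gradually

print(tau_good)            # 1.0: every overlapping step lies in its trust ball
print(tau_good > tau_bad)  # True: inconsistency lowers the trust score
```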

4. Consensus Update Mechanism

The core consensus update rule is formulated at the trajectory level (Eq. (21)): agent $i$'s new trajectory is a weighted combination of the trajectories received from its neighborhood, with the trust-derived weight vectors applied element-wise ($\odot$) across the horizon. The equivalent scalar stepwise update (Eq. (22)) for each stage $t$ within the horizon takes the trust-weighted form

$$x_i(k+t\mid k+1)=\sum_{j\in\mathcal J_i} w_{ij}(k)\,x_j(k+t\mid k),$$

specializing at $t=1$ to the immediate next step, $x_i(k+1\mid k+1)=\sum_{j\in\mathcal J_i} w_{ij}(k)\,x_j(k+1\mid k)$.

These rules implement a trust-weighted, trajectory-level blending of neighbor states.
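A small illustration of this trust-weighted blending, with invented trajectories and trust–commitment products:

```python
import numpy as np

# Invented trajectories and trust-commitment products for one neighborhood.
trajectories = {
    "self": np.array([0.0, 0.1, 0.2]),
    "j1":   np.array([1.0, 1.0, 1.0]),
    "j2":   np.array([5.0, 5.0, 5.0]),    # suspicious outlier neighbor
}
trust_commit = {"self": 1.0, "j1": 0.9, "j2": 0.05}

# Normalize trust-commitment products into row-stochastic consensus weights.
total = sum(trust_commit.values())
weights = {j: tc / total for j, tc in trust_commit.items()}

# Trajectory-level blend: every horizon entry is a trust-weighted average.
updated = sum(w * trajectories[j] for j, w in weights.items())
print(updated)  # pulled toward j1, barely influenced by the outlier j2
```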

5. Lyapunov-Based Convergence Analysis

ADC's convergence is established using a Lyapunov function constructed on the pairwise trajectory disagreements (Eq. (31)):

$$V(k)=\tfrac{1}{2}\sum_{i}\sum_{j\in\mathcal J_i}\big\lVert \mathsf{x}_i(\kappa_{k,T})-\mathsf{x}_j(\kappa_{k,T})\big\rVert^2.$$

Expressed in matrix-trace notation (Eq. (32)), $V(k)=\operatorname{tr}\!\big(\mathsf X(k)^{\top}\mathcal L\,\mathsf X(k)\big)$, where $\mathsf X(k)$ stacks all agent trajectories and $\mathcal L$ is the graph Laplacian extended to trajectories.

For a row-stochastic, symmetric weight matrix, Lemma 1 asserts

$$V(k+1)-V(k)\le 0,$$

implying that $V$ is non-increasing along the protocol's iterations, and that the only equilibrium ($V(k+1)=V(k)$) is consensus among all agents' trajectories. Theorem 1 guarantees asymptotic agreement under these properties.

6. Algorithmic Flow

A summary of the per-step algorithm performed at each agent is as follows:

  1. Inputs: Current and previous neighbor predicted trajectories; parameters such as the horizon length, trust radius, and temporal discount.
  2. Trust evaluation: For each neighbor and each horizon step, construct the trust ball around the previous prediction, compute the set-membership score, aggregate the scores into the neighbor's trust coefficient, and update its commitment value.
  3. Weight assignment: Form the trust–commitment products and normalize them to obtain the consensus weights.
  4. Trajectory update: Apply Eq. (22) for all steps within the horizon.
  5. Broadcast: Share the updated predicted trajectory with all neighbors.

This sequence ensures that each agent continuously integrates its view of trustworthiness into the next prediction horizon.
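The five steps above can be collected into a single per-agent routine; the trust rule and parameter values here are illustrative assumptions, not the paper's exact equations:

```python
import numpy as np

def adc_step(curr, prev, commit, i, k, radius=0.5, discount=0.9):
    """One iteration of the per-step loop at agent i. `curr` and `prev`
    hold each neighbor's current and previous T-step trajectories (rows).
    The trust rule and parameters are illustrative assumptions."""
    N, T = curr.shape
    # Step 2: trust from discounted set-membership over overlapping entries.
    disc = discount ** np.arange(T - 1)
    tau = np.array([
        np.sum(disc * (np.abs(curr[j, :-1] - prev[j, 1:]) <= radius)) / disc.sum()
        for j in range(N)
    ])
    commit = commit + (tau - commit) / (k + 1)   # running-average commitment
    # Step 3: trust-commitment products, normalized to consensus weights.
    w = tau * commit
    w[i] += 1.0                                  # guaranteed self-weight
    w /= w.sum()
    # Step 4: trajectory update, a trust-weighted blend of all trajectories.
    updated = w @ curr
    # Step 5: `updated` is the trajectory agent i broadcasts next.
    return updated, commit, w

curr = np.array([[0.0, 0.0, 0.0],
                 [1.0, 1.0, 1.0],
                 [9.0, 2.0, 7.0]])
prev = curr.copy()
prev[2] = [3.0, 8.0, 1.0]        # neighbor 2 wildly revised its predictions
traj, commit, w = adc_step(curr, prev, np.ones(3), i=0, k=1)
print(w[2] < w[1])  # True: the inconsistent neighbor is downweighted
```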

7. Empirical Findings and Performance Properties

Simulations on a connected 20-agent spanning-tree graph confirm the effectiveness of ADC under various adversarial regimes (Renganathan et al., 14 Jul 2025). Key features:

  • Scenario: A connected 20-agent network simulated for 500 steps; 25% of the agents act "stubborn" (state-locked, never updating), and another 25% randomize their states at each step.
  • Convergence: All non-adversarial agents reach consensus within approximately 200 steps, with the consensus error decaying to near zero.
  • Trust dynamics: The trust coefficient sharply declines for neighbors who misbehave, then recovers rapidly when normal behavior resumes (Fig. 4).
  • Robustness: The dynamic trust–commitment weighting confers resilience to both persistent (stubborn) and sporadic (random) adversarial behaviors.
  • Horizon effects: Increasing the prediction horizon accelerates misbehavior detection but can slightly slow convergence in nominal (non-adversarial) conditions.

These results demonstrate a trade-off between early detection of anomalies and nominal rate of consensus, with trust-weighting providing robust adaptation even as underlying agent intentions fluctuate.

ADC’s rolling-horizon predictive trust-based structure represents a significant advance for resilient coordination in complex networked multi-agent systems (Renganathan et al., 14 Jul 2025).
