
Hypergraph Multi-Party Payment Channels (H-MPCs)

Updated 19 December 2025
  • H-MPCs are off-chain payment channels modeled as hypergraphs, enabling multi-party concurrency without a central coordinator, and achieving a 94.69% success rate in tests.
  • They utilize a proposer-ordered, leaderless DAG state machine with cryptographic state management to ensure safety, prevent routing deadlocks, and eliminate timeouts.
  • Experimental evaluations on a 150-node hyperedge demonstrate that H-MPCs overcome liquidity fragmentation by maintaining fungible funds and stable balance skew.

Hypergraph-based Multi-Party Payment Channels (H-MPCs) are an off-chain payment channel construction designed to address fundamental scalability and efficiency barriers of public blockchains. H-MPCs generalize traditional Payment Channel Networks (PCNs), which link users via bilateral channels, by leveraging hypergraph topology and collectively funded hyperedges for true multi-party concurrency. By integrating cryptographic state management over a proposer-ordered directed acyclic graph (DAG), H-MPCs achieve leaderless, fully concurrent payments between any participants within a hyperedge, significantly improving upon the flexibility and routing success rates of prior approaches. Empirical evaluation on a 150-node implementation demonstrates a transaction success rate of approximately 94.69%, with failure modes limited to sender insufficiency and no occurrences of timeouts or routing deadlocks (Nainwal et al., 12 Dec 2025).

1. Formalization: Hypergraph Channel Model

H-MPCs represent off-chain payment networks as hypergraphs $H = (V, E)$, where $V = \{u_1, \ldots, u_N\}$ is the set of on-chain participants and $E$ is the set of hyperedges. Each hyperedge $e \in E$ encodes one collectively funded, multi-party payment channel:

  • $e = (P, B)$, where $P \subseteq V$ with $|P| = n \geq 2$ is the participant set of hyperedge $e$, and $B = [b_1, \ldots, b_n]$ records the current off-chain available balances of all $n$ users.
  • Invariant: $\sum_{k=1}^n b_k$ equals the value of the funding UTXO.

This generalizes bilateral channels to arbitrary, overlapping sets of $n$ parties, yielding a hypergraph structure $H$ in which multi-participant channels can co-exist and interconnect.
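To make the hyperedge model concrete, here is a minimal Go sketch of the $e = (P, B)$ structure and its balance invariant. The paper's evaluation uses a Go simulator, but the names below (`Hyperedge`, `CheckInvariant`) are illustrative, not taken from it.

```go
package hmpc

import "errors"

// Hyperedge models e = (P, B): a participant set P with per-participant
// off-chain balances B, all backed by a single on-chain funding UTXO.
type Hyperedge struct {
	Participants []string // P, with |P| = n >= 2
	Balances     []uint64 // B = [b_1, ..., b_n]
	FundingValue uint64   // value locked in the funding UTXO
}

// CheckInvariant verifies the channel invariant: sum_k b_k must equal
// the funding UTXO value at every off-chain state.
func (e *Hyperedge) CheckInvariant() error {
	if len(e.Participants) < 2 || len(e.Participants) != len(e.Balances) {
		return errors.New("need n >= 2 participants, one balance each")
	}
	var sum uint64
	for _, b := range e.Balances {
		sum += b
	}
	if sum != e.FundingValue {
		return errors.New("invariant violated: sum(B) != funding UTXO value")
	}
	return nil
}
```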

2. H-MPC Hyperedge Construction

2.1 Funding and On-chain Representation

  • All $n$ participants collaboratively lock funds into a single on-chain UTXO with:

$$\text{FundingTx}: (\text{in}_1, \ldots, \text{in}_n) \rightarrow (\text{out}_H)$$

where $\text{out}_H$ is controlled by an $m$-of-$n$ multisignature or Taproot script over $P$.

2.2 State Commitment and Closing

  • Local state at epoch $t$ is committed by a Merkle root $\hat{R}_t$ over per-participant leaves:

$$L_k = H(u_{i_k} \Vert b_k) \quad \Rightarrow \quad \hat{R}_t = \text{MerkleRoot}(L_1, \ldots, L_n)$$

  • To close, one executes:

$$\text{CloseTx}: (\text{out}_H) \rightarrow (\text{out}_1, \ldots, \text{out}_n)$$

with $\text{out}_k = b_k^{(T)}$ at the final epoch $T$.
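A minimal sketch of this commitment, assuming SHA-256 as $H(\cdot)$ and a plain pairwise Merkle tree that duplicates the last node on odd levels; the paper does not pin down these implementation choices.

```go
package hmpc

import (
	"crypto/sha256"
	"encoding/binary"
)

// leaf computes L_k = H(u_k || b_k) for one participant, encoding the
// balance as a big-endian 64-bit integer.
func leaf(user string, balance uint64) [32]byte {
	buf := binary.BigEndian.AppendUint64([]byte(user), balance)
	return sha256.Sum256(buf)
}

// merkleRoot folds the leaves pairwise into the epoch commitment R_t.
func merkleRoot(leaves [][32]byte) [32]byte {
	if len(leaves) == 0 {
		return [32]byte{}
	}
	for len(leaves) > 1 {
		var next [][32]byte
		for i := 0; i < len(leaves); i += 2 {
			j := i + 1
			if j == len(leaves) {
				j = i // duplicate the last node on odd levels
			}
			next = append(next, sha256.Sum256(append(leaves[i][:], leaves[j][:]...)))
		}
		leaves = next
	}
	return leaves[0]
}
```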

2.3 Balance Evolution

  • At any epoch $t$, the hyperedge holds $\sum_k b_k^{(t)}$ in locked value. Reserved funds are tracked exactly at all times; symbolic off-chain updates via the protocol suffice until checkpointing.

3. Leaderless DAG Update Protocol

H-MPC employs a proposer-ordered, leaderless DAG state machine:

3.1 Structure and Concurrency

  • Each participant $u_i$ maintains a DAG $G_i$ of dagleaves and dagroots.
  • Each participant's own updates form a linear sequence; chains of different participants run in parallel, encoding full concurrency.

3.2 Dagleaf Proposal Protocol

When $u_i$ initiates a payment of $v$ to $u_j$:

  1. Sample a new $r_\mathrm{rev}$ and compute $H(r_\mathrm{rev})$.
  2. Compute $\Delta B = [-v - f_i \text{ at } i,\ +v \text{ at } j,\ +f_i/(n-2) \text{ at each other participant}]$.
  3. Assemble

$$\tau = \langle u_i, u_j, v, f_i, H(r_\mathrm{prev}), H(r_\mathrm{rev}) \rangle$$

and sign.

  4. Send $(\tau, \sigma_i)$ to $u_j$; upon receiving $u_j$'s counter-signature, reveal $r_\mathrm{prev}$ and broadcast.

Pseudocode (sender):

procedure ProposeLeaf(u_i, u_j, v, f_i):
  r_prev ← last_revocation_secret(u_i)
  r_rev  ← random()
  ΔB     ← ComputeDelta(n, i, j, v, f_i)
  τ      ← (u_i, u_j, v, f_i, H(r_prev), H(r_rev))
  σ_i    ← Sign_i(τ)
  send (τ, σ_i) to u_j
  wait for σ_j from u_j
  reveal r_prev; broadcast (τ, σ_i, σ_j)
end
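The $\Delta B$ computation from step 2 can be written out directly. This Go sketch of the `ComputeDelta` helper called above assumes signed integer balance deltas and that the fee $f_i$ divides evenly among the $n-2$ relay participants; the paper does not specify a rounding policy.

```go
package hmpc

import "fmt"

// ComputeDelta builds ΔB for a payment of v from sender i to receiver j:
// the sender pays v + f, the receiver gains v, and the fee f is shared
// equally by the n-2 remaining participants, so the deltas sum to zero.
func ComputeDelta(n, i, j int, v, f int64) ([]int64, error) {
	if n < 3 || i == j || i < 0 || i >= n || j < 0 || j >= n {
		return nil, fmt.Errorf("invalid participants: n=%d i=%d j=%d", n, i, j)
	}
	if f%int64(n-2) != 0 {
		return nil, fmt.Errorf("fee %d not divisible by n-2 = %d", f, n-2)
	}
	share := f / int64(n-2)
	delta := make([]int64, n)
	for k := range delta {
		delta[k] = share // relay share for every other participant
	}
	delta[i] = -v - f // sender pays amount plus fee
	delta[j] = v      // receiver gains the amount
	return delta, nil
}
```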

3.3 DAG Finalization and Security

  • Nodes append only leaves carrying the correct $H(r_\mathrm{prev})$.
  • Every $T$ seconds, nodes batch tips into a new dagroot:

$$\text{dagroot}_{t+1} = (\text{prev\_root} = \text{dagroot}_t,\ \text{parent\_tips},\ \text{MerkleRoot}(B_t + \Sigma \Delta B),\ \text{signatures})$$

with a $> 2n/3$ threshold of participant signatures.
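A sketch of the finalization rule, assuming each signature has already been verified individually; only an illustrative `Dagroot` record and the $> 2n/3$ counting logic are shown.

```go
package hmpc

// Dagroot is an illustrative checkpoint record: it links the previous
// root, references the batched DAG tips, and commits to the new balances.
type Dagroot struct {
	PrevRoot    [32]byte
	ParentTips  [][32]byte
	BalanceRoot [32]byte          // MerkleRoot(B_t + ΣΔB)
	Signatures  map[string][]byte // participant ID -> signature over the root
}

// Finalized reports whether strictly more than 2n/3 of the n hyperedge
// participants have signed this dagroot.
func (d *Dagroot) Finalized(n int) bool {
	return 3*len(d.Signatures) > 2*n
}
```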

Cryptographic Primitives

  • $H(\cdot)$ is modeled as collision-resistant.
  • All signatures are unforgeable.
  • A revealed $r_\mathrm{prev}$ precludes forking the chain at past tips.
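The revocation check behind the last point fits in a few lines, again assuming SHA-256 for $H(\cdot)$.

```go
package hmpc

import "crypto/sha256"

// VerifyReveal checks a revealed revocation secret against the hash
// H(r_prev) committed in an earlier dagleaf; once the secret is public,
// the revoked tip can no longer anchor a competing fork.
func VerifyReveal(committed [32]byte, secret []byte) bool {
	return sha256.Sum256(secret) == committed
}
```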

Formal Safety and Liveness

  • Safety (no double-spending): with fewer than $n/3$ Byzantine participants, two conflicting finalized dagroots committing different balances cannot both exist; the $> 2n/3$ threshold signatures enforce uniqueness.
  • Liveness: under continuous participation, every honest leaf is included in some finalized dagroot within $O(T)$ time.

4. Comparison with Prior Multi-Party Channels

H-MPCs remove the dependency on leaders or coordinators required by prior multi-party channel designs (e.g., Perun, Sprites), eliminating single points of failure and obviating the need for watchtowers. Each node independently proposes and sequences its own chain segment; DAG concurrency is unbounded, and only checkpoint finalization is collective. Disputes are resolved via cryptographically enforced revocation secrets and threshold signatures, removing the need for external monitors. Intra-hyperedge transfers are instantaneous; inter-hyperedge routing uses hop-by-hop conditional dagleaves.

5. Routing, Liquidity, and Fragmentation

5.1 Liquidity Fragmentation

Classical PCNs suffer from fragmentation: funds locked on an edge $(u, v)$ are siloed and not fungible elsewhere, increasing routing failure probability (as certain paths may be depleted). Channel depletion (one-sided drain) further reduces success rates.

In H-MPC, all funds in $(P, B)$ are fungible: participants can pay any subset of $P$ without fragmentation, and balance skew is minimized.

5.2 Skewness Metric

Balance skew at epoch $t$:

$$S(B^{(t)}) = \frac{1}{n}\sum_{k=1}^n \frac{|b_k^{(t)} - \bar{b}^{(t)}|}{\bar{b}^{(t)}}, \qquad \bar{b}^{(t)} = \frac{1}{n}\sum_k b_k^{(t)}$$

with $0 \leq S(B^{(t)}) \leq S_\max(n) = 2(n-1)/n$.

The maximum skew observed empirically is $\approx 0.70 \ll S_\max(150) \approx 1.986$ in the 150-node tests, indicating no severe depletion after $10^5$ payments.
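A direct Go transcription of the skew metric, useful for reproducing such measurements; the helper is illustrative, not from the paper's code.

```go
package hmpc

// Skewness computes S(B) = (1/n) Σ |b_k - mean| / mean, the normalized
// mean absolute deviation of the balance vector: 0 for perfectly even
// balances, at most 2(n-1)/n in the fully depleted case.
func Skewness(balances []uint64) float64 {
	if len(balances) == 0 {
		return 0
	}
	n := float64(len(balances))
	var sum float64
	for _, b := range balances {
		sum += float64(b)
	}
	mean := sum / n
	if mean == 0 {
		return 0
	}
	var dev float64
	for _, b := range balances {
		d := float64(b) - mean
		if d < 0 {
			d = -d
		}
		dev += d
	}
	return dev / (n * mean)
}
```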

5.3 Transaction Success Rates

Observed routing success, compared to prior work:

| Protocol | Success Rate | Failure Sources |
|---|---|---|
| Lightning | 50–80% | Routing, HTLC expiry |
| SpeedyMurmurs | 60–90% | Path selection, liquidity |
| Flare | 65–85% | Incomplete view, routing |
| Spider | 80–95% | Congestion, imbalance |
| Perun (2-party) | 90–95% | Endpoint limits |
| H-MPC | 94.69% | Only balance insufficiency |

All failures in H-MPC stem solely from insufficient sender balance; routing deadlocks and HTLC expiry are eliminated as failure causes.

6. Implementation, Evaluation, and Performance Metrics

6.1 Experimental Design

  • Go simulator, single hyperedge ($n = 150$).
  • $100{,}000$ off-chain payments in $100$ batches of $1{,}000$.
  • New dagroot finalized after each batch.
  • Metrics: batch success rate, balances, skewness, runtime.
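The evaluation loop has this shape (reusing the `Hyperedge` sketch from §1; the payment workload is abstracted behind a `propose` callback, since the exact sampling policy is not described here).

```go
package hmpc

// runBatches mirrors the experiment: 100 batches of 1,000 payments on a
// single hyperedge, finalizing a new dagroot after each batch. propose
// returns false iff the sampled payment would drive the sender's balance
// negative, the only failure mode reported.
func runBatches(e *Hyperedge, propose func(*Hyperedge) bool, checkpoint func(*Hyperedge)) (succ, fail int) {
	for batch := 0; batch < 100; batch++ {
		for tx := 0; tx < 1000; tx++ {
			if propose(e) {
				succ++
			} else {
				fail++
			}
		}
		checkpoint(e) // new dagroot finalized per batch
	}
	return succ, fail
}
```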

6.2 Empirical Results

  • Successes: $94{,}690$; failures: $5{,}310$ ($94.69\%$ success rate).
  • Per-batch successes: min $908$ ($90.8\%$), max $1{,}000$ ($100\%$).
  • Skewness stable at $\approx 0.7$.

6.3 Statistical Analysis

Success is modeled as Bernoulli($p$) over $N = 100{,}000$ trials, giving $\hat{p} = 0.9469$ with standard error $\sigma \approx 0.00071$ and a 95% confidence interval of $[94.53\%,\ 94.86\%]$.
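The quoted standard error follows from the normal approximation $\sigma = \sqrt{\hat{p}(1-\hat{p})/N}$. A quick Wald-interval check is below; the paper may use a slightly different interval construction.

```go
package hmpc

import "math"

// WaldCI returns the 95% normal-approximation confidence interval for a
// Bernoulli success probability estimated from k successes in n trials.
// For k = 94,690 and n = 100,000 the standard error is ≈ 0.00071.
func WaldCI(k, n int) (lo, hi float64) {
	p := float64(k) / float64(n)
	se := math.Sqrt(p * (1 - p) / float64(n))
	return p - 1.96*se, p + 1.96*se
}
```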

6.4 Throughput and Latency

  • Failures occur only when a proposal would drive the sender's balance negative.
  • No HTLC timeouts, no watchtower delays.
  • Batch latency: $T$. Throughput: $\text{batch\_size}/T$; with $T = 1$ s, this yields $1{,}000$ tx/s per 150-party hyperedge.

7. Open Problems and Future Directions

Identified limitations and research opportunities include:

  • Scalability to true multi-hyperedge topologies ($|E| > 1$) under real-world networking latency remains untested.
  • Incentive compatibility of the relay fee structure $f_i/(n-2)$ requires comprehensive economic analysis.
  • On-chain implementations using advanced primitives such as Taproot, together with gas/fee measurements for dispute-resolution cases, remain open.
  • Dynamic hyperedge creation and optimal liquidity allocation across intersecting hyperedges are open lines for future study (Nainwal et al., 12 Dec 2025).