Hypergraph Multi-Party Payment Channels (H-MPCs)
- H-MPCs are off-chain payment channels modeled as hypergraphs, enabling multi-party concurrency without a central coordinator, and achieving a 94.69% success rate in tests.
- They utilize a proposer-ordered, leaderless DAG state machine with cryptographic state management to ensure safety, prevent routing deadlocks, and eliminate timeouts.
- Experimental evaluations on a 150-node hyperedge demonstrate that H-MPCs overcome liquidity fragmentation by maintaining fungible funds and stable balance skew.
Hypergraph-based Multi-Party Payment Channels (H-MPCs) are an off-chain payment channel construction designed to address fundamental scalability and efficiency barriers of public blockchains. H-MPCs generalize traditional Payment Channel Networks (PCNs), which link users via bilateral channels, by leveraging hypergraph topology and collectively funded hyperedges for true multi-party concurrency. By integrating cryptographic state management over a proposer-ordered directed acyclic graph (DAG), H-MPCs achieve leaderless, fully concurrent payments between any participants within a hyperedge, significantly improving upon the flexibility and routing success rates of prior approaches. Empirical evaluation on a 150-node implementation demonstrates a transaction success rate of approximately 94.69%, with failure modes limited to sender insufficiency and no occurrences of timeouts or routing deadlocks (Nainwal et al., 12 Dec 2025).
1. Formalization: Hypergraph Channel Model
H-MPCs represent off-chain payment networks as hypergraphs, denoted by $\mathcal{H} = (V, E)$, where $V$ is the set of on-chain participants and $E$ is the set of hyperedges. Each hyperedge encodes one collectively funded, multi-party payment channel:
- Each hyperedge $e = (P_e, B_e) \in E$, where $P_e \subseteq V$ with $|P_e| = n$ is the participant set of hyperedge $e$, and $B_e : P_e \to \mathbb{R}_{\geq 0}$ records the current off-chain available balances of all its users.
- Invariant: $\sum_{u \in P_e} B_e(u) = v_{\text{fund}}$, the funding UTXO value.
This generalizes bilateral channels to arbitrary, overlapping sets of parties, enabling a hypergraph structure where multi-participant channels can co-exist and interconnect.
2. H-MPC Hyperedge Construction
2.1 Funding and On-chain Representation
- All $n$ participants collaboratively lock funds into a single on-chain UTXO of value $v_{\text{fund}} = \sum_{i=1}^{n} d_i$, where $d_i$ is the deposit of participant $u_i$; the UTXO is controlled by an $n$-of-$n$ multisignature or Taproot script (for $n > 2$).
2.2 State Commitment and Closing
- Local state at epoch $t$ is committed by a Merkle root over per-participant leaves: $R^{(t)} = \mathrm{MerkleRoot}\big(\{H(u_i \,\|\, B^{(t)}(u_i))\}_{i=1}^{n}\big)$.
- To close, the participants co-sign a closing transaction that spends the funding UTXO and pays each $u_i$ its balance $B^{(T)}(u_i)$, consistent with the committed root $R^{(T)}$ at the final epoch $T$.
2.3 Balance Evolution
- At any epoch $t$, hyperedge $e$ holds the full locked value $v_{\text{fund}}$. Reserved funds are always accurately tracked; purely symbolic (off-chain) balance updates via the protocol suffice until checkpointing.
3. Leaderless DAG Update Protocol
H-MPC employs a proposer-ordered, leaderless DAG state machine:
3.1 Structure and Concurrency
- Each participant maintains a DAG of dagleaves (individual payment updates) and dagroots (collective checkpoints).
- Each participant's own leaves form a linear sequence; the chains of different participants run in parallel, encoding full concurrency.
3.2 Dagleaf Proposal Protocol
When $u_i$ initiates a payment of value $v$ (with fee $f_i$) to $u_j$:
- Sample a new revocation secret $r_{\text{rev}}$ and compute its commitment $H(r_{\text{rev}})$.
- Compute the balance delta $\Delta B$ that the payment applies to $u_i$, $u_j$, and the other participants.
- Assemble the dagleaf $\tau = (u_i, u_j, v, f_i, H(r_{\text{prev}}), H(r_{\text{rev}}))$
and sign it, producing $\sigma_i$.
- Send $(\tau, \sigma_i)$ to $u_j$; upon receiving $u_j$'s signature $\sigma_j$, reveal $r_{\text{prev}}$ and broadcast $(\tau, \sigma_i, \sigma_j)$.
Pseudocode (sender):

```
procedure ProposeLeaf(u_i, u_j, v, f_i):
    r_prev ← last_revocation_secret(u_i)
    r_rev  ← random()
    ΔB     ← ComputeDelta(n, i, j, v, f_i)
    τ      ← (u_i, u_j, v, f_i, H(r_prev), H(r_rev))
    σ_i    ← Sign_i(τ)
    send (τ, σ_i) to u_j
    wait for σ_j from u_j
    reveal r_prev; broadcast (τ, σ_i, σ_j)
end
```
3.3 DAG Finalization and Security
- Nodes append only leaves carrying valid signatures and a correct revocation commitment $H(r_{\text{prev}})$.
- Every $\Delta$ seconds, nodes batch the current DAG tips into a new dagroot, a Merkle commitment over those tips authorized by a threshold participant signature.
Cryptographic Primitives
- modeled collision-resistant.
- All signatures are unforgeable.
- Revealed precludes chain forking at past tips.
Formal Safety and Liveness
- Safety (No Double-Spending): Under Byzantine faults, two conflicting finalized dagroots with different balances cannot occur: threshold signatures enforce uniqueness.
- Liveness: Honest leaves are included in some finalized dagroot within bounded time (on the order of one checkpoint interval $\Delta$) under continuous participation.
4. Comparison with Prior Multi-Party Channels
H-MPCs remove the dependency on leaders or coordinators required by prior MPC designs (e.g., Perun, Sprites), precluding single points of failure and obviating the need for watchtowers. Each node independently proposes and sequences its chain segment; as DAG concurrency is unbounded, only checkpoint finalization is collective. Disputes are resolved via cryptographically enforced revocable secrets and threshold signatures, eliminating external monitors. Intra-hyperedge transfers are instantaneous; inter-hyperedge routing utilizes hop-by-hop conditional dagleaves.
5. Routing, Liquidity, and Fragmentation
5.1 Liquidity Fragmentation
Classical PCNs suffer from fragmentation: funds locked on a bilateral edge $(u, v)$ are siloed and not fungible elsewhere, increasing routing failure probability as certain paths become depleted. Channel depletion (one-sided drain) further reduces success rates.
In H-MPC, all funds locked in a hyperedge $e$ are fungible: participants can pay any subset of $P_e$ without fragmentation, and balance skew is minimized.
5.2 Skewness Metric
Balance skew at epoch $t$ is the normalized mean absolute deviation of balances, $S(B^{(t)}) = \frac{1}{n\bar{B}} \sum_{i=1}^{n} \big|B^{(t)}(u_i) - \bar{B}\big|$ with $\bar{B} = \frac{1}{n}\sum_{i=1}^{n} B^{(t)}(u_i)$,
with $0 \leq S(B^{(t)}) \leq S_\max(n) = 2(n-1)/n$.
Empirically, the maximum observed skew was $S_\max(\mathrm{obs}) \approx 0.70 \ll 1.986$ in the 150-node tests, indicating no severe depletion over the course of the payment workload.
5.3 Transaction Success Rates
Observed routing success, compared to prior work:
| Protocol | Success Rate | Failure Sources |
|---|---|---|
| Lightning | 50–80% | Routing, HTLC expiry |
| SpeedyMurmurs | 60–90% | Path selection, liquidity |
| Flare | 65–85% | Incomplete view, routing |
| Spider | 80–95% | Congestion, imbalance |
| Perun (2-party) | 90–95% | Endpoint limits |
| H-MPC | 94.69% | Only balance insufficiency |
All failures in H-MPC derive solely from local sender balance insufficiency; routing-deadlock and HTLC-expiry failure modes are eliminated entirely.
6. Implementation, Evaluation, and Performance Metrics
6.1 Experimental Design
- Go simulator, single hyperedge ($n = 150$).
- $N = 100{,}000$ off-chain payments in $100$ batches of $1{,}000$.
- New dagroot finalized after each batch.
- Metrics: batch success, balance, skewness, runtime.
6.2 Empirical Results
- Success: $94{,}690$ payments ($94.69\%$); failure: $5{,}310$ ($5.31\%$).
- Per-batch successes: min $908$ ($90.8\%$), max $1{,}000$ ($100\%$).
- Skewness remained stable, peaking at $S \approx 0.70$.
6.3 Statistical Analysis
Success is modeled as Bernoulli($p$) with $\hat{p} = 0.9469$ over $N = 100{,}000$ trials, giving standard error $\sqrt{\hat{p}(1-\hat{p})/N} \approx 7.1 \times 10^{-4}$. 95% CI: $[0.9455, 0.9483]$.
6.4 Throughput and Latency
- Failures occur only when a proposal would drive the sender's balance negative.
- No HTLC timeouts, no watchtower delays.
- Batch latency is dominated by the checkpoint interval: each batch of $1{,}000$ payments finalizes in one dagroot round of length $\Delta$, so throughput scales as $1{,}000/\Delta$ payments per second per 150-party hyperedge.
7. Open Problems and Future Directions
Identified limitations and research opportunities include:
- Scalability to true multi-hyperedge topologies ($|E| > 1$) with real-world networking latency remains untested.
- Incentive compatibility for relay fee structure requires comprehensive economic analysis.
- On-chain implementations utilizing advanced primitives such as Taproot, together with gas/fee measurement for dispute resolution cases, are open.
- Dynamic hyperedge creation and optimal liquidity allocation across intersecting hyperedges are open lines for future study (Nainwal et al., 12 Dec 2025).