
High Performance Consensus without Duplication: Multi-pipeline Hotstuff (2205.04179v4)

Published 9 May 2022 in cs.DC

Abstract: The state-of-the-art HotStuff protocol operates an efficient pipeline in which a stable leader drives decisions with linear communication and two message round-trips. However, its unified proposing-voting pattern is not sufficient to exploit the bandwidth and concurrency of modern systems. In addition, the two message rounds needed to produce a certified proposal are a significant performance bottleneck. This study therefore developed a new consensus protocol for permissioned blockchains, Multi-pipeline HotStuff. To the best of the authors' knowledge, it is the first protocol that combines multiple HotStuff instances to propose batches in order without duplicating transactions across concurrent proposals, such that a proposal is made optimistically once a correct replica determines that the current proposal is valid and will be certified by quorum votes in the near future. Because the protocol allows simultaneous proposing and voting without transaction duplication, it produces more proposals in every two message rounds and further boosts throughput at a latency comparable to that of HotStuff. The evaluation experiments confirmed that the throughput of Multi-pipeline HotStuff exceeds that of state-of-the-art protocols by approximately 60% without significantly increasing end-to-end latency under varying system sizes. Moreover, the proposed optimization also performs better under poor network conditions.
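
To make the mechanism concrete, the following is a minimal illustrative sketch in Python of the multi-pipeline idea as described in the abstract: a leader keeps several HotStuff-style proposal slots in flight, optimistically proposes the next in-order batch before earlier proposals are certified, and draws transactions from a shared mempool so that no transaction is duplicated across proposals. All names and constants here (MultiPipelineLeader, Proposal, QUORUM, NUM_PIPELINES) are illustrative assumptions, not the authors' implementation.

    from collections import deque
    from dataclasses import dataclass

    QUORUM = 3          # votes needed to certify a proposal (e.g., 2f + 1 with 4 replicas); assumed value
    NUM_PIPELINES = 3   # concurrent HotStuff instances driven by the same leader; assumed value

    @dataclass
    class Proposal:
        seq: int                  # global proposal order (batches are proposed in order)
        batch: list               # transactions carried by this proposal, drawn without duplication
        votes: int = 0            # quorum votes collected so far
        certified: bool = False   # True once a quorum certificate is formed

    class MultiPipelineLeader:
        def __init__(self, mempool):
            self.mempool = deque(mempool)         # shared mempool; each transaction is consumed once
            self.next_seq = 0
            self.in_flight: list[Proposal] = []   # proposals awaiting certification

        def can_propose(self) -> bool:
            # Optimistic rule: keep up to NUM_PIPELINES uncertified proposals in flight,
            # so a new batch goes out before earlier ones are certified.
            return len(self.in_flight) < NUM_PIPELINES and bool(self.mempool)

        def propose(self, batch_size=2) -> Proposal:
            # Pop transactions from the shared mempool so batches never overlap.
            batch = [self.mempool.popleft() for _ in range(min(batch_size, len(self.mempool)))]
            p = Proposal(seq=self.next_seq, batch=batch)
            self.next_seq += 1
            self.in_flight.append(p)
            return p

        def on_vote(self, proposal: Proposal):
            proposal.votes += 1
            if proposal.votes >= QUORUM:
                proposal.certified = True

        def committed_in_order(self):
            # Commit certified proposals strictly in sequence order.
            out = []
            while self.in_flight and self.in_flight[0].certified:
                out.append(self.in_flight.pop(0))
            return out

    if __name__ == "__main__":
        leader = MultiPipelineLeader(mempool=[f"tx{i}" for i in range(8)])
        # Two proposals go out back-to-back before either is certified.
        p0 = leader.propose()
        p1 = leader.propose()
        for _ in range(QUORUM):
            leader.on_vote(p0)
            leader.on_vote(p1)
        for p in leader.committed_in_order():
            print(p.seq, p.batch)   # batches commit in order with disjoint transactions

Under this kind of scheme, every two message rounds can carry several certified proposals instead of one, which is the source of the throughput gain claimed in the abstract.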

Citations (2)

Authors (1)
