Scaling Data Center TCP to Terabits with Laminar (2504.19058v1)

Published 27 Apr 2025 in cs.NI and cs.OS

Abstract: Laminar is the first TCP stack designed for the reconfigurable match-action table (RMT) architecture, widely used in high-speed programmable switches and SmartNICs. Laminar reimagines TCP processing as a pipeline of simple match-action operations, enabling line-rate performance with low latency and minimal energy consumption, while maintaining compatibility with standard TCP and POSIX sockets. Leveraging novel techniques like optimistic concurrency, pseudo segment updates, and bump-in-the-wire processing, Laminar handles the transport logic, including retransmission, reassembly, flow, and congestion control, entirely within the RMT pipeline. We prototype Laminar on an Intel Tofino2 switch and demonstrate its scalability to terabit speeds, its flexibility, and robustness to network dynamics. Laminar reaches an unprecedented 25M pkts/sec with a single host core for streaming workloads, enough to exceed 1.6Tbps with 8K MTU. Laminar delivers RDMA-equivalent performance, saving up to 16 host CPU cores versus the TAS kernel-bypass TCP stack with short RPC workloads, while achieving 1.3× higher peak throughput at 5× lower 99.99p tail latency. A key-value store on Laminar doubles the throughput-per-watt versus TAS. Demonstrating Laminar's flexibility, we implement TCP stack extensions, including a sequencer API for a linearizable distributed shared log, a new congestion control protocol, and delayed ACKs.
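
The abstract describes decomposing TCP transport logic into a pipeline of simple match-action stages but gives no implementation detail. As a rough, hypothetical illustration of that decomposition idea only (not Laminar's actual tables, actions, or state layout), the C sketch below models TCP receive processing as three sequential stages: per-flow state lookup, in-order/out-of-order classification with sequence advancement, and ACK generation. All names and structures here are invented for exposition.

```c
/*
 * Toy illustration of TCP receive processing split into match-action-style
 * stages, in the spirit of an RMT pipeline. Hypothetical sketch only; this
 * is not Laminar's design.
 */
#include <stdint.h>
#include <stdio.h>

#define MAX_FLOWS 4

/* Per-flow connection state, analogous to pipeline register storage. */
struct flow_state {
    uint32_t rcv_nxt;   /* next expected sequence number */
    uint32_t ack_sent;  /* last ACK number emitted */
    int      in_use;
};

/* Parsed packet metadata handed from stage to stage. */
struct pkt {
    uint32_t flow_id;
    uint32_t seq;
    uint32_t len;
    int      gen_ack;   /* set by one stage, consumed by a later one */
};

static struct flow_state flows[MAX_FLOWS];

/* Stage 1: match on flow_id; action: fetch per-flow state. */
static struct flow_state *stage_lookup(const struct pkt *p) {
    if (p->flow_id < MAX_FLOWS && flows[p->flow_id].in_use)
        return &flows[p->flow_id];
    return NULL;
}

/* Stage 2: match on in-order vs. out-of-order; action: advance rcv_nxt. */
static void stage_reassembly(struct flow_state *fs, struct pkt *p) {
    if (p->seq == fs->rcv_nxt) {
        fs->rcv_nxt += p->len;   /* in order: accept and advance */
    }
    /* out of order: a real stack would buffer the segment; the toy only
     * requests a duplicate ACK carrying the current rcv_nxt */
    p->gen_ack = 1;
}

/* Stage 3: action: emit an ACK for the current rcv_nxt. */
static void stage_ack(struct flow_state *fs, const struct pkt *p) {
    if (p->gen_ack) {
        fs->ack_sent = fs->rcv_nxt;
        printf("flow %u: ACK %u\n", p->flow_id, fs->ack_sent);
    }
}

int main(void) {
    flows[0] = (struct flow_state){ .rcv_nxt = 1000, .in_use = 1 };

    struct pkt pkts[] = {
        { .flow_id = 0, .seq = 1000, .len = 100 },  /* in order: ACK 1100    */
        { .flow_id = 0, .seq = 1200, .len = 100 },  /* gap: duplicate ACK 1100 */
        { .flow_id = 0, .seq = 1100, .len = 100 },  /* now in order: ACK 1200 */
    };

    for (size_t i = 0; i < sizeof(pkts) / sizeof(pkts[0]); i++) {
        struct flow_state *fs = stage_lookup(&pkts[i]);
        if (!fs)
            continue;
        stage_reassembly(fs, &pkts[i]);
        stage_ack(fs, &pkts[i]);
    }
    return 0;
}
```

Each stage reads a small amount of per-flow state and applies one bounded action, which is the property that lets such logic map onto fixed pipeline stages; the paper's techniques (optimistic concurrency, pseudo segment updates, bump-in-the-wire processing) address the parts this toy omits.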
