Regular Octopus CXL Topologies
- Regular Octopus topologies have a rigorous combinatorial structure: they are biregular, diameter-two bipartite graphs equivalent to a 2-(n, d, 1) BIBD.
- They enable each host pair to share memory through a unique multi-headed CXL device, ensuring uniform single-hop latency and balanced bandwidth.
- The design balances pod size, device port count, and cost, offering a scalable and cost-effective solution for composable memory systems.
A regular Octopus topology defines a class of scalable, low-cost Compute Express Link (CXL) memory-pooling networks distinguished by rigorous combinatorial structure and explicit performance-versus-cost trade-offs. Formally, a regular Octopus topology is a biregular, diameter-two bipartite graph that corresponds directly to a 2-$(n, d, 1)$ balanced incomplete block design (BIBD) with tightly constrained parameters. This configuration enables each pair of hosts to share memory through a unique multi-headed CXL device (MHD), without requiring full all-to-all connectivity or expensive, high-port switches, thereby achieving memory-pooling efficiency comparable to conventional architectures at substantially reduced cost and complexity (Berger et al., 15 Jan 2025).
1. Graph-Theoretic Model
The CXL pod is formalized as a bipartite graph
$$G = (H \cup P,\, E),$$
where $H$ denotes the set of hosts (servers), $P$ the set of pools (MHDs), and $E \subseteq H \times P$ the host–pool connectivity edges (physical CXL links). In this framework, each host node in $H$ interfaces only with pool nodes in $P$ and vice versa; there are no intra-set links. This representation encodes the strict regularity and sharing constraints characteristic of Octopus topologies.
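For concreteness, the following minimal Python sketch encodes a small hypothetical pod, 4 hosts and 6 two-port MHDs, in exactly this bipartite form; the host and pool names are illustrative and not taken from the source.

```python
# Minimal sketch (hypothetical pod): 4 hosts, 6 two-port MHDs, encoded as the
# bipartite graph G = (H ∪ P, E) described above.
hosts = ["h1", "h2", "h3", "h4"]          # H: host (server) nodes
pools = {                                 # P: pool (MHD) nodes -> attached hosts
    "p1": {"h1", "h2"}, "p2": {"h1", "h3"}, "p3": {"h1", "h4"},
    "p4": {"h2", "h3"}, "p5": {"h2", "h4"}, "p6": {"h3", "h4"},
}
edges = {(h, p) for p, members in pools.items() for h in members}  # E: host-pool CXL links

# Bipartite check: every edge joins a host to a pool; there are no intra-set links.
assert all(h in hosts and p in pools for h, p in edges)
print(len(hosts), "hosts,", len(pools), "pools,", len(edges), "links")
```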
2. Biregularity and Key Constraints
A regular Octopus enforces degree regularity on both sides:
$$\deg(h) = r \quad \forall\, h \in H, \qquad \deg(p) = d \quad \forall\, p \in P,$$
meaning every host connects to exactly $r$ MHDs and every MHD is attached to exactly $d$ hosts. Additionally, the critical property requires that
$$|N(h_1) \cap N(h_2)| = 1 \quad \text{for all distinct } h_1, h_2 \in H,$$
where $N(h) \subseteq P$ is the set of pools attached to host $h$, ensuring every distinct pair of hosts shares exactly one common pool. This uniquely positions each host pair at distance two in $G$, with a single mutual rendezvous device, and precludes congestion or ambiguity in mediation paths.
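A short sketch of how these constraints can be checked programmatically is shown below: it verifies biregularity and the unique-shared-pool ($\lambda = 1$) property for a pod given as a host-to-pool adjacency map. The Fano-plane pod used as input (7 hosts, 7 three-port MHDs) is an illustrative choice, not an example from the source.

```python
from itertools import combinations

def is_regular_octopus(adj: dict[str, set[str]]) -> bool:
    """adj maps each host to the set of pools (MHDs) it is cabled to."""
    hosts = list(adj)
    host_degrees = {len(ps) for ps in adj.values()}            # every host should attach to r pools
    pool_members: dict[str, set[str]] = {}
    for h, ps in adj.items():
        for p in ps:
            pool_members.setdefault(p, set()).add(h)
    pool_degrees = {len(hs) for hs in pool_members.values()}   # every pool should attach to d hosts
    biregular = len(host_degrees) == 1 and len(pool_degrees) == 1
    # lambda = 1: every distinct host pair shares exactly one pool.
    lambda_one = all(len(adj[a] & adj[b]) == 1 for a, b in combinations(hosts, 2))
    return biregular and lambda_one

# Fano-plane pod: n = 7 hosts, m = 7 pools, r = d = 3 (each line of the plane is one MHD).
lines = [{1, 2, 3}, {1, 4, 5}, {1, 6, 7}, {2, 4, 6}, {2, 5, 7}, {3, 4, 7}, {3, 5, 6}]
adj = {f"h{i}": {f"p{j}" for j, line in enumerate(lines) if i in line} for i in range(1, 8)}
print(is_regular_octopus(adj))  # True
```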
3. Construction and Balanced Incomplete Block Designs
The construction of regular Octopus topologies leverages the theory of balanced incomplete block designs (BIBDs). Specifically, an Octopus configuration with parameters $(n, m, r, d)$ corresponds to a 2-$(n, d, 1)$ BIBD, treating hosts as “treatments” and pools as “blocks” of size $d$. The admissible parameter sets are governed by the characteristic BIBD identities
$$r(d - 1) = n - 1, \qquad m\,d = n\,r,$$
with divisibility conditions:
- $1 < d < n$
- $(d - 1) \mid (n - 1)$
- $d(d - 1) \mid n(n - 1)$
Classical existence results and explicit combinatorial constructions (e.g., via projective planes or difference sets) provide infinite families and practical construction recipes under these arithmetic constraints.
| Parameter | Role in Topology | BIBD Interpretation |
|---|---|---|
| $n$ | # of hosts | # of “treatments” |
| $m$ | # of MHDs/pools | # of “blocks” |
| $r$ | Host degree | # of blocks per treatment |
| $d$ | Pool degree | Block size |
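These identities translate directly into a feasibility check: given a desired pod size $n$ and MHD port count $d$, the host degree $r$ and pool count $m$ are forced, and the divisibility conditions decide whether such a design can exist at all. The Python sketch below performs this derivation; it encodes only the necessary conditions above (passing the check does not by itself construct a design) and is illustrative rather than taken from the source.

```python
def octopus_parameters(n: int, d: int):
    """Derive (r, m) for a 2-(n, d, 1) design, or return None if the necessary
    counting/divisibility conditions fail. Passing is necessary, not sufficient."""
    if not (1 < d < n):
        return None
    if (n - 1) % (d - 1) != 0 or (n * (n - 1)) % (d * (d - 1)) != 0:
        return None
    r = (n - 1) // (d - 1)     # host degree, from r(d - 1) = n - 1
    m = n * r // d             # number of MHDs, from m * d = n * r
    return r, m

print(octopus_parameters(7, 3))   # (3, 7): 7 hosts on 3-port MHDs, 3 ports per host, 7 MHDs
print(octopus_parameters(8, 3))   # None: divisibility conditions fail
```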
4. Performance, Connectivity, and Pooling Semantics
The $\lambda = 1$ condition (each host pair sharing exactly one pool) imposes that the associated bipartite graph has diameter two: any two hosts connect via a unique shared pool. Consequently, the maximum host-to-host communication path is two hops, traversing a single intermediate MHD. This yields several direct properties:
- Pooling semantics: Each host pair can directly collaborate or share memory via their sole common MHD; each host reaches that device in a single hop, and the pair sits at distance two in $G$, giving uniform latency for critical shuffle or 1:1 messaging patterns.
- Bandwidth balance: Each host’s links allow even memory interleaving across its assigned MHDs, and regular port distribution ensures no network hotspots.
- Memory allocation: The topology’s structure enables straightforward stripe-based interleaving and balanced compute-to-memory mapping (see the sketch after this list).
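These bullet points correspond to simple operations on the adjacency structure: looking up the unique rendezvous MHD for a host pair, and striping a host's memory evenly across its $r$ attached MHDs. The sketch below is a hypothetical illustration (names and page granularity are invented, not from the source).

```python
def rendezvous_pool(adj: dict[str, set[str]], a: str, b: str) -> str:
    """Return the unique MHD shared by hosts a and b (lambda = 1 guarantees exactly one)."""
    (pool,) = adj[a] & adj[b]
    return pool

def stripe(adj: dict[str, set[str]], host: str, page: int) -> str:
    """Map a page index to one of the host's r MHDs by round-robin interleaving."""
    pools = sorted(adj[host])
    return pools[page % len(pools)]

# Tiny hypothetical pod: 4 hosts, 6 two-port MHDs (every host pair shares one device).
adj = {
    "h1": {"p1", "p2", "p3"}, "h2": {"p1", "p4", "p5"},
    "h3": {"p2", "p4", "p6"}, "h4": {"p3", "p5", "p6"},
}
print(rendezvous_pool(adj, "h2", "h3"))            # p4: the sole device h2 and h3 share
print([stripe(adj, "h1", pg) for pg in range(4)])  # pages interleave across p1, p2, p3
```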
5. Cost-Benefit Trade-Offs and Resource Scaling
The principal trade-off in regular Octopus architectures is between pod size ($n$), device port count ($d$), and per-host cost. Cost per host is proportional to the host degree $r$ (the number of CXL ports each host provisions), and the topology allows amortization over many inexpensive, small-port MHDs. Increasing $d$ (larger-port MHDs) raises the pod size achievable for a fixed host degree, since $n = r(d - 1) + 1$, but with rising device and intrinsic latency costs; increasing $r$ improves host connectivity but imposes greater network-interface cost per host. Optimal selection of $(n, d, r)$ places the design on the cost–pod-size–latency Pareto frontier, giving datacenter operators the flexibility to tailor deployments closely to application-level and economic constraints.
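A small enumeration makes the trade-off concrete: sweeping host degree $r$ and MHD port count $d$ gives the pod size $n = r(d - 1) + 1$ that a feasible design with those parameters would reach, alongside the per-host port count that drives cost. The sweep below is purely illustrative; the parameter values and the use of port count as a cost proxy are assumptions, not figures from the source.

```python
# Illustrative sweep of the (r, d) design space; "ports/host" stands in for
# per-host cost and is not a cost figure from the source.
def pod_size(r: int, d: int) -> int:
    return r * (d - 1) + 1          # from the identity r(d - 1) = n - 1

print(f"{'r':>3} {'d':>3} {'pod size n':>11} {'ports/host':>11}")
for r in (2, 3, 4):
    for d in (4, 8, 16):
        print(f"{r:>3} {d:>3} {pod_size(r, d):>11} {r:>11}")
```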
6. Examples and Existence Guarantees
Canonical infinite families of regular Octopus networks include:
- Finite projective planes of order $q$, yielding symmetric configurations with $n = m = q^2 + q + 1$ and $d = r = q + 1$.
- Cyclic difference sets, enabling cyclic BIBDs for many parameter combinations $(n, d)$.
- Wilson’s theorems, which guarantee (for fixed $d$) the existence of designs for all sufficiently large $n$ matching the divisibility criteria.
These constructions allow selection of practical parameters for real-world CXL pods and guide both hardware procurement and topology planning.
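As an executable illustration of the difference-set route, the sketch below builds a 2-$(13, 4, 1)$ design from the planar difference set $\{0, 1, 3, 9\}$ modulo 13 (i.e., a 13-host pod of 4-port MHDs) and verifies the unique-shared-pool property; this particular difference set is a standard textbook example, not one given in the source.

```python
from itertools import combinations

def cyclic_design(base: set[int], n: int) -> list[frozenset[int]]:
    """Blocks of a cyclic BIBD: all cyclic shifts of a planar difference set mod n."""
    return [frozenset((x + i) % n for x in base) for i in range(n)]

n, base = 13, {0, 1, 3, 9}            # planar difference set of order 3 modulo 13
blocks = cyclic_design(base, n)       # 13 MHDs, each attaching d = 4 hosts

# Every pair of the n hosts must appear together in exactly one block (lambda = 1).
for a, b in combinations(range(n), 2):
    assert sum(1 for blk in blocks if a in blk and b in blk) == 1
print(f"2-({n}, {len(base)}, 1) design with {len(blocks)} blocks verified")
```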
7. Practical Implementation and Empirical Results
Simulation with realistic production traces demonstrates that Octopus topologies achieve memory savings on par with more expensive, fully pooled designs. Hardware evaluation confirms that Octopus configurations substantially reduce RPC latency relative to RDMA (Berger et al., 15 Jan 2025). The formal structure and allocation algorithms developed for these graphs underpin robust, production-ready CXL pooling fabrics, enabling cost-effective scaling without performance compromise.
In summary, regular Octopus topologies offer a mathematically rigorous framework for designing uniform-latency, low-cost, diameter-two CXL pooling fabrics. Their equivalence to special BIBDs yields concrete existence and feasibility criteria for design parameters, guiding efficient hardware realization and scalable deployment of composable memory systems.