Sionna RK: GPU-Accelerated 5G PHY Platform
- Sionna RK is a comprehensive, GPU-accelerated simulation and experimental platform designed for rigorous 5G PHY research and AI/ML-based radio access prototyping.
- It integrates advanced hardware like NVIDIA Jetson AGX Orin with open-source software stacks such as OpenAirInterface and the TensorFlow-based Sionna Library for real-time wireless research.
- The platform supports end-to-end differentiability, extensible modules including PyJama for jamming studies, and reproducible benchmarking to drive next-generation PHY innovations.
The Sionna Research Kit (RK) is a comprehensive, GPU-accelerated experimental and simulation platform for rigorous research and rapid prototyping in next-generation physical-layer (PHY) design, including 5G New Radio (NR), AI/ML-driven radio access networks (AI-RAN), and adversarial wireless scenarios. Built around NVIDIA’s Sionna physical-layer library, the Jetson AGX Orin edge AI platform, the OpenAirInterface (OAI) software-defined 5G stack, and extensible ML/AI pipelines (notably TensorRT), Sionna RK unifies standards-compliant real-world 5G operation with scalable, differentiable, and reproducible link-level simulation, including extensions such as PyJama for jamming and anti-jamming studies (Hoydis et al., 2022, Cammerer et al., 19 May 2025, Ulbricht et al., 2024).
1. Platform Architecture and Core Principles
Sionna RK is constructed as a modular, software-defined research environment centered on the following elements:
- Hardware: NVIDIA Jetson AGX Orin (12-core ARM Cortex-A78AE CPU, 2048 CUDA cores, 64 Tensor Cores, 32 GB LPDDR5), supporting unified memory space between CPU and GPU, with integration of SDR front-ends such as Ettus USRP B210 for RF I/O (Cammerer et al., 19 May 2025).
- Software Stack:
- OpenAirInterface (OAI): Real-time, open 5G NR stack (gNB and UE), allowing dynamic offload of PHY modules to the GPU. OAI is recompiled to interface directly with CUDA and TensorRT under RK (Cammerer et al., 19 May 2025).
- Sionna Library: GPU-accelerated, TensorFlow-based collection of link-level building blocks (OFDM, MIMO, FEC, channel models, neural network layers) exposed as Keras Layers for rapid composition, simulation, and integration with AI/ML training (Hoydis et al., 2022).
- TensorRT: C++ inference engine for deploying trained neural PHY models as highly optimized, low-latency GPU kernels for real-time operation within the OAI stack (Cammerer et al., 19 May 2025).
- End-to-End Differentiability: All simulation components, with very limited exceptions (mainly a subset of custom kernels), are differentiable, allowing gradient-based optimization for both standard and ML-enhanced PHY chains (Hoydis et al., 2022).
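As a toy illustration of what this differentiability enables, the NumPy sketch below (hypothetical one-parameter chain, not Sionna's API; the analytic gradient stands in for what tf.GradientTape would compute automatically) tunes a transmit-side gain by gradient descent directly through a mapper → AWGN → loss chain:

```python
import numpy as np

rng = np.random.default_rng(0)

def run_chain(gain, batch=4096, noise_std=0.1):
    """Toy differentiable chain: QPSK mapper -> scalar gain -> AWGN -> MSE vs. clean symbols."""
    bits = rng.integers(0, 2, size=(batch, 2))
    # Gray-mapped QPSK symbols, unit average power
    x = ((1 - 2 * bits[:, 0]) + 1j * (1 - 2 * bits[:, 1])) / np.sqrt(2)
    n = noise_std * (rng.standard_normal(batch) + 1j * rng.standard_normal(batch)) / np.sqrt(2)
    y = gain * x + n
    err = y - x
    loss = np.mean(np.abs(err) ** 2)
    # Analytic gradient d(loss)/d(gain); in Sionna, autodiff would supply this
    grad = 2 * np.mean(np.real(err * np.conj(x)))
    return loss, grad

gain = 0.2  # deliberately detuned transmit gain
for _ in range(200):
    loss, grad = run_chain(gain)
    gain -= 0.5 * grad  # plain gradient descent; optimum is gain ~= 1
```

In Sionna itself, the same loop would typically wrap the full link in tf.GradientTape and update many parameters (e.g., a trainable constellation) at once.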
Table: Selected hardware and system specifications
| Component | Key Specs | Supported Functionality |
|---|---|---|
| Jetson AGX Orin | 12c ARM CPU, 2,048 CUDA, 64 Tensor Cores | Real-time PHY + AI inference, unified memory |
| SDR Front-End | Ettus USRP B210 (OAI-compatible) | Standards-compliant RF I/O, SDR Rx/Tx at 5G NR rates |
| Memory | 32 GB shared (204 GB/s bandwidth) | Low-latency CPU/GPU data movement |
This architecture eliminates PCIe latency overheads and enables in-place manipulation of large resource-grid tensors, supporting thousands of parallel Monte Carlo trials or real-time subframe processing (Cammerer et al., 19 May 2025, Hoydis et al., 2022).
2. Principal PHY Components and Functional Modules
The Sionna RK platform incorporates a broad set of tested modules and supports extensive customization:
- Channel Models: AWGN, flat-fading (with optional correlation), full MIMO, 3GPP 38.901 TDL/CDL (UMa, UMi, RMa), and import of arbitrary CIRs (e.g., from ray tracing) (Hoydis et al., 2022). PyJama further extends these with independently parameterizable jamming channels and mobility models (Ulbricht et al., 2024).
- Forward Error Correction: 5G LDPC (encoding, belief-propagation, min-sum), Polar (SC, SCL, SCL-CRC), convolutional (Viterbi), Reed-Muller, CRC, with batch-parallel tensor implementations and custom CUDA acceleration for bottleneck functions (Hoydis et al., 2022).
- OFDM/MIMO Processing: IFFT/FFT, cyclic-prefix insertion/removal, 5G-style flexible slot/frame with arbitrary pilot patterns, least-squares channel estimation, ZF/MMSE/MRC, multi-user/cell support (Hoydis et al., 2022).
- AI/ML Model Integration: Native replacement of any chain block with Keras-based neural models—mapper/demapper, channel estimator, end-to-end auto-encoders—supported through tf.GradientTape and ONNX/TensorRT export workflow for deployment (Cammerer et al., 19 May 2025).
- Jamming/Anti-Jamming Extensions: PyJama introduces frequency/time-domain differentiable jamming models, anti-jamming spatial filtering (e.g., projection onto the jammer-orthogonal subspace and LMMSE-based mitigation), and simulation of robust or adversarial link chains (Ulbricht et al., 2024).
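To make the estimation and equalization blocks above concrete, here is a minimal NumPy sketch (a toy single-antenna OFDM symbol with comb pilots and a fixed 3-tap channel; not Sionna's API) of least-squares channel estimation followed by zero-forcing equalization:

```python
import numpy as np

rng = np.random.default_rng(1)
num_sc = 64                              # subcarriers in one OFDM symbol
pilot_idx = np.arange(0, num_sc, 4)      # comb-type pilots on every 4th subcarrier
data_idx = np.setdiff1d(np.arange(num_sc), pilot_idx)

# Fixed 3-tap frequency-selective channel; frequency response via FFT
h_taps = np.array([1.0, 0.4j, 0.2])
H = np.fft.fft(h_taps, num_sc)

# Transmit grid: unit-power QPSK everywhere, pilots known to the receiver
tx = (rng.choice([1, -1], num_sc) + 1j * rng.choice([1, -1], num_sc)) / np.sqrt(2)
noise = 0.01 * (rng.standard_normal(num_sc) + 1j * rng.standard_normal(num_sc))
rx = H * tx + noise

# Least-squares estimate at pilot positions, linear interpolation in between
H_ls = rx[pilot_idx] / tx[pilot_idx]
H_hat = np.interp(np.arange(num_sc), pilot_idx, H_ls.real) \
      + 1j * np.interp(np.arange(num_sc), pilot_idx, H_ls.imag)

# Zero-forcing equalization of the data subcarriers
x_hat = rx[data_idx] / H_hat[data_idx]
evm = np.sqrt(np.mean(np.abs(x_hat - tx[data_idx]) ** 2))
```

Sionna's corresponding blocks operate on batched multi-antenna resource grids, but the estimate/interpolate/equalize structure is the same.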
3. AI-Enhanced Workflow: From Offline Learning to Inline Inference
Sionna RK provides an integrated pipeline for AI/ML development and deployment in standards-compliant environments:
- Data Generation: Sionna's differentiable link-level blocks and/or OAI+RF SDR setup generate labeled waveforms (e.g., DMRS, resource grids) under arbitrary SNR, MCS, and channel conditions (Cammerer et al., 19 May 2025).
- Model Training: Researchers define and jointly train models (e.g., MPNN-based neural receivers, transformer channel estimators) in TensorFlow/Keras, exploiting Sionna's batch-differentiable operations. Typical training involves cross-entropy or MSE losses over Monte Carlo batches (Cammerer et al., 19 May 2025, Hoydis et al., 2022).
- Deployment: Trained models are exported via ONNX and converted to TensorRT engines, achieving inference speeds compatible with 1 ms 5G subframe deadlines, with near-peak GPU occupancy and sub-3 ms E2E latency (Cammerer et al., 19 May 2025).
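The training step itself reduces to a few lines. The sketch below (NumPy, with a hypothetical one-parameter logistic demapper standing in for a neural receiver) runs Monte Carlo batches of BPSK over AWGN and descends the analytic binary cross-entropy gradient, as tf.GradientTape would do automatically:

```python
import numpy as np

rng = np.random.default_rng(2)

def bce_step(w, noise_std=0.5, batch=8192, lr=0.5):
    """One Monte Carlo training step: BPSK over AWGN, logistic demapper P(bit=1) = sigmoid(-w*y)."""
    bits = rng.integers(0, 2, batch)
    x = 1.0 - 2.0 * bits                 # BPSK: bit 0 -> +1, bit 1 -> -1
    y = x + noise_std * rng.standard_normal(batch)
    p = 1.0 / (1.0 + np.exp(w * y))      # predicted P(bit = 1)
    # Binary cross-entropy and its analytic gradient w.r.t. w
    loss = -np.mean(bits * np.log(p + 1e-12) + (1 - bits) * np.log(1 - p + 1e-12))
    grad = np.mean((p - bits) * (-y))
    return w - lr * grad, loss

w = 0.0
for _ in range(500):
    w, loss = bce_step(w)
# The BCE-optimal demapper is the true posterior, reached at w = 2 / noise_std**2
```

A real neural receiver replaces the single weight with a network over the whole resource grid, but the Monte Carlo batch → loss → gradient → update loop is identical.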
A detailed demonstration uses a Var-MCS NRX neural receiver: input resource grids pass through Conv1D and message-passing layers before dense LLR output, trained on Sionna-generated TDL-C samples and deployed as a drop-in replacement for MMSE+log-MAP in OAI’s RX pipeline (Cammerer et al., 19 May 2025).
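For comparison, the conventional demapper that such a neural receiver replaces is compact. This illustrative NumPy sketch computes exact log-MAP LLRs for Gray-mapped QPSK over AWGN (for QPSK, log-MAP and max-log coincide, since each bit has only two hypotheses):

```python
import numpy as np

def qpsk_llr(y, noise_var):
    """Exact log-MAP LLRs for Gray-mapped QPSK over complex AWGN.

    Assumed mapping: symbol = ((1 - 2*b0) + 1j*(1 - 2*b1)) / sqrt(2),
    so bit 0 rides on the real axis and bit 1 on the imaginary axis.
    The per-bit LLR reduces to a scaled projection:
        LLR(b0) = 2*sqrt(2)*Re(y) / noise_var   (positive favors bit = 0)
    and likewise with Im(y) for b1.
    """
    scale = 2.0 * np.sqrt(2.0) / noise_var
    return np.stack([scale * y.real, scale * y.imag], axis=-1)

# Noiseless sanity check: the symbol for bits (0, 1) is (+1 - 1j) / sqrt(2)
llr = qpsk_llr(np.array([(1 - 1j) / np.sqrt(2)]), noise_var=0.5)
```

The neural receiver learns to produce the same kind of per-bit LLR grid, but jointly with channel estimation and equalization rather than from a perfectly known channel.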
4. Real-World and Link-Level Application Scenarios
Sionna RK enables:
- In situ 5G NR AI/ML testbeds: The coordinated use of Jetson RK, OAI, and TensorRT permits real-time deployment and benchmarking of neural receivers, beamformers, and ML-based PHY blocks with standards-compliant hardware and commercial UE devices (Cammerer et al., 19 May 2025).
- Edge and AI-RAN Use Cases: On-board AI acceleration supports configurations such as O-CU/O-DU split functions, near-RT RIC xApps (scheduling/power/beam control), and edge scenarios including robotics, V2X, and immersive XR (Cammerer et al., 19 May 2025).
- Adversarial PHY Simulation: With PyJama, differentiable jamming and anti-jamming settings can be natively integrated, enabling co-optimization of jammer/defense strategies via end-to-end gradient flows, including L1/MSE-based jamming-risk maximization and anti-jam receiver adaptation (Ulbricht et al., 2024).
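A classic member of this defense family, projection onto the subspace orthogonal to the jammer's spatial signature, can be sketched as follows (toy narrowband multi-antenna model in NumPy; PyJama's actual interfaces are not assumed):

```python
import numpy as np

rng = np.random.default_rng(3)
num_ant, num_sym = 4, 1000

# Spatial signatures: desired user h, jammer g (toy narrowband model)
h = (rng.standard_normal(num_ant) + 1j * rng.standard_normal(num_ant)) / np.sqrt(2)
g = (rng.standard_normal(num_ant) + 1j * rng.standard_normal(num_ant)) / np.sqrt(2)

# Received block: BPSK user symbols + strong wideband jammer + noise, shape (num_ant, num_sym)
bits = rng.integers(0, 2, num_sym)
s = 1.0 - 2.0 * bits
j = 10.0 * (rng.standard_normal(num_sym) + 1j * rng.standard_normal(num_sym))
Y = (np.outer(h, s) + np.outer(g, j)
     + 0.01 * (rng.standard_normal((num_ant, num_sym))
               + 1j * rng.standard_normal((num_ant, num_sym))))

# Projector onto the orthogonal complement of the jammer signature (rank-1 removal)
P = np.eye(num_ant) - np.outer(g, g.conj()) / np.vdot(g, g)
Y_clean = P @ Y

# Matched filter on the projected user channel recovers the symbols
h_p = P @ h
s_hat = (h_p.conj() @ Y_clean) / np.vdot(h_p, h_p)
ber = np.mean((s_hat.real < 0) != (bits == 1))
```

Because the projector is differentiable in the (estimated) jammer signature, such a defense composes naturally with the end-to-end gradient flows described above.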
5. Reproducibility, Benchmarking, and Extensibility
Sionna RK emphasizes reproducible open science:
- Performance Metrics: Standard metrics (BER, BLER, SER, FER) are provided by utility functions, with random seeds controlling reproducibility across platforms (Hoydis et al., 2022).
- Custom/State-of-the-Art Algorithms: Built-in SOTA decoders and detectors (5G LDPC BP, min-sum, Polar SCL, MIMO ZF/MMSE) are available for fair and efficient benchmarking (Hoydis et al., 2022), while new ML blocks or signal-processing kernels can be implemented via Keras, Python, or CUDA custom ops (Cammerer et al., 19 May 2025).
- Codebase and Community: Source code, hardware scripts, and extensive examples are open-sourced (Apache-2.0), with public tutorials and a maintainable issue tracker for community-driven extension (e.g., new FECs, MIMO detectors, or ML primitives) (Hoydis et al., 2022, Cammerer et al., 19 May 2025).
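The seeded Monte Carlo pattern behind such benchmarking can be sketched as follows (illustrative NumPy code, not Sionna's utility API), comparing simulated uncoded BPSK BER against the closed-form reference:

```python
import numpy as np
from math import erfc, sqrt

def bpsk_ber(ebn0_db, num_bits=1_000_000, seed=42):
    """Seeded Monte Carlo BER for uncoded BPSK over AWGN; the fixed seed makes runs reproducible."""
    rng = np.random.default_rng(seed)
    ebn0 = 10 ** (ebn0_db / 10)
    noise_std = 1.0 / np.sqrt(2 * ebn0)
    bits = rng.integers(0, 2, num_bits)
    y = (1.0 - 2.0 * bits) + noise_std * rng.standard_normal(num_bits)
    return np.mean((y < 0) != (bits == 1))

def bpsk_ber_theory(ebn0_db):
    """Closed form: BER = 0.5 * erfc(sqrt(Eb/N0))."""
    return 0.5 * erfc(sqrt(10 ** (ebn0_db / 10)))

ber_sim = bpsk_ber(4.0)
ber_ref = bpsk_ber_theory(4.0)
```

Pinning the seed is what makes cross-platform BER/BLER curves directly comparable between independent reproductions of a benchmark.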
6. Installation, Integration, and Practical Usage
Installation and bring-up procedures are streamlined:
- Software Installation: Sionna is installable via PyPI (`pip install sionna`) or from source (custom CUDA ops are built automatically where supported). PyJama is likewise installable via PyPI or GitHub. OAI, TensorRT plugins, and UHD drivers are deployed via setup scripts in the RK repository (Hoydis et al., 2022, Cammerer et al., 19 May 2025, Ulbricht et al., 2024).
- Hardware Setup: Configuration scripts match the SDR front-end and Jetson RK connectivity (NVMe SSD, GbE, PCIe, USRP), with commercial UEs recommended for baseline compliance testing (Cammerer et al., 19 May 2025).
- Operational Flow: Walk-throughs and tutorials guide from simple Jupyter-based link-level examples to full real-time OAI gNB boot with GPU offload, code hooks for custom CUDA/ML kernel injection, and scaling to distributed, multi-node testbeds (Hoydis et al., 2022, Cammerer et al., 19 May 2025).
- Integration of Extensions: Adding jamming/anti-jamming involves simple imports into Sionna workflows, leveraging PyJama classes as interchangeable modules in otherwise standard pipelines (Ulbricht et al., 2024).
7. Future Directions
Ongoing roadmap items for Sionna RK include:
- Advanced ray-tracing extensions (Sionna RT) for physical, time-varying geometric channel generation.
- Modules targeting terahertz-band channel phenomena, RIS layers, and integrated sensing/communication (e.g., automotive radar emulation) (Hoydis et al., 2022).
- Enhanced integration with OAI/ORAN ecosystems to support xApp-based 6G experimentation and scalable edge ML deployments (Cammerer et al., 19 May 2025).
By unifying differentiable simulation, high-throughput hardware, and open software-defined networking stacks, Sionna RK establishes a flexible basis for reproducible, standards-compliant, and AI-driven PHY-layer research (Hoydis et al., 2022, Cammerer et al., 19 May 2025, Ulbricht et al., 2024).