Resource Allocation for AI-RAN Coexistence on Shared O-Cloud

Develop resource-management mechanisms that allocate GPU cycles, memory, and I/O across sensing dApps, AI inference models, and the communication stack on shared O-Cloud infrastructure while preventing quality-of-service (QoS) degradation.
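As a concrete illustration, the sketch below partitions a single O-Cloud node among the three workload classes by first reserving a per-workload QoS floor for each resource and then splitting residual GPU capacity proportionally to weights. This is a minimal sketch under assumed interfaces: the names (Workload, partition_node) and all capacity and floor figures are hypothetical, not mechanisms or numbers from the paper.

```python
# Hypothetical priority-aware partitioner for one shared O-Cloud node.
# Workload names, floors, and capacities are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Workload:
    name: str
    weight: float         # relative share of leftover GPU capacity
    gpu_floor: float      # minimum GPU fraction needed to meet QoS (0..1)
    mem_floor_gb: float   # minimum device-memory reservation
    io_floor_gbps: float  # minimum I/O-bandwidth reservation

def partition_node(workloads, gpu_total=1.0, mem_total_gb=80.0, io_total_gbps=200.0):
    """Grant each workload its QoS floor, then split residual GPU capacity
    by weight (proportional share). Infeasible floors raise an error, which
    in a real system would trigger admission control or migration."""
    gpu_used = sum(w.gpu_floor for w in workloads)
    mem_used = sum(w.mem_floor_gb for w in workloads)
    io_used = sum(w.io_floor_gbps for w in workloads)
    if gpu_used > gpu_total or mem_used > mem_total_gb or io_used > io_total_gbps:
        raise RuntimeError("QoS floors exceed node capacity: reject or migrate a workload")
    residual = gpu_total - gpu_used
    total_weight = sum(w.weight for w in workloads)
    return {
        w.name: {
            "gpu_share": w.gpu_floor + residual * w.weight / total_weight,
            "mem_gb": w.mem_floor_gb,
            "io_gbps": w.io_floor_gbps,
        }
        for w in workloads
    }

if __name__ == "__main__":
    node = [
        Workload("ran_stack", weight=3.0, gpu_floor=0.40, mem_floor_gb=24, io_floor_gbps=100),
        Workload("sensing_dapp", weight=2.0, gpu_floor=0.15, mem_floor_gb=16, io_floor_gbps=40),
        Workload("ai_inference", weight=1.0, gpu_floor=0.10, mem_floor_gb=20, io_floor_gbps=20),
    ]
    for name, grant in partition_node(node).items():
        print(name, grant)
```

Reserving floors before weighted sharing keeps the communication stack's real-time budget intact even when best-effort inference load spikes, and an infeasible set of floors surfaces overload explicitly rather than degrading QoS silently.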

Background

The proposed architecture colocates sensing dApps and communication functions on shared accelerated O-Cloud infrastructure, creating contention for GPU cycles, memory, and I/O. Dynamic partitioning of these resources is needed to satisfy sensing and connectivity requirements simultaneously, as sketched below.
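One possible shape for such dynamic partitioning is a telemetry-driven control loop that shifts GPU share between the RAN stack and the sensing dApp as deadline pressure changes. The loop below is an assumed sketch: SLOT_DEADLINE_MS, the share values, and the simulated latency model are illustrative stand-ins, not measurements or interfaces from the paper.

```python
# Assumed control-loop sketch for dynamic repartitioning: shrink the sensing
# dApp's GPU share when the RAN stack misses its slot-processing deadline,
# and grow it back when headroom returns. Telemetry is simulated.
import random

SLOT_DEADLINE_MS = 0.5          # assumed slot-processing budget for the RAN stack
shares = {"ran_stack": 0.55, "sensing_dapp": 0.30, "ai_inference": 0.15}
STEP, DAPP_MIN = 0.05, 0.10     # reallocation granularity and dApp floor

def observed_ran_latency_ms(ran_share):
    # Stand-in for real telemetry: latency rises as the RAN share shrinks.
    return 0.25 / max(ran_share, 1e-3) + random.uniform(-0.05, 0.05)

for epoch in range(20):
    latency = observed_ran_latency_ms(shares["ran_stack"])
    if latency > SLOT_DEADLINE_MS and shares["sensing_dapp"] > DAPP_MIN:
        # RAN QoS at risk: preempt GPU time from the sensing dApp.
        shares["sensing_dapp"] -= STEP
        shares["ran_stack"] += STEP
    elif latency < 0.8 * SLOT_DEADLINE_MS and shares["ran_stack"] > 0.45:
        # Headroom available: return capacity to the sensing dApp.
        shares["ran_stack"] -= STEP
        shares["sensing_dapp"] += STEP
    print(f"epoch {epoch}: latency={latency:.2f} ms shares={shares}")
```

A real controller would add hysteresis and multi-resource coupling (memory and I/O alongside GPU time), but the reactive floor-and-preempt pattern is the core of the coexistence problem the paper raises.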

The authors explicitly identify AI-RAN coexistence as an open resource-allocation challenge requiring mechanisms that guarantee QoS.

References

Several open challenges remain. AI-RAN coexistence poses a resource-allocation challenge: sensing dApps and AI inference models compete with the communication stack for GPU cycles, memory, and I/O on shared O-Cloud infrastructure, requiring mechanisms to prevent QoS degradation.

Enabling Programmable Inference and ISAC at the 6GR Edge with dApps (2603.29146 - Polese et al., 31 Mar 2026) in Section 6, Conclusion and Open Challenges