
Cloud-native and Distributed Systems for Efficient and Scalable Large Language Models -- A Research Agenda

Published 19 Apr 2026 in cs.DC (arXiv:2604.17227v1)

Abstract: The rapid rise of LLMs has revolutionized various AI applications, from natural language processing to code generation. However, the computational demands of these models, particularly in training and inference, present significant challenges. Traditional systems are often unable to meet these requirements, necessitating the integration of cloud-native and distributed architectures. This paper explores the role of cloud platforms and distributed systems in supporting the scalability, efficiency, and optimization of LLMs. We discuss the complexities of LLM deployment, including data management, resource optimization, and the need for microservices, autoscaling, and hybrid cloud-edge solutions. Additionally, we examine emerging research trends, such as serverless inference, quantum computing, and federated learning, and their potential to drive the next phase of LLM innovation. The paper concludes with a roadmap for future developments, emphasizing the need for continued research, standardization, and cross-sector collaboration to sustain the growth of LLMs in both research and enterprise applications.

Summary

  • The paper presents a research agenda outlining cloud-native architectures to efficiently scale large language models while optimizing resource utilization.
  • It proposes adaptive methods including elastic scheduling, energy-aware placement, and AI-driven orchestration to manage heterogeneous hardware and bursty workloads.
  • It identifies challenges such as inter-service overhead, privacy, and standardization that must be addressed to support reliable LLM deployments.

Cloud-Native and Distributed Systems for Scalable LLMs

Motivation and Architectural Foundations

LLMs now underpin mission-critical AI workloads with exceptional computational intensity and high-throughput operational requirements. The scale of transformer-based models (e.g., GPT-4, PaLM, LLaMA) means that training and inference consume substantial hardware, memory, storage, and network resources, both in cloud datacenters and at the distributed edge. This architectural shift diverges fundamentally from traditional ML deployments by requiring dynamic, multi-tenant, heterogeneous cluster management; static, monolithic cloud deployments no longer suffice.

The architectural paradigm for LLM system support integrates microservices, containerization, elastic orchestration, and runtime resource accounting. This cloud-native stack, spanning diverse hardware (GPUs, TPUs, NPUs), distributed scheduling substrates, and scalable runtime intermediaries, enables both fine-grained modularity and workload phase-specific optimizations. Atop this infrastructure, LLMs are deployed via standardized APIs ensuring integration with downstream applications, allowing infrastructure to mask internal complexity and present robust, scalable services (Figure 1).

Figure 1: Hierarchical stack for LLM deployment detailing hardware, orchestration, resource management, and interface layers.
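
To make the layering concrete, here is a minimal Python sketch of the idea behind Figure 1: a standardized API facade in front of a scheduler that spreads requests across heterogeneous backends. It is not taken from the paper; all class names, fields, and the least-loaded placement policy are illustrative assumptions.

```python
"""Minimal sketch (not from the paper) of a layered serving stack:
a standardized API facade hiding scheduling over heterogeneous backends.
All names and the placement policy are hypothetical illustrations."""
from dataclasses import dataclass, field
from typing import List

@dataclass
class Backend:
    name: str            # e.g. "gpu-pool-a100", "edge-npu" (hypothetical)
    free_slots: int      # crude proxy for available capacity

@dataclass
class Scheduler:
    backends: List[Backend] = field(default_factory=list)

    def pick(self) -> Backend:
        # Least-loaded placement; real systems use topology- and phase-aware policies.
        return max(self.backends, key=lambda b: b.free_slots)

class InferenceGateway:
    """Standardized API layer: callers see one endpoint, not the cluster."""
    def __init__(self, scheduler: Scheduler):
        self.scheduler = scheduler

    def generate(self, prompt: str, max_tokens: int = 128) -> str:
        backend = self.scheduler.pick()
        backend.free_slots -= 1
        try:
            # Placeholder for the actual model call on the chosen backend.
            return f"[{backend.name}] completion for: {prompt[:32]}..."
        finally:
            backend.free_slots += 1

if __name__ == "__main__":
    gw = InferenceGateway(Scheduler([Backend("gpu-pool", 4), Backend("edge-npu", 1)]))
    print(gw.generate("Explain cloud-native LLM serving"))
```

The point of the sketch is the separation of concerns the paper describes: the gateway exposes a stable interface while placement decisions remain free to evolve underneath it.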

Systemic and Operational Challenges

LLMs fundamentally alter system assumptions via bursty, user-driven inference, highly variable context lengths, multi-modal retrieval pipelines, and stringent latency, throughput, and SLO requirements. The paper identifies several core challenges:

  • Computational challenges: Efficient partitioning across heterogeneous accelerators, adaptive scheduling strategies (precision scaling, iterative batching, speculative decoding; a toy batching loop is sketched after Figure 2), and computation-communication co-design are not optional optimizations; suboptimal resource mapping or static scheduling severely reduces system throughput and increases tail latency.
  • System/software challenges: Microservices, though modular, introduce inter-service communication overhead and can inhibit LLM-aware cross-layer optimizations. Lack of cross-layer observability obstructs effective performance diagnosis and fine-grained resource adaptation, especially under real-world, multi-framework workloads.
  • Operational/resource management: SLO adherence, fairness, and energy/carbon awareness must be enforced jointly amidst resource heterogeneity, bursty demand, and inter-node communication bottlenecks. Batch scheduling mechanisms (prompt/decode disaggregation, topology-aware placement) are not sufficient; auto-scaling, fairness in allocation, and active adaptation require sophisticated feedback and predictive control.
  • Data and privacy: LLMs are coupled to heterogeneous upstream data, both at training and inference (RAG, user context), exposing new governance and compliance obstacles. Federated and decentralized approaches mitigate privacy exposure but exacerbate coordination, bandwidth, and consistency challenges (Figure 2).

    Figure 2: Illustrative breakdown of multi-level challenges arising from supporting LLMs at the system and operational levels.
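
As a concrete illustration of the iterative batching referenced in the first bullet, the following Python sketch (an assumption, not the paper's scheduler) shows the core idea: finished requests leave the running batch between decode steps, so newly arrived requests are admitted immediately instead of waiting for the whole batch to drain. The Request structure, batch size, and sequence lengths are hypothetical.

```python
"""Minimal sketch of iterative/continuous batching (illustrative assumption):
requests join and leave the batch between decode steps."""
import collections
import random

Request = collections.namedtuple("Request", "rid remaining")

def decode_step(batch):
    """Pretend to decode one token per request; return the still-running requests."""
    return [Request(r.rid, r.remaining - 1) for r in batch if r.remaining > 1]

def serve(arrivals, max_batch=4, steps=20):
    queue, batch = list(arrivals), []
    for step in range(steps):
        # Admit waiting requests into any free batch slots (the "iterative" part).
        while queue and len(batch) < max_batch:
            batch.append(queue.pop(0))
        if not batch and not queue:
            break
        print(f"step {step}: running {[r.rid for r in batch]}")
        batch = decode_step(batch)

if __name__ == "__main__":
    serve([Request(i, random.randint(2, 6)) for i in range(6)])
```

Static batching would hold every slot until the longest request finishes; the per-step admission above is what recovers throughput under bursty, variable-length demand.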

Resource Management and Optimization Mechanisms

The authors systematize resource management into four pivotal strategies:

  1. Elastic Scheduling for Heterogeneous Hardware: Contemporary serving systems implement prefill/decode phase separation, disaggregation, and specialized hardware allocation to extract peak throughput, while adaptive autoscaling (e.g., coordinated pool balancing, token-velocity-driven scaling; a toy scaling rule is sketched after Figure 3) maintains SLOs amidst variable demand.
  2. Energy- and Carbon-Aware Placement: Dynamic runtime scheduling now integrates region-level carbon telemetry, energy metrics, and cluster-level power efficiency into allocation logic, aiming to minimize emissions (e.g., via time-shifting, cooling-aware placement).
  3. Adaptive QoS in Multi-Tenant Contexts: Workload classification enables differentiated handling—interactive, batch, streaming, and tool-augmented inference requests—while fairness-aware algorithms (e.g., token-level VTC, quota-based isolation) prevent saturation from heavy tenants and improve utilization under dynamic concurrency.
  4. AI/DRL-Driven Orchestration: Reinforcement learning (e.g., AWARE, DRPC) and AI-augmented control automate scaling, placement, and real-time adaptation by ingesting fine-grained telemetry, with hybrid control guardrails ensuring reliable deployment (Figure 3).

    Figure 3: Taxonomy of resource management and optimization strategies central to modern LLM serving and infrastructure orchestration.
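
The following is a minimal sketch of the token-velocity-driven scaling mentioned in strategy 1, assuming a simple proportional rule: the replica count is derived from observed tokens per second against a per-replica capacity target. The function name, capacity figure, and bounds are illustrative assumptions, not values from the paper.

```python
"""Hedged sketch of token-velocity-driven autoscaling: scale replicas from
observed tokens/sec against an assumed per-replica capacity."""
import math

def desired_replicas(tokens_per_sec: float,
                     capacity_per_replica: float = 2_000.0,
                     min_replicas: int = 1,
                     max_replicas: int = 32) -> int:
    # Proportional scaling, clamped to the allowed replica range.
    needed = math.ceil(tokens_per_sec / capacity_per_replica)
    return max(min_replicas, min(max_replicas, needed))

if __name__ == "__main__":
    for load in (500, 4_500, 90_000):
        print(load, "tok/s ->", desired_replicas(load), "replicas")
```

A production controller would layer smoothing, demand prediction, SLO feedback, and energy/carbon signals on top of this kind of rule, which is exactly the feedback and predictive control the paper argues is still needed.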

Emerging Architectural Innovations

Several architectural and algorithmic frontiers are identified:

  • Serverless LLM inference systems—leveraging batch size optimization, cold start elimination, and energy-centric scheduling—delegate operational complexity to the underlying platform, substantially reducing total cost of ownership (TCO) and improving elasticity (a warm-start handler pattern is sketched after Figure 4).
  • Federated and decentralized training enables privacy-preserving, distributed refinement, using techniques such as parameter-efficient adaptation (LoRA, embedding tuning), workload-aware gradient aggregation, and local execution enclaves for compliance and regulatory adherence.
  • Hybrid quantum and neuromorphic architectures are emerging as candidates for bottlenecked optimization tasks and energy-constrained inference, but practical advantages currently hinge on integration overheads, orchestration complexity, and the effective partitioning of LLM workloads (Figure 4).

    Figure 4: Survey of emerging architectural innovations impacting system support for LLM scalability and efficiency.
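
Below is a hedged sketch of the warm-start pattern that underlies cold-start mitigation in serverless inference: the expensive model load runs once per container and is reused across invocations, so only the first request in a cold container pays the load cost. The handler signature and timings are hypothetical stand-ins, not the paper's system.

```python
"""Illustrative sketch (assumption, not the paper's design) of warm-start
serverless inference: load the model once per container, reuse it afterwards."""
import time

_MODEL = None  # cached across invocations within the same warm container

def _load_model():
    time.sleep(0.5)            # stand-in for loading weights from object storage
    return lambda prompt: f"echo: {prompt}"

def handler(event: dict) -> dict:
    """Entry point a FaaS platform would invoke per request (name hypothetical)."""
    global _MODEL
    if _MODEL is None:         # cold start: pay the load cost exactly once
        _MODEL = _load_model()
    return {"completion": _MODEL(event.get("prompt", ""))}

if __name__ == "__main__":
    t0 = time.time(); handler({"prompt": "hi"}); print("cold:", round(time.time() - t0, 2), "s")
    t0 = time.time(); handler({"prompt": "hi"}); print("warm:", round(time.time() - t0, 4), "s")
```

Techniques such as snapshot restore, weight streaming, and pre-warmed pools attack the same cold-start cost at the platform level rather than inside the handler.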

Future Directions and Research Frontiers

Key forward-looking research areas include:

  • Hardware acceleration for sparse matrix multiplication: Exploiting sparsity for SpMM on GPUs/FPGAs can reduce energy and memory barriers in LLM training/inference, subject to co-design across sparsity pattern induction, dataflow scheduling, and accelerator capabilities (a toy SpMM example follows Figure 5).
  • Quantum computing integration: While beneficial for selected subroutines (non-convex optimization, sampling), broad quantum acceleration faces major practical and interface challenges. Hybrid classical–quantum orchestration remains the active research focus.
  • System software with intelligent LLM agents: LLM-powered reasoning modules are being embedded into cloud kernel schedulers, offloading policies, active anomaly detection, and system diagnosis pipelines, offering greater adaptability than static, rules-based controllers; explainability and guardrail verification become essential.
  • Standardization and benchmarking: Current system–model interface fragmentation hinders reproducibility and portability. The development of unified intermediate representations (ONNX/MLIR) and workload-representative, system-level benchmarks is critical for robust, cross-provider deployment and fair evaluation.
  • Information and context management: Prompt engineering and context/routing optimization (compression, adaptive selection, memory lifecycle management) are pivotal for controlling inference cost and accuracy, particularly under long-context or hybrid RAG/long-context model deployments.
  • Personalization and LLM@Home: Parameter-efficient adaptation strategies (profile-to-PEFT, embedding-to-prefix) enable scalable user personalization, while memory- and energy-aware serving models foster practical deployment on edge/home devices (Figure 5).

    Figure 5: Roadmap outlining principal future directions in cloud-native, distributed infrastructure and LLM system co-design.
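
As a small worked example of the SpMM direction in the first bullet, the sketch below stores sparse weights in CSR format so the multiply only touches non-zero entries; the matrix sizes and the 5% density are arbitrary assumptions chosen for illustration, and dedicated accelerators would additionally exploit structured sparsity patterns.

```python
"""Toy SpMM example (illustrative assumptions): sparse weights in CSR format
mean the multiply's work scales with the number of non-zeros, not the full size."""
import numpy as np
from scipy import sparse

rng = np.random.default_rng(0)
W = sparse.random(1024, 1024, density=0.05, format="csr", random_state=0)  # sparse weights
X = rng.standard_normal((1024, 256))                                       # dense activations

Y = W @ X                          # SpMM: cost ~ W.nnz * 256 instead of 1024*1024*256
print("output shape:", Y.shape)

dense_madds = 1024 * 1024 * 256
sparse_madds = W.nnz * 256
print(f"kept {W.nnz} of {1024 * 1024} weights "
      f"-> ~{dense_madds / sparse_madds:.0f}x fewer multiply-adds than dense")
```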

Conclusion

LLMs demand a departure from legacy ML/DL infrastructure, necessitating LLM-aware cloud-native and distributed system architectures that co-optimize resource utilization, system reliability, energy sustainability, privacy, and developer/tenant abstraction. The research agenda synthesizes a layered vision—spanning hardware acceleration, adaptive orchestration, federated training, self-managing clouds, and system-wide standardization—needed to ensure LLMs remain both scalable and operationally viable as foundation models proliferate in research, enterprise, and edge domains (2604.17227).
